WorldWideScience

Sample records for model input velocity

  1. The SCEC Unified Community Velocity Model (UCVM) Software Framework for Distributing and Querying Seismic Velocity Models

    Science.gov (United States)

    Maechling, P. J.; Taborda, R.; Callaghan, S.; Shaw, J. H.; Plesch, A.; Olsen, K. B.; Jordan, T. H.; Goulet, C. A.

    2017-12-01

    Crustal seismic velocity models and datasets play a key role in regional three-dimensional numerical earthquake ground-motion simulation, full waveform tomography, modern physics-based probabilistic earthquake hazard analysis, as well as in other related fields including geophysics, seismology, and earthquake engineering. The standard material properties provided by a seismic velocity model are P- and S-wave velocities and density for any arbitrary point within the geographic volume for which the model is defined. Many seismic velocity models and datasets are constructed by synthesizing information from multiple sources and the resulting models are delivered to users in multiple file formats, such as text files, binary files, HDF-5 files, structured and unstructured grids, and through computer applications that allow for interactive querying of material properties. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) software framework to facilitate the registration and distribution of existing and future seismic velocity models to the SCEC community. The UCVM software framework is designed to provide a standard query interface to multiple, alternative velocity models, even if the underlying velocity models are defined in different formats or use different geographic projections. The UCVM framework provides a comprehensive set of open-source tools for querying seismic velocity model properties, combining regional 3D models and 1D background models, visualizing 3D models, and generating computational models in the form of regular grids or unstructured meshes that can be used as inputs for ground-motion simulations. The UCVM framework helps researchers compare seismic velocity models and build equivalent simulation meshes from alternative velocity models. These capabilities enable researchers to evaluate the impact of alternative velocity models in ground-motion simulations and seismic hazard analysis applications
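
    The abstract above does not reproduce the UCVM interfaces themselves, so the sketch below is only a rough Python illustration of the kind of point-query service such a framework standardizes: build a gridded model and return Vp, Vs, and density at an arbitrary (lon, lat, depth) point. The grid extents, the placeholder Vs profile, the assumed Vp/Vs ratio, and the Gardner-type density proxy are illustrative assumptions, not UCVM code or data.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical regular grid covering a model volume (lon, lat, depth in km)
lons = np.linspace(-120.0, -114.0, 61)
lats = np.linspace(32.0, 36.0, 41)
depths = np.linspace(0.0, 50.0, 51)

rng = np.random.default_rng(0)
vs = 500.0 + 60.0 * depths[None, None, :] + rng.normal(0.0, 10.0, (61, 41, 51))  # placeholder Vs, m/s
vp = 1.7 * vs                                        # assumed Vp/Vs ratio
rho = 1740.0 * (vp / 1000.0) ** 0.25                 # Gardner-type density proxy, kg/m3

interp = {name: RegularGridInterpolator((lons, lats, depths), grid)
          for name, grid in (("vp", vp), ("vs", vs), ("rho", rho))}

def query(lon, lat, depth_km):
    """Return (Vp, Vs, density) at an arbitrary point inside the model volume."""
    pt = np.array([[lon, lat, depth_km]])
    return tuple(float(interp[name](pt)[0]) for name in ("vp", "vs", "rho"))

print(query(-118.25, 34.05, 5.0))
```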

  2. Modeling and generating input processes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate but much of the philosophy at least is relevant to univariate inputs as well. 14 refs.

  3. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics are reviewed in this paper that have a direct bearing on the model input process, and reasons are given for using probability-based modeling of the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present.
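
    As a minimal sketch of the probability-based input modeling the paper advocates, the snippet below draws jointly dependent samples for two hypothetical inputs by imposing a Gaussian copula on arbitrary marginals; the marginal distributions and the 0.7 correlation are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative marginals: a lognormal flow rate and a beta-distributed efficiency
marginals = [stats.lognorm(s=0.4, scale=10.0), stats.beta(2.0, 5.0)]

# Dependence imposed through a Gaussian copula with correlation 0.7
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
L = np.linalg.cholesky(corr)

n = 10_000
z = rng.standard_normal((n, 2)) @ L.T          # correlated standard normals
u = stats.norm.cdf(z)                          # map to correlated uniforms (the copula)
x = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

print("sample rank correlation:", round(float(stats.spearmanr(x[:, 0], x[:, 1])[0]), 3))
```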

  4. Remote sensing inputs to water demand modeling

    Science.gov (United States)

    Estes, J. E.; Jensen, J. R.; Tinney, L. R.; Rector, M.

    1975-01-01

    In an attempt to determine the ability of remote sensing techniques to economically generate data required by water demand models, the Geography Remote Sensing Unit, in conjunction with the Kern County Water Agency of California, developed an analysis model. As a result it was determined that agricultural cropland inventories utilizing both high-altitude photography and LANDSAT imagery can be conducted cost effectively. In addition, by using average irrigation application rates in conjunction with cropland data, estimates of agricultural water demand can be generated. However, more accurate estimates are possible if crop type, acreage, and crop-specific application rates are employed. The effect of saline-alkali soils on water demand in the study area is also examined. Finally, reference is made to the detection and delineation of water tables that are perched near the surface by semi-permeable clay layers. Soil salinity prediction, automated crop identification on a by-field basis, and a potential input to the determination of zones of equal benefit taxation are briefly touched upon.

  5. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of the constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
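
    The paper's AUV dynamics and Bayesian design criterion are not given in the abstract, so the sketch below only illustrates the overall recipe: average a D-optimality (log-determinant of an approximate Fisher information matrix) criterion over parameter draws from a prior, penalize an input-amplitude constraint, and search the input sequence with a minimal particle swarm. The toy surge model, the prior, the constraint, and all PSO settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, u, dt=0.1):
    """Toy 1-DOF surge model: v' = -theta0*v*|v| + theta1*u (stand-in for the AUV dynamics)."""
    v, out = 0.0, []
    for uk in u:
        v += dt * (-theta[0] * v * abs(v) + theta[1] * uk)
        out.append(v)
    return np.array(out)

def d_optimality(u, theta, eps=1e-4):
    """log det of an approximate Fisher information matrix (finite-difference sensitivities)."""
    y0 = simulate(theta, u)
    J = np.column_stack([(simulate(theta + eps * e, u) - y0) / eps for e in np.eye(2)])
    return np.linalg.slogdet(J.T @ J)[1]

PRIOR = rng.normal([0.5, 1.0], [0.1, 0.2], size=(15, 2))      # parameter uncertainty samples

def robust_cost(u):
    """Negative expected D-optimality plus a penalty enforcing |u| <= 1."""
    penalty = 1e3 * max(0.0, float(np.max(np.abs(u))) - 1.0)
    return -np.mean([d_optimality(u, th) for th in PRIOR]) + penalty

# Minimal particle swarm over a 25-sample input sequence
n_dim, n_part = 25, 30
x = rng.uniform(-1, 1, (n_part, n_dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([robust_cost(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(60):
    r1, r2 = rng.random((2, n_part, n_dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([robust_cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("robust design cost:", round(float(pbest_f.min()), 3))
```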

  6. Hydrogen Generation Rate Model Calculation Input Data

    International Nuclear Information System (INIS)

    KUFAHL, M.A.

    2000-01-01

    This report documents the procedures and techniques utilized in the collection and analysis of analyte input data values in support of the flammable gas hazard safety analyses. This document represents the analyses of data current at the time of its writing and does not account for data available since then

  7. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  8. Modeling Recognition Memory Using the Similarity Structure of Natural Input

    Science.gov (United States)

    Lacroix, Joyca P. W.; Murre, Jaap M. J.; Postma, Eric O.; van den Herik, H. Jaap

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During recognition, the model compares incoming preprocessed…

  9. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL]; Djouadi, Seddik M [ORNL]; Olama, Mohammed M [ORNL]

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
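
    As a toy illustration of the paper's conclusion (an FIR model structure identified from an impulse applied at the start of the observation interval), the sketch below recovers a hypothetical order-8 FIR channel by least squares; the channel taps, noise level, and record length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical order-8 FIR channel (unknown to the identifier)
h_true = rng.normal(size=8)

N = 64
u = np.zeros(N)
u[0] = 1.0                                   # impulse at the start of the observation interval
y = np.convolve(u, h_true)[:N] + 0.01 * rng.normal(size=N)

# Build the regression matrix of delayed inputs and solve least squares
U = np.column_stack([np.concatenate([np.zeros(k), u[:N - k]]) for k in range(8)])
h_est, *_ = np.linalg.lstsq(U, y, rcond=None)
print("max identification error:", float(np.max(np.abs(h_est - h_true))))
```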

  10. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method for general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
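
    A minimal Monte Carlo check of the point the paper treats analytically: the snippet below propagates two Gaussian inputs through a toy response with and without correlation and compares the resulting output variances. The response function, input variances, and the 0.8 correlation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    """Toy response used only to illustrate the effect of input correlation."""
    return x1 + 2.0 * x2 + 0.5 * x1 * x2

n = 200_000
sigma = np.array([1.0, 0.5])

for r in (0.0, 0.8):                         # independent vs. correlated inputs
    cov = np.array([[sigma[0]**2, r * sigma[0] * sigma[1]],
                    [r * sigma[0] * sigma[1], sigma[1]**2]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    y = model(x[:, 0], x[:, 1])
    print(f"input correlation {r:.1f}: output variance = {y.var():.3f}")
```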

  11. Modeling recognition memory using the similarity structure of natural input

    NARCIS (Netherlands)

    Lacroix, J.P.W.; Murre, J.M.J.; Postma, E.O.; van den Herik, H.J.

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During

  12. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few are proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is well established and provides an unambiguous set of variance-based sensitivity indices. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA representations of the model output. In the applications, we show the usefulness of the new sensitivity indices for model simplification. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
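
    In the spirit of the proposed indices, though not the paper's exact orthogonalisation and ANOVA construction, the sketch below contrasts the "full" first-order contribution of an input with its "independent" contribution obtained after orthogonalising it against the other input, for a linear toy model with correlated Gaussian inputs; the model, the correlation, and the Gaussian/linear shortcut used for Var(E[Y|Xi])/Var(Y) are assumptions valid only for this toy case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated Gaussian inputs and a linear toy model Y = X1 + X2
r = 0.8
x = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=200_000)
y = x[:, 0] + x[:, 1]

def first_order(xi, y):
    """For this linear/Gaussian toy case, Var(E[Y|Xi])/Var(Y) reduces to corr(Xi, Y)^2."""
    return np.corrcoef(xi, y)[0, 1] ** 2

# "Full" contribution of X2 (includes the part shared with X1 through correlation)
s2_full = first_order(x[:, 1], y)

# "Independent" contribution of X2: orthogonalise X2 against X1 first
beta = np.cov(x[:, 1], x[:, 0])[0, 1] / x[:, 0].var()
x2_perp = x[:, 1] - beta * x[:, 0]
s2_indep = first_order(x2_perp, y)

print(f"S2(full) = {s2_full:.3f}, S2(independent) = {s2_indep:.3f}")   # ~0.9 vs ~0.1
```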

  13. Optimal velocity difference model for a car-following theory

    International Nuclear Information System (INIS)

    Peng, G.H.; Cai, X.H.; Liu, C.Q.; Cao, B.F.; Tuo, M.X.

    2011-01-01

    In this Letter, we present a new optimal velocity difference model (OVDM) for a car-following theory based on the full velocity difference model. The linear stability condition of the new model is obtained by using the linear stability theory. The unrealistically high deceleration does not appear in the OVDM. Numerical simulation of traffic dynamics shows that the new model can avoid the disadvantage of negative velocity that occurs at small sensitivity coefficient λ in the full velocity difference model by adjusting the coefficient of the optimal velocity difference, which shows that collisions can be eliminated in the improved model. -- Highlights: → A new optimal velocity difference car-following model is proposed. → The effects of the optimal velocity difference on the stability of traffic flow have been explored. → The starting and braking processes were examined through simulation. → The optimal velocity difference term avoids the disadvantage of negative velocity.
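
    The Letter's OVDM adds an optimal velocity difference term whose exact form is not given in the abstract, so the sketch below only simulates the underlying full velocity difference model on a ring road with a commonly used tanh optimal-velocity function. The sensitivity κ, the coefficient λ, the optimal-velocity constants, and the ring-road setup are typical literature choices, not the Letter's values.

```python
import numpy as np

def V_opt(dx):
    """Optimal-velocity function with constants commonly used in OV/FVD studies (m, m/s)."""
    return 16.8 * (np.tanh(0.086 * (dx - 25.0)) + 0.913)

def fvd_step(x, v, dt=0.1, kappa=0.41, lam=0.5):
    """One explicit-Euler step of the full velocity difference model on a ring road."""
    dx = np.roll(x, -1) - x
    dx[-1] += L_RING                     # close the ring for the last vehicle
    dv = np.roll(v, -1) - v
    a = kappa * (V_opt(dx) - v) + lam * dv
    return x + v * dt, v + a * dt

L_RING, N_CARS = 1500.0, 50
x = np.linspace(0.0, L_RING, N_CARS, endpoint=False)
x[25] -= 5.0                             # small perturbation that may trigger a jam
v = np.full(N_CARS, V_opt(L_RING / N_CARS))

for _ in range(20_000):                  # 2000 s of simulated time
    x, v = fvd_step(x, v)
print("min/max velocity (m/s):", round(float(v.min()), 2), round(float(v.max()), 2))
```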

  14. Stein's neuronal model with pooled renewal input

    Czech Academy of Sciences Publication Activity Database

    Rajdl, K.; Lánský, Petr

    2015-01-01

    Vol. 109, No. 3 (2015), pp. 389-399. ISSN 0340-1200. Institutional support: RVO:67985823. Keywords: Stein's model * Poisson process * pooled renewal processes * first-passage time. Subject RIV: BA - General Mathematics. Impact factor: 1.611, year: 2015

  15. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU time which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows one to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' allows one to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.

  16. Calibration of controlling input models for pavement management system.

    Science.gov (United States)

    2013-07-01

    The Oklahoma Department of Transportation (ODOT) is currently using the Deighton Total Infrastructure Management System (dTIMS) software for pavement management. This system is based on several input models which are computational backbones to dev...

  17. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-01-01

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN

  18. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rasmuson; K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters

  19. A new car-following model considering velocity anticipation

    International Nuclear Information System (INIS)

    Jun-Fang, Tian; Bin, Jia; Xin-Gang, Li; Zi-You, Gao

    2010-01-01

    The full velocity difference model proposed by Jiang et al. [2001 Phys. Rev. E 64 017101] has been improved by introducing velocity anticipation. Velocity anticipation means that the follower estimates the future velocity of the leader. The stability condition of the new model is obtained by using the linear stability theory. Theoretical results show that the stability region increases when the anticipation time interval is increased. The mKdV equation is derived to describe the kink–antikink soliton wave and to obtain the coexisting stability line. The delay time of car motion and the kinematic wave speed at jam density are obtained in this model. Numerical simulations show that when the anticipation time interval is increased sufficiently, the new model can avoid accidents in urgent braking cases. Also, traffic jams can be suppressed by considering the anticipation velocity. All results demonstrate that this model is an improvement on the full velocity difference model. (general)

  20. Quality assurance of weather data for agricultural system model input

    Science.gov (United States)

    It is well known that crop production and hydrologic variation on watersheds are weather related. Rarely, however, are meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...
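
    A minimal sketch of the kind of plausibility and consistency checks such quality assurance procedures apply to model weather inputs; the column names, the missing-value sentinel, and the threshold ranges below are assumptions for illustration, not the procedures reported in the paper.

```python
import pandas as pd

# Hypothetical daily weather record with the columns typically required by crop models
df = pd.DataFrame({
    "tmax_C": [31.2, 29.8, -99.0, 33.1],
    "tmin_C": [18.4, 30.5, 17.0, 19.2],
    "precip_mm": [0.0, 12.4, -1.0, 250.0],
    "srad_MJ_m2": [22.1, 5.0, 18.3, 40.0],
})

flags = pd.DataFrame(index=df.index)
flags["missing_code"] = (df == -99.0).any(axis=1)             # sentinel missing values
flags["tmax_lt_tmin"] = df.tmax_C < df.tmin_C                 # internal consistency
flags["precip_range"] = ~df.precip_mm.between(0.0, 200.0)     # plausible daily totals
flags["srad_range"] = ~df.srad_MJ_m2.between(0.0, 35.0)       # plausible solar radiation

print(df[flags.any(axis=1)])                                  # rows needing review
```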

  1. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  2. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception

  3. Evaluating nuclear physics inputs in core-collapse supernova models

    Science.gov (United States)

    Lentz, E.; Hix, W. R.; Baird, M. L.; Messer, O. E. B.; Mezzacappa, A.

    Core-collapse supernova models depend on the details of the nuclear and weak interaction physics inputs just as they depend on the details of the macroscopic physics (transport, hydrodynamics, etc.), numerical methods, and progenitors. We present preliminary results from our ongoing comparison studies of nuclear and weak interaction physics inputs to core-collapse supernova models using the spherically symmetric, general relativistic, neutrino radiation hydrodynamics code Agile-Boltztran. We focus on comparisons of the effects of the nuclear EoS and the effects of improving the opacities, particularly neutrino-nucleon interactions.

  4. Exploring the links between transient water inputs and glacier velocity in a small temperate glacier in southeastern Alaska

    Science.gov (United States)

    Heavner, M.; Habermann, M.; Hood, E. W.; Fatland, D. R.

    2009-12-01

    Glaciers along the Gulf of Alaska are thinning and retreating rapidly. An important control on the rate at which ice is being lost is basal motion, because higher glacier velocities increase the rate at which ice is delivered to ablation zones. Recent research has focused on understanding the effects of sub-glacial water storage on glacier basal motion. In this study, we examined, over two seasons, the effect of hydrologic controls (from large rainfall events as well as glacier lake outburst floods) on the velocity of the Lemon Creek Glacier in southeastern Alaska. Lemon Creek Glacier is a moderately sized (~16 km2) temperate glacier at the margin of the Juneau Icefield. An ice-marginal lake forms at the head of the glacier and catastrophically drains once or twice every melt season. We have instrumented the glacier with two meteorological stations: one at the head of the glacier near the ice-marginal lake and another several kilometers below the terminus. These stations measure temperature, relative humidity, precipitation, incoming solar radiation, and wind speed and direction. Lake stage in the ice-marginal lake was monitored with a pressure transducer. In addition, Lemon Creek was instrumented with a water quality sonde at the location of a US Geological Survey gaging station approximately 3 km downstream from the glacier terminus. The sonde provides continuous measurements of water temperature, dissolved oxygen, turbidity and conductivity. Finally, multiple Trimble NetRS dual frequency, differential GPS units were deployed along the centerline of the glacier. All of the instruments were run continuously from May-September 2008 and May-September 2009 and captured three outburst floods associated with the ice-marginal lake drainage as well as several large (>3 cm) rainfall events associated with frontal storms off of the Gulf of Alaska in late summer. Taken together, these data allow us to test the hypothesis that water inputs which overwhelm

  5. The Limit Deposit Velocity model, a new approach

    Directory of Open Access Journals (Sweden)

    Miedema Sape A.

    2015-12-01

    In slurry transport of settling slurries in Newtonian fluids, it is often stated that one should apply a line speed above a critical velocity, because below this critical velocity there is the danger of plugging the line. There are many definitions and names for this critical velocity. It is referred to as the velocity at which a bed starts sliding, or the velocity above which there is no stationary bed or sliding bed. Others use the velocity at which the hydraulic gradient is at a minimum, because of the minimum energy consumption. Most models from the literature are one-term, one-equation models, based on the idea that the critical velocity can be explained that way.
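
    For contrast with the multi-mechanism approach advocated above, the snippet below implements a classical Durand-type "one term, one equation" estimate of the limit deposit velocity; the coefficient FL and the example pipe and solids values are placeholders, and this is not the LDV model proposed in the paper.

```python
import math

def durand_ldv(pipe_diameter_m, solids_density, fluid_density=1000.0, FL=1.1):
    """Classical Durand-type limit deposit velocity estimate (m/s).

    FL is an empirical coefficient (roughly 0.9-1.5) read from charts as a function of
    particle size and concentration; the value 1.1 here is only a placeholder.
    """
    g = 9.81
    relative_density = (solids_density - fluid_density) / fluid_density
    return FL * math.sqrt(2.0 * g * pipe_diameter_m * relative_density)

# Example: sand slurry (2650 kg/m3) in a 0.3 m pipe
print(f"LDV ~ {durand_ldv(0.3, 2650.0):.2f} m/s")
```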

  6. Investigation of RADTRAN Stop Model input parameters for truck stops

    International Nuclear Information System (INIS)

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-01-01

    RADTRAN is a computer code for estimating the risks and consequences associated with the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values for RADTRAN Stop Parameters for truck stops was performed. The resulting data from these investigations were analyzed to provide mean values, standard deviations, and histograms. Hence, the mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN-calculated Stop Dose to the uncertainties in the Stop Model input parameters. This paper discusses the details and presents the results of the investigation of Stop Model input parameters at truck stops.
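
    A rough sketch of how reported means, standard deviations, and histograms of stop parameters can feed a sampling-based sensitivity check of stop dose. The distributions below and the simplified point-source dose relation (dose proportional to dose rate × stop time × persons / distance²) are illustrative assumptions, not the RADTRAN Stop Model equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical stop-parameter distributions of the kind characterised in the paper
stop_time_h = rng.lognormal(mean=np.log(0.75), sigma=0.5, size=n)    # stop duration, hours
distance_m = rng.uniform(10.0, 100.0, size=n)                         # distance to exposed persons
n_exposed = rng.poisson(5.0, size=n)                                  # persons exposed per stop

# Simplified point-source screening relation (NOT the RADTRAN formulation)
dose_rate_1m_mSv_h = 0.1
stop_dose = dose_rate_1m_mSv_h * stop_time_h * n_exposed / distance_m**2

print(f"mean stop dose: {stop_dose.mean():.2e} mSv, std: {stop_dose.std():.2e} mSv")
for name, p in (("stop time", stop_time_h), ("distance", distance_m), ("persons", n_exposed)):
    print(f"  correlation of dose with {name}: {np.corrcoef(p, stop_dose)[0, 1]:+.2f}")
```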

  7. The use of synthetic input sequences in time series modeling

    International Nuclear Information System (INIS)

    Oliveira, Dair Jose de; Letellier, Christophe; Gomes, Murilo E.D.; Aguirre, Luis A.

    2008-01-01

    In many situations, time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.

  8. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  9. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573])

  10. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  11. Development of vortex model with realistic axial velocity distribution

    International Nuclear Information System (INIS)

    Ito, Kei; Ezure, Toshiki; Ohshima, Hiroyuki

    2014-01-01

    A vortex is considered one of the significant phenomena which may cause gas entrainment (GE) and/or vortex cavitation in sodium-cooled fast reactors. In our past studies, the vortex was assumed to be approximated by the well-known Burgers vortex model. However, the Burgers vortex model rests on a simple but unrealistic assumption that the axial velocity component is horizontally constant, whereas in reality the free-surface vortex has an axial velocity distribution with a large radial gradient near the vortex center. In this study, a new vortex model with a realistic axial velocity distribution is proposed. This model is derived from the steady axisymmetric Navier-Stokes equation, as is the Burgers vortex model, but a realistic radial distribution of the axial velocity is considered, defined to be zero at the vortex center and to approach zero asymptotically at infinity. As verification, the new vortex model is applied to the evaluation of a simple vortex experiment, and shows good agreement with the experimental data in terms of the circumferential velocity distribution and the free-surface shape. In addition, it is confirmed that the Burgers vortex model fails to calculate an accurate velocity distribution under the assumption of uniform axial velocity. However, the accuracy of the Burgers vortex model can be brought close to that of the new vortex model by considering an effective axial velocity, calculated as the average value only in the vicinity of the vortex center. (author)
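
    For reference, the snippet below evaluates the classical Burgers vortex whose horizontally uniform axial velocity the new model replaces; the circulation, strain rate, and viscosity values are placeholders, and the new model's axial velocity distribution is not reproduced here.

```python
import numpy as np

def burgers_vtheta(r, gamma=1.0e-2, alpha=0.1, nu=1.0e-6):
    """Tangential velocity of the classical Burgers vortex (m/s)."""
    r = np.asarray(r, dtype=float)
    rc2 = 2.0 * nu / alpha                     # square of the viscous core radius scale
    return gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-r**2 / rc2))

def burgers_axial(z, alpha=0.1):
    """Axial velocity of the Burgers vortex: w = alpha*z, uniform in r (the assumption criticised above)."""
    return alpha * z

r = np.linspace(1.0e-3, 5.0e-2, 6)
print("v_theta(r):", np.round(burgers_vtheta(r), 4))
print("w at z = 0.1 m:", burgers_axial(0.1))
```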

  12. GASFLOW computer code (physical models and input data)

    International Nuclear Information System (INIS)

    Muehlbauer, Petr

    2007-11-01

    The GASFLOW computer code was developed jointly by the Los Alamos National Laboratory, USA, and Forschungszentrum Karlsruhe, Germany. The code is primarily intended for calculations of the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and in other facilities. The physical models and the input data are described, and a commented simple calculation is presented

  13. Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations

    DEFF Research Database (Denmark)

    Padfield, Nicolas; Andreasen, Troels

    2012-01-01

    on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...

  14. Key processes and input parameters for environmental tritium models

    International Nuclear Information System (INIS)

    Bunnenberg, C.; Taschner, M.; Ogram, G.L.

    1994-01-01

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs

  15. Key processes and input parameters for environmental tritium models

    Energy Technology Data Exchange (ETDEWEB)

    Bunnenberg, C; Taschner, M [Niedersaechsisches Inst. fuer Radiooekologie, Hannover (Germany)]; Ogram, G L [Ontario Hydro, Toronto, ON (Canada)]

    1994-12-31

    The primary objective of the work reported here is to define key processes and input parameters for mathematical models of environmental tritium behaviour adequate for use in safety analysis and licensing of fusion devices like NET and associated tritium handling facilities. (author). 45 refs., 3 figs.

  16. Handwriting Velocity Modeling by Artificial Neural Networks

    OpenAIRE

    Mohamed Aymen Slim; Afef Abdelkrim; Mohamed Benrejeb

    2014-01-01

    Handwriting is a physical manifestation of a complex cognitive process learnt from childhood. People with disabilities or those suffering from various neurological diseases face many difficulties resulting from problems in the muscle stimuli (EMG) or in the signals from the brain (EEG), and these difficulties arise at the stage of writing. The handwriting velocity of the same writer or different writers varies according to different criteria: age, attitude, mood, wr...

  17. An Extended Optimal Velocity Model with Consideration of Honk Effect

    International Nuclear Information System (INIS)

    Tang Tieqiao; Li Chuanyao; Huang Haijun; Shang Huayan

    2010-01-01

    Based on the OV (optimal velocity) model, in this paper we present an extended OV model that takes the honk effect into consideration. The analytical and numerical results illustrate that the honk effect can improve the velocity and flow of uniform flow, but that the increments depend on the density. (interdisciplinary physics and related areas of science and technology)

  18. A classical model explaining the OPERA velocity paradox

    CERN Document Server

    Broda, Boguslaw

    2011-01-01

    In the context of the paradoxical results of the OPERA Collaboration, we have proposed a classical mechanics model yielding a statistically measured beam velocity higher than the velocity of the particles constituting the beam. The ingredients of our model necessary to obtain this curious result are a non-constant fraction function and the method of maximum-likelihood estimation.

  19. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima

    2017-07-10

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.

  20. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils, and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy, as this, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation, and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate, and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input

  1. A PRODUCTIVITY EVALUATION MODEL BASED ON INPUT AND OUTPUT ORIENTATIONS

    Directory of Open Access Journals (Sweden)

    C.O. Anyaeche

    2012-01-01

    ENGLISH ABSTRACT: Many productivity models evaluate either the input or the output performances using standalone techniques. This sometimes gives divergent views of the same system's results. The work reported in this article, which simultaneously evaluated productivity from both orientations, was applied to real-life data. The results showed losses in productivity (–2%) and price recovery (–8%) for the outputs; the inputs showed a productivity gain (145%) but a price recovery loss (–63%). These imply losses in product performance but a productivity gain in inputs. The loss in the price recovery of inputs indicates a problem in the pricing policy. This model is applicable in product diversification.

    AFRIKAANSE OPSOMMING: Most productivity models evaluate either the input or the output performance using isolated techniques. This sometimes leads to divergent perspectives on the same system's performance. This article evaluates performance from both perspectives, using real data. The results show a decline in productivity (–2%) and price recovery (–8%) for the outputs. The inputs show a gain in productivity (145%) but a decline in price recovery (–63%). This implies a decline in product performance but a productivity gain in inputs. The decline in the price recovery of inputs points to a problem in the pricing policy. This model is suitable for product diversification.

  2. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

    A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, where the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s⁻¹, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide-aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus among recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10⁵ cm² s⁻¹. This requires that the global meteoric mass input rate is less than about 20 t d⁻¹, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.

  3. Evaluation of a Model for Predicting the Tidal Velocity in Fjord Entrances

    Energy Technology Data Exchange (ETDEWEB)

    Lalander, Emilia [The Swedish Centre for Renewable Electric Energy Conversion, Division of Electricity, Uppsala Univ. (Sweden)]; Thomassen, Paul [Team Ashes, Trondheim (Norway)]; Leijon, Mats [The Swedish Centre for Renewable Electric Energy Conversion, Division of Electricity, Uppsala Univ. (Sweden)]

    2013-04-15

    Sufficiently accurate and low-cost estimation of tidal velocities is of importance when evaluating a potential site for a tidal energy farm. Here we suggest and evaluate a model to calculate the tidal velocity in fjord entrances. The model is compared with tidal velocities from Acoustic Doppler Current Profiler (ADCP) measurements in the tidal channel Skarpsundet in Norway. The calculated velocity value from the model corresponded well with the measured cross-sectional average velocity, but was shown to underestimate the velocity in the centre of the channel. The effect of this was quantified by calculating the kinetic energy of the flow for a 14-day period. A numerical simulation using TELEMAC-2D was performed and validated with ADCP measurements. Velocity data from the simulation was used as input for calculating the kinetic energy at various locations in the channel. It was concluded that the model presented here is not accurate enough for assessing the tidal energy resource. However, the simplicity of the model was considered promising in the use of finding sites where further analyses can be made.
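
    A minimal sketch of the kinetic-energy calculation referred to above, applied to a synthetic 14-day velocity record; the M2-like sinusoid, its 1.2 m/s amplitude, and the sea-water density are assumptions standing in for the measured or simulated velocities.

```python
import numpy as np

rho = 1025.0                                    # sea-water density, kg/m3
dt = 600.0                                      # 10-minute samples, s

# Synthetic 14-day velocity record with a semidiurnal (M2-like) signal
t = np.arange(0.0, 14 * 24 * 3600.0, dt)
v = 1.2 * np.sin(2.0 * np.pi * t / (12.42 * 3600.0))      # m/s

power_per_area = 0.5 * rho * np.abs(v) ** 3                # kinetic power density, W/m2
energy_kWh_per_m2 = np.sum(power_per_area) * dt / 3.6e6

print(f"mean power density: {power_per_area.mean():.1f} W/m2")
print(f"14-day kinetic energy: {energy_kWh_per_m2:.1f} kWh/m2")
```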

  4. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  5. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the

  6. Uncertainty assessment of 3D instantaneous velocity model from stack velocities

    Science.gov (United States)

    Maesano, Francesco Emanuele; D'Ambrogi, Chiara

    2015-04-01

    3D modelling is a powerful tool that is finding increasing application in data analysis and dissemination. At the same time, quantitative uncertainty evaluation is increasingly requested in many areas of the geological sciences and by stakeholders. In many cases the starting point for 3D model building is the interpretation of seismic profiles, which provide indirect information about the geology of the subsurface in the time domain. The most problematic step in 3D model construction is the conversion of the horizons and faults interpreted in the time domain to the depth domain. In this step the dominant variable that can lead to significantly different results is the velocity. Knowledge of the subsurface velocities comes mainly from point data (sonic logs) that are often sparsely distributed across the area covered by the seismic interpretation. The extrapolation of velocity information to laterally extensive horizons is thus a critical step in obtaining a 3D model in depth that can be used for predictive purposes. In the EU-funded GeoMol Project, the availability of a dense network of seismic lines (confidentially provided by ENI S.p.A.) in the Central Po Plain is paired with the presence of 136 well logs, but few of them have sonic logs and in some portions of the area the wells are very widely spaced. The depth conversion of the 3D model in the time domain was performed by testing different strategies for the use and interpolation of the velocity data. The final model was obtained using a four-layer-cake 3D instantaneous velocity model that considers both the initial velocity (v0) at every reference horizon and the gradient of velocity variation with depth (k). Using this method it is possible to honour both the geological constraints given by the geometries of the horizons and the geostatistical approach to the interpolation of velocities and gradients. Here we present an experiment based on the use of a set of pseudo-wells obtained from the
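
    A minimal sketch of the depth conversion implied by a linear instantaneous-velocity law v(z) = v0 + k·z, which integrates to z = (v0/k)(exp(k·t) − 1) for one-way time t; the example v0, k, and two-way-time pick are placeholders, not GeoMol values.

```python
import numpy as np

def depth_from_twt(twt_s, v0, k):
    """Depth of a horizon picked in two-way time, for a linear instantaneous
    velocity law v(z) = v0 + k*z (v0 in m/s, k in 1/s)."""
    t_one_way = np.asarray(twt_s) / 2.0
    if abs(k) < 1e-12:                       # constant-velocity limit
        return v0 * t_one_way
    return (v0 / k) * np.expm1(k * t_one_way)

# Example: horizon picked at 1.8 s TWT, v0 = 1800 m/s at the reference horizon, k = 0.4 1/s
print(f"depth ~ {depth_from_twt(1.8, 1800.0, 0.4):.0f} m")
```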

  7. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air

  8. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  9. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  10. Screening important inputs in models with strong interaction properties

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Campolongo, Francesca; Cariboni, Jessica

    2009-01-01

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.

  11. Screening important inputs in models with strong interaction properties

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy); Campolongo, Francesca [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)], E-mail: francesca.campolongo@jrc.it; Cariboni, Jessica [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)

    2009-07-15

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.
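    The following sketch illustrates, under simplifying assumptions, the estimators named in the abstract: with the usual A/B/AB_i radial design, the Sobol' formula estimates first-order indices and the Jansen formula estimates total-order indices. The toy test function and sample sizes are ours, not the authors'.

```python
import numpy as np

def sobol_jansen(model, d, n, rng=np.random.default_rng(0)):
    """Estimate first-order (Sobol') and total-order (Jansen) indices
    using the common A/B/AB_i radial sampling design."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # A with column i taken from B
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # Sobol' first order
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen total order
    return S1, ST

# Toy model with an interaction term: y = x0 + x1*x2
f = lambda X: X[:, 0] + X[:, 1] * X[:, 2]
print(sobol_jansen(f, d=3, n=20000))
```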

  12. Assessment of effectiveness of geologic isolation systems: geostatistical modeling of pore velocity

    International Nuclear Information System (INIS)

    Devary, J.L.; Doctor, P.G.

    1981-06-01

    A significant part of evaluating a geologic formation as a nuclear waste repository involves the modeling of contaminant transport in the surrounding media in the event the repository is breached. The commonly used contaminant transport models are deterministic. However, the spatial variability of hydrologic field parameters introduces uncertainties into contaminant transport predictions. This paper discusses the application of geostatistical techniques to the modeling of spatially varying hydrologic field parameters required as input to contaminant transport analyses. Kriging estimation techniques were applied to Hanford Reservation field data to calculate hydraulic conductivity and the ground-water potential gradients. These quantities were statistically combined to estimate the groundwater pore velocity and to characterize the pore velocity estimation error. Combining geostatistical modeling techniques with product error propagation techniques results in an effective stochastic characterization of groundwater pore velocity, a hydrologic parameter required for contaminant transport analyses
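    A minimal numpy sketch of the product error propagation step described above, assuming kriged estimates and kriging standard deviations for hydraulic conductivity and hydraulic gradient are already available at each node; pore velocity is taken as v = K*i/n_e with a first-order (delta-method) variance. This illustrates the general approach only, not the report's actual implementation, and all numbers are hypothetical.

```python
import numpy as np

def pore_velocity_with_error(K, sK, grad, sgrad, n_e=0.25):
    """First-order (delta-method) propagation of kriging errors through
    the pore-velocity relation v = K * i / n_e, assuming K and i are
    approximately independent at each node."""
    v = K * grad / n_e
    # var(K*i) ~ i^2 var(K) + K^2 var(i)  (first order, independence assumed)
    var_v = (grad ** 2 * sK ** 2 + K ** 2 * sgrad ** 2) / n_e ** 2
    return v, np.sqrt(var_v)

# Kriged estimates at three nodes (hypothetical values)
K     = np.array([8.0, 12.0, 5.0])      # m/day
sK    = np.array([1.5, 2.0, 1.0])
grad  = np.array([1e-3, 8e-4, 1.2e-3])  # dimensionless
sgrad = np.array([2e-4, 1e-4, 3e-4])
v, sv = pore_velocity_with_error(K, sK, grad, sgrad)
print(v, sv)
```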

  13. Soil-related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    A. J. Smith

    2003-01-01

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  14. Simplifying BRDF input data for optical signature modeling

    Science.gov (United States)

    Hallberg, Tomas; Pohl, Anna; Fagerström, Jan

    2017-05-01

    Scene simulations of optical signature properties using signature codes normally require input of various parameterized measurement data for surfaces and coatings in order to achieve realistic scene object features. Some of the most important parameters are used in the model of the Bidirectional Reflectance Distribution Function (BRDF) and are normally determined by surface reflectance and scattering measurements. Reflectance measurements of the spectral Directional Hemispherical Reflectance (DHR) at various incident angles can normally be performed in most spectroscopy labs, whereas measuring the BRDF is more complicated and may not be possible at all in many optical labs. We present a method for obtaining the necessary BRDF data directly from DHR measurements for modeling software using the Sandford-Robertson BRDF model. The accuracy of the method is tested by modeling a test surface and comparing results obtained with estimated and with measured BRDF data as input to the model. These results show that using this method gives no significant loss in modeling accuracy.

  15. A phenomenological retention tank model using settling velocity distributions.

    Science.gov (United States)

    Maruejouls, T; Vanrolleghem, P A; Pelletier, G; Lessard, P

    2012-12-15

    Many authors have observed the influence of the settling velocity distribution on the sedimentation process in retention tanks. However, the behaviour of pollutants in such tanks is not well characterized, especially with respect to their settling velocity distribution. This paper presents a phenomenological modelling study of how the settling velocity distribution of particles in combined sewage changes between entering and leaving an off-line retention tank. The work starts from a previously published model (Lessard and Beck, 1991), which is first implemented in wastewater management modelling software and then tested with full-scale field data for the first time. Next, its performance is improved by integrating the particle settling velocity distribution and adding a description of the resuspension caused by pumping during tank emptying. Finally, the potential of the improved model is demonstrated by comparing the results for one additional rain event.
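    To make the role of a settling velocity distribution concrete, the sketch below applies an ideal-settling (Hazen-type) removal rule to a discretized inflow distribution and returns the distribution of the effluent. This is a generic illustration under strong simplifying assumptions, not the improved phenomenological model of the paper; the velocity classes and fractions are hypothetical.

```python
import numpy as np

def removal_by_velocity_class(v_s, fractions, overflow_rate):
    """Ideal-settling (Hazen-type) removal for a discretized settling
    velocity distribution: particles in a class with settling velocity
    v_s are removed with efficiency min(1, v_s / q), where q = Q/A is
    the surface overflow rate of the tank."""
    eta = np.minimum(1.0, np.asarray(v_s) / overflow_rate)
    total_removal = np.sum(np.asarray(fractions) * eta)
    # settling velocity distribution of the effluent (what leaves the tank)
    effluent = np.asarray(fractions) * (1.0 - eta)
    effluent = effluent / effluent.sum() if effluent.sum() > 0 else effluent
    return total_removal, effluent

# Hypothetical inflow distribution: 5 velocity classes (m/h) and mass fractions
v_s = [0.1, 0.5, 1.0, 3.0, 8.0]
frac = [0.15, 0.25, 0.25, 0.20, 0.15]
print(removal_by_velocity_class(v_s, frac, overflow_rate=2.0))  # q = 2 m/h
```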

  16. Flood Water Crossing: Laboratory Model Investigations for Water Velocity Reductions

    Directory of Open Access Journals (Sweden)

    Kasnon N.

    2014-01-01

    Full Text Available The occurrence of floods can affect road traffic negatively, making it difficult to keep traffic moving and damaging vehicles, which in turn become stranded and create further congestion. High water-flow velocities occur on the road surface when there are no objects present that are capable of diffusing the flow. The shape, orientation and size of an object placed beside the road as a diffuser are important for effective attenuation of the water flow. In order to investigate the water flow, a laboratory experiment was set up and models were constructed to study the reduction in flow velocity. The velocity of water before and after passing through the diffuser objects was investigated. This paper focuses on laboratory experiments that use sensors to determine the flow velocity of the water before and after it passes through the two best diffuser objects chosen from a previous flow-pattern experiment.

  17. Soil-Related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Smith, A. J.

    2004-01-01

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  18. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  19. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of the rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow errors allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions, and these rainfall estimates all simulated streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
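    A small sketch of the dimensionality-reduction idea described above, assuming the PyWavelets package is available: the rainfall series is represented by its approximation coefficients at a chosen decomposition level, with the detail coefficients zeroed. The wavelet, level and synthetic hyetograph are illustrative choices, not those of the study.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def reduce_rainfall(rain, wavelet="db4", level=4):
    """Represent a rainfall time series by its approximation coefficients
    at the chosen decomposition level; detail coefficients are zeroed so
    the series is described by far fewer free parameters."""
    coeffs = pywt.wavedec(rain, wavelet, level=level)
    approx = coeffs[0]                                   # cA_level
    reduced = [approx] + [np.zeros_like(c) for c in coeffs[1:]]
    reconstructed = pywt.waverec(reduced, wavelet)[: len(rain)]
    return approx, reconstructed

rng = np.random.default_rng(1)
rain = np.maximum(0.0, rng.gamma(0.3, 4.0, size=256) - 1.0)  # synthetic hyetograph
approx, recon = reduce_rainfall(rain)
print(len(rain), "->", len(approx), "free coefficients")
```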

  20. Evaluating the uncertainty of input quantities in measurement models

    Science.gov (United States)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
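    As one concrete, widely applicable technique in the spirit of the discussion above, the sketch below propagates distributions assigned to input quantities through a measurement model by Monte Carlo sampling and summarizes the output. The two-input model and the assigned distributions are made up for illustration; the paper's own R examples are not reproduced here.

```python
import numpy as np

def propagate(model, samplers, n=200_000, coverage=0.95, seed=0):
    """Draw samples of the input quantities, push them through the
    measurement model, and summarize the output distribution."""
    rng = np.random.default_rng(seed)
    inputs = [draw(rng, n) for draw in samplers]
    y = model(*inputs)
    lo, hi = np.quantile(y, [(1 - coverage) / 2, 1 - (1 - coverage) / 2])
    return y.mean(), y.std(ddof=1), (lo, hi)

# Hypothetical model: resistance R = V / I with V ~ N(5.0 V, 0.02 V)
# and I ~ rectangular on [0.99, 1.01] A (a typical Type B assignment).
samplers = [lambda rng, n: rng.normal(5.0, 0.02, n),
            lambda rng, n: rng.uniform(0.99, 1.01, n)]
mean, u, ci = propagate(lambda V, I: V / I, samplers)
print(f"y = {mean:.4f} ohm, u(y) = {u:.4f} ohm, 95% interval = {ci}")
```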

  1. Influential input parameters for reflood model of MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-10-15

    Best Estimate (BE) calculation has become more broadly used in the nuclear industry and in regulation to reduce the significant conservatism involved in evaluating a Loss of Coolant Accident (LOCA). The reflood model has been identified as one of the problem areas in BE calculation. The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) program of the OECD/NEA is to make progress on quantifying the uncertainty of the physical models in system thermal-hydraulic codes by considering experimental results, especially for reflood. It is important to establish a methodology to identify and select the parameters that most influence the response of reflood phenomena following a Large Break LOCA. For this purpose, a reference calculation and a sensitivity analysis to select the dominant influential parameters for the FEBA experiment are performed.

  2. Comprehensive Information Retrieval and Model Input Sequence (CIRMIS)

    International Nuclear Information System (INIS)

    Friedrichs, D.R.

    1977-04-01

    The Comprehensive Information Retrieval and Model Input Sequence (CIRMIS) was developed to provide the research scientist with man-machine interactive capabilities in a real-time environment, and thereby produce results more quickly and efficiently. The CIRMIS system was originally developed to increase data storage and retrieval capabilities and ground-water model control for the Hanford site. The overall configuration, however, can be used in other areas. The CIRMIS system provides the user with three major functions: retrieval of well-based data, special application for manipulating surface data or background maps, and the manipulation and control of ground-water models. These programs comprise only a portion of the entire CIRMIS system. A complete description of the CIRMIS system is given in this report. 25 figures, 7 tables

  3. Car Deceleration Considering Its Own Velocity in Cellular Automata Model

    International Nuclear Information System (INIS)

    Li Keping

    2006-01-01

    In this paper, we propose a new cellular automaton model based on the NaSch traffic model. In our method, when a car has a larger velocity and the gap to its leading car is not large enough, its velocity is reduced. The aim is for the following car to have a buffer space in which to decrease its velocity at the next time step, and thereby avoid decelerating too sharply. The simulation results show that, using our model, the car deceleration is realistic and closer to field measurements than that of the NaSch model.
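    For readers unfamiliar with the baseline, the sketch below implements the classic NaSch update rules (acceleration, gap-limited braking, random slowdown, movement) on a ring; the paper's modification, in which braking also depends on the car's own velocity, is noted in the comments but not reproduced, since the abstract does not give its exact form.

```python
import numpy as np

def nasch_step(pos, vel, L, vmax=5, p_slow=0.3, rng=np.random.default_rng(0)):
    """One update of a NaSch-type cellular automaton on a ring of L cells.
    Rules: accelerate, brake to the available gap, random slowdown, move.
    (The paper modifies the braking rule using the car's own velocity;
    only the classic gap-based rule is shown here as a baseline.)"""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L        # empty cells ahead of each car
    vel = np.minimum(vel + 1, vmax)                # 1) acceleration
    vel = np.minimum(vel, gaps)                    # 2) deceleration to the gap
    slow = rng.random(len(vel)) < p_slow
    vel = np.maximum(vel - slow, 0)                # 3) randomization
    pos = (pos + vel) % L                          # 4) movement
    return pos, vel

# 30 cars on a 100-cell ring, run a few steps
pos = np.arange(0, 90, 3); vel = np.zeros_like(pos)
for _ in range(5):
    pos, vel = nasch_step(pos, vel, L=100)
print(vel)
```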

  4. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  5. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  6. Lysimeter data as input to performance assessment models

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.

    1998-01-01

    The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste forms in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-117 prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on survivability of waste forms in a disposal environment. The program includes reviewing radionuclide releases from those waste forms in the first 7 years of sampling and examining the relationship between code input parameters and lysimeter data. Also, lysimeter data are applied to performance assessment source term models, and initial results from use of data in two models are presented

  7. Measurement of Laser Weld Temperatures for 3D Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grossetete, Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maccallum, Danny O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  8. Three dimensional reflection velocity analysis based on velocity model scan; Model scan ni yoru sanjigen hanshaha sokudo kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Minegishi, M; Tsuru, T [Japan National Oil Corp., Tokyo (Japan); Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)

    1996-05-01

    Introduced herein is a reflection-wave velocity analysis method that uses model scanning to estimate velocities across a section, an estimate that is useful in constructing a velocity structure model in seismic exploration. In this method, a stripping-type analysis is carried out in which optimum structure parameters are determined for reflections one after another, beginning with those from the shallower parts. During this process, the velocity structures previously determined for the shallower parts are fixed, and only the lowest of the layers under analysis at the time is subjected to model scanning. To account for the bending of ray paths at each velocity boundary in the shallower parts, ray tracing is used to calculate the reflection traveltime curve for the reflector being analyzed. Among the reflection traveltime curves calculated for various velocity structure models, the one that best fits the observed reflection traveltimes is selected. The degree of matching between the calculated and observed results is measured by the data semblance in a time window centered on the calculated reflection traveltime. The structure parameters are estimated from the conditions that maximize the semblance. 1 ref., 4 figs.
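    The semblance measure used to score each trial velocity structure can be illustrated as follows: for a predicted traveltime on each trace, amplitudes are summed in a small time window and the coherence ratio is formed. The gather layout and window length below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def semblance(gather, times_s, dt, half_window=5):
    """Semblance of a gather along a predicted traveltime curve.

    gather:   2D array, shape (n_samples, n_traces)
    times_s:  predicted reflection traveltime (s) for each trace
    dt:       sample interval (s)
    """
    n_samples, n_traces = gather.shape
    idx = np.round(np.asarray(times_s) / dt).astype(int)
    num = den = 0.0
    for k in range(-half_window, half_window + 1):
        rows = np.clip(idx + k, 0, n_samples - 1)
        amps = gather[rows, np.arange(n_traces)]   # one sample per trace
        num += np.sum(amps) ** 2
        den += np.sum(amps ** 2)
    return num / (n_traces * den) if den > 0 else 0.0

# Synthetic check: a flat event at 0.8 s on 24 traces gives semblance ~1
dt, n_tr = 0.004, 24
g = np.zeros((500, n_tr)); g[200, :] = 1.0
print(semblance(g, np.full(n_tr, 0.8), dt))
```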

  9. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
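    The 'random cluster' description of the equal input model quoted above lends itself to a simple simulation: along each edge, with a probability depending on the edge length, the state is redrawn from the stationary distribution; otherwise it is copied from the parent. The tree encoding and rate scaling below are arbitrary illustrative choices and do not reproduce the paper's algebraic results.

```python
import numpy as np

def simulate_equal_input(tree, root, pi, rng=np.random.default_rng(0)):
    """Simulate one site under the equal input model on a rooted tree.

    tree: dict mapping node -> list of (child, branch_length)
    pi:   stationary distribution over the state space {0, ..., k-1}
    Along an edge of length t, the state is redrawn from pi with
    probability 1 - exp(-t) (rate scaling chosen arbitrarily here),
    otherwise inherited unchanged -- the 'random cluster' view.
    """
    states = {root: rng.choice(len(pi), p=pi)}
    stack = [root]
    while stack:
        node = stack.pop()
        for child, t in tree.get(node, []):
            if rng.random() < 1.0 - np.exp(-t):
                states[child] = rng.choice(len(pi), p=pi)   # substitution event
            else:
                states[child] = states[node]                # state inherited
            stack.append(child)
    return states

# Hypothetical 4-leaf tree ((A,B),(C,D)) with chosen branch lengths
tree = {"r": [("x", 0.3), ("y", 0.3)],
        "x": [("A", 0.2), ("B", 0.2)],
        "y": [("C", 0.2), ("D", 0.2)]}
pi = [0.1, 0.2, 0.3, 0.4]
print(simulate_equal_input(tree, "r", pi))
```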

  10. Welding wire velocity modelling and control using an optical sensor

    DEFF Research Database (Denmark)

    Nielsen, Kirsten M.; Pedersen, Tom S.

    2007-01-01

    In this paper a method for controlling the velocity of a welding wire at the tip of the handle is described. The method is an alternative to the traditional welding apparatus control system, in which the wire velocity is controlled internally in the welding machine, implying poor disturbance rejection. To obtain the tip velocity, a dynamic model of the wire/liner system is developed and verified. In the wire/liner system it turned out that backlash and reflections are influential factors, and an idea for handling the backlash has been suggested. In addition, an optical sensor for measuring the wire velocity at the tip has been constructed. The optical sensor may be used, but problems with focusing cause noise in the control loop, demanding a more precise mechanical wire-feed system or an optical sensor with better focusing characteristics.

  11. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  12. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
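    A hedged sketch of two of the fitting techniques listed above (maximum likelihood and the method of moments), applied to synthetic data with SciPy rather than to the Yucca Mountain data set referenced in the report; a Kolmogorov-Smirnov test stands in for the goodness-of-fit discussion.

```python
import numpy as np
from scipy import stats

# Synthetic "measured" data standing in for a performance-assessment input
rng = np.random.default_rng(42)
data = rng.lognormal(mean=1.0, sigma=0.5, size=200)

# (a) Maximum likelihood fit of a lognormal (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(data, floc=0)
print("MLE:     sigma =", shape, " median =", scale)

# (b) Method of moments for the same lognormal parameterization
m, v = data.mean(), data.var(ddof=1)
sigma2 = np.log(1.0 + v / m**2)
mu = np.log(m) - 0.5 * sigma2
print("Moments: sigma =", np.sqrt(sigma2), " median =", np.exp(mu))

# (c) Goodness of fit: Kolmogorov-Smirnov test against the fitted model
print(stats.kstest(data, "lognorm", args=(shape, loc, scale)))
```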

  13. Metocean input data for drift models applications: Loustic study

    International Nuclear Information System (INIS)

    Michon, P.; Bossart, C.; Cabioc'h, M.

    1995-01-01

    Real-time monitoring and crisis management of oil slicks or of the displacement of floating structures require a good knowledge of local winds, waves and currents, which are used as input data for operational drift models. Fortunately, thanks to their world-wide and all-weather coverage, satellite measurements have recently enabled the introduction of new methods for remote sensing of the marine environment. Within a French joint industry project, a procedure has been developed that uses satellite measurements combined with metocean models in order to provide marine operators' drift models with reliable wind, wave and current analyses and short-term forecasts. In particular, a model now allows the calculation of the drift current under the joint action of wind and sea state, thus radically improving on the classical laws. This global procedure either uses satellite wind and wave measurements directly (if available over the study area) or uses them indirectly, as a calibration of metocean model results that are transferred to the location of the oil slick or floating structure. The operational use of this procedure is reported here with an example of floating-structure drift offshore from the Brittany coasts.

  14. An improved robust model predictive control for linear parameter-varying input-output models

    NARCIS (Netherlands)

    Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.

    2018-01-01

    This paper describes a new robust model predictive control (MPC) scheme to control the discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal

  15. A generic model for the shallow velocity structure of volcanoes

    Science.gov (United States)

    Lesage, Philippe; Heap, Michael J.; Kushnir, Alexandra

    2018-05-01

    The knowledge of the structure of volcanoes and of the physical properties of volcanic rocks is of paramount importance to the understanding of volcanic processes and the interpretation of monitoring observations. However, the determination of these structures by geophysical methods suffers limitations including a lack of resolution and poor precision. Laboratory experiments provide complementary information on the physical properties of volcanic materials and their behavior as a function of several parameters including pressure and temperature. Nevertheless combined studies and comparisons of field-based geophysical and laboratory-based physical approaches remain scant in the literature. Here, we present a meta-analysis which compares 44 seismic velocity models of the shallow structure of eleven volcanoes, laboratory velocity measurements on about one hundred rock samples from five volcanoes, and seismic well-logs from deep boreholes at two volcanoes. The comparison of these measurements confirms the strong variability of P- and S-wave velocities, which reflects the diversity of volcanic materials. The values obtained from laboratory experiments are systematically larger than those provided by seismic models. This discrepancy mainly results from scaling problems due to the difference between the sampled volumes. The averages of the seismic models are characterized by very low velocities at the surface and a strong velocity increase at shallow depth. By adjusting analytical functions to these averages, we define a generic model that can describe the variations in P- and S-wave velocities in the first 500 m of andesitic and basaltic volcanoes. This model can be used for volcanoes where no structural information is available. The model can also account for site time correction in hypocenter determination as well as for site and path effects that are commonly observed in volcanic structures.

  16. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    Science.gov (United States)

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
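    A toy comparison of the two spike-train descriptions contrasted above: a noisy leaky integrate-and-fire neuron driven above threshold produces more regular interspike intervals (coefficient of variation below 1) than a Poisson process with the same mean rate. All parameter values are arbitrary illustrations, not those of the study's LGN model.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_isis(T=20.0, dt=1e-4, tau=0.02, v_th=1.0, mu=60.0, sigma=8.0):
    """Interspike intervals of a noisy leaky integrate-and-fire neuron:
    dv = (-v/tau + mu) dt + sigma dW, reset to 0 at threshold v_th.
    Here mu*tau > v_th, so the neuron fires quasi-regularly (CV < 1)."""
    v, last, isis = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += dt * (-v / tau + mu) + sigma * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            t = step * dt
            isis.append(t - last)
            last, v = t, 0.0
    return np.array(isis)

def poisson_isis(rate, T=20.0):
    """Interspike intervals of a homogeneous Poisson process (CV = 1)."""
    n = rng.poisson(rate * T)
    return np.diff(np.sort(rng.uniform(0.0, T, n)))

isi_lif = lif_isis()
isi_poi = poisson_isis(rate=1.0 / isi_lif.mean())
cv = lambda x: x.std() / x.mean()
print("CV (noisy LIF):", round(cv(isi_lif), 2), " CV (Poisson):", round(cv(isi_poi), 2))
```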

  17. Modelling Analysis of Forestry Input-Output Elasticity in China

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2016-01-01

    Full Text Available Based on an extended economic model and spatial econometrics, this essay analyzes the spatial distribution and interdependence of forestry production in China, and calculates the input-output elasticities of forestry production. The results show significant spatial correlation in forestry production in China, with the spatial distribution mainly manifested as spatial agglomeration. The output elasticity of the labor force is 0.6649 and that of capital is 0.8412, while the contribution of land is significantly negative. Labor and capital are therefore the main determinants of province-level forestry production in China, so research on province-level forestry production should not ignore spatial effects, and the policy-making process should take into consideration the effects between provinces on forestry production. This study provides scientific and technical support for forestry production.

  18. Prioritizing Interdependent Production Processes using Leontief Input-Output Model

    Directory of Open Access Journals (Sweden)

    Masbad Jesah Grace

    2016-03-01

    Full Text Available This paper proposes a methodology for identifying key production processes in an interdependent production system. Previous approaches in this domain have drawbacks that may potentially affect the reliability of decision-making. The proposed approach adopts the Leontief input-output model (L-IOM), which has proven successful in analyzing interdependent economic systems. The motivation behind this adoption lies in the strength of the L-IOM in providing a rigorous quantitative framework for identifying key components of interdependent systems. In the proposed approach, the consumption and production flows of each process are represented, respectively, by the material inventory produced by the prior process and the material inventory produced by the current process, both in monetary values. A case study of a furniture production system located in the central Philippines was carried out to elucidate the proposed approach, and the results of the case are reported in this work.
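    A minimal sketch of the Leontief machinery the paper adopts: with a technical-coefficient matrix A and final demand f, gross output is x = (I - A)^-1 f, and column sums of the Leontief inverse (backward linkages) give one common way to rank processes by their pull on the rest of the system. The 3-process numbers are hypothetical, and the ranking criterion may differ in detail from the paper's procedure.

```python
import numpy as np

# Hypothetical 3-process technical-coefficient matrix A: A[i, j] is the
# monetary input from process i needed per unit of output of process j.
A = np.array([[0.10, 0.25, 0.05],
              [0.20, 0.05, 0.30],
              [0.05, 0.15, 0.10]])
f = np.array([100.0, 50.0, 80.0])          # final demand per process

L = np.linalg.inv(np.eye(3) - A)           # Leontief inverse (I - A)^-1
x = L @ f                                  # gross output needed: x = L f
backward_linkage = L.sum(axis=0)           # column sums: total output pulled
                                           # by one unit of final demand
ranking = np.argsort(backward_linkage)[::-1]
print("gross output:", np.round(x, 1))
print("key-process ranking (most influential first):", ranking)
```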

  19. Input modeling with phase-type distributions and Markov models theory and applications

    CERN Document Server

    Buchholz, Peter; Felko, Iryna

    2014-01-01

    Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...
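    As a minimal illustration of the phase-type machinery such input models build on, the sketch below evaluates the CDF and mean of a continuous PH(alpha, T) distribution from its initial vector and sub-generator; the fitting algorithms surveyed in the book are not reproduced, and the 2-phase example is hypothetical.

```python
import numpy as np
from scipy.linalg import expm

def ph_cdf(x, alpha, T):
    """CDF of a continuous phase-type distribution PH(alpha, T):
    F(x) = 1 - alpha @ expm(T x) @ 1, where T is the sub-generator of the
    transient states and alpha the initial probability vector."""
    ones = np.ones(len(alpha))
    return 1.0 - alpha @ expm(T * x) @ ones

def ph_mean(alpha, T):
    """Mean of PH(alpha, T): -alpha @ inv(T) @ 1."""
    return -alpha @ np.linalg.solve(T, np.ones(len(alpha)))

# Hypothetical 2-phase Coxian-type example
alpha = np.array([1.0, 0.0])
T = np.array([[-3.0, 2.0],
              [0.0, -1.5]])
print("mean:", ph_mean(alpha, T), " F(1.0):", ph_cdf(1.0, alpha, T))
```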

  20. An analytical model for an input/output-subsystem

    International Nuclear Information System (INIS)

    Roemgens, J.

    1983-05-01

    An input/output subsystem of one or several computers is formed by the external memory units and the peripheral units of a computer system. For these subsystems, mathematical models are established that take into account the special properties of I/O subsystems, in order to avoid planning errors and to allow predictions of the capacity of such systems. Here an analytical model is presented for the magnetic discs of an I/O subsystem, using analytical methods for the individual waiting queues or queueing networks. Only I/O subsystems of IBM computer configurations controlled by the MVS operating system are considered. After a description of the hardware and software components of these I/O systems, possible solutions from the literature are presented and discussed with respect to their applicability to IBM I/O subsystems. Based on these models, a special scheme is developed which combines the advantages of the literature models and partly avoids their disadvantages. (orig./RW)
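    One of the elementary building blocks of such analytical queueing models is sketched below: single-queue (M/M/1) formulas for the utilization, mean response time and mean population of a disk, assuming Poisson arrivals and exponential service. The disk parameters are hypothetical and the report's actual model is more elaborate.

```python
def mm1_metrics(arrival_rate, service_time):
    """Basic open single-queue (M/M/1) formulas often used as building
    blocks in analytical I/O-subsystem models: utilization, mean response
    time and mean number in system (valid only for utilization < 1)."""
    rho = arrival_rate * service_time
    if rho >= 1.0:
        raise ValueError("queue is unstable (utilization >= 1)")
    response = service_time / (1.0 - rho)     # mean time in system
    n_in_system = rho / (1.0 - rho)           # Little's law: N = lambda * R
    return rho, response, n_in_system

# Hypothetical disk: 25 ms average service time, 30 requests/second
print(mm1_metrics(arrival_rate=30.0, service_time=0.025))
```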

  1. Delayed hydride cracking: theoretical model testing to predict cracking velocity

    International Nuclear Information System (INIS)

    Mieza, Juan I.; Vigna, Gustavo L.; Domizzi, Gladys

    2009-01-01

    Pressure tubes from CANDU nuclear reactors, like any other component manufactured from Zr alloys, are prone to delayed hydride cracking (DHC). It is therefore important to be able to predict the cracking velocity over the component lifetime from easily measured parameters such as hydrogen concentration and mechanical and microstructural properties. Two of the theoretical models reported in the literature for calculating the DHC velocity were chosen and combined; using the appropriate variables, this allowed a comparison with experimental results from samples of Zr-2.5 Nb tubes with different mechanical and structural properties. In addition, velocities measured by other authors in irradiated materials could be reproduced using the model described above. (author)

  2. Shallow and deep crustal velocity models of Northeast Tibet

    Science.gov (United States)

    Karplus, M.; Klemperer, S. L.; Mechie, J.; Shi, D.; Zhao, W.; Brown, L. D.; Wu, Z.

    2009-12-01

    The INDEPTH IV seismic profile in Northeast Tibet is the highest resolution wide-angle refraction experiment imaging the Qaidam Basin, North Kunlun Thrusts (NKT), Kunlun Mountains, North and South Kunlun Faults (NKF, SKF), and Songpan-Ganzi terrane (SG). First arrival refraction modeling using ray tracing and least squares inversion has yielded a crustal P-wave velocity model, best resolved for the top 20 km. Ray tracing of deeper reflections shows considerable differences between the Qaidam Basin and the SG, in agreement with previous studies of those areas. The Moho ranges from about 52 km beneath the Qaidam Basin to 63 km with a slight northward dip beneath the SG. The 11-km change must occur between the SKF and the southern edge of the Qaidam Basin, just north of the NKT, allowing the possibility of a Moho step across the NKT. The Qaidam Basin velocity-versus-depth profile is more similar to the global average than the SG profile, which bears resemblance to previously determined “Tibet-type” velocity profiles with mid to lower crustal velocities of 6.5 to 7.0 km/s appearing at greater depths. The highest resolution portion of the profile (100-m instrument spacing) features two distinct, apparently south-dipping low-velocity zones reaching about 2-3 km depth that we infer to be the locations of the NKF and SKF. A strong reflector at 35 km, located entirely south of the SKF and truncated just south of it, may be cut by a steeply south-dipping SKF. Elevated velocities at depth beneath the surface location of the NKF may indicate that the south-dipping NKF meets the SKF between depths of 5 and 10 km. Undulating regions of high and low velocity extending about 1-2 km in depth near the southern border of the Qaidam Basin likely represent north-verging thrust sheets of the NKT.

  3. A nonlinear inversion for the velocity background and perturbation models

    KAUST Repository

    Wu, Zedong

    2015-08-19

    Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of standard full waveform inversion (FWI) by inverting for the singly scattered wavefield obtained using an image. However, current RWI methods usually neglect diving waves, which are an important source of information for extracting the long-wavelength components of the velocity model. Thus, we propose a new optimization problem by breaking the velocity model into the background and the perturbation directly in the wave equation. In this case, the perturbed model is no longer the single-scattering model but includes all scattering. We optimize both components simultaneously, and thus the objective function is nonlinear with respect to both the background and the perturbation. The newly introduced w can naturally absorb the non-smooth update of the background. Application to the Marmousi model with frequencies that start at 5 Hz shows that this method can converge to the accurate velocity starting from a linearly increasing initial velocity. Application to SEG2014 demonstrates the versatility of the approach.

  4. A Markovian model of evolving world input-output network.

    Directory of Open Access Journals (Sweden)

    Vahid Moosavi

    Full Text Available The initial theoretical connections between Leontief input-output models and Markov chains were established back in 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies that are comparable to GDP shares of economies as the traditional index of economies welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results to a slower average monetary flow over the network, there are some nodes, where their slowdowns improve the overall quality of the network in terms of connectivity and the average flow of the money.

  5. A Markovian model of evolving world input-output network.

    Science.gov (United States)

    Moosavi, Vahid; Isacchini, Giulio

    2017-01-01

    The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with the Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains, such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of the structural power of the economies, comparable to GDP shares as the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of nodes influenced by a shock in the activity of a given node to the total number of nodes, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.

  6. Modeling and Velocity Tracking Control for Tape Drive System ...

    African Journals Online (AJOL)

    Modeling and Velocity Tracking Control for Tape Drive System. ... Journal of Applied Sciences and Environmental Management ... The result of the study revealed that 7.07, 8 and 10 of koln values met the design goal and also resulted in optimal control performance with the following characteristics: 7.31%, 7.71%, 9.41% ...

  7. Regulation of Wnt signaling by nociceptive input in animal models

    Directory of Open Access Journals (Sweden)

    Shi Yuqiang

    2012-06-01

    Full Text Available Abstract Background Central sensitization-associated synaptic plasticity in the spinal cord dorsal horn (SCDH) critically contributes to the development of chronic pain, but understanding of the underlying molecular pathways is still incomplete. Emerging evidence suggests that Wnt signaling plays a crucial role in the regulation of synaptic plasticity. Little is known about the potential function of the Wnt signaling cascades in chronic pain development. Results Fluorescent immunostaining results indicate that β-catenin, an essential protein in the canonical Wnt signaling pathway, is expressed in the superficial layers of the mouse SCDH with enrichment at synapses in lamina II. In addition, Wnt3a, a prototypic Wnt ligand that activates the canonical pathway, is also enriched in the superficial layers. Immunoblotting analysis indicates that both Wnt3a and β-catenin are up-regulated in the SCDH of various mouse pain models created by hind-paw injection of capsaicin, intrathecal (i.t.) injection of HIV-gp120 protein or spinal nerve ligation (SNL). Furthermore, Wnt5a, a prototypic Wnt ligand for non-canonical pathways, and its receptor Ror2 are also up-regulated in the SCDH of these models. Conclusion Our results suggest that Wnt signaling pathways are regulated by nociceptive input. The activation of Wnt signaling may regulate the expression of spinal central sensitization during the development of acute and chronic pain.

  8. A new settling velocity model to describe secondary sedimentation.

    Science.gov (United States)

    Ramin, Elham; Wágner, Dorottya S; Yde, Lars; Binning, Philip J; Rasmussen, Michael R; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-12-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM(ZS). In addition, correlations between the Herschel-Bulkley rheological model parameters and sludge concentration were identified with data from batch rheological experiments. A 2-D axisymmetric CFD model of a circular SST containing the new settling velocity and rheological model was validated with full-scale measurements. Finally, it was shown that the representation of compression settling in the CFD model can significantly influence the prediction of sludge distribution in the SSTs under dry- and wet-weather flow conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
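
    For orientation only (this is not the model developed and calibrated in the study), a widely used hindered-settling law of Vesilind type with a schematic compression correction can be sketched as follows; all parameter values are placeholders.

      import numpy as np

      def settling_velocity(X, v0=6.0, r_h=0.4, X_c=6.0, alpha=0.3):
          """Illustrative settling velocity [m/h] vs. sludge concentration X [kg/m^3].

          Hindered regime: Vesilind law v = v0 * exp(-r_h * X).
          Above a threshold X_c the velocity is further reduced to mimic
          compression settling (purely schematic, not the paper's model).
          """
          v = v0 * np.exp(-r_h * X)
          compression = np.where(X > X_c, np.exp(-alpha * (X - X_c)), 1.0)
          return v * compression

      X = np.linspace(0.0, 12.0, 7)
      print(np.round(settling_velocity(X), 3))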

  9. UCVM: An Open Source Framework for 3D Velocity Model Research

    Science.gov (United States)

    Gill, D.; Maechling, P. J.; Jordan, T. H.; Plesch, A.; Taborda, R.; Callaghan, S.; Small, P.

    2013-12-01

    Three-dimensional (3D) seismic velocity models provide fundamental input data to ground motion simulations, in the form of structured or unstructured meshes or grids. Numerous models are available for California, as well as for other parts of the United States and Europe, but models do not share a common interface. Being able to interact with these models in a standardized way is critical in order to configure and run 3D ground motion simulations. The Unified Community Velocity Model (UCVM) software, developed by researchers at the Southern California Earthquake Center (SCEC), is an open source framework designed to provide a cohesive way to interact with seismic velocity models. We describe the several ways in which we have improved the UCVM software over the last year. We have simplified the UCVM installation process by automating the installation of various community codebases, improving the ease of use. We discuss how UCVM software was used to build velocity meshes for high-frequency (4 Hz) deterministic 3D wave propagation simulations, and how the UCVM framework interacts with other open source resources, such as NetCDF file formats for visualization. The UCVM software uses a layered software architecture that transparently converts geographic coordinates to the coordinate systems used by the underlying velocity models and supports inclusion of a configurable near-surface geotechnical layer, while interacting with the velocity model codes through their existing software interfaces. No changes to the velocity model codes are required. Our recent UCVM installation improvements bundle UCVM with a setup script, written in Python, which guides users through the process that installs the UCVM software along with all the user-selectable velocity models. Each velocity model is converted into a standardized (configure, make, make install) format that is easily downloaded and installed via the script. UCVM is often run in specialized high performance computing (HPC
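
    The idea of a single query interface sitting in front of heterogeneous velocity models can be pictured with a small abstraction like the one below; the class and function names are hypothetical and do not reproduce the actual UCVM API.

      from abc import ABC, abstractmethod

      class VelocityModel(ABC):
          """Hypothetical common interface: each wrapped model answers the same query."""

          @abstractmethod
          def query(self, lon: float, lat: float, depth_m: float) -> dict:
              """Return material properties at a geographic point."""

      class ConstantGradientModel(VelocityModel):
          """Toy 1D background model used as a stand-in for a real community model."""

          def query(self, lon, lat, depth_m):
              vs = 500.0 + 3.0 * depth_m          # placeholder gradient, m/s
              vp = 1.7 * vs
              rho = 1700.0 + 0.25 * depth_m       # placeholder density, kg/m^3
              return {"vp": vp, "vs": vs, "density": rho}

      def query_first_defined(models, lon, lat, depth_m):
          """Tiling logic: ask each registered model in priority order."""
          for model in models:
              props = model.query(lon, lat, depth_m)
              if props is not None:
                  return props
          return None

      print(query_first_defined([ConstantGradientModel()], -118.2, 34.1, 1000.0))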

  10. A new settling velocity model to describe secondary sedimentation

    DEFF Research Database (Denmark)

    Ramin, Elham; Wágner, Dorottya Sarolta; Yde, Lars

    2014-01-01

    Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids...... distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges...... associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM...

  11. A model relating Eulerian spatial and temporal velocity correlations

    Science.gov (United States)

    Cholemari, Murali R.; Arakeri, Jaywant H.

    2006-03-01

    In this paper we propose a model to relate Eulerian spatial and temporal velocity autocorrelations in homogeneous, isotropic and stationary turbulence. We model the decorrelation as the eddies of various scales becoming decorrelated. This enables us to connect the spatial and temporal separations required for a certain decorrelation through the ‘eddy scale’. Given either the spatial or the temporal velocity correlation, we obtain the ‘eddy scale’ and the rate at which the decorrelation proceeds. This yields, at any given value of the correlation, a spatial separation from the temporal correlation and a temporal separation from the spatial correlation, thereby relating the two correlations. We test the model using experimental data from a stationary axisymmetric turbulent flow with homogeneity along the axis.

  12. A new approach for modeling dry deposition velocity of particles

    Science.gov (United States)

    Giardina, M.; Buffa, P.

    2018-05-01

    The dry deposition process is recognized as an important pathway among the various removal processes of pollutants in the atmosphere. In this field, several models reported in the literature can predict the dry deposition velocity of particles of different diameters, but many of them are not capable of representing dry deposition phenomena for several categories of pollutants and deposition surfaces. Moreover, their application is valid only for specific conditions and only if the data in a given application meet all of the assumptions required of the data used to define the model. In this paper a new dry deposition velocity model based on an electrical analogy scheme is proposed to overcome the above issues. The dry deposition velocity is evaluated by assuming that the resistances that affect the particle flux in the quasi-laminar sub-layer can be combined to take into account local features of the mutual influence of inertial impaction and turbulent processes. Comparisons with experimental data from the literature indicate that the proposed model captures with good agreement the main dry deposition phenomena for the examined environmental conditions and deposition surfaces. The proposed approach could be easily implemented within atmospheric dispersion modeling codes, efficiently addressing different deposition surfaces and several classes of particulate pollutants.
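
    For context, the classical resistance-in-series formulation that such schemes start from is usually written as

    \[
      v_d = v_g + \frac{1}{r_a + r_b + r_a\, r_b\, v_g}\, ,
    \]

    where $v_g$ is the gravitational settling velocity, $r_a$ the aerodynamic resistance and $r_b$ the quasi-laminar sub-layer resistance; the model described above modifies the treatment of $r_b$ so that inertial impaction and turbulent transfer are no longer treated as independent.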

  13. Modeling delamination of FRP laminates under low velocity impact

    Science.gov (United States)

    Jiang, Z.; Wen, H. M.; Ren, S. L.

    2017-09-01

    Fiber reinforced plastic (FRP) laminates have been increasingly used in various engineering fields such as aeronautics, astronautics, transportation and naval architecture, and their impact response and failure are a major concern in the academic community. A new numerical model is suggested for fiber reinforced plastic composites. The model treats an FRP laminate as being constituted of unidirectional laminated plates connected by adhesive layers. A modified adhesive-layer damage model that considers strain rate effects is incorporated into the ABAQUS/Explicit finite element program through the user-defined material subroutine VUMAT. The delamination predicted by the present model is in good agreement with experimental results for low velocity impact.

  14. Velocity profiles in idealized model of human respiratory tract

    Science.gov (United States)

    Elcner, J.; Jedelsky, J.; Lizal, F.; Jicha, M.

    2013-04-01

    This article deals with a numerical simulation focused on velocity profiles in an idealized model of the human upper airways during steady inspiration. Three regimes of breathing were investigated: Resting condition, Deep breathing and Light activity, which correspond to the most common regimes used for experiments and simulations. The calculation was validated with experimental data given by Phase Doppler Anemometry performed on a model with the same geometry. This comparison was made at multiple points which form one cross-section in the trachea near the first bifurcation of the bronchial tree. The development of the velocity profile in the trachea during steady inspiration is discussed with respect to common phenomena formed in the trachea and to future research on the transport of aerosol particles in the human respiratory tract.

  15. Velocity profiles in idealized model of human respiratory tract

    Directory of Open Access Journals (Sweden)

    Jicha M.

    2013-04-01

    Full Text Available This article deals with a numerical simulation focused on velocity profiles in an idealized model of the human upper airways during steady inspiration. Three regimes of breathing were investigated: Resting condition, Deep breathing and Light activity, which correspond to the most common regimes used for experiments and simulations. The calculation was validated with experimental data given by Phase Doppler Anemometry performed on a model with the same geometry. This comparison was made at multiple points which form one cross-section in the trachea near the first bifurcation of the bronchial tree. The development of the velocity profile in the trachea during steady inspiration is discussed with respect to common phenomena formed in the trachea and to future research on the transport of aerosol particles in the human respiratory tract.

  16. Estimation of spatial uncertainties of tomographic velocity models

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, M.; Du, Z.; Querendez, E. [SINTEF Petroleum Research, Trondheim (Norway)

    2012-12-15

    This research project aims to evaluate the possibility of assessing the spatial uncertainties in tomographic velocity model building in a quantitative way. The project is intended to serve as a test of whether accurate and specific uncertainty estimates (e.g., in meters) can be obtained. The project is based on Monte Carlo-type perturbations of the velocity model as obtained from the tomographic inversion guided by diagonal and off-diagonal elements of the resolution and the covariance matrices. The implementation and testing of this method was based on the SINTEF in-house stereotomography code, using small synthetic 2D data sets. To test the method the calculation and output of the covariance and resolution matrices was implemented, and software to perform the error estimation was created. The work included the creation of 2D synthetic data sets, the implementation and testing of the software to conduct the tests (output of the covariance and resolution matrices which are not implicitly provided by stereotomography), application to synthetic data sets, analysis of the test results, and creating the final report. The results show that this method can be used to estimate the spatial errors in tomographic images quantitatively. The results agree with the known errors for our synthetic models. However, the method can only be applied to structures in the model, where the change of seismic velocity is larger than the predicted error of the velocity parameter amplitudes. In addition, the analysis is dependent on the tomographic method, e.g., regularization and parameterization. The conducted tests were very successful and we believe that this method could be developed further to be applied to third party tomographic images.
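
    A schematic of the Monte Carlo step described in this record (illustrative only, not the SINTEF implementation): perturbations are drawn from a multivariate normal built from the posterior covariance of the inverted parameters, and the spread of the perturbed models gives a quantitative spatial error estimate.

      import numpy as np

      rng = np.random.default_rng(0)

      m_hat = np.array([2000.0, 2150.0, 2400.0])      # inverted velocities per cell [m/s]
      C = np.array([[900.0, 300.0,  50.0],            # toy posterior covariance [m^2/s^2]
                    [300.0, 700.0, 120.0],
                    [ 50.0, 120.0, 500.0]])

      # Draw Monte Carlo realisations guided by the (co)variances.
      samples = rng.multivariate_normal(mean=m_hat, cov=C, size=2000)

      # Per-cell standard deviation is a quantitative spatial error estimate.
      print("velocity std per cell [m/s]:", samples.std(axis=0).round(1))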

  17. ETFOD: a point model physics code with arbitrary input

    International Nuclear Information System (INIS)

    Rothe, K.E.; Attenberger, S.E.

    1980-06-01

    ETFOD is a zero-dimensional code which solves a set of physics equations by minimization. The technique differs from the usual approach in that the input is arbitrary: the user is supplied with a set of variables and specifies which of them are inputs (held unchanging), while the remaining variables become the output. Presently the code is being used for ETF reactor design studies. The code was written in a manner that allows easy modification of equations, variables, and physics calculations. The solution technique is presented along with hints for using the code.

  18. Small velocity and finite temperature variations in kinetic relaxation models

    KAUST Repository

    Markowich, Peter; Jüngel, Ansgar; Aoki, Kazuo

    2010-01-01

    A small Knudsen number analysis of a kinetic equation in the diffusive scaling is performed. The collision kernel is of BGK type with a general local Gibbs state. Assuming that the flow velocity is of the order of the Knudsen number, a Hilbert expansion yields a macroscopic model with finite temperature variations, whose complexity lies in between the hydrodynamic and the energy-transport equations. Its mathematical structure is explored and macroscopic models for specific examples of the global Gibbs state are presented. © American Institute of Mathematical Sciences.

  19. Identifying Clusters with Mixture Models that Include Radial Velocity Observations

    Science.gov (United States)

    Czarnatowicz, Alexis; Ybarra, Jason E.

    2018-01-01

    The study of stellar clusters plays an integral role in the study of star formation. We present a cluster mixture model that considers radial velocity data in addition to spatial data. Maximum likelihood estimation through the Expectation-Maximization (EM) algorithm is used for parameter estimation. Our mixture model analysis can be used to distinguish adjacent or overlapping clusters, and estimate properties for each cluster. Work supported by awards from the Virginia Foundation for Independent Colleges (VFIC) Undergraduate Science Research Fellowship and The Research Experience @Bridgewater (TREB).
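
    A minimal illustration of the idea (synthetic data, not the authors' code): fitting a Gaussian mixture by EM to sky positions augmented with radial velocity, so that kinematically distinct but spatially overlapping groups separate. The example uses scikit-learn's EM implementation.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)

      # Two synthetic clusters that overlap on the sky but differ in radial velocity.
      cluster_a = np.column_stack([rng.normal(0.0, 1.0, 300),    # x [deg]
                                   rng.normal(0.0, 1.0, 300),    # y [deg]
                                   rng.normal(-15.0, 2.0, 300)]) # v_r [km/s]
      cluster_b = np.column_stack([rng.normal(0.5, 1.0, 300),
                                   rng.normal(0.3, 1.0, 300),
                                   rng.normal(10.0, 2.0, 300)])
      data = np.vstack([cluster_a, cluster_b])

      # EM-based maximum likelihood fit of a two-component mixture.
      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
      labels = gmm.fit_predict(data)
      print("cluster sizes:", np.bincount(labels))
      print("component means (x, y, v_r):")
      print(np.round(gmm.means_, 2))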

  20. Predicted and measured velocity distribution in a model heat exchanger

    International Nuclear Information System (INIS)

    Rhodes, D.B.; Carlucci, L.N.

    1984-01-01

    This paper presents a comparison between numerical predictions, using the porous media concept, and measurements of the two-dimensional isothermal shell-side velocity distributions in a model heat exchanger. Computations and measurements were done with and without tubes present in the model. The effect of tube-to-baffle leakage was also investigated. The comparison was made to validate certain porous media concepts used in a computer code being developed to predict the detailed shell-side flow in a wide range of shell-and-tube heat exchanger geometries

  1. Measured and modeled dry deposition velocities over the ESCOMPTE area

    Science.gov (United States)

    Michou, M.; Laville, P.; Serça, D.; Fotiadi, A.; Bouchou, P.; Peuch, V.-H.

    2005-03-01

    Measurements of the dry deposition velocity of ozone have been made by the eddy correlation method during ESCOMPTE (Etude sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions). The strong local variability of natural ecosystems was sampled over several weeks in May, June and July 2001 at four sites with varying surface characteristics. The sites included a maize field, a Mediterranean forest, a Mediterranean shrub-land, and an almost bare soil. Measurements of nitrogen oxide deposition fluxes by the relaxed eddy correlation method have also been carried out at the same bare soil site. An evaluation of the deposition velocities computed by the surface module of the multi-scale Chemistry and Transport Model MOCAGE is presented. This module relies on a resistance approach, with a detailed treatment of the stomatal contribution to the surface resistance. Simulations at the finest model horizontal resolution (around 10 km) are compared to observations. While the seasonal variations are in agreement with the literature, comparisons between raw model outputs and observations, at the different measurement sites and for the specific observing periods, give contrasting results. As the simulated meteorology at the scale of 10 km nicely captures the observed situations, the default set of surface characteristics (averaged at the resolution of a grid cell) appears to be one of the main reasons for the discrepancies found with observations. For each case, sensitivity studies have been performed in order to see the impact of adjusting the surface characteristics to the observed ones, when available. Generally, a correct agreement with the observed deposition velocities is obtained. This advocates for a sub-grid scale representation of surface characteristics for the simulation of dry deposition velocities over such a complex area. Two other aspects appear in the discussion. Firstly, the strong influence of the soil water content to the plant

  2. Zr Extrusion – Direct Input for Models & Validation

    Energy Technology Data Exchange (ETDEWEB)

    Cerreta, Ellen Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    As we examine differences in the high strain rate, high strain tensile response of high purity, highly textured Zr as a function of loading direction, temperature and extrusion velocity with primarily post mortem characterization techniques, we have also developed a technique for characterizing the in-situ extrusion process. This particular measurement is useful for partitioning energy of the system during the extrusion process: friction, kinetic energy, and temperature

  3. Effects of Adaptation on Discrimination of Whisker Deflection Velocity and Angular Direction in a Model of the Barrel Cortex

    Directory of Open Access Journals (Sweden)

    Mainak J. Patel

    2018-06-01

    Full Text Available Two important stimulus features represented within the rodent barrel cortex are velocity and angular direction of whisker deflection. Each cortical barrel receives information from thalamocortical (TC) cells that relay information from a single whisker, and TC input is decoded by barrel regular-spiking (RS) cells through a feedforward inhibitory architecture (with inhibition delivered by cortical fast-spiking or FS cells). TC cells encode deflection velocity through population synchrony, while deflection direction is encoded through the distribution of spike counts across the TC population. Barrel RS cells encode both deflection direction and velocity with spike rate, and are divided into functional domains by direction preference. Following repetitive whisker stimulation, system adaptation causes a weakening of synaptic inputs to RS cells and diminishes RS cell spike responses, though evidence suggests that stimulus discrimination may improve following adaptation. In this work, I construct a model of the TC, FS, and RS cells comprising a single barrel system—the model incorporates realistic synaptic connectivity and dynamics and simulates both angular direction (through the spatial pattern of TC activation) and velocity (through synchrony of the TC population spikes) of a deflection of the primary whisker, and I use the model to examine direction and velocity selectivity of barrel RS cells before and after adaptation. I find that velocity and direction selectivity of individual RS cells (measured over multiple trials) sharpens following adaptation, but stimulus discrimination using a simple linear classifier by the RS population response during a single trial (a more biologically meaningful measure than single cell discrimination over multiple trials) exhibits strikingly different behavior—velocity discrimination is similar both before and after adaptation, while direction classification improves substantially following adaptation. This is the

  4. High Temperature Test Facility Preliminary RELAP5-3D Input Model Description

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-12-01

    A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.

  5. Hydrodynamic Equations for Flocking Models without Velocity Alignment

    Science.gov (United States)

    Peruani, Fernando

    2017-10-01

    The spontaneous emergence of collective motion patterns is usually associated with the presence of a velocity alignment mechanism that mediates the interactions among the moving individuals. Despite this widespread view, it has been shown recently that several flocking behaviors can emerge in the absence of velocity alignment as a result of short-range, position-based, attractive forces that act inside a vision cone. Here, we derive the corresponding hydrodynamic equations of a microscopic position-based flocking model, reviewing and extending previously reported results. In particular, we show that three distinct macroscopic collective behaviors can be observed: i) the coarsening of aggregates with no orientational order, ii) the emergence of static, elongated nematic bands, and iii) the formation of moving, locally polar structures, which we call worms. The derived hydrodynamic equations indicate that active particles interacting via position-based interactions belong to a distinct class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems.

  6. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    OpenAIRE

    Priska Arindya Purnama

    2017-01-01

    The aim of this research is to model and forecast the rainfall in Batu City using a multi-input transfer function model based on air temperature, humidity, wind speed and cloud. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs grouped into a noise series (Nt). The multi-input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0)...
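
    In the usual Box-Jenkins notation, a multi-input transfer function model of this kind takes the generic form (the specific orders (b_j, s_j, r_j) are those reported in the record, not assumed here):

    \[
      Y_t = \sum_{j=1}^{4} \frac{\omega_{s_j}(B)\, B^{\,b_j}}{\delta_{r_j}(B)}\, X_{j,t} + \frac{\theta_{q_n}(B)}{\phi_{p_n}(B)}\, a_t ,
    \]

    where $B$ is the backshift operator, $b_j$ the delay of input $j$, $\omega_{s_j}$ and $\delta_{r_j}$ polynomials of orders $s_j$ and $r_j$, and the last term is the ARMA$(p_n, q_n)$ representation of the noise series $N_t$.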

  7. The Three-Dimensional Velocity Distribution of Wide Gap Taylor-Couette Flow Modelled by CFD

    Directory of Open Access Journals (Sweden)

    David Shina Adebayo

    2016-01-01

    Full Text Available A numerical investigation is conducted for the flow between two concentric cylinders with a wide gap, relevant to bearing chamber applications. This wide gap configuration has received comparatively less attention than narrow gap journal bearing type geometries. The flow in the gap between an inner rotating cylinder and an outer stationary cylinder has been modelled as an incompressible flow using an implicit finite volume RANS scheme with the realisable k-ε model. The model flow is above the critical Taylor number at which axisymmetric counterrotating Taylor vortices are formed. The tangential velocity profiles at all axial locations are different from typical journal bearing applications, where the velocity profiles are quasilinear. The predicted results led to two significant findings of impact in rotating machinery operations. Firstly, the axial variation of the tangential velocity gradient induces an axially varying shear stress, resulting in local bands of enhanced work input to the working fluid. This is likely to cause unwanted heat transfer on the surface in high torque turbomachinery applications. Secondly, the radial inflow at the axial end-wall boundaries is likely to promote the transport of debris to the junction between the end-collar and the rotating cylinder, causing the build-up of fouling in the seal.

  8. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    Science.gov (United States)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
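
    A stripped-down sketch of the idea of treating input errors as extra unknowns (purely illustrative; the study couples this to SRH-1D and a DREAM-type sampler, and models the error per input series rather than as the single bias used here): the observed forcing is corrected by a Gaussian input error whose parameters are inferred together with the model parameters.

      import numpy as np

      def log_posterior(theta, q_obs, y_obs, model, sigma_y=0.1):
          """theta = (model parameter k, input-error mean mu_q, input-error std s_q)."""
          k, mu_q, s_q = theta
          if s_q <= 0.0 or k <= 0.0:
              return -np.inf
          q_true = q_obs + mu_q                     # corrected input (single-bias simplification)
          y_pred = model(q_true, k)
          log_like = -0.5 * np.sum(((y_obs - y_pred) / sigma_y) ** 2)
          # Weak priors: mu_q ~ N(0, s_q^2), Jeffreys-like prior on s_q.
          log_prior = -0.5 * (mu_q / s_q) ** 2 - np.log(s_q)
          return log_like + log_prior

      # Toy forward model: sediment load as a power law of discharge (placeholder physics).
      toy_model = lambda q, k: k * q ** 1.5
      q_obs = np.array([10.0, 12.0, 15.0])          # recorded inflows, assumed biased low
      y_obs = toy_model(q_obs + 0.5, 0.02)          # synthetic observations with true bias 0.5
      print(log_posterior((0.02, 0.5, 1.0), q_obs, y_obs, toy_model))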

  9. Simulation of High Velocity Impact on Composite Structures - Model Implementation and Validation

    Science.gov (United States)

    Schueler, Dominik; Toso-Pentecôte, Nathalie; Voggenreiter, Heinz

    2016-08-01

    High velocity impact on composite aircraft structures leads to the formation of flexural waves that can cause severe damage to the structure. Damage and failure can occur within the plies and/or in the resin rich interface layers between adjacent plies. In the present paper a modelling methodology is documented that captures intra- and inter-laminar damage and their interrelations by use of shell element layers representing sub-laminates that are connected with cohesive interface layers to simulate delamination. This approach allows the simulation of large structures while still capturing the governing damage mechanisms and their interactions. The paper describes numerical algorithms for the implementation of a Ladevèze continuum damage model for the ply and methods to derive input parameters for the cohesive zone model. By comparison with experimental results from gas gun impact tests the potential and limitations of the modelling approach are discussed.

  10. Model-assisted measurements of suspension-feeding flow velocities.

    Science.gov (United States)

    Du Clos, Kevin T; Jones, Ian T; Carrier, Tyler J; Brady, Damian C; Jumars, Peter A

    2017-06-01

    Benthic marine suspension feeders provide an important link between benthic and pelagic ecosystems. The strength of this link is determined by suspension-feeding rates. Many studies have measured suspension-feeding rates using indirect clearance-rate methods, which are based on the depletion of suspended particles. Direct methods that measure the flow of water itself are less common, but they can be more broadly applied because, unlike indirect methods, direct methods are not affected by properties of the cleared particles. We present pumping rates for three species of suspension feeders, the clams Mya arenaria and Mercenaria mercenaria and the tunicate Ciona intestinalis, measured using a direct method based on particle image velocimetry (PIV). Past uses of PIV in suspension-feeding studies have been limited by strong laser reflections that interfere with velocity measurements proximate to the siphon. We used a new approach based on fitting PIV-based velocity profile measurements to theoretical profiles from computational fluid dynamic (CFD) models, which allowed us to calculate inhalant siphon Reynolds numbers (Re). We used these inhalant Re and measurements of siphon diameters to calculate exhalant Re, pumping rates, and mean inlet and outlet velocities. For the three species studied, inhalant Re ranged from 8 to 520, and exhalant Re ranged from 15 to 1073. Volumetric pumping rates ranged from 1.7 to 7.4 l h-1 for M. arenaria, 0.3 to 3.6 l h-1 for M. mercenaria and 0.07 to 0.97 l h-1 for C. intestinalis. We also used CFD models based on measured pumping rates to calculate capture regions, which reveal the spatial extent of pumped water. Combining PIV data with CFD models may be a valuable approach for future suspension-feeding studies. © 2017. Published by The Company of Biologists Ltd.
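
    The conversion from an inhalant Reynolds number and siphon diameter to a volumetric pumping rate follows directly from the definitions Re = U D / nu and Q = U pi D^2 / 4; the sketch below is generic, uses an assumed kinematic viscosity for seawater, and does not take values from the paper.

      import math

      def pumping_rate_l_per_h(reynolds, siphon_diameter_m, nu=1.05e-6):
          """Volumetric pumping rate from inhalant Re and siphon diameter.

          Re = U * D / nu  =>  U = Re * nu / D;   Q = U * pi * D**2 / 4.
          nu is an assumed kinematic viscosity of seawater [m^2/s].
          """
          mean_velocity = reynolds * nu / siphon_diameter_m            # [m/s]
          q_m3_per_s = mean_velocity * math.pi * siphon_diameter_m**2 / 4.0
          return q_m3_per_s * 1000.0 * 3600.0                          # [L/h]

      print(round(pumping_rate_l_per_h(reynolds=200, siphon_diameter_m=0.008), 2))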

  11. Mean velocity and moments of turbulent velocity fluctuations in the wake of a model ship propulsor

    Energy Technology Data Exchange (ETDEWEB)

    Pego, J.P. [Universitaet Erlangen-Nuernberg, LSTM, Erlangen, Lehrstuhl fuer Stroemungsmechanik, Erlangen (Germany); Faculdade de Engenharia da Universidade do Porto, Porto (Portugal); Lienhart, H.; Durst, F. [Universitaet Erlangen-Nuernberg, LSTM, Erlangen, Lehrstuhl fuer Stroemungsmechanik, Erlangen (Germany)

    2007-08-15

    Pod drives are modern outboard ship propulsion systems with a motor encapsulated in a watertight pod, whose shaft is connected directly to one or two propellers. The whole unit hangs from the stern of the ship and rotates azimuthally, thus providing thrust and steering without the need of a rudder. Force/momentum and phase-resolved laser Doppler anemometry (LDA) measurements were performed for in line co-rotating and contra-rotating propellers pod drive models. The measurements permitted to characterize these ship propulsion systems in terms of their hydrodynamic characteristics. The torque delivered to the propellers and the thrust of the system were measured for different operation conditions of the propellers. These measurements lead to the hydrodynamic optimization of the ship propulsion system. The parameters under focus revealed the influence of distance between propeller planes, propeller frequency of rotation ratio and type of propellers (co- or contra-rotating) on the overall efficiency of the system. Two of the ship propulsion systems under consideration were chosen, based on their hydrodynamic characteristics, for a detailed study of the swirling wake flow by means of laser Doppler anemometry. A two-component laser Doppler system was employed for the velocity measurements. A light barrier mounted on the axle of the rear propeller motor supplied a TTL signal to mark the beginning of each period, thus providing angle information for the LDA measurements. Measurements were conducted for four axial positions in the slipstream of the pod drive models. The results show that the wake of contra-rotating propeller is more homogeneous than when they co-rotate. In agreement with the results of the force/momentum measurements and with hypotheses put forward in the literature (see e.g. Poehls in Entwurfsgrundlagen fuer Schraubenpropeller, 1984; Schneekluth in Hydromechanik zum Schiffsentwurf, 1988; Breslin and Andersen in Hydrodynamics of ship propellers, 1996

  12. Mean velocity and moments of turbulent velocity fluctuations in the wake of a model ship propulsor

    Science.gov (United States)

    Pêgo, J. P.; Lienhart, H.; Durst, F.

    2007-08-01

    Pod drives are modern outboard ship propulsion systems with a motor encapsulated in a watertight pod, whose shaft is connected directly to one or two propellers. The whole unit hangs from the stern of the ship and rotates azimuthally, thus providing thrust and steering without the need of a rudder. Force/momentum and phase-resolved laser Doppler anemometry (LDA) measurements were performed for in line co-rotating and contra-rotating propellers pod drive models. The measurements permitted to characterize these ship propulsion systems in terms of their hydrodynamic characteristics. The torque delivered to the propellers and the thrust of the system were measured for different operation conditions of the propellers. These measurements lead to the hydrodynamic optimization of the ship propulsion system. The parameters under focus revealed the influence of distance between propeller planes, propeller frequency of rotation ratio and type of propellers (co- or contra-rotating) on the overall efficiency of the system. Two of the ship propulsion systems under consideration were chosen, based on their hydrodynamic characteristics, for a detailed study of the swirling wake flow by means of laser Doppler anemometry. A two-component laser Doppler system was employed for the velocity measurements. A light barrier mounted on the axle of the rear propeller motor supplied a TTL signal to mark the beginning of each period, thus providing angle information for the LDA measurements. Measurements were conducted for four axial positions in the slipstream of the pod drive models. The results show that the wake of contra-rotating propeller is more homogeneous than when they co-rotate. In agreement with the results of the force/momentum measurements and with hypotheses put forward in the literature (see e.g. Poehls in Entwurfsgrundlagen für Schraubenpropeller, 1984; Schneekluth in Hydromechanik zum Schiffsentwurf, 1988; Breslin and Andersen in Hydrodynamics of ship propellers, 1996

  13. A new interpretation and validation of variance based importance measures for models with correlated inputs

    Science.gov (United States)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the connotations of the contributions by the correlated input to the variance of output, and they can be viewed as the complement and correction of the interpretation about the contributions by the correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and their origins of both contributions of correlated input can be clarified without any ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by the input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part by interaction between the input and others and the independent part by the input itself. Numerical examples are employed and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and the clarification of the correlated input contribution to model output by the analytical derivation is very important for expanding the theory and solutions of uncorrelated input to those of the correlated one.

  14. Remote Sensing Data in Wind Velocity Field Modelling: a Case Study from the Sudetes (SW Poland)

    Science.gov (United States)

    Jancewicz, Kacper

    2014-06-01

    The phenomenon of wind-field deformation above complex (mountainous) terrain is a popular subject of research related to numerical modelling using GIS techniques. This type of modelling requires, as input data, information on terrain roughness and a digital terrain/elevation model. This information may be provided by remote sensing data. Consequently, its accuracy and spatial resolution may affect the results of modelling. This paper represents an attempt to conduct wind-field modelling in the area of the Śnieżnik Massif (Eastern Sudetes). The modelling process was conducted in WindStation 2.0.10 software (using the computational fluid dynamics solver Canyon). Two different elevation models were used: the Global Land Survey Digital Elevation Model (GLS DEM) and Digital Terrain Elevation Data (DTED) Level 2. The terrain roughness raster was generated on the basis of Corine Land Cover 2006 (CLC 2006) data. The output data were post-processed in ArcInfo 9.3.1 software to achieve a high-quality cartographic presentation. Experimental modelling was conducted for situations from 26 November 2011, 25 May 2012, and 26 May 2012, based on a limited number of field measurements and using parameters of the atmospheric boundary layer derived from the aerological surveys provided by the closest meteorological stations. The model was run at 100-m and 250-m spatial resolutions. In order to verify the model's performance, leave-one-out cross-validation was used. The calculated indices allowed for a comparison with results of former studies pertaining to WindStation's performance. The experiment demonstrated very subtle differences between results obtained using DTED and GLS DEM elevation data. Additionally, CLC 2006 roughness data provided more noticeable improvements in the model's performance, but only in the resolution corresponding to the original roughness data. The best input data configuration resulted in the following mean values of error measure: root mean squared error of velocity

  15. Stabilization and Riesz basis property for an overhead crane model with feedback in velocity and rotating velocity

    Directory of Open Access Journals (Sweden)

    Toure K. Augustin

    2014-06-01

    Full Text Available This paper studies a variant of an overhead crane model problem, with a control force in velocity and rotating velocity applied on the platform. Under certain conditions we obtain the well-posedness and the strong stabilization of the closed-loop system. We then analyze the spectrum of the system. Using a method due to Shkalikov, we prove the existence of a sequence of generalized eigenvectors of the system which forms a Riesz basis for the state energy Hilbert space.

  16. Specification and Aggregation Errors in Environmentally Extended Input-Output Models

    NARCIS (Netherlands)

    Bouwmeester, Maaike C.; Oosterhaven, Jan

    This article considers the specification and aggregation errors that arise from estimating embodied emissions and embodied water use with environmentally extended national input-output (IO) models, instead of with an environmentally extended international IO model. Model specification errors result

  17. Traveling waves in an optimal velocity model of freeway traffic

    Science.gov (United States)

    Berg, Peter; Woods, Andrew

    2001-03-01

    Car-following models provide both a tool to describe traffic flow and algorithms for autonomous cruise control systems. Recently developed optimal velocity models contain a relaxation term that assigns a desirable speed to each headway and a response time over which drivers adjust to optimal velocity conditions. These models predict traffic breakdown phenomena analogous to real traffic instabilities. In order to deepen our understanding of these models, in this paper, we examine the transition from a linearly stable stream of cars of one headway into a linearly stable stream of a second headway. Numerical results of the governing equations identify a range of transition phenomena, including monotonic and oscillating traveling waves and a time-dependent dispersive adjustment wave. However, for certain conditions, we find that the adjustment takes the form of a nonlinear traveling wave from the upstream headway to a third, intermediate headway, followed by either another traveling wave or a dispersive wave further downstream matching the downstream headway. This intermediate value of the headway is selected such that the nonlinear traveling wave is the fastest stable traveling wave which is observed to develop in the numerical calculations. The development of these nonlinear waves, connecting linearly stable flows of two different headways, is somewhat reminiscent of stop-start waves in congested flow on freeways. The different types of adjustments are classified in a phase diagram depending on the upstream and downstream headway and the response time of the model. The results have profound consequences for autonomous cruise control systems. For an autocade of both identical and different vehicles, the control system itself may trigger formations of nonlinear, steep wave transitions. Further information is available [Y. Sugiyama, Traffic and Granular Flow (World Scientific, Singapore, 1995), p. 137].
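
    The class of car-following models discussed here is typically written in the Bando (optimal velocity) form; the sketch below is the generic model, not necessarily the exact variant analyzed in the paper:

    \[
      \ddot{x}_n(t) = \frac{1}{\tau}\Big[\, V\big(\Delta x_n(t)\big) - \dot{x}_n(t) \,\Big], \qquad \Delta x_n = x_{n+1} - x_n ,
    \]

    where $V(\cdot)$ is the optimal velocity function assigning a desired speed to each headway $\Delta x_n$ and $\tau$ is the response time; a uniform-headway solution is linearly stable when $V'(\Delta x) < 1/(2\tau)$, which is the threshold behind the traffic breakdown phenomena mentioned above.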

  18. RadVel: The Radial Velocity Modeling Toolkit

    Science.gov (United States)

    Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan

    2018-04-01

    RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.
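
    The underlying single-planet model that such a package fits is the standard Keplerian radial velocity curve, written here in the conventional parameterization (not necessarily RadVel's internal fitting basis):

    \[
      v_r(t) = \gamma + K\big[\cos\big(\nu(t) + \omega\big) + e \cos\omega\big] ,
    \]

    where $K$ is the velocity semi-amplitude, $e$ the eccentricity, $\omega$ the argument of periastron, $\gamma$ a constant offset, and the true anomaly $\nu(t)$ is obtained from the orbital period and time of periastron by solving Kepler's equation.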

  19. A detonation model of high/low velocity detonation

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Shaoming; Li, Chenfang; Ma, Yunhua; Cui, Junmin [Xian Modern Chemistry Research Institute, Xian, 710065 (China)

    2007-02-15

    A new detonation model that can simulate both high and low velocity detonations is established using the least action principle. The least action principle is valid for mechanics and thermodynamics associated with a detonation process. Therefore, the least action principle is valid in detonation science. In this model, thermodynamic equilibrium state is taken as the known final point of the detonation process. Thermodynamic potentials are analogous to mechanical ones, and the Lagrangian function in the detonation process is L=T-V. Under certain assumptions, the variation calculus of the Lagrangian function gives two solutions: the first one is a constant temperature solution, and the second one is the solution of an ordinary differential equation. A special solution of the ordinary differential equation is given. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  20. Modelling low velocity impact induced damage in composite laminates

    Science.gov (United States)

    Shi, Yu; Soutis, Constantinos

    2017-12-01

    The paper presents recent progress on modelling low velocity impact induced damage in fibre reinforced composite laminates. It is important to understand the mechanisms of barely visible impact damage (BVID) and how it affects structural performance. To reduce labour intensive testing, the development of finite element (FE) techniques for simulating impact damage becomes essential and recent effort by the composites research community is reviewed in this work. The FE predicted damage initiation and propagation can be validated by Non Destructive Techniques (NDT) that gives confidence to the developed numerical damage models. A reliable damage simulation can assist the design process to optimise laminate configurations, reduce weight and improve performance of components and structures used in aircraft construction.

  1. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    NARCIS (Netherlands)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients

  2. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.

  3. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    Science.gov (United States)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.

  4. Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model

    Science.gov (United States)

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2014-01-01

    This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…

  5. An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE

    Directory of Open Access Journals (Sweden)

    P.-L. Blelly

    2005-02-01

    Full Text Available The TRANSCAR ionospheric model was extended to account for the convection of the magnetic field lines in the auroral and polar ionosphere. A mixed Eulerian-Lagrangian 13-moment approach was used to describe the dynamics of an ionospheric plasma tube. In the present study, one focuses on large scale transports in the polar ionosphere. The model was used to simulate a 35-h period of EISCAT-UHF observations on 16-17 February 1993. The first day was magnetically quiet, and characterized by elevated electron concentrations: the diurnal F2 layer reached as much as 10^12 m^-3, which is unusual for a winter and moderate solar activity (F10.7=130) period. An intense geomagnetic event occurred on the second day, seen in the data as a strong intensification of the ionosphere convection velocities in the early afternoon (with the northward electric field reaching 150 mV m^-1 and corresponding frictional heating of the ions up to 2500 K). The simulation used time-dependent AMIE outputs to infer flux-tube transports in the polar region, and to provide magnetospheric particle and energy inputs to the ionosphere. The overall very good agreement, obtained between the model and the observations, demonstrates the high ability of the extended TRANSCAR model for quantitative modelling of the high-latitude ionosphere; however, some differences are found which are attributed to the precipitation of electrons with very low energy. All these results are finally discussed in the frame of modelling the auroral ionosphere with space weather applications in mind.

  6. An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE

    Directory of Open Access Journals (Sweden)

    P.-L. Blelly

    2005-02-01

    Full Text Available The TRANSCAR ionospheric model was extended to account for the convection of the magnetic field lines in the auroral and polar ionosphere. A mixed Eulerian-Lagrangian 13-moment approach was used to describe the dynamics of an ionospheric plasma tube. In the present study, one focuses on large scale transports in the polar ionosphere. The model was used to simulate a 35-h period of EISCAT-UHF observations on 16-17 February 1993. The first day was magnetically quiet, and characterized by elevated electron concentrations: the diurnal F2 layer reached as much as 10^12 m^-3, which is unusual for a winter and moderate solar activity (F10.7=130) period. An intense geomagnetic event occurred on the second day, seen in the data as a strong intensification of the ionosphere convection velocities in the early afternoon (with the northward electric field reaching 150 mV m^-1 and corresponding frictional heating of the ions up to 2500 K). The simulation used time-dependent AMIE outputs to infer flux-tube transports in the polar region, and to provide magnetospheric particle and energy inputs to the ionosphere. The overall very good agreement, obtained between the model and the observations, demonstrates the high ability of the extended TRANSCAR model for quantitative modelling of the high-latitude ionosphere; however, some differences are found which are attributed to the precipitation of electrons with very low energy. All these results are finally discussed in the frame of modelling the auroral ionosphere with space weather applications in mind.

  7. Validating Material Modelling of OFHC Copper Using Dynamic Tensile Extrusion (DTE) Test at Different Impact Velocity

    Science.gov (United States)

    Bonora, Nicola; Testa, Gabriel; Ruggiero, Andrew; Iannitti, Gianluca; Hörnqvist, Magnus; Mortazavi, Nooshin

    2015-06-01

    In the Dynamic Tensile Extrusion (DTE) test, the material is subjected to very large strain, high strain rate and elevated temperature. Numerical simulation, validated by comparison with measurements obtained on soft-recovered extruded fragments, can be used to probe material response under such extreme conditions and to assess constitutive models. In this work, the results of a parametric investigation on the simulation of the DTE test of annealed OFHC copper - at impact velocities ranging from 350 to 420 m/s - using phenomenological and physically based models (Johnson-Cook, Zerilli-Armstrong and Rusinek-Klepaczko) are presented. Preliminary simulation of microstructure evolution was performed using the crystal plasticity package CPFEM, providing, as input, the strain history obtained with FEM at selected locations along the extruded fragments. Results were compared with EBSD investigations.

  8. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields sharply declined for modeling grid sizes larger than 10×10 m². A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
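
    As a rough, generic illustration of the kind of analysis described above (not code from the cited study), the sketch below estimates an empirical semivariogram and an autocorrelation-based characteristic length for a 1-D transect of input data; the synthetic field, the 5 m spacing and the lag range are invented for the example.

      import numpy as np

      def empirical_semivariogram(z, dx, max_lag):
          """Estimate gamma(h) = 0.5 * E[(z(x+h) - z(x))^2] for lags h = dx .. max_lag."""
          lags, gamma = [], []
          for k in range(1, int(max_lag / dx) + 1):
              d = z[k:] - z[:-k]
              lags.append(k * dx)
              gamma.append(0.5 * np.mean(d ** 2))
          return np.array(lags), np.array(gamma)

      def characteristic_length(z, dx):
          """Integral scale: dx times the sum of the autocorrelation up to its first zero crossing."""
          z = z - z.mean()
          acf = np.correlate(z, z, mode="full")[len(z) - 1:] / (np.var(z) * len(z))
          first_zero = np.argmax(acf <= 0) if np.any(acf <= 0) else len(acf)
          return dx * np.sum(acf[:first_zero])

      # Hypothetical 1-D transect of a remotely sensed input (e.g. surface temperature), 5 m spacing
      rng = np.random.default_rng(0)
      field = np.cumsum(rng.normal(size=400)) * 0.1
      lags, gamma = empirical_semivariogram(field, dx=5.0, max_lag=100.0)
      print("characteristic length ~", round(characteristic_length(field, dx=5.0), 1), "m")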

  10. Influence of input matrix representation on topic modelling performance

    CSIR Research Space (South Africa)

    De Waal, A

    2010-11-01

    Full Text Available Topic models explain a collection of documents with a small set of distributions over terms. These distributions over terms define the topics. Topic models ignore the structure of documents and use a bag-of-words approach which relies solely...

  11. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations. This has resulted in improvements in the evaluation of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  12. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
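
    As a purely illustrative sketch of the Pareto-frontier idea described above (not the authors' code or data), the snippet below flags input sets that are non-dominated across several calibration-target goodness-of-fit values; lower GOF is assumed to mean a better fit, and the array contents are invented.

      import numpy as np

      def pareto_frontier(gof):
          """gof: (n_input_sets, n_targets) goodness-of-fit values, lower = better fit.
          Returns a boolean mask of input sets not dominated by any other set."""
          n = gof.shape[0]
          on_frontier = np.ones(n, dtype=bool)
          for i in range(n):
              others = np.delete(gof, i, axis=0)
              # i is dominated if some other set fits every target at least as well
              # and at least one target strictly better
              dominated = np.any(np.all(others <= gof[i], axis=1) &
                                 np.any(others < gof[i], axis=1))
              on_frontier[i] = not dominated
          return on_frontier

      # Hypothetical GOF values for 5 candidate input sets against 3 calibration targets
      gof = np.array([[1.0, 2.0, 3.0],
                      [0.8, 2.5, 3.1],
                      [1.2, 1.9, 2.8],
                      [2.0, 3.0, 4.0],   # dominated by the first set
                      [0.9, 2.1, 2.9]])
      print(pareto_frontier(gof))   # -> [ True  True  True False  True]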

  13. Sensitivity analysis of complex models: Coping with dynamic and static inputs

    International Nuclear Information System (INIS)

    Anstett-Collin, F.; Goffart, J.; Mara, T.; Denis-Vidal, L.

    2015-01-01

    In this paper, we address the issue of conducting a sensitivity analysis of complex models with both static and dynamic uncertain inputs. While several approaches have been proposed to compute the sensitivity indices of the static inputs (i.e. parameters), those of the dynamic inputs (i.e. stochastic fields) have rarely been addressed. For this purpose, we first treat each dynamic input as a Gaussian process. Then, the truncated Karhunen–Loève expansion of each dynamic input is performed. Such an expansion allows independent Gaussian processes to be generated from a finite number of independent random variables. Given that a dynamic input is represented by a finite number of random variables, its variance-based sensitivity index is defined by the sensitivity index of this group of variables. In addition, an efficient sampling-based strategy is described to estimate the first-order indices of all the input factors using only two input samples. The approach is applied to a building energy model, in order to assess the impact of the uncertainties of the material properties (static inputs) and the weather data (dynamic inputs) on the energy performance of a real low energy consumption house. - Highlights: • Sensitivity analysis of models with uncertain static and dynamic inputs is performed. • Karhunen–Loève (KL) decomposition of the spatio/temporal inputs is performed. • The influence of the dynamic inputs is studied through the modes of the KL expansion. • The proposed approach is applied to a building energy model. • Impact of weather data and material properties on performance of real house is given
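
    The following toy sketch (not taken from the paper) illustrates the truncated Karhunen–Loève step described above for a Gaussian process on a time grid: eigendecomposition of a covariance matrix and reconstruction of a realization from a finite number of independent standard normal variables. The squared-exponential kernel, grid and truncation level are arbitrary choices for the example.

      import numpy as np

      # Time grid and a squared-exponential covariance kernel (illustrative choices)
      t = np.linspace(0.0, 1.0, 200)
      ell, sigma = 0.1, 1.0
      C = sigma**2 * np.exp(-0.5 * (t[:, None] - t[None, :])**2 / ell**2)

      # Discrete Karhunen-Loeve expansion = eigendecomposition of the covariance matrix
      eigval, eigvec = np.linalg.eigh(C)
      order = np.argsort(eigval)[::-1]
      eigval, eigvec = eigval[order], eigvec[:, order]

      # Truncate: keep enough modes to capture ~99% of the variance
      m = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.99)) + 1

      # One realization of the dynamic input from m independent N(0,1) variables;
      # in a variance-based sensitivity analysis these m variables form the group
      # whose joint sensitivity index is attributed to the dynamic input
      rng = np.random.default_rng(1)
      xi = rng.standard_normal(m)
      sample = eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)
      print(f"kept {m} KL modes out of {len(t)}; sample std = {sample.std():.2f}")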

  14. Latitudinal and seasonal variability of the micrometeor input function: A study using model predictions and observations from Arecibo and PFISR

    Science.gov (United States)

    Fentzke, J. T.; Janches, D.; Sparks, J. J.

    2009-05-01

    In this work, we use a semi-empirical model of the micrometeor input function (MIF) together with meteor head-echo observations obtained with two high power and large aperture (HPLA) radars, the 430 MHz Arecibo Observatory (AO) radar in Puerto Rico (18°N, 67°W) and the 450 MHz Poker flat incoherent scatter radar (PFISR) in Alaska (65°N, 147°W), to study the seasonal and geographical dependence of the meteoric flux in the upper atmosphere. The model, recently developed by Janches et al. [2006a. Modeling the global micrometeor input function in the upper atmosphere observed by high power and large aperture radars. Journal of Geophysical Research 111] and Fentzke and Janches [2008. A semi-empirical model of the contribution from sporadic meteoroid sources on the meteor input function observed at arecibo. Journal of Geophysical Research (Space Physics) 113 (A03304)], includes an initial mass flux that is provided by the six known meteor sources (i.e. orbital families of dust) as well as detailed modeling of meteoroid atmospheric entry and ablation physics. In addition, we use a simple ionization model to treat radar sensitivity issues by defining minimum electron volume density production thresholds required in the meteor head-echo plasma for detection. This simplified approach works well because we use observations from two radars with similar frequencies, but different sensitivities and locations. This methodology allows us to explore the initial input of particles and how it manifests in different parts of the MLT as observed by these instruments without the need to invoke more sophisticated plasma models, which are under current development. The comparisons between model predictions and radar observations show excellent agreement between diurnal, seasonal, and latitudinal variability of the detected meteor rate and radial velocity distributions, allowing us to understand how individual meteoroid populations contribute to the overall flux at a particular

  15. High Flux Isotope Reactor system RELAP5 input model

    International Nuclear Information System (INIS)

    Morris, D.G.; Wendel, M.W.

    1993-01-01

    A thermal-hydraulic computational model of the High Flux Isotope Reactor (HFIR) has been developed using the RELAP5 program. The purpose of the model is to provide a state-of-the art thermal-hydraulic simulation tool for analyzing selected hypothetical accident scenarios for a revised HFIR Safety Analysis Report (SAR). The model includes (1) a detailed representation of the reactor core and other vessel components, (2) three heat exchanger/pump cells, (3) pressurizing pumps and letdown valves, and (4) secondary coolant system (with less detail than the primary system). Data from HFIR operation, component tests, tests in facility mockups and the HFIR, HFIR specific experiments, and other pertinent experiments performed independent of HFIR were used to construct the model and validate it to the extent permitted by the data. The detailed version of the model has been used to simulate loss-of-coolant accidents (LOCAs), while the abbreviated version has been developed for the operational transients that allow use of a less detailed nodalization. Analysis of station blackout with core long-term decay heat removal via natural convection has been performed using the core and vessel portions of the detailed model

  16. Determining input values for a simple parametric model to estimate ...

    African Journals Online (AJOL)

    Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...

  17. Reissner-Mindlin plate model with uncertain input data

    Czech Academy of Sciences Publication Activity Database

    Hlaváček, Ivan; Chleboun, J.

    2014-01-01

    Vol. 17, Jun (2014), pp. 71-88 ISSN 1468-1218 Institutional support: RVO:67985840 Keywords: Reissner-Mindlin model * orthotropic plate Subject RIV: BA - General Mathematics Impact factor: 2.519, year: 2014 http://www.sciencedirect.com/science/article/pii/S1468121813001077

  18. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima; Laleg-Kirati, Taous-Meriem

    2017-01-01

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order

  19. Shallow Crustal Structure in the Northern Salton Trough, California: Insights from a Detailed 3-D Velocity Model

    Science.gov (United States)

    Ajala, R.; Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2017-12-01

    The Coachella Valley is the northern extent of the Gulf of California-Salton Trough. It contains the southernmost segment of the San Andreas Fault (SAF) for which a magnitude 7.8 earthquake rupture was modeled to help produce earthquake planning scenarios. However, discrepancies in ground motion and travel-time estimates from the current Southern California Earthquake Center (SCEC) velocity model of the Salton Trough highlight inaccuracies in its shallow velocity structure. An improved 3-D velocity model that better defines the shallow basin structure and enables the more accurate location of earthquakes and identification of faults is therefore essential for seismic hazard studies in this area. We used recordings of 126 explosive shots from the 2011 Salton Seismic Imaging Project (SSIP) to SSIP receivers and Southern California Seismic Network (SCSN) stations. A set of 48,105 P-wave travel time picks constituted the highest-quality input to a 3-D tomographic velocity inversion. To improve the ray coverage, we added network-determined first arrivals at SCSN stations from 39,998 recently relocated local earthquakes, selected to a maximum focal depth of 10 km, to develop a detailed 3-D P-wave velocity model for the Coachella Valley with 1-km grid spacing. Our velocity model shows good resolution ( 50 rays/cubic km) down to a minimum depth of 7 km. Depth slices from the velocity model reveal several interesting features. At shallow depths ( 3 km), we observe an elongated trough of low velocity, attributed to sediments, located subparallel to and a few km SW of the SAF, and a general velocity structure that mimics the surface geology of the area. The persistence of the low-velocity sediments to 5-km depth just north of the Salton Sea suggests that the underlying basement surface, shallower to the NW, dips SE, consistent with interpretation from gravity studies (Langenheim et al., 2005). On the western side of the Coachella Valley, we detect depth-restricted regions of

  20. Shear wave crustal velocity model of the Western Bohemian Massif from Love wave phase velocity dispersion

    Czech Academy of Sciences Publication Activity Database

    Kolínský, Petr; Málek, Jiří; Brokešová, J.

    2011-01-01

    Vol. 15, No. 1 (2011), pp. 81-104 ISSN 1383-4649 R&D Projects: GA AV ČR IAA300460602; GA AV ČR IAA300460705; GA ČR(CZ) GA205/06/1780 Institutional research plan: CEZ:AV0Z30460519 Keywords: Love waves * phase velocity dispersion * frequency-time analysis Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.326, year: 2011 www.springerlink.com/content/w3149233l60111t1/

  1. Little Higgs model limits from LHC - Input for Snowmass 2013

    International Nuclear Information System (INIS)

    Reuter, Juergen; Tonini, Marco; Vries, Maikel de

    2013-07-01

    The status of the most prominent model implementations of the Little Higgs paradigm, the Littlest Higgs with and without discrete T parity as well as the Simplest Little Higgs, is reviewed. For this, we are taking into account a fit to 21 electroweak precision observables from LEP, SLC and Tevatron together with the full 25 fb^-1 of Higgs data reported from ATLAS and CMS at Moriond 2013. We also - focusing on the Littlest Higgs with T parity - include an outlook on corresponding direct searches at the 8 TeV LHC and their competitiveness with the EW and Higgs data regarding their exclusion potential. This contribution to the Snowmass procedure serves as a guideline as to which regions in the parameter space of Little Higgs models remain viable for the upcoming LHC runs and future experiments at the energy frontier. For this we propose two different benchmark scenarios for the Littlest Higgs with T parity, one with heavy mirror quarks, one with light ones.

  2. Scientific and technical advisory committee review of the nutrient inputs to the watershed model

    Science.gov (United States)

    The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...

  3. From LCC to LCA Using a Hybrid Input Output Model – A Maritime Case Study

    DEFF Research Database (Denmark)

    Kjær, Louise Laumann; Pagoropoulos, Aris; Hauschild, Michael Zwicky

    2015-01-01

    As companies try to embrace life cycle thinking, Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) have proven to be powerful tools. In this paper, an Environmental Input-Output model is used for analysis as it enables an LCA using the same economic input data as LCC. This approach helps...

  4. Wideband Small-Signal Input dq Admittance Modeling of Six-Pulse Diode Rectifiers

    DEFF Research Database (Denmark)

    Yue, Xiaolong; Wang, Xiongfei; Blaabjerg, Frede

    2018-01-01

    This paper studies the wideband small-signal input dq admittance of six-pulse diode rectifiers. Considering the frequency coupling introduced by ripple frequency harmonics of the d- and q-channel switching functions, the proposed model successfully predicts the small-signal input dq admittance of six-pulse diode rectifiers in high frequency regions that existing models fail to explain. Simulation and experimental results verify the accuracy of the proposed model.

  5. A Design Method of Robust Servo Internal Model Control with Control Input Saturation

    OpenAIRE

    山田, 功; 舩見, 洋祐

    2001-01-01

    In the present paper, we examine a design method for robust servo Internal Model Control with control input saturation. First, we clarify the condition under which Internal Model Control has robust servo characteristics for a system with control input saturation. From this consideration, we propose a new design method for Internal Model Control with robust servo characteristics. A numerical example is shown to illustrate the effectiveness of the proposed method.

  6. Tumor Growth Model with PK Input for Neuroblastoma Drug Development

    Science.gov (United States)

    2015-09-01

  7. Discrete Velocity Models for Polyatomic Molecules Without Nonphysical Collision Invariants

    Science.gov (United States)

    Bernhoff, Niclas

    2018-05-01

    An important aspect of constructing discrete velocity models (DVMs) for the Boltzmann equation is to obtain the right number of collision invariants. Unlike for the Boltzmann equation, for DVMs extra collision invariants, so-called spurious collision invariants, can appear in addition to the physical ones. A DVM with only physical collision invariants, and hence without spurious ones, is called normal. The construction of such normal DVMs has been studied extensively in the literature for single species, but also for binary mixtures and more recently for multicomponent mixtures. In this paper, we address ways of constructing normal DVMs for polyatomic molecules (here represented by each molecule having an internal energy, to account for non-translational energies, which can change during collisions), under the assumption that the set of allowed internal energies is finite. We present general algorithms for constructing such models, but we also give concrete examples of such constructions. This approach can also be combined with similar constructions for multicomponent mixtures to obtain multicomponent mixtures with polyatomic molecules, which is also briefly outlined. Chemical reactions can then also be added.

  8. Description of the CONTAIN input model for the Dodewaard nuclear power plant

    International Nuclear Information System (INIS)

    Velema, E.J.

    1992-02-01

    This report describes the ECN standard CONTAIN input model for the Dodewaard Nuclear Power Plant (NPP) that has been developed by ECN. This standard input model will serve as a basis for analyses of the phenomena which may occur inside the Dodewaard containment in the event of a postulated severe accident. Boundary conditions for specific containment analyses can easily be implemented in the input model. As a result, ECN will be able to respond quickly to requests for analyses from the utilities or the authorities. The report also includes brief descriptions of the Dodewaard NPP and the CONTAIN computer program. (author). 7 refs.; 5 figs.; 3 tabs

  10. Results of verification and investigation of wind velocity field forecast. Verification of wind velocity field forecast model

    International Nuclear Information System (INIS)

    Ogawa, Takeshi; Kayano, Mitsunaga; Kikuchi, Hideo; Abe, Takeo; Saga, Kyoji

    1995-01-01

    At the Environmental Radioactivity Research Institute, verification and investigation of the wind velocity field forecast model 'EXPRESS-1' have been carried out since 1991. In fiscal year 1994, as a general analysis, the validity of the weather observation data, the local features of the wind field, and the validity of the positions of the monitoring stations were investigated. The EXPRESS model, which had so far used a 500 m mesh, was refined to a 250 m mesh, the resulting improvement in forecast accuracy was examined, and a comparison with another wind velocity field forecast model, 'SPEEDI', was carried out. As a result, it was found that the correlation with other measurement points is high at some locations and low at others, and that the forecast accuracy of the wind velocity field improves when data from points with low correlation are excluded or when simplified observation stations are installed to bring their data in. The outline of the investigation, the general analysis of the weather observation data, and the improvements to the wind velocity field forecast model and its forecast accuracy are reported. (K.I.)

  11. Modeling and Control of a Dual-Input Isolated Full-Bridge Boost Converter

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2012-01-01

    In this paper, a steady-state model, a large-signal (LS) model and an ac small-signal (SS) model for a recently proposed dual-input transformer-isolated boost converter are derived respectively by the switching flow-graph (SFG) nonlinear modeling technique. Based upon the converter’s model...

  12. Mechanistic interpretation of glass reaction: Input to kinetic model development

    International Nuclear Information System (INIS)

    Bates, J.K.; Ebert, W.L.; Bradley, J.P.; Bourcier, W.L.

    1991-05-01

    Actinide-doped SRL 165 type glass was reacted in J-13 groundwater at 90°C for times up to 278 days. The reaction was characterized by both solution and solid analyses. The glass was seen to react nonstoichiometrically with preferred leaching of alkali metals and boron. High resolution electron microscopy revealed the formation of a complex layer structure which became separated from the underlying glass as the reaction progressed. The formation of the layer and its effect on continued glass reaction are discussed with respect to the current model for glass reaction used in the EQ3/6 computer simulation. It is concluded that the layer formed after 278 days is not protective and may eventually become fractured and generate particulates that may be transported by liquid water. 5 refs., 5 figs., 3 tabs

  13. Comparison of CME radial velocities from a flux rope model and an ice cream cone model

    Science.gov (United States)

    Kim, T.; Moon, Y.; Na, H.

    2011-12-01

    Coronal Mass Ejections (CMEs) on the Sun are the largest energy release process in the solar system and act as the primary driver of geomagnetic storms and other space weather phenomena on the Earth. So it is very important to infer their directions, velocities and three-dimensional structures. In this study, we choose two different models to infer the radial velocities of halo CMEs since 2008: (1) an ice cream cone model by Xue et al. (2005) using SOHO/LASCO data, and (2) a flux rope model by Thernisien et al. (2009) using STEREO/SECCHI data. In addition, we use another flux rope model in which the separation angle of the flux rope is zero, which is morphologically similar to the ice cream cone model. The comparison shows that the CME radial velocities from the different models correlate very well (R>0.9). We will extend this comparison to other partial CMEs observed by STEREO and SOHO.

  14. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    Science.gov (United States)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  15. Numerical modeling of probe velocity effects for electromagnetic NDE methods

    Science.gov (United States)

    Shin, Y. K.; Lord, W.

    The present discussion of magnetic flux leakage (MFL) inspection introduces the behavior of motion-induced currents. The results obtained indicate that velocity effects exist even at low probe speeds for magnetic materials, compelling the inclusion of velocity effects in MFL testing of oil pipelines, where the excitation level and pig speed are much higher than those used in the present work. Probe velocity effect studies should influence probe design, defining suitable probe speed limits and establishing training guidelines for defect-characterization schemes.

  16. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    Directory of Open Access Journals (Sweden)

    Priska Arindya Purnama

    2017-11-01

    Full Text Available The aim of this research is to model and forecast the rainfall in Batu City using a multi input transfer function model based on air temperature, humidity, wind speed and cloud. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs in a group called a noise series (Nt). The multi input transfer function model obtained is (b1,s1,r1)(b2,s2,r2)(b3,s3,r3)(b4,s4,r4)(pn,qn) = (0,0,0)(23,0,0)(1,2,0)(0,0,0)([5,8],2) and shows that air temperature on day t affects rainfall on day t, rainfall on day t is influenced by air humidity in the previous 23 days, rainfall on day t is affected by wind speed on the previous day, and rainfall on day t is affected by cloud on day t. The results of rainfall forecasting in Batu City with the multi input transfer function model can be considered accurate, because they produce relatively small RMSE values: 7.7921 for the training data and 4.2184 for the testing data. The multi input transfer function model is therefore suitable for forecasting rainfall in Batu City.
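
    As a loose illustration of fitting a model with exogenous inputs (this uses a generic ARMAX via statsmodels rather than the exact transfer function orders identified above, and the data are synthetic stand-ins for the Batu City observations):

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      # Synthetic stand-in data; the real study used observed weather and rainfall in Batu City
      rng = np.random.default_rng(0)
      n = 365
      X = pd.DataFrame({
          "air_temperature": 25 + 3 * rng.standard_normal(n),
          "humidity": 80 + 5 * rng.standard_normal(n),
          "wind_speed": 3 + rng.standard_normal(n),
          "cloud": rng.uniform(0, 8, n),
      })
      # Rainfall loosely driven by the inputs plus noise, just to have something to fit
      y = pd.Series(0.5 * X["humidity"] - 0.8 * X["air_temperature"]
                    + 0.3 * X["cloud"] + 2 * rng.standard_normal(n), name="rainfall")

      # ARMAX(2, 0, 1) with exogenous regressors; the order is an arbitrary example,
      # not the (b,s,r)(pn,qn) orders reported in the study above
      result = SARIMAX(y, exog=X, order=(2, 0, 1)).fit(disp=False)
      print(result.params)

      # Forecasting ahead requires values of the exogenous inputs over the forecast horizon;
      # the last observed week is reused here purely as a placeholder
      print(result.forecast(steps=7, exog=X.iloc[-7:]))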

  17. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    Science.gov (United States)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis that is bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  18. Velocity measurement of model vertical axis wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, D.A.; McWilliam, M. [Waterloo Univ., ON (Canada). Dept. of Mechanical Engineering

    2006-07-01

    An increasingly popular solution to future energy demand is wind energy. Wind turbine designs can be grouped according to their axis of rotation, either horizontal or vertical. Horizontal axis wind turbines have higher power output in a good wind regime than vertical axis turbines and are used in most commercial class designs. Vertical axis Savonius-based wind turbine designs are still widely used in some applications because of their simplistic design and low wind speed performance. There are many design variables that must be considered in order to optimize the power output in a given wind regime in a typical wind turbine design. Using particle image velocimetry, a study of the air flow around five different model vertical axis wind turbines was conducted in a closed loop wind tunnel. A standard Savonius design with two semi-circular blades overlapping, and two variations of this design, a deep blade and a shallow blade design were among the turbine models included in this study. It also evaluated alternate designs that attempt to increase the performance of the standard design by allowing compound blade curvature. Measurements were collected at a constant phase angle and also at random rotor orientations. It was found that evaluation of the flow patterns and measured velocities revealed consistent and stable flow patterns at any given phase angle. Large scale flow structures are evident in all designs such as vortices shed from blade surfaces. An important performance parameter was considered to be the ability of the flow to remain attached to the forward blade and redirect and reorient the flow to the following blade. 6 refs., 18 figs.

  19. Pandemic recovery analysis using the dynamic inoperability input-output model.

    Science.gov (United States)

    Santos, Joost R; Orsi, Mark J; Bond, Erik J

    2009-12-01

    Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model [1,2] and the dynamic inoperability input-output model (DIIM) [3]. These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
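
    For illustration only (not the authors' formulation or data): a static inoperability input-output calculation solves q = A* q + c*, i.e. q = (I - A*)^-1 c*, for sector inoperability q given a normalized interdependency matrix A* and a direct perturbation c*; a dynamic version lets q relax under a resilience matrix K. A minimal sketch with an invented three-sector example:

      import numpy as np

      # Invented 3-sector normalized interdependency matrix A* and a direct
      # perturbation c* (e.g. a workforce-driven loss of productive capacity)
      A_star = np.array([[0.0, 0.2, 0.1],
                         [0.3, 0.0, 0.2],
                         [0.1, 0.1, 0.0]])
      c_star = np.array([0.05, 0.10, 0.00])

      # Equilibrium inoperability: q = A* q + c*  =>  q = (I - A*)^-1 c*
      q = np.linalg.solve(np.eye(3) - A_star, c_star)
      print("equilibrium sector inoperability:", q)

      # Crude dynamic recovery sketch: q(t+1) = q(t) + K (A* q(t) + c*(t) - q(t)),
      # with the direct perturbation c* dropped to zero after the initial shock
      K = np.diag([0.5, 0.4, 0.6])   # invented sector resilience coefficients
      q_t = q.copy()
      for _ in range(50):
          q_t = q_t + K @ (A_star @ q_t - q_t)
      print("inoperability after 50 recovery steps:", q_t)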

  20. Study of the velocity distribution influence upon the pressure pulsations in draft tube model of hydro-turbine

    Science.gov (United States)

    Sonin, V.; Ustimenko, A.; Kuibin, P.; Litvinov, I.; Shtork, S.

    2016-11-01

    One of the mechanisms generating powerful pressure pulsations in the turbine circuit is a precessing vortex core, formed behind the runner at operating points with partial or forced loads, when the flow has significant residual swirl. To study periodic pressure pulsations behind the runner, the authors of this paper use experimental modeling and computational fluid dynamics. The influence of the velocity distribution at the runner outlet on pressure pulsations was studied based on an analysis of existing and possible velocity distributions in hydraulic turbines and the selection of distributions over an extended range. Preliminary numerical calculations showed that the velocity distribution can be modeled without reproducing the entire geometry of the circuit, using a combination of two blade cascades for the rotor and stator. Experimental verification of the numerical results was carried out in an air bench, using 3D printing to fabricate the blade cascades and the draft tube geometry of the hydraulic turbine. Measurements of the velocity field at the inlet of the draft tube cone and registration of the pressure pulsations due to the precessing vortex core allowed correlations to be built between the character of the velocity distribution and the amplitude-frequency characteristics of the pulsations.

  1. Development of an Input Model to MELCOR 1.8.5 for the Ringhals 3 PWR

    International Nuclear Information System (INIS)

    Nilsson, Lars

    2004-12-01

    An input file to the severe accident code MELCOR 1.8.5 has been developed for the Swedish pressurized water reactor Ringhals 3. The aim was to produce a file that can be used for calculations of various postulated severe accident scenarios, although the first application is specifically on cases involving large hydrogen production. The input file is rather detailed with individual modelling of all three cooling loops. The report describes the basis for the Ringhals 3 model and the input preparation step by step and is illustrated by nodalization schemes of the different plant systems. Present version of the report is restricted to the fundamental MELCOR input preparation, and therefore most of the figures of Ringhals 3 measurements and operating parameters are excluded here. These are given in another, complete version of the report, for limited distribution, which includes tables for pertinent data of all components. That version contains appendices with a complete listing of the input files as well as tables of data compiled from a RELAP5 file, that was a major basis for the MELCOR input for the cooling loops. The input was tested in steady-state calculations in order to simulate the initial conditions at current nominal operating conditions in Ringhals 3 for 2775 MW thermal power. The results of the steady-state calculations are presented in the report. Calculations with the MELCOR model will then be carried out of certain accident sequences for comparison with results from earlier MAAP4 calculations. That work will be reported separately

  2. Development of the RETRAN input model for Ulchin 3/4 visual system analyzer

    International Nuclear Information System (INIS)

    Lee, S. W.; Kim, K. D.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.; Hwang, M. K.

    2004-01-01

    As a part of the Long-Term Nuclear R and D program, KAERI has developed the so-called Visual System Analyzer (ViSA) based on best-estimate codes. The MARS and RETRAN codes are used as the best-estimate codes for ViSA. Between these two codes, the RETRAN code is used for realistic analysis of Non-LOCA transients and small-break loss-of-coolant accidents, of which break size is less than 3 inch diameter. So it is necessary to develop the RETRAN input model for Ulchin 3/4 plants (KSNP). In recognition of this, the RETRAN input model for Ulchin 3/4 plants has been developed. This report includes the input model requirements and the calculation note for the input data generation (see the Appendix). In order to confirm the validity of the input data, the calculations are performed for a steady state at 100 % power operation condition, inadvertent reactor trip and RCP trip. The results of the steady-state calculation agree well with the design data. The results of the other transient calculations seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the RETRAN input data can be used as a base input deck for the RETRAN transient analyzer for Ulchin 3/4. Moreover, it is found that Core Protection Calculator (CPC) module, which is modified by Korea Electric Power Research Institute (KEPRI), is well adapted to ViSA

  3. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    Science.gov (United States)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows to jointly estimate unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations using spatially-sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm to jointly estimate unknown FE model parameters and unknown input excitations.

  4. Modeling of heat transfer into a heat pipe for a localized heat input zone

    International Nuclear Information System (INIS)

    Rosenfeld, J.H.

    1987-01-01

    A general model is presented for heat transfer into a heat pipe using a localized heat input. Conduction in the wall of the heat pipe and boiling in the interior structure are treated simultaneously. The model is derived for circumferential heat transfer in a cylindrical heat pipe evaporator and for radial heat transfer in a circular disk with boiling from the interior surface. A comparison is made with data for a localized heat input zone. Agreement between the model and the data is good. This model can be used for design purposes if a boiling correlation is available. The model can be extended to provide improved predictions of heat pipe performance.

  5. An extended continuum model considering optimal velocity change with memory and numerical tests

    Science.gov (United States)

    Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng

    2018-01-01

    In this paper, an extended continuum model of traffic flow is proposed that takes optimal velocity changes with memory into consideration. The new model's stability condition and KdV-Burgers equation considering optimal velocity changes with memory are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study the extended continuum model, exploring how optimal velocity changes with memory affect velocity, density and energy consumption. Numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow efficiently. Furthermore, the numerical results demonstrate that the effect of optimal velocity changes with memory can avoid the disadvantage of historical information, increasing the stability of traffic flow on the road and reducing vehicles' energy consumption.
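
    The sketch below is a toy discrete car-following analogue of the optimal-velocity-with-memory idea (the paper above works with a continuum formulation); the optimal velocity function, parameter values and memory lag are invented for illustration.

      import numpy as np

      def optimal_velocity(h, v_max=2.0, h_c=2.0):
          """A commonly used optimal velocity function V(h) = v_max/2 * [tanh(h - h_c) + tanh(h_c)]."""
          return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

      # Cars on a ring road, tracked through headways h and velocities v
      n, dt, steps = 50, 0.1, 5000
      a, lam, tau = 1.0, 0.2, 10          # sensitivity, memory weight, memory lag (in steps)
      h = np.full(n, 4.0)
      h[0] -= 0.5; h[1] += 0.5            # perturb one headway, keeping the ring length fixed
      v = optimal_velocity(h)
      hist = [optimal_velocity(h)]

      for _ in range(steps):
          V_now = optimal_velocity(h)
          V_old = hist[-tau] if len(hist) >= tau else hist[0]
          dv = a * (V_now - v) + lam * (V_now - V_old)   # memory term: change of V over the lag
          v = v + dv * dt
          h = h + (np.roll(v, -1) - v) * dt              # dh_i/dt = v_{i+1} - v_i on the ring
          hist.append(V_now)

      print("velocity spread after relaxation:", float(v.max() - v.min()))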

  6. Input-output model for MACCS nuclear accident impacts estimation¹

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.

  7. Multivariate Self-Exciting Threshold Autoregressive Models with eXogenous Input

    OpenAIRE

    Addo, Peter Martey

    2014-01-01

    This study defines multivariate Self-Exciting Threshold Autoregressive with eXogenous input (MSETARX) models and presents an estimation procedure for the parameters. The conditions for stationarity of the nonlinear MSETARX models are provided. In particular, the efficiency of an adaptive parameter estimation algorithm and of the LSE (least squares estimate) algorithm for this class of models is then assessed via simulations.

  8. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    International Nuclear Information System (INIS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

    Mathematical models provide a description of neuron activity that can help to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models describing the input-output system, in order to achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps: first, the neuronal spiking events are considered as a Gamma stochastic process, and the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ clearly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
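
    A minimal leaky integrate-and-fire simulation is sketched below for illustration only; the input-current parameters, units and values are invented and are not the estimated acupuncture inputs from the study.

      import numpy as np

      def lif_spike_times(i_mean, i_sigma, t_max=1.0, dt=1e-4, tau_m=0.02,
                          v_rest=-70e-3, v_th=-50e-3, v_reset=-65e-3, r_m=1e7):
          """Leaky integrate-and-fire neuron driven by a noisy current.
          i_mean, i_sigma (A): the two temporal input parameters such a
          reconstruction would try to recover; the values used here are invented."""
          rng = np.random.default_rng(0)
          v, spikes = v_rest, []
          for step in range(int(t_max / dt)):
              i_in = i_mean + i_sigma * rng.standard_normal()
              v += dt * (-(v - v_rest) + r_m * i_in) / tau_m   # dv/dt = (-(v - v_rest) + R*I)/tau_m
              if v >= v_th:
                  spikes.append(step * dt)
                  v = v_reset
          return np.array(spikes)

      spikes = lif_spike_times(i_mean=2.2e-9, i_sigma=0.5e-9)
      isi = np.diff(spikes)
      print(f"{len(spikes)} spikes, mean inter-spike interval = {isi.mean():.4f} s")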

  10. Use of regional climate model simulations as an input for hydrological models for the Hindukush-Karakorum-Himalaya region

    NARCIS (Netherlands)

    Akhtar, M.; Ahmad, N.; Booij, Martijn J.

    2009-01-01

    The most important climatological inputs required for the calibration and validation of hydrological models are temperature and precipitation that can be derived from observational records or alternatively from regional climate models (RCMs). In this paper, meteorological station observations and

  11. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to the Markov chain Monte Carlo (MCMC) calibration methods with independent sampling with the exception that we generate samples beforehand and replace the candidate acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium. We employ a Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that both agree with intuition and improve the accuracy and decrease the uncertainty in experimental predictions. (author)
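
    A schematic sketch of that kind of sample-then-weight calibration is given below (it is not the authors' Hyades/BMARS setup; the emulator form, priors, data and error model are invented stand-ins).

      import numpy as np

      rng = np.random.default_rng(0)

      def emulator(theta, x):
          """Stand-in for an emulator of the expensive computer model (invented form)."""
          return theta[..., 0:1] * np.sin(x) + theta[..., 1:2] * x

      # Pretend experimental data, generated with 'true' inputs (1.5, 0.3) plus noise
      x_obs = np.linspace(0.0, 3.0, 20)
      y_obs = emulator(np.array([1.5, 0.3]), x_obs) + 0.1 * rng.standard_normal(20)
      sigma = 0.1

      # 1. Sample the uncertain-input space beforehand (here: independent uniform priors)
      samples = rng.uniform([0.0, -1.0], [3.0, 1.0], size=(5000, 2))

      # 2. Weight each sample by its Gaussian likelihood against the measured response
      resid = emulator(samples, x_obs) - y_obs            # shape (5000, 20)
      log_w = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
      w = np.exp(log_w - log_w.max())
      w /= w.sum()

      # 3. Posterior summaries, and weighted resampling of inputs for predictive runs
      post_mean = w @ samples
      resampled = samples[rng.choice(len(samples), size=1000, p=w)]
      print("posterior mean of the two inputs:", post_mean)
      print("resampled inputs shape:", resampled.shape)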

  12. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasting estimates is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected, and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of the input variable selection based on RF. The results of the case study show that by removing the uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model’s accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
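
    A minimal illustration of RF-based input selection with scikit-learn is given below (synthetic data; the candidate-feature names and the importance cutoff are arbitrary, and the study itself feeds the selected features into a kernel extreme learning machine rather than the RF regressor used here for scoring).

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)

      # Synthetic candidate inputs: six lagged copies of a wind-speed-like series
      # plus two irrelevant noise variables
      n = 2000
      series = np.sin(np.arange(n) / 20.0) + 0.3 * rng.standard_normal(n)
      lags = np.column_stack([np.roll(series, k) for k in range(1, 7)])
      noise_vars = rng.standard_normal((n, 2))
      X = np.hstack([lags, noise_vars])[10:]
      y = series[10:]

      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
      names = [f"lag_{k}" for k in range(1, 7)] + ["noise_1", "noise_2"]
      ranked = sorted(zip(names, rf.feature_importances_), key=lambda p: -p[1])
      selected = [name for name, imp in ranked if imp > 0.05]   # arbitrary cutoff for the example
      print(ranked)
      print("selected inputs:", selected)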

  13. Development of the MARS input model for Kori nuclear units 1 transient analyzer

    International Nuclear Information System (INIS)

    Hwang, M.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.

    2004-11-01

    KAERI has been developing the 'NSSS transient analyzer' based on best-estimate codes for Kori Nuclear Units 1 plants. The MARS and RETRAN codes have been used as the best-estimate codes for the NSSS transient analyzer. Among these codes, the MARS code is adopted for realistic analysis of small- and large-break loss-of-coolant accidents, of which break size is greater than 2 inch diameter. So it is necessary to develop the MARS input model for Kori Nuclear Units 1 plants. This report includes the input model (hydrodynamic component and heat structure models) requirements and the calculation note for the MARS input data generation for Kori Nuclear Units 1 plant analyzer (see the Appendix). In order to confirm the validity of the input data, we performed the calculations for a steady state at 100 % power operation condition and a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Kori Nuclear Units 1

  14. Sensitivity Analysis of Input Parameters for a Dynamic Food Chain Model DYNACON

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Lee, Geun Chang; Han, Moon Hee; Cho, Gyu Seong

    2000-01-01

    The sensitivity analysis of input parameters for the dynamic food chain model DYNACON was conducted as a function of deposition data for the long-lived radionuclides (137Cs, 90Sr). Also, the influence of input parameters on the short- and long-term contamination of selected foodstuffs (cereals, leafy vegetables, milk) was investigated. The input parameters were sampled using the LHS technique, and their sensitivity indices were represented as partial rank correlation coefficients (PRCC). The sensitivity index was strongly dependent on the contamination period as well as the deposition data. In the case of deposition during the growing stages of plants, the input parameters associated with contamination by foliar absorption were relatively important in long-term as well as short-term contamination. They were also important in short-term contamination in the case of deposition during the non-growing stages. In long-term contamination, the influence of input parameters associated with foliar absorption decreased, while the influence of input parameters associated with root uptake increased. These phenomena were more remarkable for deposition during non-growing stages than during growing stages, and for 90Sr deposition than for 137Cs deposition. In the case of deposition during the growing stages of pasture, the input parameters associated with the characteristics of cattle, such as the feed-milk transfer factor and the daily intake rate of cattle, were relatively important for the contamination of milk
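
    A small illustration, not DYNACON itself, of how LHS-sampled parameters and PRCC indices fit together; the PRCC is computed by rank-transforming inputs and output and correlating the residuals after regressing out the other inputs:

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X (LHS-sampled
    input parameters) with the output y, controlling for the remaining
    inputs: rank-transform everything, regress out the other inputs, and
    correlate the residuals."""
    Xr = np.apply_along_axis(rankdata, 0, np.asarray(X, dtype=float))
    yr = rankdata(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out
```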

  15. A comparative study of velocity increment generation between the rigid body and flexible models of MMET

    Energy Technology Data Exchange (ETDEWEB)

    Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my [School of Aerospace Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Pulau Pinang (Malaysia)

    2016-02-01

    The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This work presents a comparison of the velocity increments between the rigid body and flexible models of the MMET. The equations of motion of both models in the time domain are transformed into a function of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.

  16. Multi input single output model predictive control of non-linear bio-polymerization process

    Energy Technology Data Exchange (ETDEWEB)

    Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Univerisiti Sains Malaysia, Engineering Campus, Seri Ampangan,14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)

    2015-05-15

    This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly (ε-caprolactone) production. In this research, a state space model was used, in which the inputs to the model were the reactor temperature and reactor impeller speed, and the outputs were the molecular weight of the polymer (M_n) and the polymer polydispersity index. The state space model for MISO was created using the System Identification Toolbox of Matlab™. This state space model is used in the MISO MPC. Model predictive control (MPC) has been applied to predict the molecular weight of the biopolymer and consequently control it. The results show that the MPC is able to track the reference trajectory and gives optimal movement of the manipulated variable.

  17. A quantitative approach to modeling the information processing of NPP operators under input information overload

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task under input information overload. We first develop an information processing model with multiple stages that describes the information flow. Then the uncertainty of the information is quantified using Conant's model, an information-theoretic approach. We also investigate the applicability of this approach to quantifying the information reduction of operators under input information overload

  18. System Identification for Nonlinear FOPDT Model with Input-Dependent Dead-Time

    DEFF Research Database (Denmark)

    Sun, Zhen; Yang, Zhenyu

    2011-01-01

    An on-line iterative method of system identification for a kind of nonlinear FOPDT system is proposed in the paper. The considered nonlinear FOPDT model is an extension of the standard FOPDT model in the sense that its dead time depends on the input signal and the other parameters are time dependent....

  19. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  20. DIMITRI 1.0: Description and application of a dynamic input-output model

    NARCIS (Netherlands)

    Wilting HC; Blom WF; Thomas R; Idenburg AM; LAE

    2001-01-01

    DIMITRI, the Dynamic Input-Output Model to study the Impacts of Technology Related Innovations, was developed in the framework of the RIVM Environment and Economy project to answer questions about interrelationships between economy, technology and the environment. DIMITRI, a meso-economic model,

  1. Logistics flows and enterprise input-output models: aggregate and disaggregate analysis

    NARCIS (Netherlands)

    Albino, V.; Yazan, Devrim; Messeni Petruzzelli, A.; Okogbaa, O.G.

    2011-01-01

    In the present paper, we propose the use of enterprise input-output (EIO) models to describe and analyse the logistics flows considering spatial issues and related environmental effects associated with production and transportation processes. In particular, transportation is modelled as a specific

  2. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  3. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  4. Input modelling of ASSERT-PV V2R8M1 for RUFIC fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Suk, Ho Chun

    2001-02-01

    This report describes the input modelling for subchannel analysis of the CANFLEX-RU (RUFIC) fuel bundle, which has been developed as an advanced fuel bundle for the CANDU-6 reactor, using the ASSERT-PV V2R8M1 code. The execution file of the ASSERT-PV V2R8M1 code was recently transferred from AECL under the JRDC agreement between KAERI and AECL. ASSERT-PV V2R8M1, which is quite different from the COBRA-IV-i code, has been developed for thermalhydraulic analysis of the CANDU-6 fuel channel by the subchannel analysis method and updated so that the 43-element CANDU fuel geometry can be applied. Hence, the ASSERT code can be applied to the subchannel analysis of the RUFIC fuel bundle. The present report was prepared for the ASSERT input modelling of the RUFIC fuel bundle. Since the ASSERT results depend strongly on the user's input modelling, the calculation results may differ considerably among different users' input models. The objective of the present report is to provide a detailed description of the background information for the input data and thereby give credibility to the calculation results.

  6. Input modelling of ASSERT-PV V2R8M1 for RUFIC fuel bundle

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Suk, Ho Chun

    2001-02-01

    This report describes the input modelling for subchannel analysis of the CANFLEX-RU (RUFIC) fuel bundle, which has been developed as an advanced fuel bundle for the CANDU-6 reactor, using the ASSERT-PV V2R8M1 code. The execution file of the ASSERT-PV V2R8M1 code was recently transferred from AECL under the JRDC agreement between KAERI and AECL. ASSERT-PV V2R8M1, which is quite different from the COBRA-IV-i code, has been developed for thermalhydraulic analysis of the CANDU-6 fuel channel by the subchannel analysis method and updated so that the 43-element CANDU fuel geometry can be applied. Hence, the ASSERT code can be applied to the subchannel analysis of the RUFIC fuel bundle. The present report was prepared for the ASSERT input modelling of the RUFIC fuel bundle. Since the ASSERT results depend strongly on the user's input modelling, the calculation results may differ considerably among different users' input models. The objective of the present report is to provide a detailed description of the background information for the input data and thereby give credibility to the calculation results.

  7. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Science.gov (United States)

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayes modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling an individual volcano, which is especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in the open-source database FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models for each volcano dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome collapse flows and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the data set from each volcano as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we only have sparse data, a ubiquitous problem in volcanology.
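
    The partial-pooling idea can be illustrated with a simple empirical-Bayes analogue of the hierarchical linear model; this is a sketch of the shrinkage step only, not the authors' full Bayesian treatment, and the variable names are illustrative:

```python
import numpy as np

def pooled_slopes(slopes, std_errs):
    """Shrink per-volcano regression slopes (of, e.g., log runout on log
    volume) toward the across-volcano mean, weighting by their standard
    errors: an empirical-Bayes analogue of the two-level hierarchical
    linear model."""
    b = np.asarray(slopes, dtype=float)
    se2 = np.asarray(std_errs, dtype=float) ** 2
    tau2 = max(np.var(b, ddof=1) - se2.mean(), 1e-12)   # crude between-volcano variance estimate
    w = 1.0 / (se2 + tau2)
    mu = np.sum(w * b) / np.sum(w)                      # population-level mean slope
    shrunk = (tau2 * b + se2 * mu) / (tau2 + se2)       # partially pooled per-volcano slopes
    return mu, tau2, shrunk

# Volcanoes with noisy, sparse data (large std_errs) are pulled strongly toward
# the global mean; well-constrained volcanoes keep slopes close to their own fit.
```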

  8. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the code MELCOR 1.8.5 for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Comparison and check of some primary system data were made against an O3 input file to the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor at its current state with the operating power of 3300 MW(th). One aim of this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of Westinghouse Atom's new SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratories under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to STCP (Source Term Code Package) and has thus a long evolutionary history. The input described here is adapted to the latest version 1.8.5 available when the work began. It was released in the year 2000, but a new version 1.8.6 was distributed recently. Conversion to the new version is recommended. (During the writing of this report still another code version, MELCOR 2.0, has been announced to be released shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package developed by Oak Ridge National Laboratory especially for BWRs. This is in addition to the ordinary MELCOR COR package. Since problems arose when running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1

  9. Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.

    Science.gov (United States)

    Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K

    2014-11-26

    The ability to accurately develop subject-specific, input causation models, for blood glucose concentration (BGC) for large input sets can have a significant impact on tightening control for insulin dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input, modeling method for BGC with strong causation attributes that is stable and guards against overfitting to provide an effective modeling approach for feedforward control (FFC). This approach is a Wiener block-oriented methodology, which has unique attributes for meeting critical requirements for effective, long-term, FFC.

  10. A Model to Determine the Influence of Probability Density Functions (PDFs) of Input Quantities in Measurements

    Directory of Open Access Journals (Sweden)

    Jesús Caja

    2016-06-01

    Full Text Available A method for analysing the effect of different hypotheses about the type of the input quantity distributions of a measurement model is presented here so that the developed algorithms can be simplified. As an example, a model of indirect measurements with an optical coordinate measurement machine was employed to evaluate these different hypotheses. As a result of the different experiments, the assumption that the different variables of the model can be modelled as normal distributions is proved.
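
    A minimal Monte Carlo sketch of how different input-quantity PDF hypotheses can be propagated through a measurement model and compared; the toy model y = a·b and the distribution parameters are placeholders, not the coordinate-measurement model of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(model, samplers, n=200_000):
    """Monte Carlo propagation of assumed input-quantity PDFs through a
    measurement model: draw from each input distribution, evaluate the
    model, and summarise the output distribution."""
    draws = [s(n) for s in samplers]
    y = model(*draws)
    return y.mean(), y.std(ddof=1), np.percentile(y, [2.5, 97.5])

# Toy measurement model y = a * b, evaluated under two hypotheses for input a
# (normal vs. uniform with comparable standard uncertainty):
normal_a  = lambda n: rng.normal(10.0, 0.10, n)
uniform_a = lambda n: rng.uniform(10.0 - 0.17, 10.0 + 0.17, n)
sample_b  = lambda n: rng.normal(2.0, 0.05, n)
print(propagate(lambda a, b: a * b, [normal_a,  sample_b]))
print(propagate(lambda a, b: a * b, [uniform_a, sample_b]))
```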

  11. Multijam Solutions in Traffic Models with Velocity-Dependent Driver Strategies

    DEFF Research Database (Denmark)

    Carter, Paul; Christiansen, Peter Leth; Gaididei, Yuri B.

    2014-01-01

    The optimal-velocity follow-the-leader model is augmented with an equation that allows each driver to adjust their target headway according to the velocity difference between the driver and the car in front. In this more detailed model, which is investigated on a ring, stable and unstable multipu...
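
    For reference, a sketch of an optimal-velocity follow-the-leader model on a ring augmented with a relaxation equation for each driver's target headway; the specific form of the headway-adaptation term is an illustrative assumption, not necessarily the paper's equation:

```python
import numpy as np

def ov(dx, h):
    # Bando-type optimal velocity function with target headway h (assumed form)
    return np.tanh(dx - h) + np.tanh(h)

def rhs(t, state, n, L, a=1.0, k=0.5):
    """Follow-the-leader dynamics of n cars on a ring of length L. Each driver
    relaxes toward the optimal velocity for the current headway, while the
    target headway h_i adapts to the velocity difference with the car ahead."""
    x, v, h = state[:n], state[n:2 * n], state[2 * n:]
    dx = np.roll(x, -1) - x          # headway to the car in front (cars ordered by position)
    dx[-1] += L                      # close the ring
    dv_front = np.roll(v, -1) - v    # velocity difference to the car in front
    return np.concatenate([v, a * (ov(dx, h) - v), k * dv_front])

# Integrate with, e.g., scipy.integrate.solve_ivp(rhs, (0.0, 500.0), y0, args=(n, L)).
```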

  12. How model and input uncertainty impact maize yield simulations in West Africa

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli

    2015-02-01

    Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties however exist, not only in the model design and model parameters, but also, and maybe even more importantly, in soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models’ response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models’ ability to represent the observed spatial (between locations) and temporal variability (between years) in crop yields. We found that the resolution of soil, climate and management information influences the simulated crop yields in both models. However, the differences between the models are larger than those between input datasets, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability can be represented well by both models even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower due to non-climatic factors, e.g. investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data does not necessarily improve model performance.

  13. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    Science.gov (United States)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input

  14. Sensitivity of a complex urban air quality model to input data

    International Nuclear Information System (INIS)

    Seigneur, C.; Tesche, T.W.; Roth, P.M.; Reid, L.E.

    1981-01-01

    In recent years, urban-scale photochemical simulation models have been developed that are of practical value for predicting air quality and analyzing the impacts of alternative emission control strategies. Although the performance of some urban-scale models appears to be acceptable, the demanding data requirements of such models have prompted concern about the costs of data acquisition, which might be high enough to preclude use of photochemical models for many urban areas. To explore this issue, sensitivity studies with the Systems Applications, Inc. (SAI) Airshed Model, a grid-based time-dependent photochemical dispersion model, have been carried out for the Los Angeles basin. Reductions in the amount and quality of meteorological, air quality and emission data, as well as modifications of the model's gridded structure, have been analyzed. This paper presents and interprets the results of 22 sensitivity studies. A sensitivity-uncertainty index is defined to rank input data needs for an urban photochemical model. The index takes into account the sensitivity of model predictions to the amount of input data, the costs of data acquisition, and the uncertainties in the air quality model input variables. The results of these sensitivity studies are considered in light of the limitations of specific attributes of the Los Angeles basin and of the modeling conditions (e.g., choice of wind model, length of simulation time). The extent to which the results may be applied to other urban areas is also discussed

  15. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Full Text Available Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.

  16. Variance-based sensitivity indices for stochastic models with correlated inputs

    Energy Technology Data Exchange (ETDEWEB)

    Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)

    2015-03-10

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.

  17. Variance-based sensitivity indices for stochastic models with correlated inputs

    International Nuclear Information System (INIS)

    Kala, Zdeněk

    2015-01-01

    The goal of this article is the formulation of the principles of one of the possible strategies in implementing correlation between input random variables so as to be usable for algorithm development and the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models, which are commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics
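
    For orientation, a standard Saltelli-type estimator of first-order Sobol' indices for independent inputs is sketched below; the correlated case treated in the article additionally requires the decorrelation (permutation) step described above, which is omitted here:

```python
import numpy as np

def first_order_sobol(model, sampler, d, n=4096):
    """Saltelli-type estimate of first-order Sobol' indices S_i for a model
    y = f(x) with d independent inputs. 'sampler(n, d)' must return an
    (n, d) array of input draws and 'model' must accept such an array and
    return an (n,) array of outputs."""
    A, B = sampler(n, d), sampler(n, d)
    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # replace column i of A with column i of B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var_y
    return S
```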

  18. COGEDIF - automatic TORT and DORT input generation from MORSE combinatorial geometry models

    International Nuclear Information System (INIS)

    Castelli, R.A.; Barnett, D.A.

    1992-01-01

    COGEDIF is an interactive utility which was developed to automate the preparation of two and three dimensional geometrical inputs for the ORNL-TORT and DORT discrete ordinates programs from complex three dimensional models described using the MORSE combinatorial geometry input description. The program creates either continuous or disjoint mesh input based upon the intersections of user defined meshing planes and the MORSE body definitions. The composition overlay of the combinatorial geometry is used to create the composition mapping of the discretized geometry based upon the composition found at the centroid of each of the mesh cells. This program simplifies the process of using discrete orthogonal mesh cells to represent non-orthogonal geometries in large models which require mesh sizes of the order of a million cells or more. The program was specifically written to take advantage of the new TORT disjoint mesh option which was developed at ORNL

  19. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  20. Statistical Analysis of Input Parameters Impact on the Modelling of Underground Structures

    Directory of Open Access Journals (Sweden)

    M. Hilar

    2008-01-01

    Full Text Available The behaviour of a geomechanical model and its final results are strongly affected by the input parameters. As the inherent variability of rock mass is difficult to model, engineers are frequently forced to face the question “Which input values should be used for analyses?” The correct answer to such a question requires a probabilistic approach, considering the uncertainty of site investigations and variation in the ground. This paper describes the statistical analysis of input parameters for FEM calculations of traffic tunnels in the city of Prague. At the beginning of the paper, the inaccuracy in geotechnical modelling is discussed. In the following part the Fuzzy techniques are summarized, including information about an application of Fuzzy arithmetic to the shotcrete parameters. The next part of the paper is focused on stochastic simulation – Monte Carlo simulation is briefly described, and the Latin Hypercube method is described in more detail. At the end several practical examples are described: statistical analysis of the input parameters in the numerical modelling of the completed Mrázovka tunnel (profile West Tunnel Tube, km 5.160) and modelling of the constructed tunnel Špejchar – Pelc Tyrolka.
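
    A small sketch of the Latin Hypercube sampling step used to generate input parameter sets for such FEM studies; the distribution choices and parameter values are placeholders, not the Mrázovka data:

```python
import numpy as np

def latin_hypercube(n, dists, rng=None):
    """Draw n Latin Hypercube samples for the input parameters of a
    geotechnical FE model. 'dists' maps parameter names to inverse-CDF
    (ppf) callables, e.g. scipy.stats.norm(30e6, 3e6).ppf for a shotcrete
    modulus (the numbers here are placeholders)."""
    rng = np.random.default_rng() if rng is None else rng
    out = {}
    for name, ppf in dists.items():
        # one stratified uniform draw per equal-probability bin, then shuffle the bins
        u = (rng.permutation(n) + rng.uniform(size=n)) / n
        out[name] = ppf(u)
    return out
```

    Each sampled parameter set is then fed to one FE run, and the resulting ensemble of outputs is summarised statistically.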

  1. Input data requirements for performance modelling and monitoring of photovoltaic plants

    DEFF Research Database (Denmark)

    Gavriluta, Anamaria Florina; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    This work investigates the input data requirements in the context of performance modeling of thin-film photovoltaic (PV) systems. The analysis focuses on the PVWatts performance model, well suited for on-line performance monitoring of PV strings, due to its low number of parameters and high......, modelling the performance of the PV modules at high irradiances requires a dataset of only a few hundred samples in order to obtain a power estimation accuracy of ~1-2%....

  2. Input-output and energy demand models for Ireland: Data collection report. Part 1: EXPLOR

    Energy Technology Data Exchange (ETDEWEB)

    Henry, E W; Scott, S

    1981-01-01

    Data are presented in support of EXPLOR, an input-output economic model for Ireland. The data follow the listing of exogenous data-sets used by Battelle in document X11/515/77. Data are given for 1974, 1980, and 1985 and consist of household consumption, final demand-production, and commodity prices. (ACR)

  3. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  4. Input-Output model for waste management plan for Nigeria | Njoku ...

    African Journals Online (AJOL)

    An Input-Output Model for Waste Management Plan has been developed for Nigeria based on Leontief concept and life cycle analysis. Waste was considered as source of pollution, loss of resources, and emission of green house gasses from bio-chemical treatment and decomposition, with negative impact on the ...
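
    For readers unfamiliar with the Leontief quantity model that such input-output plans build on, a minimal numerical sketch; the coefficient matrix and final-demand vector are invented for illustration:

```python
import numpy as np

# Leontief quantity model: given the technical-coefficient matrix A and a
# final-demand vector f, total sectoral output x satisfies x = A x + f, so
# x = (I - A)^{-1} f. The numbers below are made up for illustration only.
A = np.array([[0.10, 0.30],
              [0.20, 0.05]])            # input required per unit of output, sector by sector
f = np.array([100.0, 50.0])             # final demand (e.g., for waste-treatment services)
x = np.linalg.solve(np.eye(2) - A, f)   # total output needed to satisfy final demand
print(x)
```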

  5. The economic impact of multifunctional agriculture in Dutch regions: An input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2013-01-01

    Multifunctional agriculture is a broad concept lacking a precise definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model was constructed for multifunctional agriculture

  6. The economic impact of multifunctional agriculture in The Netherlands: A regional input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2012-01-01

    Multifunctional agriculture is a broad concept lacking a precise and uniform definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model is constructed for multifunctional

  7. Prediction of Chl-a concentrations in an eutrophic lake using ANN models with hybrid inputs

    Science.gov (United States)

    Aksoy, A.; Yuzugullu, O.

    2017-12-01

    Chlorophyll-a (Chl-a) concentrations in water bodies exhibit both spatial and temporal variations. As a result, frequent sampling is required with a higher number of samples. This motivates the use of remote sensing as a monitoring tool. Yet, prediction performances of models that convert radiance values into Chl-a concentrations can be poor in shallow lakes. In this study, Chl-a concentrations in Lake Eymir, a shallow eutrophic lake in Ankara (Turkey), are determined using artificial neural network (ANN) models that use hybrid inputs composed of water quality and meteorological data as well as remotely sensed radiance values to improve prediction performance. Following a screening based on multi-collinearity and principal component analysis (PCA), dissolved-oxygen concentration (DO), pH, turbidity, and humidity were selected among several parameters as the constituents of the hybrid input dataset. Radiance values were obtained from the QuickBird-2 satellite. Conversion of the hybrid input into Chl-a concentrations was studied for two different periods in the lake. ANN models were successful in predicting Chl-a concentrations. Yet, prediction performance declined for low Chl-a concentrations in the lake. In general, models with hybrid inputs were superior to the ones that solely used remotely sensed data.
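
    A hedged sketch of a hybrid-input ANN regression of the kind described, using scikit-learn; the column list and network size are illustrative assumptions rather than the Lake Eymir configuration:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Hybrid input matrix X: satellite radiances plus in-situ water-quality and
# meteorological variables, one row per sampling point, e.g. columns
# [band_blue, band_green, band_red, band_nir, DO, pH, turbidity, humidity].
def build_chla_model():
    return make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
    )

# model = build_chla_model(); model.fit(X_train, y_train); chla_hat = model.predict(X_new)
```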

  8. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope.

    Science.gov (United States)

    Chang, Cheng-Yang; Chen, Tsung-Lin

    2017-10-31

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the "open loop sensitivity" of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  10. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope

    Directory of Open Access Journals (Sweden)

    Cheng-Yang Chang

    2017-10-01

    Full Text Available Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the “open loop sensitivity” of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  11. Nonaligned shocks for discrete velocity models of the Boltzmann equation

    Directory of Open Access Journals (Sweden)

    J. M. Greenberg

    1991-05-01

    Full Text Available At the conclusion of I. Bonzani's presentation on the existence of structured shock solutions to the six-velocity, planar, discrete Boltzmann equation (with binary and triple collisions), Greenberg asked whether such solutions were possible in directions e(α) = (cos α, sin α) when α was not one of the particle flow directions. This question generated a spirited discussion but the question was still open at the conclusion of the conference. In this note the author will provide a partial resolution to the question raised above. Using formal perturbation arguments he will produce approximate solutions to the equation considered by Bonzani which represent traveling waves propagating in any direction e(α) = (cos α, sin α).

  12. Modeling of velocity field for vacuum induction melting process

    Institute of Scientific and Technical Information of China (English)

    CHEN Bo; JIANG Zhi-guo; LIU Kui; LI Yi-yi

    2005-01-01

    The numerical simulation for the recirculating flow of melting of an electromagnetically stirred alloy in a cylindrical induction furnace crucible was presented. Inductive currents and electromagnetic body forces in the alloy under three different solenoid frequencies and three different melting powers were calculated, and then the forces were adopted in the fluid flow equations to simulate the flow of the alloy and the behavior of the free surface. The relationship between the height of the electromagnetic stirring meniscus, melting power, and solenoid frequency was derived based on the law of mass conservation. The results show that the inductive currents and the electromagnetic forces vary with the frequency, melting power, and the physical properties of metal. The velocity and the height of the meniscus increase with the increase of the melting power and the decrease of the solenoid frequency.

  13. Velocity potential formulations of highly accurate Boussinesq-type models

    DEFF Research Database (Denmark)

    Bingham, Harry B.; Madsen, Per A.; Fuhrman, David R.

    2009-01-01

    , B., 2006. A Boussinesq-type method for fully nonlinear waves interacting with a rapidly varying bathymetry. Coast. Eng. 53, 487-504); Jamois et al. (Jamois, E., Fuhrman, D.R., Bingham, H.B., Molin, B., 2006. Wave-structure interactions and nonlinear wave processes on the weather side of reflective...... with the kinematic bottom boundary condition. The true behaviour of the velocity potential formulation with respect to linear shoaling is given for the first time, correcting errors made by Jamois et al. (Jamois, E., Fuhrman, D.R., Bingham, H.B., Molin, B., 2006. Wave-structure interactions and nonlinear wave...... processes on the weather side of reflective structures. Coast. Eng. 53, 929-945). An exact infinite series solution for the potential is obtained via a Taylor expansion about an arbitrary vertical position z=(z) over cap. For practical implementation however, the solution is expanded based on a slow...

  14. Velocity Deficits in the Wake of Model Lemon Shark Dorsal Fins Measured with Particle Image Velocimetry

    Science.gov (United States)

    Terry, K. N.; Turner, V.; Hackett, E.

    2017-12-01

    Aquatic animals' morphology provides inspiration for human technological developments, as their bodies have evolved and become adapted for efficient swimming. Lemon sharks exhibit a uniquely large second dorsal fin that is nearly the same size as the first fin, the hydrodynamic role of which is unknown. This experimental study looks at the drag forces on a scale model of the Lemon shark's unique two-fin configuration in comparison to drag forces on a more typical one-fin configuration. The experiments were performed in a recirculating water flume, where the wakes behind the scale models are measured using particle image velocimetry. The experiments are performed at three different flow speeds for both fin configurations. The measured instantaneous 2D distributions of the streamwise and wall-normal velocity components are ensemble averaged to generate streamwise velocity vertical profiles. In addition, velocity deficit profiles are computed from the difference between these mean streamwise velocity profiles and the free stream velocity, which is computed based on measured flow rates during the experiments. Results show that the mean velocities behind the fin and near the fin tip are smallest and increase as the streamwise distance from the fin tip increases. The magnitude of velocity deficits increases with increasing flow speed for both fin configurations, but at all flow speeds, the two-fin configurations generate larger velocity deficits than the one-fin configurations. Because the velocity deficit is directly proportional to the drag force, these results suggest that the two-fin configuration produces more drag.

  15. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP......) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration...... is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
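
    A dense, simplified sketch of the ADMM splitting for the box-constrained QP that a condensed input-constrained MPC problem reduces to; the paper's own implementation instead alternates a Riccati-based LQ solve with a structured QP solved by an interior-point method, which this illustration does not reproduce:

```python
import numpy as np

def admm_box_qp(H, g, lb, ub, rho=1.0, iters=200):
    """ADMM for the box-constrained QP  min 0.5*z'Hz + g'z  s.t. lb <= z <= ub.
    The first step is a linear solve, the second a projection onto the box,
    followed by a scaled dual update."""
    n = len(g)
    chol = np.linalg.cholesky(H + rho * np.eye(n))   # factor once, reuse every iteration
    z = np.zeros(n)
    v = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        rhs = rho * (v - u) - g
        y = np.linalg.solve(chol, rhs)               # forward solve  L y = rhs
        z = np.linalg.solve(chol.T, y)               # backward solve L' z = y
        v = np.clip(z + u, lb, ub)                   # projection onto the input bounds
        u = u + z - v                                # scaled dual update
    return v

# Example: admm_box_qp(2*np.eye(3), np.array([-1., 0., 1.]), -0.5*np.ones(3), 0.5*np.ones(3))
```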

  16. Development of the MARS input model for Ulchin 1/2 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.

    2003-03-01

    KAERI has been developing the NSSS transient analyzer based on best-estimate codes for the Ulchin 1/2 plants. The MARS and RETRAN codes are used as the best-estimate codes for the NSSS transient analyzer. Of the two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents with break sizes greater than 2 inches in diameter. This report includes the input model requirements and the calculation note for the Ulchin 1/2 MARS input data generation (see the Appendix). In order to confirm the validity of the input data, we performed calculations for a steady state at the 100 % power operating condition and for a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 1/2

  17. Development of the MARS input model for Ulchin 3/4 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Hwang, M. G.

    2003-12-01

    KAERI has been developing the NSSS transient analyzer based on best-estimate codes. The MARS and RETRAN codes are adopted as the best-estimate codes for the NSSS transient analyzer. Of these two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents with break sizes greater than 2 inches in diameter. This report includes the MARS input model requirements and the calculation note for the MARS input data generation (see the Appendix) for the Ulchin 3/4 plant analyzer. In order to confirm the validity of the input data, we performed calculations for a steady state at the 100 % power operating condition and for a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 3/4

  18. ANALYSIS OF THE BANDUNG CHANGES EXCELLENT POTENTIAL THROUGH INPUT-OUTPUT MODEL USING INDEX LE MASNE

    Directory of Open Access Journals (Sweden)

    Teti Sofia Yanti

    2017-03-01

    Full Text Available An Input-Output Table is arranged to present an overview of the interrelationships and interdependence between units of activity (production sectors) in the whole economy. Input-output models are therefore a complete and comprehensive analytical tool. The usefulness of input-output tables lies in analysing the economic structure at the national/regional level, covering the structure of production and value added (GDP) of each sector. For comprehensive planning and evaluation of development outcomes at both the national and smaller (district/city) scales, regional development planning can use input-output analysis. The analysis of Bandung's economic structure used the Le Masne index, comparing the technical coefficients of 2003 and 2008, of which nearly 50% changed. The trade sector has grown more conspicuously than other sectors, followed by road transport and air transport services; development priorities and investment in Bandung should be directed to these areas, since they can act as a driving force and an attraction for the growth of other areas. The sectors that experienced the largest decrease were Industrial Chemicals and Goods from Chemistry, followed by the Oil and Refinery Industry and the Textile Industry Except For Garment.

  19. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1999-01-01

    Mathematical models of refrigeration systems are often based on a coupling of component models forming a “closed loop” type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical...... instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities. A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using...... variables with narrow definition intervals for the exchange of information between the cycle model and the component models. The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems....

  20. Modelling groundwater discharge areas using only digital elevation models as input data

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science

    2006-10-15

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years), some of these factors can be difficult to measure. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much can topography alone explain the distribution of geohydrological objects in the landscape. A simplified description of the distribution of geohydrological objects in the landscape is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low situated slopes. Areas in-between these make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern only using topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data, geomorphometrical parameters to predict landscape ridges, basin fill for predicting lakes, flow accumulations for predicting future waterways, and topographical wetness indexes for dividing in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the

  1. Modelling groundwater discharge areas using only digital elevation models as input data

    International Nuclear Information System (INIS)

    Brydsten, Lars

    2006-10-01

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years), some of these factors can be difficult to measure. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much can topography alone explain the distribution of geohydrological objects in the landscape. A simplified description of the distribution of geohydrological objects in the landscape is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low situated slopes. Areas in-between these make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern only using topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data, geomorphometrical parameters to predict landscape ridges, basin fill for predicting lakes, flow accumulations for predicting future waterways, and topographical wetness indexes for dividing in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the
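
    A minimal sketch of the topographical wetness index computation mentioned above, assuming a gridded DEM and a precomputed flow-accumulation grid (e.g., from a D8 routing step in a GIS package); function and variable names are illustrative:

```python
import numpy as np

def wetness_index(dem, flow_acc, cell=10.0):
    """Topographic wetness index ln(a / tan(beta)) on a gridded DEM.
    flow_acc is the number of upslope cells draining through each cell;
    cell is the grid spacing in metres, matching the 10-metre DEM mentioned
    above. High values flag likely groundwater discharge areas, low values
    likely recharge areas."""
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.arctan(np.hypot(dzdx, dzdy))            # slope angle beta
    area = (flow_acc + 1.0) * cell                     # specific catchment area per unit contour width
    return np.log(area / np.maximum(np.tan(slope), 1e-6))
```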

  2. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity on a large scale. We use a Shuttle Radar Topography Mission (SRTM) based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion for a body falling along an inclined plane. We assume that an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
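
    The abstract names two ingredients: motion of a falling body on an inclined reach and an inelastic merge at junctions. The sketch below illustrates both in a minimal form; the resistance term, its coefficient, and the time step are assumptions, not the authors' formulation.

    ```python
    # Minimal sketch of the two ingredients named above: (1) a falling-body update
    # on an inclined reach with a simple resistance term, and (2) an inelastic
    # (momentum-conserving) merge of two branches at a junction.
    # The resistance coefficient k and time step dt are illustrative assumptions.
    import math

    def reach_velocity(v, slope_angle_rad, dt, g=9.81, k=0.05):
        """One explicit step of dv/dt = g*sin(theta) - k*v**2 along an inclined reach."""
        return v + dt * (g * math.sin(slope_angle_rad) - k * v * v)

    def merge_velocity(q1, v1, q2, v2):
        """Velocity after an inelastic merge of two branches with discharges q1, q2."""
        return (q1 * v1 + q2 * v2) / (q1 + q2)   # mass-weighted momentum balance

    v = 0.0
    for _ in range(1000):                        # approach terminal velocity on a 1% slope
        v = reach_velocity(v, math.atan(0.01), dt=0.1)
    print(v, merge_velocity(10.0, v, 4.0, 0.8))
    ```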

  3. Dynamics of a Stage Structured Pest Control Model in a Polluted Environment with Pulse Pollution Input

    OpenAIRE

    Liu, Bing; Xu, Ling; Kang, Baolin

    2013-01-01

    By using a pollution model and an impulsive delay differential equation, we formulate a stage-structured pest control model with a natural enemy in a polluted environment, introducing a constant periodic pollutant input and killing pests at different fixed moments, and we investigate the dynamics of such a system. We assume that only the natural enemies are affected by pollution, and we choose a control method that kills the pest without harming the natural enemies. Sufficient conditions for global attractivity ...

  4. CONSTRUCTION OF A DYNAMIC INPUT-OUTPUT MODEL WITH A HUMAN CAPITAL BLOCK

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2017-03-01

    Full Text Available The accumulation of human capital is an important factor of economic growth. It is useful to include "human capital" as a factor in a macroeconomic model, as it helps to take into account the quality differentiation of the workforce. Most models distinguish the labor force only by level of education, while other factors remain unaccounted for; among them are health status and cultural development, which influence productivity as well as gross product reproduction. Including a human capital block in an interindustry model can make it more reliable for economic development forecasting. The article presents a mathematical description of an extended dynamic input-output model (DIOM) with a human capital block. The extended DIOM is based on the input-output model of the KAMIN system (the System of Integrated Analyses of Interindustrial Information) developed at the Institute of Economics and Industrial Engineering of the Siberian Branch of the Russian Academy of Sciences and at Novosibirsk State University. The extended input-output model can be used to analyze and forecast the development of the Russian economy.

  5. A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.

    Science.gov (United States)

    Blake, James R; Easson, William J; Hoskins, Peter R

    2009-09-01

    A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of silicone (1.41). Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degrees of stenosis in the model pairs agreed to within 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, excepting the maximum velocities. At high flow rates, there were considerable differences in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.

  6. High Resolution Modeling of the Thermospheric Response to Energy Inputs During the RENU-2 Rocket Flight

    Science.gov (United States)

    Walterscheid, R. L.; Brinkman, D. G.; Clemmons, J. H.; Hecht, J. H.; Lessard, M.; Fritz, B.; Hysell, D. L.; Clausen, L. B. N.; Moen, J.; Oksavik, K.; Yeoman, T. K.

    2017-12-01

    The Earth's magnetospheric cusp provides direct access of energetic particles to the thermosphere. These particles produce ionization and kinetic (particle) heating of the atmosphere. The increased ionization, coupled with enhanced electric fields in the cusp, produces increased Joule heating and ion drag forcing. These energy inputs cause large wind and temperature changes in the cusp region. The Rocket Experiment for Neutral Upwelling 2 (RENU-2) launched from Andoya, Norway at 0745 UT on 13 December 2015 into the ionosphere-thermosphere beneath the magnetic cusp. It made measurements of the energy inputs (e.g., precipitating particles, electric fields) and the thermospheric response to these energy inputs (e.g., neutral density and temperature, neutral winds). Complementary ground-based measurements were made. In this study, we use a high-resolution, two-dimensional, time-dependent, nonhydrostatic, nonlinear dynamical model driven by rocket and ground-based measurements of the energy inputs to simulate the thermospheric response during the RENU-2 flight. Model simulations will be compared to the corresponding measurements of the thermosphere to see what they reveal about thermospheric structure and the nature of magnetosphere-ionosphere-thermosphere coupling in the cusp. Acknowledgements: This material is based upon work supported by the National Aeronautics and Space Administration under Grants NNX16AH46G and NNX13AJ93G. This research was also supported by The Aerospace Corporation's Technical Investment program.

  7. Input vs. Output Taxation—A DSGE Approach to Modelling Resource Decoupling

    Directory of Open Access Journals (Sweden)

    Marek Antosiewicz

    2016-04-01

    Full Text Available Environmental taxes constitute a crucial instrument aimed at reducing resource use through lower production losses, resource-leaner products, and more resource-efficient production processes. In this paper we focus on material use and apply a multi-sector dynamic stochastic general equilibrium (DSGE) model to study two types of taxation: a tax on the material inputs used by the industry, energy, construction, and transport sectors, and a tax on the output of these sectors. We allow for endogenous adoption of resource-saving technologies. We calibrate the model for the EU27 area using an input-output (IO) matrix. We consider taxation introduced from 2021 and simulate its impact until 2050. We compare the taxes with respect to their ability to induce a reduction in material use and to raise revenue. We also consider the effect of spending this revenue on reducing labour taxation. We find that input and output taxation create contrasting incentives and have opposite effects on resource efficiency. The material input tax induces investment in efficiency-improving technology which, in the long term, results in GDP and employment 15%–20% higher than in the case of a comparable output tax. We also find that using revenues to reduce taxes on labour has stronger beneficial effects for the input tax.

  8. Application of a Linear Input/Output Model to Tankless Water Heaters

    Energy Technology Data Exchange (ETDEWEB)

    Butcher T.; Schoenbauer, B.

    2011-12-31

    In this study, the applicability of a linear input/output model to gas-fired, tankless water heaters has been evaluated. This simple model assumes that the relationship between input and output, averaged over both active draw and idle periods, is linear. This approach is being applied to boilers in other studies and offers the potential to make a small number of simple measurements to obtain the model parameters. These parameters can then be used to predict performance under complex load patterns. Both condensing and non-condensing water heaters have been tested under a very wide range of load conditions. It is shown that this approach can be used to reproduce performance metrics, such as the energy factor, and can be used to evaluate the impacts of alternative draw patterns and conditions.
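
    A minimal sketch of what such a linear input/output model looks like: the average fuel input rate is assumed affine in the average useful output rate, input ≈ a + b * output, with the two coefficients obtained by least squares. The numbers below are invented for illustration and are not the report's data.

    ```python
    # Hedged sketch of a linear input/output model: average fuel input rate is
    # assumed to be an affine function of average useful output rate,
    # input = a + b * output. The numbers below are invented for illustration.
    import numpy as np

    output = np.array([0.0, 2.0, 5.0, 10.0, 20.0])     # kBtu/h delivered (draw + idle average)
    fuel_in = np.array([0.3, 2.6, 6.0, 11.6, 22.9])    # kBtu/h fuel input

    b, a = np.polyfit(output, fuel_in, 1)              # slope and intercept
    print(f"input ~= {a:.2f} + {b:.2f} * output")

    # Once a and b are known, performance under a new draw pattern can be predicted,
    # e.g. average efficiency = output / (a + b * output).
    out_new = 7.5
    print("predicted efficiency:", out_new / (a + b * out_new))
    ```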

  9. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...... benchmarks often used for validation of deterministic water wave models. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in comparison with experimental measurements could be partially explained...

  10. New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints

    Directory of Open Access Journals (Sweden)

    Qing Lu

    2014-01-01

    Full Text Available This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds. A new model is proposed to approximate the delay, and the uncertainty is of polytopic type. For the state-feedback MPC design objective, we formulate an optimization problem. Under a model transformation, a new model predictive controller is designed such that the robust asymptotic stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.

  11. Input Uncertainty and its Implications on Parameter Assessment in Hydrologic and Hydroclimatic Modelling Studies

    Science.gov (United States)

    Chowdhury, S.; Sharma, A.

    2005-12-01

    Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of the uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that simulate based on the observed input variable instead of its true value, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different from that in the past. Possible examples include situations where the accuracy of the catchment-averaged rainfall has increased substantially due to an increase in rain-gauge density, or where the accuracy of climatic observations (such as sea surface temperatures) has increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX) [Cook, 1994], operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise
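
    A toy illustration of the SIMEX idea described above, using a linear regression with a noisy input: the model is refitted with extra noise added in multiples lambda of the known error variance, and the parameter trend is extrapolated back to lambda = -1 (zero measurement error). The data and noise level below are invented.

    ```python
    # Sketch of the SIMEX idea on a toy linear regression with a noisy input:
    # refit with extra noise added in multiples lambda of the known error variance,
    # then extrapolate the parameter back to lambda = -1 (zero measurement error).
    import numpy as np

    rng = np.random.default_rng(0)
    x_true = rng.uniform(0, 10, 500)
    sigma_u = 1.0                                   # known input error std
    x_obs = x_true + rng.normal(0, sigma_u, x_true.size)
    y = 2.0 * x_true + rng.normal(0, 0.5, x_true.size)

    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    slopes = []
    for lam in lambdas:
        # average over replicates to smooth the effect of the added noise
        reps = [np.polyfit(x_obs + rng.normal(0, np.sqrt(lam) * sigma_u, x_obs.size), y, 1)[0]
                for _ in range(50)]
        slopes.append(np.mean(reps))

    # quadratic extrapolation of slope(lambda) back to lambda = -1
    coeffs = np.polyfit(lambdas, slopes, 2)
    print("naive slope:", slopes[0], "SIMEX slope:", np.polyval(coeffs, -1.0))
    ```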

  12. Decay constants of heavy mesons in the relativistic potential model with velocity dependent corrections

    International Nuclear Information System (INIS)

    Avaliani, I.S.; Sisakyan, A.N.; Slepchenko, L.A.

    1992-01-01

    In the relativistic model with the velocity dependent potential the masses and leptonic decay constants of heavy pseudoscalar and vector mesons are computed. The possibility of using this potential is discussed. 11 refs.; 4 tabs

  13. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    Science.gov (United States)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low
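
    The construction above is analogous to manifold learning. As a hedged illustration of the general idea, not the authors' implementation, the sketch below embeds synthetic high-dimensional samples (standing in for microstructure realizations) into a low-dimensional set with Isomap.

    ```python
    # Hedged illustration of the general idea (not the authors' implementation):
    # map high-dimensional property/microstructure samples M in R^n to a
    # low-dimensional set A in R^d with a geodesic-distance-preserving embedding
    # (Isomap). The synthetic samples stand in for microstructure realizations.
    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(1)
    n_samples, n = 200, 400                      # 200 realizations in R^400
    latent = rng.uniform(-1, 1, (n_samples, 3))  # hidden low-dimensional coordinates
    basis = rng.normal(size=(3, n))
    M = np.tanh(latent @ basis)                  # nonlinear embedding into R^n

    embedding = Isomap(n_neighbors=10, n_components=3)
    A = embedding.fit_transform(M)               # data-driven coordinates in R^d, d=3
    print(A.shape)                               # (200, 3): reduced-order input representation
    ```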

  14. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    International Nuclear Information System (INIS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-01-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology

  15. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    Science.gov (United States)

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

    Modeling and recognizing spatiotemporal, as opposed to static input, is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupling manner. Self Organizing Maps (SOM) model the spatial aspect of the problem and Markov models its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs, performed by different, native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network

    Directory of Open Access Journals (Sweden)

    Adam ePonzi

    2012-03-01

    Full Text Available The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioural task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviourally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behaviour. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and delineate the range of parameters where this behaviour is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus-dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow, coherent, task-dependent responses

  17. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus-dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow, coherent, task-dependent responses, which could be utilized by the animal in behavior.

  18. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
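
    A toy sketch of the relax-then-quantize idea described above: the discrete-valued input is treated as a continuous virtual input in the optimization and afterwards mapped back to the nearest admissible level. The one-step problem, its linear model, and all weights below are assumptions, not the paper's MPC formulation.

    ```python
    # Toy sketch of virtual control inputs: a one-step problem with one continuous
    # input u_c and one discrete input u_d in {0, 1, 2}. The discrete input is
    # relaxed to a continuous "virtual" input for the optimization, then snapped
    # to the nearest admissible level (quantization).
    import numpy as np
    from scipy.optimize import minimize

    levels = np.array([0.0, 1.0, 2.0])
    x = 3.0                                            # current state

    def cost(u):                                       # u = [u_c, virtual u_d]
        u_c, u_v = u
        x_next = 0.8 * x + 0.5 * u_c + 1.0 * u_v       # assumed linear model
        return x_next**2 + 0.1 * u_c**2 + 0.1 * u_v**2

    res = minimize(cost, x0=[0.0, 0.0],
                   bounds=[(-1.0, 1.0), (levels.min(), levels.max())])
    u_c_opt, u_v_opt = res.x
    u_d = levels[np.argmin(np.abs(levels - u_v_opt))]  # quantize the virtual input
    print(u_c_opt, u_v_opt, u_d)
    ```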

  19. An empirical velocity scale relation for modelling a design of large mesh pelagic trawl

    NARCIS (Netherlands)

    Ferro, R.S.T.; Marlen, van B.; Hansen, K.E.

    1996-01-01

    Physical models of fishing nets are used in fishing technology research at scales of 1:40 or smaller. As with all modelling involving fluid flow, a set of rules is required to determine the geometry of the model and its velocity relative to the water. Appropriate rules ensure that the model is
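
    The paper's empirical velocity scale relation is not reproduced here. As general context only, a common dimensional-analysis baseline for scaling towing velocity in free-surface model tests is Froude scaling, sketched below under that assumption.

    ```python
    # General-context sketch only: the paper derives an *empirical* velocity scale
    # relation that is not reproduced here. A common dimensional-analysis baseline
    # for free-surface towing tests is Froude scaling,
    # v_model = v_full * sqrt(s), with geometric scale s = L_model / L_full.
    import math

    def froude_scaled_velocity(v_full_scale, scale_ratio):
        """Towing velocity for a model at geometric scale ratio (e.g. 1/40)."""
        return v_full_scale * math.sqrt(scale_ratio)

    print(froude_scaled_velocity(2.0, 1 / 40))   # m/s for a 1:40 model towed at 2 m/s full scale
    ```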

  20. On the redistribution of existing inputs using the spherical frontier dea model

    Directory of Open Access Journals (Sweden)

    José Virgilio Guedes de Avellar

    2010-04-01

    Full Text Available The Spherical Frontier DEA Model (SFM) (Avellar et al., 2007) was developed to be used when one wants to fairly distribute a new and fixed input to a group of Decision Making Units (DMUs). SFM's basic idea is to distribute this new and fixed input in such a way that every DMU will be placed on an efficiency frontier with a spherical shape. We use SFM to analyze the problems that appear when one wants to redistribute an already existing input to a group of DMUs such that the total sum of this input remains constant. We also analyze the case in which this total sum may vary.

  1. Persistence and ergodicity of plant disease model with markov conversion and impulsive toxicant input

    Science.gov (United States)

    Zhao, Wencai; Li, Juan; Zhang, Tongqian; Meng, Xinzhu; Zhang, Tonghua

    2017-07-01

    Taking into account both white and colored noise, a stochastic mathematical model with impulsive toxicant input is formulated. Based on this model, we investigate the dynamics, such as persistence and ergodicity, of a plant infectious disease model with Markov conversion in a polluted environment. The thresholds of extinction and persistence in mean are obtained. By using Lyapunov functions, we prove that the system is ergodic and has a stationary distribution under certain sufficient conditions. Finally, numerical simulations are employed to illustrate our theoretical analysis.

  2. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.

    2015-01-01

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to make progress on the issue of quantifying the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme, according to the following steps: identification of influential phenomena; identification of the associated physical models and parameters, depending on the code used; quantification of the variation range of the identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters has been set up in the Specifications of Phase II of the PREMIUM benchmark. A set of quantitative criteria has also been proposed for the identification of influential input parameters (IP) and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base case calculations show a spread in the predicted cladding temperatures and quench front propagation that has been characterized. All the participants except one predict too fast a quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numerical origins. The criteria adopted for the identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds

  3. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
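
    One common way to design such a perturbed-parameter ensemble is Latin hypercube sampling over assumed parameter ranges, sketched below. The parameter names and ranges are invented for illustration; the study used its own specialized UQ software framework.

    ```python
    # Hedged sketch of building a perturbed-parameter ensemble: draw a Latin
    # hypercube sample over assumed ranges of a few uncertain parameters
    # (names and ranges here are invented, not CAM's actual tuning parameters).
    import numpy as np
    from scipy.stats import qmc

    param_names = ["cloud_tau", "entrainment_rate", "ice_fall_speed"]
    lower = np.array([1.0e3, 0.5e-3, 0.5])
    upper = np.array([1.0e4, 2.0e-3, 1.5])

    sampler = qmc.LatinHypercube(d=len(param_names), seed=42)
    unit_sample = sampler.random(n=3000)               # 3000 ensemble members in [0, 1)^d
    ensemble = qmc.scale(unit_sample, lower, upper)    # map to physical ranges

    for member in ensemble[:3]:
        print(dict(zip(param_names, member)))          # each row would feed one model run
    ```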

  4. The Canadian Defence Input-Output Model DIO Version 4.41

    Science.gov (United States)

    2011-09-01

    Request to develop a DND-tailored input/output model: electronic communication from Allen Weldon to Team Leader, Defence Economics Team, on March 12, 2011. [The remainder of this record is residue from a commodity-classification listing, e.g., handbags and similar personal articles, cotton yarn, radar and radio navigation equipment, semiconductors, printed circuits, integrated circuits, other electronic.]

  5. Urban Landscape Characterization Using Remote Sensing Data For Input into Air Quality Modeling

    Science.gov (United States)

    Quattrochi, Dale A.; Estes, Maurice G., Jr.; Crosson, William; Khan, Maudood

    2005-01-01

    The urban landscape is inherently complex, and this complexity is not adequately captured in the air quality models that are used to assess whether urban areas are in attainment of EPA air quality standards, particularly for ground-level ozone. This inability of air quality models to respond sufficiently to the heterogeneous nature of the urban landscape can impact how well these models predict ozone pollutant levels over metropolitan areas and, ultimately, whether cities exceed EPA ozone air quality standards. We are exploring the utility of high-resolution remote sensing data and urban growth projections as improved inputs to meteorological and air quality models, focusing on the Atlanta, Georgia metropolitan area as a case study. The National Land Cover Dataset at 30 m resolution is being used as the land use/land cover input and aggregated to the 4 km scale for the MM5 mesoscale meteorological model and the Community Multiscale Air Quality (CMAQ) modeling schemes. Use of these data has been found to better characterize low-density/suburban development as compared with the USGS 1 km land use/land cover data that have traditionally been used in modeling. Air quality prediction for future scenarios to 2030 is being facilitated by land use projections using a spatial growth model. Land use projections were developed using the 2030 Regional Transportation Plan developed by the Atlanta Regional Commission. This allows the state environmental protection agency to evaluate how these transportation plans will affect future air quality.

  6. Development of an Input Suite for an Orthotropic Composite Material Model

    Science.gov (United States)

    Hoffarth, Canio; Shyamsunder, Loukham; Khaled, Bilal; Rajan, Subramaniam; Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Blankenhorn, Gunther

    2017-01-01

    An orthotropic three-dimensional material model suitable for use in modeling impact tests has been developed; it has three major components: elastic and inelastic deformations, damage, and failure. The material model has been implemented as MAT213 in a special version of LS-DYNA and uses tabulated data obtained from experiments. The prominent features of the constitutive model are illustrated using a widely used aerospace composite, the T800S/3900-2B [P2352W-19] BMS8-276 Rev-H unitape fiber/resin unidirectional composite. The input for the deformation model consists of experimental data from 12 distinct experiments at a known temperature and strain rate: tension and compression along all three principal directions, shear in all three principal planes, and off-axis tension or compression tests in all three principal planes, along with other material constants. There are additional inputs associated with the damage and failure models. The steps in using this model are illustrated: composite characterization tests, verification tests, and a validation test. The results show that the developed and implemented model is stable and yields acceptably accurate results.

  7. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
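
    As a hedged stand-in for the mutual-information-based route (the paper's PMI algorithm is not reproduced here), the sketch below ranks synthetic candidate inputs by estimated mutual information with the target and keeps the top-ranked ones.

    ```python
    # Hedged sketch of mutual-information-based input variable selection, as a
    # stand-in for the PMI algorithm discussed above (synthetic data; sklearn's
    # estimator is not the paper's implementation).
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(0)
    n = 1000
    candidates = rng.normal(size=(n, 6))               # 6 candidate flowmeter features
    target = (2.0 * candidates[:, 0]                   # only features 0 and 3 are informative
              + np.sin(candidates[:, 3]) + 0.1 * rng.normal(size=n))

    mi = mutual_info_regression(candidates, target, random_state=0)
    ranking = np.argsort(mi)[::-1]
    print("MI scores:", np.round(mi, 3))
    print("selected inputs (top 2):", ranking[:2])
    ```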

  8. Modeling non-Fickian dispersion by use of the velocity PDF on the pore scale

    Science.gov (United States)

    Kooshapur, Sheema; Manhart, Michael

    2015-04-01

    For obtaining a description of reactive flows in porous media, apart from the geometrical complications of resolving the velocities and scalar values, one has to deal with the additional reactive term in the transport equation. An accurate description of the interface of the reacting fluids, which is strongly influenced by dispersion, is essential for resolving this term. In REV-based simulations the reactive term needs to be modeled taking sub-REV fluctuations and possibly non-Fickian dispersion into account. Non-Fickian dispersion has been observed in strongly heterogeneous domains and in early phases of transport. A fully resolved solution of the Navier-Stokes and transport equations, which yields a detailed description of the flow properties, dispersion, interfaces of fluids, etc., is however not practical for domains containing more than a few thousand grains, due to the huge computational effort required. Through Probability Density Function (PDF) based methods, the velocity distribution in the pore space can facilitate the understanding and modelling of non-Fickian dispersion [1,2]. Our aim is to model the transition between non-Fickian and Fickian dispersion in a random sphere pack within the framework of a PDF-based transport model proposed by Meyer and Tchelepi [1,3]. They proposed a stochastic transport model where the velocity components of tracer particles are represented by a continuous Markovian stochastic process. In addition to [3], we consider the effects of pore-scale diffusion and formulate a different stochastic equation for the increments in velocity space from first principles. To assess the terms in this equation, we performed Direct Numerical Simulations (DNS) solving the Navier-Stokes equation on a random sphere pack. We extracted the PDFs and statistical moments (up to the 4th moment) of the stream-wise velocity, u, and of the first- and second-order velocity derivatives, both unconditional and conditioned on velocity. By using this data and
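
    A minimal sketch of a Markovian (Langevin-type) stochastic process for a tracer particle's stream-wise velocity, in the spirit of the model class referenced above. The drift and diffusion terms below are the simplest Ornstein-Uhlenbeck choice and the parameters are invented; this is not the authors' formulation.

    ```python
    # Hedged sketch of a Markovian (Langevin-type) model for a tracer particle's
    # stream-wise velocity: an Ornstein-Uhlenbeck update with invented parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, n_steps = 1e-3, 10000
    tau = 0.05            # assumed velocity decorrelation time
    u_mean, sigma = 1.0, 0.3

    u = np.empty(n_steps)
    u[0] = u_mean
    for k in range(1, n_steps):
        # relax towards the mean velocity, plus random forcing
        u[k] = (u[k - 1]
                + (-(u[k - 1] - u_mean) / tau) * dt
                + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal())

    x = np.cumsum(u) * dt                     # particle displacement
    print(u.mean(), u.std(), x[-1])
    ```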

  9. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  10. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  11. Axial flow velocity patterns in a normal human pulmonary artery model: pulsatile in vitro studies.

    Science.gov (United States)

    Sung, H W; Yoganathan, A P

    1990-01-01

    It has been clinically observed that the flow velocity patterns in the pulmonary artery are directly modified by disease. The present study addresses the hypothesis that altered velocity patterns relate to the severity of various diseases in the pulmonary artery. This paper lays a foundation for that analysis by providing a detailed description of flow velocity patterns in the normal pulmonary artery, using flow visualization and laser Doppler anemometry techniques. The studies were conducted in an in vitro rigid model in a right heart pulse duplicator system. In the main pulmonary artery, a broad central flow field was observed throughout systole. The maximum axial velocity (150 cm/s) was measured at peak systole. In the left pulmonary artery, the axial velocities were approximately evenly distributed in the perpendicular plane. However, in the bifurcation plane, they were slightly skewed toward the inner wall at peak systole and during the deceleration phase. In the right pulmonary artery, the axial velocity in the perpendicular plane had a very marked M-shaped profile at peak systole and during the deceleration phase, due to a pair of strong secondary flows. In the bifurcation plane, higher axial velocities were observed along the inner wall, while lower axial velocities were observed along the outer wall and in the center. Overall, relatively low levels of turbulence were observed in all the branches during systole. The maximum turbulence intensity measured was at the boundary of the broad central flow field in the main pulmonary artery at peak systole.

  12. Mathematical Modeling for Energy Dissipation Behavior of Velocity ...

    African Journals Online (AJOL)

    The developed oil-pressure damper is installed with an additional Relief Valve in parallel with the Throttle Valve. This is intended to obtain adaptive control by changing the damping coefficient of the damper through a changeable orifice size. In order to simulate its actual energy-dissipating behavior, a serial friction model and a ...

  13. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    Full Text Available The article considers the issue of allocating depreciation costs in a dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the management of fixed assets. Developing an algorithm for the allocation of depreciation costs is particularly relevant when constructing a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. When the adequacy conditions of such an algorithm are met, it allows one to evaluate the appropriateness of investments in fixed assets and to study the final financial results of an industrial enterprise depending on management decisions in the depreciation policy. It should be noted that the model in question is always degenerate for an enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures, corresponding to structural elements that are unable to generate fixed assets (some of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs in the model. This algorithm was developed by the authors and served as the basis for a flowchart for subsequent software implementation. The construction of such an algorithm and its use in dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider the solutions discussed in the article to be of interest to economists of various industrial enterprises.

  14. Good Modeling Practice for PAT Applications: Propagation of Input Uncertainty and Sensitivity Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna

    2009-01-01

    The uncertainty and sensitivity analyses are evaluated for their usefulness as part of model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input...... compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was observed to be lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which...... promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools form part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes. © 2009 American Institute...
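
    A generic sketch of the workflow being evaluated: propagate assumed input uncertainty through a model by Monte Carlo sampling and rank the inputs with standardized regression coefficients (SRC). The toy algebraic model and parameter ranges below are stand-ins, not the cultivation model of the study.

    ```python
    # Generic sketch: Monte Carlo propagation of assumed input uncertainty plus
    # ranking of inputs via standardized regression coefficients (SRC).
    # The toy model below is NOT the cultivation model of the study.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    # assumed +/-10% uniform uncertainty around nominal parameter values
    mu_max = rng.uniform(0.27, 0.33, n)
    Ks = rng.uniform(0.9, 1.1, n)
    Yxs = rng.uniform(0.45, 0.55, n)

    def toy_model(mu_max, Ks, Yxs, S=5.0):
        return Yxs * S * mu_max * S / (Ks + S)         # stand-in scalar output

    y = toy_model(mu_max, Ks, Yxs)
    print("output mean/std:", y.mean(), y.std())       # propagated uncertainty

    # standardized regression coefficients via least squares on scaled variables
    X = np.column_stack([mu_max, Ks, Yxs])
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    print(dict(zip(["mu_max", "Ks", "Yxs"], np.round(src, 2))))
    ```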

  15. Quantum Gravity and Maximum Attainable Velocities in the Standard Model

    International Nuclear Information System (INIS)

    Alfaro, Jorge

    2007-01-01

    A main difficulty in the quantization of the gravitational field is the lack of experiments that discriminate among the theories proposed to quantize gravity. Recently we showed that the Standard Model (SM) itself contains tiny Lorentz invariance violation (LIV) terms coming from QG. All terms depend on one arbitrary parameter α that sets the scale of QG effects. In this talk we review the LIV for mesons, nucleons, and leptons and apply it to study several effects, including the GZK anomaly.

  16. Mathematical model for logarithmic scaling of velocity fluctuations in wall turbulence.

    Science.gov (United States)

    Mouri, Hideaki

    2015-12-01

    For wall turbulence, moments of velocity fluctuations are known to be logarithmic functions of the height from the wall. This logarithmic scaling is due to the existence of a characteristic velocity and to the nonexistence of any characteristic height in the range of the scaling. By using the mathematics of random variables, we obtain its necessary and sufficient conditions. They are compared with characteristics of a phenomenological model of eddies attached to the wall and also with those of the logarithmic scaling of the mean velocity.

  17. Measurement of velocity deficit at the downstream of a 1:10 axial hydrokinetic turbine model

    Energy Technology Data Exchange (ETDEWEB)

    Gunawan, Budi [ORNL; Neary, Vincent S [ORNL; Hill, Craig [St. Anthony Falls Laboratory, 2 Third Avenue SE, Minneapolis, MN 55414; Chamorro, Leonardo [St. Anthony Falls Laboratory, 2 Third Avenue SE, Minneapolis, MN 55414

    2012-01-01

    Wake recovery constrains the downstream spacing and density of turbines that can be deployed in turbine farms and limits the amount of energy that can be produced at a hydrokinetic energy site. This study investigates the wake recovery downstream of a 1:10 axial flow turbine model using a pulse-to-pulse coherent Acoustic Doppler Profiler (ADP). In addition, turbine inflow and outflow velocities were measured for calculating the thrust on the turbine. The results show that the depth-averaged longitudinal velocity recovers to 97% of the inflow velocity at 35 turbine diameters (D) downstream of the turbine.

  18. Lane-changing behavior and its effect on energy dissipation using full velocity difference model

    Science.gov (United States)

    Wang, Jian; Ding, Jian-Xun; Shi, Qin; Kühne, Reinhart D.

    2016-07-01

    In real urban traffic, roadways are usually multilane with lane-specific velocity limits. Most previous research is derived from single-lane car-following theory, which has been extensively investigated and applied in past years. In this paper, we extend the continuous single-lane car-following model (the full velocity difference model) to simulate lane-changing behavior on an urban roadway consisting of three lanes. To meet incentive and security requirements, a comprehensive lane-changing rule set is constructed, taking safety distance and velocity difference into consideration and setting a lane-specific speed restriction for each lane. We also investigate the effect of lane-changing behavior on the distribution of cars, velocity, headway, the fundamental diagram of traffic, and energy dissipation. Simulation results demonstrate an asymmetric lane-changing “attraction” on a roadway with changeable lane-specific speed limits, which leads to dramatically increased energy dissipation.
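
    For reference, the single-lane full velocity difference (FVD) model that the paper extends prescribes the acceleration a_n = kappa*[V(dx_n) - v_n] + lam*dv_n with an optimal-velocity function V. The sketch below integrates one follower-leader pair; the parameter values are a commonly used calibration, quoted only for illustration.

    ```python
    # Sketch of the single-lane full velocity difference (FVD) car-following model:
    # a_n = kappa*[V(dx_n) - v_n] + lam*dv_n, with an optimal-velocity function V.
    # Parameter values are a commonly used calibration, given for illustration only.
    import numpy as np

    def optimal_velocity(dx, V1=6.75, V2=7.91, C1=0.13, C2=1.57, l_c=5.0):
        return V1 + V2 * np.tanh(C1 * (dx - l_c) - C2)

    def fvd_acceleration(dx, v, dv, kappa=0.41, lam=0.5):
        """dx: headway to leader, v: own speed, dv: leader speed minus own speed."""
        return kappa * (optimal_velocity(dx) - v) + lam * dv

    # one explicit integration loop for a follower behind a leader at constant speed
    dt, v, x, x_lead, v_lead = 0.1, 10.0, 0.0, 30.0, 8.0
    for _ in range(200):
        a = fvd_acceleration(x_lead - x, v, v_lead - v)
        v, x = max(v + a * dt, 0.0), x + v * dt
        x_lead += v_lead * dt
    print(v, x_lead - x)                      # speed relaxes towards the leader's
    ```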

  19. A wave propagation model of blood flow in large vessels using an approximate velocity profile function

    NARCIS (Netherlands)

    Bessems, D.; Rutten, M.C.M.; Vosse, van de F.N.

    2007-01-01

    Lumped-parameter models (zero-dimensional) and wave-propagation models (one-dimensional) for pressure and flow in large vessels, as well as fully three-dimensional fluid–structure interaction models for pressure and velocity, can contribute valuably to answering physiological and patho-physiological

  20. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
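
    A toy illustration of the Gröbner-basis route to an input-output equation (not the paper's algorithm): for a two-compartment model with output y = x2, the state variables and their derivatives are eliminated with a lexicographic order that ranks them ahead of y, u and the parameters.

    ```python
    # Toy illustration (not the paper's algorithm): eliminate the states of a
    # two-compartment model with SymPy's Groebner basis to obtain the
    # input-output equation relating y, its derivatives, u and the parameters.
    import sympy as sp

    x1, x2, dx1, dx2, ddx2 = sp.symbols('x1 x2 dx1 dx2 ddx2')
    y, dy, ddy, u, k1, k2 = sp.symbols('y dy ddy u k1 k2')

    eqs = [
        dx1 + k1 * x1 - u,              # x1' = -k1*x1 + u
        dx2 - k1 * x1 + k2 * x2,        # x2' =  k1*x1 - k2*x2
        x2 - y,                         # output equation and its derivatives
        dx2 - dy,
        ddx2 - ddy,
        ddx2 - k1 * dx1 + k2 * dx2,     # derivative of the x2 equation
    ]

    # lex order with state variables and their derivatives ranked first
    G = sp.groebner(eqs, x1, dx1, x2, dx2, ddx2, ddy, dy, y, u, k1, k2, order='lex')
    eliminated = {x1, dx1, x2, dx2, ddx2}
    io_eqs = [g for g in G.exprs if not (g.free_symbols & eliminated)]
    print(io_eqs)   # expected: ddy + (k1 + k2)*dy + k1*k2*y - k1*u (up to ordering/scaling)
    ```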

  1. Unitary input DEA model to identify beef cattle production systems typologies

    Directory of Open Access Journals (Sweden)

    Eliane Gonçalves Gomes

    2012-08-01

    Full Text Available The cow-calf beef production sector in Brazil has a wide variety of operating systems. This suggests the identification and characterization of homogeneous production regions, with the consequent implementation of actions to achieve their sustainability. In this paper we measured the performance of 21 modal livestock production systems in their cow-calf phase, considering husbandry and production variables. The proposed approach is based on data envelopment analysis (DEA). We used a unitary input DEA model, with apparent input orientation, together with the efficiency measurements generated by the inverted DEA frontier. We identified five typologies of modal production systems, using the isoefficiency layers approach. The results showed that knowledge and process management are the most important factors for improving the efficiency of beef cattle production systems.

  2. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.

  3. Mathematical modeling of groundwater contamination with varying velocity field

    Directory of Open Access Journals (Sweden)

    Das Pintu

    2017-06-01

    Full Text Available In this study, analytical models for predicting groundwater contamination in isotropic and homogeneous porous formations are derived. The impact of dispersion and diffusion coefficients is included in the solution of the advection-dispersion equation (ADE), subjected to transient (time-dependent) boundary conditions at the origin. A retardation factor and zero-order production terms are included in the ADE. Analytical solutions are obtained using the Laplace Integral Transform Technique (LITT) and the concept of linear isotherm. For illustration, analytical solutions for linearly space- and time-dependent hydrodynamic dispersion coefficients along with molecular diffusion coefficients are presented. Analytical solutions are explored for the Peclet number. Numerical solutions are obtained by explicit finite difference methods and are compared with analytical solutions. Numerical results are analysed for different types of geological porous formations i.e., aquifer and aquitard. The accuracy of results is evaluated by the root mean square error (RMSE).
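
    A minimal explicit finite-difference sketch of the 1-D advection-dispersion equation with retardation and zero-order production; constant coefficients and all parameter values are illustrative assumptions, whereas the paper treats space- and time-dependent dispersion and compares against Laplace-transform solutions:

        import numpy as np

        # R * dC/dt = D * d2C/dx2 - v * dC/dx + gamma   (zero-order production gamma)
        L, T = 100.0, 50.0                      # domain length and simulation time (arbitrary units)
        nx, nt = 201, 5000
        dx, dt = L / (nx - 1), T / nt
        D, v, R, gamma = 1.0, 0.5, 2.0, 0.001   # illustrative constant coefficients

        C = np.zeros(nx)
        for n in range(nt):
            c_in = np.exp(-0.05 * n * dt)       # transient (decaying) inlet concentration
            Cn = C.copy()
            C[1:-1] = Cn[1:-1] + (dt / R) * (
                D * (Cn[2:] - 2.0 * Cn[1:-1] + Cn[:-2]) / dx**2
                - v * (Cn[2:] - Cn[:-2]) / (2.0 * dx)
                + gamma)
            C[0], C[-1] = c_in, C[-2]           # Dirichlet inlet, zero-gradient outlet
        print(C[:5])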

  4. VSC Input-Admittance Modeling and Analysis Above the Nyquist Frequency for Passivity-Based Stability Assessment

    DEFF Research Database (Denmark)

    Harnefors, Lennart; Finger, Raphael; Wang, Xiongfei

    2017-01-01

    The interconnection stability of a grid-connected voltage-source converter (VSC) can be assessed via the dissipative properties of its input admittance. In this paper, the modeling of the current control loop is revisited with the aim of improving the accuracy of the input-admittance model above...

  5. 'Fingerprints' of four crop models as affected by soil input data aggregation

    DEFF Research Database (Denmark)

    Angulo, Carlos; Gaiser, Thomas; Rötter, Reimund P

    2014-01-01

    In this study we used four crop models (SIMPLACE, DSSAT-CSM, EPIC and DAISY) differing in the detail of modeling above-ground biomass and yield as well as of modeling soil water dynamics, water uptake and drought effects on plants to simulate winter wheat in two (agro-climatologically and geo… … for all models. Further analysis revealed that the small influence of spatial resolution of soil input data might be related to: (a) the high precipitation amount in the region which partly masked differences in soil characteristics for water holding capacity, (b) the loss of variability in hydraulic soil properties due to the methods applied to calculate water retention properties of the used soil profiles, and (c) the method of soil data aggregation. No characteristic “fingerprint” between sites, years and resolutions could be found for any of the models. Our results support earlier recommendation…

  6. Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters

    DEFF Research Database (Denmark)

    Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.

    2010-01-01

    Understanding space weather is important not only for satellite operations and human exploration of the solar system but also for phenomena here on Earth that may potentially disturb and disrupt electrical signals. Some of the most violent space weather effects are caused by coronal mass ejections (CMEs), but in order to predict their effects, we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time-dependent 3-D MHD model that can simulate the propagation of cone-shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position...

  7. Assessment of input function distortions on kinetic model parameters in simulated dynamic 82Rb PET perfusion studies

    International Nuclear Information System (INIS)

    Meyer, Carsten; Peligrad, Dragos-Nicolae; Weibrecht, Martin

    2007-01-01

    Cardiac 82Rb dynamic PET studies allow quantifying absolute myocardial perfusion by using tracer kinetic modeling. Here, the accurate measurement of the input function, i.e. the tracer concentration in blood plasma, is a major challenge. This measurement is deteriorated by inappropriate temporal sampling, spillover, etc. Such effects may influence the measured input peak value and the measured blood pool clearance. The aim of our study is to evaluate the effect of input function distortions on the myocardial perfusion as estimated by the model. To this end, we simulate noise-free myocardium time activity curves (TACs) with a two-compartment kinetic model. The input function to the model is a generic analytical function. Distortions of this function have been introduced by varying its parameters. Using the distorted input function, the compartment model has been fitted to the simulated myocardium TAC. This analysis has been performed for various sets of model parameters covering a physiologically relevant range. The evaluation shows that ±10% error in the input peak value can easily lead to ±10-25% error in the model parameter K1, which relates to myocardial perfusion. Variations in the input function tail are generally less relevant. We conclude that an accurate estimation especially of the plasma input peak is crucial for a reliable kinetic analysis and blood flow estimation
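
    A hedged sketch of the kind of sensitivity test described above: a one-tissue (two-compartment) TAC is simulated with a generic analytical input function and the model is refit with a 10% error in the input peak. The function forms, parameter values and scipy workflow are assumptions, not the authors' implementation:

        import numpy as np
        from scipy.integrate import cumulative_trapezoid
        from scipy.optimize import curve_fit

        t = np.linspace(0.0, 10.0, 601)                    # minutes

        def input_fn(t, peak=1.0):
            # generic analytical plasma input curve (gamma-variate shape, illustrative)
            return peak * (t / 0.5) ** 2 * np.exp(-t / 0.5)

        def tissue_tac(t, K1, k2, cp):
            # one-tissue model dCt/dt = K1*Cp - k2*Ct, solved as an exponential convolution
            return K1 * np.exp(-k2 * t) * cumulative_trapezoid(np.exp(k2 * t) * cp, t, initial=0.0)

        true_K1, true_k2 = 0.8, 0.4
        tac = tissue_tac(t, true_K1, true_k2, input_fn(t))          # noise-free myocardial TAC

        cp_distorted = input_fn(t, peak=1.1)                        # +10% error in the input peak
        popt, _ = curve_fit(lambda t, K1, k2: tissue_tac(t, K1, k2, cp_distorted),
                            t, tac, p0=[0.5, 0.5])
        print('recovered K1, k2:', popt)                            # K1 is biased by the distortion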

  8. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Full Text Available Accurate knowledge on snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and the performance is analysed. The scaling method is only applied if it is snowing. For rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of absolute snow depth error is reduced up to a factor 3.4 to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted and modelling performance of spatial snow distribution is improved.
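
    A schematic of the precipitation-scaling idea; the normalisation by the domain-mean snow depth, the 1 °C phase threshold and the toy arrays are assumptions for illustration, not Alpine3D code:

        import numpy as np

        def scale_precip(precip, t_air, snow_depth_map, t_thresh=1.0):
            """Scale gridded precipitation by the measured snow-depth pattern where it is
            snowing (air temperature below t_thresh); leave rainfall unchanged."""
            scaling = snow_depth_map / snow_depth_map.mean()
            return np.where(t_air < t_thresh, precip * scaling, precip)

        precip = np.full((3, 3), 2.0)                                                 # mm/h, interpolated AWS input
        t_air = np.array([[-3.0, -2.0, 0.5], [-1.0, 2.0, 3.0], [-4.0, -2.0, -1.0]])   # deg C
        snow_depth = np.array([[1.5, 1.0, 0.5], [2.0, 1.0, 0.5], [2.5, 1.5, 1.0]])    # m, from ADS
        print(scale_precip(precip, t_air, snow_depth))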

  9. Modeling the Indonesian Consumer Price Index Using a Multi-Input Intervention Model

    KAUST Repository

    Novianti, Putri Wikie

    2017-01-24

    There are some events which are expected to affect the CPI's fluctuation, i.e. the 1997/1998 financial crisis, fuel price increases, base year changes, the independence of Timor-Timur (October 1999), and the tsunami disaster in Aceh (December 2004). During the research period there were eight fuel price increases and four base year changes. The objective of this research is to obtain a multi-input intervention model which can describe the magnitude and duration of the effect of each event on the CPI. Most intervention studies to date contain only a single intervention input, either a step or a pulse function. A multi-input intervention model was used for the Indonesian CPI because several events were expected to affect it. The results show that those events did affect the CPI, as did other events such as Eid in January 1999 and events in April 2002, July 2003, December 2005, and September 2008. In general, these events had a positive effect on the CPI, except those of April 2002 and July 2003, which had negative effects.

  10. A math model for high velocity sensoring with a focal plane shuttered camera.

    Science.gov (United States)

    Morgan, P.

    1971-01-01

    A new mathematical model is presented which describes the image produced by a focal plane shutter-equipped camera. The model is based upon the well-known collinearity condition equations and incorporates both the translational and rotational motion of the camera during the exposure interval. The first differentials of the model with respect to exposure interval, delta t, yield the general matrix expressions for image velocities which may be simplified to known cases. The exposure interval, delta t, may be replaced under certain circumstances with a function incorporating blind velocity and image position if desired. The model is tested using simulated Lunar Orbiter data and found to be computationally stable as well as providing excellent results, provided that some external information is available on the velocity parameters.

  11. Developing a Crustal and Upper Mantle Velocity Model for the Brazilian Northeast

    Science.gov (United States)

    Julia, J.; Nascimento, R.

    2013-05-01

    Development of 3D models for the earth's crust and upper mantle is important for accurately predicting travel times for regional phases and for improving seismic event location. The Brazilian Northeast is a tectonically active area within stable South America and displays one of the highest levels of seismicity in Brazil, with earthquake swarms containing events up to mb 5.2. Since 2011, seismic activity has been routinely monitored through the Rede Sismográfica do Nordeste (RSisNE), a permanent network supported by the national oil company PETROBRAS and consisting of 15 broadband stations with an average spacing of ~200 km. Accurate event locations are required to correctly characterize and identify seismogenic areas in the region and assess seismic hazard. Yet, no 3D model of crustal thickness and crustal and upper mantle velocity variation exists. The first step in developing such models is to refine crustal thickness and depths to major seismic velocity boundaries in the crust and improve on seismic velocity estimates for the upper mantle and crustal layers. We present recent results in crustal and uppermost mantle structure in NE Brazil that will contribute to the development of a 3D model of velocity variation. Our approach has consisted of: (i) computing receiver functions to obtain point estimates of crustal thickness and Vp/Vs ratio and (ii) jointly inverting receiver functions and surface-wave dispersion velocities from an independent tomography study to obtain S-velocity profiles at each station. This approach has been used at all the broadband stations of the monitoring network plus 15 temporary, short-period stations that reduced the inter-station spacing to ~100 km. We expect our contributions will provide the basis to produce full 3D velocity models for the Brazilian Northeast and help determine accurate locations for seismic events in the region.

  12. Detection of no-model input-output pairs in closed-loop systems.

    Science.gov (United States)

    Potts, Alain Segundo; Alvarado, Christiam Segundo Morales; Garcia, Claudio

    2017-11-01

    The detection of no-model input-output (IO) pairs is important because it can speed up the multivariable system identification process, since all the pairs with null transfer functions are previously discarded and it can also improve the identified model quality, thus improving the performance of model based controllers. In the available literature, the methods focus just on the open-loop case, since in this case there is not the effect of the controller forcing the main diagonal in the transfer matrix to one and all the other terms to zero. In this paper, a modification of a previous method able to detect no-model IO pairs in open-loop systems is presented, but adapted to perform this duty in closed-loop systems. Tests are performed by using the traditional methods and the proposed one to show its effectiveness. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Modelling Velocity Spectra in the Lower Part of the Planetary Boundary Layer

    DEFF Research Database (Denmark)

    Olesen, H.R.; Larsen, Søren Ejling; Højstrup, Jørgen

    1984-01-01

    … of the planetary boundary layer. Knowledge of the variation with stability of the (reduced) frequency of the spectral maximum is utilized in this modelling. Stable spectra may be normalized so that they adhere to one curve only, irrespective of stability, and unstable w-spectra may also be normalized to fit one curve. The problem of using filtered velocity variances when modelling spectra is discussed. A simplified procedure to provide a first estimate of the filter effect is given. In stable, horizontal velocity spectra, there is often a ‘gap’ at low frequencies. Using dimensional considerations and the spectral model previously derived, an expression for the gap frequency is found…

  14. One-dimensional velocity model of the Middle Kura Depression from local earthquake data of Azerbaijan

    Science.gov (United States)

    Yetirmishli, G. C.; Kazimova, S. E.; Kazimov, I. E.

    2011-09-01

    We present the method for determining the velocity model of the Earth's crust and the parameters of earthquakes in the Middle Kura Depression from the data of the telemetry network in Azerbaijan. Application of this method allowed us to recalculate the main parameters of the earthquake hypocenters, to compute the corrections to the arrival times of P and S waves at the observation stations, and to significantly improve the accuracy in determining the coordinates of the earthquakes. The model was constructed using the VELEST program, which calculates one-dimensional minimum velocity models from the travel times of seismic waves.

  15. Input-output model of regional environmental and economic impacts of nuclear power plants

    International Nuclear Information System (INIS)

    Johnson, M.H.; Bennett, J.T.

    1979-01-01

    The costs of delayed licensing of nuclear power plants call for a more comprehensive method of quantifying the economic and environmental impacts on a region. A traditional input-output (I-O) analysis approach is extended to assess the effects of changes in output, income, employment, pollution, water consumption, and the costs and revenues of local government, disaggregated among 23 industry sectors during the construction and operating phases. Unlike earlier studies, this model uses nonlinear environmental interactions and specifies environmental feedbacks to the economic sector. 20 references
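
    For readers unfamiliar with the mechanics, a minimal Leontief-type calculation shows how direct and indirect impacts are obtained from an input-output table; the 3-sector coefficients and employment multipliers below are placeholders, not the paper's 23-sector data or its nonlinear environmental extensions:

        import numpy as np

        # Illustrative 3-sector technical coefficient matrix A (inter-industry requirements
        # per unit of output) and a final-demand change d, e.g. plant construction spending.
        A = np.array([[0.10, 0.20, 0.05],
                      [0.15, 0.05, 0.10],
                      [0.05, 0.10, 0.20]])
        d = np.array([50.0, 20.0, 30.0])

        x = np.linalg.solve(np.eye(3) - A, d)            # total (direct + indirect) output
        jobs = np.array([0.02, 0.03, 0.01]) * x          # hypothetical employment coefficients
        print(x, jobs)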

  16. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.

  17. Low-level waste shallow land disposal source term model: Data input guides

    International Nuclear Information System (INIS)

    Sullivan, T.M.; Suen, C.J.

    1989-07-01

    This report provides an input guide for the computational models developed to predict the rate of radionuclide release from shallow land disposal of low-level waste. Release of contaminants depends on four processes: water flow, container degradation, waste form leaching, and contaminant transport. The computer code FEMWATER has been selected to predict the movement of water in an unsaturated porous medium. The computer code BLT (Breach, Leach, and Transport), a modification of FEMWASTE, has been selected to predict the processes of container degradation (Breach), contaminant release from the waste form (Leach), and contaminant migration (Transport). In conjunction, these two codes have the capability to account for the effects of disposal geometry, unsaturated water flow, container degradation, waste form leaching, and migration of contaminants released within a single disposal trench. In addition to the input requirements, this report presents the fundamental equations and relationships used to model the four different processes previously discussed. Further, the appendices provide a representative sample of data required by the different models. 14 figs., 27 tabs

  18. Modelling the soil microclimate: does the spatial or temporal resolution of input parameters matter?

    Directory of Open Access Journals (Sweden)

    Anna Carter

    2016-01-01

    Full Text Available The urgency of predicting future impacts of environmental change on vulnerable populations is advancing the development of spatially explicit habitat models. Continental-scale climate and microclimate layers are now widely available. However, most terrestrial organisms exist within microclimate spaces that are very small, relative to the spatial resolution of those layers. We examined the effects of multi-resolution, multi-extent topographic and climate inputs on the accuracy of hourly soil temperature predictions for a small island generated at a very high spatial resolution (<1 m²) using the mechanistic microclimate model in NicheMapR. Achieving an accuracy comparable to lower-resolution, continental-scale microclimate layers (within about 2–3°C of observed values) required the use of daily weather data as well as high resolution topographic layers (elevation, slope, aspect, horizon angles), while inclusion of site-specific soil properties did not markedly improve predictions. Our results suggest that large-extent microclimate layers may not provide accurate estimates of microclimate conditions when the spatial extent of a habitat or other area of interest is similar to or smaller than the spatial resolution of the layers themselves. Thus, effort in sourcing model inputs should be focused on obtaining high resolution terrain data, e.g., via LiDAR or photogrammetry, and local weather information rather than in situ sampling of microclimate characteristics.

  19. Transport coefficient computation based on input/output reduced order models

    Science.gov (United States)

    Hurst, Joshua L.

    The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods to capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD) based methods are investigated. First, simulation methods are identified that are amenable to systems theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions. Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived, and special attention is given to potentials that are evaluated with Periodic Boundary Conditions (PBC). For the Gosling description, LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full order LTI models can be well approximated by reduced order LTI models. For the Lees-Edwards SLLOD-type model, nonlinear ODEs are approximated by a Linear Time Varying (LTV) model about some nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) are used to assess the plausibility of reduced order models of this system description. An immediate application of the derived LTV models is Quasilinearization or Waveform Relaxation. Quasilinearization is a Newton's method applied to the ODE operator
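
    A minimal sketch of one of the reduction steps mentioned above, Proper Orthogonal Decomposition via the SVD of a snapshot matrix; the snapshot data are random placeholders, not molecular dynamics trajectories:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 40))     # 200 states x 40 snapshots (placeholder data)

        U, s, _ = np.linalg.svd(X, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.99)) + 1   # smallest basis capturing 99% of the energy
        Phi = U[:, :r]                               # POD basis
        X_reduced = Phi.T @ X                        # reduced coordinates of the snapshots
        print(r, X_reduced.shape)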

  20. Animal models of surgically manipulated flow velocities to study shear stress-induced atherosclerosis.

    Science.gov (United States)

    Winkel, Leah C; Hoogendoorn, Ayla; Xing, Ruoyu; Wentzel, Jolanda J; Van der Heiden, Kim

    2015-07-01

    Atherosclerosis is a chronic inflammatory disease of the arterial tree that develops at predisposed sites, coinciding with locations that are exposed to low or oscillating shear stress. Manipulating flow velocity, and concomitantly shear stress, has proven adequate to promote endothelial activation and subsequent plaque formation in animals. In this article, we will give an overview of the animal models that have been designed to study the causal relationship between shear stress and atherosclerosis by surgically manipulating blood flow velocity profiles. These surgically manipulated models include arteriovenous fistulas, vascular grafts, arterial ligation, and perivascular devices. We review these models of manipulated blood flow velocity from an engineering and biological perspective, focusing on the shear stress profiles they induce and the vascular pathology that is observed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    Science.gov (United States)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.

  2. Simultaneous inversion for hypocenters and lateral velocity variation: An iterative solution with a layered model

    Energy Technology Data Exchange (ETDEWEB)

    Hawley, B.W.; Zandt, G.; Smith, R.B.

    1981-08-10

    An iterative inversion technique has been developed that uses the direct P and S wave arrival times from local earthquakes to compute simultaneously a three-dimensional velocity structure and relocated hypocenters. Crustal structure is modeled by subdividing flat layers into rectangular blocks. An interpolation function is used to smoothly vary velocities between blocks, allowing ray-trace calculations of travel times in a three-dimensional medium. Tests using synthetic data from known models show that solutions are reasonably independent of block size and spatial distribution but are sensitive to the choice of layer thicknesses. Application of the technique to observed earthquake data from north-central Utah showed the following: (1) lateral velocity variations in the crust as large as 7% occur over 30-km distances, (2) earthquake epicenters computed with the three-dimensional velocity structure were shifted an average of 3.0 km from locations determined assuming homogeneous flat-layered models, and (3) the laterally varying velocity structure correlates with anomalous variations in the local gravity and aeromagnetic fields, suggesting that the new velocity information can be valuable in acquiring a better understanding of crustal structure.

  3. Calculation of pressure gradients from MR velocity data in a laminar flow model

    International Nuclear Information System (INIS)

    Adler, R.S.; Chenevert, T.L.; Fowlkes, J.B.; Pipe, J.G.; Rubin, J.M.

    1990-01-01

    This paper reports on the ability of current imaging modalities to provide velocity-distribution data that offer the possibility of noninvasive pressure-gradient determination from an appropriate rheologic model of flow. A simple laminar flow model is considered at low Reynolds number. The calculated pressure gradient was related to the measured one by (dp/dz)calc = 0.59 + 1.13 × (dp/dz)meas, R² = 0.994, in units of dyne/cm² per cm for the range of flows considered. The authors' results indicate the potential usefulness of noninvasive pressure-gradient determinations from quantitative analysis of imaging-derived velocity data
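
    For fully developed laminar tube flow the link between an imaging-derived mean velocity and the pressure gradient can be illustrated with the Poiseuille relation; the values below are placeholders and the relation is not the paper's regression result:

        # Poiseuille relation for fully developed laminar tube flow: dp/dz = -8 * mu * v_mean / r**2
        mu = 0.04        # dynamic viscosity (poise), placeholder
        r = 0.4          # tube radius (cm), placeholder
        v_mean = 5.0     # mean velocity from the phase-velocity data (cm/s), placeholder
        dpdz = -8.0 * mu * v_mean / r**2     # dyne/cm^2 per cm
        print(dpdz)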

  4. Finite-Source Inversion for the 2004 Parkfield Earthquake using 3D Velocity Model Green's Functions

    Science.gov (United States)

    Kim, A.; Dreger, D.; Larsen, S.

    2008-12-01

    We determine finite fault models of the 2004 Parkfield earthquake using 3D Green's functions. Because of the dense station coverage and detailed 3D velocity structure model in this region, this earthquake provides an excellent opportunity to examine how the 3D velocity structure affects the finite fault inverse solutions. Various studies (e.g. Michaels and Eberhart-Phillips, 1991; Thurber et al., 2006) indicate that there is a pronounced velocity contrast across the San Andreas Fault along the Parkfield segment. Also the fault zone at Parkfield is wide, as evidenced by mapped surface faults and where surface slip and creep occurred in the 1966 and the 2004 Parkfield earthquakes. For high resolution images of the rupture process it is necessary to include the accurate 3D velocity structure for the finite source inversion. Liu and Archuleta (2004) performed finite fault inversions using both 1D and 3D Green's functions for the 1989 Loma Prieta earthquake using the same source parameterization and data but different Green's functions and found that the models were quite different. This indicates that the choice of the velocity model significantly affects the waveform modeling at near-fault stations. In this study, we used the P-wave velocity model developed by Thurber et al (2006) to construct the 3D Green's functions. P-wave speeds are converted to S-wave speeds and density using the empirical relationships of Brocher (2005). Using a finite difference method, E3D (Larsen and Schultz, 1995), we computed the 3D Green's functions numerically by inserting body forces at each station. Using reciprocity, these Green's functions are recombined to represent the ground motion at each station due to the slip on the fault plane. First we modeled the waveforms of small earthquakes to validate the 3D velocity model and the reciprocity of the Green's functions. In the numerical tests we found that the 3D velocity model predicted the individual phases well at frequencies lower than 0

  5. A Water-Withdrawal Input-Output Model of the Indian Economy.

    Science.gov (United States)

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources, and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in agriculture sectors, with the contribution of direct green water being about 1145 BCM, excluding forestry. Apart from 727 BCM of direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM, and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to the direct water used in agriculture nationally, while the scarce groundwater associated with crops is largely contributed by northern states.

  6. International trade inoperability input-output model (IT-IIM): theory and application.

    Science.gov (United States)

    Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y

    2009-01-01

    The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking economic perturbation for each sector as inputs, the IIM provides the degree of economic production impacts on all industry sectors as the outputs for the model. The current version of the IIM does not provide a separate analysis for the international trade component of the inoperability. If an important port of entry (e.g., Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the International Trade-IIM (IT-IIM). The IT-IIM investigates the resulting international trade inoperability for all industry sectors resulting from disruptions to a major port of entry. Similar to traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) the reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., embargo).

  7. Multiregional input-output model for the evaluation of Spanish water flows.

    Science.gov (United States)

    Cazcarro, Ignacio; Duarte, Rosa; Sánchez Chóliz, Julio

    2013-01-01

    We construct a multiregional input-output model for Spain, in order to evaluate the pressures on the water resources, virtual water flows, and water footprints of the regions, and the water impact of trade relationships within Spain and abroad. The study is framed with those interregional input-output models constructed to study water flows and impacts of regions in China, Australia, Mexico, or the UK. To build our database, we reconcile regional IO tables, national and regional accountancy of Spain, trade and water data. Results show an important imbalance between origin of water resources and final destination, with significant water pressures in the South, Mediterranean, and some central regions. The most populated and dynamic regions of Madrid and Barcelona are important drivers of water consumption in Spain. Main virtual water exporters are the South and Central agrarian regions: Andalusia, Castile-La Mancha, Castile-Leon, Aragon, and Extremadura, while the main virtual water importers are the industrialized regions of Madrid, Basque country, and the Mediterranean coast. The paper shows the different location of direct and indirect consumers of water in Spain and how the economic trade and consumption pattern of certain areas has significant impacts on the availability of water resources in other different and often drier regions.

  8. Should tsunami models use a nonzero initial condition for horizontal velocity?

    Science.gov (United States)

    Nava, G.; Lotto, G. C.; Dunham, E. M.

    2017-12-01

    Tsunami propagation in the open ocean is most commonly modeled by solving the shallow water wave equations. These equations require two initial conditions: one on sea surface height and another on depth-averaged horizontal particle velocity or, equivalently, horizontal momentum. While most modelers assume that initial velocity is zero, Y.T. Song and collaborators have argued for nonzero initial velocity, claiming that horizontal displacement of a sloping seafloor imparts significant horizontal momentum to the ocean. They show examples in which this effect increases the resulting tsunami height by a factor of two or more relative to models in which initial velocity is zero. We test this claim with a "full-physics" integrated dynamic rupture and tsunami model that couples the elastic response of the Earth to the linearized acoustic-gravitational response of a compressible ocean with gravity; the model self-consistently accounts for seismic waves in the solid Earth, acoustic waves in the ocean, and tsunamis (with dispersion at short wavelengths). We run several full-physics simulations of subduction zone megathrust ruptures and tsunamis in geometries with a sloping seafloor, using both idealized structures and a more realistic Tohoku structure. Substantial horizontal momentum is imparted to the ocean, but almost all momentum is carried away in the form of ocean acoustic waves. We compare tsunami propagation in each full-physics simulation to that predicted by an equivalent shallow water wave simulation with varying assumptions regarding initial conditions. We find that the initial horizontal velocity conditions proposed by Song and collaborators consistently overestimate the tsunami amplitude and predict an inconsistent wave profile. Finally, we determine tsunami initial conditions that are rigorously consistent with our full-physics simulations by isolating the tsunami waves (from ocean acoustic and seismic waves) at some final time, and backpropagating the tsunami
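
    The question can be illustrated with a 1-D linear shallow-water sketch that propagates the same initial sea-surface height with and without an initial horizontal velocity (forward-backward time stepping); all parameter values, including the nonzero initial velocity, are illustrative assumptions rather than Song's initialization or the authors' full-physics model:

        import numpy as np

        g, H = 9.81, 4000.0                           # gravity (m/s^2), ocean depth (m)
        nx, dx = 2000, 1000.0                         # 2000 km domain, 1 km spacing
        c = np.sqrt(g * H)
        dt = 0.5 * dx / c                             # Courant number 0.5
        x = np.arange(nx) * dx
        eta0 = np.exp(-((x - x.mean()) / 50e3) ** 2)  # initial sea-surface height (m)

        def run(eta, u, nsteps=500):
            """Forward-backward time stepping of the linear 1-D shallow-water equations."""
            eta, u = eta.copy(), u.copy()
            for _ in range(nsteps):
                u[1:-1] -= dt * g * (eta[2:] - eta[:-2]) / (2 * dx)
                eta[1:-1] -= dt * H * (u[2:] - u[:-2]) / (2 * dx)
            return eta

        eta_zero_u = run(eta0, np.zeros(nx))          # zero initial velocity
        eta_with_u = run(eta0, 0.01 * eta0)           # hypothetical nonzero initial velocity (m/s)
        print(np.abs(eta_zero_u).max(), np.abs(eta_with_u).max())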

  9. Model analysis of riparian buffer effectiveness for reducing nutrient inputs to streams in agricultural landscapes

    Science.gov (United States)

    McKane, R. B.; M, S.; F, P.; Kwiatkowski, B. L.; Rastetter, E. B.

    2006-12-01

    Federal and state agencies responsible for protecting water quality rely mainly on statistically-based methods to assess and manage risks to the nation's streams, lakes and estuaries. Although statistical approaches provide valuable information on current trends in water quality, process-based simulation models are essential for understanding and forecasting how changes in human activities across complex landscapes impact the transport of nutrients and contaminants to surface waters. To address this need, we developed a broadly applicable, process-based watershed simulator that links a spatially-explicit hydrologic model and a terrestrial biogeochemistry model (MEL). See Stieglitz et al. and Pan et al., this meeting, for details on the design and verification of this simulator. Here we apply the watershed simulator to a generalized agricultural setting to demonstrate its potential for informing policy and management decisions concerning water quality. This demonstration specifically explores the effectiveness of riparian buffers for reducing the transport of nitrogenous fertilizers from agricultural fields to streams. The interaction of hydrologic and biogeochemical processes represented in our simulator allows several important questions to be addressed. (1) For a range of upland fertilization rates, to what extent do riparian buffers reduce nitrogen inputs to streams? (2) How does buffer effectiveness change over time as the plant-soil system approaches N-saturation? (3) How can buffers be managed to increase their effectiveness, e.g., through periodic harvest and replanting? The model results illustrate that, while the answers to these questions depend to some extent on site factors (climatic regime, soil properties and vegetation type), in all cases riparian buffers have a limited capacity to reduce nitrogen inputs to streams where fertilization rates approach those typically used for intensive agriculture (e.g., 200 kg N per ha per year for corn in the U

  10. Modeling continuous seismic velocity changes due to ground shaking in Chile

    Science.gov (United States)

    Gassenmeier, Martina; Richter, Tom; Sens-Schönfelder, Christoph; Korn, Michael; Tilmann, Frederik

    2015-04-01

    In order to investigate temporal seismic velocity changes due to earthquake related processes and environmental forcing, we analyze 8 years of ambient seismic noise recorded by the Integrated Plate Boundary Observatory Chile (IPOC) network in northern Chile between 18° and 25° S. The Mw 7.7 Tocopilla earthquake in 2007 and the Mw 8.1 Iquique earthquake in 2014 as well as numerous smaller events occurred in this area. By autocorrelation of the ambient seismic noise field, approximations of the Green's functions are retrieved. The recovered function represents backscattered or multiply scattered energy from the immediate neighborhood of the station. To detect relative changes of the seismic velocities we apply the stretching method, which compares individual autocorrelation functions to stretched or compressed versions of a long term averaged reference autocorrelation function. We use time windows in the coda of the autocorrelations, that contain scattered waves which are highly sensitive to minute changes in the velocity. At station PATCX we observe seasonal changes in seismic velocity as well as temporary velocity reductions in the frequency range of 4-6 Hz. The seasonal changes can be attributed to thermal stress changes in the subsurface related to variations of the atmospheric temperature. This effect can be modeled well by a sine curve and is subtracted for further analysis of short term variations. Temporary velocity reductions occur at the time of ground shaking usually caused by earthquakes and are followed by a recovery. We present an empirical model that describes the seismic velocity variations based on continuous observations of the local ground acceleration. Our hypothesis is that not only the shaking of earthquakes provokes velocity drops, but any small vibrations continuously induce minor velocity variations that are immediately compensated by healing in the steady state. We show that the shaking effect is accumulated over time and best described by

  11. On Input Vector Representation for the SVR model of Reactor Core Loading Pattern Critical Parameters

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2008-01-01

    Determination and optimization of reactor core loading pattern is an important factor in nuclear power plant operation. The goal is to minimize the amount of enriched uranium (fresh fuel) and burnable absorbers placed in the core, while maintaining nuclear power plant operational and safety characteristics. The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an optimization algorithm, and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. Recently, we proposed a new method for fast loading pattern evaluation based on a general robust regression model relying on state-of-the-art research in the field of machine learning. We employed the Support Vector Regression (SVR) technique. SVR is a supervised learning method in which model parameters are automatically determined by solving a quadratic optimization problem. The preliminary tests revealed good potential for applying the SVR method to fast and accurate reactor core loading pattern evaluation. However, some aspects of model development are still unresolved. The main objective of the work reported in this paper was to conduct additional tests and analyses required for full clarification of the SVR applicability for loading pattern evaluation. We focused our attention on the parameters defining the input vector, primarily its structure and complexity, and the parameters defining kernel functions. All the tests were conducted on the NPP Krsko reactor core, using the MCRAC code for the calculation of reactor core loading pattern critical parameters. The tested input vector structures did not influence the accuracy of the models, suggesting that the initially tested input vector, consisting of the number of IFBAs and the k-inf at the beginning of the cycle, is adequate. The influence of kernel function specific parameters (σ for RBF kernel
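
    A hedged scikit-learn sketch of fitting an RBF-kernel SVR to an input vector of (number of IFBAs, k-inf at beginning of cycle); the training data and hyperparameters are synthetic placeholders, not MCRAC results or the authors' settings:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        n = 200
        X = np.column_stack([rng.integers(0, 128, n),      # number of IFBAs
                             rng.uniform(1.0, 1.3, n)])    # k-inf at beginning of cycle
        y = 0.9 + 0.001 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.005, n)  # synthetic target

        model = make_pipeline(StandardScaler(),
                              SVR(kernel='rbf', C=10.0, epsilon=0.001, gamma='scale'))
        model.fit(X, y)
        print(model.predict([[64, 1.15]]))   # predicted critical parameter for a new pattern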

  12. INPUT DATA OF BURNING WOOD FOR CFD MODELLING USING SMALL-SCALE EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Petr Hejtmánek

    2017-12-01

    Full Text Available The paper presents an option for acquiring simplified input data for the modelling of burning wood in CFD programmes. The option lies in combining data from small-scale and molecular-scale experiments in order to describe the material with a single-reaction material property. Such a virtual material would spread fire and develop the fire according to the surrounding environment, and it could be extinguished, without using a complex molecular reaction description. A series of experiments including elemental analysis, thermogravimetric analysis, differential thermal analysis, and combustion analysis was performed. Then an FDS model of burning pine wood in a cone calorimeter was built, in which those values were used. The model was validated against the HRR (Heat Release Rate) from the real cone calorimeter experiment. The results show that for the purpose of CFD modelling the effective heat of combustion, which is one of the basic material properties for fire modelling, affecting the total intensity of burning, should be used. Using the net heat of combustion in the model leads to higher values of HRR in comparison to the real experiment data. Considering all the results shown in this paper, it is possible to simulate the burning of wood using the extrapolated data obtained in small-scale experiments.

  13. Agradient velocity, vortical motion and gravity waves in a rotating shallow-water model

    Science.gov (United States)

    Sutyrin Georgi, G.

    2004-07-01

    A new approach to modelling slow vortical motion and fast inertia-gravity waves is suggested within the rotating shallow-water primitive equations with arbitrary topography. The velocity is exactly expressed as a sum of the gradient wind, described by the Bernoulli function, B, and the remaining agradient part, proportional to the velocity tendency. Then the equation for inverse potential vorticity, Q, as well as the momentum equations for the agradient velocity, include the same source of intrinsic flow evolution expressed as a single term J(B, Q), where J is the Jacobian operator (for any steady state, J(B, Q) = 0). Two components of the agradient velocity are responsible for the fast inertia-gravity wave propagation, similar to the traditionally used divergence and ageostrophic vorticity. This approach allows for the construction of balance relations for vortical dynamics and potential vorticity inversion schemes even for moderate Rossby and Froude numbers, assuming the characteristic value of |J(B, Q)| to be small. The components of the agradient velocity are used as the fast variables slaved to potential vorticity, which allows for diagnostic estimates of the velocity tendency, direct potential vorticity inversion accurate to second order in this small parameter, and the corresponding potential vorticity-conserving agradient velocity balance model (AVBM). The ultimate limitations of constructing the balance are revealed in the form of an ellipticity condition for the balanced tendency of the Bernoulli function, which incorporates both known criteria of formal stability: the gradient wind, modified by the characteristic vortical Rossby wave phase speed, should be subcritical. The accuracy of the AVBM is illustrated by considering the linear normal modes and coastal Kelvin waves in the f-plane channel with topography.

  14. A California statewide three-dimensional seismic velocity model from both absolute and differential times

    Science.gov (United States)

    Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.

    2010-01-01

    We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP/VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE direction and 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.

  15. Targeting the right input data to improve crop modeling at global level

    Science.gov (United States)

    Adam, M.; Robertson, R.; Gbegbelegbe, S.; Jones, J. W.; Boote, K. J.; Asseng, S.

    2012-12-01

    Designed for location-specific simulations, crop models raise important questions when used at a global level. Crop models are originally premised on small unit areas where environmental conditions and management practices are considered homogeneous. Specific information describing soils, climate, management, and crop characteristics is used in the calibration process. However, when scaling up for global application, we rely on information derived from geographical information systems and weather generators. To run crop models at broad scale, we use a modeling platform that assumes a uniformly generated grid cell as a unit area. Specific weather, specific soil, and specific management practices for each crop are represented for each of the grid cells. Studies on the impacts of the uncertainties of weather information and climate change on crop yield at a global level have been carried out (Osborne et al, 2007, Nelson et al., 2010, van Bussel et al, 2011). Detailed information on soils and management practices at the global level is very scarce but recognized to be of critical importance (Reidsma et al., 2009). Few attempts to assess the impact of their uncertainties on cropping system performance can be found. The objectives of this study are (i) to determine the sensitivities of a crop model to soil and management practices, the inputs most relevant to low-input rainfed cropping systems, and (ii) to define hotspots of sensitivity according to the input data. We ran DSSAT v4.5 globally (CERES-CROPSIM) to simulate wheat yields at 45 arc-minute resolution. Cultivar parameters were calibrated and validated for different mega-environments (results not shown). The model was run for nitrogen-limited production systems. This setting was chosen as the most representative to simulate actual yield (especially for low-input rainfed agricultural systems) and assumes crop growth to be free of any pest and disease damage. We conducted a sensitivity analysis on contrasting management

  16. A model of the instantaneous pressure-velocity relationships of the neonatal cerebral circulation.

    Science.gov (United States)

    Panerai, R B; Coughtrey, H; Rennie, J M; Evans, D H

    1993-11-01

    The instantaneous relationship between arterial blood pressure (BP) and cerebral blood flow velocity (CBFV), measured with Doppler ultrasound in the anterior cerebral artery, is represented by a vascular waterfall model comprising vascular resistance, compliance, and critical closing pressure. One-minute recordings obtained from 61 low-birth-weight newborns were fitted to the model using a least-squares procedure with correction for the time delay between the BP and CBFV signals. A sensitivity analysis was performed to study the effects of low-pass filtering (LPF), cutoff frequency, and noise on the estimated parameters of the model. Results indicate excellent fitting of the model (F-test); velocity waveforms reconstructed from the fitted model parameters have a mean correlation coefficient of 0.94 with the measured flow velocity tracing (N = 232 epochs). The model developed can be useful for interpreting clinical findings and as a framework for research into cerebral autoregulation.
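
    One possible linearisation of a resistance/compliance/critical-closing-pressure model can be fitted by ordinary least squares, as sketched below; the model form, the synthetic signals and all parameter values are assumptions for illustration, not the authors' fitting procedure:

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 50.0                                   # Hz, sampling rate of the 1-min recordings
        t = np.arange(0, 60, 1 / fs)
        bp = 45 + 10 * np.sin(2 * np.pi * 2.0 * t)  # mmHg, synthetic arterial pressure
        dbp = np.gradient(bp, 1 / fs)
        cbfv = (bp - 20.0) / 2.5 + 0.05 * dbp + rng.normal(0, 0.5, t.size)  # cm/s, synthetic CBFV

        # CBFV ~ (BP - CrCP)/R + Ca * dBP/dt  =>  linear in [BP, dBP/dt, 1]
        A = np.column_stack([bp, dbp, np.ones_like(bp)])
        coef, *_ = np.linalg.lstsq(A, cbfv, rcond=None)
        R = 1.0 / coef[0]                 # resistance (mmHg per cm/s)
        Ca = coef[1]                      # compliance-like coefficient
        CrCP = -coef[2] * R               # critical closing pressure (mmHg)
        print(R, Ca, CrCP)                # expect roughly 2.5, 0.05, 20 for the synthetic data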

  17. Loss of GABAergic inputs in APP/PS1 mouse model of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Tutu Oyelami

    2014-04-01

    Full Text Available Alzheimer's disease (AD) is characterized by symptoms which include seizures, sleep disruption, loss of memory, as well as anxiety in patients. Of particular importance is the possibility of preventing the progressive loss of neuronal projections in the disease. Transgenic mice overexpressing EOFAD mutant PS1 (L166P) and mutant APP (APP KM670/671NL Swedish) (APP/PS1) develop a very early and robust amyloid pathology and display synaptic plasticity impairments and cognitive dysfunction. Here we investigated GABAergic neurotransmission, using multi-electrode array (MEA) technology and pharmacological manipulation to quantify the effect of GABA blockers on field excitatory postsynaptic potentials (fEPSPs), and immunostaining of GABAergic neurons. Using MEA technology we confirm impaired LTP induction by high-frequency stimulation in the APP/PS1 hippocampal CA1 region, which was associated with a reduced alteration of the paired-pulse ratio after LTP induction. Synaptic dysfunction was also observed under manipulation of the external calcium concentration and in the input-output curve. Electrophysiological recordings from brain slices of the hippocampal CA1 area, in the presence of cocktails of GABAergic receptor blockers, further demonstrated a significant reduction in the GABAergic inputs in APP/PS1 mice. Moreover, immunostaining of GAD65, a specific marker for GABAergic neurons, revealed a reduction of the GABAergic inputs in the CA1 area of the hippocampus. These results might be linked to the increased seizure sensitivity, premature death and cognitive dysfunction in this animal model of AD. Further in-depth analysis of GABAergic dysfunction in APP/PS1 mice is required and may open new perspectives for AD therapy by restoring GABAergic function.

  18. New Models for Velocity/Pressure-Gradient Correlations in Turbulent Boundary Layers

    Science.gov (United States)

    Poroseva, Svetlana; Murman, Scott

    2014-11-01

    To improve the performance of Reynolds-Averaged Navier-Stokes (RANS) turbulence models, one has to improve the accuracy of models for three physical processes: turbulent diffusion, interaction of turbulent pressure and velocity fluctuation fields, and dissipative processes. The accuracy of modeling the turbulent diffusion depends on the order of a statistical closure chosen as a basis for a RANS model. When the Gram-Charlier series expansions for the velocity correlations are used to close the set of RANS equations, no assumption on Gaussian turbulence is invoked and no unknown model coefficients are introduced into the modeled equations. In such a way, this closure procedure reduces the modeling uncertainty of fourth-order RANS (FORANS) closures. Experimental and direct numerical simulation data confirmed the validity of using the Gram-Charlier series expansions in various flows including boundary layers. We will address modeling the velocity/pressure-gradient correlations. New linear models will be introduced for the second- and higher-order correlations applicable to two-dimensional incompressible wall-bounded flows. Results of models' validation with DNS data in a channel flow and in a zero-pressure gradient boundary layer over a flat plate will be demonstrated. A part of the material is based upon work supported by NASA under award NNX12AJ61A.

  19. Velocity Model Analysis Based on Integrated Well and Seismic Data of East Java Basin

    Science.gov (United States)

    Mubin, Fathul; Widya, Aviandy; Eka Nurcahya, Budi; Nurul Mahmudah, Erma; Purwaman, Indro; Radityo, Aryo; Shirly, Agung; Nurwani, Citra

    2018-03-01

    Time-to-depth conversion is an important process in seismic interpretation to identify hydrocarbon prospectivity. The main objectives of this research are to minimize the risk of error in geometry and in time-to-depth conversion. Since it uses a large amount of data over a large research area, this research can be classified as regional in scale. The research focused on the time interpretation of three horizons: Top Kujung I, Top Ngimbang and Basement, located in the offshore and onshore areas of the East Java Basin. These three horizons were selected because they are assumed to be equivalent to the rock formations that have always been the main objectives of oil and gas exploration in the East Java Basin. As additional value, there has been no previous work on regional-scale velocity modeling using geological parameters in the East Java Basin. Lithology and interval thickness were identified as geological factors that affected the velocity distribution in the East Java Basin. Therefore, a three-layer geological model was generated, defined by lithology: carbonate (layer 1: Top Kujung I), shale (layer 2: Top Ngimbang) and Basement. A statistical method using the three horizons is able to predict the velocity distribution from sparse well data at a regional scale. The average velocity range for Top Kujung I is 400 m/s - 6000 m/s, Top Ngimbang is 500 m/s - 8200 m/s and Basement is 600 m/s - 8000 m/s. Some velocity anomalies are found in the Madura sub-basin area, caused by geological factors identified as thick shale deposits and high shale density values. The results of the velocity and depth modeling analysis can be used to define the volume range deterministically and to build detailed geological models for prospect generation based on geological concepts.

  20. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS

    International Nuclear Information System (INIS)

    Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher

    2009-01-01

    We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
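
    The rotation curve implied by an NFW potential, against which the mock DensePak fields are compared, follows from the cumulative NFW mass profile. The short sketch below (illustrative parameter values, not fits from the paper) computes that circular velocity:

        # Sketch: NFW circular velocity curve from the enclosed-mass profile.
        import numpy as np

        G = 4.30091e-6   # gravitational constant in kpc (km/s)^2 / Msun

        def v_circ_nfw(r_kpc, rho_s, r_s):
            """Circular velocity (km/s) for scale density rho_s (Msun/kpc^3) and scale radius r_s (kpc)."""
            x = r_kpc / r_s
            m_enc = 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))
            return np.sqrt(G * m_enc / r_kpc)

        r = np.linspace(0.5, 30.0, 60)                 # radii in kpc
        v = v_circ_nfw(r, rho_s=1.0e7, r_s=10.0)       # rises faster at small r than observed LSB curves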

  1. Milgrom Relation Models for Spiral Galaxies from Two-Dimensional Velocity Maps

    OpenAIRE

    Barnes, Eric I.; Kosowsky, Arthur; Sellwood, Jerry A.

    2007-01-01

    Using two-dimensional velocity maps and I-band photometry, we have created mass models of 40 spiral galaxies using the Milgrom relation (the basis of modified Newtonian dynamics, or MOND) to complement previous work. A Bayesian technique is employed to compare several different dark matter halo models to Milgrom and Newtonian models. Pseudo-isothermal dark matter halos provide the best statistical fits to the data in a majority of cases, while the Milgrom relation generally provides good fits...

  2. Microthrix parvicella abundance associates with activated sludge settling velocity and rheology - Quantifying and modelling filamentous bulking.

    Science.gov (United States)

    Wágner, Dorottya S; Ramin, Elham; Szabo, Peter; Dechesne, Arnaud; Plósz, Benedek Gy

    2015-07-01

    The objective of this work is to identify relevant settling velocity and rheology model parameters and to assess the underlying filamentous microbial community characteristics that can influence the solids mixing and transport in secondary settling tanks. Parameter values for hindered, transient and compression settling velocity functions were estimated by carrying out biweekly batch settling tests using a novel column setup through a four-month long measurement campaign. To estimate viscosity model parameters, rheological experiments were carried out on the same sludge sample using a rotational viscometer. Quantitative fluorescence in-situ hybridisation (qFISH) analysis, targeting Microthrix parvicella and phylum Chloroflexi, was used. This study finds that M. parvicella - predominantly residing inside the microbial flocs in our samples - can significantly influence secondary settling through altering the hindered settling velocity and yield stress parameter. Strikingly, this is not the case for Chloroflexi, occurring in more than double the abundance of M. parvicella, and forming filaments primarily protruding from the flocs. The transient and compression settling parameters show a comparably high variability, and no significant association with filamentous abundance. A two-dimensional, axi-symmetrical computational fluid dynamics (CFD) model was used to assess calibration scenarios to model filamentous bulking. Our results suggest that model predictions can significantly benefit from explicitly accounting for filamentous bulking by calibrating the hindered settling velocity function. Furthermore, accounting for the transient and compression settling velocity in the computational domain is crucial to improve model accuracy when modelling filamentous bulking. However, the case-specific calibration of transient and compression settling parameters as well as yield stress is not necessary, and an average parameter set - obtained under bulking and good settling
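
    For readers unfamiliar with the settling functions being calibrated, the sketch below shows a Vesilind-type exponential hindered-settling velocity, the kind of function whose parameters shift when M. parvicella abundance is high. The parameter values are hypothetical, not those estimated in the study:

        # Sketch: Vesilind-type hindered settling velocity as a function of sludge concentration.
        import numpy as np

        def hindered_settling_velocity(X, v0=7.0, r_h=0.45):
            """Settling velocity (m/h) for sludge concentration X (kg TSS/m3)."""
            return v0 * np.exp(-r_h * X)

        X = np.linspace(0.5, 8.0, 16)                              # kg TSS / m3
        v_good = hindered_settling_velocity(X)                     # well-settling sludge
        v_bulking = hindered_settling_velocity(X, v0=4.0, r_h=0.7) # filamentous bulking: slower settling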

  3. Effects of equilibrium point displacement in limit cycle oscillation amplitude, critical frequency and prediction of critical input angular velocity in minimal brake system

    Science.gov (United States)

    Ganji, Hamed Faghanpour; Ganji, Davood Domiri

    2017-04-01

    In the present paper, the brake squeal phenomenon, a noise source in automobiles, was studied. In most cases, the modeling work is carried out assuming that deformations are small; thus, the equilibrium point is set to zero and linearization is performed at this point. However, under certain circumstances the equilibrium point is not zero; therefore, large errors in the prediction of brake squeal may occur. In this work, large motion domains, for which the choice of linearization point is important, were investigated. The nonlinear equations of motion were considered, and the behavior of the system for the COF model was analyzed by studying the amplitude and frequency of the limit cycle oscillation.

  4. An Approach for Generating Precipitation Input for Worst-Case Flood Modelling

    Science.gov (United States)

    Felder, Guido; Weingartner, Rolf

    2015-04-01

    There is a lack of suitable methods for creating precipitation scenarios that can be used to realistically estimate peak discharges with very low probabilities. On the one hand, existing methods are methodically questionable when it comes to physical system boundaries. On the other hand, the spatio-temporal representativeness of precipitation patterns as system input is limited. In response, this study proposes a method of deriving representative spatio-temporal precipitation patterns and presents a step towards making methodically correct estimations of infrequent floods by using a worst-case approach. A Monte-Carlo rainfall-runoff model allows for the testing of a wide range of different spatio-temporal distributions of an extreme precipitation event and therefore for the generation of a hydrograph for each of these distributions. Out of these numerous hydrographs and their corresponding peak discharges, the worst-case catchment reactions on the system input can be derived. The spatio-temporal distributions leading to the highest peak discharges are identified and can eventually be used for further investigations.
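
    The core of the approach can be mimicked with a toy Monte-Carlo experiment: distribute a fixed precipitation depth over many random temporal patterns, route each through a rainfall-runoff model, and keep the pattern that maximises peak discharge. The sketch below does this with a single linear reservoir as a deliberately simplified stand-in for the study's hydrological model:

        # Sketch: Monte-Carlo search for the worst-case temporal distribution of a fixed
        # precipitation depth, routed through a toy linear-reservoir model.
        import numpy as np

        def linear_reservoir(p, k=6.0, dt=1.0):
            """Discharge (mm/h) from hourly precipitation p (mm/h) for storage constant k (h)."""
            q, s = np.zeros_like(p), 0.0
            for i, p_i in enumerate(p):
                s += p_i * dt
                q[i] = s / k
                s -= q[i] * dt
            return q

        rng = np.random.default_rng(42)
        total_depth, n_steps, n_runs = 120.0, 48, 5000          # 120 mm over 48 h, 5000 realisations
        patterns = rng.dirichlet(np.full(n_steps, 0.3), size=n_runs) * total_depth
        peaks = np.array([linear_reservoir(p).max() for p in patterns])
        worst_hyetograph = patterns[peaks.argmax()]             # input yielding the highest peak discharge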

  5. A First Layered Crustal Velocity Model for the Western Solomon Islands: Inversion of Measured Group Velocity of Surface Waves using Ambient Noise Cross-Correlation

    Science.gov (United States)

    Ku, C. S.; Kuo, Y. T.; Chao, W. A.; You, S. H.; Huang, B. S.; Chen, Y. G.; Taylor, F. W.; Yih-Min, W.

    2017-12-01

    Two earthquakes, MW 8.1 in 2007 and MW 7.1 in 2010, hit the Western Province of Solomon Islands and caused extensive damage, and motivated us to set up the first seismic network in this area. During the first phase, eight broadband seismic stations (BBS) were installed around the rupture zone of the 2007 earthquake. With one year of seismic records, we cross-correlated the vertical component of ambient noise recorded at our BBS and calculated Rayleigh-wave group velocity dispersion curves on inter-station paths. A genetic algorithm was applied to invert for a one-dimensional crustal velocity model by fitting the averaged dispersion curves. The one-dimensional crustal velocity model consists of two layers over a half-space, representing the upper crust, lower crust, and uppermost mantle, respectively. The resulting thicknesses of the upper and lower crust are 6.4 and 14.2 km, respectively. Shear-wave velocities (VS) of the upper crust, lower crust, and uppermost mantle are 2.53, 3.57 and 4.23 km/s, with VP/VS ratios of 1.737, 1.742 and 1.759, respectively. This first layered crustal velocity model can be used as a preliminary reference for further studies of seismic sources such as earthquake activity and tectonic tremor.
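
    The cross-correlation step can be illustrated with a few lines of Python: correlating synthetic noise recorded at two stations recovers the inter-station travel time, from which one group-velocity point is read. Real processing adds preprocessing, spectral whitening, long-term stacking and frequency-time analysis; the numbers below are synthetic assumptions:

        # Sketch: ambient-noise cross-correlation between two stations and a group-velocity estimate.
        import numpy as np
        from scipy.signal import correlate, hilbert

        fs = 20.0                        # sampling rate (Hz)
        distance_km = 60.0               # inter-station distance
        rng = np.random.default_rng(1)
        sta_a = rng.standard_normal(int(600 * fs))
        sta_b = np.roll(sta_a, int(20 * fs)) + 0.5 * rng.standard_normal(sta_a.size)  # ~20 s delayed arrival

        cc = correlate(sta_b, sta_a, mode="full")
        lags = np.arange(-sta_a.size + 1, sta_a.size) / fs
        t_group = lags[np.abs(hilbert(cc)).argmax()]     # envelope peak near +20 s
        group_velocity = distance_km / t_group           # ~3 km/s, one point on a dispersion curve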

  6. An investigation of FLUENT's fan model including the effect of swirl velocity

    International Nuclear Information System (INIS)

    El Saheli, A.; Barron, R.M.

    2002-01-01

    The purpose of this paper is to investigate and discuss the reliability of simplified models for the computational fluid dynamics (CFD) simulation of air flow through automotive engine cooling fans. One of the most widely used simplified fan models in industry is a variant of the actuator disk model which is available in most commercial CFD software, such as FLUENT. In this model, the fan is replaced by an infinitely thin surface on which pressure rise across the fan is specified as a polynomial function of normal velocity or flow rate. The advantages of this model are that it is simple, it accurately predicts the pressure rise through the fan and the axial velocity, and it is robust
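
    The lumped fan model described above amounts to a pressure jump specified as a polynomial of the face-normal velocity across an infinitely thin surface. A hedged sketch of that relationship, with made-up coefficients that would normally come from a measured fan performance curve, is:

        # Sketch: actuator-disk-style fan model -- pressure rise as a polynomial of normal velocity.
        import numpy as np

        def fan_pressure_rise(v_normal, coeffs=(220.0, -6.0, -1.2)):
            """Pressure rise across the fan surface (Pa) for face-normal velocity (m/s)."""
            a0, a1, a2 = coeffs
            return a0 + a1 * v_normal + a2 * v_normal**2

        v = np.linspace(0.0, 12.0, 13)
        dp = fan_pressure_rise(v)   # applied as a jump condition / momentum source in the CFD solver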

  7. Prediction of Compressional Wave Velocity Using Regression and Neural Network Modeling and Estimation of Stress Orientation in Bokaro Coalfield, India

    Science.gov (United States)

    Paul, Suman; Ali, Muhammad; Chatterjee, Rima

    2018-01-01

    The compressional wave velocity (Vp) of coal and non-coal lithologies is predicted from five wells in the Bokaro coalfield (CF), India. Shear sonic travel time logs are not recorded for all wells in the study area; shear wave velocity (Vs) is available only for two wells, one from east and one from west Bokaro CF. The major lithologies of this CF are dominated by coal and shaly coal of the Barakar formation. This paper focuses on (a) the relationship between Vp and Vs, (b) the prediction of Vp using regression and neural network modeling and (c) the estimation of maximum horizontal stress from image logs. Coal is characterized by low acoustic impedance (AI) compared to the overlying and underlying strata. The cross-plot between AI and Vp/Vs is able to identify coal, shaly coal, shale and sandstone in wells in the Bokaro CF. The relationship between Vp and Vs is obtained with excellent goodness of fit (R2) ranging from 0.90 to 0.93. Linear multiple regression and multi-layered feed-forward neural network (MLFN) models are developed for the prediction of Vp from two wells using four input log parameters: gamma ray, resistivity, bulk density and neutron porosity. The regression-predicted Vp shows poor to good fit with the observed velocity (R2 from 0.28 to 0.79). The MLFN-predicted Vp shows satisfactory to good fit, with R2 values varying from 0.62 to 0.92. The maximum horizontal stress orientation at a well in west Bokaro CF is studied from a Formation Micro-Imager (FMI) log. Breakouts and drilling-induced fractures (DIFs) are identified from the FMI log. A breakout length of 4.5 m is oriented towards N60°W, whereas the orientation of DIFs over a cumulative length of 26.5 m varies from N15°E to N35°E. The mean maximum horizontal stress in this CF is oriented towards N28°E.
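
    The regression-versus-network comparison above can be prototyped in a few lines with scikit-learn; the sketch below uses synthetic placeholder logs for the four inputs (gamma ray, resistivity, bulk density, neutron porosity) rather than the Bokaro well data:

        # Sketch: predicting Vp from four wireline-log inputs with a linear model and a small
        # feed-forward network (MLFN analogue). Data are synthetic placeholders.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 500
        X = np.column_stack([
            rng.uniform(20, 150, n),     # gamma ray (API)
            rng.uniform(1, 200, n),      # resistivity (ohm.m)
            rng.uniform(1.2, 2.7, n),    # bulk density (g/cc)
            rng.uniform(0.05, 0.6, n),   # neutron porosity (v/v)
        ])
        vp = 1500 + 900 * X[:, 2] - 1200 * X[:, 3] + rng.normal(0, 80, n)   # synthetic target (m/s)

        X_tr, X_te, y_tr, y_te = train_test_split(X, vp, test_size=0.3, random_state=1)
        linear = LinearRegression().fit(X_tr, y_tr)
        mlfn = make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000,
                                          random_state=1)).fit(X_tr, y_tr)
        print("R2 linear:", linear.score(X_te, y_te), "R2 MLFN:", mlfn.score(X_te, y_te))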

  8. One kind of atmosphere-ocean three layer model for calculating the velocity of ocean current

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Z; Xi, P

    1979-10-01

    A three-layer atmosphere-ocean model is given in this paper to calculate the velocity of the ocean current, particularly as a function of the vertical coordinate, taking into consideration (1) the atmospheric effect on the generation of the ocean current, (2) a calculated coefficient of eddy viscosity instead of an assumed one, and (3) a sea whose depth actually varies.

  9. Ab initio calculation of the sound velocity of dense hydrogen: implications for models of Jupiter

    NARCIS (Netherlands)

    Alavi, A.; Parrinello, M.; Frenkel, D.

    1995-01-01

    First-principles molecular dynamics simulations were used to calculate the sound velocity of dense hydrogen, and the results were compared with extrapolations of experimental data that currently conflict with either astrophysical models or data obtained from recent global oscillation measurements of

  10. Do Assimilated Drifter Velocities Improve Lagrangian Predictability in an Operational Ocean Model?

    Science.gov (United States)

    2015-05-01

    extended Kalman filter. Molcard et al. (2005) used a statistical method to correlate model and drifter velocities. Taillandier et al. (2006) describe the... temperature and salinity observations. Trajectory angular differences are also reduced. 1. Introduction The importance of Lagrangian forecasts was seen... Temperature, salinity, and sea surface height (SSH, measured along-track by satellite altimeters) observations are typically assimilated in

  11. Analytical models for predicting the ion velocity distributions in JET in the presence of ICRF heating

    International Nuclear Information System (INIS)

    Anderson, A.; Eriksson, L.G.; Lisak, M.

    1986-01-01

    The present report summarizes the work performed within the contract JT4/9008, the aim of which is to derive analytical models for ion velocity distributions resulting from ICRF heating on JET. The work has been performed over a two-year period ending in August 1986 and has involved a total effort of 2.4 man-years. (author)

  12. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: I. Model and velocity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boyd, J [Cardiovascular Research Group Physics, University of New England, Armidale, NSW 2351 (Australia); Buick, J M [Department of Mechanical and Design Engineering, University of Portsmouth, Anglesea Building, Anglesea Road, Portsmouth PO1 3DJ (United Kingdom)

    2008-10-21

    Numerical modelling is a powerful tool in the investigation of human blood flow and arterial diseases such as atherosclerosis. It is known that near wall velocity and shear are important in the pathogenesis and progression of atherosclerosis. In this paper results for a simulation of blood flow in a three-dimensional carotid artery geometry using the lattice Boltzmann method are presented. The velocity fields in the body of the fluid are analysed at six times of interest during a physiologically accurate velocity waveform. It is found that the three-dimensional model agrees well with previous literature results for carotid artery flow. Regions of low near wall velocity and circulatory flow are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery, which are regions that are typically prone to atherosclerosis.

  13. Three-dimensional modelling of the human carotid artery using the lattice Boltzmann method: I. Model and velocity analysis

    International Nuclear Information System (INIS)

    Boyd, J; Buick, J M

    2008-01-01

    Numerical modelling is a powerful tool in the investigation of human blood flow and arterial diseases such as atherosclerosis. It is known that near wall velocity and shear are important in the pathogenesis and progression of atherosclerosis. In this paper results for a simulation of blood flow in a three-dimensional carotid artery geometry using the lattice Boltzmann method are presented. The velocity fields in the body of the fluid are analysed at six times of interest during a physiologically accurate velocity waveform. It is found that the three-dimensional model agrees well with previous literature results for carotid artery flow. Regions of low near wall velocity and circulatory flow are observed near the outer wall of the bifurcation and in the lower regions of the external carotid artery, which are regions that are typically prone to atherosclerosis.

  14. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  15. Comparison of several climate indices as inputs in modelling of the Baltic Sea runoff

    Energy Technology Data Exchange (ETDEWEB)

    Hanninen, J.; Vuorinen, I. [Turku Univ. (Finland). Archipelago Research Inst.], e-mail: jari.hanninen@utu.fi

    2012-11-01

    Using transfer function (TF) models, we have earlier presented a chain of events between changes in the North Atlantic Oscillation (NAO) and their oceanographic and ecological consequences in the Baltic Sea. Here we tested whether other climate indices as inputs would improve the TF models, and our understanding of the Baltic Sea ecosystem. Besides the NAO, the predictors were the Arctic Oscillation (AO), sea-level air pressure at Iceland (SLP), and wind speeds at Hoburg (Gotland). All indices produced good TF models when the total riverine runoff to the Baltic Sea was used as the modelling basis. The AO was not applicable in all study areas, showing a delay of about half a year between climate and runoff events, connected with the freezing and melting times of ice and snow in the northern catchment area of the Baltic Sea. The NAO appeared to be the most useful modelling tool, as its area of applicability was the widest of the tested indices and the time lag between climate and runoff events was the shortest. SLP and Hoburg wind speeds showed largely the same results as the NAO, but with smaller areal applicability. Thus the AO and NAO both contributed most to the general understanding of climate control of runoff events in the Baltic Sea ecosystem. (orig.)
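
    A minimal stand-in for a transfer-function model is a lagged regression of runoff on a climate index, which already exposes the delay structure described above. The sketch below uses synthetic series and is only meant to show the mechanics:

        # Sketch: lagged regression of runoff on a climate index (a simplified TF-model analogue).
        import numpy as np

        rng = np.random.default_rng(3)
        n = 240                                                        # 20 years of monthly values
        nao = rng.standard_normal(n)
        runoff = 1500 + 120 * np.roll(nao, 2) + rng.normal(0, 50, n)   # ~2-month delayed response

        max_lag = 6
        X = np.column_stack([np.roll(nao, k) for k in range(max_lag + 1)])[max_lag:]
        y = runoff[max_lag:]
        design = np.column_stack([np.ones(len(y)), X])
        coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
        best_lag = int(np.argmax(np.abs(coefs[1:])))                   # recovers the ~2-month lag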

  16. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.

  17. Solar Load Inputs for USARIEM Thermal Strain Models and the Solar Radiation-Sensitive Components of the WBGT Index

    National Research Council Canada - National Science Library

    Matthew, William

    2001-01-01

    This report describes processes we have implemented to use global pyranometer-based estimates of mean radiant temperature as the common solar load input for the Scenario model, the USARIEM heat strain...

  18. 3D Crustal Velocity Structure Model of the Middle-eastern North China Craton

    Science.gov (United States)

    Duan, Y.; Wang, F.; Lin, J.; Wei, Y.

    2017-12-01

    Lithosphere thinning and destruction in the middle-eastern North China Craton (NCC), a region susceptible to strong earthquakes, is one of the research hotspots in solid earth science. Up to 42 wide-angle reflection/refraction deep seismic sounding (DSS) profiles have been completed in the middle-eastern NCC. We collected all the 2D profiling results, gridded the velocity and interface depth data, and built a 3D crustal velocity structure model for the middle-eastern NCC, named HBCrust1.0, using the Kriging interpolation method. In this model, four layers are divided by three interfaces: G is the interface between the sedimentary cover and the crystalline crust, with velocities of 5.0-5.5 km/s above and 5.8-6.0 km/s below. C is the interface between the upper and lower crust, with a velocity jump from 6.2-6.4 km/s to 6.5-6.6 km/s. M is the interface between the crust and upper mantle, with velocities of 6.7-7.0 km/s at the crust bottom and 7.9-8.0 km/s at the mantle top. Our results show that the first arrival times calculated from HBCrust1.0 fit well with the observations. They also demonstrate that the upper crust is the main seismogenic layer, and that the brittle-ductile transition occurs at depths near interface C. The depth of the Moho varies beneath the source area of the Tangshan earthquake, and a low-velocity structure is found to extend from the source area to the lower crust. Based on these observations, it can be inferred that the stress accumulation responsible for the Tangshan earthquake may have been closely related to the migration and deformation of mantle materials. Comparisons of the average velocities of the whole crust, the upper crust and the lower crust show that the average velocity of the lower crust under the central part of the North China Basin (NCB), in the east of the craton, is obviously higher than the regional average; this high velocity probably results from long-term underplating of mantle magma. This research is funded by the Natural Science

  19. Assessment of NASA's Physiographic and Meteorological Datasets as Input to HSPF and SWAT Hydrological Models

    Science.gov (United States)

    Alacron, Vladimir J.; Nigro, Joseph D.; McAnally, William H.; OHara, Charles G.; Engman, Edwin Ted; Toll, David

    2011-01-01

    This paper documents the use of simulated Moderate Resolution Imaging Spectroradiometer land use/land cover (MODIS-LULC), NASA-LIS generated precipitation and evapo-transpiration (ET), and Shuttle Radar Topography Mission (SRTM) datasets (in conjunction with standard land use, topographical and meteorological datasets) as input to hydrological models routinely used by the watershed hydrology modeling community. The study focuses on coastal watersheds of the Mississippi Gulf Coast, although one of the test cases focuses on an inland watershed in northeastern Mississippi, USA. The decision support tools (DSTs) into which the NASA datasets were assimilated were the Soil and Water Assessment Tool (SWAT) and the Hydrological Simulation Program FORTRAN (HSPF). These DSTs are endorsed by several US government agencies (EPA, FEMA, USGS) for water resources management strategies. These models use physiographic and meteorological data extensively. Precipitation gages and USGS gage stations in the region were used to calibrate several HSPF and SWAT model applications. Land use and topographical datasets were swapped to assess model output sensitivities. NASA-LIS meteorological data were introduced into the calibrated model applications for simulation of watershed hydrology for a time period in which no weather data were available (1997-2006). The performance of the NASA datasets in the context of hydrological modeling was assessed through comparison of measured and model-simulated hydrographs. Overall, the NASA datasets were as useful as standard land use, topographical, and meteorological datasets. Moreover, the NASA datasets were used for performing analyses that the standard datasets could not make possible, e.g., the introduction of land use dynamics into hydrological simulations

  20. Modelling of two-phase flow based on separation of the flow according to velocity

    Energy Technology Data Exchange (ETDEWEB)

    Narumo, T. [VTT Energy, Espoo (Finland). Nuclear Energy

    1997-12-31

    The thesis concentrates on the development of a physical one-dimensional two-fluid model that is based on Separation of the Flow According to Velocity (SFAV). The conventional way to model one-dimensional two-phase flow is to derive conservation equations for mass, momentum and energy over the regions occupied by the phases. In the SFAV approach, the two-phase mixture is divided into two subflows with as distinct average velocities as possible, and momentum conservation equations are derived over their domains. Mass and energy conservation are treated in the same way as in the conventional model, because they are distributed very accurately according to the phases, but momentum fluctuations follow the flow velocity better. Submodels for the non-uniform transverse profiles of velocity and density, slip between the phases within each subflow, and turbulence between the subflows have been derived. The model system is hyperbolic under any sensible flow conditions over the whole range of void fraction. Thus, it can be solved with accurate numerical methods utilizing the characteristics. The characteristics agree well with the experimental data used on two-phase flow wave phenomena. Furthermore, the characteristics of the SFAV model are as well in accordance with their physical counterparts as those of the best virtual-mass models, which are typically optimized for special flow regimes like bubbly flow. The SFAV model has proved to be applicable in describing two-phase flow physically correctly, because both the dynamics and the steady-state behaviour of the model have been considered and found to agree well with experimental data. This makes the SFAV model especially suitable for the calculation of fast transients, taking place in versatile form e.g. in nuclear reactors. 45 refs. The thesis also includes five previous publications by the author.

  1. Modelling of two-phase flow based on separation of the flow according to velocity

    International Nuclear Information System (INIS)

    Narumo, T.

    1997-01-01

    The thesis concentrates on the development of a physical one-dimensional two-fluid model that is based on Separation of the Flow According to Velocity (SFAV). The conventional way to model one-dimensional two-phase flow is to derive conservation equations for mass, momentum and energy over the regions occupied by the phases. In the SFAV approach, the two-phase mixture is divided into two subflows with as distinct average velocities as possible, and momentum conservation equations are derived over their domains. Mass and energy conservation are treated in the same way as in the conventional model, because they are distributed very accurately according to the phases, but momentum fluctuations follow the flow velocity better. Submodels for the non-uniform transverse profiles of velocity and density, slip between the phases within each subflow, and turbulence between the subflows have been derived. The model system is hyperbolic under any sensible flow conditions over the whole range of void fraction. Thus, it can be solved with accurate numerical methods utilizing the characteristics. The characteristics agree well with the experimental data used on two-phase flow wave phenomena. Furthermore, the characteristics of the SFAV model are as well in accordance with their physical counterparts as those of the best virtual-mass models, which are typically optimized for special flow regimes like bubbly flow. The SFAV model has proved to be applicable in describing two-phase flow physically correctly, because both the dynamics and the steady-state behaviour of the model have been considered and found to agree well with experimental data. This makes the SFAV model especially suitable for the calculation of fast transients, taking place in versatile form e.g. in nuclear reactors.

  2. Fractional Gaussian noise-enhanced information capacity of a nonlinear neuron model with binary signal input

    Science.gov (United States)

    Gao, Feng-Yin; Kang, Yan-Mei; Chen, Xi; Chen, Guanrong

    2018-05-01

    This paper reveals the effect of fractional Gaussian noise (fGn) with Hurst exponent H ∈ (1/2, 1) on the information capacity of a general nonlinear neuron model with binary signal input. The fGn and its corresponding fractional Brownian motion exhibit long-range, strongly dependent increments. It extends standard Brownian motion to many types of fractional processes found in nature, such as synaptic noise. In the paper, for the subthreshold binary signal, sufficient conditions are given based on the "forbidden interval" theorem to guarantee the occurrence of stochastic resonance, while for the suprathreshold binary signal, the simulated results show that additive fGn with Hurst exponent H ∈ (1/2, 1) could increase the mutual information or bit count. The investigation indicated that synaptic noise with the characteristics of long-range dependence and self-similarity might be the driving factor for the efficient encoding and decoding of the nervous system.

  3. Evaluation of globally available precipitation data products as input for water balance models

    Science.gov (United States)

    Lebrenz, H.; Bárdossy, A.

    2009-04-01

    The subject of this study is the evaluation of globally available precipitation data products, which are intended to be used as input variables for water balance models in ungauged basins. The selected data sources are a) the Global Precipitation Climatology Centre (GPCC), b) the Global Precipitation Climatology Project (GPCP) and c) the Climate Research Unit (CRU), resulting in twelve globally available data products. The data products imply different data bases, different derivation routines and varying resolutions in time and space. For validation purposes, the ground data from South Africa were screened for homogeneity and consistency using various tests, and an outlier detection based on multi-linear regression was performed. External Drift Kriging was subsequently applied to the ground data, and the resulting precipitation arrays were compared to the different products with respect to quantity and variance.

  4. Modification of Spalart-Allmaras model with consideration of turbulence energy backscatter using velocity helicity

    International Nuclear Information System (INIS)

    Liu, Yangwei; Lu, Lipeng; Fang, Le; Gao, Feng

    2011-01-01

    The correlation between the velocity helicity and the energy backscatter is proved in a DNS case of 256³-grid homogeneous isotropic decaying turbulence. The helicity is then proposed as a means of improving turbulence models and SGS models. The Spalart-Allmaras (SA) turbulence model is then modified with the helicity to take account of the energy backscatter, which is significant in the region of corner separation in compressors. By comparing the numerical results with experiments, it can be concluded that the helicity modification of the SA model appropriately represents the energy backscatter and greatly improves the predictive accuracy for simulating corner separation flow in compressors. -- Highlights: → We study the relationship between the velocity helicity and the energy backscatter. → The Spalart-Allmaras turbulence model is modified with the velocity helicity. → The modified model is employed to simulate corner separation in a compressor cascade. → The modification can greatly improve the accuracy for predicting corner separation. → The helicity can represent the energy backscatter in turbulence and SGS models.

  5. Numerical Material Model for Composite Laminates in High-Velocity Impact Simulation

    Directory of Open Access Journals (Sweden)

    Tao Liu

    A numerical material model for composite laminates was developed and integrated into nonlinear dynamic explicit finite element programs as a material user subroutine. This model, coupled with a nonlinear equation of state (EOS), is a macro-mechanics model used to simulate the major mechanical behaviors of composite laminates under high-velocity impact conditions. The basic theoretical framework of the developed material model is introduced. An inverse flyer plate simulation was conducted, which demonstrated the advantage of the developed model in characterizing the nonlinear shock response. The developed model and its implementation were validated through a classic ballistic impact problem, i.e. a projectile impacting a Kevlar29/Phenolic laminate. The failure modes and ballistic limit velocity were analyzed, and good agreement was achieved when comparing with the analytical and experimental results. The computational capability of this model for Kevlar/Epoxy laminates with different architectures, i.e. plain-woven and cross-plied laminates, was further evaluated, and the residual velocity curves and damage cone were accurately predicted.

  6. Minimum 1D P wave velocity model for the Cordillera Volcanica de Guanacaste, Costa Rica

    International Nuclear Information System (INIS)

    Araya, Maria C.; Linkimer, Lepolt; Taylor, Waldo

    2016-01-01

    A minimum 1D velocity model is derived from 475 local earthquakes recorded by the Observatorio Vulcanologico y Sismologico Arenal Miravalles (OSIVAM) for the Cordillera Volcanica de Guanacaste between January 2006 and July 2014. The model consists of six layers from the surface down to 80 km depth, with velocities varying between 3.96 and 7.79 km/s. The corrections obtained for the seismic stations vary between -0.28 and 0.45 and show a trend of positive values on the volcanic arc and negative values on the forearc, in agreement with the crustal thickness. The relocated earthquakes form three main groups of epicenters that could be associated with activity on inferred faults. The minimum 1D velocity model provides a simplified picture of the crustal structure and aims to contribute to improving the routine earthquake locations performed by OSIVAM. (author) [es

  7. The large-scale peculiar velocity field in flat models of the universe

    International Nuclear Information System (INIS)

    Vittorio, N.; Turner, M.S.

    1986-10-01

    The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models with two components of mass density, one of which is smoothly distributed, are examined, and the large-scale (≥10 h⁻¹ Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled; but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs

  8. Spectral analysis of surface waves method to assess shear wave velocity within centrifuge models

    OpenAIRE

    MURILLO, Carol Andrea; THOREL, Luc; CAICEDO, Bernardo

    2009-01-01

    The method of the spectral analysis of surface waves (SASW) is tested out on reduced scale centrifuge models, with a specific device, called the mini Falling Weight, developed for this purpose. Tests are performed on layered materials made of a mixture of sand and clay. The shear wave velocity VS determined within the models using the SASW is compared with the laboratory measurements carried out using the bender element test. The results show that the SASW technique applied to centrifuge test...

  9. A model for the two-point velocity correlation function in turbulent channel flow

    International Nuclear Information System (INIS)

    Sahay, A.; Sreenivasan, K.R.

    1996-01-01

    A relatively simple analytical expression is presented to approximate the equal-time, two-point, double-velocity correlation function in turbulent channel flow. To assess the accuracy of the model, we perform the spectral decomposition of the integral operator having the model correlation function as its kernel. Comparisons of the empirical eigenvalues and eigenfunctions with those constructed from direct numerical simulations data show good agreement. copyright 1996 American Institute of Physics

  10. Velocity Model for CO2 Sequestration in the Southeastern United States Atlantic Continental Margin

    Science.gov (United States)

    Ollmann, J.; Knapp, C. C.; Almutairi, K.; Almayahi, D.; Knapp, J. H.

    2017-12-01

    The sequestration of carbon dioxide (CO2) is emerging as a major player in offsetting anthropogenic greenhouse gas emissions. With 40% of the United States' anthropogenic CO2 emissions originating in the southeast, characterizing potential CO2 sequestration sites is vital to reducing the United States' emissions. The goal of this research project, funded by the Department of Energy (DOE), is to estimate the CO2 storage potential for the Southeastern United States Atlantic Continental Margin. Previous studies find storage potential in the Atlantic continental margin. Up to 16 Gt and 175 Gt of storage potential are estimated for the Upper Cretaceous and Lower Cretaceous formations, respectively. Considering 2.12 Mt of CO2 are emitted per year by the United States, substantial storage potential is present in the Southeastern United States Atlantic Continental Margin. In order to produce a time-depth relationship, a velocity model must be constructed. This velocity model is created using previously collected seismic reflection, refraction, and well data in the study area. Seismic reflection horizons were extrapolated using well log data from the COST GE-1 well. An interpolated seismic section was created using these seismic horizons. A velocity model will be made using P-wave velocities from seismic reflection data. Once the time-depth conversion is complete, the depths of stratigraphic units in the seismic refraction data will be compared to the newly assigned depths of the seismic horizons. With a lack of well control in the study area, the addition of stratigraphic unit depths from 171 seismic refraction recording stations provides adequate data to tie to the depths of picked seismic horizons. Using this velocity model, the seismic reflection data can be presented in depth in order to estimate the thickness and storage potential of CO2 reservoirs in the Southeastern United States Atlantic Continental Margin.

  11. Development of a State-Wide 3-D Seismic Tomography Velocity Model for California

    Science.gov (United States)

    Thurber, C. H.; Lin, G.; Zhang, H.; Hauksson, E.; Shearer, P.; Waldhauser, F.; Hardebeck, J.; Brocher, T.

    2007-12-01

    We report on progress towards the development of a state-wide tomographic model of the P-wave velocity for the crust and uppermost mantle of California. The dataset combines first arrival times from earthquakes and quarry blasts recorded on regional network stations and travel times of first arrivals from explosions and airguns recorded on profile receivers and network stations. The principal active-source datasets are Geysers-San Pablo Bay, Imperial Valley, Livermore, W. Mojave, Gilroy-Coyote Lake, Shasta region, Great Valley, Morro Bay, Mono Craters-Long Valley, PACE, S. Sierras, LARSE 1 and 2, Loma Prieta, BASIX, San Francisco Peninsula and Parkfield. Our beta-version model is coarse (uniform 30 km horizontal and variable vertical gridding) but is able to image the principal features in previous separate regional models for northern and southern California, such as the high-velocity subducting Gorda Plate, upper to middle crustal velocity highs beneath the Sierra Nevada and much of the Coast Ranges, the deep low-velocity basins of the Great Valley, Ventura, and Los Angeles, and a high- velocity body in the lower crust underlying the Great Valley. The new state-wide model has improved areal coverage compared to the previous models, and extends to greater depth due to the data at large epicentral distances. We plan a series of steps to improve the model. We are enlarging and calibrating the active-source dataset as we obtain additional picks from investigators and perform quality control analyses on the existing and new picks. We will also be adding data from more quarry blasts, mainly in northern California, following an identification and calibration procedure similar to Lin et al. (2006). Composite event construction (Lin et al., in press) will be carried out for northern California for use in conventional tomography. A major contribution of the state-wide model is the identification of earthquakes yielding arrival times at both the Northern California Seismic

  12. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    Science.gov (United States)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which effectively makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated, and vary from person to person, it is difficult to detect walking gaits with a fixed threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single direction angular rate gyro output is used to classify gait features. The angular rate data are modeled into a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm and then the sliding window Viterbi algorithm is used to decode the gait. Walking data are collected through eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is conducted to test the model. Experimental results show that the proposed algorithm can accurately detect different walking gaits of zero velocity interval. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.
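
    The detection idea can be sketched with an off-the-shelf HMM library: fit a few hidden states to a single gyro channel and treat the state with the smallest mean angular rate as the zero-velocity (stance) interval. The sketch below uses hmmlearn's GaussianHMM on synthetic data as a simplified stand-in for the paper's three-component-mixture, left-right CHMM:

        # Sketch: HMM-based zero-velocity interval detection on a single gyro axis.
        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(7)
        stance = rng.normal(0.0, 0.05, size=(600, 1))    # near-zero angular rate during stance
        swing = rng.normal(3.0, 1.0, size=(400, 1))      # large angular rate during swing
        omega = np.vstack([stance[:300], swing[:200], stance[300:], swing[200:]])

        model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=100, random_state=0)
        model.fit(omega)
        states = model.predict(omega)

        zupt_state = int(np.argmin(np.abs(model.means_.ravel())))   # state with smallest mean rate
        zupt_mask = states == zupt_state                            # samples where ZUPTs would be applied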

  13. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
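
    Step 6 of the outline, turning raw time series into window-level latent features, is the pivotal move; a minimal sketch of what it might look like (window length and feature set are illustrative choices, not the study's) is:

        # Sketch: sliding-window latent features (mean, slope, variability) from a vital-sign series.
        import numpy as np

        def window_features(signal, window, step):
            """Return an array of (mean, slope, std) for sliding windows of a 1-D series."""
            feats, t = [], np.arange(window)
            for start in range(0, len(signal) - window + 1, step):
                seg = signal[start:start + window]
                slope = np.polyfit(t, seg, 1)[0]
                feats.append((seg.mean(), slope, seg.std()))
            return np.array(feats)

        heart_rate = 110 + np.cumsum(np.random.default_rng(5).normal(0, 0.5, 720))   # 12 h at 1/min
        features = window_features(heart_rate, window=60, step=15)   # hourly windows, 15-min stride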

  14. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2014-02-01

    Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden, based on monthly change factors for 2070–2099. 30-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes of pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.

  15. Shear-wave velocity models and seismic sources in Campanian volcanic areas: Vesuvius and Phlegraean fields

    Energy Technology Data Exchange (ETDEWEB)

    Guidarelli, M; Zille, A; Sarao, A [Dipartimento di Scienze della Terra, Universita degli Studi di Trieste, Trieste (Italy); Natale, M; Nunziata, C [Dipartimento di Geofisica e Vulcanologia, Universita di Napoli 'Federico II', Napoli (Italy); Panza, G F [Dipartimento di Scienze della Terra, Universita degli Studi di Trieste, Trieste (Italy); Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)

    2006-12-15

    This chapter summarizes a comparative study of shear-wave velocity models and seismic sources in the Campanian volcanic areas of Vesuvius and the Phlegraean Fields. These velocity models were obtained through the nonlinear inversion of surface-wave tomography data, using as a priori constraints the relevant information available in the literature. Local group velocity data were obtained by means of frequency-time analysis for periods between 0.3 and 2 s and were combined with group velocity data for periods between 10 and 35 s from regional events located in the Italian peninsula and bordering areas, and with two-station phase velocity data for periods between 25 and 100 s. In order to invert the Rayleigh wave dispersion curves, we applied the nonlinear inversion method called hedgehog and retrieved average models for the first 30-35 km of the lithosphere, with the lower part of the upper mantle being kept fixed on the basis of existing regional models. A feature that is common to the two volcanic areas is a low shear velocity layer centered at a depth of about 10 km, while outside the cone and along a path in the northeastern part of the Vesuvius area this layer is absent. This low velocity can be associated with the presence of partial melting and, therefore, may represent a quite diffuse crustal magma reservoir which is fed by a deeper one that is regional in character and located in the uppermost mantle. The study of seismic sources in terms of the moment tensor is suitable for an investigation of physical processes within a volcano; indeed, its components, double couple, compensated linear vector dipole, and volumetric, can be related to the movements of magma and fluids within the volcanic system. Although for many recent earthquake events the percentage of the double couple component is high, our results also show the presence of significant non-double-couple components in both volcanic areas. (author)

  16. Regional three-dimensional seismic velocity model of the crust and uppermost mantle of northern California

    Science.gov (United States)

    Thurber, C.; Zhang, H.; Brocher, T.; Langenheim, V.

    2009-01-01

    We present a three-dimensional (3D) tomographic model of the P wave velocity (Vp) structure of northern California. We employed a regional-scale double-difference tomography algorithm that incorporates a finite-difference travel time calculator and spatial smoothing constraints. Arrival times from earthquakes and travel times from controlled-source explosions, recorded at network and/or temporary stations, were inverted for Vp on a 3D grid with horizontal node spacing of 10 to 20 km and vertical node spacing of 3 to 8 km. Our model provides an unprecedented, comprehensive view of the regional-scale structure of northern California, putting many previously identified features into a broader regional context and improving the resolution of a number of them and revealing a number of new features, especially in the middle and lower crust, that have never before been reported. Examples of the former include the complex subducting Gorda slab, a steep, deeply penetrating fault beneath the Sacramento River Delta, crustal low-velocity zones beneath Geysers-Clear Lake and Long Valley, and the high-velocity ophiolite body underlying the Great Valley. Examples of the latter include mid-crustal low-velocity zones beneath Mount Shasta and north of Lake Tahoe. Copyright 2009 by the American Geophysical Union.

  17. A new chance-constrained DEA model with birandom input and output data

    OpenAIRE

    Tavana, M.; Shiraz, R. K.; Hatami-Marbini, A.

    2013-01-01

    The purpose of conventional Data Envelopment Analysis (DEA) is to evaluate the performance of a set of firms or Decision-Making Units using deterministic input and output data. However, the input and output data in the real-life performance evaluation problems are often stochastic. The stochastic input and output data in DEA can be represented with random variables. Several methods have been proposed to deal with the random input and output data in DEA. In this paper, we propose a new chance-...

  18. Models for assessing the relative phase velocity in a two-phase flow. Status report

    International Nuclear Information System (INIS)

    Schaffrath, A.; Ringel, H.

    2000-06-01

    The knowledge of slip or drift flux in two-phase flow is necessary for several technical processes (e.g. two-phase pressure losses, heat and mass transfer in steam generators and condensers, dwell period in chemical reactors, moderation effectiveness of two-phase coolant in BWRs). In the following, the most important models for two-phase flow with different phase velocities (e.g. slip or drift models, the analogy between pressure loss and steam quality, ε - ε models, and models for the calculation of void distribution in quiescent fluids) are classified, described and worked up for further comparison with our own experimental data. (orig.)
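
    One of the classical families surveyed above, the drift-flux model, can be written down in a few lines; the Zuber-Findlay form below uses typical bubbly-flow constants purely as an illustration, not values from the report:

        # Sketch: Zuber-Findlay drift-flux relation -- void fraction from superficial velocities.
        def drift_flux_void_fraction(j_g, j_l, c0=1.13, v_gj=0.24):
            """Void fraction from gas/liquid superficial velocities (m/s)."""
            j = j_g + j_l                        # total volumetric flux
            return j_g / (c0 * j + v_gj)

        alpha = drift_flux_void_fraction(j_g=0.4, j_l=1.2)
        slip_ratio = (0.4 / alpha) / (1.2 / (1.0 - alpha))   # gas-to-liquid phase velocity ratio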

  19. Modeling Atmospheric Turbulence via Rapid Distortion Theory: Spectral Tensor of Velocity and Buoyancy

    DEFF Research Database (Denmark)

    Chougule, Abhijit S.; Mann, Jakob; Kelly, Mark C.

    2017-01-01

    A spectral tensor model is presented for turbulent fluctuations of wind velocity components and temperature, assuming uniform vertical gradients in mean temperature and mean wind speed. The model is built upon rapid distortion theory (RDT) following studies by Mann and by Hanazaki and Hunt, using...... the eddy lifetime parameterization of Mann to make the model stationary. The buoyant spectral tensor model is driven via five parameters: the viscous dissipation rate epsilon, length scale of energy-containing eddies L, a turbulence anisotropy parameter Gamma, gradient Richardson number (Ri) representing...

  20. Three-dimensional models of P wave velocity and P-to-S velocity ratio in the southern central Andes by simultaneous inversion of local earthquake data

    Science.gov (United States)

    Graeber, Frank M.; Asch, Günter

    1999-09-01

    The PISCO'94 (Proyecto de Investigación Sismológica de la Cordillera Occidental, 1994) seismological network of 31 digital broadband and short-period three-component seismometers was deployed in northern Chile between the Coastal Cordillera and the Western Cordillera. More than 5300 local seismic events were observed in a 100-day period. A subset of high-quality P and S arrival time data was used to invert simultaneously for hypocenters and velocity structure. Additional data from two other networks in the region could be included. The velocity models show a number of prominent anomalies, outlining an extremely thickened crust (about 70 km) beneath the forearc region, an anomalous crustal structure beneath the recent magmatic arc (Western Cordillera) characterized by very low velocities, and a high-velocity slab. A region of increased Vp/Vs ratio has been found directly above the Wadati-Benioff zone, which might be caused by hydration processes. A zone of lower than average velocities and a high Vp/Vs ratio might correspond to the asthenospheric wedge. The upper edge of the Wadati-Benioff zone is sharply defined by intermediate-depth hypocenters, while evidence for a double seismic zone can hardly be seen. Crustal events between the Precordillera and the Western Cordillera have been observed for the first time and are mainly located in the vicinity of the Salar de Atacama down to depths of about 40 km.

  1. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    Science.gov (United States)

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants: heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
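
    The linear-versus-network comparison can be prototyped directly; the sketch below maps five synthetic physiological features to valence/arousal ratings and compares a linear regression with a small neural network. All arrays are placeholders, not the study's data:

        # Sketch: linear model vs. small neural network for predicting valence/arousal from physiology.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(11)
        n = 240
        X = rng.standard_normal((n, 5))            # HR, respiration, GSR, corrugator, zygomaticus features
        arousal = 0.8 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(n)
        valence = np.tanh(X[:, 4] - X[:, 3]) + 0.1 * rng.standard_normal(n)   # nonlinear in the EMG channels
        Y = np.column_stack([valence, arousal])

        linear = LinearRegression().fit(X[:180], Y[:180])
        net = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                         random_state=0)).fit(X[:180], Y[:180])
        print("linear R2:", linear.score(X[180:], Y[180:]), "network R2:", net.score(X[180:], Y[180:]))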

  2. EARLY GUIDANCE FOR ASSIGNING DISTRIBUTION PARAMETERS TO GEOCHEMICAL INPUT TERMS TO STOCHASTIC TRANSPORT MODELS

    International Nuclear Information System (INIS)

    Kaplan, D; Margaret Millings, M

    2006-01-01

    Stochastic modeling is being used in the Performance Assessment program to provide a probabilistic estimate of the range of risk that buried waste may pose. The objective of this task was to provide early guidance for stochastic modelers for the selection of the range and distribution (e.g., normal, log-normal) of distribution coefficients (Kd) and solubility values (Ksp) to be used in modeling subsurface radionuclide transport in E- and Z-Area on the Savannah River Site (SRS). Due to the project's schedule, some modeling had to be started prior to collecting the necessary field and laboratory data needed to fully populate these models. For the interim, the project will rely on literature values and some statistical analyses of literature data as inputs. Based on statistical analyses of some literature sorption tests, the following early guidance was provided: (1) Set the range to an order of magnitude for radionuclides with Kd values >1000 mL/g and to a factor of two for Kd values <1000 mL/g. (2) Set the range to an order of magnitude for radionuclides with Ksp values <10⁻⁶ M and to a factor of two for Ksp values >10⁻⁶ M. This decision is based on the literature. (3) The distribution of Kd values with a mean >1000 mL/g will be log-normally distributed. Those with a Kd value <1000 mL/g will be assigned a normal distribution. This is based on statistical analysis of non-site-specific data. Results from on-going site-specific field/laboratory research involving E-Area sediments will supersede this guidance; these results are expected in 2007

  3. Realistic modeling of seismic input for megacities and large urban areas

    International Nuclear Information System (INIS)

    Panza, Giuliano F.; Alvarez, Leonardo; Aoudia, Abdelkrim

    2002-06-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool of the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have been proven to be quite unreliable, mainly when dealing with complex geological structures, the most interesting from the practical point of view. In fact, several techniques that have been proposed to empirically estimate the site effects using observations convolved with theoretically computed signals corresponding to simplified models supply reliable information about the site response to non-interfering seismic phases. They are not adequate in most real cases, when the seismic signal is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological, geophysical parameters, topography of the medium, tectonic, historical, palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of groundshaking scenarios that represent a valid and economic tool for the seismic microzonation. This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  4. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  5. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  6. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  7. Modeling imbalanced economic recovery following a natural disaster using input-output analysis.

    Science.gov (United States)

    Li, Jun; Crawford-Brown, Douglas; Syddall, Mark; Guan, Dabo

    2013-10-01

    Input-output analysis is frequently used in studies of large-scale weather-related (e.g., hurricanes and flooding) disruption of a regional economy. The economy after a sudden catastrophe shows a multitude of imbalances with respect to demand and production and may take months or years to recover. However, there is no consensus about how the economy recovers. This article presents a theoretical route map for imbalanced economic recovery called dynamic inequalities. Subsequently, it is applied to a hypothetical postdisaster economic scenario of flooding in London around the year 2020 to assess the influence of future shocks to a regional economy and suggest adaptation measures. Economic projections are produced by a macroeconometric model and used as baseline conditions. The results suggest that London's economy would recover over approximately 70 months by applying a proportional rationing scheme under the assumption of initial 50% labor loss (with full recovery in six months), 40% initial loss to service sectors, and 10-30% initial loss to other sectors. The results also suggest that imbalance will be the norm during the postdisaster period of economic recovery even though balance may occur temporarily. Model sensitivity analysis suggests that a proportional rationing scheme may be an effective strategy to apply during postdisaster economic reconstruction, and that policies in transportation recovery and in health care are essential for effective postdisaster economic recovery. © 2013 Society for Risk Analysis.

  8. The efficiency of the agricultural sector in Poland in the light of the output-input model

    Directory of Open Access Journals (Sweden)

    Czyżewski Andrzej

    2015-05-01

    Full Text Available The study turns attention to the use of the input-output model (the account of interbranch flows) in macroeconomic assessments of the effectiveness of the agricultural sector. The introductory part specifies the essence of the account of interbranch flows, pointing to its historical origin and place in economic theory, and presents the morphological structure of the individual parts (quarters) of the model. The study then discusses the application of the account of interbranch flows in macroeconomic assessments of the effectiveness of the agricultural sector, defining and characterizing a number of indicators which allow conclusions to be drawn about the effectiveness of the agricultural sector on the basis of the account of interbranch flows. The last, empirical part of the study assesses the effectiveness of the agricultural sector in Poland on the basis of interbranch flow statistics for the years 2000 and 2005. The analyses demonstrate increased efficiency of the agricultural sector in Poland after Poland joined the EU, and also show that the account of interbranch flows is an important tool enabling comprehensive assessment of the effectiveness of the agricultural sector at the macro scale, through the prism of effects versus outlays, which accounts for its exceptional suitability in this kind of analysis.

  9. Modeling uncertainties in workforce disruptions from influenza pandemics using dynamic input-output analysis.

    Science.gov (United States)

    El Haimar, Amine; Santos, Joost R

    2014-03-01

    Influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors that are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics. © 2013 Society for Risk Analysis.
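
    A minimal sketch of the dynamic inoperability input-output recursion that this family of models is built on, q(t+1) = q(t) + K[A*q(t) + c*(t) - q(t)]. The three-sector interdependency matrix, resilience coefficients, initial workforce-loss inoperability, and demand perturbation below are illustrative assumptions, not the NCR data used in the article.

```python
# Minimal sketch of a dynamic inoperability input-output recursion of the form
# q(t+1) = q(t) + K (A* q(t) + c*(t) - q(t)), with illustrative (not NCR) data.
import numpy as np

A_star = np.array([[0.10, 0.20, 0.05],   # normalized interdependency matrix (assumed)
                   [0.15, 0.05, 0.10],
                   [0.05, 0.10, 0.08]])
K = np.diag([0.2, 0.3, 0.25])            # sector resilience coefficients (assumed)
x_planned = np.array([50.0, 30.0, 20.0]) # as-planned output, arbitrary monetary units

T = 52                                   # weeks simulated
q = np.zeros((T, 3))
q[0] = [0.15, 0.05, 0.02]                # initial inoperability from workforce loss (assumed)

for t in range(T - 1):
    c_star = np.array([0.02, 0.0, 0.0]) if t < 8 else np.zeros(3)  # demand perturbation
    q[t + 1] = q[t] + K @ (A_star @ q[t] + c_star - q[t])
    q[t + 1] = np.clip(q[t + 1], 0.0, 1.0)

economic_loss = (x_planned * q).sum(axis=0)   # degraded output accumulated over time
print("final inoperability:", np.round(q[-1], 4))
print("economic loss by sector:", np.round(economic_loss, 2))
```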

  10. Modelling Implicit Communication in Multi-Agent Systems with Hybrid Input/Output Automata

    Directory of Open Access Journals (Sweden)

    Marta Capiluppi

    2012-10-01

    Full Text Available We propose an extension of Hybrid I/O Automata (HIOAs) to model agent systems and their implicit communication through perturbation of the environment, like localization of objects or radio signal diffusion and detection. To this end we decided to specialize some variables of the HIOAs whose values are functions both of time and space. We call them world variables. Basically they are treated similarly to the other variables of HIOAs, but they have the function of representing the interaction of each automaton with the surrounding environment, hence they can be output, input or internal variables. Since these special variables have the role of simulating implicit communication, their dynamics are specified both in time and space, because they model the perturbations induced by the agent to the environment, and the perturbations of the environment as perceived by the agent. Parallel composition of world variables is slightly different from parallel composition of the other variables, since their signals are summed. The theory is illustrated through a simple example of agent systems.

  11. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
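
    A sketch of the pharmacokinetic-fitting step that a corrected VIF feeds into, using the standard Tofts model rather than the paper's specific acquisition or inflow-correction procedure. The gamma-variate VIF, tissue curve, and parameter values below are synthetic placeholders.

```python
# Sketch of pharmacokinetic fitting with a (corrected) vascular input function,
# using the standard Tofts model C_t(t) = Ktrans * int_0^t Cp(u) exp(-kep (t-u)) du.
# The VIF and tissue curve are synthetic; the inflow correction itself is not
# reproduced here.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 300, 151)                     # s
dt = t[1] - t[0]
Cp = 5.0 * (t / 30.0) * np.exp(1 - t / 30.0)     # synthetic gamma-variate VIF (mM)

def tofts(t, Ktrans, kep):
    kernel = np.exp(-kep * t)
    return Ktrans * np.convolve(Cp, kernel)[: len(t)] * dt

true = (0.0025, 0.01)                            # s^-1
Ct = tofts(t, *true) + 0.002 * np.random.default_rng(0).normal(size=t.size)

popt, _ = curve_fit(tofts, t, Ct, p0=(0.001, 0.005), bounds=(0, np.inf))
print("fitted Ktrans, kep:", popt, " true:", true)
```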

  12. Realistic modelling of the seismic input: Site effects and parametric studies

    International Nuclear Information System (INIS)

    Romanelli, F.; Vaccari, F.; Panza, G.F.

    2002-11-01

    We illustrate the work done in the framework of a large international cooperation, showing the very recent numerical experiments carried out within the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion, it is necessary to make a parametric study that takes into account the complex combination of source and propagation parameters in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and structural models, allows us to construct damage scenarios that are out of the reach of stochastic models, at a very low cost/benefit ratio. (author)

  13. Modeling the Indonesian Consumer Price Index Using a Multi-Input Intervention Model

    KAUST Repository

    Novianti, Putri Wikie; Suhartono, Suhartono

    2017-01-01

    ...searches that have been done only contain an intervention with a single input, either a step or a pulse function. A multi-input intervention was used in the Indonesian CPI case because there are some events which are expected to affect the CPI. Based on the results, those

  14. Evaluating the effects of model structure and meteorological input data on runoff modelling in an alpine headwater basin

    Science.gov (United States)

    Schattan, Paul; Bellinger, Johannes; Förster, Kristian; Schöber, Johannes; Huttenlau, Matthias; Kirnbauer, Robert; Achleitner, Stefan

    2017-04-01

    Modelling water resources in snow-dominated mountainous catchments is challenging due to both short concentration times and a highly variable contribution of snow melt in space and time from complex terrain. A number of model setups exist, ranging from physically based models to conceptual models which do not attempt to represent the natural processes in a physically meaningful way. Within the flood forecasting system for the Tyrolean Inn River, two serially linked hydrological models with differing process representation are used. Non-glacierized catchments are modelled by a semi-distributed water balance model (HQsim) based on the HRU approach. A fully distributed energy and mass balance model (SES), purpose-built for snow- and icemelt, is used for highly glacierized headwater catchments. Previous work revealed uncertainties and limitations within the models' structures regarding (i) the representation of snow processes in HQsim, (ii) the runoff routing of SES, and (iii) the spatial resolution of the meteorological input data in both models. To overcome these limitations, a "strengths driven" model coupling is applied. Instead of linking the models serially, a vertical one-way coupling of models has been implemented. The fully distributed snow modelling of SES is combined with the semi-distributed HQsim structure, allowing it to benefit from the soil and runoff routing schemes in HQsim. A Monte Carlo-based modelling experiment was set up to evaluate the resulting differences in the runoff prediction due to the improved model coupling and a refined spatial resolution of the meteorological forcing. The experiment design follows a gradient of spatial discretisation of hydrological processes and meteorological forcing data, with a total of six different model setups for the alpine headwater basin of the Fagge River in the Tyrolean Alps. In general, all setups show a good performance for this particular basin. It is therefore planned to include other basins with differing

  15. Engineering model for low-velocity impacts of multi-material cylinder on a rigid boundary

    Directory of Open Access Journals (Sweden)

    Delvare F.

    2012-08-01

    Full Text Available Modern ballistic problems involve the impact of multi-material projectiles. To model the impact phenomenon, different levels of analysis can be developed: empirical, engineering and simulation models. Engineering models are important because they allow an understanding of the physics of the impacting materials, while accepting some simplifications to reduce the number of variables. For example, some engineering models have been developed to approximate the behavior of single cylinders when they impact a rigid surface; the cylinder deformation, however, depends on its instantaneous velocity. In this work, an analytical model is proposed for the behavior of a single cylinder composed of two different metal cylinders impacting a rigid surface. The materials are assumed to be rigid-perfectly plastic. The system of differential equations is solved numerically with a Runge-Kutta method. Results are compared with computational simulations using the AUTODYN 2D hydrocode, and good agreement is found between the engineering model and the simulations. The model is limited by the impact velocity at which transition occurs at the interface, given by the hydrodynamic pressure proposed by Tate.

  16. A vorticity transport model to restore spatial gaps in velocity data

    Science.gov (United States)

    Ameli, Siavash; Shadden, Shawn

    2017-11-01

    Often measurements of velocity data do not have full spatial coverage in the probed domain or near boundaries. These gaps can be due to missing measurements or masked regions of corrupted data. These gaps confound interpretation, and are problematic when the data is used to compute Lagrangian or trajectory-based analyses. Various techniques have been proposed to overcome coverage limitations in velocity data, such as unweighted least-squares fitting, empirical orthogonal function analysis, variational interpolation, as well as boundary modal analysis. In this talk, we present a vorticity transport PDE to reconstruct regions of missing velocity vectors. The transport model involves both nonlinear anisotropic diffusion and advection. This approach is shown to preserve the main features of the flow even in cases of large gaps, and the reconstructed regions are continuous up to second order. We illustrate results for high-frequency radar (HFR) measurements of the ocean surface currents as this is a common application of limited coverage. We demonstrate that the error of the method is on the same order as the error of the original velocity data. In addition, we have developed a web-based gateway for data restoration, and we will demonstrate a practical application using available data. This work is supported by the NSF Grant No. 1520825.
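
    The published approach transports vorticity with nonlinear anisotropic diffusion and advection; the sketch below only illustrates the simplest PDE-style reconstruction, an isotropic diffusion (Laplace) relaxation applied to the masked cells of one velocity component on a synthetic field.

```python
# Much-simplified illustration of PDE-based gap filling: iterate a Laplace
# (isotropic diffusion) relaxation over masked grid cells of one velocity
# component. The published model instead transports vorticity with nonlinear
# anisotropic diffusion and advection; this sketch only shows the basic idea.
import numpy as np

def fill_gaps(field, mask, n_iter=2000):
    """Fill cells where mask is True by repeated 4-neighbour averaging."""
    f = np.array(field, dtype=float)
    f[mask] = np.nanmean(field[~mask])        # crude initial guess
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                      np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[mask] = avg[mask]                   # update only the missing cells
    return f

# Synthetic velocity component with a rectangular gap.
x, y = np.meshgrid(np.linspace(0, 1, 60), np.linspace(0, 1, 60))
u_true = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
mask = (x > 0.4) & (x < 0.6) & (y > 0.3) & (y < 0.7)
u_obs = np.where(mask, np.nan, u_true)

u_filled = fill_gaps(u_obs, mask)
print("max error in filled region:", np.abs(u_filled - u_true)[mask].max())
```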

  17. Microthrix parvicella abundance associates with activated sludge settling velocity and rheology - Quantifying and modelling filamentous bulking

    DEFF Research Database (Denmark)

    Wágner, Dorottya Sarolta; Ramin, Elham; Szabo, Peter

    2015-01-01

    The objective of this work is to identify relevant settling velocity and rheology model parameters and to assess the underlying filamentous microbial community characteristics that can influence the solids mixing and transport in secondary settling tanks. Parameter values for hindered, transient and compression settling velocity functions were estimated by carrying out biweekly batch settling tests using a novel column setup through a four-month long measurement campaign. To estimate viscosity model parameters, rheological experiments were carried out on the same sludge sample using a rotational viscometer. Quantitative fluorescence in-situ hybridisation (qFISH) analysis, targeting Microthrix parvicella and phylum Chloroflexi, was used. This study finds that M. parvicella - predominantly residing inside the microbial flocs in our samples - can significantly influence secondary settling through...

  18. Uncertainty estimation of the velocity model for stations of the TrigNet GPS network

    Science.gov (United States)

    Hackl, M.; Malservisi, R.; Hugentobler, U.

    2010-12-01

    Satellite-based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that error models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectra analyses and maximum likelihood estimates is computationally expensive and is usually not carried out for every site, but the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied to the TrigNet time series a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise). Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal in agreement with previous studies, which allows for a reliable estimation of the velocity error. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Small differences may originate from the non-normal distribution of the noise.
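
    To illustrate the kind of time-averaged statistic referred to above, the sketch below computes a plain non-overlapping Allan deviation on a synthetic, detrended daily position series containing white plus random-walk-like noise. It is not the authors' exact estimator, and the noise amplitudes are arbitrary.

```python
# Sketch of an Allan-variance-type statistic applied to a synthetic daily GPS
# position series (white noise + random-walk-like coloured noise + linear rate).
# This is a plain non-overlapping Allan deviation, not the authors' estimator.
import numpy as np

rng = np.random.default_rng(2)
n_days = 3000
t = np.arange(n_days)
rate = 0.02                                       # mm/day "true" velocity
white = 2.0 * rng.normal(size=n_days)             # mm
walk = 0.05 * np.cumsum(rng.normal(size=n_days))  # coloured noise component
pos = rate * t + white + walk
pos = pos - np.polyval(np.polyfit(t, pos, 1), t)  # detrend before the noise analysis

def allan_deviation(x, m):
    """Non-overlapping Allan deviation for averaging windows of m samples."""
    k = len(x) // m
    means = x[: k * m].reshape(k, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

for m in (1, 4, 16, 64, 256):
    print(f"tau = {m:4d} days  ADEV = {allan_deviation(pos, m):.3f} mm")
```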

  19. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    Science.gov (United States)

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a space-state model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal

  20. Hindrance Velocity Model for Phase Segregation in Suspensions of Poly-dispersed Randomly Oriented Spheroids

    Science.gov (United States)

    Faroughi, S. A.; Huber, C.

    2015-12-01

    Crystal settling and bubble migration in magmas have significant effects on the physical and chemical evolution of magmas. The rate of phase segregation is controlled by the force balance that governs the migration of particles suspended in the melt. The relative velocity of a single particle or bubble in a quiescent infinite fluid (melt) is well characterized; however, the interplay between particles or bubbles in suspensions and emulsions and its effect on their settling/rising velocity remains poorly quantified. We propose a theoretical model for the hindered velocity of non-Brownian emulsions of nondeformable droplets, and suspensions of spherical solid particles in the creeping flow regime. The model is based on three sets of hydrodynamic corrections: two on the drag coefficient experienced by each particle to account for both return flow and Smoluchowski effects and a correction on the mixture rheology to account for nonlocal interactions between particles. The model is then extended for mono-disperse non-spherical solid particles that are randomly oriented. The non-spherical particles are idealized as spheroids and characterized by their aspect ratio. The poly-disperse nature of natural suspensions is then taken into consideration by introducing an effective volume fraction of particles for each class of mono-disperse particle sizes. Our model is tested against new and published experimental data over a wide range of particle volume fraction and viscosity ratios between the constituents of dispersions. We find an excellent agreement between our model and experiments. We also show two significant applications for our model: (1) We demonstrate that hindered settling can increase mineral residence time by up to an order of magnitude in convecting magma chambers. (2) We provide a model to correct for particle interactions in the conventional hydrometer test to estimate the particle size distribution in soils. Our model offers a greatly improved agreement with
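
    As a baseline for what a hindrance correction does, the sketch below combines a Stokes terminal velocity with the widely used Richardson-Zaki factor (1 - phi)^n. This is not the model proposed in the abstract, which builds separate corrections for return flow, the Smoluchowski effect, and mixture rheology; the particle and melt properties below are illustrative.

```python
# Baseline hindered-settling sketch: Stokes terminal velocity for a single
# sphere, reduced by the classical Richardson-Zaki hindrance factor (1 - phi)^n.
# This is *not* the model proposed in the abstract, only a common reference point.
import numpy as np

def stokes_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Terminal velocity (m/s) of an isolated sphere in creeping flow."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

def hindered_velocity(d, phi, rho_p, rho_f, mu, n=4.65):
    """Richardson-Zaki correction for a suspension of volume fraction phi."""
    return stokes_velocity(d, rho_p, rho_f, mu) * (1.0 - phi) ** n

# Illustrative values: 100-micron crystals settling in a silicate melt.
d, rho_p, rho_f, mu = 100e-6, 3300.0, 2700.0, 10.0   # SI units (assumed)
for phi in (0.0, 0.1, 0.3, 0.5):
    print(f"phi = {phi:.1f}: v = {hindered_velocity(d, phi, rho_p, rho_f, mu):.2e} m/s")
```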

  1. Analytical study on the criticality of the stochastic optimal velocity model

    International Nuclear Information System (INIS)

    Kanai, Masahiro; Nishinari, Katsuhiro; Tokihiro, Tetsuji

    2006-01-01

    In recent works, we have proposed a stochastic cellular automaton model of traffic flow connecting two exactly solvable stochastic processes, i.e., the asymmetric simple exclusion process and the zero range process, with an additional parameter. It is also regarded as an extended version of the optimal velocity model, and moreover it shows particularly notable properties. In this paper, we report that when the optimal velocity function is taken to be a step function, the entire flux-density graph (i.e., the fundamental diagram) can be estimated. We first find that the fundamental diagram consists of two line segments resembling an inverted-λ form, and next identify their end-points from the microscopic behaviour of vehicles. It is notable that by using a microscopic parameter which indicates a driver's sensitivity to the traffic situation, we give an explicit formula for the critical point at which a traffic jam phase arises. We also compare these analytical results with those of the optimal velocity model, and point out the crucial differences between them
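
    The paper analyses a stochastic cellular-automaton generalization; for orientation, the sketch below integrates the classical deterministic optimal velocity model dv_i/dt = a[V(h_i) - v_i] with a step optimal-velocity function on a ring road and reports one flux-density point of a fundamental diagram. Parameter values are arbitrary.

```python
# Sketch of the classical (deterministic) optimal-velocity car-following model,
# dv_i/dt = a [ V(h_i) - v_i ], with a step optimal-velocity function on a ring
# road. The paper studies a stochastic cellular-automaton generalization; this
# only illustrates how flux-density points of a fundamental diagram are produced.
import numpy as np

L, N, a, dt, steps = 200.0, 40, 1.0, 0.05, 20000

def V(h, h_c=4.0, v_max=2.0):
    """Step optimal-velocity function: drive at v_max only if headway > h_c."""
    return np.where(h > h_c, v_max, 0.0)

x = np.sort(np.random.default_rng(3).uniform(0, L, N))   # positions on the ring
v = np.zeros(N)

for _ in range(steps):
    headway = (np.roll(x, -1) - x) % L        # distance to the car ahead
    v += a * (V(headway) - v) * dt
    v = np.clip(v, 0.0, None)
    x = (x + v * dt) % L

density = N / L
flux = density * v.mean()
print(f"density = {density:.3f} veh/m, flux = {flux:.3f} veh/s")
```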

  2. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance towards a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data is available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in solar metallicity mixture. We also assess the systematics arising from (iii) different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely, ˜1%, ˜1%, ˜2%, and ˜8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.

  3. Smoke inputs to climate models: optical properties and height distribution for nuclear winter studies

    International Nuclear Information System (INIS)

    Penner, J.E.; Haselman, L.C. Jr.

    1985-04-01

    Smoke from fires produced in the aftermath of a major nuclear exchange has been predicted to cause large decreases in land surface temperatures. The extent of the decrease and even the sign of the temperature change depend on the optical characteristics of the smoke and how it is distributed with altitude. The height distribution of smoke over a fire is determined by the amount of buoyant energy produced by the fire and the amount of energy released by the latent heat of condensation of water vapor. The optical properties of the smoke depend on the size distribution of smoke particles which changes due to coagulation within the lofted plume. We present calculations demonstrating these processes and estimate their importance for the smoke source term input for climate models. For high initial smoke densities and for absorbing smoke (m = 1.75 - 0.3i), coagulation of smoke particles within the smoke plume is predicted to first increase, then decrease, the size-integrated extinction cross section. However, at the smoke densities predicted in our model (assuming a 3% emission rate for smoke) and for our assumed initial size distribution, the attachment rates for Brownian and turbulent collision processes are not fast enough to alter the smoke size distribution enough to significantly change the integrated extinction cross section. Early-time coagulation is, however, fast enough to allow further coagulation, on longer time scales, to act to decrease the extinction cross section. On these longer time scales appropriate to climate models, coagulation can decrease the extinction cross section by almost a factor of two before the smoke becomes well mixed around the globe. This process has been neglected in past climate effect evaluations, but could have a significant effect, since the extinction cross section enters as an exponential factor in calculating the light attenuation due to smoke. 10 refs., 20 figs

  4. Crustal and mantle velocity models of southern Tibet from finite frequency tomography

    Science.gov (United States)

    Liang, Xiaofeng; Shen, Yang; Chen, Yongshun John; Ren, Yong

    2011-02-01

    Using traveltimes of teleseismic body waves recorded by several temporary local seismic arrays, we carried out finite-frequency tomographic inversions to image the three-dimensional velocity structure beneath southern Tibet to examine the roles of the upper mantle in the formation of the Tibetan Plateau. The results reveal a region of relatively high P and S wave velocity anomalies extending from the uppermost mantle to at least 200 km depth beneath the Higher Himalaya. We interpret this high-velocity anomaly as the underthrusting Indian mantle lithosphere. There is a strong low P and S wave velocity anomaly that extends from the lower crust to at least 200 km depth beneath the Yadong-Gulu rift, suggesting that rifting in southern Tibet is probably a process that involves the entire lithosphere. Intermediate-depth earthquakes in southern Tibet are located at the top of an anomalous feature in the mantle with a low Vp, a high Vs, and a low Vp/Vs ratio. One possible explanation for this unusual velocity anomaly is the ongoing granulite-eclogite transformation. Together with the compressional stress from the collision, eclogitization and the associated negative buoyancy force offer a plausible mechanism that causes the subduction of the Indian mantle lithosphere beneath the Higher Himalaya. Our tomographic model and the observation of north-dipping lineations in the upper mantle suggest that the Indian mantle lithosphere has been broken laterally in the direction perpendicular to the convergence beneath the north-south trending rifts and subducted in a progressive, piecewise and subparallel fashion with the current one beneath the Higher Himalaya.

  5. A response analysis with effective stress model by using vertical input motions

    International Nuclear Information System (INIS)

    Yamanouchi, H.; Ohkawa, I.; Chiba, O.; Tohdo, M.; Kaneko, O.

    1987-01-01

    In Japan, nuclear power plant reactor buildings are, as a rule, supported directly on hard soil. When determining the input motions used to design these buildings, the amplification of the hard soil deposits is generally examined by total stress analysis. However, when the supporting hard soil is replaced by a slightly softer medium such as sandy or gravelly soil, the presence of pore water, in other words the contribution of the pore water pressure to the total stress, cannot be ignored even in a practical sense. In this paper the authors define an analytical model that considers the effective stress-strain relation. In the analyses, the response in the vertical direction is first used to evaluate the confining pressure. In the next step, the generation and dissipation of the pore water pressure are taken into account, together with the effect of the confining pressure. These procedures are applied to the response computations of horizontally layered soil deposits

  6. Determination of the arterial input function in mouse-models using clinical MRI

    International Nuclear Information System (INIS)

    Theis, D.; Fachhochschule Giessen-Friedberg; Keil, B.; Heverhagen, J.T.; Klose, K.J.; Behe, M.; Fiebich, M.

    2008-01-01

    Dynamic contrast-enhanced magnetic resonance imaging is a promising method for quantitative analysis of tumor perfusion and is increasingly used in the study of cancer in small animal models. In those studies, the determination of the arterial input function (AIF) of the target tissue can be the first step. Series of short-axis images of the heart were acquired during administration of a bolus of Gd-DTPA using saturation-recovery gradient echo pulse sequences. The AIF was determined from the changes of the signal intensity in the left ventricle. The native T1 relaxation times and AIF were determined for 11 mice. An average value of (1.16 ± 0.09) s for the native T1 relaxation time was measured. However, the AIF showed significant inter-animal variability, as previously observed by other authors. The inter-animal variability shows that a direct measurement of the AIF is reasonable to avoid significant errors. The proposed method for determination of the AIF proved to be reliable. (orig.)

  7. Multiregional input-output model for China's farm land and water use.

    Science.gov (United States)

    Guo, Shan; Shen, Geoffrey Qiping

    2015-01-06

    Land and water are the two main drivers of agricultural production. Pressure on farm land and water resources is increasing in China due to rising food demand. Domestic trade affects China's regional farm land and water use by distributing resources associated with the production of goods and services. This study constructs a multiregional input-output model to simultaneously analyze China's farm land and water uses embodied in consumption and interregional trade. Results show a great similarity for both China's farm land and water endowments. Shandong, Henan, Guangdong, and Yunnan are the most important drivers of farm land and water consumption in China, even though they have relatively few land and water resource endowments. Significant net transfers of embodied farm land and water flows are identified from the central and western areas to the eastern area via interregional trade. Heilongjiang is the largest farm land and water supplier, in contrast to Shanghai as the largest receiver. The results help policy makers to comprehensively understand embodied farm land and water flows in a complex economy network. Improving resource utilization efficiency and reshaping the embodied resource trade nexus should be addressed by considering the transfer of regional responsibilities.
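
    A minimal single-region sketch of how embodied resource use is computed in an input-output framework: direct land and water intensities are propagated through the Leontief inverse and attributed to final demand. The three-sector coefficients, intensities, and demand vector below are invented; the study itself uses a Chinese multiregional table.

```python
# Minimal sketch of embodied-resource accounting in an input-output framework:
# total (direct + indirect) farm-land and water requirements follow from direct
# intensities times the Leontief inverse. All numbers are invented placeholders.
import numpy as np

A = np.array([[0.15, 0.10, 0.05],     # technical coefficient matrix (assumed)
              [0.20, 0.25, 0.10],
              [0.05, 0.10, 0.20]])
x = np.array([500.0, 800.0, 600.0])   # gross output by sector (assumed units)

land_use  = np.array([120.0, 10.0, 5.0])    # direct farm-land use by sector (kha)
water_use = np.array([300.0, 40.0, 20.0])   # direct water use by sector (Mm^3)

L = np.linalg.inv(np.eye(3) - A)            # Leontief inverse
land_intensity  = land_use / x              # direct intensity per unit output
water_intensity = water_use / x

y = np.array([200.0, 350.0, 400.0])         # final demand (assumed)
embodied_land  = land_intensity  @ L @ np.diag(y)   # attributed to final demand
embodied_water = water_intensity @ L @ np.diag(y)

print("embodied farm land by sector of final demand:", np.round(embodied_land, 1))
print("embodied water by sector of final demand:   ", np.round(embodied_water, 1))
```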

  8. Process Debottlenecking and Retrofit of Palm Oil Milling Process via Inoperability Input-Output Modelling

    Directory of Open Access Journals (Sweden)

    May Tan May

    2018-01-01

    Full Text Available In recent years, there has been an increase in crude palm oil (CPO) demand, resulting in palm oil mills (POMs) seizing the opportunity to increase CPO production to make more profits. In existing POMs, a series of equipment items is designed to operate at optimum capacity. Some equipment may be limited by its maximum design capacity when there is a need to increase CPO production, resulting in process bottlenecks. In this research, a framework is developed to provide stepwise procedures for identifying bottlenecks and retrofitting a POM process to cater for the increase in production capacity. This framework adapts an algebraic approach known as Inoperability Input-Output Modelling (IIM). To illustrate the application of the framework, an industrial POM case study was solved using the LINGO software in this work, by maximising its production capacity. Benefit-to-Cost Ratio (BCR) analysis was also performed to assess the economic feasibility. As a result, the Screw Press was identified as the bottleneck. The retrofitting recommendation was to purchase an additional Screw Press to cater for the new throughput, with a BCR of 54.57. The POM was found to be able to achieve the maximum targeted production capacity of 8,139.65 kg/hr of CPO without any bottlenecks.

  9. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  10. RUSLE2015: Modelling soil erosion at continental scale using high resolution input layers

    Science.gov (United States)

    Panagos, Panos; Borrelli, Pasquale; Meusburger, Katrin; Poesen, Jean; Ballabio, Cristiano; Lugato, Emanuele; Montanarella, Luca; Alewell, Christine

    2016-04-01

    Soil erosion by water is one of the most widespread forms of soil degradation in Europe. On the occasion of the 2015 celebration of the International Year of Soils, the European Commission's Joint Research Centre (JRC) published RUSLE2015, a modified modelling approach for assessing soil erosion in Europe using the best available input data layers. The objective of the recent assessment performed with RUSLE2015 was to improve our knowledge and understanding of soil erosion by water across the European Union and to accentuate the differences and similarities between different regions and countries beyond national borders and nationally adapted models. RUSLE2015 has maximized the use of available homogeneous, updated, pan-European datasets (LUCAS topsoil, LUCAS survey, GAEC, Eurostat crops, Eurostat Management Practices, REDES, DEM 25m, CORINE, European Soil Database) and has used the approach best suited to modelling soil erosion at the European scale. The collaboration of JRC with many scientists around Europe and numerous prominent European universities and institutes resulted in an improved assessment of individual risk factors (rainfall erosivity, soil erodibility, cover-management, topography and support practices) and a final harmonized European soil erosion map at high resolution. The mean soil loss rate in the European Union's erosion-prone lands (agricultural, forests and semi-natural areas) was found to be 2.46 t ha-1 yr-1, resulting in a total soil loss of 970 Mt annually, equal to an area the size of Berlin (assuming a removal of 1 meter). According to the RUSLE2015 model, approximately 12.7% of arable land in the European Union is estimated to suffer from moderate to high erosion (>5 t ha-1 yr-1). This equates to an area of 140,373 km2, roughly the surface area of Greece (Environmental Science & Policy, 54, 438-447; 2015). Even the mean erosion rate outstrips the mean formation rate (walls and contouring) through the common agricultural
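
    The underlying calculation is the (R)USLE soil-loss equation A = R · K · LS · C · P evaluated cell by cell. The sketch below applies it to small synthetic raster layers; the factor ranges are placeholders, not the pan-European input layers (REDES, LUCAS, etc.) behind RUSLE2015.

```python
# The (R)USLE soil-loss equation A = R * K * LS * C * P evaluated cell-by-cell
# on small synthetic raster layers. Factor values are placeholders, not the
# pan-European input layers used by RUSLE2015.
import numpy as np

shape = (4, 4)
rng = np.random.default_rng(4)
R  = rng.uniform(400, 1200, shape)   # rainfall erosivity, MJ mm ha-1 h-1 yr-1
K  = rng.uniform(0.02, 0.05, shape)  # soil erodibility, t ha h ha-1 MJ-1 mm-1
LS = rng.uniform(0.1, 3.0, shape)    # slope length and steepness factor (-)
C  = rng.uniform(0.01, 0.3, shape)   # cover-management factor (-)
P  = rng.uniform(0.5, 1.0, shape)    # support-practice factor (-)

A = R * K * LS * C * P               # soil loss, t ha-1 yr-1
cell_area_ha = 6.25                  # e.g. a 250 m grid cell

print(f"mean soil loss: {A.mean():.2f} t ha-1 yr-1")
print(f"total soil loss over the raster: {(A * cell_area_ha).sum():.1f} t yr-1")
```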

  11. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
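
    A plausible reconstruction of the fitting step: a continuous three-segment piecewise-linear function with two breakpoints fitted to a synthetic DPOAE I/O function, with the first breakpoint read off as the compression threshold. The exact parameterisation used in the study may differ, and the levels and slopes below are invented.

```python
# Generic three-segment piecewise-linear fit to a synthetic DPOAE I/O function.
# The breakpoint between the first (steep) and second (shallow) segment is taken
# as the compression threshold. This is a plausible reconstruction, not
# necessarily the study's exact regression model.
import numpy as np
from scipy.optimize import curve_fit

def three_segment(L2, b1, b2, y1, s1, s2, s3):
    """Continuous piecewise-linear function with breakpoints b1 < b2."""
    return np.where(L2 <= b1, y1 + s1 * (L2 - b1),
           np.where(L2 <= b2, y1 + s2 * (L2 - b1),
                    y1 + s2 * (b2 - b1) + s3 * (L2 - b2)))

L2 = np.linspace(45, 70, 11)                    # primary level, dB SPL
true = (55.0, 65.0, -5.0, 1.0, 0.3, 0.8)        # synthetic "ear"
dpoae = three_segment(L2, *true) + 0.5 * np.random.default_rng(5).normal(size=L2.size)

p0 = (52.0, 63.0, -4.0, 1.0, 0.5, 1.0)
popt, _ = curve_fit(three_segment, L2, dpoae, p0=p0, maxfev=20000)
print(f"estimated compression threshold: {popt[0]:.1f} dB SPL, "
      f"compressed-segment slope: {popt[4]:.2f} dB/dB")
```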

  12. Low-velocity Impact Response of a Nanocomposite Beam Using an Analytical Model

    Directory of Open Access Journals (Sweden)

    Mahdi Heydari Meybodi

    Full Text Available AbstractLow-velocity impact of a nanocomposite beam made of glass/epoxy reinforced with multi-wall carbon nanotubes and clay nanoparticles is investigated in this study. Exerting modified rule of mixture (MROM, the mechanical properties of nanocomposite including matrix, nanoparticles or multi-wall carbon nanotubes (MWCNT, and fiber are attained. In order to analyze the low-velocity impact, Euler-Bernoulli beam theory and Hertz's contact law are simultaneously employed to govern the equations of motion. Using Ritz's variational approximation method, a set of nonlinear equations in time domain are obtained, which are solved using a fourth order Runge-Kutta method. The effect of different parameters such as adding nanoparticles or MWCNT's on maximum contact force and energy absorption, stacking sequence, geometrical dimensions (i.e., length, width and height, and initial velocity of the impactor have been studied comprehensively on dynamic behavior of the nanocomposite beam. In addition, the result of analytical model is compared with Finite Element Modeling (FEM.The results reveal that the effect of nanoparticles on energy absorption is more considerable at higher impact energies.

  13. Velocity statistics for interacting edge dislocations in one dimension from Dyson's Coulomb gas model.

    Science.gov (United States)

    Jafarpour, Farshid; Angheluta, Luiza; Goldenfeld, Nigel

    2013-10-01

    The dynamics of edge dislocations with parallel Burgers vectors, moving in the same slip plane, is mapped onto Dyson's model of a two-dimensional Coulomb gas confined in one dimension. We show that the tail distribution of the velocity of dislocations is power law in form, as a consequence of the pair interaction of nearest neighbors in one dimension. In two dimensions, we show the presence of a pairing phase transition in a system of interacting dislocations with parallel Burgers vectors. The scaling exponent of the velocity distribution at effective temperatures well below this pairing transition temperature can be derived from the nearest-neighbor interaction, while near the transition temperature, the distribution deviates from the form predicted by the nearest-neighbor interaction, suggesting the presence of collective effects.

  14. High-velocity two-phase flow two-dimensional modeling

    International Nuclear Information System (INIS)

    Mathes, R.; Alemany, A.; Thilbault, J.P.

    1995-01-01

    The two-phase flow in the nozzle of a LMMHD (liquid metal magnetohydrodynamic) converter has been studied numerically and experimentally. A two-dimensional model for two-phase flow has been developed including the viscous terms (dragging and turbulence) and the interfacial mass, momentum and energy transfer between the phases. The numerical results were obtained by a finite volume method based on the SIMPLE algorithm. They have been verified by an experimental facility using air-water as a simulation pair and a phase Doppler particle analyzer for velocity and droplet size measurement. The numerical simulation of a lithium-cesium high-temperature pair showed that a nearly homogeneous and isothermal expansion of the two phases is possible with small pressure losses and high kinetic efficiencies. In the throat region a careful profiling is necessary to reduce the inertial effects on the liquid velocity field

  15. The Dynamics of M15: Observations of the Velocity Dispersion Profile and Fokker-Planck Models

    Science.gov (United States)

    Dull, J. D.; Cohn, H. N.; Lugger, P. M.; Murphy, B. W.; Seitzer, P. O.; Callanan, P. J.; Rutten, R. G. M.; Charles, P. A.

    1997-05-01

    We report a new measurement of the velocity dispersion profile within 1' (3 pc) of the center of the globular cluster M15 (NGC 7078), using long-slit spectra from the 4.2 m William Herschel Telescope at La Palma Observatory. We obtained spatially resolved spectra for a total of 23 slit positions during two observing runs. During each run, a set of parallel slit positions was used to map out the central region of the cluster; the position angle used during the second run was orthogonal to that used for the first. The spectra are centered in wavelength near the Ca II infrared triplet at 8650 Å, with a spectral range of about 450 Å. We determined radial velocities by cross-correlation techniques for 131 cluster members. A total of 32 stars were observed more than once. Internal and external comparisons indicate a velocity accuracy of about 4 km s-1. The velocity dispersion profile rises from about σ = 7.2 +/- 1.4 km s-1 near 1' from the center of the cluster to σ = 13.9 +/- 1.8 km s-1 at 20". Inside of 20", the dispersion remains approximately constant at about 10.2 +/- 1.4 km s-1 with no evidence for a sharp rise near the center. This last result stands in contrast with that of Peterson, Seitzer, & Cudworth who found a central velocity dispersion of 25 +/- 7 km s-1, based on a line-broadening measurement. Our velocity dispersion profile is in good agreement with those determined in the recent studies of Gebhardt et al. and Dubath & Meylan. We have developed a new set of Fokker-Planck models and have fitted these to the surface brightness and velocity dispersion profiles of M15. We also use the two measured millisecond pulsar accelerations as constraints. The best-fitting model has a mass function slope of x = 0.9 (where 1.35 is the slope of the Salpeter mass function) and a total mass of 4.9 × 10⁵ M⊙. This model contains approximately 10⁴ neutron stars (3% of the total mass), the majority of which lie within 6" (0.2 pc) of the cluster center. Since the

  16. A new approach to modeling temperature-related mortality: Non-linear autoregressive models with exogenous input.

    Science.gov (United States)

    Lee, Cameron C; Sheridan, Scott C

    2018-07-01

    Temperature-mortality relationships are nonlinear, time-lagged, and can vary depending on the time of year and geographic location, all of which limits the applicability of simple regression models in describing these associations. This research demonstrates the utility of an alternative method for modeling such complex relationships that has gained recent traction in other environmental fields: nonlinear autoregressive models with exogenous input (NARX models). All-cause mortality data and multiple temperature-based data sets were gathered from 41 different US cities, for the period 1975-2010, and subjected to ensemble NARX modeling. Models generally performed better in larger cities and during the winter season. Across the US, median absolute percentage errors were 10% (ranging from 4% to 15% in various cities), the average improvement in the r-squared over that of a simple persistence model was 17% (6-24%), and the hit rate for modeling spike days in mortality (>80th percentile) was 54% (34-71%). Mortality responded acutely to hot summer days, peaking at 0-2 days of lag before dropping precipitously, and there was an extended mortality response to cold winter days, peaking at 2-4 days of lag and dropping slowly and continuing for multiple weeks. Spring and autumn showed both of the aforementioned temperature-mortality relationships, but generally to a lesser magnitude than what was seen in summer or winter. When compared to distributed lag nonlinear models, NARX model output was nearly identical. These results highlight the applicability of NARX models for use in modeling complex and time-dependent relationships for various applications in epidemiology and environmental sciences. Copyright © 2018 Elsevier Inc. All rights reserved.
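
    A much-simplified NARX-style sketch: daily mortality is regressed on its own recent lags plus lagged temperature (the exogenous input) with a small neural network. The synthetic data, lag lengths, and network size are assumptions and do not reproduce the ensemble NARX modelling described above.

```python
# NARX-style sketch: regress daily mortality on its own recent lags plus lagged
# temperature (the exogenous input) using a small neural network. Data are
# synthetic and far simpler than the study's ensemble NARX setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n_days = 3000
doy = np.arange(n_days) % 365
temp = 12 + 14 * np.sin(2 * np.pi * (doy - 100) / 365) + 3 * rng.normal(size=n_days)

# Synthetic mortality: baseline seasonality, an acute response to hot days, and
# a lagged response to cold days.
mort = (30 + 5 * np.cos(2 * np.pi * doy / 365)
        + 0.8 * np.clip(temp - 28, 0, None)
        + 0.5 * np.clip(5 - np.roll(temp, 3), 0, None)
        + rng.normal(size=n_days))

def lagged_design(y, x, p=7, q=7):
    """Rows: [y(t-1..t-p), x(t-1..t-q)] -> target y(t)."""
    start = max(p, q)
    rows = [np.r_[y[t - p:t][::-1], x[t - q:t][::-1]] for t in range(start, len(y))]
    return np.array(rows), y[start:]

X, y = lagged_design(mort, temp)
split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = 100 * np.mean(np.abs(pred - y[split:]) / y[split:])
print(f"hold-out mean absolute percentage error: {mape:.1f}%")
```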

  17. Dry deposition models for radionuclides dispersed in air: a new approach for deposition velocity evaluation schema

    Science.gov (United States)

    Giardina, M.; Buffa, P.; Cervone, A.; De Rosa, F.; Lombardo, C.; Casamirra, M.

    2017-11-01

    In the framework of a National Research Program funded by the Italian Minister of Economic Development, the Department of Energy, Information Engineering and Mathematical Models (DEIM) of Palermo University and ENEA Research Centre of Bologna, Italy are performing several research activities to study physical models and mathematical approaches aimed at investigating dry deposition mechanisms of radioactive pollutants. On the basis of such studies, a new approach to evaluate the dry deposition velocity for particles is proposed. Comparisons with some literature experimental data show that the proposed dry deposition scheme can capture the main phenomena involved in the dry deposition process successfully.

  18. Spectral analysis of surface waves method to assess shear wave velocity within centrifuge models

    Science.gov (United States)

    Murillo, Carol Andrea; Thorel, Luc; Caicedo, Bernardo

    2009-06-01

    The method of the spectral analysis of surface waves (SASW) is tested out on reduced scale centrifuge models, with a specific device, called the mini Falling Weight, developed for this purpose. Tests are performed on layered materials made of a mixture of sand and clay. The shear wave velocity VS determined within the models using the SASW is compared with the laboratory measurements carried out using the bender element test. The results show that the SASW technique applied to centrifuge testing is a relevant method to characterize VS near the surface.

  19. Multiple Model Adaptive Attitude Control of LEO Satellite with Angular Velocity Constraints

    Science.gov (United States)

    Shahrooei, Abolfazl; Kazemi, Mohammad Hosein

    2018-04-01

    In this paper, the multiple model adaptive control is utilized to improve the transient response of attitude control system for a rigid spacecraft. An adaptive output feedback control law is proposed for attitude control under angular velocity constraints and its almost global asymptotic stability is proved. The multiple model adaptive control approach is employed to counteract large uncertainty in parameter space of the inertia matrix. The nonlinear dynamics of a low earth orbit satellite is simulated and the proposed control algorithm is implemented. The reported results show the effectiveness of the suggested scheme.

  20. Critique of the use of deposition velocity in modeling indoor air quality

    International Nuclear Information System (INIS)

    Nazaroff, W.W.; Weschler, C.J.

    1993-01-01

    Among the potential fates of indoor air pollutants are a variety of physical and chemical interactions with indoor surfaces. In deterministic mathematical models of indoor air quality, these interactions are usually represented as a first-order loss process, with the loss rate coefficient given as the product of the surface-to-volume ratio of the room times a deposition velocity. In this paper, the validity of this representation of surface-loss mechanisms is critically evaluated. From a theoretical perspective, the idea of a deposition velocity is consistent with the following representation of an indoor air environment. Pollutants are well-mixed throughout a core region which is separated from room surfaces by boundary layers. Pollutants migrate through the boundary layers by a combination of diffusion (random motion resulting from collisions with surrounding gas molecules), advection (transport by net motion of the fluid), and, in some cases, other transport mechanisms. The rate of pollutant loss to a surface is governed by a combination of the rate of transport through the boundary layer and the rate of reaction at the surface. The deposition velocity expresses the pollutant flux density (mass or moles deposited per area per time) to the surface divided by the pollutant concentration in the core region. This concept has substantial value to the extent that the flux density is proportional to core concentration. Published results from experimental and modeling studies of fine particles, radon decay products, ozone, and nitrogen oxides are used as illustrations of both the strengths and weaknesses of deposition velocity as a parameter to indicate the rate of indoor air pollutant loss on surfaces. 66 refs., 5 tabs
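
    The bookkeeping described above can be written compactly. The relations below simply restate the definitions given in the text with standard symbols (flux density J, core concentration C, surface area A, room volume V); they are not taken verbatim from the cited studies.

```latex
% First-order surface-loss bookkeeping implied by the deposition-velocity concept.
% J   = pollutant flux density to the surface (mass area^-1 time^-1)
% C   = pollutant concentration in the well-mixed core region
% v_d = deposition velocity,  A/V = surface-to-volume ratio of the room
\begin{align}
  v_d &= \frac{J}{C}, \\
  \left.\frac{dC}{dt}\right|_{\text{surface loss}}
      &= -\frac{A}{V}\, v_d\, C \;\equiv\; -k_d\, C,
  \qquad k_d = \frac{A}{V}\, v_d \;\; [\text{time}^{-1}].
\end{align}
```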

  1. The thin section rock physics: Modeling and measurement of seismic wave velocity on the slice of carbonates

    Energy Technology Data Exchange (ETDEWEB)

    Wardaya, P. D., E-mail: pongga.wardaya@utp.edu.my; Noh, K. A. B. M., E-mail: pongga.wardaya@utp.edu.my; Yusoff, W. I. B. W., E-mail: pongga.wardaya@utp.edu.my [Petroleum Geosciences Department, Universiti Teknologi PETRONAS, Tronoh, Perak, 31750 (Malaysia); Ridha, S. [Petroleum Engineering Department, Universiti Teknologi PETRONAS, Tronoh, Perak, 31750 (Malaysia); Nurhandoko, B. E. B. [Wave Inversion and Subsurface Fluid Imaging Research Laboratory (WISFIR), Dept. of Physics, Institute of Technology Bandung, Bandung, Indonesia and Rock Fluid Imaging Lab, Bandung (Indonesia)

    2014-09-25

    This paper discusses a new approach for investigating the seismic wave velocity of rocks, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach follows the digital rock physics view, which relies on numerical experiments. Thus, instead of using a core sample, we use the thin section image of a carbonate rock to measure the effective seismic wave velocity when travelling through it. In the numerical experiment, thin section images act as the medium on which wave propagation is simulated. For the modeling, an advanced technique based on an artificial neural network was employed for building the velocity and density profiles, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. Then, ultrasonic wave propagation through the thin section image was simulated using the finite difference time domain method, based on the assumption of an acoustic-isotropic medium. Effective velocities were derived from the recorded signal and compared with velocity predictions from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that the Kuster-Toksoz model gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach we found a good agreement between the numerical experiment and the theoretically derived rock physics models for estimating the effective seismic

  2. The thin section rock physics: Modeling and measurement of seismic wave velocity on the slice of carbonates

    International Nuclear Information System (INIS)

    Wardaya, P. D.; Noh, K. A. B. M.; Yusoff, W. I. B. W.; Ridha, S.; Nurhandoko, B. E. B.

    2014-01-01

    This paper discusses a new approach for investigating the seismic wave velocity of rocks, specifically carbonates, as affected by their pore structures. While the conventional routine of seismic velocity measurement depends heavily on extensive laboratory experiments, the proposed approach follows the digital rock physics view, which relies on numerical experiments. Thus, instead of using a core sample, we use the thin section image of a carbonate rock to measure the effective seismic wave velocity when travelling through it. In the numerical experiment, thin section images act as the medium on which wave propagation is simulated. For the modeling, an advanced technique based on an artificial neural network was employed for building the velocity and density profiles, replacing the image's RGB pixel values with the seismic velocity and density of each rock constituent. Then, ultrasonic wave propagation through the thin section image was simulated using the finite difference time domain method, based on the assumption of an acoustic-isotropic medium. Effective velocities were derived from the recorded signal and compared with velocity predictions from the Wyllie time average model and the Kuster-Toksoz rock physics model. To perform the modeling, image analysis routines were undertaken to quantify the pore aspect ratio, which is assumed to represent the rock's pore structure. In addition, the porosity and mineral fractions required for velocity modeling were also quantified using an integrated neural network and image analysis technique. It was found that the Kuster-Toksoz model gives a closer prediction to the measured velocity than the Wyllie time average model. We also conclude that the Wyllie time average model, which does not incorporate the pore structure parameter, deviates significantly for samples having more than 40% porosity. Utilizing this approach we found a good agreement between the numerical experiment and the theoretically derived rock physics models for estimating the effective seismic wave

  3. Acoustic Velocity and Attenuation in Magnetorheological fluids based on an effective density fluid model

    Directory of Open Access Journals (Sweden)

    Shen Min

    2016-01-01

    Full Text Available Magnetorheological fluids (MRFs) represent a class of smart materials whose rheological properties change in response to a magnetic field, resulting in a drastic change of the acoustic impedance. This paper presents an acoustic propagation model that approximates a fluid-saturated porous medium as a fluid with a bulk modulus and an effective density (EDFM) in order to study acoustic propagation in MRF materials under a magnetic field. The effective density fluid model is derived from Biot's theory, with some minor changes applied to model both the fluid-like and the solid-like states of the MRF material. The attenuation and velocity variation of the MRF are numerically calculated. The calculated results show that, for the MRF material, the attenuation and velocity predicted with this effective density fluid model are in close agreement with previous predictions from Biot's theory. We demonstrate that, for acoustic predictions in MRF materials, the effective density fluid model is an accurate alternative to the full Biot theory and is much simpler to implement.

  4. Comparison of different snow model formulations and their responses to input uncertainties in the Upper Indus Basin

    Science.gov (United States)

    Pritchard, David; Fowler, Hayley; Forsythe, Nathan; O'Donnell, Greg; Rutter, Nick; Bardossy, Andras

    2017-04-01

    Snow and glacier melt in the mountainous Upper Indus Basin (UIB) sustain water supplies, irrigation networks, hydropower production and ecosystems in extensive downstream lowlands. Understanding hydrological and cryospheric sensitivities to climatic variability and change in the basin is therefore critical for local, national and regional water resources management. Assessing these sensitivities using numerical modelling is challenging, due to limitations in the quality and quantity of input and evaluation data, as well as uncertainties in model structures and parameters. This study explores how these uncertainties in inputs and process parameterisations affect distributed simulations of ablation in the complex climatic setting of the UIB. The role of model forcing uncertainties is explored using combinations of local observations, remote sensing and reanalysis - including the high resolution High Asia Refined Analysis - to generate multiple realisations of spatiotemporal model input fields. Forcing a range of model structures with these input fields then provides an indication of how different ablation parameterisations respond to uncertainties and perturbations in climatic drivers. Model structures considered include simple, empirical representations of melt processes through to physically based, full energy balance models with multi-physics options for simulating snowpack evolution (including an adapted version of FSM). Analysing model input and structural uncertainties in this way provides insights for methodological choices in climate sensitivity assessments of data-sparse, high mountain catchments. Such assessments are key for supporting water resource management in these catchments, particularly given the potential complications of enhanced warming through elevation effects or, in the case of the UIB, limited understanding of how and why local climate change signals differ from broader patterns.
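
    At the simple, empirical end of the model hierarchy mentioned above sits the temperature-index (degree-day) formulation. The sketch below is a generic version of that class of melt parameterisation, not one of the study's actual model configurations, and the parameter values are purely illustrative.

```python
# Temperature-index (degree-day) melt: the simplest of the empirical ablation
# parameterisations referred to above. Values are illustrative only.
import numpy as np

def degree_day_melt(t_air, ddf=4.0, t_thresh=0.0):
    """Daily melt (mm w.e.) = DDF * max(T_air - T_threshold, 0).

    t_air    : daily mean air temperature (deg C), 1-D array
    ddf      : degree-day factor (mm w.e. per deg C per day), assumed value
    t_thresh : melt threshold temperature (deg C)
    """
    return ddf * np.maximum(t_air - t_thresh, 0.0)

t_air = np.array([-3.0, 1.5, 4.0, 6.2, 0.5])
melt = degree_day_melt(t_air)
print(melt, "total:", melt.sum(), "mm w.e.")
```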

  5. Evaluation of precipitation input for SWAT modeling in Alpine catchment: A case study in the Adige river basin (Italy).

    Science.gov (United States)

    Tuo, Ye; Duan, Zheng; Disse, Markus; Chiogna, Gabriele

    2016-12-15

    Precipitation is often the most important input data in hydrological models when simulating streamflow. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the one precipitation gauge station that is nearest to the centroid of each subbasin, which is eventually corrected using the elevation band method. This leads in general to inaccurate representation of subbasin precipitation input data, particularly in catchments with complex topography. To investigate the impact of different precipitation inputs on the SWAT model simulations in Alpine catchments, 13 years (1998-2010) of daily precipitation data from four datasets including OP (Observed precipitation), IDW (Inverse Distance Weighting data), CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) and TRMM (Tropical Rainfall Measuring Mission) have been considered. Both model performances (comparing simulated and measured streamflow data at the catchment outlet) as well as parameter and prediction uncertainties have been quantified. For all three subbasins, the use of elevation bands is fundamental to match the water budget. Streamflow predictions obtained using IDW inputs are better than those obtained using the other datasets in terms of both model performance and prediction uncertainty. Models using the CHIRPS product as input provide satisfactory streamflow estimation, suggesting that this satellite product can be applied to this data-scarce Alpine region. Comparing the performance of SWAT models using different precipitation datasets is therefore important in data-scarce regions. This study has shown that precipitation is the main source of uncertainty, and different precipitation datasets in SWAT models lead to different best estimate ranges for the calibrated parameters. This has important implications for the interpretation of the simulated hydrological processes. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
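
    For readers unfamiliar with the IDW dataset referred to above, the following sketch shows how inverse distance weighting could interpolate daily gauge precipitation to a subbasin centroid; the gauge layout, weighting exponent and values are illustrative and not taken from the study.

```python
# Minimal inverse-distance-weighting (IDW) sketch for interpolating daily gauge
# precipitation to a subbasin centroid (illustrative; not the study's code).
import numpy as np

def idw(gauge_xy, gauge_p, target_xy, power=2.0, eps=1e-12):
    d = np.hypot(gauge_xy[:, 0] - target_xy[0], gauge_xy[:, 1] - target_xy[1])
    if np.any(d < eps):                  # target coincides with a gauge
        return gauge_p[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * gauge_p) / np.sum(w)

gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # gauge coordinates, km
precip = np.array([12.0, 4.0, 8.0])                         # daily totals, mm
print(idw(gauges, precip, target_xy=(3.0, 4.0)))
```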

  6. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: a shared input DEA-model.

    Science.gov (United States)

    Rogge, Nicky; De Jaeger, Simon

    2012-10-01

    This paper proposes an adjusted "shared-input" version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipality waste collection and processing performances in settings in which one input (waste costs) is shared among treatment efforts of multiple municipal solid waste fractions. The main advantage of this version of DEA is that it not only provides an estimate of the municipalities' overall cost efficiency but also estimates of the municipalities' cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared input DEA-model, we apply the model to data on 293 municipalities in Flanders, Belgium, for the year 2008. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: A shared input DEA-model

    International Nuclear Information System (INIS)

    Rogge, Nicky; De Jaeger, Simon

    2012-01-01

    Highlights: ► Complexity in local waste management calls for more in depth efficiency analysis. ► Shared-input Data Envelopment Analysis can provide solution. ► Considerable room for the Flemish municipalities to improve their cost efficiency. - Abstract: This paper proposed an adjusted “shared-input” version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipality waste collection and processing performances in settings in which one input (waste costs) is shared among treatment efforts of multiple municipal solid waste fractions. The main advantage of this version of DEA is that it not only provides an estimate of the municipalities overall cost efficiency but also estimates of the municipalities’ cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared input DEA-model, we apply the model to data on 293 municipalities in Flanders, Belgium, for the year 2008.

  8. A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE

    Energy Technology Data Exchange (ETDEWEB)

    Pejcha, Ondřej [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08540 (United States); Prieto, Jose L., E-mail: pejcha@astro.princeton.edu [Núcleo de Astronomía de la Facultad de Ingeniería, Universidad Diego Portales, Av. Ejército 441 Santiago (Chile)

    2015-02-01

    We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with the standard R_V ∼ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.

  9. Uncertainty estimation of the velocity model for the TrigNet GPS network

    Science.gov (United States)

    Hackl, Matthias; Malservisi, Rocco; Hugentobler, Urs; Wonnacott, Richard

    2010-05-01

    Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectral analyses and maximum likelihood estimates is quite demanding and is usually not carried out for every site; instead, the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied to the TrigNet time series a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise). Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal, in agreement with previous studies.
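
    The abstract refers to a method similar to the Allan variance. The sketch below computes the standard overlapping Allan deviation of a daily position series as a generic illustration of that family of estimators; it is not the authors' exact implementation, and the toy series is purely synthetic.

```python
# Overlapping Allan deviation of a (detrended) daily position series: a standard
# way to characterise time-correlated (coloured) noise, in the spirit of the
# Allan-variance-like analysis mentioned above (not the authors' estimator).
import numpy as np

def overlapping_allan_deviation(x, dt, m_list):
    """x: position samples (e.g. mm); dt: sampling interval; m_list: averaging factors."""
    x = np.asarray(x, float)
    out = []
    for m in m_list:
        d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]          # second differences at lag m
        avar = np.sum(d**2) / (2.0 * (len(x) - 2 * m) * (m * dt) ** 2)
        out.append((m * dt, np.sqrt(avar)))
    return out

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0, 0.5, 3650))                   # toy random-walk series, mm
for tau, adev in overlapping_allan_deviation(x, dt=1.0, m_list=[1, 4, 16, 64]):
    print(f"tau = {tau:5.0f} d   sigma = {adev:.3f}")
```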

  10. Velocity-mass correlation of the O-type stars: model results

    International Nuclear Information System (INIS)

    Stone, R.C.

    1982-01-01

    This paper presents new model results describing the evolution of massive close binaries from their initial ZAMS to post-supernova stages. Unlike the previous conservative study by Stone [Astrophys. J. 232, 520 (1979) (Paper II)], these results allow explicitly for mass loss from the binary system occurring during the core hydrogen- and helium-burning stages of the primary binary star as well as during the Roche lobe overflow. Because of uncertainties in these rates, model results are given for several reasonable choices for these rates. All of the models consistently predict an increasing relation between the peculiar space velocities and masses for runaway OB stars which agrees well with the observed correlations discussed in Stone [Astron. J. 86, 544 (1981) (Paper III)] and also predict a lower limit at Mroughly-equal11M/sub sun/ for the masses of runaway stars, in agreement with the observational limit found by A. Blaauw (Bull. Astron. Inst. Neth. 15, 265, 1961), both of which support the binary-supernova scenario described by van den Heuvel and Heise for the origin of runaway stars. These models also predict that the more massive O stars will produce correspondingly more massive compact remnants, and that most binaries experiencing supernova-induced kick velocities of magnitude V/sub k/> or approx. =300 km s -1 will disrupt following the explosions. The best estimate for this velocity as established from pulsar observations is V/sub k/roughly-equal150 km s -1 , in which case probably only 15% if these binaries will be disrupted by the supernova explosions, and therefore, almost all runaway stars should have either neutron star or black hole companions

  11. A fifth equation to model the relative velocity the 3-D thermal-hydraulic code THYC

    International Nuclear Information System (INIS)

    Jouhanique, T.; Rascle, P.

    1995-11-01

    E.D.F. has developed, since 1986, a general purpose code named THYC (Thermal HYdraulic Code) designed to study three-dimensional single and two-phase flows in rod tube bundles (pressurised water reactor cores, steam generators, condensers, heat exchangers). In these studies, the relative velocity was calculated by a drift-flux correlation. However, the relative velocity between vapor and liquid is an important parameter for the accuracy of a two-phase flow modelling in a three-dimensional code. The range of application of drift-flux correlations is mainly limited by the characteristic of the flow pattern (counter current flow ...) and by large 3-D effects. The purpose of this paper is to describe a numerical scheme which allows the relative velocity to be computed in a general case. Only the methodology is investigated in this paper which is not a validation work. The interfacial drag force is an important factor of stability and accuracy of the results. This force, closely dependent on the flow pattern, is not entirely established yet, so a range of multiplicator of its expression is used to compare the numerical results with the VATICAN test section measurements. (authors). 13 refs., 6 figs

  12. An analytical model for displacement velocity of liquid film on a hot vertical surface

    International Nuclear Information System (INIS)

    Yoshioka, Keisuke; Hasegawa, Shu

    1975-01-01

    The downward progress of the advancing front of a liquid film streaming down a heated vertical surface, as it would occur in emergency core cooling, is much slower than in the case of ordinary streaming down along a heated surface already wetted with the liquid. A two-dimensional heat conduction model is developed for evaluating this velocity of the liquid front, which takes account of the heat removal by ordinary flow boiling mechanism. In the analysis, the maximum heat flux and the calefaction temperature are taken up as parameters in addition to the initial dry heated wall temperature, the flow rate and the velocity of downward progress of the liquid front. The temperature profile is calculated for various combinations of these parameters. Two criteria are proposed for choosing the most suitable combination of the parameters. One is to reject solutions that represent an oscillating wall temperature distribution, and the second criterion requires that the length of the zone of violent boiling immediately following the liquid front should not be longer than about 1 mm, this value being determined from comparisons made between experiment and calculation. Application of the above two criteria resulted in reasonable values obtained for the calefaction temperature and the maximum heat flux, and the velocity of the liquid front derived therefrom showed good agreement with experiment. (auth.)

  13. Enhancement of information transmission with stochastic resonance in hippocampal CA1 neuron models: effects of noise input location.

    Science.gov (United States)

    Kawaguchi, Minato; Mino, Hiroyuki; Durand, Dominique M

    2007-01-01

    Stochastic resonance (SR) has been shown to enhance the signal to noise ratio or detection of signals in neurons. It is not yet clear how this effect of SR on the signal to noise ratio affects signal processing in neural networks. In this paper, we investigate the effects of the location of background noise input on information transmission in a hippocampal CA1 neuron model. In the computer simulation, random sub-threshold spike trains (signal) generated by a filtered homogeneous Poisson process were presented repeatedly to the middle point of the main apical branch, while homogeneous Poisson shot noise (background noise) was applied to a location on the dendrite in the hippocampal CA1 model consisting of the soma with a sodium, a calcium, and five potassium channels. The location of the background noise input was varied along the dendrites to investigate the effects of background noise input location on information transmission. The computer simulation results show that the information rate reached a maximum value at an optimal amplitude of the background noise. It is also shown that this optimal amplitude of the background noise is independent of the distance between the soma and the noise input location. The results also show that the location of the background noise input does not significantly affect the maximum values of the information rates generated by stochastic resonance.

  14. Lithospheric structure of the Arabian Shield and Platform from complete regional waveform modelling and surface wave group velocities

    Science.gov (United States)

    Rodgers, Arthur J.; Walter, William R.; Mellors, Robert J.; Al-Amri, Abdullah M. S.; Zhang, Yu-Shen

    1999-09-01

    Regional seismic waveforms reveal significant differences in the structure of the Arabian Shield and the Arabian Platform. We estimate lithospheric velocity structure by modelling regional waveforms recorded by the 1995-1997 Saudi Arabian Temporary Broadband Deployment using a grid search scheme. We employ a new method whereby we narrow the waveform modelling grid search by first fitting the fundamental mode Love and Rayleigh wave group velocities. The group velocities constrain the average crustal thickness and velocities as well as the crustal velocity gradients. Because the group velocity fitting is computationally much faster than the synthetic seismogram calculation this method allows us to determine good average starting models quickly. Waveform fits of the Pn and Sn body wave arrivals constrain the mantle velocities. The resulting lithospheric structures indicate that the Arabian Platform has an average crustal thickness of 40 km, with relatively low crustal velocities (average crustal P- and S-wave velocities of 6.07 and 3.50 km s^-1 , respectively) without a strong velocity gradient. The Moho is shallower (36 km) and crustal velocities are 6 per cent higher (with a velocity increase with depth) for the Arabian Shield. Fast crustal velocities of the Arabian Shield result from a predominantly mafic composition in the lower crust. Lower velocities in the Arabian Platform crust indicate a bulk felsic composition, consistent with orogenesis of this former active margin. P- and S-wave velocities immediately below the Moho are slower in the Arabian Shield than in the Arabian Platform (7.9 and 4.30 km s^-1 , and 8.10 and 4.55 km s^-1 , respectively). This indicates that the Poisson's ratios for the uppermost mantle of the Arabian Shield and Platform are 0.29 and 0.27, respectively. The lower mantle velocities and higher Poisson's ratio beneath the Arabian Shield probably arise from a partially molten mantle associated with Red Sea spreading and continental
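
    The Poisson's ratios quoted above follow from the uppermost-mantle Vp and Vs through the standard isotropic relation, reproduced here as a worked check using the velocities given in the abstract.

```latex
% Poisson's ratio from the Vp/Vs ratio (standard isotropic relation):
\nu \;=\; \frac{(V_P/V_S)^2 - 2}{2\left[(V_P/V_S)^2 - 1\right]}
% Arabian Shield:   Vp = 7.90, Vs = 4.30 km/s  ->  Vp/Vs ~ 1.84,  nu ~ 0.29
% Arabian Platform: Vp = 8.10, Vs = 4.55 km/s  ->  Vp/Vs ~ 1.78,  nu ~ 0.27
```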

  15. Softverski model estimatora radijalne brzine ciljeva / Software model of a radial velocity estimator

    Directory of Open Access Journals (Sweden)

    Dejan S. Ivković

    2010-04-01

    Full Text Available This paper presents a software model of a new block in the signal processing chain of a software radar receiver, called the radial velocity estimator. The procedure for estimating the Doppler frequency using the MUSIC algorithm is described in detail, and the measurement procedure is briefly presented. All parameters of the clutter measurements and of the detection of simulated and real targets are given in tables, and the results are shown graphically. In all analyses the MUSIC method gave better results than the FFT method, proving better both in estimation precision and in resolving two adjacent Doppler frequencies. On the basis of the obtained results, the designed radial velocity estimator can precisely estimate the Doppler shift in the signal reflected from a moving target and, consequently, precisely determine the target's velocity. It is thus possible to improve the performance of the current radar with respect to precise velocity estimation of detected moving targets.
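
    A minimal sketch of MUSIC-based Doppler frequency estimation from a noisy slow-time sequence is given below for orientation; it is a generic single-tone example, not the estimator block modeled in the paper, and all sampling parameters are illustrative.

```python
# Minimal MUSIC sketch for estimating a single Doppler frequency from a noisy
# slow-time radar sequence (illustrative; not the paper's estimator block).
import numpy as np

def music_spectrum(x, n_sources, m, freqs, fs):
    """x: complex samples; m: correlation-matrix order; freqs: scan grid (Hz)."""
    N = len(x)
    snaps = np.array([x[i:i + m] for i in range(N - m + 1)]).T   # (m, K) snapshots
    R = snaps @ snaps.conj().T / snaps.shape[1]                  # sample covariance
    eigvals, V = np.linalg.eigh(R)                               # ascending eigenvalues
    En = V[:, :m - n_sources]                                    # noise subspace
    k = np.arange(m)
    P = np.empty(len(freqs))
    for idx, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * k / fs)                      # steering vector
        P[idx] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)  # pseudospectrum
    return P

fs, N, f_dop = 1000.0, 256, 123.4
t = np.arange(N) / fs
rng = np.random.default_rng(2)
x = np.exp(2j * np.pi * f_dop * t) + 0.5 * (rng.normal(size=N) + 1j * rng.normal(size=N))
freqs = np.linspace(0.0, fs / 2, 2001)
P = music_spectrum(x, n_sources=1, m=16, freqs=freqs, fs=fs)
print("estimated Doppler frequency:", freqs[np.argmax(P)], "Hz")
```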

  16. Modeling of liquid ceramic precursor droplets in a high velocity oxy-fuel flame jet

    International Nuclear Information System (INIS)

    Basu, Saptarshi; Cetegen, Baki M.

    2008-01-01

    Production of coatings by high velocity oxy-fuel (HVOF) flame jet processing of liquid precursor droplets can be an attractive alternative method to plasma processing. This article concerns modeling of the thermophysical processes in liquid ceramic precursor droplets injected into an HVOF flame jet. The model consists of several sub-models that include aerodynamic droplet break-up, heat and mass transfer within individual droplets exposed to the HVOF environment and precipitation of ceramic precursors. A parametric study is presented for the initial droplet size, concentration of the dissolved salts and the external temperature and velocity field of the HVOF jet to explore processing conditions and injection parameters that lead to different precipitate morphologies. It is found that the high velocity of the jet induces shear break-up into several μm diameter droplets. This leads to better entrainment and rapid heat-up in the HVOF jet. Upon processing, small droplets (<5 μm) are predicted to undergo volumetric precipitation and form solid particles prior to impact at the deposit location. Droplets larger than 5 μm are predicted to form hollow or precursor containing shells similar to those processed in a DC arc plasma. However, it is found that the lower temperature of the HVOF jet compared to plasma results in slower vaporization and solute mass diffusion time inside the droplet, leading to comparatively thicker shells. These shell-type morphologies may further experience internal pressurization, resulting in possibly shattering and secondary atomization of the trapped liquid. The consequences of these different particle states on the coating microstructure are also discussed in this article

  17. Physical-mathematical model for cybernetic description of the human organs with trace element concentrations as input variables

    International Nuclear Information System (INIS)

    Mihai, Maria; Popescu, I.V.

    2003-01-01

    In this paper we report a physical-mathematical model for studying human organs and body fluids based on cybernetic principles. The input variables represent the trace element concentrations, which are determined by atomic and nuclear methods of elemental analysis. We have determined the health limits between which the organs might function. (authors)

  18. A single point of pressure approach as input for injury models with respect to complex blast loading conditions

    NARCIS (Netherlands)

    Teland, J.A.; Doormaal, J.C.A.M. van; Horst, M.J. van der; Svinsås, E.

    2010-01-01

    Blast injury models, like Axelsson and Stuhmiller, require four pressure signals as input. Those pressure signals must be acquired by a Blast Test Device (BTD) that has four pressure transducers placed in a horizontal plane at intervals of 90 degrees. This can be either in a physical test setup or

  19. Sterile Neutrinos, Dark Matter, and Pulsar Velocities in Models with a Higgs Singlet

    International Nuclear Information System (INIS)

    Kusenko, Alexander

    2006-01-01

    We identify the range of parameters for which the sterile neutrinos can simultaneously explain the cosmological dark matter and the observed velocities of pulsars. To satisfy all cosmological bounds, the relic sterile neutrinos must be produced sufficiently cold. This is possible in a class of models with a gauge-singlet Higgs boson coupled to the neutrinos. Sterile dark matter can be detected by the x-ray telescopes. The presence of the singlet in the Higgs sector can be tested at the CERN Large Hadron Collider

  20. Synchronous Surface Pressure and Velocity Measurements of standard model in hypersonic flow

    Directory of Open Access Journals (Sweden)

    Zhijun Sun

    2018-01-01

    Full Text Available Experiments in the Hypersonic Wind Tunnel of NUAA (NHW) present synchronous measurements of the bow shock wave and surface pressure on a standard blunt rotary model (AGARD HB-2), carried out in order to measure the Mach-5 flow above a blunt body by PIV (Particle Image Velocimetry) as well as the unsteady pressure around the rotary body. Titanium dioxide (Al2O3) nanoparticles were seeded into the flow by a tailor-made container. With a carefully designed optical path, the laser was guided into the vacuum experimental section. The transient pressure around the model was obtained using fast-responding pressure-sensitive paint (PSP) sprayed on the model. All the experimental facilities were controlled by a series pulse generator to ensure that the data were time-correlated. The PIV measurements of velocities in front of the detached bow shock agreed very well with the calculated value, with less than 3% difference compared to Pitot-pressure recordings. The velocity gradient contours were consistent with the detached bow shock observed in the schlieren images. The PSP results presented good agreement with reference data from previous studies. Our work on synchronous shock-wave and pressure measurements proved to be encouraging.

  1. Effect of stimulation on the input parameters of stochastic leaky integrate-and-fire neuronal model

    Czech Academy of Sciences Publication Activity Database

    Lánský, Petr; Šanda, Pavel; He, J.

    2010-01-01

    Roč. 104, 3-4 (2010), s. 160-166 ISSN 0928-4257 R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z50110509 Keywords : membrane depolarization * input parameters * diffusion Subject RIV: BO - Biophysics Impact factor: 3.030, year: 2010
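
    The record above carries no abstract; for context, the sketch below simulates a generic stochastic leaky integrate-and-fire neuron whose drift and noise amplitude play the role of the input parameters named in the title. It uses Euler-Maruyama integration, and all parameter values are illustrative, not taken from the paper.

```python
# Generic stochastic leaky integrate-and-fire (LIF) sketch: membrane potential
# driven by a mean input mu and noise amplitude sigma (the kind of "input
# parameters" the record's title refers to). Illustrative values only.
import numpy as np

def simulate_lif(mu=0.08, sigma=0.25, tau=10.0, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, dt=0.1, t_max=1000.0, seed=0):
    """Euler-Maruyama simulation; times in ms, potential in threshold units."""
    rng = np.random.default_rng(seed)
    v, spike_times = v_rest, []
    for step in range(int(t_max / dt)):
        dv = (-(v - v_rest) / tau + mu) * dt + sigma * np.sqrt(dt) * rng.normal()
        v += dv
        if v >= v_thresh:                 # threshold crossing: spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return np.array(spike_times)

spikes = simulate_lif()
print(f"{spikes.size} spikes in 1000 ms")
```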

  2. Enhancement of regional wet deposition estimates based on modeled precipitation inputs

    Science.gov (United States)

    James A. Lynch; Jeffery W. Grimm; Edward S. Corbett

    1996-01-01

    Application of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites for the purpose of estimating precipitation- born ionic inputs for specific points or regions have failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....

  3. Impact of Infralimbic Inputs on Intercalated Amygdale Neurons: A Biophysical Modeling Study

    Science.gov (United States)

    Li, Guoshi; Amano, Taiju; Pare, Denis; Nair, Satish S.

    2011-01-01

    Intercalated (ITC) amygdala neurons regulate fear expression by controlling impulse traffic between the input (basolateral amygdala; BLA) and output (central nucleus; Ce) stations of the amygdala for conditioned fear responses. Previously, stimulation of the infralimbic (IL) cortex was found to reduce fear expression and the responsiveness of Ce…

  4. Development of a Duplex Ultrasound Simulator and Preliminary Validation of Velocity Measurements in Carotid Artery Models.

    Science.gov (United States)

    Zierler, R Eugene; Leotta, Daniel F; Sansom, Kurt; Aliseda, Alberto; Anderson, Mark D; Sheehan, Florence H

    2016-07-01

    Duplex ultrasound scanning with B-mode imaging and both color Doppler and Doppler spectral waveforms is relied upon for diagnosis of vascular pathology and selection of patients for further evaluation and treatment. In most duplex ultrasound applications, classification of disease severity is based primarily on alterations in blood flow velocities, particularly the peak systolic velocity (PSV) obtained from Doppler spectral waveforms. We developed a duplex ultrasound simulator for training and assessment of scanning skills. Duplex ultrasound cases were prepared from 2-dimensional (2D) images of normal and stenotic carotid arteries by reconstructing the common carotid, internal carotid, and external carotid arteries in 3 dimensions and computationally simulating blood flow velocity fields within the lumen. The simulator displays a 2D B-mode image corresponding to transducer position on a mannequin, overlaid by color coding of velocity data. A spectral waveform is generated according to examiner-defined settings (depth and size of the Doppler sample volume, beam steering, Doppler beam angle, and pulse repetition frequency or scale). The accuracy of the simulator was assessed by comparing the PSV measured from the spectral waveforms with the true PSV which was derived from the computational flow model based on the size and location of the sample volume within the artery. Three expert examiners made a total of 36 carotid artery PSV measurements based on the simulated cases. The PSV measured by the examiners deviated from true PSV by 8% ± 5% (N = 36). The deviation in PSV did not differ significantly between artery segments, normal and stenotic arteries, or examiners. To our knowledge, this is the first simulation of duplex ultrasound that can create and display real-time color Doppler images and Doppler spectral waveforms. The results demonstrate that an examiner can measure PSV from the spectral waveforms using the settings on the simulator with a mean absolute error

  5. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    Science.gov (United States)

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, while static FDG-PET did not show a promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent one-hour dynamic FDG-PET/CT scan and had liver biopsy within six weeks. Three models were tested for kinetic analysis: traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), model with population-based dual-blood input function (DBIF), and modified model with optimization-derived DBIF through a joint estimation framework. The three models were compared using Akaike information criterion (AIC), F test and histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with traditional SBIF and population-based DBIF. K1 by the optimization-derived model was significantly associated with histopathologic grades of liver inflammation while the other two models did not provide a statistical significance. In conclusion, modeling of DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation. © 2018
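
    The dual-blood input idea can be sketched as a weighted mix of arterial and portal-vein activity feeding a standard two-tissue compartment model, as below; the mixing fraction, rate constants and input curves are illustrative assumptions, and the paper's joint-estimation framework for the DBIF parameters is not reproduced.

```python
# Sketch of a dual-blood-input two-tissue compartment model for liver FDG:
# the tissue input is a weighted mix of arterial and portal-vein activity,
# C_in(t) = fA*Ca(t) + (1-fA)*Cp(t), feeding the usual two-tissue ODEs.
# All rate constants, fA and input curves are illustrative; the paper's joint
# estimation of the DBIF parameters is not reproduced here.
import numpy as np

def two_tissue_dual_input(t, Ca, Cp, fA, K1, k2, k3, k4, vb=0.0):
    dt = np.diff(t, prepend=t[0])
    Cin = fA * Ca + (1.0 - fA) * Cp
    C1 = np.zeros_like(t)
    C2 = np.zeros_like(t)
    for i in range(1, len(t)):                       # simple Euler integration
        dC1 = K1 * Cin[i - 1] - (k2 + k3) * C1[i - 1] + k4 * C2[i - 1]
        dC2 = k3 * C1[i - 1] - k4 * C2[i - 1]
        C1[i] = C1[i - 1] + dC1 * dt[i]
        C2[i] = C2[i - 1] + dC2 * dt[i]
    return (1.0 - vb) * (C1 + C2) + vb * Cin         # modeled tissue activity curve

t = np.linspace(0, 60, 601)                          # minutes
Ca = 10.0 * t * np.exp(-t / 1.5)                     # toy arterial input
Cp = 10.0 * (t / 3.0) * np.exp(-t / 3.0)             # toy, more dispersed portal input
tac = two_tissue_dual_input(t, Ca, Cp, fA=0.25, K1=0.5, k2=0.6, k3=0.05, k4=0.01)
print(np.round(tac[::100], 3))
```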

  6. A fault‐based model for crustal deformation in the western United States based on a combined inversion of GPS and geologic inputs

    Science.gov (United States)

    Zeng, Yuehua; Shen, Zheng-Kang

    2017-01-01

    We develop a crustal deformation model to determine fault-slip rates for the western United States (WUS) using the Zeng and Shen (2014) method that is based on a combined inversion of Global Positioning System (GPS) velocities and geological slip-rate constraints. The model consists of six blocks with boundaries aligned along major faults in California and the Cascadia subduction zone, which are represented as buried dislocations in the Earth. Faults distributed within blocks have their geometrical structure and locking depths specified by the Uniform California Earthquake Rupture Forecast, version 3 (UCERF3) and the 2008 U.S. Geological Survey National Seismic Hazard Map Project model. Faults slip beneath a predefined locking depth, except for a few segments where shallow creep is allowed. The slip rates are estimated using a least-squares inversion. The model resolution analysis shows that the resulting model is influenced heavily by geologic input, which fits the UCERF3 geologic bounds on California B faults and ±one-half of the geologic slip rates for most other WUS faults. The modeled slip rates for the WUS faults are consistent with the observed GPS velocity field. Our fit to these velocities is measured in terms of a normalized chi-square, which is 6.5. This updated model fits the data better than most other geodetic-based inversion models. Major discrepancies between well-resolved GPS inversion rates and geologic-consensus rates occur along some of the northern California A faults, the Mojave to San Bernardino segments of the San Andreas fault, the western Garlock fault, the southern segment of the Wasatch fault, and other faults. Off-fault strain-rate distributions are consistent with regional tectonics, with a total off-fault moment rate of 7.2×10^18 N·m/year and 8.5×10^18 N·m/year for California and the WUS outside California, respectively.

  7. Sheep as a large animal ear model: Middle-ear ossicular velocities and intracochlear sound pressure.

    Science.gov (United States)

    Péus, Dominik; Dobrev, Ivo; Prochazka, Lukas; Thoele, Konrad; Dalbert, Adrian; Boss, Andreas; Newcomb, Nicolas; Probst, Rudolf; Röösli, Christof; Sim, Jae Hoon; Huber, Alexander; Pfiffner, Flurin

    2017-08-01

    Animals are frequently used for the development and testing of new hearing devices. Dimensions of the middle ear and cochlea differ significantly between humans and commonly used animals, such as rodents or cats. The sheep cochlea is anatomically more like the human cochlea in size and number of turns. This study investigated the middle-ear ossicular velocities and intracochlear sound pressure (ICSP) in sheep temporal bones, with the aim of characterizing the sheep as an experimental model for implantable hearing devices. Measurements were made on fresh sheep temporal bones. Velocity responses of the middle ear ossicles at the umbo, long process of the incus and stapes footplate were measured in the frequency range of 0.25-8 kHz using a laser Doppler vibrometer system. Results were normalized by the corresponding sound pressure level in the external ear canal (P EC ). Sequentially, ICSPs at the scala vestibuli and tympani were then recorded with custom MEMS-based hydrophones, while presenting identical acoustic stimuli. The sheep middle ear transmitted most effectively around 4.8 kHz, with a maximum stapes velocity of 0.2 mm/s/Pa. At the same frequency, the ICSP measurements in the scala vestibuli and tympani showed the maximum gain relative to the P EC (24 dB and 5 dB, respectively). The greatest pressure difference across the cochlear partition occurred between 4 and 6 kHz. A comparison between the results of this study and human reference data showed middle-ear resonance and best cochlear sensitivity at higher frequencies in sheep. In summary, sheep can be an appropriate large animal model for research and development of implantable hearing devices. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    Science.gov (United States)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively in several organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd Jengka, the researcher determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the CCR-I (input-oriented) and CCR-O (output-oriented) DEA models and the duality formulation with averaged input and output vectors. Three input variables (collection length in meters, collection time per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1, while the other 20 roads are managed inefficiently.
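
    The input-oriented CCR evaluation mentioned above amounts to solving one small linear program per DMU. The sketch below does this with scipy on toy data arranged like the three inputs and two outputs described in the abstract; it is not the study's dataset or exact formulation.

```python
# Input-oriented CCR (CCR-I) envelopment LP for one DMU, solved with scipy.
# Toy data in the spirit of the three inputs / two outputs above; a sketch,
# not the study's dataset or exact formulation.
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, j0):
    """X: (n_dmu, n_in) inputs, Y: (n_dmu, n_out) outputs, j0: evaluated DMU."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1 .. lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.c_[-X[j0], X.T]
    # outputs: -sum_j lambda_j * y_rj <= -y_r,j0
    A_out = np.c_[np.zeros(s), -Y.T]
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# toy: 5 road sections, 3 inputs (length, hours/week, trucks), 2 outputs
X = np.array([[4.0, 6, 2], [3.5, 6, 2], [5.0, 8, 3], [2.0, 4, 1], [6.0, 9, 3]])
Y = np.array([[3, 1200], [3, 1300], [3, 1500], [2, 900], [3, 1400]])
for j in range(len(X)):
    print(f"DMU {j}: efficiency = {ccr_input_efficiency(X, Y, j):.3f}")
```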

  9. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
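
    The Gröbner-basis step can be illustrated on a toy exhaustive summary in which only p1*p2 and p1*p2 + p3 appear in the input-output coefficients; the sympy sketch below shows how the basis exposes the identifiable combinations. It is a generic illustration of the idea rather than the paper's algorithm.

```python
# Toy illustration of the Groebner-basis step: suppose the input-output
# equations expose only the coefficients a1 = p1*p2 and a2 = p1*p2 + p3.
# The basis below reveals that p3 and the product p1*p2 are identifiable
# combinations, while p1 and p2 individually are not.
from sympy import symbols, groebner

p1, p2, p3, a1, a2 = symbols("p1 p2 p3 a1 a2")

# exhaustive-summary equations: model coefficients minus their "measured" values
eqs = [p1 * p2 - a1, p1 * p2 + p3 - a2]

G = groebner(eqs, p1, p2, p3, order="lex")
for g in G.exprs:
    print(g)
# expected basis (up to sign and ordering):
#   p1*p2 - a1
#   p3 - a2 + a1
```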

  10. On the Influence of Input Data Quality to Flood Damage Estimation: The Performance of the INSYDE Model

    Directory of Open Access Journals (Sweden)

    Daniela Molinari

    2017-09-01

    Full Text Available The IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for the estimation of flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of input data. Starting from the knowledge of input parameters at the scale of individual buildings for a case study, the level of detail of the input data is progressively downgraded until a representative value is defined for all inputs at the census block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to the micro-scale values, and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. Such a result allows the usability of the model to be extended to the meso-scale, also in different countries, depending on the availability of aggregated building data.

  11. Modeling the Impacts of Suspended Sediment Concentration and Current Velocity on Submersed Vegetation in an Illinois River Pool, USA

    National Research Council Canada - National Science Library

    Best, Elly

    2004-01-01

    This technical note uses a modeling approach to examine the impacts of suspended sediment concentrations and current velocity on the persistence of submersed macrophytes in a shallow aquatic system...

  12. A new car-following model for autonomous vehicles flow with mean expected velocity field

    Science.gov (United States)

    Wen-Xing, Zhu; Li-Dong, Zhang

    2018-02-01

    With the development of modern technology, autonomous vehicles may be able to connect with each other and share the information collected by each vehicle. An improved forward-considering car-following model with a mean expected velocity field is proposed to describe autonomous vehicle flow behavior. The new model has three key parameters: adjustable sensitivity, strength factor and mean expected velocity field size. Two lemmas and one theorem are proven as criteria for judging the stability of homogeneous autonomous vehicle flow. Theoretical results show that larger parameter values yield larger stability regions. A series of numerical simulations was carried out to check the stability and the fundamental diagram of the autonomous flow. The numerical results exhibit the profiles, hysteresis loops and density waves of the autonomous vehicle flow and show that increasing the sensitivity, strength factor or field size suppresses traffic jams effectively, in good accordance with the theoretical results. Moreover, the fundamental diagrams corresponding to the three parameters were obtained. They demonstrate that the parameters play almost the same role in traffic flux: below the critical density, the larger the parameter, the greater the flux, while above the critical density the opposite tendency holds. In general, the three parameters have a great influence on the stability and jam state of the autonomous vehicle flow.
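
    The paper's exact model is not reproduced here, but the flavour of such car-following schemes can be sketched with an optimal-velocity-type update on a ring road, augmented by a relaxation toward the mean velocity of several vehicles ahead as a crude stand-in for the mean expected velocity field; all functions and parameters below are illustrative.

```python
# Sketch of an optimal-velocity-type car-following model on a ring road, with an
# extra relaxation toward the mean velocity of the n_field vehicles ahead as a
# crude stand-in for the "mean expected velocity field" idea. Generic
# illustration only, not the model proposed in the paper.
import numpy as np

def V_opt(h, v_max=2.0, h_c=4.0):
    """Optimal velocity as a function of headway h (Bando-type form)."""
    return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

def step(x, v, L, a=1.0, lam=0.3, n_field=5, dt=0.1):
    n = len(x)
    idx = np.arange(n)
    h = (x[(idx + 1) % n] - x) % L                        # headway to the leader
    ahead = (idx[:, None] + np.arange(1, n_field + 1)) % n
    v_field = v[ahead].mean(axis=1)                       # mean velocity of cars ahead
    dv = a * (V_opt(h) - v) + lam * (v_field - v)         # acceleration
    v_new = np.clip(v + dv * dt, 0.0, None)
    x_new = (x + v_new * dt) % L
    return x_new, v_new

L, n = 400.0, 100
rng = np.random.default_rng(3)
x = np.linspace(0, L, n, endpoint=False) + rng.normal(0, 0.2, n)   # perturbed positions
v = np.full(n, 0.9)
for _ in range(5000):
    x, v = step(x, v, L)
print("velocity standard deviation after relaxation:", v.std())
```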

  13. Towards a new technique to construct a 3D shear-wave velocity model based on converted waves

    Science.gov (United States)

    Hetényi, G.; Colavitti, L.

    2017-12-01

    A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions. The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The RFs inversion represents a complex problem because the amplitude and the arrival time of different phases depend in a non-linear way on the depth of interfaces and the characteristics of the velocity structure. The solution we envisage to manage the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate possible independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion. Our first focus of application is the Central Alps, where a 20-year long dataset of

  14. A grey neural network and input-output combined forecasting model. Primary energy consumption forecasts in Spanish economic sectors

    International Nuclear Information System (INIS)

    Liu, Xiuli; Moreno, Blanca; García, Ana Salomé

    2016-01-01

    A combined forecast of the Grey forecasting method and a back-propagation neural network model, called the Grey Neural Network and Input-Output Combined Forecasting Model (GNF-IO model), is proposed. A real case of energy consumption forecasting is used to validate the effectiveness of the proposed model. The GNF-IO model predicts coal, crude oil, natural gas, renewable and nuclear primary energy consumption volumes for Spain's 36 sub-sectors from 2010 to 2015 according to three different GDP growth scenarios (optimistic, baseline and pessimistic). Model tests show that the proposed model has higher simulation and forecasting accuracy for energy consumption than the Grey models used separately and than other combination methods. The forecasts indicate that the primary energies coal, crude oil and natural gas will represent on average 83.6% of total primary energy consumption, raising concerns about security of supply and energy cost and adding risk for some industrial production processes. Thus, Spanish industry must speed up its transition to an energy-efficient economy, achieving a cost reduction and an increase in the level of self-supply. - Highlights: • A forecasting system using Grey models combined with Input-Output models is proposed. • Primary energy consumption in Spain is used to validate the model. • The grey-based combined model has good forecasting performance. • Natural gas will represent the majority of total primary energy consumption. • Concerns about security of supply, energy cost and industry competitiveness are raised.
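
    The grey half of the combination can be illustrated with the basic GM(1,1) forecaster sketched below on a toy series; the neural-network correction and the input-output coupling that define the GNF-IO model are not reproduced.

```python
# Basic GM(1,1) grey forecaster: the grey-model half of the combination above.
# Toy data only; the neural-network correction and the input-output coupling of
# the GNF-IO model are not reproduced here.
import numpy as np

def gm11_forecast(x0, n_ahead):
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                                    # accumulated series
    z1 = 0.5 * (x1[:-1] + x1[1:])                         # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]      # developing/grey coefficients
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a     # time-response function
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])           # back to the original series
    x0_hat[0] = x0[0]
    return x0_hat[len(x0):]                               # the n_ahead forecasts

# toy annual energy-consumption series (arbitrary units)
x0 = [102.0, 106.5, 111.8, 118.0, 123.9, 130.4]
print(np.round(gm11_forecast(x0, n_ahead=3), 1))
```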

  15. Realistic modeling of seismic input for megacities and large urban areas

    Science.gov (United States)

    Panza, G. F.; Unesco/Iugs/Igcp Project 414 Team

    2003-04-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool of the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have proven quite unreliable, mainly when dealing with complex geological structures, the most interesting from the practical point of view. In fact, several techniques that have been proposed to empirically estimate the site effects using observations convolved with theoretically computed signals corresponding to simplified models supply reliable information about the site response to non-interfering seismic phases. They are not adequate in most real cases, in which the seismic signal is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological, geophysical parameters, topography of the medium, tectonic, historical, palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios that represent a valid and economical tool for seismic microzonation. This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  16. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of IMX 101 Components

    Science.gov (United States)

    2017-05-01

    2) TREECS™ has a tool for estimating soil Kd values given Koc, the soil texture (percent sand, silt, and clay), and the percent organic matter...respectively. Mulherin et al. (2005) studied the stability of NQ in three moist, unsaturated soils under laboratory conditions. This study yielded a range...of the uncertain input properties (degradation rates and water-to-soil and water-to-sediment adsorption partitioning distribution coefficients, or

  17. Lower Mantle S-wave Velocity Model under the Western United States

    Science.gov (United States)

    Nelson, P.; Grand, S. P.

    2016-12-01

    Deep mantle plumes created by thermal instabilities at the core-mantle boundary have been an explanation for intraplate volcanism since the 1970s. Recently, broad slow-velocity conduits in the lower mantle underneath some hotspots have been observed (French and Romanowicz, 2015); however, the direct detection of a classical thin mantle plume using seismic tomography has remained elusive. Herein, we present a seismic tomography technique designed to image a deep mantle plume under the Yellowstone Hotspot, located in the western United States, utilizing SKS and SKKS waves in conjunction with finite-frequency tomography. Synthetic resolution tests show the technique can resolve a 235 km diameter lower mantle plume with a 1.5% Gaussian velocity perturbation even if a realistic amount of random noise is added to the data. The Yellowstone Hotspot presents a unique opportunity to image a thin plume because it is the only hotspot with a purported deep origin that has a large enough aperture and density of seismometers to accurately sample the lower mantle at the length scales required to image a plume. Previous regional tomography studies, largely based on S-wave data, have imaged a cylindrically shaped slow anomaly extending down to 900 km under the hotspot, but they could not resolve it any deeper (Schmandt et al., 2010; Obrebski et al., 2010). To test whether the anomaly extends deeper, we measured and inverted the travel times of over 40,000 SKS and SKKS waves in two frequency bands recorded at 2400+ stations deployed during 2006-2012. Our preliminary model shows narrow slow-velocity anomalies in the lower mantle with no fast anomalies. The slow anomalies are offset from the Yellowstone hotspot and may be diapirs rising from the base of the mantle.

  18. Temperature Field-Wind Velocity Field Optimum Control of Greenhouse Environment Based on CFD Model

    Directory of Open Access Journals (Sweden)

    Yongbo Li

    2014-01-01

    Full Text Available Computational fluid dynamics (CFD) technology is applied as the environmental control model, which can cover the whole greenhouse space. Basic environmental factors are set as the control objects, the field information is obtained by dividing the space into layers by height, and the numerical characteristics of each layer are used to describe the field information. Under the natural ventilation condition, real-time requirements, energy consumption, and distribution difference are selected as index functions. The optimization algorithm of adaptive simulated annealing is used to obtain optimal control outputs. A comparison with fully open ventilation shows that the overall index can be reduced by 44.21%, and a certain mutual exclusiveness is found between the temperature and velocity fields in the optimization process. All the results indicate that the application of the CFD model has great advantages for improving the control accuracy of the greenhouse.
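
    The abstract names adaptive simulated annealing as the optimizer; as a rough, hedged illustration of the plain (non-adaptive) simulated-annealing idea only, the sketch below minimises a made-up composite index over two hypothetical control outputs (vent opening and heater level). The greenhouse_index function is an invented stand-in for the CFD-derived index, not the paper's model.

        import math
        import random

        def simulated_annealing(cost, x0, step=0.5, t0=1.0, alpha=0.95, iters=2000):
            """Minimise `cost` over a vector of control outputs by simulated annealing."""
            x, fx = list(x0), cost(x0)
            best, fbest = list(x), fx
            temp = t0
            for _ in range(iters):
                cand = [xi + random.uniform(-step, step) for xi in x]   # random neighbour
                fc = cost(cand)
                # always accept improvements; accept worse moves with Boltzmann probability
                if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
                    x, fx = cand, fc
                    if fx < fbest:
                        best, fbest = list(x), fx
                temp *= alpha                                           # geometric cooling schedule
            return best, fbest

        def greenhouse_index(u):
            """Hypothetical composite index: temperature error plus an energy-use proxy."""
            vent, heater = u
            temp_error = (24.0 - (18.0 + 4.0 * heater - 2.0 * vent)) ** 2
            return 0.6 * temp_error + 0.4 * (vent ** 2 + heater ** 2)

        print(simulated_annealing(greenhouse_index, [0.5, 0.5]))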

  19. Accurate calibration of the velocity-dependent one-scale model for domain walls

    Energy Technology Data Exchange (ETDEWEB)

    Leite, A.M.M., E-mail: up080322016@alunos.fc.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Ecole Polytechnique, 91128 Palaiseau Cedex (France); Martins, C.J.A.P., E-mail: Carlos.Martins@astro.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Shellard, E.P.S., E-mail: E.P.S.Shellard@damtp.cam.ac.uk [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2013-01-08

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048³, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34 ± 0.16 and k_w = 0.98 ± 0.07, which are of higher precision than (but in agreement with) earlier estimates.

  20. Accurate calibration of the velocity-dependent one-scale model for domain walls

    International Nuclear Information System (INIS)

    Leite, A.M.M.; Martins, C.J.A.P.; Shellard, E.P.S.

    2013-01-01

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048³, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34 ± 0.16 and k_w = 0.98 ± 0.07, which are of higher precision than (but in agreement with) earlier estimates.
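
    For context, the velocity-dependent one-scale (VOS) model being calibrated here evolves a characteristic wall length scale L and an RMS velocity v. One commonly quoted form of the domain-wall VOS equations, written in LaTeX as a hedged sketch (the exact form and conventions should be taken from the paper itself), is

        \frac{\mathrm{d}L}{\mathrm{d}t} = (1 + 3v^{2})\,H L + c_{w}\,v, \qquad
        \frac{\mathrm{d}v}{\mathrm{d}t} = (1 - v^{2})\left(\frac{k_{w}}{L} - 3 H v\right),

    where H is the Hubble parameter, c_w parametrises energy loss from the network, and k_w is the momentum (curvature) parameter; these are the two quantities calibrated above.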

  1. Two-dimensional velocity models for paths from Pahute Mesa and Yucca Flat to Yucca Mountain

    International Nuclear Information System (INIS)

    Walck, M.C.; Phillips, J.S.

    1990-11-01

    Vertical acceleration recordings of 21 underground nuclear explosions recorded at stations at Yucca Mountain provide the data for development of three two-dimensional crustal velocity profiles for portions of the Nevada Test Site. Paths from Area 19, Area 20 (both Pahute Mesa), and Yucca Flat to Yucca Mountain have been modeled using asymptotic ray theory travel-time and synthetic seismogram techniques. Significant travel-time differences exist between the Yucca Flat and Pahute Mesa source areas; relative amplitude patterns at Yucca Mountain also shift with changing source azimuth. The three models, UNEPM1, UNEPM2, and UNEYF1, successfully predict the travel-time and amplitude data for all three paths. 24 refs., 34 figs., 8 tabs

  2. CFD model of thermal and velocity conditions in a particular indoor environment

    Energy Technology Data Exchange (ETDEWEB)

    Mora Perez, Miguel; Lopez Patino, Gonzalo; Lopez Jimenez, P. Amparo [Hydraulic and Environmental Engineering Department, Universitat Politecnica de Valencia (Spain); Guillen Guillamon, Ignacio [Applied Physics Department, Universitat Politecnica de Valencia (Spain)

    2013-07-01

    The demand for maintaining high indoor environmental quality (IEQ) with minimum energy consumption is rapidly increasing. In recent years, several studies have been completed to investigate the impact of indoor environment factors on human comfort, health and energy efficiency. Therefore, the design of the thermal environment in any sort of room, especially offices, has huge economic consequences. In this paper, the air temperature in a multi-task room environment is analyzed, and the velocities and temperatures inside the room are modeled using Computational Fluid Dynamics (CFD) techniques. This model will help designers analyze the thermal comfort regions inside the studied air volume and visualize the temperatures throughout the room, determining the effect of fresh external incoming air on the internal air temperature.

  3. A study on the multi-dimensional spectral analysis for the response of a piping model with two seismic inputs

    International Nuclear Information System (INIS)

    Suzuki, K.; Sato, H.

    1975-01-01

    Power and cross-power spectrum analysis, by which the vibration characteristics of structures, such as natural frequency, mode of vibration and damping ratio, can be identified, is effective for confirming these characteristics after construction is completed, using the response to small earthquakes or micro-tremors under operating conditions. This method of analysis, previously utilized only from the viewpoint of single-input systems, is here extensively applied to the analysis of a medium-scale model of a piping system subjected to two seismic inputs. The piping system, attached to a three-storied concrete structure model constructed on a shaking table, was excited by earthquake motions. The inputs to the piping system were recorded at the second floor and at the ceiling of the third floor, where the system was attached. The output, the response of the piping system, was instrumented at a middle point on the system. As a result, the multi-dimensional power spectrum analysis is shown to be effective for a more reliable identification of the vibration characteristics of the multi-input structural system.
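
    As a rough illustration of the kind of power/cross-power spectrum identification described above (a single-input H1 estimate, not the authors' multi-dimensional two-input formulation; the oscillator standing in for the piping response and all numbers are invented), consider the following SciPy sketch.

        import numpy as np
        from scipy.signal import welch, csd

        def h1_estimate(x, y, fs, nperseg=1024):
            """H1 frequency-response estimate of one input/output pair from spectra."""
            f, sxx = welch(x, fs=fs, nperseg=nperseg)    # input auto power spectrum
            _, sxy = csd(x, y, fs=fs, nperseg=nperseg)   # input-output cross power spectrum
            return f, sxy / sxx                          # peaks mark natural frequencies

        # Hypothetical single-degree-of-freedom "piping" response to broadband base excitation.
        fs, fn, zeta = 200.0, 5.0, 0.02
        dt, wn = 1.0 / fs, 2.0 * np.pi * fn
        x = np.random.default_rng(0).standard_normal(int(120 * fs))
        y = np.zeros_like(x)
        for i in range(2, x.size):                       # explicit finite-difference oscillator
            y[i] = (2 * y[i - 1] - y[i - 2]
                    + dt ** 2 * (x[i - 1] - 2 * zeta * wn * (y[i - 1] - y[i - 2]) / dt - wn ** 2 * y[i - 1]))
        f, H = h1_estimate(x, y, fs)
        print(f[np.argmax(np.abs(H))])                   # should sit near the 5 Hz natural frequency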

  4. Influence of the pore fluid on the phase velocity in bovine trabecular bone in vitro: Prediction of the Biot model

    Science.gov (United States)

    Lee, Kang Il

    2013-01-01

    The present study aims to investigate the influence of the pore fluid on the phase velocity in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 20 marrow-filled and water-filled bovine femoral trabecular bone samples. The mean phase velocities at frequencies between 0.6 and 1.2 MHz exhibited significant negative dispersions for both the marrow-filled and the water-filled samples. The magnitudes of the dispersions showed no significant differences between the marrow-filled and the water-filled samples. In contrast, replacement of marrow by water led to a mean increase in the phase velocity of 27 m/s at frequencies from 0.6 to 1.2 MHz. The theoretical phase velocities of the fast wave predicted by using the Biot model for elastic wave propagation in fluid-saturated porous media showed good agreement with the measurements.

  5. Governing equations for a seriated continuum: an unequal velocity model for two-phase flow

    International Nuclear Information System (INIS)

    Solbrig, C.W.; Hughes, E.D.

    1975-05-01

    The description of the flow of two-phase fluids is important in many engineering devices. Unexpected transient conditions which occur in these devices cannot, in general, be treated with single-component momentum equations. Instead, the use of momentum equations for each phase is necessary in order to describe the varied transient situations which can occur. These transient conditions can include phases moving in opposite directions, such as steam moving upward and liquid moving downward, as well as phases moving in the same direction. The derivation of continuity and momentum equations for each phase and an overall energy equation for the mixture is presented. Terms describing interphase forces are described. A seriated (series of) continuum is distinguished from an interpenetrating medium by the representation of interphase friction with velocity differences in the former and velocity gradients in the latter. The seriated continuum also considers imbedded stationary solid surfaces such as occur in nuclear reactor cores. These stationary surfaces are taken into account with source terms. Sufficient constitutive equations are presented to form a complete set of equations. Methods are presented to show that all these coefficients are determinable from microscopic models and well-known experimental results. Comparison of the present derivation with previous work is also given. The equations derived here may also be employed in certain multiphase, multicomponent flow applications. (U.S.)

  6. AUTOMATIC CONTROL SYSTEM WITH A SINGLE-INPUT-DUAL-OUTPUT MODEL FOR CONTROLLING INSTRUMENT SERVICE-LIFE EFFICIENCY

    Directory of Open Access Journals (Sweden)

    S.N.M.P. Simamora

    2014-10-01

    Full Text Available An efficiency condition occurs when the ratio of useful output to the total resources consumed approaches the value 1 (the absolute limit). An instrument achieves efficiency if its power usage over its service life decreases significantly compared with the previous condition, in which the instrument is not equipped with the additional system (the proposed model improvement). The approach is even more effective if the model inputs are used in unison to achieve a homogeneous output. In this research, an automatic control system for the single-input-dual-output model has been designed and implemented, with a lamp and a fan as the sample instruments. The source voltage used is AC (alternating current), and the system was tested using quantitative research methods and instrumentation (with observed measuring instruments). The results obtained demonstrate that instrument efficiency improved significantly under the single-input-dual-output model, applied separately in the instrument trials with the lamp and the fan, compared to the previous condition. The results also show that the design, as built, runs well.

  7. Performance assessment of retrospective meteorological inputs for use in air quality modeling during TexAQS 2006

    Science.gov (United States)

    Ngan, Fong; Byun, Daewon; Kim, Hyuncheol; Lee, Daegyun; Rappenglück, Bernhard; Pour-Biazar, Arastoo

    2012-07-01

    To achieve more accurate meteorological inputs than were used in the daily forecast for studying TexAQS 2006 air quality, retrospective simulations were conducted using objective analysis and 3D/surface analysis nudging with surface and upper-air observations. Modeled ozone using the assimilated meteorological fields, with improved wind fields, shows better agreement with observations than the forecast results. In post-frontal conditions, the important factors for ozone modeling in terms of wind patterns are the weak easterlies in the morning, which bring industrial emissions into the city, and the subsequent clockwise turning of the wind direction induced by the Coriolis force superimposed on the sea breeze, which keeps pollutants in the urban area. Objective analysis and nudging employed in the retrospective simulation minimize the wind bias but are not able to compensate for general flow-pattern biases inherited from the large-scale inputs. By using alternative analysis data to initialize the meteorological simulation, the model can reproduce the flow pattern and place the ozone peak location closer to reality. Inaccurate simulation of precipitation and cloudiness occasionally causes over-prediction of ozone. Since there are limitations in the meteorological model's ability to simulate precipitation and cloudiness in the fine-scale domain (less than 4-km grid), satellite-based cloud data are an alternative way to provide the necessary inputs for the retrospective study of air quality.

  8. Investigation of the velocity field in a full-scale model of a cerebral aneurysm

    International Nuclear Information System (INIS)

    Roloff, Christoph; Bordás, Róbert; Nickl, Rosa; Mátrai, Zsolt; Szaszák, Norbert; Szilárd, Szabó; Thévenin, Dominique

    2013-01-01

    Highlights: • We investigate flow fields inside a phantom model of a full-scale cerebral aneurysm. • An artificial blood fluid is used matching the viscosity and density of real blood. • We present Particle Tracking results of fluorescent tracer particles. • Instantaneous model inlet velocity profiles and volume flow rates are derived. • Trajectory fields at three of six measurement planes are presented. -- Abstract: Due to improved and now widely used imaging methods in clinical surgical practice, detection of unruptured cerebral aneurysms is becoming more and more frequent. For the selection and development of a low-risk and highly effective treatment option, understanding of the involved hemodynamic mechanisms is of great importance. Computational Fluid Dynamics (CFD), in vivo angiographic imaging and in situ experimental investigations of flow behaviour are powerful tools which can deliver the needed information. Hence, the aim of this contribution is to experimentally characterise the flow in a full-scale phantom model of a realistic cerebral aneurysm. The acquired experimental data will then be used for a quantitative validation of companion numerical simulations. The experimental methodology relies on the large-field velocimetry technique PTV (Particle Tracking Velocimetry), processing high-speed images of fluorescent tracer particles added to the flow of a blood-mimicking fluid. First, time-resolved planar PTV images were recorded at 4500 fps and processed by a complex, in-house algorithm. The resulting trajectories are used to identify Lagrangian flow structures, vortices and recirculation zones in two-dimensional measurement slices within the aneurysm sac. The instantaneous inlet velocity distribution, needed as a boundary condition for the numerical simulations, has been measured with the same technique but using a higher frame rate of 20,000 fps in order to avoid ambiguous particle assignment. From this velocity distribution, the time

  9. Horizontal and Vertical Velocities Derived from the IDS Contribution to ITRF2014, and Comparisons with Geophysical Models

    Science.gov (United States)

    Moreaux, G.; Lemoine, F. G.; Argus, D. F.; Santamaria-Gomez, A.; Willis, P.; Soudarin, L.; Gravelle, M.; Ferrage, P.

    2016-01-01

    In the context of the 2014 realization of the International Terrestrial Reference Frame (ITRF2014), the International DORIS Service (IDS) has delivered to the IERS a set of 1140 weekly SINEX files including station coordinates and Earth orientation parameters, covering the time period from 1993.0 to 2015.0. From this set of weekly SINEX files, the IDS Combination Center estimated a cumulative DORIS position and velocity solution to obtain the mean horizontal and vertical motion of 160 stations at 71 DORIS sites. The main objective of this study is to validate the velocities of the DORIS sites by comparison with external models or time series. Horizontal velocities are compared with two recent global plate models (GEODVEL 2010 and NNR-MORVEL56). Prior to the comparisons, DORIS horizontal velocities were corrected for Glacial Isostatic Adjustment (GIA) from the ICE-6G (VM5a) model. For more than half of the sites, the DORIS horizontal velocities differ from the global plate models by less than 2-3 mm/yr. For five of the sites (Arequipa, Dionysos/Gavdos, Manila, Santiago) with horizontal velocity differences with respect to these models larger than 10 mm/yr, comparisons with GNSS estimates show the veracity of the DORIS motions. Vertical motions from the DORIS cumulative solution are compared with the vertical velocities derived from the latest GPS cumulative solution over the time span 1995.0-2014.0 from the University of La Rochelle (ULR6) solution at 31 co-located DORIS-GPS sites. These two sets of vertical velocities show a correlation coefficient of 0.83. Vertical differences are larger than 2 mm/yr at 23 percent of the sites. At Thule the disagreement is explained by fine-tuned DORIS discontinuities in line with the mass variations of outlet glaciers. Furthermore, the time evolution of the vertical time series from the DORIS station in Thule shows similar trends to the GRACE equivalent water height.

  10. Synaptic inputs compete during rapid formation of the calyx of Held: a new model system for neural development.

    Science.gov (United States)

    Holcomb, Paul S; Hoffpauir, Brian K; Hoyson, Mitchell C; Jackson, Dakota R; Deerinck, Thomas J; Marrs, Glenn S; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H; Spirou, George A

    2013-08-07

    Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, are models to study neural development in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA) > 200 μm²/day centered on a single age, postnatal day 3 in mice, and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 days. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms to select "winning" inputs by regional reinforcement of postsynaptic membrane to mediate size and strength of competing synaptic inputs.

  11. On the relationship between input parameters in the two-mass vocal-fold model with acoustical coupling and signal parameters of the glottal flow

    NARCIS (Netherlands)

    van Hirtum, Annemie; Lopez, Ines; Hirschberg, Abraham; Pelorson, Xavier

    2003-01-01

    In this paper the sensitivity of the two-mass model with acoustical coupling to the model input-parameters is assessed. The model-output or the glottal volume air flow is characterised by signal-parameters in the time-domain. The influence of changing input-parameters on the signal-parameters is

  12. On the relationship between input parameters in the two-mass vocal-fold model with acoustical coupling and signal parameters of the glottal flow

    NARCIS (Netherlands)

    Hirtum, van A.; Lopez Arteaga, I.; Hirschberg, A.; Pelorson, X.

    2003-01-01

    In this paper the sensitivity of the two-mass model with acoustical coupling to the model input-parameters is assessed. The model-output or the glottal volume air flow is characterised by signal-parameters in the time-domain. The influence of changing input-parameters on the signal-parameters is

  13. Kinematic Modeling of Normal Voluntary Mandibular Opening and Closing Velocity-Initial Study.

    Science.gov (United States)

    Gawriołek, Krzysztof; Gawriołek, Maria; Komosa, Marek; Piotrowski, Paweł R; Azer, Shereen S

    2015-06-01

    Determination and quantification of voluntary mandibular movement velocity has not been a thoroughly studied parameter of masticatory movement. This study attempted to objectively define the kinematics of mandibular movement based on numerical (digital) analysis of the relations and interactions of velocity diagram records in healthy female individuals. Using a computerized mandibular scanner (K7 Evaluation Software), 72 diagrams of voluntary mandibular velocity movements (36 for opening, 36 for closing) were recorded for women with clinically normal motor and functional activities of the masticatory system. Multiple measurements were analyzed focusing on the curve for maximum velocity records. For each movement, the loop of temporary velocities was determined. The diagram was then entered into AutoCad calculation software where movement analysis was performed. The real maximum velocity values on opening (Vmax), closing (V0), and average velocity values (Vav) as well as movement accelerations (a) were recorded. Additionally, functional (A1-A2) and geometric (P1-P4) analyses of the loop constituent phases were performed, and the relations between the obtained areas were defined. Velocity means and correlation coefficient values for the various velocity phases were calculated. The Wilcoxon test produced the following maximum and average velocity results: Vmax = 394 ± 102 and Vav = 222 ± 61 mm/s for opening, and Vmax = 409 ± 94 and Vav = 225 ± 55 mm/s for closing. Both mandibular movement range and velocity change showed significant variability, with the highest velocity achieved in the P2 phase. Voluntary mandibular velocity presents significant variations between healthy individuals. Maximum velocity is obtained when incisal separation is between 12.8 and 13.5 mm. An improved understanding of the patterns of normal mandibular movements may provide an invaluable diagnostic aid for identifying pathological changes within the masticatory system. © 2014 by the American College of Prosthodontists.

  14. Velocity-based movement modeling for individual and population level inference.

    Directory of Open Access Journals (Sweden)

    Ephraim M Hanks

    Full Text Available Understanding animal movement and resource selection provides important information about the ecology of the animal, but an animal's movement and behavior are not typically constant in time. We present a velocity-based approach for modeling animal movement in space and time that allows for temporal heterogeneity in an animal's response to the environment, allows for temporal irregularity in telemetry data, and accounts for the uncertainty in the location information. Population-level inference on movement patterns and resource selection can then be made through cluster analysis of the parameters related to movement and behavior. We illustrate this approach through a study of northern fur seal (Callorhinus ursinus) movement in the Bering Sea, Alaska, USA. Results show sex differentiation, with female northern fur seals exhibiting a stronger response to environmental variables.
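
    As a loose, hypothetical illustration of the velocity-based idea (a discrete-time simulation sketch, not the authors' continuous-time statistical model, and with invented parameters and gradient), one can generate a track whose velocity responds to an environmental gradient while decaying toward zero:

        import numpy as np

        def simulate_track(beta, gamma, sigma, grad, x0, v0, dt, n_steps, rng=None):
            """Simulate a velocity-based movement track: velocity drifts toward an
            environmental gradient (resource attraction) with friction and noise."""
            rng = np.random.default_rng(rng)
            xs, x, v = [np.array(x0, float)], np.array(x0, float), np.array(v0, float)
            for _ in range(n_steps):
                drift = beta * grad(x) - gamma * v      # covariate response minus velocity decay
                v = v + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
                x = x + v * dt
                xs.append(x.copy())
            return np.array(xs)

        # Hypothetical resource gradient pulling toward the origin (e.g. a foraging patch).
        track = simulate_track(beta=1.0, gamma=0.8, sigma=0.3,
                               grad=lambda x: -x / (1.0 + np.linalg.norm(x)),
                               x0=[5.0, 0.0], v0=[0.0, 0.0], dt=0.1, n_steps=500)
        print(track[-1])   # end point should have drifted toward the patch at the origin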

  15. Efficient scattering-angle enrichment for a nonlinear inversion of the background and perturbation components of a velocity model

    KAUST Repository

    Wu, Zedong

    2017-07-04

    Reflection-waveform inversion (RWI) can help us reduce the nonlinearity of standard full-waveform inversion (FWI) by inverting for the background velocity model using the wave path of a singly scattered wavefield to an image. However, current RWI implementations usually neglect the multi-scattered energy, which causes artifacts in the image and in the update of the background. To improve existing RWI implementations by taking multi-scattered energy into consideration, we split the velocity model into background and perturbation components, integrate them directly in the wave equation, and formulate a new optimization problem for both components. In this case, the perturbed model is no longer a single-scattering model, but includes all scattering. By introducing a new, computationally cheap implementation of scattering-angle enrichment, the separation of the background and perturbation components can be implemented efficiently. We optimize both components simultaneously to produce updates to the velocity model that are nonlinear with respect to both the background and the perturbation. The newly introduced perturbation model can absorb the non-smooth update of the background in a more consistent way. We apply the proposed approach to the Marmousi model with data that contain frequencies starting from 5 Hz to show that this method can converge to an accurate velocity starting from a linearly increasing initial velocity. Our proposed method also works well when applied to a field data set.

  16. Modelling seasonal meltwater forcing of the velocity of land-terminating margins of the Greenland Ice Sheet

    Science.gov (United States)

    Koziol, Conrad P.; Arnold, Neil

    2018-03-01

    Surface runoff at the margin of the Greenland Ice Sheet (GrIS) drains to the ice-sheet bed, leading to enhanced summer ice flow. Ice velocities show a pattern of early summer acceleration followed by mid-summer deceleration due to evolution of the subglacial hydrology system in response to meltwater forcing. Modelling the integrated hydrological-ice dynamics system to reproduce measured velocities at the ice margin remains a key challenge for validating the present understanding of the system and constraining the impact of increasing surface runoff rates on dynamic ice mass loss from the GrIS. Here we show that a multi-component model incorporating supraglacial, subglacial, and ice dynamic components applied to a land-terminating catchment in western Greenland produces modelled velocities which are in reasonable agreement with those observed in GPS records for three melt seasons of varying melt intensities. This provides numerical support for the hypothesis that the subglacial system develops analogously to alpine glaciers and supports recent model formulations capturing the transition between distributed and channelized states. The model shows the growth of efficient conduit-based drainage up-glacier from the ice sheet margin, which develops more extensively, and further inland, as melt intensity increases. This suggests current trends of decadal-timescale slowdown of ice velocities in the ablation zone may continue in the near future. The model results also show a strong scaling between average summer velocities and melt season intensity, particularly in the upper ablation area. Assuming winter velocities are not impacted by channelization, our model suggests an upper bound of a 25 % increase in annual surface velocities as surface melt increases to 4 × present levels.

  17. Probing dark energy models with extreme pairwise velocities of galaxy clusters from the DEUS-FUR simulations

    Science.gov (United States)

    Bouillot, Vincent R.; Alimi, Jean-Michel; Corasaniti, Pier-Stefano; Rasera, Yann

    2015-06-01

    Observations of colliding galaxy clusters with high relative velocity probe the tail of the halo pairwise velocity distribution, with the potential of providing a powerful test of cosmology. As an example, it has been argued that the discovery of the Bullet Cluster challenges standard Λ cold dark matter (ΛCDM) model predictions. Halo catalogues from N-body simulations have been used to estimate the probability of Bullet-like clusters. However, due to simulation volume effects, previous studies had to rely on a Gaussian extrapolation of the pairwise velocity distribution to high velocities. Here, we perform a detailed analysis using the halo catalogues from the Dark Energy Universe Simulation Full Universe Runs (DEUS-FUR), which enables us to resolve the high-velocity tail of the distribution and study its dependence on the halo mass definition, redshift and cosmology. Building upon these results, we estimate the probability of Bullet-like systems in the framework of Extreme Value Statistics. We show that the tail of extreme pairwise velocities significantly deviates from that of a Gaussian; moreover, it carries an imprint of the underlying cosmology. We find the Bullet Cluster probability to be two orders of magnitude larger than previous estimates, thus easing the tension with the ΛCDM model. Finally, the comparison of the inferred probabilities for the different DEUS-FUR cosmologies suggests that observations of extreme interacting clusters can provide constraints on dark energy models complementary to standard cosmological tests.

  18. The role of additive neurogenesis and synaptic plasticity in a hippocampal memory model with grid-cell like input.

    Directory of Open Access Journals (Sweden)

    Peter A Appleby

    Full Text Available Recently, we presented a study of adult neurogenesis in a simplified hippocampal memory model. The network was required to encode and decode memory patterns despite changing input statistics. We showed that additive neurogenesis was a more effective adaptation strategy compared to neuronal turnover and conventional synaptic plasticity as it allowed the network to respond to changes in the input statistics while preserving representations of earlier environments. Here we extend our model to include realistic, spatially driven input firing patterns in the form of grid cells in the entorhinal cortex. We compare network performance across a sequence of spatial environments using three distinct adaptation strategies: conventional synaptic plasticity, where the network is of fixed size but the connectivity is plastic; neuronal turnover, where the network is of fixed size but units in the network may die and be replaced; and additive neurogenesis, where the network starts out with fewer initial units but grows over time. We confirm that additive neurogenesis is a superior adaptation strategy when using realistic, spatially structured input patterns. We then show that a more biologically plausible neurogenesis rule that incorporates cell death and enhanced plasticity of new granule cells has an overall performance significantly better than any one of the three individual strategies operating alone. This adaptation rule can be tailored to maximise performance of the network when operating as either a short- or long-term memory store. We also examine the time course of adult neurogenesis over the lifetime of an animal raised under different hypothetical rearing conditions. These growth profiles have several distinct features that form a theoretical prediction that could be tested experimentally. Finally, we show that place cells can emerge and refine in a realistic manner in our model as a direct result of the sparsification performed by the dentate gyrus

  19. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANNs, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise, and it does not depend on the sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.
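
    As a rough sketch of the NARX idea only (not the authors' network or data; the lag orders, the toy "hemodynamic-like" filter, and all numbers are invented), a NARX-style regression can be assembled from lagged outputs and exogenous inputs and fitted with a small neural network:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def make_narx_dataset(u, y, na=3, nb=3):
            """Stack lagged outputs and exogenous inputs into NARX regression features."""
            rows, targets = [], []
            start = max(na, nb)
            for t in range(start, len(y)):
                rows.append(np.concatenate([y[t - na:t], u[t - nb:t]]))
                targets.append(y[t])
            return np.array(rows), np.array(targets)

        # Hypothetical stand-in for a BOLD-like response: a smoothed, delayed reaction to events u.
        rng = np.random.default_rng(1)
        u = (rng.random(600) > 0.9).astype(float)              # sparse "neural events"
        y = np.zeros_like(u)
        for t in range(2, len(u)):
            y[t] = 0.7 * y[t - 1] - 0.1 * y[t - 2] + 0.5 * u[t - 2]   # simple lagged filter
        y += 0.01 * rng.standard_normal(len(y))

        X, target = make_narx_dataset(u, y)
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, target)
        print(model.score(X, target))   # in-sample fit; a real study would hold out data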

  20. Embodied water analysis for Hebei Province, China by input-output modelling

    Science.gov (United States)

    Liu, Siyuan; Han, Mengyao; Wu, Xudong; Wu, Xiaofang; Li, Zhi; Xia, Xiaohua; Ji, Xi

    2018-03-01

    With the accelerating coordinated development of the Beijing-Tianjin-Hebei region, regional economic integration is recognized as a national strategy. As water scarcity places Hebei Province in a dilemma, it is of critical importance for the province to balance water resources as well as make full use of its unique advantages in the transition to sustainable development. To our knowledge, related embodied water accounting analyses have been conducted for Beijing and Tianjin, while similar work focusing on Hebei is not found. In this paper, using the most complete and recent statistics available for Hebei Province, the embodied water use in Hebei Province is analyzed in detail. Based on input-output analysis, it presents a complete systems accounting framework for water resources. In addition, a database of embodied water intensity is proposed which is applicable to both intermediate inputs and final demand. The results suggest that the total amount of embodied water in final demand is 10.62 billion m3, of which the water embodied in urban household consumption accounts for more than half. As a net embodied water importer, the water embodied in the commodity trade of Hebei Province is 17.20 billion m3. The outcome of this work implies that it is particularly urgent to adjust industrial structure and trade policies for water conservation, to upgrade technology and to improve water utilization. As a result, to relieve water shortages in Hebei Province, it is of crucial importance to regulate the balance of water use within the province, thus balancing water distribution among the various industrial sectors.
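
    The embodied-water accounting described here rests on the standard Leontief input-output identity, in which embodied intensities are the direct water coefficients propagated through the Leontief inverse. The toy three-sector example below is purely illustrative (all coefficients and water figures are invented, not taken from the Hebei table):

        import numpy as np

        # Toy 3-sector example (agriculture, industry, services) with hypothetical numbers.
        A = np.array([[0.20, 0.10, 0.02],       # technical coefficient matrix: inter-industry
                      [0.15, 0.30, 0.10],       # inputs required per unit of sectoral output
                      [0.05, 0.10, 0.15]])
        direct_water = np.array([800.0, 120.0, 30.0])   # direct water use per unit output (m3 / 10^4 yuan)
        final_demand = np.array([50.0, 200.0, 150.0])   # final demand by sector (10^4 yuan)

        L = np.linalg.inv(np.eye(3) - A)                # Leontief inverse (I - A)^-1
        total_output = L @ final_demand                 # gross output needed to satisfy final demand
        embodied_intensity = direct_water @ L           # water embodied per unit of final demand
        embodied_water = embodied_intensity * final_demand

        print(embodied_intensity)      # intensity database applicable to intermediate inputs and final demand
        print(embodied_water.sum())    # total embodied water in final demand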

  1. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Kenneth J. Bagstad; Erika Cohen; Zachary H. Ancona; Steven G. McNulty; Ge Sun

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address...

  2. Addendum: "The Dynamics of M15: Observations of the Velocity Dispersion Profile and Fokker-Planck Models" (ApJ, 481, 267 [1997])

    Science.gov (United States)

    Dull, J. D.; Cohn, H. N.; Lugger, P. M.; Murphy, B. W.; Seitzer, P. O.; Callanan, P. J.; Rutten, R. G. M.; Charles, P. A.

    2003-03-01

    It has recently come to our attention that there are axis scale errors in three of the figures presented in Dull et al. (1997, hereafter D97). This paper presented Fokker-Planck models for the collapsed-core globular cluster M15 that include a dense, centrally concentrated population of neutron stars and massive white dwarfs. These models do not include a central black hole. Figure 12 of D97, which presents the predicted mass-to-light profile, is of particular interest, since it was used by Gerssen et al. (2002) as an input to their Jeans equation analysis of the Hubble Space Telescope (HST) STIS velocity measurements reported by van der Marel et al. (2002). On the basis of the original, incorrect version of Figure 12, Gerssen et al. (2002) concluded that the D97 models can fit the new data only with the addition of an intermediate-mass black hole. However, this is counter to our previous finding, shown in Figure 6 of D97, that the Fokker-Planck models predict the sort of moderately rising velocity dispersion profile that Gerssen et al. (2002) infer from the new data. Baumgardt et al. (2003) have independently noted this apparent inconsistency. We appreciate the thoughtful cooperation of Roeland van der Marel in resolving this issue. Using our corrected version of Figure 12 (see below), Gerssen et al. (2003) now find that the velocity dispersion profile that they infer from the D97 mass-to-light ratio profile is entirely consistent with the velocity dispersion profile presented in Figure 6 of D97. Gerssen et al. (2003) further find that there is no statistically significant difference between the fit to the van der Marel et al. (2002) velocity measurements provided by the D97 intermediate-phase model and that provided by their model, which supplements this D97 model with a 1.7(+2.7/-1.7) × 10³ M_solar black hole. Thus, the choice between models with and without black holes will require additional model predictions and observational tests. We present corrected versions of

  3. Modeling and sliding mode predictive control of the ultra-supercritical boiler-turbine system with uncertainties and input constraints.

    Science.gov (United States)

    Tian, Zhen; Yuan, Jingqi; Zhang, Xiang; Kong, Lei; Wang, Jingcheng

    2018-05-01

    The coordinated control system (CCS) plays an important role in load regulation, efficiency optimization and pollutant reduction for coal-fired power plants. The CCS faces tough challenges, such as wide-range load variation and various uncertainties and constraints. This paper aims to improve the load tracking ability and robustness of boiler-turbine units under wide-range operation. To capture the key dynamics of the ultra-supercritical boiler-turbine system, a nonlinear control-oriented model is developed based on mechanism analysis and model reduction techniques, and is validated with historical operating data of a real 1000 MW unit. To simultaneously address the issues of uncertainties and input constraints, a discrete-time sliding mode predictive controller (SMPC) is designed with a dual-mode control law. Moreover, the input-to-state stability and robustness of the closed-loop system are proved. Simulation results are presented to illustrate the effectiveness of the proposed control scheme, which achieves good tracking performance, disturbance rejection ability and compatibility with input constraints. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Methodology for deriving hydrogeological input parameters for safety-analysis models - application to fractured crystalline rocks of Northern Switzerland

    International Nuclear Information System (INIS)

    Vomvoris, S.; Andrews, R.W.; Lanyon, G.W.; Voborny, O.; Wilson, W.

    1996-04-01

    Switzerland is one of many nations with nuclear power that is seeking to identify rock types and locations that would be suitable for the underground disposal of nuclear waste. A common challenge among these programs is to provide engineering designers and safety analysts with a reasonably representative hydrogeological input dataset that synthesizes the relevant information from direct field observations as well as inferences and model results derived from those observations. Needed are estimates of the volumetric flux through a volume of rock and the distribution of that flux into discrete pathways between the repository zones and the biosphere. These fluxes are not directly measurable but must be derived based on an understanding of the range of plausible hydrogeologic conditions expected at the location investigated. The methodology described in this report utilizes conceptual and numerical models at various scales to derive the input dataset. The methodology incorporates an innovative approach, called the geometric approach, in which field observations and their associated uncertainty, together with a conceptual representation of those features that most significantly affect the groundwater flow regime, were rigorously applied to generate alternative possible realizations of hydrogeologic features in the geosphere. In this approach, the ranges in the output values directly reflect uncertainties in the input values. As a demonstration, the methodology is applied to the derivation of the hydrogeological dataset for the crystalline basement of Northern Switzerland. (author) figs., tabs., refs

  5. Scalar and joint velocity-scalar PDF modelling of near-wall turbulent heat transfer

    International Nuclear Information System (INIS)

    Pozorski, Jacek; Waclawczyk, Marta; Minier, Jean-Pierre

    2004-01-01

    The temperature field in a heated turbulent flow is considered as a dynamically passive scalar. The probability density function (PDF) method with integration down to the wall is explored and new modelling proposals are put forward, including an explicit account of the molecular transport terms. Two variants of the approach are considered: first, the scalar PDF method with the use of externally provided turbulence statistics; and second, the joint (stand-alone) velocity-scalar PDF method, where a near-wall model for the dynamical variables is coupled with a model for temperature. The closure proposals are formulated in the Lagrangian setting and the resulting stochastic evolution equations are solved with a Monte Carlo method. The near-wall region of a heated channel flow is taken as a validation case; the second-order thermal statistics are of particular interest. The PDF computation results agree reasonably well with available DNS data. The sensitivity of the results to the molecular Prandtl number and to the thermal wall boundary condition is also investigated

  6. Site-response Estimation by 1D Heterogeneous Velocity Model using Borehole Log and its Relationship to Damping Factor

    International Nuclear Information System (INIS)

    Sato, Hiroaki

    2014-01-01

    In the Niigata area, which suffered several large earthquakes such as the 2007 Chuetsu-oki earthquake, geophysical observation to elucidate the subsurface S-wave velocity structure is advancing. Modeling of the S-wave velocity structure in the subsurface is underway to enable simulation of long-period ground motion. A one-dimensional velocity model obtained by inverse analysis of microtremors is sufficiently appropriate for the long-period site response but not for the short-period response, which is important for ground motion evaluation at NPP sites. The high-frequency site response may be controlled by the strength of heterogeneity of the underground structure, because the heterogeneity of the 1D model plays an important role in estimating high-frequency site responses and is strongly related to the damping factor of the 1D layered velocity model. (author)

  7. Improved Stabilization Conditions for Nonlinear Systems with Input and State Delays via T-S Fuzzy Model

    Directory of Open Access Journals (Sweden)

    Chang Che

    2018-01-01

    Full Text Available This paper focuses on the problem of nonlinear systems with input and state delays. The considered nonlinear systems are represented by a Takagi-Sugeno (T-S) fuzzy model. A new state feedback control approach is introduced for T-S fuzzy systems with input delay and state delays. A new Lyapunov-Krasovskii functional is employed to derive less conservative stability conditions by incorporating a recently developed Wirtinger-based integral inequality. Based on the Lyapunov stability criterion, a series of linear matrix inequalities (LMIs) are obtained by using slack variables and the integral inequality, which guarantees the asymptotic stability of the closed-loop system. Several numerical examples are given to show the advantages of the proposed results.

  8. Velocity Models of the Upper Mantle Beneath the MER, Somali Platform, and Ethiopian Highlands from Body Wave Tomography

    Science.gov (United States)

    Hariharan, A.; Keranen, K. M.; Alemayehu, S.; Ayele, A.; Bastow, I. D.; Eilon, Z.

    2016-12-01

    The Main Ethiopian Rift (MER) presents a unique opportunity to improve our understanding of an active continental rift. Here we use body wave tomography to generate compressional and shear wave velocity models of the region beneath the rift. The models help us understand the rifting process over the broader region around the MER, extending the geographic coverage beyond that captured in past studies. We use differential arrival times of body waves from teleseismic earthquakes and multi-channel cross-correlation to generate travel-time residuals relative to the global IASP91 1-D velocity model. The events used for the tomographic velocity model include 200 teleseismic earthquakes with moment magnitudes greater than 5.5 from our recent 2014-2016 deployment, in combination with 200 earthquakes from the earlier EBSE and EAGLE deployments (Bastow et al. 2008). We use the finite-frequency tomography analysis of Schmandt et al. (2010), which uses a first-Fresnel-zone paraxial approximation to the Born theoretical kernel with spatial smoothing and model norm damping in an iterative LSQR algorithm. Results show a broad, slow region beneath the rift with a distinct low-velocity anomaly beneath the northwest shoulder. This robust and well-resolved low-velocity anomaly is visible at a range of depths beneath the Ethiopian plateau, within the footprint of the Oligocene flood basalts, and near surface expressions of diking. We interpret this anomaly as a possible plume conduit, or a low-velocity finger rising from a deeper, larger plume. Within the rift, results are consistent with previous work, exhibiting rift segmentation and low velocities beneath the rift valley.
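
    As a small, hedged illustration of how a relative travel-time residual can be obtained from waveform cross-correlation (a single-pair sketch, not the multi-channel scheme used in the study; the pulse shapes, sampling rate and shift are invented), consider:

        import numpy as np

        def relative_delay(trace_a, trace_b, dt):
            """Relative arrival-time delay (s) of trace_b with respect to trace_a by cross-correlation."""
            a = trace_a - trace_a.mean()
            b = trace_b - trace_b.mean()
            a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
            cc = np.correlate(b, a, mode="full")           # lags from -(N-1) to +(N-1) samples
            lag = np.argmax(cc) - (len(a) - 1)
            return lag * dt

        # Hypothetical example: identical Gaussian pulses offset by 0.5 s at 20 Hz sampling.
        dt = 0.05
        t = np.arange(0.0, 30.0, dt)
        pulse = np.exp(-((t - 10.0) / 0.8) ** 2)
        shifted = np.exp(-((t - 10.5) / 0.8) ** 2)
        print(relative_delay(pulse, shifted, dt))          # approximately +0.5 s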

  9. P-wave velocity changes in freezing hard low-porosity rocks: a laboratory-based time-average model

    Directory of Open Access Journals (Sweden)

    D. Draebing

    2012-10-01

    Full Text Available P-wave refraction seismics is a key method in permafrost research, but its applicability to low-porosity rocks, which constitute alpine rock walls, has been denied in prior studies. These studies explain p-wave velocity changes in freezing rocks exclusively by the changing velocities of the pore infill, i.e. water, air and ice. In existing models, no significant velocity increase is expected for low-porosity bedrock. We postulate that mixing laws apply for high-porosity rocks, but that freezing in the confined space of low-porosity bedrock also alters physical rock matrix properties. In the laboratory, we measured p-wave velocities of 22 decimetre-large low-porosity (< 10%) metamorphic, magmatic and sedimentary rock samples from permafrost sites with a natural texture (> 100 micro-fissures) from 25 °C to −15 °C in 0.3 °C increments close to the freezing point. When freezing, p-wave velocity increases by 11–166% perpendicular to cleavage/bedding, equivalent to a matrix velocity increase of 11–200%, coincident with an anisotropy decrease in most samples. The expansion of rigid bedrock upon freezing is restricted, and ice pressure will increase matrix velocity and decrease anisotropy, while the changing velocities of the pore infill are insignificant. Here, we present a modified Timur two-phase equation implementing changes in matrix velocity dependent on lithology and demonstrate the general applicability of refraction seismics for differentiating frozen and unfrozen low-porosity bedrock.
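
    For orientation, the classical two-phase time-average relation that the modified Timur equation builds on combines matrix and pore-fill slownesses weighted by porosity; a generic textbook form (not the exact equation of the paper, whose modification additionally lets the matrix velocity change with freezing and lithology) can be written in LaTeX as

        \frac{1}{V_{p}} = \frac{1-\phi}{V_{\mathrm{matrix}}} + \frac{\phi}{V_{\mathrm{pore}}}

    where \phi is the porosity and V_{\mathrm{pore}} switches between the water and ice velocities as the sample freezes.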

  10. Gravel-Sand-Clay Mixture Model for Predictions of Permeability and Velocity of Unconsolidated Sediments

    Science.gov (United States)

    Konishi, C.

    2014-12-01

    A gravel-sand-clay mixture model is proposed, particularly for unconsolidated sediments, to predict permeability and velocity from the volume fractions of the three components (i.e. gravel, sand, and clay). A well-known sand-clay mixture model, or bimodal mixture model, treats the clay content as the volume fraction of the small particles and considers the rest of the volume as that of the large particles. This simple approach has been commonly accepted and has been validated by many previous studies. However, a collection of laboratory measurements of permeability and grain size distribution for unconsolidated samples shows the impact of the presence of another large particle; i.e. only a few percent of gravel particles increases the permeability of the sample significantly. This observation cannot be explained by the bimodal mixture model, and it suggests the necessity of considering the gravel-sand-clay mixture model. In the proposed model, I consider the volume fractions of all three components instead of using only the clay content. Sand becomes either the larger or the smaller particle in the three-component mixture model, whereas it is always the large particle in the bimodal mixture model. The total porosity of the two cases, one in which sand is the smaller particle and the other in which sand is the larger particle, can be modeled independently from the sand volume fraction in the same fashion as in the bimodal model. However, the two cases can co-exist in one sample; thus, the total porosity of the mixed sample is calculated as a weighted average of the two cases, weighted by the volume fractions of gravel and clay. The effective porosity is distinguished from the total porosity by assuming that the porosity associated with clay contributes zero effective porosity. In addition, the effective grain size can be computed from the volume fractions and representative grain sizes of each component. Using the effective porosity and the effective grain size, the permeability is predicted by the Kozeny-Carman equation
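
    The final permeability step uses the Kozeny-Carman relation; a common textbook form with an empirical constant of about 180 is sketched below (the constant, and whether the study calibrates it differently, is an assumption; the porosity and grain-size numbers are invented):

        def kozeny_carman(effective_porosity, effective_grain_size_m, c=180.0):
            """Kozeny-Carman permeability (m^2) from effective porosity and effective grain size."""
            phi, d = effective_porosity, effective_grain_size_m
            return (d ** 2) * phi ** 3 / (c * (1.0 - phi) ** 2)

        # Hypothetical mixtures: a few percent of gravel raises the effective grain size markedly.
        print(kozeny_carman(0.30, 2.0e-4))   # sandy mixture, ~0.2 mm effective grain size
        print(kozeny_carman(0.30, 5.0e-4))   # same porosity, coarser effective grain size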

  11. Modeling Soil Carbon Dynamics in Northern Forests: Effects of Spatial and Temporal Aggregation of Climatic Input Data.

    Science.gov (United States)

    Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari

    2016-01-01

    Boreal forests contain 30% of the global forest carbon, with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies, and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960-2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60-70%) lower when applying annual or 5-year mean climate compared to long-term mean climate, reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. The largest differences in this study were observed in central and northern regions with strongly

  12. Effect of manure vs. fertilizer inputs on productivity of forage crop models.

    Science.gov (United States)

    Annicchiarico, Giovanni; Caternolo, Giovanni; Rossi, Emanuela; Martiniello, Pasquale

    2011-06-01

    Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform it from a waste into a resource product. Experiments comparing manure with standard chemical fertilizers (CF) were studied under a double-cropping-per-year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha(-1), respectively. The manure, applied to the soil by broadcast and injection procedures, provides an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics was related to the models. The weather conditions, manures and CF showed small interactions among treatments. The number of MFU ha(-1) of biomass crop gross product produced in the autumn- and spring-sown models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha(-1) under CF ranges from 10.7% to 13.2% relative to the manure models. The effect of manure on the organic carbon and total nitrogen of the topsoil, compared to model I, stressed these parameters as CF did, with amounts higher in models II and III than in model IV. In terms of percentage, the organic carbon and total nitrogen of model I under manure treatment were reduced by about 18.5 and 21.9% in models II and III and by 8.8 and 6.3% in model IV, respectively. Manure management may substitute for CF without reducing gross production and the sustainability of cropping systems, thus allowing the opportunity to recycle the waste product for animal forage feeding.

  13. Effect of Manure vs. Fertilizer Inputs on Productivity of Forage Crop Models

    Directory of Open Access Journals (Sweden)

    Pasquale Martiniello

    2011-06-01

    Full Text Available Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform it from a waste into a resource product. Experiments comparing manure with standard chemical fertilizers (CF) were conducted under a double-cropping-per-year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha−1, respectively. The manure, applied to the soil by broadcast and injection procedures, provided an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics depended on the model. Weather conditions, manure and CF showed only small interactions among treatments. The number of MFU ha−1 of biomass crop gross product produced in the autumn- and spring-sown models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha−1 under CF ranged from 10.7% to 13.2% relative to the manure models. The effect of manure on topsoil organic carbon and total nitrogen, compared to model I, stressed these parameters as CF did; their amounts were higher in models II and III than in model IV. In terms of percentage, the organic carbon and total nitrogen of model I under manure treatment were reduced by about 18.5 and 21.9% in models II and III and by 8.8 and 6.3% in model IV, respectively. Manure management may substitute for CF without reducing gross production or the sustainability of cropping systems, thus allowing the opportunity to recycle the waste product for animal forage feeding.

  14. Modelling the average velocity of propagation of the flame front in a gasoline engine with hydrogen additives

    Science.gov (United States)

    Smolenskaya, N. M.; Smolenskii, V. V.

    2018-01-01

    The paper presents models for calculating the average propagation velocity of the flame front, obtained from the results of experimental studies. The experimental studies were carried out on a single-cylinder UIT-85 gasoline engine with hydrogen additions of up to 6% of the fuel mass. The article shows the influence of hydrogen addition on the average propagation velocity of the flame front in the main combustion phase. The dependences of the turbulent flame front propagation velocity in the second combustion phase on mixture composition and operating modes are also presented. Finally, the article shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.

  15. Effect of Low Co-flow Air Velocity on Hydrogen-air Non-premixed Turbulent Flame Model

    Directory of Open Access Journals (Sweden)

    Noor Mohsin Jasim

    2017-08-01

    Full Text Available The aim of this paper is to provide information on the effect of low co-flow velocity on the turbulent diffusion flame in a simple type of combustor; numerically simulated cases of a turbulent diffusion hydrogen-air flame are performed. The combustion model used in this investigation is based on chemical equilibrium and kinetics to simplify the complexity of the chemical mechanism. The effects of increased co-flowing air velocity on temperature, velocity components (axial and radial), and reactants have been investigated numerically. Numerical results for temperature are compared with experimental data, and the comparison shows good agreement. All numerical simulations have been performed using the commercial Computational Fluid Dynamics (CFD) code FLUENT. A comparison among the various co-flow air velocities and their effects on flame behavior and temperature fields is presented.

  16. Seismic Travel Time Tomography in Modeling Low Velocity Anomalies between the Boreholes

    Science.gov (United States)

    Octova, A.; Sule, R.

    2018-04-01

    Travel-time cross-hole seismic tomography is applied to describe the structure of the subsurface. The sources are placed in one borehole and the receivers in the others. The first-arrival travel time recorded by each receiver is used as the input data for the seismic tomography method. This research is divided into three steps. The first step is reconstructing a synthetic model based on the field parameters, with configurations of 24 and 45 receivers. The second step is applying the inversion process to the field data, which consist of five pairs of boreholes. The last step is testing the quality of the tomogram with a resolution test. Data processing using the FAST software produces a clear image that resembles the initial reconstruction of the synthetic model with 45 receivers. The tomographic processing of the field data indicates cavities in several places between the boreholes. Cavities are identified between BH2A-BH1, BH4A-BH2A and BH4A-BH5, with elongated and rounded structures. In resolution tests using a checkerboard, anomalies as small as 2 m x 2 m can still be identified. The travel-time cross-hole seismic tomography analysis shows that this method is very well suited to describing subsurface structure and layer boundaries. The size and position of anomalies can be recognized and interpreted easily.
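
    The first-arrival times that feed such an inversion are, in the simplest approximation, path integrals of slowness between each source-receiver pair. The following minimal sketch (not the FAST workflow used in the study) computes straight-ray travel times through a hypothetical 2-D slowness grid containing a cavity-like low-velocity anomaly; the grid size, velocities and borehole geometry are invented for illustration.

```python
import numpy as np

def straight_ray_traveltime(slowness, dx, src, rec, n_samples=200):
    """First-arrival travel time along a straight ray through a 2-D slowness grid.

    slowness : 2-D array of slowness values (s/m), indexed [iz, ix]
    dx       : grid spacing (m), assumed equal in both directions
    src, rec : (x, z) coordinates of source and receiver (m)
    """
    xs = np.linspace(src[0], rec[0], n_samples)
    zs = np.linspace(src[1], rec[1], n_samples)
    seg = np.hypot(rec[0] - src[0], rec[1] - src[1]) / (n_samples - 1)
    ix = np.clip((xs / dx).astype(int), 0, slowness.shape[1] - 1)
    iz = np.clip((zs / dx).astype(int), 0, slowness.shape[0] - 1)
    return np.sum(slowness[iz, ix]) * seg

# Hypothetical example: a 20 m x 20 m section between two boreholes with a
# low-velocity (cavity-like) anomaly embedded in a 2500 m/s background.
dx = 0.5
model = np.full((40, 40), 1.0 / 2500.0)          # background slowness
model[20:24, 18:22] = 1.0 / 800.0                # slow anomaly (e.g. a cavity)
t = straight_ray_traveltime(model, dx, src=(0.0, 10.0), rec=(19.5, 12.0))
print(f"first-arrival time: {t * 1000:.2f} ms")
```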

  17. Modelling and Simulation of Tensile Fracture in High Velocity Compacted Metal Powder

    International Nuclear Information System (INIS)

    Jonsen, P.; Haeggblad, H.-A.

    2007-01-01

    In cold uniaxial powder compaction, powder is formed into a desired shape with rigid tools and a die. After pressing, but before sintering, the compacted powder is called a green body. A critical aspect of the metal powder pressing process is the mechanical properties of the green body. Beyond a green body free from defects, the desired properties are high strength and uniform density. High velocity compaction (HVC) using a hydraulically operated hammer is a production method for forming powder utilizing a shock wave. Pre-alloyed water-atomised iron powder has been HVC-formed into circular discs with high densities. The diametral compression test, also called the Brazilian disc test, is an established method for measuring tensile strength in low-strength materials such as rock, concrete, polymers and ceramics. During the test a thin disc is compressed across the diameter to failure. The compression induces a tensile stress perpendicular to the compressed diameter. In this study the test has been used to study crack initiation and the tensile fracture process of HVC-formed metal powder discs with a relative density of 99%. A fictitious crack model controlled by a stress versus crack-width relationship is utilized to model green body cracking. Tensile strength is used as a failure condition and limits the stress in the fracture interface. The softening rate of the model is obtained from the corresponding rate of the dissipated energy. The deformation of the powder material is modelled with an elastic-plastic Cap model. The characteristics of the tensile fracture development of the central crack in a diametrically loaded specimen are numerically studied with a three-dimensional finite element simulation. Results from the finite element simulation of the diametral compression test show that it is possible to simulate fracturing of HVC-formed powder. Results from the simulation agree reasonably well with experiments.
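
    The tensile stress referred to in the abstract is usually estimated with the standard closed-form expression for a diametrically compressed disc, sigma_t = 2P/(pi*D*t). The short sketch below applies that textbook formula; the load and disc dimensions are hypothetical and are not taken from the study.

```python
import math

def brazilian_disc_tensile_stress(load_n, diameter_m, thickness_m):
    """Standard closed-form estimate of the tensile stress at the centre of a
    diametrically compressed disc: sigma_t = 2 P / (pi * D * t)."""
    return 2.0 * load_n / (math.pi * diameter_m * thickness_m)

# Hypothetical green-body disc: 25 mm diameter, 5 mm thick, failing at 1.2 kN.
sigma_t = brazilian_disc_tensile_stress(1200.0, 0.025, 0.005)
print(f"tensile strength ~ {sigma_t / 1e6:.1f} MPa")
```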

  18. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    Science.gov (United States)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

    This study presents Artificial Intelligence (AI)-based modeling of total bed material load aimed at improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for development and validation of the applied techniques. In order to assess the applied techniques in relation to traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also found that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.
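
    The k-fold test mentioned above is a generic resampling scheme: the data are split into k folds, each fold serves once as the validation set, and the error statistics are averaged. A minimal sketch of the idea is given below using scikit-learn with a random-forest regressor as a stand-in for the GEP/ANFIS models; the predictors and sediment-load series are synthetic.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Hypothetical predictors (e.g. discharge, velocity, slope) and sediment load.
X = rng.random((120, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(120)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))

print("fold R^2:", np.round(scores, 3), "mean:", round(float(np.mean(scores)), 3))
```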

  19. Critical velocity and anaerobic paddling capacity determined by different mathematical models and number of predictive trials in canoe slalom.

    Science.gov (United States)

    Messias, Leonardo H D; Ferrari, Homero G; Reis, Ivan G M; Scariot, Pedro P M; Manchado-Gobatto, Fúlvia B

    2015-03-01

    The purpose of this study was to analyze whether different combinations of trials, as well as different mathematical models, modify the aerobic and anaerobic estimates obtained from the critical velocity protocol applied to canoe slalom. Fourteen male elite slalom kayakers from the Brazilian canoe slalom team (K1) were evaluated. Athletes were submitted to four predictive trials of 150, 300, 450 and 600 meters in a lake, and the time to complete each trial was recorded. Critical velocity (CV, the aerobic parameter) and anaerobic paddling capacity (APC, the anaerobic parameter) were obtained by three mathematical models (Linear 1 = distance-time; Linear 2 = velocity-1/time; and Non-linear = time-velocity). Linear 1 was chosen for the comparison of predictive trial combinations. The standard combination (SC) was considered as the four trials (150, 300, 450 and 600 m). High regression fits were obtained from all mathematical models (R² range = 0.96-1.00). Repeated measures ANOVA pointed out differences among the mathematical models for CV (p = 0.006) and APC (p = 0.016) as well as R² (p = 0.033). Estimates obtained from the first and the fourth predictive trials (150 m = lowest; 600 m = highest) were similar to and highly correlated with the SC (r = 0.98 for CV and r = 0.96 for APC). In summary, methodological aspects must be considered in critical velocity application in canoe slalom, since different combinations of trials as well as different mathematical models resulted in different aerobic and anaerobic estimates. Key points: Great attention must be given to methodological concerns regarding the critical velocity protocol applied to canoe slalom, since different estimates were obtained depending on the mathematical model and the predictive trials used. Linear 1 showed the best regression fits; furthermore, to the best of our knowledge and considering practical applications, this model is the easiest one for calculating the estimates from the critical velocity protocol. Considering this, the abyss between science
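
    The two linear formulations named in the abstract are the conventional two-parameter critical-velocity models: Linear 1 regresses distance on time (d = APC + CV*t) and Linear 2 regresses velocity on the inverse of time (v = CV + APC/t). A minimal sketch of fitting both, assuming these standard parameterizations and using hypothetical trial times, is shown below.

```python
import numpy as np

# Hypothetical predictive trials: distances (m) and times to completion (s).
d = np.array([150.0, 300.0, 450.0, 600.0])
t = np.array([44.0, 93.0, 144.0, 196.0])
v = d / t

# Linear 1: distance = APC + CV * time  (slope = CV, intercept = APC)
cv1, apc1 = np.polyfit(t, d, 1)
# Linear 2: velocity = CV + APC * (1/time)  (slope = APC, intercept = CV)
apc2, cv2 = np.polyfit(1.0 / t, v, 1)

print(f"Linear 1: CV = {cv1:.2f} m/s, APC = {apc1:.1f} m")
print(f"Linear 2: CV = {cv2:.2f} m/s, APC = {apc2:.1f} m")
```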

  20. Application of one-dimensional model to calculate water velocity distributions over elastic elements simulating Canadian waterweed plants (Elodea Canadensis)

    Science.gov (United States)

    Kubrak, Elżbieta; Kubrak, Janusz; Rowiński, Paweł

    2013-02-01

    A one-dimensional model for vertical profiles of longitudinal velocity in open-channel flow is verified against laboratory data obtained in an open channel with artificial plants. Those plants simulate Canadian waterweed, which in nature usually forms dense stands that reach all the way to the water surface. The model works particularly well for densely spaced plants.

  1. Two-phase modeling of DDT: Structure of the velocity-relaxation zone

    International Nuclear Information System (INIS)

    Kapila, A.K.; Son, S.F.; Bdzil, J.B.; Menikoff, R.; Stewart, D.S.

    1997-01-01

    The structure of the velocity relaxation zone in a hyperbolic, nonconservative, two-phase model is examined in the limit of large drag, and in the context of the problem of deflagration-to-detonation transition in a granular explosive. The primary motivation for the study is the desire to relate the end states across the relaxation zone, which can then be treated as a discontinuity in a reduced, equivelocity model, that is computationally more efficient than its parent. In contrast to a conservative system, where end states across thin zones of rapid variation are determined principally by algebraic statements of conservation, the nonconservative character of the present system requires an explicit consideration of the structure. Starting with the minimum admissible wave speed, the structure is mapped out as the wave speed increases. Several critical wave speeds corresponding to changes in the structure are identified. The archetypal structure is partly dispersed, monotonic, and involves conventional hydrodynamic shocks in one or both phases. The picture is reminiscent of, but more complex than, what is observed in such (simpler) two-phase media as a dusty gas. copyright 1997 American Institute of Physics

  2. A fast iterative model for discrete velocity calculations on triangular grids

    International Nuclear Information System (INIS)

    Szalmas, Lajos; Valougeorgis, Dimitris

    2010-01-01

    A fast synthetic-type iterative model is proposed to speed up the slow convergence of discrete velocity algorithms for solving linear kinetic equations on triangular lattices. The efficiency of the scheme is verified both theoretically, by a discrete Fourier stability analysis, and computationally, by solving a rarefied gas flow problem. The stability analysis of the discrete kinetic equations yields the spectral radius of the typical and the proposed iterative algorithms and reveals the drastically improved performance of the latter for any grid resolution. This is the first time that a stability analysis of the full discrete kinetic equations related to rarefied gas theory has been formulated, providing the detailed dependency of the iteration scheme on the discretization parameters in the phase space. The corresponding characteristics of the model, deduced by solving numerically the rarefied gas flow through a duct with triangular cross section, are in complete agreement with the theoretical findings. The proposed approach may open a way for fast computation of rarefied gas flows in complex geometries over the whole range of gas rarefaction, including the hydrodynamic regime.
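
    As a generic illustration of the quantity at the heart of the analysis, the spectral radius of an iteration matrix can be estimated numerically by power iteration; a smaller spectral radius means faster convergence. The sketch below applies this to a toy Jacobi-type iteration matrix and is not the discrete Fourier analysis of the paper.

```python
import numpy as np

def spectral_radius(M, iters=500, seed=0):
    """Estimate the spectral radius of an iteration matrix by power iteration."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(M.shape[0])
    for _ in range(iters):
        x = M @ x
        x /= np.linalg.norm(x)
    return float(np.linalg.norm(M @ x))

# Toy Jacobi-style iteration matrix for a 1-D diffusion operator.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n) - np.diag(1.0 / np.diag(A)) @ A   # I - D^{-1} A
print(f"estimated spectral radius: {spectral_radius(M):.4f}")
```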

  3. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects of high-speed CMOS circuits for ramp inputs. Our metric formulation is based on the Burr distribution function. The Burr distribution is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is demonstrated by comparison with SPICE simulations.
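
    The Burr Type XII distribution has the closed-form CDF F(t) = 1 - (1 + (t/a)^c)^(-k), so once the normalized response is matched to it, the 50% delay and the 10-90% slew follow directly from the inverse CDF. The sketch below assumes this standard parameterization and uses hypothetical parameter values; it does not reproduce the moment matching of the paper.

```python
import numpy as np

def burr_cdf(t, a, c, k):
    """Burr Type XII CDF, F(t) = 1 - (1 + (t/a)^c)^(-k), used here to
    approximate a normalized step response."""
    return 1.0 - (1.0 + (t / a) ** c) ** (-k)

def burr_inv(p, a, c, k):
    """Inverse CDF: the time at which the response crosses level p."""
    return a * ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)

# Hypothetical parameters standing in for a moment-matched RC interconnect.
a, c, k = 1.0e-10, 2.0, 1.5   # scale (s) and two shape parameters
delay_50 = burr_inv(0.5, a, c, k)
slew_10_90 = burr_inv(0.9, a, c, k) - burr_inv(0.1, a, c, k)
print(f"50% delay ~ {delay_50 * 1e12:.1f} ps, 10-90% slew ~ {slew_10_90 * 1e12:.1f} ps")
```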

  4. A P-wave velocity model of the upper crust of the Sannio region (Southern Apennines, Italy

    Directory of Open Access Journals (Sweden)

    M. Cocco

    1998-06-01

    Full Text Available This paper describes the results of a seismic refraction profile conducted in October 1992 in the Sannio region, Southern Italy, to obtain a detailed P-wave velocity model of the upper crust. The profile, 75 km long, extended parallel to the Apenninic chain in a region frequently damaged in historical time by strong earthquakes. Six shots were fired at five sites and recorded by a number of seismic stations ranging from 41 to 71 with a spacing of 1-2 km along the recording line. We used a two-dimensional raytracing technique to model travel times and amplitudes of first and second arrivals. The obtained P-wave velocity model has a shallow structure with strong lateral variations in the southern portion of the profile. Near surface sediments of the Tertiary age are characterized by seismic velocities in the 3.0-4.1 km/s range. In the northern part of the profile these deposits overlie a layer with a velocity of 4.8 km/s that has been interpreted as a Mesozoic sedimentary succession. A high velocity body, corresponding to the limestones of the Western Carbonate Platform with a velocity of 6 km/s, characterizes the southernmost part of the profile at shallow depths. At a depth of about 4 km the model becomes laterally homogeneous showing a continuous layer with a thickness in the 3-4 km range and a velocity of 6 km/s corresponding to the Meso-Cenozoic limestone succession of the Apulia Carbonate Platform. This platform appears to be layered, as indicated by an increase in seismic velocity from 6 to 6.7 km/s at depths in the 6-8 km range, that has been interpreted as a lithological transition from limestones to Triassic dolomites and anhydrites of the Burano formation. A lower P-wave velocity of about 5.0-5.5 km/s is hypothesized at the bottom of the Apulia Platform at depths ranging from 10 km down to 12.5 km; these low velocities could be related to Permo-Triassic siliciclastic deposits of the Verrucano sequence drilled at the bottom of the Apulia
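
    For a simple two-layer structure like the shallow part of such a profile, the first arrivals follow the textbook direct-wave and head-wave travel-time expressions, t_dir = x/v1 and t_head = x/v2 + 2h*sqrt(v2^2 - v1^2)/(v1*v2). The sketch below evaluates these for a hypothetical 3.5 km/s cover over a 6.0 km/s carbonate refractor at 4 km depth, i.e. velocities of the order reported in the abstract but not the authors' ray-traced model.

```python
import numpy as np

def refraction_first_arrivals(x, v1, v2, h):
    """First-arrival times over a two-layer model (direct wave vs. head wave)."""
    t_direct = x / v1
    t_head = x / v2 + 2.0 * h * np.sqrt(v2**2 - v1**2) / (v1 * v2)
    return np.minimum(t_direct, t_head)

# Hypothetical example: 3.5 km/s cover over a 6.0 km/s refractor at 4 km depth.
x = np.linspace(0.0, 75.0, 16)                           # offsets in km
t = refraction_first_arrivals(x, v1=3.5, v2=6.0, h=4.0)  # times in s
xcross = 2.0 * 4.0 * np.sqrt((6.0 + 3.5) / (6.0 - 3.5))  # crossover distance (km)
print(np.round(t, 2))
print(f"crossover distance ~ {xcross:.1f} km")
```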

  5. A simple technique for obtaining future climate data inputs for natural resource models

    Science.gov (United States)

    Those conducting impact studies using natural resource models need to be able to quickly and easily obtain downscaled future climate data from multiple models, scenarios, and timescales for multiple locations. This paper describes a method of quickly obtaining future climate data over a wide range o...

  6. Better temperature predictions in geothermal modelling by improved quality of input parameters

    DEFF Research Database (Denmark)

    Fuchs, Sven; Bording, Thue Sylvester; Balling, N.

    2015-01-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties...
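
    A common building block for such subsurface temperature predictions is the 1-D steady-state conductive geotherm with uniform radiogenic heat production, T(z) = T0 + (q0/k)z - Az^2/(2k). The sketch below evaluates this textbook relation for hypothetical basin values; it is not the modelling workflow of the cited study.

```python
import numpy as np

def conductive_geotherm(z, t_surface, q_surface, k, heat_production):
    """1-D steady-state conductive temperature profile with uniform heat production:
    T(z) = T0 + (q0 / k) * z - A * z**2 / (2 * k)."""
    return t_surface + (q_surface / k) * z - heat_production * z**2 / (2.0 * k)

# Hypothetical sedimentary-basin values: 65 mW/m2 surface heat flow,
# 2.5 W/(m K) conductivity, 1.0 uW/m3 radiogenic heat production.
z = np.linspace(0.0, 5000.0, 6)                 # depth in metres
T = conductive_geotherm(z, t_surface=8.0, q_surface=0.065, k=2.5,
                        heat_production=1.0e-6)
print(np.round(T, 1))                           # temperatures in deg C
```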

  7. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

    Directory of Open Access Journals (Sweden)

    Muayad Al-Qaisy

    2015-02-01

    Full Text Available In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state-space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that is able to give optimal performance, reject high load disturbances, and track set-point changes. In order to study the performance of the two model predictive controllers, a MIMO proportional-integral-derivative (PID) control strategy is used as a benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). The NNMPC shows superior performance over the LMPC and PID controllers, with a smaller overshoot and shorter settling time.
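
    The core of a linear MPC step is to predict the outputs over a horizon from a discrete state-space model and pick the input sequence minimizing a quadratic tracking cost, then apply only the first move. A minimal unconstrained sketch is given below; the plant matrices are hypothetical placeholders rather than a linearized CSTR from the article.

```python
import numpy as np

def lmpc_first_move(A, B, C, x0, ref, horizon=10, r_weight=0.05):
    """Unconstrained linear MPC: build prediction matrices over the horizon,
    minimize ||ref - y_pred||^2 + r*||u||^2, and return the first control move."""
    nx, nu = B.shape
    ny = C.shape[0]
    # Free response F and forced response Phi: y_pred = F x0 + Phi u_seq
    F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(horizon)])
    Phi = np.zeros((horizon * ny, horizon * nu))
    for i in range(horizon):
        for j in range(i + 1):
            Phi[i * ny:(i + 1) * ny, j * nu:(j + 1) * nu] = \
                C @ np.linalg.matrix_power(A, i - j) @ B
    H = Phi.T @ Phi + r_weight * np.eye(horizon * nu)
    e = np.tile(ref, horizon) - F @ x0
    u_seq = np.linalg.solve(H, Phi.T @ e)
    return u_seq[:nu]

# Hypothetical 2-state, 1-input, 1-output plant (stand-in for a linearized CSTR).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.2]])
C = np.array([[1.0, 0.0]])
u0 = lmpc_first_move(A, B, C, x0=np.array([0.0, 0.0]), ref=np.array([1.0]))
print("first control move:", np.round(u0, 3))
```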

  8. Evaluating meteo marine climatic model inputs for the investigation of coastal hydrodynamics

    Science.gov (United States)

    Bellafiore, D.; Bucchignani, E.; Umgiesser, G.

    2010-09-01

    One of the major aspects discussed in recent work on climate change is how to transfer information from the global scale to the local one. The influence of sea level rise and of changes in meteorological conditions due to climate change on strategic areas such as the coastal zone is at the base of the well-known mitigation and risk assessment plans. The investigation of coastal zone hydrodynamics, from a modeling point of view, has been the meeting ground between hydraulic models and ocean models and, in terms of process studies, finite element models have demonstrated their suitability for reproducing complex coastal morphology and hydrodynamic processes at different spatial scales. In this work the connection between two different model families, the climate models and the hydrodynamic models usually implemented for process studies, is tested. Together, they can be the most suitable tool for investigating the effects of climate change on coastal systems. A finite element model, SHYFEM (Shallow water Hydrodynamic Finite Element Model), is implemented for the Adriatic Sea to investigate the effect of wind forcing datasets produced by different downscalings of global climate models in terms of surge and its coastal effects. The wind datasets are produced by the regional climate model COSMO-CLM (CIRA) and by the EBU-POM model (Belgrade University), both downscaling from ECHAM4. As a first step, the downscaled wind datasets, which have different spatial resolutions, have been analyzed for the period 1960-1990 to compare their capability to reproduce the measured wind statistics in the coastal zone in front of the Venice Lagoon. The particularity of the Adriatic Sea meteorological climate is connected with the influence of orography on the strengthening of winds such as the Bora, from the northeast. The increase in spatial resolution permits the more resolved wind dataset to better reproduce meteorology and to provide a more

  9. 3-D Velocity Model of the Coachella Valley, Southern California Based on Explosive Shots from the Salton Seismic Imaging Project

    Science.gov (United States)

    Persaud, P.; Stock, J. M.; Fuis, G. S.; Hole, J. A.; Goldman, M.; Scheirer, D. S.

    2014-12-01

    We have analyzed explosive shot data from the 2011 Salton Seismic Imaging Project (SSIP), acquired across a 2-D seismic array and 5 profiles in the Coachella Valley, to produce a 3-D P-wave velocity model that will be used in calculations of strong ground shaking. Accurate maps of seismicity and active faults rely both on detailed geological field mapping and on a suitable velocity model to accurately locate earthquakes. Adjoint tomography of an older version of the SCEC 3-D velocity model shows that crustal heterogeneities strongly influence seismic wave propagation from moderate earthquakes (Tape et al., 2010). These authors improve the crustal model and subsequently simulate the details of ground motion at periods of 2 s and longer for hundreds of ray paths. Even with improvements such as the above, the current SCEC velocity model for the Salton Trough does not provide a match of the timing or waveforms of the horizontal S-wave motions, which Wei et al. (2013) interpret as caused by inaccuracies in the shallow velocity structure. They effectively demonstrate that the inclusion of shallow basin structure improves the fit in both travel times and waveforms. Our velocity model benefits from the inclusion of the known locations and times of a subset of 126 shots detonated over a 3-week period during the SSIP. This results in an improved velocity model, particularly in the shallow crust. In addition, one of the main challenges in developing 3-D velocity models is an uneven station-source distribution. To better overcome this challenge, we also include the first-arrival times of the SSIP shots at the more widely spaced Southern California Seismic Network (SCSN) in our inversion, since the layout of the SSIP is complementary to the SCSN. References: Tape, C., et al., 2010, Seismic tomography of the Southern California crust based on spectral-element and adjoint methods: Geophysical Journal International, v. 180, no. 1, p. 433-462. Wei, S., et al., 2013, Complementary slip distributions

  10. Modeling skin temperature to assess the effect of air velocity to mitigate heat stress among growing pigs

    DEFF Research Database (Denmark)

    Bjerg, Bjarne; Pedersen, Poul; Morsing, Svend

    2017-01-01

    It is generally accepted that increased air velocity can help to mitigate heat stress in livestock housing; however, it is not fully clear how much it helps, and significant uncertainties exist when the air temperature approaches the animal body temperature. This study aims to develop a skin temperature model to generate data for determining the potential effect of air velocity to mitigate heat stress among growing pigs housed in a warm environment. The model calculates the skin temperature as a function of body temperature, air temperature and the resistances for heat transfer from the body...
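
    The abstract does not give the model equations, but the stated idea can be illustrated with a steady-state two-resistance balance in which conduction from the core equals convection to the air, and the surface resistance falls as air velocity increases. The sketch below assumes an illustrative convective correlation and hypothetical resistances; it is not the authors' model.

```python
def skin_temperature(t_body, t_air, r_tissue, air_velocity):
    """Steady-state two-resistance balance:
    (t_body - t_skin)/r_tissue = (t_skin - t_air)/r_surface.
    The surface (convective) resistance is assumed to fall with air velocity
    via an illustrative correlation h = 5 + 10*sqrt(v)  [W/(m2 K)]."""
    h = 5.0 + 10.0 * air_velocity ** 0.5     # assumed convective coefficient
    r_surface = 1.0 / h
    return (t_body * r_surface + t_air * r_tissue) / (r_tissue + r_surface)

# Hypothetical values: 39 C body temperature, 30 C air, tissue resistance 0.05 m2 K/W.
for v in (0.2, 0.5, 1.0, 2.0):
    print(f"air velocity {v:.1f} m/s -> skin ~ {skin_temperature(39.0, 30.0, 0.05, v):.1f} C")
```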

  11. Modeling spray drift and runoff-related inputs of pesticides to receiving water.

    Science.gov (United States)

    Zhang, Xuyang; Luo, Yuzhou; Goh, Kean S

    2018-03-01

    Pesticides move to surface water via various pathways, including surface runoff, spray drift and subsurface flow. Little is known about the relative contributions of surface runoff and spray drift in agricultural watersheds. This study develops a modeling framework to address the contribution of spray drift to the total loadings of pesticides in receiving water bodies. The modeling framework consists of a GIS module for identifying drift potential, the AgDRIFT model for simulating spray drift, and the Soil and Water Assessment Tool (SWAT) for simulating various hydrological and landscape processes, including surface runoff and transport of pesticides. The modeling framework was applied to the Orestimba Creek Watershed, California. Monitoring data collected from daily samples were used for model evaluation. Pesticide mass deposition on Orestimba Creek ranged from 0.08 to 6.09% of the applied mass. Monitoring data suggest that surface runoff was the major pathway for pesticides entering water bodies, accounting for 76% of the annual loading; the remaining 24% came from spray drift. The results from the modeling framework showed 81 and 19%, respectively, for runoff and spray drift. Spray drift contributed over half of the mass loading during summer months. The slightly lower spray drift contribution predicted by the modeling framework was mainly due to SWAT's under-prediction of pesticide mass loading during summer and over-prediction of the loading during winter. Although the model simulations were associated with various sources of uncertainty, the overall performance of the modeling framework was satisfactory as evaluated by multiple statistics: for simulation of daily flow, the Nash-Sutcliffe Efficiency Coefficient (NSE) ranged from 0.61 to 0.74 and the percent bias (PBIAS) runoff in receiving waters and the design of management practices for mitigating pesticide exposure within a watershed. Published by Elsevier Ltd.
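
    The evaluation statistics quoted above have standard definitions: NSE = 1 - sum((obs - sim)^2)/sum((obs - mean(obs))^2) and PBIAS = 100*sum(obs - sim)/sum(obs). The sketch below computes both for a short hypothetical daily-flow series.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """PBIAS = 100 * sum(obs - sim) / sum(obs); positive values indicate under-prediction."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Hypothetical daily flows (m3/s): observed vs. simulated.
obs = [1.2, 3.4, 2.8, 0.9, 5.1, 4.0, 2.2]
sim = [1.0, 3.0, 3.1, 1.1, 4.2, 4.4, 2.0]
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}, PBIAS = {percent_bias(obs, sim):.1f}%")
```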

  12. Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks

    Czech Academy of Sciences Publication Activity Database

    Kainen, P.C.; Kůrková, Věra; Sanguineti, M.

    2012-01-01

    Roč. 58, č. 2 (2012), s. 1203-1214 ISSN 0018-9448 R&D Projects: GA MŠk(CZ) ME10023; GA ČR GA201/08/1744; GA ČR GAP202/11/1368 Grant - others:CNR-AV ČR(CZ-IT) Project 2010–2012 Complexity of Neural -Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : dictionary-based computational models * high-dimensional approximation and optimization * model complexity * polynomial upper bounds Subject RIV: IN - Informatics, Computer Science Impact factor: 2.621, year: 2012

  13. Analytical model for advective-dispersive transport involving flexible boundary inputs, initial distributions and zero-order productions

    Science.gov (United States)

    Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping

    2017-11-01

    A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
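
    One of the classic special cases covered by such models, a constant (Heaviside) inlet concentration applied to an initially clean semi-infinite domain with no zero-order production, has the well-known closed form often attributed to Ogata and Banks. The sketch below evaluates that single special case with hypothetical transport parameters; it is not the generalized GITT solution of the paper.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0):
    """Classic solution for 1-D advection-dispersion with a constant inlet
    concentration c0 at x = 0 and zero initial concentration:
    c = c0/2 * [erfc((x - v t)/(2 sqrt(D t)))
                + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]."""
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / s)
                       + np.exp(v * x / D) * erfc((x + v * t) / s))

# Hypothetical parameters: v = 0.5 m/d, D = 0.1 m2/d, c0 = 1.0, after 20 days.
x = np.linspace(0.0, 20.0, 9)
print(np.round(ogata_banks(x, t=20.0, v=0.5, D=0.1, c0=1.0), 3))
```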

  14. Effective property determination for input to a geostatistical model of regional groundwater flow: Wellenberg T→K

    International Nuclear Information System (INIS)

    Lanyon, G.W.; Marschall, P.; Vomvoris, S.; Jaquet, O.; Mazurek, M.

    1998-01-01

    This paper describes the methodology used to estimate effective hydraulic properties for input into a regional geostatistical model of groundwater flow at the Wellenberg site in Switzerland. The methodology uses a geologically-based discrete fracture network model to calculate effective hydraulic properties for 100m blocks along each borehole. A description of the most transmissive features (Water Conducting Features or WCFs) in each borehole is used to determine local transmissivity distributions which are combined with descriptions of WCF extent, orientation and channelling to create fracture network models. WCF geometry is dependent on the class of WCF. WCF classes are defined for each type of geological structure associated with identified borehole inflows. Local to each borehole, models are conditioned on the observed transmissivity and occurrence of WCFs. Multiple realisations are calculated for each 100m block over approximately 400m of borehole. The results from the numerical upscaling are compared with conservative estimates of hydraulic conductivity. Results from unconditioned models are also compared to identify the consequences of conditioning and interval of boreholes that appear to be atypical. An inverse method is also described by which realisations of the geostatistical model can be used to condition discrete fracture network models away from the boreholes. The method can be used as a verification of the modelling approach by prediction of data at borehole locations. Applications of the models to estimation of post-closure repository performance, including cavern inflow and seal zone modelling, are illustrated

  15. Development of a General Form CO2 and Brine Flux Input Model

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.

  16. Sensitivity of modeled estuarine circulation to spatial and temporal resolution of input meteorological forcing of a cold frontal passage

    Science.gov (United States)

    Weaver, Robert J.; Taeb, Peyman; Lazarus, Steven; Splitt, Michael; Holman, Bryan P.; Colvin, Jeffrey

    2016-12-01

    In this study, a four-member ensemble of meteorological forcing is generated using the Weather Research and Forecasting (WRF) model in order to simulate a frontal passage event that impacted the Indian River Lagoon (IRL) during March 2015. The WRF model is run to provide input wind and pressure fields at high and low spatial (0.005° and 0.1°) and temporal (30 min and 6 h) resolution. The four-member ensemble is used to force the Advanced Circulation model (ADCIRC) coupled with Simulating Waves Nearshore (SWAN) and compute the hydrodynamic and wave response. Results indicate that increasing the spatial resolution of the meteorological forcing has a greater impact on the results than increasing the temporal resolution in coastal systems like the IRL, where the length scales are smaller than the resolution of the operational meteorological model being used to generate the forecast. Changes in predicted water elevations are due in part to the upwind and downwind behavior of the input wind forcing. The significant wave height is more sensitive to the meteorological forcing, exhibited by greater ensemble spread throughout the simulation. It is important that the land mask, seen by the meteorological model, is representative of the geography of the coastal estuary as resolved by the hydrodynamic model. As long as the temporal resolution of the wind field captures the bulk characteristics of the frontal passage, computational resources should be focused so as to ensure that the meteorological model resolves the spatial complexities, such as the land-water interface, that drive the dynamic downscaling of the winds.

  17. Industrial and ecological cumulative exergy consumption of the United States via the 1997 input-output benchmark model

    International Nuclear Information System (INIS)

    Ukidwe, Nandan U.; Bakshi, Bhavik R.

    2007-01-01

    This paper develops a thermodynamic input-output (TIO) model of the 1997 United States economy that accounts for the flow of cumulative exergy in the 488-sector benchmark economic input-output model in two different ways. Industrial cumulative exergy consumption (ICEC) captures the exergy of all natural resources consumed directly and indirectly by each economic sector, while ecological cumulative exergy consumption (ECEC) also accounts for the exergy consumed in ecological systems for producing each natural resource. Information about exergy consumed in nature is obtained from the thermodynamics of biogeochemical cycles. As used in this work, ECEC is analogous to the concept of emergy, but does not rely on any of its controversial claims. The TIO model can also account for emissions from each sector and their impact and the role of labor. The use of consistent exergetic units permits the combination of various streams to define aggregate metrics that may provide insight into aspects related to the impact of economic sectors on the environment. Accounting for the contribution of natural capital by ECEC has been claimed to permit better representation of the quality of ecosystem goods and services than ICEC. The results of this work are expected to permit evaluation of these claims. If validated, this work is expected to lay the foundation for thermodynamic life cycle assessment, particularly of emerging technologies and with limited information
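
    The bookkeeping behind cumulative exergy accounting of this kind can be illustrated with the standard Leontief formulation: with a direct-requirements matrix A and a row vector r of exergy entering each sector directly (from concentrated natural resources for ICEC, or including ecosystem work for ECEC), the cumulative intensities satisfy eps = r + eps*A, i.e. eps = r*(I - A)^(-1). The sketch below uses a toy three-sector economy, not the 488-sector benchmark model.

```python
import numpy as np

# Toy 3-sector direct-requirements matrix A (column j = inputs per unit output of j).
A = np.array([
    [0.10, 0.20, 0.05],
    [0.05, 0.10, 0.30],
    [0.02, 0.15, 0.10],
])
# Direct exergy inputs from nature per unit output of each sector (e.g. MJ per $).
r = np.array([50.0, 5.0, 1.0])

# Cumulative exergy intensities satisfy eps = r + eps @ A  =>  eps = r @ inv(I - A).
eps = r @ np.linalg.inv(np.eye(3) - A)

# Cumulative exergy consumption embodied in a final-demand vector y.
y = np.array([10.0, 20.0, 30.0])
print("intensities:", np.round(eps, 2))
print("embodied exergy in final demand:", round(float(eps @ y), 1))
```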

  18. Developing regionalized models of lithospheric thickness and velocity structure across Eurasia and the Middle East from jointly inverting P-wave and S-wave receiver functions with Rayleigh wave group and phase velocities

    Energy Technology Data Exchange (ETDEWEB)

    Julia, J; Nyblade, A; Hansen, S; Rodgers, A; Matzel, E

    2009-07-06

    In this project, we are developing models of lithospheric structure for a wide variety of tectonic regions throughout Eurasia and the Middle East by regionalizing 1D velocity models obtained by jointly inverting P-wave and S-wave receiver functions with Rayleigh wave group and phase velocities. We expect the regionalized velocity models will improve our ability to predict travel times for local and regional phases, such as Pg, Pn, Sn and Lg, as well as travel times for body waves at upper mantle triplication distances in both seismic and aseismic regions of Eurasia and the Middle East. We anticipate the models will help inform and strengthen ongoing and future efforts within the NNSA labs to develop 3D velocity models for Eurasia and the Middle East, and will assist in obtaining model-based predictions where no empirical data are available and in improving locations from sparse networks using kriging. The codes needed to conduct the joint inversion of P-wave receiver functions (PRFs), S-wave receiver functions (SRFs), and dispersion velocities have already been assembled as part of ongoing research on lithospheric structure in Africa. The methodology has been tested with synthetic 'data', and case studies have been investigated with data collected at open broadband stations in South Africa. PRFs constrain the size and S-P travel time of seismic discontinuities in the crust and uppermost mantle, SRFs constrain the size and P-S travel time of the lithosphere-asthenosphere boundary, and dispersion velocities constrain average S-wave velocity within frequency-dependent depth ranges. Preliminary results show that the combination yields integrated 1D velocity models local to the recording station, where the discontinuities constrained by the receiver functions are superimposed on a background velocity model constrained by the dispersion velocities. In our first year of this project we will (i) generate 1D velocity models for open broadband seismic stations

  19. Comparison of squashing and self-consistent input-output models of quantum feedback

    Science.gov (United States)

    Peřinová, V.; Lukš, A.; Křepelka, J.

    2018-03-01

    The paper (Yanagisawa and Hope, 2010) opens with two ways of analyzing a measurement-based quantum feedback. The scheme of the feedback includes, along with the homodyne detector, a modulator and a beamsplitter, which does not enable one to extract the nonclassical field. In the present scheme, the beamsplitter is replaced by the quantum noise evader, which makes it possible to extract the nonclassical field. We re-approach the comparison of the two models related to the same scheme. The first one admits that, in the feedback loop, unusual commutation relations hold between the photon annihilation and creation operators; as a consequence, squashing of the light occurs in the feedback loop. In the second one, the description arrives at the feedback loop via unitary transformations. It is, however, obvious that the unitary transformation which describes the modulator changes even the annihilation operator of the mode that passes by the modulator, which is not natural. The first model could be called the "squashing model" and the second one the "self-consistent model". Although the predictions of the two models differ only a little and both ways of analysis have their advantages, they also have their drawbacks, and further investigation is possible.

  20. Modeling microstructure of incudostapedial joint and the effect on cochlear input

    Science.gov (United States)

    Gan, Rong Z.; Wang, Xuelin

    2015-12-01

    The incudostapedial joint (I