WorldWideScience

Sample records for modeling evaluation number

  1. Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models

    Directory of Open Access Journals (Sweden)

    Bao Sheng Loe

    2018-04-01

    Full Text Available This study investigates the item properties of a newly developed Automatic Number Series Item Generator (ANSIG). The foundation of the ANSIG is based on five hypothesised cognitive operators. Thirteen item models were developed using the numGen R package and eleven were evaluated in this study. The 16-item ICAR (International Cognitive Ability Resource) short form ability test was used to evaluate construct validity. The Rasch Model and two Linear Logistic Test Models (LLTM) were employed to estimate and predict the item parameters. Results indicate that a single factor determines the performance on tests composed of items generated by the ANSIG. Under the LLTM approach, all the cognitive operators were significant predictors of item difficulty. Moderate to high correlations were evident between the number series items and the ICAR test scores, with a high correlation found for the ICAR Letter-Numeric-Series type items, suggesting adequate nomothetic span. Extended cognitive research is, nevertheless, essential for the automatic generation of an item pool with predictable psychometric properties.
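
    As a rough illustration of the LLTM idea described above, the sketch below decomposes Rasch item difficulties into cognitive-operator weights by least squares. The Q-matrix and difficulty values are invented for illustration; they are not the actual ANSIG item models or estimates.

```python
# Sketch of the LLTM decomposition: item difficulty b = Q @ eta, where Q marks
# which of the five hypothesised cognitive operators each item requires. The
# Q-matrix and Rasch difficulties below are invented, not the ANSIG estimates.
import numpy as np

Q = np.array([[1, 0, 0, 0, 0],     # rows: items, columns: cognitive operators
              [1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1],
              [1, 0, 1, 0, 1]], dtype=float)

b_rasch = np.array([-1.2, -0.4, 0.1, 0.6, 1.3, 0.9])   # hypothetical difficulties

eta, *_ = np.linalg.lstsq(Q, b_rasch, rcond=None)      # operator weights
b_pred = Q @ eta                                       # LLTM-predicted difficulties
ss_res = np.sum((b_rasch - b_pred) ** 2)
ss_tot = np.sum((b_rasch - b_rasch.mean()) ** 2)
print("weights:", eta.round(2), " R^2:", round(1 - ss_res / ss_tot, 3))
```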

  2. Fermion number in supersymmetric models

    International Nuclear Information System (INIS)

    Mainland, G.B.; Tanaka, K.

    1975-01-01

    The two known methods for introducing a conserved fermion number into supersymmetric models are discussed. While the introduction of a conserved fermion number often requires that the Lagrangian be massless or that bosons carry fermion number, a model is discussed in which masses can be introduced via spontaneous symmetry breaking and fermion number is conserved at all stages without assigning fermion number to bosons. (U.S.)

  3. From Concurrency Models to Numbers

    DEFF Research Database (Denmark)

    Hermanns, Holger; Zhang, Lijun

    2011-01-01

    Discrete-state Markov processes are very common models used for performance and dependability evaluation of, for example, distributed information and communication systems. Over the last fifteen years, compositional model construction and model checking algorithms have been studied for these proc...

  4. Evaluation and modelling of the size fractionated aerosol particle number concentration measurements nearby a major road in Helsinki - Part I: Modelling results within the LIPIKA project

    Science.gov (United States)

    Pohjola, M. A.; Pirjola, L.; Karppinen, A.; Härkönen, J.; Korhonen, H.; Hussein, T.; Ketzel, M.; Kukkonen, J.

    2007-08-01

    A field measurement campaign was conducted near a major road "Itäväylä" in an urban area in Helsinki on 17-20 February 2003. Aerosol measurements were conducted using a mobile laboratory "Sniffer" at various distances from the road, and at an urban background location. Measurements included particle size distribution in the size range of 7 nm-10 μm (aerodynamic diameter) by the Electrical Low Pressure Impactor (ELPI) and in the size range of 3-50 nm (mobility diameter) by a Scanning Mobility Particle Sizer (SMPS), total number concentration of particles larger than 3 nm detected by an ultrafine condensation particle counter (UCPC), temperature, relative humidity, wind speed and direction, driving route of the mobile laboratory, and traffic density on the studied road. In this study, we have compared measured concentration data with the predictions of the road network dispersion model CAR-FMI used in combination with the aerosol process model MONO32. For model comparison purposes, one of the cases was additionally computed using the aerosol process model UHMA, combined with the CAR-FMI model. The vehicular exhaust emissions, and the atmospheric dispersion and transformation of fine and ultrafine particles, were evaluated within a distance scale of 200 m (corresponding to a time scale of a couple of minutes). We computed the temporal evolution of the number concentrations, size distributions and chemical compositions of various particle size classes. The atmospheric dilution rate of particles was obtained from the roadside dispersion model CAR-FMI. Considering the evolution of total number concentration, dilution was shown to be the most important process. The influence of coagulation and condensation on the number concentrations of particle size modes was found to be negligible on this distance scale. Condensation was found to affect the evolution of particle diameter in the two smallest particle modes. The assumed value of the concentration of condensable organic
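
    For intuition about why dilution dominates the evolution of total number concentration on this distance scale, here is a generic power-law dilution sketch. This is not the CAR-FMI model; the reference value, background level and dilution exponent are invented for illustration.

```python
# A generic power-law dilution sketch for total particle number concentration
# downwind of a road. This is NOT the CAR-FMI model; the reference value,
# background level and dilution exponent are invented for illustration.
import numpy as np

C0, x0 = 2.0e5, 10.0    # concentration (cm^-3) at reference distance x0 (m)
C_bg = 1.0e4            # urban background concentration (cm^-3)
alpha = 0.8             # empirical dilution exponent (hypothetical)

def total_number_concentration(x: float) -> float:
    """Total particle number concentration (cm^-3) at distance x (m)."""
    return C_bg + (C0 - C_bg) * (x / x0) ** (-alpha)

for x in (10, 50, 100, 200):
    print(f"{x:4d} m: {total_number_concentration(x):9.0f} cm^-3")
```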

  5. Evaluation and modelling of the size fractionated aerosol particle number concentration measurements nearby a major road in Helsinki ─ Part I: Modelling results within the LIPIKA project

    Directory of Open Access Journals (Sweden)

    M. Ketzel

    2007-08-01

    Full Text Available A field measurement campaign was conducted near a major road "Itäväylä" in an urban area in Helsinki on 17–20 February 2003. Aerosol measurements were conducted using a mobile laboratory "Sniffer" at various distances from the road, and at an urban background location. Measurements included particle size distribution in the size range of 7 nm–10 μm (aerodynamic diameter) by the Electrical Low Pressure Impactor (ELPI) and in the size range of 3–50 nm (mobility diameter) by a Scanning Mobility Particle Sizer (SMPS), total number concentration of particles larger than 3 nm detected by an ultrafine condensation particle counter (UCPC), temperature, relative humidity, wind speed and direction, driving route of the mobile laboratory, and traffic density on the studied road. In this study, we have compared measured concentration data with the predictions of the road network dispersion model CAR-FMI used in combination with the aerosol process model MONO32. For model comparison purposes, one of the cases was additionally computed using the aerosol process model UHMA, combined with the CAR-FMI model. The vehicular exhaust emissions, and the atmospheric dispersion and transformation of fine and ultrafine particles, were evaluated within a distance scale of 200 m (corresponding to a time scale of a couple of minutes). We computed the temporal evolution of the number concentrations, size distributions and chemical compositions of various particle size classes. The atmospheric dilution rate of particles was obtained from the roadside dispersion model CAR-FMI. Considering the evolution of total number concentration, dilution was shown to be the most important process. The influence of coagulation and condensation on the number concentrations of particle size modes was found to be negligible on this distance scale. Condensation was found to affect the evolution of particle diameter in the two smallest particle modes. The assumed value of the concentration of

  6. Evaluating the number of stages in development of squamous cell and adenocarcinomas across cancer sites using human population-based cancer modeling.

    Directory of Open Access Journals (Sweden)

    Julia Kravchenko

    Full Text Available BACKGROUND: Adenocarcinomas (ACs) and squamous cell carcinomas (SCCs) differ by clinical and molecular characteristics. We evaluated the characteristics of carcinogenesis by modeling the age patterns of incidence rates of ACs and SCCs of various organs to test whether these characteristics differed between cancer subtypes. METHODOLOGY/PRINCIPAL FINDINGS: Histotype-specific incidence rates of 14 ACs and 12 SCCs from the SEER Registry (1973-2003) were analyzed by fitting several biologically motivated models to observed age patterns. A frailty model with the Weibull baseline was applied to each age pattern to provide the best fit for the majority of cancers. For each cancer, model parameters describing the underlying mechanisms of carcinogenesis, including the number of stages occurring during an individual's life and leading to cancer (m-stages), were estimated. For sensitivity analysis, the age-period-cohort model was incorporated into the carcinogenesis model to test the stability of the estimates. For the majority of studied cancers, the numbers of m-stages were similar within each group (i.e., AC and SCC). When cancers of the same organs were compared (i.e., lung, esophagus, and cervix uteri), the number of m-stages was more strongly associated with the AC/SCC subtype than with the organ: 9.79±0.09, 9.93±0.19 and 8.80±0.10 for lung, esophagus, and cervical ACs, compared to 11.41±0.10, 12.86±0.34 and 12.01±0.51 for SCCs of the respective organs (p<0.05 between subtypes). Most SCCs had more than ten m-stages while ACs had fewer than ten m-stages. The sensitivity analyses of the model parameters demonstrated the stability of the obtained estimates. CONCLUSIONS/SIGNIFICANCE: A model containing parameters capable of representing the number of stages of cancer development occurring during an individual's life was applied to large population data on the incidence of ACs and SCCs. The model revealed that the number of m-stages differed by cancer subtype, being more strongly associated with the AC/SCC histotype than with the organ/site.

  7. Evaluating the number of stages in development of squamous cell and adenocarcinomas across cancer sites using human population-based cancer modeling.

    Science.gov (United States)

    Kravchenko, Julia; Akushevich, Igor; Abernethy, Amy P; Lyerly, H Kim

    2012-01-01

    Adenocarcinomas (ACs) and squamous cell carcinomas (SCCs) differ by clinical and molecular characteristics. We evaluated the characteristics of carcinogenesis by modeling the age patterns of incidence rates of ACs and SCCs of various organs to test whether these characteristics differed between cancer subtypes. Histotype-specific incidence rates of 14 ACs and 12 SCCs from the SEER Registry (1973-2003) were analyzed by fitting several biologically motivated models to observed age patterns. A frailty model with the Weibull baseline was applied to each age pattern to provide the best fit for the majority of cancers. For each cancer, model parameters describing the underlying mechanisms of carcinogenesis, including the number of stages occurring during an individual's life and leading to cancer (m-stages), were estimated. For sensitivity analysis, the age-period-cohort model was incorporated into the carcinogenesis model to test the stability of the estimates. For the majority of studied cancers, the numbers of m-stages were similar within each group (i.e., AC and SCC). When cancers of the same organs were compared (i.e., lung, esophagus, and cervix uteri), the number of m-stages was more strongly associated with the AC/SCC subtype than with the organ: 9.79±0.09, 9.93±0.19 and 8.80±0.10 for lung, esophagus, and cervical ACs, compared to 11.41±0.10, 12.86±0.34 and 12.01±0.51 for SCCs of the respective organs (p<0.05 between subtypes). Most SCCs had more than ten m-stages while ACs had fewer than ten m-stages. The sensitivity analyses of the model parameters demonstrated the stability of the obtained estimates. A model containing parameters capable of representing the number of stages of cancer development occurring during an individual's life was applied to the large population data on incidence of ACs and SCCs. The model revealed that the number of m-stages differed by cancer subtype, being more strongly associated with the AC/SCC histotype than with the organ/site.
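
    The paper estimates the number of m-stages with a Weibull-baseline frailty model; as a much cruder classical approximation of the same quantity, the Armitage-Doll multistage model predicts incidence proportional to age^(m-1), so the log-log slope of incidence against age estimates m - 1. A sketch with synthetic incidence data:

```python
# Classical Armitage-Doll approximation: incidence ~ age^(m-1), so the
# log-log slope of incidence against age estimates m - 1. The incidence
# values below are synthetic, not SEER data.
import numpy as np

age = np.array([40.0, 45.0, 50.0, 55.0, 60.0, 65.0, 70.0, 75.0])
m_true = 10.0
rate = 1e-14 * age ** (m_true - 1.0)       # synthetic age-specific incidence

slope, _ = np.polyfit(np.log(age), np.log(rate), 1)
print(f"estimated number of stages: m ~ {slope + 1.0:.1f}")   # ~10
```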

  8. Models for Rational Number Bases

    Science.gov (United States)

    Pedersen, Jean J.; Armbruster, Frank O.

    1975-01-01

    This article extends number bases to negative integers, then to positive rationals and finally to negative rationals. Methods and rules for operations in positive and negative rational bases greater than one or less than negative one are summarized in tables. Sample problems are explained and illustrated. (KM)
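
    For a concrete taste of arithmetic in a negative integer base, one of the systems the article extends to rationals, the sketch below converts integers to base -10 with the standard non-negative-remainder division algorithm; the function name is ours.

```python
# Converting an integer to a negative base with the non-negative-remainder
# division algorithm; in base -10 every integer, positive or negative, gets
# a representation without a sign.
def to_base(n: int, base: int) -> str:
    assert base <= -2
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:          # force a digit in 0..|base|-1
            r -= base
            n += 1
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_base(12, -10))    # '192': 1*100 + 9*(-10) + 2 = 12
print(to_base(-15, -10))   # '25':  2*(-10) + 5 = -15
```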

  9. Evaluation of location and number of aid post for sustainable humanitarian relief using agent based modeling (ABM) and geographic information system (GIS)

    Science.gov (United States)

    Khair, Fauzi; Sopha, Bertha Maya

    2017-12-01

    One of the crucial phases in disaster management is the response, or emergency response, phase. It requires a sustainable and well-integrated management system. Any error in the system during this phase leads to a significant increase in the number of victims as well as in the material damage caused. Policies on the location of aid posts are therefore important decisions. Experience shows that many failures in providing assistance to refugees stem from inadequate preparation and poor determination of the facilities and locations of aid posts. This study therefore aims to evaluate the number and location of aid posts for the 2010 Merapi eruption. It integrates Agent Based Modeling (ABM) and a Geographic Information System (GIS) to evaluate the number and location of aid posts under several scenarios. The ABM approach describes the behaviour of the agents (refugees and volunteers) in the event of a disaster, each with their own characteristics, while the GIS spatial data describe the real road network of the Sleman regency. The simulation results show that alternative scenarios combining the DERU UGM post, Maguwoharjo Stadium, the Tagana post and the Pakem main post handle and distribute aid to the evacuation barracks better than the initial scenario, leaving fewer unmet demands.

  10. DIAGNOSTIC EVALUATION OF NUMERICAL AIR QUALITY MODELS WITH SPECIALIZED AMBIENT OBSERVATIONS: TESTING THE COMMUNITY MULTISCALE AIR QUALITY MODELING SYSTEM (CMAQ) AT SELECTED SOS 95 GROUND SITES

    Science.gov (United States)

    Three probes for diagnosing photochemical dynamics are presented and applied to specialized ambient surface-level observations and to a numerical photochemical model to better understand rates of production and other process information in the atmosphere and in the model. Howeve...

  11. Reproduction numbers of infectious disease models

    Directory of Open Access Journals (Sweden)

    Pauline van den Driessche

    2017-08-01

    Full Text Available This primer article focuses on the basic reproduction number, ℛ0, for infectious diseases, and other reproduction numbers related to ℛ0 that are useful in guiding control strategies. Beginning with a simple population model, the concept is developed for a threshold value of ℛ0 determining whether or not the disease dies out. The next generation matrix method of calculating ℛ0 in a compartmental model is described and illustrated. To address control strategies, type and target reproduction numbers are defined, as well as sensitivity and elasticity indices. These theoretical ideas are then applied to models that are formulated for West Nile virus in birds (a vector-borne disease, cholera in humans (a disease with two transmission pathways, anthrax in animals (a disease that can be spread by dead carcasses and spores, and Zika in humans (spread by mosquitoes and sexual contacts. Some parameter values from literature data are used to illustrate the results. Finally, references for other ways to calculate ℛ0 are given. These are useful for more complicated models that, for example, take account of variations in environmental fluctuation or stochasticity. Keywords: Basic reproduction number, Disease control, West Nile virus, Cholera, Anthrax, Zika virus
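
    As a minimal worked example of the next generation matrix method described above, the sketch below computes ℛ0 for a standard SEIR model (not one of the specific disease models in the article), with hypothetical rate values.

```python
# Next-generation-matrix computation of R0 for a standard SEIR model. F holds
# new-infection rates and V the transitions between the infected compartments
# E and I, both linearized at the disease-free equilibrium; rates hypothetical.
import numpy as np

beta, sigma, gamma = 0.5, 1 / 5, 1 / 7

F = np.array([[0.0, beta],       # infections caused by I appear in E
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],      # outflow from E
              [-sigma, gamma]])  # progression E -> I and recovery from I

K = F @ np.linalg.inv(V)         # next generation matrix
R0 = max(abs(np.linalg.eigvals(K)))
print(f"R0 = {R0:.3f} (analytically beta/gamma = {beta / gamma:.3f})")
```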

  12. Stochastic modeling of sunshine number data

    Energy Technology Data Exchange (ETDEWEB)

    Brabec, Marek, E-mail: mbrabec@cs.cas.cz [Department of Nonlinear Modeling, Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodarenskou vezi 2, 182 07 Prague 8 (Czech Republic); Paulescu, Marius [Physics Department, West University of Timisoara, V. Parvan 4, 300223 Timisoara (Romania); Badescu, Viorel [Candida Oancea Institute, Polytechnic University of Bucharest, Spl. Independentei 313, 060042 Bucharest (Romania)

    2013-11-13

    In this paper, we will present a unified statistical modeling framework for estimation and forecasting of sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and since then it has been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has been a challenging problem, however. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects, and it also enables us to use formalized statistical inference theory to obtain not only point estimates of the transition probabilities and functions of them, but also the related uncertainties, as well as to test various hypotheses of practical interest. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using the generalized additive model (GAM) approach, we can fit and compare models of various complexity which insist on keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and a general approach for identification of its parameters, we will illustrate its use and performance on high resolution SSN data from the Solar
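
    A minimal sketch of the core estimation step, with synthetic data standing in for the real SSN series: the transition probabilities of a two-state Markov chain are recovered by logistic regression on the lagged state. Extra covariate columns (e.g. solar elevation) would extend this to the non-homogeneous case the abstract describes.

```python
# Estimate the transition probabilities of a binary Markov chain by logistic
# regression on the lagged state. The series is synthetic; the invented
# transition probabilities stand in for real SSN data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p = {0: 0.2, 1: 0.9}              # hypothetical P(sunny now | previous state)
s = [0]
for _ in range(5000):
    s.append(int(rng.random() < p[s[-1]]))
s = np.asarray(s)

X = s[:-1].reshape(-1, 1)         # covariate: previous state
y = s[1:]                         # response: current state

model = LogisticRegression().fit(X, y)
probs = model.predict_proba([[0], [1]])[:, 1]
print(f"P(1|0) = {probs[0]:.3f}, P(1|1) = {probs[1]:.3f}")   # near 0.2 and 0.9
```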

  13. Stochastic modeling of sunshine number data

    Science.gov (United States)

    Brabec, Marek; Paulescu, Marius; Badescu, Viorel

    2013-11-01

    In this paper, we will present a unified statistical modeling framework for estimation and forecasting of sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and since then it has been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has been a challenging problem, however. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects, and it also enables us to use formalized statistical inference theory to obtain not only point estimates of the transition probabilities and functions of them, but also the related uncertainties, as well as to test various hypotheses of practical interest. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using the generalized additive model (GAM) approach, we can fit and compare models of various complexity which insist on keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and a general approach for identification of its parameters, we will illustrate its use and performance on high resolution SSN data from the Solar

  14. Stochastic modeling of sunshine number data

    International Nuclear Information System (INIS)

    Brabec, Marek; Paulescu, Marius; Badescu, Viorel

    2013-01-01

    In this paper, we will present a unified statistical modeling framework for estimation and forecasting of sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and since then it has been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has been a challenging problem, however. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We will show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects, and it also enables us to use formalized statistical inference theory to obtain not only point estimates of the transition probabilities and functions of them, but also the related uncertainties, as well as to test various hypotheses of practical interest. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using the generalized additive model (GAM) approach, we can fit and compare models of various complexity which insist on keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and a general approach for identification of its parameters, we will illustrate its use and performance on high resolution SSN data from the Solar

  15. Evaluation models and evaluation use

    Science.gov (United States)

    Contandriopoulos, Damien; Brousselle, Astrid

    2012-01-01

    The use of evaluation results is at the core of evaluation theory and practice. Major debates in the field have emphasized the importance of both the evaluator’s role and the evaluation process itself in fostering evaluation use. A recent systematic review of interventions aimed at influencing policy-making or organizational behavior through knowledge exchange offers a new perspective on evaluation use. We propose here a framework for better understanding the embedded relations between evaluation context, choice of an evaluation model and use of results. The article argues that the evaluation context presents conditions that affect both the appropriateness of the evaluation model implemented and the use of results. PMID:23526460

  16. Nowcasting sunshine number using logistic modeling

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Badescu, V.; Paulescu, M.

    2013-01-01

    Roč. 120, č. 1-2 (2013), s. 61-71 ISSN 0177-7971 R&D Projects: GA MŠk LD12009 Grant - others:European Cooperation in Science and Technology(XE) COST ES1002 Institutional research plan: CEZ:AV0Z1030915 Keywords : logistic regression * Markov model * sunshine number Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.245, year: 2013

  17. Development and experimental evaluation of models for low capillary number two-phase flows in rough walled fractures relevant to natural gradient conditions

    International Nuclear Information System (INIS)

    Glass, R.J.; Yarrington, L.; Nicholl, M.J.

    1997-09-01

    The major results from SNL's Conceptual Model Development and Validation Task (WBS 1.2.5.4.6), as developed through exploration of small scale processes, were synthesized in Glass et al. to give guidance to Performance Assessment on improving conceptual models for isothermal flow in unsaturated, fractured rock. There, pressure-saturation and relative permeability curves for single fractures were proposed to be a function of both fracture orientation within the gravity field and initial conditions. We refer the reader to Glass et al. for a discussion of the implications of this behavior for Performance Assessment. The scientific research we report here substantiates this proposed behavior. We address the modeling of phase structure within fractures under natural gradient conditions relevant to unsaturated flow through fractures. This phase structure underlies the calculation of effective properties for individual fractures, and hence fracture networks, as required for Performance Assessment. Standard Percolation (SP) and Invasion Percolation (IP) approaches have recently been proposed to model the underlying phase saturation structures within individual fractures during conditions of two-phase flow. Subsequent analysis of these structures yields effective two-phase pressure-saturation and relative permeability relations for the fracture. However, both of these approaches yield structures that are at odds with the physical reality we see in experiments, and thus effective properties calculated from these structures are in error. Here we develop and evaluate a Modified Invasion Percolation (MIP) approach to better model quasi-static immiscible displacement in fractures. The effects of gravity, contact angle, local aperture field geometry, and local in-plane interfacial curvature between phases are included in the calculation of invasion pressure for individual sites in a discretized aperture field.
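
    For orientation, the sketch below implements the standard Invasion Percolation baseline that the report modifies; the MIP additions (gravity, contact angle, in-plane interfacial curvature) are omitted, and the lognormal aperture field is invented.

```python
# Bare-bones standard Invasion Percolation on a 2-D aperture field: the
# invading phase always enters the accessible site with the lowest capillary
# entry pressure (taken as ~1/aperture).
import heapq
import numpy as np

rng = np.random.default_rng(1)
n = 50
aperture = rng.lognormal(mean=0.0, sigma=0.5, size=(n, n))
entry_p = 1.0 / aperture                      # capillary entry pressure

invaded = np.zeros((n, n), dtype=bool)
frontier = [(entry_p[i, 0], i, 0) for i in range(n)]   # invade from left edge
heapq.heapify(frontier)

while frontier:
    _, i, j = heapq.heappop(frontier)
    if invaded[i, j]:
        continue
    invaded[i, j] = True
    if j == n - 1:                            # breakthrough at the right edge
        break
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, c = i + di, j + dj
        if 0 <= a < n and 0 <= c < n and not invaded[a, c]:
            heapq.heappush(frontier, (entry_p[a, c], a, c))

print(f"invaded fraction at breakthrough: {invaded.mean():.2f}")
```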

  18. Hurwitz numbers, matrix models and enumerative geometry

    CERN Document Server

    Bouchard, Vincent

    2007-01-01

    We propose a new, conjectural recursion solution for Hurwitz numbers at all genera. This conjecture is based on recent progress in solving type B topological string theory on the mirrors of toric Calabi-Yau manifolds, which we briefly review to provide some background for our conjecture. We show in particular how this B-model solution, combined with mirror symmetry for the one-leg, framed topological vertex, leads to a recursion relation for Hodge integrals with three Hodge class insertions. Our conjecture in Hurwitz theory follows from this recursion for the framed vertex in the limit of infinite framing.

  19. Modeling the number of car theft using Poisson regression

    Science.gov (United States)

    Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura

    2016-10-01

    Regression analysis is among the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. The paper focuses on the numbers of car thefts that occurred in districts of Peninsular Malaysia. Two groups of factors were considered, namely district descriptive factors and socio-demographic factors. The results show that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, the number of residents aged 25 to 64, the number of employed persons and the number of unemployed persons are the factors that most strongly influence car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners seeking to reduce and limit car theft cases in Peninsular Malaysia.
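
    A minimal sketch of such a Poisson regression, with synthetic covariates standing in for the district-level factors named above:

```python
# A minimal Poisson regression for district-level counts, using statsmodels.
# Covariates are synthetic stand-ins for the factors named in the abstract
# (e.g. employment and migration), standardized for simplicity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([rng.normal(size=n),    # e.g. standardized employed persons
                     rng.normal(size=n)])   # e.g. standardized foreign migration
beta_true = np.array([0.4, -0.2])
y = rng.poisson(np.exp(1.0 + X @ beta_true))

fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(fit.params)    # expect roughly [1.0, 0.4, -0.2]
```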

  20. Estimation of curve number by DAWAST model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tai Cheol; Park, Seung Ki; Moon, Jong Pil [Chungnam National University, Taejon (Korea, Republic of)

    1997-10-31

    Determining the effective rainfall is one of the most important steps in estimating a design flood hydrograph. The SCS curve number (CN) method has frequently been used to estimate the effective rainfall of synthesized design flood hydrographs for hydraulic structures. Caution is needed, however, in applying the SCS-CN, originally developed in the U.S.A., to watersheds in Korea, because watershed characteristics and cropping patterns in Korea, especially paddy-land cultivation, are quite different from those in the U.S.A. A new CN method has therefore been introduced. The maximum storage capacity, herein defined as U_max, can be calibrated from streamflow data and converted to a new CN-I for the driest soil moisture condition in the given watershed. Effective rainfall for the design flood hydrograph can then be estimated with curve numbers developed for watersheds in Korea. (author). 14 refs., 5 tabs., 3 figs.
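
    For reference, the standard SCS-CN rainfall-runoff relation that a calibrated curve number feeds into is Q = (P - 0.2S)² / (P + 0.8S) with S = 25400/CN - 254 (depths in mm); a small worked example follows. A drier antecedent condition (the CN-I above) lowers CN, raises S, and hence lowers the computed runoff.

```python
# The standard SCS-CN rainfall-runoff relation (depths in mm):
#   S = 25400/CN - 254,  Ia = 0.2*S,  Q = (P - Ia)^2 / (P - Ia + S) for P > Ia
def scs_runoff(P: float, CN: float) -> float:
    """Direct runoff depth (mm) for rainfall P (mm) and curve number CN."""
    S = 25400.0 / CN - 254.0      # potential maximum retention
    Ia = 0.2 * S                  # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

print(f"{scs_runoff(100.0, 75):.1f} mm")   # ~41 mm of runoff from 100 mm of rain
```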

  1. The IIR evaluation model

    DEFF Research Database (Denmark)

    Borlund, Pia

    2003-01-01

    An alternative approach to evaluation of interactive information retrieval (IIR) systems, referred to as the IIR evaluation model, is proposed. The model provides a framework for the collection and analysis of IR interaction data. The aim of the model is two-fold: 1) to facilitate the evaluation ...

  2. Pragmatic geometric model evaluation

    Science.gov (United States)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can be assessed only in a subjective way. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault network). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, partly because of different data rights, data policies and modelling software between the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data sensitive, hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte-Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  3. Cosmic numbers: A physical classification for cosmological models

    International Nuclear Information System (INIS)

    Avelino, P.P.; Martins, C.J.A.P.

    2003-01-01

    We introduce the notion of the cosmic numbers of a cosmological model, and discuss how they can be used to naturally classify models according to their ability to solve some of the problems of the standard cosmological model

  4. The EMEFS model evaluation

    International Nuclear Information System (INIS)

    Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.; Karamchandani, P.; Venkatram, A.; Fung, C.; Misra, P.K.; Hansen, D.A.; Chang, J.S.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs
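
    A minimal sketch of the kinds of difference statistics and correlations named in the evaluation protocol, computed on synthetic predicted/observed pairs:

```python
# Comparison statistics of the kind named in the protocol, computed on
# synthetic predicted/observed pairs (a stand-in "model" with a +20% bias).
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=5.0, size=500)        # "observed" values
pred = 1.2 * obs + rng.normal(scale=2.0, size=500)     # "predicted" values

bias = np.mean(pred - obs)                             # mean difference
rmse = np.sqrt(np.mean((pred - obs) ** 2))             # root-mean-square error
corr = np.corrcoef(pred, obs)[0, 1]                    # linear correlation
print(f"bias = {bias:.2f}, RMSE = {rmse:.2f}, r = {corr:.2f}")
```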

  5. The EMEFS model evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Barchet, W.R. (Pacific Northwest Lab., Richland, WA (United States)); Dennis, R.L. (Environmental Protection Agency, Research Triangle Park, NC (United States)); Seilkop, S.K. (Analytical Sciences, Inc., Durham, NC (United States)); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. (Atmospheric Environment Service, Downsview, ON (Canada)); Byun, D.; McHenry, J.N.

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.

  6. Vegetable oil and fat viscosity forecast models based on iodine number and saponification number

    International Nuclear Information System (INIS)

    Toscano, G.; Riva, G.; Foppa Pedretti, E.; Duca, D.

    2012-01-01

    Vegetable oils and fats can be considered an important renewable source for energy production. There are many applications in which these biofuels are used directly in engines. However, the use of pure vegetable oils causes some problems as a consequence of their chemical and physical characteristics. Viscosity is one of the most important parameters affecting several physical and mechanical processes in engine operation, and determining it at different temperatures is important for characterizing the behavior of vegetable oils and fats. In this work we investigated the effects of two analytical chemical parameters (iodine number and saponification number) and propose forecasting models. -- Highlights: ► Vegetable oil and fat viscosity is predicted by mathematical models based on the saponification number and iodine number. ► Unsaturated vegetable oils with small fatty acid molecules have lower viscosity values. ► The proposed models show an average error lower than 12%

  7. Introducing Program Evaluation Models

    Directory of Open Access Journals (Sweden)

    Raluca GÂRBOAN

    2008-02-01

    Full Text Available Program and project evaluation models can be extremely useful in project planning and management. The aim is to ask the right questions as early as possible, in order to detect and deal with unwanted program effects in time, as well as to reinforce the positive elements of the project's impact. In short, different evaluation models are used in order to minimize losses and maximize the benefits of interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.

  8. Prediction of cloud droplet number in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.

  9. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Full Text Available Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of matrices over dual numbers, and thus the calculus of dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
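
    A minimal illustration of the underlying algebra, assuming nothing about the paper's notation: dual numbers a + b·ε with ε² = 0 propagate first-order error terms automatically, which is what makes matrices over them natural carriers of small machine errors.

```python
# Minimal dual-number arithmetic: a + b*eps with eps^2 = 0. First-order error
# terms propagate automatically through sums and products.
from dataclasses import dataclass

@dataclass
class Dual:
    re: float    # nominal (real) part
    du: float    # infinitesimal (error) part

    def __add__(self, other):
        return Dual(self.re + other.re, self.du + other.du)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.re * other.re, self.re * other.du + self.du * other.re)

nominal = Dual(2.0, 0.0)      # an exact length of 2
scale = Dual(1.0, 0.001)      # a unit scale carrying a small error
print(nominal * scale)        # Dual(re=2.0, du=0.002): the error scaled linearly
```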

  10. Data modeling and evaluation

    International Nuclear Information System (INIS)

    Bauge, E.; Hilaire, S.

    2006-01-01

    This lecture is devoted to the nuclear data evaluation process, during which the current knowledge (experimental or theoretical) of nuclear reactions is condensed and synthesised into a computer file (the evaluated data file) that application codes can process and use for simulation calculations. After an overview of the content of evaluated nuclear data files, we describe the different methods used for evaluating nuclear data. We specifically focus on the model based approach which we use to evaluate data in the continuum region. A few examples, coming from the day to day practice of data evaluation will illustrate this lecture. Finally, we will discuss the most likely perspectives for improvement of the evaluation process in the next decade. (author)

  11. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_(B-L). Then, due to the existence of a specific compensation mechanism between the contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  12. Evaluation of phase separator number in hydrodesulfurization (HDS) unit

    Science.gov (United States)

    Jayanti, A. D.; Indarto, A.

    2016-11-01

    The removal of acid gases such as H2S in the natural gas processing industry is required in order to meet sales gas specifications. Hydrodesulfurization (HDS) is one of the refinery processes dedicated to reducing sulphur. In the HDS unit, the phase separator plays an important role in removing H2S from hydrocarbons, operating at a given pressure and temperature. The number of separators in the system was therefore optimized and evaluated for performance and economics. The evaluation shows that all of the systems were able to meet the H2S specification of the desired product, but the one-separator system incurred the highest capital and operational costs. The two-separator system showed the best performance in terms of energy efficiency, with the lowest capital and operating costs, and is therefore recommended as the reference configuration for H2S removal from natural gas in the HDS unit.

  13. Integrated Assessment Model Evaluation

    Science.gov (United States)

    Smith, S. J.; Clarke, L.; Edmonds, J. A.; Weyant, J. P.

    2012-12-01

    Integrated assessment models of climate change (IAMs) are widely used to provide insights into the dynamics of the coupled human and socio-economic system, including emission mitigation analysis and the generation of future emission scenarios. Similar to the climate modeling community, the integrated assessment community has a two decade history of model inter-comparison, which has served as one of the primary venues for model evaluation and confirmation. While analysis of historical trends in the socio-economic system has long played a key role in diagnostics of future scenarios from IAMs, formal hindcast experiments are just now being contemplated as evaluation exercises. Some initial thoughts on setting up such IAM evaluation experiments are discussed. Socio-economic systems do not follow strict physical laws, which means that evaluation needs to take place in a context, unlike that of physical system models, in which there are few fixed, unchanging relationships. Of course strict validation of even earth system models is not possible (Oreskes et al. 2004), a fact borne out by the inability of models to constrain the climate sensitivity. Energy-system models have also been grappling with some of the same questions over the last quarter century. For example, one of "the many questions in the energy field that are waiting for answers in the next 20 years" identified by Hans Landsberg in 1985 was "Will the price of oil resume its upward movement?" Of course we are still asking this question today. While, arguably, even fewer constraints apply to socio-economic systems, numerous historical trends and patterns have been identified, although often only in broad terms, that are used to guide the development of model components, parameter ranges, and scenario assumptions. IAM evaluation exercises are expected to provide useful information for interpreting model results and improving model behavior. A key step is the recognition of model boundaries, that is, what is inside

  14. The Influence of Investor Number on a Microscopic Market Model

    Science.gov (United States)

    Hellthaler, T.

    The stock market model of Levy, Persky, Solomon is simulated for much larger numbers of investors. While small markets can lead to realistically looking prices, the resulting prices of large markets oscillate smoothly in a semi-regular fashion.

  15. Training effectiveness evaluation model

    International Nuclear Information System (INIS)

    Penrose, J.B.

    1993-01-01

    NAESCO's Training Effectiveness Evaluation Model (TEEM) integrates existing evaluation procedures with new procedures. The new procedures are designed to measure training impact on organizational productivity. TEEM seeks to enhance organizational productivity through proactive training focused on operation results. These results can be identified and measured by establishing and tracking performance indicators. Relating training to organizational productivity is not easy. TEEM is a team process. It offers strategies to assess more effectively organizational costs and benefits of training. TEEM is one organization's attempt to refine, manage and extend its training evaluation program

  16. Developmental Education Evaluation Model.

    Science.gov (United States)

    Perry-Miller, Mitzi; And Others

    A developmental education evaluation model designed to be used at a multi-unit urban community college is described. The purpose of the design was to determine the cost effectiveness/worth of programs in order to initiate self-improvement. A needs assessment was conducted by interviewing and taping the responses of students, faculty, staff, and…

  17. CMAQ Model Evaluation Framework

    Science.gov (United States)

    CMAQ is tested to establish the modeling system’s credibility in predicting pollutants such as ozone and particulate matter. Evaluation of CMAQ has been designed to assess the model’s performance for specific time periods and for specific uses.

  18. The Comprehensive Evaluation Method of Supervision Risk in Electricity Transaction Based on Unascertained Rational Number

    Science.gov (United States)

    Haining, Wang; Lei, Wang; Qian, Zhang; Zongqiang, Zheng; Hongyu, Zhou; Chuncheng, Gao

    2018-03-01

    To handle the uncertainty in the comprehensive evaluation of supervision risk in electricity transactions, this paper uses unascertained rational numbers to evaluate the supervision risk, obtaining the possible evaluation results with their corresponding credibilities and realizing the quantification of the risk indexes. The model yields the risk degree of each index, which makes it easier for electricity transaction supervisors to identify transaction risks and determine the risk level, assisting decision-making and realizing effective supervision of the risk. The results of the case analysis verify the effectiveness of the model.

  19. Mayer–Jensen Shell Model and Magic Numbers

    Indian Academy of Sciences (India)

    Mayer-Jensen Shell Model and Magic Numbers - An Independent Nucleon Model with Spin-Orbit Coupling. R Velusamy. General Article, Volume 12, Issue 12, December 2007, pp 12-24.

  20. Optimal Number of States in Hidden Markov Models and its ...

    African Journals Online (AJOL)

    In this paper, a Hidden Markov Model is applied to model human movements so as to facilitate their automatic detection. A number of activities were simulated with the help of two persons. The four movements considered are walking, sitting down-getting up, falling while walking and falling while standing. The data is ...
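
    One common way to choose the number of hidden states, sketched below with the hmmlearn package (an assumption; the paper does not name its software) and synthetic one-dimensional data, is to minimize the Bayesian information criterion (BIC) across candidate state counts.

```python
# Choosing the number of hidden states by minimizing BIC. Assumes hmmlearn;
# the data are synthetic 1-D observations from two regimes, standing in for
# real movement-sensor features.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)
X = np.concatenate([rng.normal(0, 1, 300),
                    rng.normal(5, 1, 300)]).reshape(-1, 1)

def bic(model: GaussianHMM, X: np.ndarray) -> float:
    k, d = model.n_components, X.shape[1]
    n_params = k * (k - 1) + (k - 1) + 2 * k * d   # transitions, starts, means, variances
    return -2.0 * model.score(X) + n_params * np.log(len(X))

for k in (1, 2, 3, 4):
    m = GaussianHMM(n_components=k, random_state=0).fit(X)
    print(k, round(bic(m, X), 1))                  # minimum expected at k = 2
```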

  1. On the vacuum baryon number in the chiral bag model

    International Nuclear Information System (INIS)

    Jaroszewicz, T.

    1984-01-01

    We give a rederivation, generalization and interpretation of the result of Goldstone and Jaffe on the vacuum baryon number in the chiral bag model. Our results are based on considering the bag model as a theory of free quarks, massless inside and infinitely massive outside the bag. (orig.)

  2. Evaluating Number Sense in Community College Developmental Math Students

    Science.gov (United States)

    Steinke, Dorothea A.

    2017-01-01

    Community college developmental math students (N = 657) from three math levels were asked to place five whole numbers on a line that had only endpoints 0 and 20 marked. How the students placed the numbers revealed the same three stages of behavior that Steffe and Cobb (1988) documented in determining young children's number sense. 23% of the…

  3. Evaluating Educational Programs. ERIC Digest Series Number EA 54.

    Science.gov (United States)

    Beswick, Richard

    In this digest, readers are introduced to the scope of instructional program evaluation and evaluators' changing roles in school districts. A program evaluation measures outcomes based on student-attainment goals, implementation levels, and external factors such as budgetary restraints and community support. Instructional program evaluation may be…

  4. Evaluate Yourself. Evaluation: Research-Based Decision Making Series, Number 9304.

    Science.gov (United States)

    Fetterman, David M.

    This document considers both self-examination and external evaluation of gifted and talented education programs. Principles of the self-examination process are offered, noting similarities to external evaluation models. Principles of self-evaluation efforts include the importance of maintaining a nonjudgmental orientation, soliciting views from…

  5. Conserved number fluctuations in a hadron resonance gas model

    International Nuclear Information System (INIS)

    Garg, P.; Mishra, D.K.; Netrakanti, P.K.; Mohanty, B.; Mohanty, A.K.; Singh, B.K.; Xu, N.

    2013-01-01

    Net-baryon, net-charge and net-strangeness number fluctuations in high energy heavy-ion collisions are discussed within the framework of a hadron resonance gas (HRG) model. Ratios of the conserved number susceptibilities calculated in HRG are compared to the corresponding experimental measurements to extract information about the freeze-out condition and the phase structure of systems with strong interactions. We emphasize the importance of considering the actual experimental acceptances in terms of kinematics (pseudorapidity (η) and transverse momentum (p_T)), the detected charge state, the effect of collective motion of particles in the system and the resonance decay contributions before comparisons are made to the theoretical calculations. In this work, based on the HRG model, we report that the net-baryon number fluctuations are least affected by experimental acceptances compared to the net-charge and net-strangeness number fluctuations
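
    As a small illustration of the quantities involved, the sketch below draws net-baryon samples from a Skellam distribution (the HRG baseline for net-baryon number; the mean multiplicities are invented) and computes the cumulants and ratios that are compared with data.

```python
# Cumulants of an event-by-event net-baryon distribution, sampled from the
# Skellam baseline (difference of two Poisson variates).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
b, bbar = 8.0, 6.0                                        # mean (anti)baryon counts
net_B = rng.poisson(b, 10**6) - rng.poisson(bbar, 10**6)  # net-baryon sample

C1 = np.mean(net_B)
C2 = np.var(net_B)
C3 = stats.moment(net_B, moment=3)
C4 = stats.moment(net_B, moment=4) - 3.0 * C2**2

# Skellam cumulants: odd orders = b - bbar, even orders = b + bbar
print(f"C1 = {C1:.2f} (expect {b - bbar}), C2 = {C2:.2f} (expect {b + bbar})")
print(f"C3/C2 = {C3 / C2:.3f}, C4/C2 = {C4 / C2:.3f} (expect 1)")
```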

  6. On the Reproduction Number of a Gut Microbiota Model.

    Science.gov (United States)

    Barril, Carles; Calsina, Àngel; Ripoll, Jordi

    2017-11-01

    A spatially structured linear model of the growth of intestinal bacteria is analysed from two generational viewpoints. Firstly, the basic reproduction number associated with the bacterial population, i.e. the expected number of daughter cells per bacterium, is given explicitly in terms of biological parameters. Secondly, an alternative quantity is introduced based on the number of bacteria produced within the intestine by one bacterium originally in the external media. The latter depends on the parameters in a simpler way and provides more biological insight than the standard reproduction number, allowing the design of experimental procedures. Both quantities coincide and are equal to one at the extinction threshold, below which the bacterial population becomes extinct. Optimal values of both reproduction numbers are derived assuming parameter trade-offs.

  7. Toward a model framework of generalized parallel componential processing of multi-symbol numbers.

    Science.gov (United States)

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-05-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining and investigating a sign-decade compatibility effect for the comparison of positive and negative numbers, which extends the unit-decade compatibility effect in 2-digit number processing. Then, we evaluated whether the model is capable of accounting for previous findings in negative number processing. In a magnitude comparison task, in which participants had to single out the larger of 2 integers, we observed a reliable sign-decade compatibility effect with prolonged reaction times for incompatible (e.g., -97 vs. +53; in which the number with the larger decade digit has the smaller, i.e., negative polarity sign) as compared with sign-decade compatible number pairs (e.g., -53 vs. +97). Moreover, an analysis of participants' eye fixation behavior corroborated our model of parallel componential processing of multi-symbol numbers. These results are discussed in light of concurrent theoretical notions about negative number processing. On the basis of the present results, we propose a generalized integrated model framework of parallel componential multi-symbol processing. (c) 2015 APA, all rights reserved.

  8. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    Science.gov (United States)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Science Meeting to be held in January 2005. Since the submittal of these abstracts we are continuing refinement of the model coefficients derived for the case of a variable Turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a degree ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed an axisymmetric code for treating axisymmetric flows. In addition the variable Schmidt number formulation was incorporated in the code and we are in the process of determining the model constants.

  9. Do calculated conflicts in microsimulation model predict number of crashes?

    NARCIS (Netherlands)

    Dijkstra, Atze; Marchesini, Paula; Bijleveld, Frits; Kars, Vincent; Drolenga, Hans; Maarseveen, Martin Van

    2010-01-01

    A microsimulation model and its calculations are described, and the results that are subsequently used to determine indicators for traffic safety are presented. The method demonstrates which changes occur at the level of traffic flow (number of vehicles per section of road) and at the vehicle level

  10. Physical and numerical modelling of low mach number compressible flows

    International Nuclear Information System (INIS)

    Paillerre, H.; Clerc, S.; Dabbene, F.; Cueto, O.

    1999-01-01

    This article reviews various physical models that may be used to describe compressible flow at low Mach numbers, as well as the numerical methods developed at DRN to discretize the different systems of equations. A selection of thermal-hydraulic applications illustrate the need to take into account compressibility and multidimensional effects as well as variable flow properties. (authors)

  11. Evaluation: The TADS Experience. Occasional Paper Number 4.

    Science.gov (United States)

    Suarez, Tanya M.; Vandiviere, Patricia

    The paper considers the issues, decisions, and practices involved in evaluating the Technical Assistance Development System (TADS), a project to provide assistance to demonstration projects and state education agency grantees in the Handicapped Children's Early Education Program. Section 1 considers the focus for the evaluation in terms of its…

  12. Modeling Scramjet Flows with Variable Turbulent Prandtl and Schmidt Numbers

    Science.gov (United States)

    Xiao, X.; Hassan, H. A.; Baurle, R. A.

    2006-01-01

    A complete turbulence model, where the turbulent Prandtl and Schmidt numbers are calculated as part of the solution and where averages involving chemical source terms are modeled, is presented. The ability of avoiding the use of assumed or evolution Probability Distribution Functions (PDF's) results in a highly efficient algorithm for reacting flows. The predictions of the model are compared with two sets of experiments involving supersonic mixing and one involving supersonic combustion. The results demonstrate the need for consideration of turbulence/chemistry interactions in supersonic combustion. In general, good agreement with experiment is indicated.

  13. Baryon number fluctuations in quasi-particle model

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ameng [Southeast University Chengxian College, Department of Foundation, Nanjing (China); Luo, Xiaofeng [Central China Normal University, Key Laboratory of Quark and Lepton Physics (MOE), Institute of Particle Physics, Wuhan (China); Zong, Hongshi [Nanjing University, Department of Physics, Nanjing (China); Joint Center for Particle, Nuclear Physics and Cosmology, Nanjing (China); Institute of Theoretical Physics, CAS, State Key Laboratory of Theoretical Physics, Beijing (China)

    2017-04-15

    Baryon number fluctuations are sensitive to the QCD phase transition and the QCD critical point. According to the Feynman rules of finite-temperature field theory, we calculated various order moments and cumulants of the baryon number distributions in the quasi-particle model of the quark-gluon plasma. Furthermore, we compared our results with the experimental data measured by the STAR experiment at RHIC. It is found that the experimental data can be well described by the model for colliding energies above 30 GeV but show large discrepancies at low energies. This puts a new constraint on the qQGP model and also provides a baseline for the QCD critical point search in heavy-ion collisions at low energies. (orig.)

  14. Application of Z-Number Based Modeling in Psychological Research

    Directory of Open Access Journals (Sweden)

    Rafik Aliev

    2015-01-01

    Full Text Available Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied to model the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measurement of psychological parameters is performed using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger's Anxiety Test completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from the completed questionnaires. The basic steps of Z-number based modeling with numerical solutions are presented.

  15. Evaluation of R and D volume 2 number 3

    International Nuclear Information System (INIS)

    Anderson, F.; Cheah, C.; Dalpe, R.; O'Brecht, M.

    1994-01-01

    A Canadian newsletter on the evaluation of research and development. This issue contains an econometric assessment of the impact of Research and Development programs, the choice of location for pharmaceutical Research and Development, the industry's scientific publications, standards as a strategic instrument, and how much future Research and Development an organization can justify.

  16. Statistical evaluation of PACSTAT random number generation capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.F.; Toland, M.R.; Harty, H.; Budden, M.J.; Bartley, C.L.

    1988-05-01

    This report summarizes the work performed in verifying the general purpose Monte Carlo driver-program PACSTAT. The main objective of the work was to verify the performance of PACSTAT's random number generation capabilities. Secondary objectives were to document (using controlled configuration management procedures) changes made in PACSTAT at Pacific Northwest Laboratory, and to assure that PACSTAT input and output files satisfy quality assurance traceability constraints. Upon receipt of the PRIME version of the PACSTAT code from the Basalt Waste Isolation Project, Pacific Northwest Laboratory staff converted the code to run on Digital Equipment Corporation (DEC) VAXs. The modifications to PACSTAT were implemented using the WITNESS configuration management system, with the modifications themselves intended to make the code as portable as possible. Certain modifications were made to make the PACSTAT input and output files conform to quality assurance traceability constraints. 10 refs., 17 figs., 6 tabs.
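
    The report's test suite is not reproduced here; as a rough illustration of the kind of uniformity checks such a verification involves, the following Python sketch applies a chi-square frequency test and a Kolmogorov-Smirnov test to a stream of generator output. The generator and sample size are stand-ins, not the PACSTAT procedure.

        # Illustrative uniformity checks for a random number stream (not the
        # actual PACSTAT verification procedure; the source is hypothetical).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(seed=1)   # stand-in for the generator under test
        u = rng.random(100_000)               # stream of presumed U(0,1) variates

        # Chi-square frequency test: bin counts vs. the uniform expectation.
        observed, _ = np.histogram(u, bins=100, range=(0.0, 1.0))
        chi2, p_chi2 = stats.chisquare(observed)

        # Kolmogorov-Smirnov test against the U(0,1) CDF.
        ks, p_ks = stats.kstest(u, "uniform")

        print(f"chi-square p = {p_chi2:.3f}, KS p = {p_ks:.3f}")
        # Large p-values are consistent with (but do not prove) uniformity.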

  17. SMART Grid Evaluation Using Fuzzy Numbers and TOPSIS

    Science.gov (United States)

    El Alaoui, Mohammed

    2018-05-01

    With the recent advent of smart grids, end-users aim to satisfy simultaneously low electricity bills and a reasonable level of comfort. While cost evaluation appears to be an easy task, capturing human preferences is more challenging. Here we propose the use of fuzzy logic and a modified version of the TOPSIS method to quantify end-users' preferences in a smart grid. While the classical smart grid focuses only on the technological side, it has been shown that smart grid effectiveness is strongly linked to end-users' behaviours. The main objective here is to involve smart grid users in order to achieve maximum satisfaction while preserving classical smart grid objectives.
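
    As a concrete reference point for the method described above, the following sketch runs the classical crisp TOPSIS steps on a hypothetical smart grid decision matrix; the article itself uses fuzzy numbers and a modified TOPSIS variant, so this is only the crisp skeleton that such a modification builds on.

        # Crisp TOPSIS sketch; decision matrix, weights and criteria are hypothetical.
        import numpy as np

        # Rows: demand-response schedules; columns: criteria (cost, comfort).
        X = np.array([[120.0, 7.0],
                      [100.0, 5.0],
                      [140.0, 9.0]])
        w = np.array([0.6, 0.4])           # criterion weights
        benefit = np.array([False, True])  # cost minimized, comfort maximized

        R = X / np.linalg.norm(X, axis=0)          # vector-normalize each criterion
        V = R * w                                   # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
        d_neg = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal
        closeness = d_neg / (d_pos + d_neg)         # rank alternatives by this score
        print(np.argsort(closeness)[::-1])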

  18. Fuzzy model for predicting the number of deformed wheels

    Directory of Open Access Journals (Sweden)

    Ž. Đorđević

    2015-10-01

    Full Text Available Deformation of the wheels damages cars and rails and affects vehicle stability and safety. Repair and replacement cause high costs and a shortage of wagons. Maintenance planning for wagons cannot be done without estimates of the number of wheels that will be replaced due to wear and deformation in a given period of time. There are many influencing factors, the most important being weather conditions, quality of materials, operating conditions, and the distance between two replacements. The fuzzy logic model uses the collected data as input variables to predict the output variable: the number of deformed wheels for a certain type of vehicle in a defined period on a particular section of the railway.

  19. Modeling users' activity on twitter networks: validation of Dunbar's number.

    Directory of Open Access Journals (Sweden)

    Bruno Gonçalves

    Full Text Available Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the 'economy of attention' is limited in the online world by cognitive and biological constraints, as predicted by Dunbar's theory. We propose a simple model of users' behavior, based on finite priority queuing and time resources, that reproduces the observed social behavior.

  20. Baryon number dissipation at finite temperature in the standard model

    International Nuclear Information System (INIS)

    Mottola, E.; Raby, S.; Starkman, G.

    1990-01-01

    We analyze the phenomenon of baryon number violation at finite temperature in the standard model, and derive the relaxation rate for the baryon density in the high temperature electroweak plasma. The relaxation rate, γ, is given in terms of real time correlation functions of the operator E·B, and is directly proportional to the sphaleron transition rate Γ: γ ∝ n_f Γ/T³. Hence it is not instanton suppressed, as claimed by Cohen, Dugan and Manohar (CDM). We show explicitly how this result is consistent with the methods of CDM, once it is recognized that a new anomalous commutator is required in their approach. 19 refs., 2 figs

  1. Evaluating topic models with stability

    CSIR Research Space (South Africa)

    De Waal, A

    2008-11-01

    Full Text Available Topic models are unsupervised techniques that extract likely topics from text corpora, by creating probabilistic word-topic and topic-document associations. Evaluation of topic models is a challenge because (a) topic models are often employed...

  2. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of available memory for models with more than 1000 electrons because of the need to store a large amount of electronic orbital data. Moreover, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data implemented with MPI or Unix inter-process communication tools, (2) second-level parallelism for the configuration computation.

  3. Increased mast cell numbers in a calcaneal tendon overuse model

    DEFF Research Database (Denmark)

    Pingel, Jessica; Wienecke, Jacob; Kongsgaard Madsen, Mads

    2013-01-01

    Tendinopathy is often discovered late because the initial development of tendon pathology is asymptomatic. The aim of this study was to examine the potential role of mast cell involvement in early tendinopathy using a high-intensity uphill running (HIUR) exercise model. Twenty-four male Wistar rats were divided into a running group and a sedentary control group. The intensity of immunostaining of protein kinase B (P = 0.03; 2.75 ± 0.54 vs 1.17 ± 0.53) was increased in the runners. The Bonar score (P = 0.05) and the number of mast cells (P = 0.02) were significantly higher in the runners compared to the controls. Furthermore, SHGM showed focal collagen disorganization in the runners and reduced collagen density (P = 0.03). IL-3 mRNA levels were correlated with mast cell number in sedentary animals. The qPCR analysis showed no significant differences between the groups in the other analyzed targets. The current study demonstrates that 7-week HIUR causes structural changes in the calcaneal tendon, and further that these changes are associated with an increased mast cell density.

  4. Modelling the number of olive groves in Spanish municipalities

    Energy Technology Data Exchange (ETDEWEB)

    Huete, M.D.; Marmolejo, J.A.

    2016-11-01

    The univariate generalized Waring distribution (UGWD) is presented as a new model, applicable in the context of agriculture. In this paper, it was used to model the number of olive groves recorded in Spain in the 8,091 municipalities included in the 2009 Agricultural Census, according to which the production of oil olives accounted for 94% of total output, while that of table olives represented 6% (with an average of 44.84 and 4.06 holdings per Spanish municipality, respectively). UGWD is suitable for fitting this type of discrete data, with strong left-sided asymmetry. This novel use of UGWD can provide the foundation for future research in agriculture, with the advantage over other discrete distributions that it enables the analyst to split the variance. After defining the distribution, we analysed various methods for fitting its parameters, namely estimation by maximum likelihood, estimation by the method of moments, and a variant of the latter, estimation by the method of frequencies and moments. For oil olives, the chi-square goodness of fit test gives p-values of 0.9992, 0.9967 and 0.9977, respectively. However, a poor fit was obtained for the table olive distribution. Finally, the variance was split, following Irwin, into three components related to random factors, external factors and internal differences. For the distribution of the number of olive grove holdings, this splitting showed that random and external factors account for only about 0.22% and 0.05%, respectively. Therefore, internal differences within municipalities play an important role in determining total variability. (Author)

  5. Modelling the number of olive groves in Spanish municipalities

    Directory of Open Access Journals (Sweden)

    María-Dolores Huete

    2016-03-01

    Full Text Available The univariate generalized Waring distribution (UGWD) is presented as a new model, applicable in the context of agriculture. In this paper, it was used to model the number of olive groves recorded in Spain in the 8,091 municipalities included in the 2009 Agricultural Census, according to which the production of oil olives accounted for 94% of total output, while that of table olives represented 6% (with an average of 44.84 and 4.06 holdings per Spanish municipality, respectively). UGWD is suitable for fitting this type of discrete data, with strong left-sided asymmetry. This novel use of UGWD can provide the foundation for future research in agriculture, with the advantage over other discrete distributions that it enables the analyst to split the variance. After defining the distribution, we analysed various methods for fitting its parameters, namely estimation by maximum likelihood, estimation by the method of moments, and a variant of the latter, estimation by the method of frequencies and moments. For oil olives, the chi-square goodness of fit test gives p-values of 0.9992, 0.9967 and 0.9977, respectively. However, a poor fit was obtained for the table olive distribution. Finally, the variance was split, following Irwin, into three components related to random factors, external factors and internal differences. For the distribution of the number of olive grove holdings, this splitting showed that random and external factors account for only about 0.22% and 0.05%, respectively. Therefore, internal differences within municipalities play an important role in determining total variability.
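
    Neither record reproduces the fitting procedure, and the UGWD is not available in standard libraries; the sketch below illustrates the general workflow (a method-of-moments fit followed by a chi-square goodness-of-fit test) using a negative binomial distribution as a stand-in and synthetic counts, not the Census data.

        # Moment fit and chi-square goodness-of-fit test, with a negative
        # binomial standing in for the UGWD; counts are synthetic.
        import numpy as np
        from scipy import stats

        counts = np.random.default_rng(0).negative_binomial(1.2, 0.03, size=2000)

        # Method of moments for NB(r, p): mean = r(1-p)/p, var = r(1-p)/p**2.
        m, v = counts.mean(), counts.var()
        p_hat = m / v
        r_hat = m * p_hat / (1.0 - p_hat)

        # Compare observed and expected frequencies over pooled bins.
        edges = [0, 5, 10, 20, 40, 80, np.inf]
        obs, _ = np.histogram(counts, bins=edges)
        cdf = stats.nbinom.cdf(np.array(edges[1:]) - 1, r_hat, p_hat)
        exp = np.diff(np.concatenate([[0.0], cdf])) * counts.size
        chi2, p_val = stats.chisquare(obs, exp * obs.sum() / exp.sum(), ddof=2)
        print(f"chi-square p-value = {p_val:.4f}")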

  6. Number and location of drainage catheter side holes: in vitro evaluation.

    Science.gov (United States)

    Ballard, D H; Alexander, J S; Weisman, J A; Orchard, M A; Williams, J T; D'Agostino, H B

    2015-09-01

    To evaluate the influence of number and location of catheter shaft side holes regarding drainage efficiency in an in vitro model. Three different drainage catheter models were constructed: open-ended model with no side holes (one catheter), unilateral side hole model (six catheters with one to six unilateral side holes), and bilateral side hole model (six catheters with one to six bilateral side holes). Catheters were inserted into a drainage output-measuring device with a constant-pressure reservoir of water. The volume of water evacuated by each of the catheters at 10-second intervals was measured. A total of five trials were performed for each catheter. Data were analysed using one-way analysis of variance. The open-ended catheter had a mean drainage volume comparable to the unilateral model catheters with three, four, and five side holes. Unilateral model catheters had significant drainage volume increases up to three side holes; unilateral model catheters with more than three side holes had no significant improvement in drainage volume. All bilateral model catheters had significantly higher mean drainage volumes than their unilateral counterparts. There was no significant difference between the mean drainage volume with one, two, or three pairs of bilateral side holes. Further, there was no drainage improvement by adding additional bilateral side holes. The present in vitro study suggests that beyond a critical side hole number threshold, adding more distal side holes does not improve catheter drainage efficiency. These results may be used to enhance catheter design towards improving their drainage efficiency. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
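
    The study's statistical comparison can be reproduced in outline with scipy's one-way ANOVA; the drainage volumes below are hypothetical placeholders for the five trials per catheter model.

        # One-way ANOVA across catheter designs, as in the study's analysis
        # (drainage volumes are hypothetical, five trials per design).
        from scipy import stats

        open_ended  = [101, 98, 103, 99, 100]   # volume per interval
        one_side    = [72, 70, 75, 69, 71]
        three_side  = [97, 99, 95, 101, 98]
        three_bilat = [118, 121, 117, 120, 119]

        f_stat, p_val = stats.f_oneway(open_ended, one_side, three_side, three_bilat)
        print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
        # A small p-value indicates at least one design differs in mean drainage;
        # pairwise post-hoc tests would identify which.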

  7. Increased mast cell numbers in a calcaneal tendon overuse model.

    Science.gov (United States)

    Pingel, J; Wienecke, J; Kongsgaard, M; Behzad, H; Abraham, T; Langberg, H; Scott, A

    2013-12-01

    Tendinopathy is often discovered late because the initial development of tendon pathology is asymptomatic. The aim of this study was to examine the potential role of mast cell involvement in early tendinopathy using a high-intensity uphill running (HIUR) exercise model. Twenty-four male Wistar rats were divided into two groups: a running group (n = 12) and a sedentary control group (n = 12). The running group was exposed to the HIUR exercise protocol for 7 weeks. The calcaneal tendons of both hind limbs were dissected. The right tendon was used for histologic analysis using the Bonar score, immunohistochemistry, and second harmonic generation microscopy (SHGM). The left tendon was used for quantitative polymerase chain reaction (qPCR) analysis. An increased tendon cell density was observed in the runners compared to the controls (P = 0.05). Further, the intensity of immunostaining of protein kinase B (P = 0.03; 2.75 ± 0.54 vs 1.17 ± 0.53) was increased in the runners. The Bonar score (P = 0.05) and the number of mast cells (P = 0.02) were significantly higher in the runners compared to the controls. Furthermore, SHGM showed focal collagen disorganization in the runners, and reduced collagen density (P = 0.03). IL-3 mRNA levels were correlated with mast cell number in sedentary animals. The qPCR analysis showed no significant differences between the groups in the other analyzed targets. The current study demonstrates that 7-week HIUR causes structural changes in the calcaneal tendon, and further that these changes are associated with an increased mast cell density. © 2013 The Authors. Scand J Med Sci Sports published by John Wiley & Sons Ltd.

  8. Evaluation of new expressions for the X-ray characteristic distribution function φ(ρz) and its application to the correction models method by atomic number, absorption and fluorescence (ZAF)

    International Nuclear Information System (INIS)

    Castellano, G.; Trincavelli, J.; Del Giorgio, M.; Riveros, J.

    1987-01-01

    Recent models for the distribution function given by Sewell, Love and Scott (1985) and by Pouchou and Pichoir (1986) are compared with those models which have shown a good agreement with experimental data. The validity of the basis on which the different models have been developed is discussed. (Author) [es

  9. Investigation on the applicability of turbulent-Prandtl-number models for liquid lead-bismuth eutectic

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Fei, E-mail: chenfei@iet.cn [Institute of Engineering Thermophysics, Chinese Academy of Sciences, Beijing 100190 (China); North China University of Water Resources and Electric Power, Zhengzhou, Henan 450011 (China); Huai, Xiulan, E-mail: hxl@iet.cn [Institute of Engineering Thermophysics, Chinese Academy of Sciences, Beijing 100190 (China); Cai, Jun, E-mail: caijun@iet.cn [Institute of Engineering Thermophysics, Chinese Academy of Sciences, Beijing 100190 (China); Li, Xunfeng, E-mail: lixunfeng@iet.cn [Institute of Engineering Thermophysics, Chinese Academy of Sciences, Beijing 100190 (China); Meng, Ruixue, E-mail: mengruixue@iet.cn [Institute of Engineering Thermophysics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-04-15

    Highlights: • We examine the applicability of various Pr_t models for the simulation of LBE flow. • The Reynolds analogy, suitable for conventional fluids, cannot accurately simulate the heat transfer characteristics of LBE flow. • A different Pr_t model should be selected for each thermal boundary condition of LBE flow. -- Abstract: With the proposal of the Accelerator Driven Sub-critical System (ADS), together with liquid lead-bismuth eutectic (LBE) as coolant for both the reactor and the spallation target, the use of an accurate heat transfer correlation and a reliable turbulent-Prandtl-number model of LBE in turbulent flows is essential when designing ADS components of the primary loop and the heat exchanger of the secondary loop. Unlike conventional fluids, there is no acknowledged turbulent-Prandtl-number model for LBE flows. This paper reviews and assesses the existing turbulent-Prandtl-number models and various heat transfer correlations in circular tubes. Computational fluid dynamics (CFD) analysis is employed to evaluate the applicability of various turbulent-Prandtl-number models for LBE in a circular tube under boundary conditions of constant heat flux and constant wall temperature. Based on this assessment, reliable turbulent-Prandtl-number models are recommended for CFD applications to LBE flows under both boundary conditions. The present study indicates that the turbulent Prandtl number in turbulent LBE flow differs significantly between constant-heat-flux and constant-wall-temperature boundary conditions.

  10. Nuclear models relevant to evaluation

    International Nuclear Information System (INIS)

    Arthur, E.D.; Chadwick, M.B.; Hale, G.M.; Young, P.G.

    1991-01-01

    The widespread use of nuclear models continues in the creation of data evaluations. The reasons include the extension of data evaluations to higher energies, the creation of data libraries for isotopic components of natural materials, and the production of evaluations for radioactive target species. In these cases, experimental data are often sparse or nonexistent. As this trend continues, the nuclear models employed in evaluation work move towards more microscopically-based theoretical methods, prompted in part by the availability of increasingly powerful computational resources. Advances in nuclear models applicable to evaluation are reviewed. These include advances in optical model theory, microscopic and phenomenological state and level density theory, unified models that consistently describe both equilibrium and nonequilibrium reaction mechanisms, and improved methodologies for the calculation of prompt radiation from fission. 84 refs., 8 figs

  11. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    Science.gov (United States)

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA for determining the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, fuzzy risk evaluation in FMEA is studied from the perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables for failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and an existing method to show the effectiveness of the proposed model.
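
    For orientation, the traditional RPN baseline that the paper improves on is a simple product of three crisp scores; the sketch below computes it for hypothetical failure modes, with comments noting where the D-numbers approach departs from it.

        # Traditional risk priority number (RPN) baseline that the D-numbers
        # approach refines; the scores below are hypothetical.
        failure_modes = {
            "seal leak":    (8, 4, 3),   # (severity, occurrence, detection), 1-10
            "motor stall":  (9, 2, 5),
            "sensor drift": (5, 6, 2),
        }

        rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
        for name in sorted(rpn, key=rpn.get, reverse=True):
            print(f"{name:12s} RPN = {rpn[name]}")
        # The paper replaces these crisp scores with non-exclusive fuzzy
        # evaluations modeled as D numbers and fuses them before ranking.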

  12. Evaluation of cell number and DNA content in mouse embryos cultivated with uranium

    International Nuclear Information System (INIS)

    Kundt, Mirian S.; Cabrini, Romulo L.

    2000-01-01

    The degree of development, the number of cells, and the DNA content were used to evaluate the embryotoxicity of uranium. Embryos at the one-cell stage were cultured with uranyl nitrate hexahydrate (UN) at final uranium (U) concentrations of 26, 52 and 104 μgU/ml. At 24 h of culture, the embryos at the 2-cell stage were put in new wells with the same concentrations of U as the previous day, until the end of the incubation period at 72 h. At 72 h of culture, 87% of the original one-cell embryos were at the morula stage; in those cultivated with uranium, the percentage decreased significantly to 77, 63.24 and 40.79%, respectively, for the different U concentrations. Embryos that exhibited a normal morphology were selected and fixed on slides. The number of cells per embryo was evaluated in Giemsa-stained preparations. The DNA content was evaluated cytophotometrically in Feulgen-stained nuclei. The number of cells decreased significantly from 20.3 ± 5.6 in the control to 19 ± 6, 14 ± 3 and 13.9 ± 5.6 for the different concentrations. All the embryos evaluated showed one easily recognizable polar body, which was used as a haploid indicator (n). The DNA content was measured in a total of 20 control embryos and 16 embryos cultivated with UN. In control embryos, 92.7% of the nuclei presented a normal ploidy from 2n to 4n, 2.9% of nuclei were hypoploid and 4.4% were hyperploid. The percentage of hypoploid nuclei rose in a dose-dependent fashion to 3.45, 44.45 and 50.34%, respectively, for the embryos cultured at the different U concentrations. The results indicate that U is embryotoxic, that its effects are dose dependent at the concentrations used in this study, and that even embryos that show a normal morphology can be genetically affected. We show that the model employed is extremely sensitive. It is possible to use preimplantation embryos as a model to test the effect of potentially mutagenic agents in the nuclear industry. (author)

  13. Ericksen number and Deborah number cascade predictions of a model for liquid crystalline polymers for simple shear flow

    Science.gov (United States)

    Klein, D. Harley; Leal, L. Gary; García-Cervera, Carlos J.; Ceniceros, Hector D.

    2007-02-01

    We consider the behavior of the Doi-Marrucci-Greco (DMG) model for nematic liquid crystalline polymers in planar shear flow. We found the DMG model to exhibit dynamics in both qualitative and quantitative agreement with experimental observations reported by Larson and Mead [Liq. Cryst. 15, 151 (1993)] for the Ericksen number and Deborah number cascades. For increasing shear rates within the Ericksen number cascade, the DMG model displays three distinct regimes: stable simple shear, stable roll cells, and irregular structure accompanied by disclination formation. In accordance with experimental observations, the model predicts both ±1 and ±1/2 disclinations. Although ±1 defects form via the ridge-splitting mechanism first identified by Feng, Tao, and Leal [J. Fluid Mech. 449, 179 (2001)], a new mechanism is identified for the formation of ±1/2 defects. Within the Deborah number cascade, with increasing Deborah number, the DMG model exhibits a streamwise banded texture, in the absence of disclinations and roll cells, followed by a monodomain wherein the mean orientation lies within the shear plane throughout the domain.

  14. Rock mechanics models evaluation report

    International Nuclear Information System (INIS)

    1987-08-01

    This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The primary recommendations of the analysis are that the DOT code be used for two-dimensional thermal analysis and that the STEALTH and HEATING 5/6 codes be used for three-dimensional and complicated two-dimensional thermal analysis. STEALTH and SPECTROM 32 are recommended for thermomechanical analyses. The other evaluated codes should be considered for use in certain applications. A separate review of salt creep models indicates that the commonly used exponential time law model is appropriate for use in repository design studies. 38 refs., 1 fig., 7 tabs

  15. Solar energy market penetration models - Science or number mysticism

    Science.gov (United States)

    Warren, E. H., Jr.

    1980-01-01

    The forecast market potential of a solar technology is an important factor determining its R&D funding. Since solar energy market penetration models are the method used to forecast market potential, they have a pivotal role in a solar technology's development. This paper critiques the applicability of the most common solar energy market penetration models. It is argued that the assumptions underlying the foundations of rigorously developed models, or the absence of a reasonable foundation for the remaining models, restrict their applicability.

  16. An EPQ model with imperfect items using interval grey numbers

    Directory of Open Access Journals (Sweden)

    Erdal Aydemir

    2015-01-01

    Full Text Available The classic economic production quantity (EPQ) model has been widely used to determine the optimal production quantity. However, the analysis underlying the EPQ model has many weaknesses, which has led many researchers and practitioners to extend the original model in several respects. The basic assumption of the EPQ model is that 100% of the manufactured products are non-defective, which is not valid for many production processes. The purpose of this paper is to develop an EPQ model with a grey demand rate and grey cost values, with a maximum allowed backorder level of good quality items, under an imperfect production process. The imperfect items are considered to be low quality items that are sold to a particular purchaser at a lower price, while the others are reworked or scrapped. A mathematical model is developed, and an industrial example from the wooden chipboard production process is presented to illustrate the proposed model.
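
    As a crisp reference point for the grey-number extension, the following sketch computes the classic EPQ with planned backorders; all parameters are hypothetical, and the paper's model additionally handles interval grey demand/cost values and imperfect items.

        # Classic EPQ with planned backorders (crisp baseline; parameters
        # hypothetical, not the grey-number model of the paper).
        from math import sqrt

        D = 48_000    # demand rate, units/year
        P = 60_000    # production rate, units/year
        K = 250.0     # setup cost per production run
        h = 4.0       # holding cost per unit per year
        b = 12.0      # backorder cost per unit per year

        rho = 1.0 - D / P                    # fraction of cycle inventory builds up
        Q = sqrt(2 * K * D / (h * rho)) * sqrt((h + b) / b)
        B = Q * rho * h / (h + b)            # maximum backorder level
        print(f"optimal lot size Q* = {Q:.0f}, max backorder B* = {B:.0f}")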

  17. Study of Variable Turbulent Prandtl Number Model for Heat Transfer to Supercritical Fluids in Vertical Tubes

    Science.gov (United States)

    Tian, Ran; Dai, Xiaoye; Wang, Dabiao; Shi, Lin

    2018-06-01

    In order to improve the prediction performance of numerical simulations of heat transfer to supercritical pressure fluids, a variable turbulent Prandtl number (Prt) model for vertical upward flow at supercritical pressures was developed in this study. The effects of Prt on the numerical simulation were analyzed, especially for heat transfer deterioration conditions. Based on these analyses, the turbulent Prandtl number was modeled as a function of the turbulent viscosity ratio and the molecular Prandtl number. The model was evaluated using experimental heat transfer data for CO2, water and Freon. The wall temperatures, including the heat transfer deterioration cases, were more accurately predicted by this model than by traditional numerical calculations with a constant Prt. By analyzing the predicted results with and without the variable Prt model, it was found that the velocity distribution and turbulent mixing characteristics predicted with the variable Prt model are quite different from those predicted with a constant Prt. When heat transfer deterioration occurs, the radial velocity profile deviates from the log-law profile and the restrained turbulent mixing then leads to the deteriorated heat transfer.
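
    The paper's fitted coefficients are not given in this abstract; as an illustration of the functional form (Prt as a function of the turbulent viscosity ratio and the molecular Prandtl number), the sketch below uses the published Kays correlation, which has the same structure but is not the model developed in the study.

        # Variable-Pr_t correlation of the same functional form the study adopts
        # (Kays 1994): Pr_t = 0.85 + 0.7/Pe_t with Pe_t = (nu_t/nu) * Pr.
        # Illustrative only; not the coefficients fitted in the paper.
        def turbulent_prandtl(visc_ratio: float, pr_molecular: float) -> float:
            pe_t = visc_ratio * pr_molecular   # turbulent Peclet number
            return 0.85 + 0.7 / pe_t

        for ratio in (1.0, 10.0, 100.0):
            print(ratio, round(turbulent_prandtl(ratio, pr_molecular=2.3), 3))
        # Pr_t rises sharply at low turbulent viscosity ratio (near-wall region)
        # and approaches a constant in fully turbulent flow.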

  18. The EU model evaluation group

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1999-01-01

    The model evaluation group (MEG) was launched in 1992, growing out of the Major Technological Hazards Programme with EU/DG XII. The goal of MEG was to improve the culture in which models were developed, particularly by encouraging voluntary model evaluation procedures based on a formalised consensus protocol. The evaluation was intended to assess the fitness-for-purpose of the models being used as a measure of their quality. The approach adopted focused on developing a generic model evaluation protocol and subsequently targeting it at specific areas of application. Five such developments have been initiated, on heavy gas dispersion, liquid pool fires, gas explosions, human factors and momentum fires. The quality of models is an important element when complying with the 'Seveso Directive', which requires that the safety reports submitted to the authorities comprise an assessment of the extent and severity of the consequences of identified major accidents. Further, the quality of models becomes important in the land use planning process, where the proximity of industrial sites to vulnerable areas may be critical. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  19. Modelling of high-enthalpy, high-Mach number flows

    International Nuclear Information System (INIS)

    Degrez, G; Lani, A; Panesi, M; Chazot, O; Deconinck, H

    2009-01-01

    A review is made of the computational models of high-enthalpy flows developed over the past few years at the von Karman Institute and Universite Libre de Bruxelles, for the modelling of high-enthalpy hypersonic (re-)entry flows. Both flows in local thermo-chemical equilibrium (LTE) and flows in thermo-chemical non-equilibrium (TCNEQ) are considered. First, the physico-chemical models are described, i.e. the set of conservation laws, the thermodynamics, transport phenomena and chemical kinetics models. Particular attention is given to the correct modelling of elemental (LTE flows) and species (chemical non-equilibrium-CNEQ-flows) transport. The numerical algorithm, based on a state-of-the-art finite volume discretization, is then briefly described. Finally, selected examples are included to illustrate the capabilities of the developed solver. (review article)

  20. Mobility Models for Systems Evaluation

    Science.gov (United States)

    Musolesi, Mirco; Mascolo, Cecilia

    Mobility models are used to simulate and evaluate the performance of mobile wireless systems and the algorithms and protocols at the basis of them. The definition of realistic mobility models is one of the most critical and, at the same time, difficult aspects of the simulation of applications and systems designed for mobile environments. There are essentially two possible types of mobility patterns that can be used to evaluate mobile network protocols and algorithms by means of simulations: traces and synthetic models [130]. Traces are obtained by means of measurements of deployed systems and usually consist of logs of connectivity or location information, whereas synthetic models are mathematical models, such as sets of equations, which try to capture the movement of the devices.

  1. Study and discretization of kinetic models and fluid models at low Mach number

    International Nuclear Information System (INIS)

    Dellacherie, Stephane

    2011-01-01

    This thesis summarizes our work between 1995 and 2010. It concerns the analysis and the discretization of Fokker-Planck or semi-classical Boltzmann kinetic models and of Euler or Navier-Stokes fluid models at low Mach number. The studied Fokker-Planck equation models the collisions between ions and electrons in a hot plasma, and is here applied to inertial confinement fusion. The studied semi-classical Boltzmann equations are of two types. The first models the thermonuclear reaction between a deuterium ion and a tritium ion producing an α particle and a neutron, and is also used in our case to describe inertial confinement fusion. The second (known as the Wang-Chang and Uhlenbeck equations) models the transitions between quantified electronic energy levels of uranium and iron atoms in the AVLIS isotopic separation process. The basic properties of these two Boltzmann equations are studied, and, for the Wang-Chang and Uhlenbeck equations, a kinetic-fluid coupling algorithm is proposed. This kinetic-fluid coupling algorithm led us to study the relaxation concept for mixtures of gases and of immiscible fluids, and to underline connections with classical kinetic theory. Then, a diphasic low Mach number model without acoustic waves is proposed to model the deformation of the interface between two immiscible fluids induced by high heat transfers at low Mach number. In order to increase the accuracy of the results without increasing the computational cost, an AMR algorithm is studied on a simplified interface deformation model. These low Mach number studies also led us to analyse, on Cartesian meshes, the inaccuracy of Godunov schemes at low Mach number. Finally, the LBM algorithm applied to the heat equation is justified.

  2. A pollution fate and transport model application in a semi-arid region: Is some number better than no number?

    Science.gov (United States)

    Özcan, Zeynep; Başkan, Oğuz; Düzgün, H Şebnem; Kentel, Elçin; Alp, Emre

    2017-10-01

    Fate and transport models are powerful tools that aid authorities in making unbiased decisions for developing sustainable management strategies. The application of pollution fate and transport models in semi-arid regions has been challenging because of unique hydrological characteristics and limited data availability. Significant temporal and spatial variability in rainfall events, complex interactions between soil, vegetation and topography, and limited water quality and hydrological data due to an insufficient monitoring network make it a difficult task to develop reliable models in semi-arid regions. The performance of these models governs the final use of the outcomes, such as policy implementation, screening, and economic analysis. In this study, a deterministic distributed fate and transport model, SWAT, is applied in the Lake Mogan Watershed, a semi-arid region dominated by dry agricultural practices, to estimate nutrient loads and to develop the water budget of the watershed. To minimize the discrepancy due to the limited availability of historical water quality data, extensive efforts were made to collect site-specific data for model inputs such as soil properties, agricultural practice information and land use. Moreover, the calibration parameter ranges suggested in the literature are utilized during calibration in order to obtain a more realistic representation of the Lake Mogan Watershed in the model. Model performance is evaluated by comparing the measured data with 95% confidence intervals for the simulated data and by comparing unit pollution load estimates with those provided in the literature for similar catchments, in addition to commonly used evaluation criteria such as Nash-Sutcliffe simulation efficiency, the coefficient of determination and percent bias. These evaluations demonstrated that, even though the model prediction power is not high according to the commonly used model performance criteria, the calibrated model may provide useful information in the comparison of the…
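
    The three commonly used criteria named above are straightforward to compute; the following sketch evaluates Nash-Sutcliffe efficiency, the coefficient of determination and percent bias for hypothetical observed/simulated series (not the Lake Mogan data).

        # Common hydrological model performance criteria (data hypothetical).
        import numpy as np

        obs = np.array([3.1, 4.0, 2.5, 5.2, 4.4, 3.8])
        sim = np.array([2.8, 4.3, 2.9, 4.6, 4.9, 3.5])

        nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
        r2 = np.corrcoef(obs, sim)[0, 1] ** 2
        pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)

        print(f"NSE = {nse:.3f}, R^2 = {r2:.3f}, PBIAS = {pbias:.1f}%")
        # NSE near 1 and PBIAS near 0 indicate a good fit; NSE <= 0 means the
        # model predicts no better than the observed mean.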

  3. The Baryon Number Two System in the Chiral Soliton Model

    International Nuclear Information System (INIS)

    Mantovani-Sarti, V.; Drago, A.; Vento, V.; Park, B.-Y.

    2013-01-01

    We study the interaction between two B = 1 states in a chiral soliton model where baryons are described as non-topological solitons. By using the hedgehog solution for the B = 1 states, we construct three possible B = 2 configurations to analyze the role of the relative orientation of the hedgehog quills in the dynamics. The strong dependence of the inter-soliton interaction on these relative orientations reveals that studies of dense hadronic matter using this model should take their implications into account. (author)

  4. Modeling of dynamically loaded hydrodynamic bearings at low Sommerfeld numbers

    DEFF Research Database (Denmark)

    Thomsen, Kim

    Current state of the art within the wind industry dictates the use of conventional rolling element bearings for main bearings. As wind turbine generators increase in size and output, so does the size of the main bearings and accordingly also the cost and potential risk of failure modes. The cost and failure risk of rolling element bearings do, however, grow exponentially with the size. Therefore hydrodynamic bearings can prove to be a competitive alternative to the current practice of rolling element bearings and ultimately help reduce the cost and carbon footprint of renewable energy generation. The challenging main bearing operating conditions in a wind turbine pose a demanding development task for the design of a hydrodynamic bearing. In general these conditions include operation at low Reynolds numbers with frequent starts and stops at high loads, as well as difficult operating conditions dictated…

  5. Flow through collapsible tubes at low Reynolds numbers. Applicability of the waterfall model.

    Science.gov (United States)

    Lyon, C K; Scott, J B; Wang, C Y

    1980-07-01

    The applicability of the waterfall model was tested using the Starling resistor and fluids of different viscosities to vary the Reynolds number. The waterfall model proved adequate to describe flow in the Starling resistor model only at very low Reynolds numbers (Reynolds number less than 1). Blood flow characterized by such low Reynolds numbers occurs only in the microvasculature. Thus, it is inappropriate to apply the waterfall model indiscriminately to flow through large collapsible veins.
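
    A minimal sketch of the waterfall flow law being tested, together with the Reynolds number used to delimit its validity; the parameters are illustrative, not the Starling resistor settings from the experiment.

        # Waterfall model of a collapsible tube: when the tube is partially
        # collapsed, flow depends on upstream minus external pressure and is
        # independent of downstream pressure (illustrative parameters, SI units).
        def waterfall_flow(p_up, p_down, p_ext, resistance):
            if p_down < p_ext:                     # tube collapses at the outlet
                return max(p_up - p_ext, 0.0) / resistance
            return (p_up - p_down) / resistance    # ordinary resistive regime

        def reynolds(density, velocity, diameter, viscosity):
            return density * velocity * diameter / viscosity

        # The study found the waterfall description to hold only for Re < 1:
        print(waterfall_flow(p_up=2000.0, p_down=400.0, p_ext=800.0, resistance=1e6))
        print(reynolds(density=1050.0, velocity=1e-3, diameter=5e-4, viscosity=3.5e-3))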

  6. Minimum number and best combinations of harvests to evaluate accessions of tomato plants from germplasm banks

    Directory of Open Access Journals (Sweden)

    Flávia Barbosa Abreu

    2006-01-01

    Full Text Available This study presents the minimum number and the best combination of tomato harvests needed to compare tomato accessions from germplasm banks. The number and weight of fruit in tomato plants are important as auxiliary traits in the evaluation of germplasm banks and should be studied simultaneously with other desirable characteristics such as pest and disease resistance, improved flavor and early production. Brazilian tomato breeding programs should consider not only the number of fruit but also fruit size, because Brazilian consumers value fruit that are homogeneous, large and heavy. Our experiment was a randomized block design with three replicates of 32 tomato accessions from the Vegetable Germplasm Bank (Banco de Germoplasma de Hortaliças) at the Federal University of Viçosa, Minas Gerais, Brazil, plus two control cultivars (Debora Plus and Santa Clara). Nine harvests were evaluated for four production-related traits. The results indicate that six successive harvests are sufficient to compare tomato genotypes and germplasm bank accessions. Evaluation of genotypes according to the number of fruit requires analysis from the second to the seventh harvest. Evaluation of fruit weight by genotype requires analysis from the fourth to the ninth harvest. Evaluation of both number and weight of fruit requires analysis from the second to the ninth harvest.

  7. Evaluation Methodology. The Evaluation Exchange. Volume 11, Number 2, Summer 2005

    Science.gov (United States)

    Coffman, Julia, Ed.

    2005-01-01

    This is the third issue of "The Evaluation Exchange" devoted entirely to the theme of methodology, though every issue tries to identify new methodological choices, the instructive ways in which people have applied or combined different methods, and emerging methodological trends. For example, lately "theories of change" have gained almost…

  8. Number of Clusters and the Quality of Hybrid Predictive Models in Analytical CRM

    Directory of Open Access Journals (Sweden)

    Łapczyński Mariusz

    2014-08-01

    Full Text Available Making more accurate marketing decisions requires managers to build effective predictive models. Typically, these models specify the probability of a customer belonging to a particular category, group or segment. The analytical CRM categories refer to customers interested in starting cooperation with the company (acquisition models), customers who purchase additional products (cross- and up-sell models) or customers intending to end the cooperation (churn models). When building predictive models, researchers use analytical tools from various disciplines with an emphasis on their best performance. This article attempts to build a hybrid predictive model combining decision trees (the C&RT algorithm) and cluster analysis (k-means). During the experiments, five different cluster validity indices and eight datasets were used. The performance of the models was evaluated using popular measures such as accuracy, precision, recall, G-mean, F-measure, and lift in the first and second deciles. The authors tried to find a connection between the number of clusters and model quality.
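
    A minimal scikit-learn sketch of this hybrid scheme: cluster membership from k-means is appended as an extra predictor for a CART-style tree (scikit-learn's DecisionTreeClassifier stands in for C&RT). The data are synthetic, and in the article the number of clusters would be chosen with a validity index.

        # Hybrid predictive model: k-means cluster labels as an extra feature
        # for a decision tree (synthetic data; k fixed for brevity).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import precision_score, recall_score, f1_score

        X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_tr)
        X_tr_h = np.column_stack([X_tr, km.labels_])        # append cluster id
        X_te_h = np.column_stack([X_te, km.predict(X_te)])

        tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr_h, y_tr)
        pred = tree.predict(X_te_h)
        print(precision_score(y_te, pred), recall_score(y_te, pred), f1_score(y_te, pred))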

  9. Evaluation of bispectrum in the wave number domain based on multi-point measurements

    Directory of Open Access Journals (Sweden)

    Y. Narita

    2008-10-01

    Full Text Available We present an estimator of the bispectrum, a measure of three-wave couplings. It is evaluated directly in the wave number domain using a limited number of detectors. The performance of the bispectrum estimator is examined numerically, and the estimator is then applied to fluctuations of the magnetic field and electron density in the terrestrial foreshock region observed by the four Cluster spacecraft, which indicates the presence of a three-wave coupling in space plasma.
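
    The abstract does not give the estimator's formula; the standard segment-averaged bispectrum estimate B(f1, f2) = <X(f1)X(f2)X*(f1+f2)> that such methods build on can be sketched as follows (single time series; the article's estimator additionally projects onto wave numbers using multi-point data).

        # Segment-averaged bispectrum of a single time series (illustrative).
        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.standard_normal(64 * 128)        # synthetic signal
        segs = x.reshape(128, 64)                # 128 segments of 64 samples
        F = np.fft.rfft(segs * np.hanning(64), axis=1)

        n = F.shape[1]
        B = np.zeros((n // 2, n // 2), dtype=complex)
        for i in range(n // 2):
            for j in range(n // 2):
                if i + j < n:
                    B[i, j] = np.mean(F[:, i] * F[:, j] * np.conj(F[:, i + j]))
        # |B[i, j]| is large when waves at f_i, f_j and f_i+f_j are phase-coupled.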

  10. Testing a model of componential processing of multi-symbol numbers-evidence from measurement units.

    Science.gov (United States)

    Huber, Stefan; Bahnmueller, Julia; Klein, Elise; Moeller, Korbinian

    2015-10-01

    Research on numerical cognition has addressed the processing of nonsymbolic quantities and symbolic digits extensively. However, magnitude processing of measurement units is still a neglected topic in numerical cognition research. Hence, we investigated the processing of measurement units to evaluate whether typical effects of multi-digit number processing such as the compatibility effect, the string length congruity effect, and the distance effect are also present for measurement units. In three experiments, participants had to single out the larger one of two physical quantities (e.g., lengths). In Experiment 1, the compatibility of number and measurement unit (compatible: 3 mm_6 cm, where 3 < 6 and mm < cm) as well as string length congruity (congruent: 1 m_2 km, where the larger quantity is written with more characters) were manipulated. We observed reliable compatibility effects with prolonged reaction times (RT) for incompatible trials. Moreover, a string length congruity effect was present in RT, with longer RT for incongruent trials. Experiments 2 and 3 served as control experiments showing that compatibility effects persist when controlling for holistic distance and that a distance effect exists for measurement units. Our findings indicate that numbers and measurement units are processed in a componential manner and thus highlight that processing characteristics of multi-digit numbers generalize to measurement units. Thereby, our data lend further support to the recently proposed generalized model of componential multi-symbol number processing.

  11. Recommendations and illustrations for the evaluation of photonic random number generators

    Science.gov (United States)

    Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi

    2017-09-01

    The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.

  12. Recommendations and illustrations for the evaluation of photonic random number generators

    Directory of Open Access Journals (Sweden)

    Joseph D. Hart

    2017-09-01

    Full Text Available The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
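
    A simplified (ε, τ) entropy-rate estimate in the spirit of Cohen-Procaccia can be sketched as follows: coarse-grain the series with resolution ε, sample every τ steps, and difference block entropies of successive word lengths. This is an illustrative reading of the definition, not the authors' implementation.

        # Simplified (eps, tau) entropy-rate estimate (illustrative; finite-sample
        # bias grows quickly with word length m).
        import numpy as np
        from collections import Counter

        def entropy_rate(x, eps, tau, m):
            s = np.floor(x[::tau] / eps).astype(int)       # partition cells
            def block_entropy(k):
                words = Counter(tuple(s[i:i + k]) for i in range(len(s) - k + 1))
                p = np.array(list(words.values()), dtype=float)
                p /= p.sum()
                return -np.sum(p * np.log2(p))
            return (block_entropy(m + 1) - block_entropy(m)) / tau  # bits/sample

        x = np.random.default_rng(3).standard_normal(200_000)
        print(entropy_rate(x, eps=0.5, tau=1, m=3))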

  13. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    Science.gov (United States)

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
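
    The contrast between the two modeling choices can be made concrete with a small Monte Carlo sketch: a drastic inactivation step followed by growth, applied once to integer cell numbers and once to a continuous concentration (all parameters hypothetical).

        # Numbers vs. concentration for one serving, through kill-then-growth.
        import numpy as np

        rng = np.random.default_rng(4)
        n_sim, c0, volume = 100_000, 2.0, 25.0   # cells/g, serving size in g
        log_kill, growth = 4.0, 3.0              # 4-log inactivation, 3-log growth
        r = 1e-3                                 # exponential dose-response parameter

        # (1) Number-based: discrete cells survive inactivation by binomial thinning.
        n0 = rng.poisson(c0 * volume, n_sim)
        survivors = rng.binomial(n0, 10.0 ** -log_kill)
        dose_num = survivors * 10.0 ** growth    # growth multiplies survivors
        risk_num = np.mean(1.0 - np.exp(-r * dose_num))

        # (2) Concentration-based: the same steps applied to a continuous level.
        dose_conc = c0 * volume * 10.0 ** (growth - log_kill)
        risk_conc = 1.0 - np.exp(-r * dose_conc)

        print(f"number-based {risk_num:.2e} vs concentration-based {risk_conc:.2e}")
        # With drastic kill-then-growth scenarios, many servings end up with zero
        # cells, so the concentration model overstates the average risk.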

  14. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent…

  15. Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies: Evaluation Number 18

    Science.gov (United States)

    Burkholder, J. B.; Sander, S. P.; Abbatt, J. P. D.; Barker, J. R.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Orkin, V. L.; Wilmouth, D. M.; Wine, P. H.

    2015-01-01

    This is the eighteenth in a series of evaluated sets of rate constants, photochemical cross sections, heterogeneous parameters, and thermochemical parameters compiled by the NASA Panel for Data Evaluation. The data are used primarily to model stratospheric and upper tropospheric processes, with particular emphasis on the ozone layer and its possible perturbation by anthropogenic and natural phenomena. The evaluation is available in electronic form from the following Internet URL: http://jpldataeval.jpl.nasa.gov/

  16. Two Ranking Methods of Single Valued Triangular Neutrosophic Numbers to Rank and Evaluate Information Systems Quality

    Directory of Open Access Journals (Sweden)

    Samah Ibrahim Abdel Aal

    2018-03-01

    Full Text Available The concept of neutrosophic sets provides a generalization of fuzzy sets and intuitionistic fuzzy sets that makes them the best fit for representing indeterminacy and uncertainty. Single Valued Triangular Neutrosophic Numbers (SVTrN-numbers) are a special case of neutrosophic sets that can handle ill-known quantities in very difficult problems. This work introduces a framework with two types of ranking methods. The results indicated that each ranking method has its own advantages. In this perspective, the weighted value and ambiguity based method gives more attention to uncertainty in ranking and evaluating ISQ, and it takes into account cut sets of SVTrN-numbers, which can reflect information on the truth-membership degree, falsity-membership degree and indeterminacy-membership degree. The value index and ambiguity index method can reflect the decision maker's subjective attitude to the SVTrN-numbers.

  17. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new…

  18. Scoping review identifies significant number of knowledge translation theories, models and frameworks with limited use.

    Science.gov (United States)

    Strifler, Lisa; Cardoso, Roberta; McGowan, Jessie; Cogo, Elise; Nincic, Vera; Khan, Paul A; Scott, Alistair; Ghassemi, Marco; MacDonald, Heather; Lai, Yonda; Treister, Victoria; Tricco, Andrea C; Straus, Sharon E

    2018-04-13

    To conduct a scoping review of knowledge translation (KT) theories, models and frameworks that have been used to guide dissemination or implementation of evidence-based interventions targeted to prevention and/or management of cancer or other chronic diseases. We used a comprehensive multistage search process from 2000-2016, which included traditional bibliographic database searching, searching using names of theories, models and frameworks, and cited reference searching. Two reviewers independently screened the literature and abstracted data. We found 596 studies reporting on the use of 159 KT theories, models or frameworks. A majority (87%) of the identified theories, models or frameworks were used in five or fewer studies, with 60% used once. The theories, models and frameworks were most commonly used to inform planning/design, implementation and evaluation activities, and least commonly used to inform dissemination and sustainability/scalability activities. Twenty-six were used across the full implementation spectrum (from planning/design to sustainability/scalability) either within or across studies. All were used for at least individual-level behavior change, while 48% were used for organization-level, 33% for community-level and 17% for system-level change. We found a significant number of KT theories, models and frameworks with a limited evidence base describing their use. Copyright © 2018. Published by Elsevier Inc.

  19. Evaluation Model for Sentient Cities

    Directory of Open Access Journals (Sweden)

    Mª Florencia Fergnani Brion

    2016-11-01

Full Text Available In this article we present research on Sentient Cities and propose an assessment model for analysing whether a city is, or could potentially be considered, a Sentient City. The model can be used to evaluate the current situation of a city before introducing urban policies based on citizen participation in hybrid (physical and digital) environments. To that end, we developed evaluation grids containing the main elements that form a Sentient City and their measurement values. The Sentient City is a variation of the Smart City, also based on technological progress and innovation, but in which citizens are the principal agent. In this model, governments aim to build a participatory and sustainable system that fosters the Knowledge Society, the development of Collective Intelligence and the city's efficiency. They also expand the communication channels between the Administration and citizens. In this new context, citizens are empowered because they have the opportunity to create a Local Identity and transform their surroundings through open and horizontal initiatives.

  20. The Spiral-Interactive Program Evaluation Model.

    Science.gov (United States)

    Khaleel, Ibrahim Adamu

    1988-01-01

    Describes the spiral interactive program evaluation model, which is designed to evaluate vocational-technical education programs in secondary schools in Nigeria. Program evaluation is defined; utility oriented and process oriented models for evaluation are described; and internal and external evaluative factors and variables that define each…

  1. Evaluation of Related Risk Factors in Number of Musculoskeletal Disorders Among Carpet Weavers in Iran.

    Science.gov (United States)

    Karimi, Nasim; Moghimbeigi, Abbas; Motamedzade, Majid; Roshanaei, Ghodratollah

    2016-12-01

Musculoskeletal disorders (MSDs) are a common problem among carpet weavers. This study was undertaken to identify the personal and occupational factors that affect the number of MSDs among carpet weavers. A cross-sectional study was performed among 862 weavers in seven towns, with workshops located in both urban and rural regions. Data were collected using questionnaires covering personal, workplace and tool-related information, together with the modified Nordic MSDs questionnaire. Statistical analysis was performed by applying Poisson and negative binomial mixed models using a full Bayesian hierarchical approach. The deviance information criterion (DIC) was used for comparison between models and model selection. The majority of weavers (72%) were female, and carpet weaving was the main job of 85.2% of workers. The negative binomial mixed model with the lowest DIC was selected as the best model, and the diagnostic criteria showed convergence of the chains. Based on 95% Bayesian credible intervals, the main job and weaving type variables statistically affected the number of MSDs, whereas age, sex, weaving comb, work experience and type of carpet weaving loom were not significant. According to the results of this study, it can be concluded that occupational factors are associated with the number of MSDs developing among carpet weavers. Thus, using standard tools and decreasing hours of work per day can reduce the frequency of MSDs among carpet weavers.
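As an illustration of the count-model comparison described above, the hedged sketch below fits simple (non-hierarchical) Poisson and negative binomial regressions to synthetic overdispersed data with statsmodels and compares them by AIC; the study itself used Bayesian hierarchical mixed models selected by DIC, and all variable names and data here are hypothetical.

```python
# Hedged sketch: Poisson vs negative binomial count models for numbers of
# MSDs, a frequentist analogue of the paper's DIC-based model comparison.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "main_job": rng.integers(0, 2, n),       # 1 = weaving is the main job
    "weaving_type": rng.integers(0, 2, n),
    "age": rng.normal(40, 10, n),
})
lam = np.exp(0.2 + 0.5 * df.main_job + 0.4 * df.weaving_type)
df["n_msds"] = rng.poisson(lam * rng.gamma(2.0, 0.5, n))  # overdispersed counts

poisson = smf.glm("n_msds ~ main_job + weaving_type + age",
                  data=df, family=sm.families.Poisson()).fit()
negbin = smf.glm("n_msds ~ main_job + weaving_type + age",
                 data=df, family=sm.families.NegativeBinomial()).fit()

# Overdispersion (variance > mean) favours the negative binomial model,
# mirroring the paper's selection of the NB mixed model by DIC.
print("Poisson AIC:", round(poisson.aic, 1), "NegBin AIC:", round(negbin.aic, 1))
```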

  2. Variability and relationship among Mixolab and Falling Number evaluation based on influence of fungal α-amylase addition.

    Science.gov (United States)

    Codina, Georgiana Gabriela; Mironeasa, Silvia; Mironeasa, Costel

    2012-08-15

In bread-making technology, α-amylase activity is routinely measured with a Falling Number device to predict wheat flour quality. The aim of this study was to determine the possibility of using Mixolab parameters to assess the Falling Number (FN) index. The effects of different doses of fungal α-amylase addition on the Mixolab characteristics and FN index values were investigated. Principal component analysis was performed in order to illustrate the relationships between the Mixolab parameters and the FN index. To highlight the linear combination between the FN index values and the Mixolab parameters used to evaluate starch pasting properties (C3, C4, C5 and point differences C34 and C54), a multivariate prediction model was developed. Greatest precision (R = 0.728) was obtained for the linear regression FN = f(C4, C54) model. This model was tested on a different sample set than the one on which it was built. A high correlation was obtained between predicted and measured FN index values (r = 0.896, P = 0.01). The model provides a framework to predict the evolution of the FN index from the torque for cooking stability (C4) and the difference between points C5 and C4 (C54). The obtained results suggested that the Mixolab device could be a reliable instrument for evaluating FN index values. Copyright © 2012 Society of Chemical Industry.
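The model form FN = f(C4, C54) is a two-predictor linear regression, so it can be sketched directly with ordinary least squares. The Mixolab torques and Falling Number readings below are hypothetical stand-ins, not the paper's data.

```python
# Hedged sketch: fitting FN = b0 + b1*C4 + b2*C54 by ordinary least squares,
# the model form for which the paper reports R = 0.728. Data are invented.
import numpy as np

C4  = np.array([1.62, 1.75, 1.58, 1.90, 1.70, 1.83])   # cooking stability torque
C54 = np.array([0.95, 1.10, 0.88, 1.25, 1.02, 1.18])   # C5 - C4
FN  = np.array([320., 390., 300., 450., 360., 420.])   # Falling Number (s)

X = np.column_stack([np.ones_like(C4), C4, C54])
beta, *_ = np.linalg.lstsq(X, FN, rcond=None)
print("FN ~ %.1f + %.1f*C4 + %.1f*C54" % tuple(beta))

# Predict FN for a new flour sample from its Mixolab curve:
c4_new, c54_new = 1.80, 1.15
print("predicted FN:", beta @ np.array([1.0, c4_new, c54_new]))
```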

  3. Subscriber Number Forecasting Tool Based on Subscriber Attribute Distribution for Evaluating Improvement Strategies

    OpenAIRE

    Hiramatsu, Ayako; Shono, Yuji; Oiso, Hiroaki; Komoda, Norihisa

    2005-01-01

In this paper, a subscriber number forecasting tool that evaluates quiz game mobile content improvement strategies is developed. Unsubscription rates depend on subscriber attributes such as consecutive months of subscription, stages, rankings, and so on. In addition, content providers can anticipate the change in unsubscription rates for each content improvement strategy. However, subscriber attributes change dynamically. Therefore, a method that deals with dynamic subscriber attribute changes is proposed. ...

  4. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    Science.gov (United States)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples.
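A hedged one-dimensional sketch of the spectral-representation step: a polynomial chaos surrogate fitted by regression on a small number of model runs. Legendre polynomials are used here (orthogonal for a uniform input on [-1, 1]), and the forward model is a smooth toy function standing in for one side of the discontinuity.

```python
# Hedged sketch: a 1D polynomial chaos (PC) surrogate fitted by regression.
# The paper builds separate PC expansions on each side of the discontinuity;
# here we fit one expansion to a smooth toy response on one side.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

def forward_model(x):
    # Hypothetical smooth response (stand-in for the climate model output).
    return np.exp(0.8 * x) + 0.1 * np.sin(4 * x)

# A limited number of model runs, as in the sparse-data setting.
x_train = rng.uniform(-1.0, 1.0, size=12)
y_train = forward_model(x_train)

order = 5
V = legendre.legvander(x_train, order)            # Legendre design matrix
coeffs, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# Evaluate the PC surrogate and check it against the true model.
x_test = np.linspace(-1, 1, 5)
y_pc = legendre.legval(x_test, coeffs)
print("max surrogate error:", np.max(np.abs(y_pc - forward_model(x_test))))
```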

  5. Simulation of a directed random-walk model: the effect of pseudo-random-number correlations

    OpenAIRE

    Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.

    1996-01-01

    We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.

  6. Endoscopic evaluation of food bolus formation and its relationship with the number of chewing cycles.

    Science.gov (United States)

    Fukatsu, H; Nohara, K; Kotani, Y; Tanaka, N; Matsuno, K; Sakai, T

    2015-08-01

It is known that solid food is transported to the pharynx actively, in parallel with being crushed by chewing and mixed with saliva in the oral cavity. Therefore, food bolus formation should be considered to take place from the oral cavity to the pharynx. In previous studies, the chewed food was evaluated after the food had been removed from the oral cavity. However, it has been pointed out that spitting food out of the oral cavity interferes with natural food bolus formation. Therefore, we observed food boluses immediately before swallowing using an endoscope to establish a method to evaluate the food bolus-forming function, and simultaneously performed endoscopic evaluation of food bolus formation and its relationship with the number of chewing cycles. The endoscope was inserted nasally, and the subject was instructed to eat two coloured samples of boiled rice simultaneously under two ingestion conditions ('as usual' and 'chewing well'). The condition of the food bolus was graded into three categories (scored 2, 1 or 0) for each of grinding, mixing and aggregation. The score of aggregation was high under both ingestion conditions. The scores of grinding and mixing tended to be higher in subjects with a high number of chewing cycles, and the score of aggregation was high regardless of the number of chewing cycles. It was suggested that, for a food bolus to reach the swallowing threshold, the food has to be aggregated, even when the number of chewing cycles is low and the food is not well ground or mixed. © 2015 John Wiley & Sons Ltd.

  7. Maintenance personnel performance simulation (MAPPS) model: overview and evaluation efforts

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.; Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Ryan, T.G.

    1984-01-01

The development of the MAPPS model has been completed and the model is currently undergoing evaluation. These efforts are addressing a number of identified issues concerning practicality, acceptability, usefulness, and validity. Preliminary analysis of the evaluation data that have been collected indicates that MAPPS will provide comprehensive and reliable data for PRA purposes and for a number of other applications. The MAPPS computer simulation model provides the user with a sophisticated tool for gaining insights into tasks performed by NPP maintenance personnel. Its wide variety of input parameters and output data makes it extremely flexible for application to a number of diverse problems. With the demonstration of favorable model evaluation results, the MAPPS model will represent a valuable source of NPP maintainer reliability data and provide PRA studies with a source of data on maintainers that has not previously existed

  8. Evaluating Predictive Models of Software Quality

    Science.gov (United States)

    Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.

    2014-06-01

Applications from the High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we concluded by suggesting directions for further studies.

  9. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

Applications from the High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we concluded by suggesting directions for further studies.

  10. Interpretation of quarks having fractional quantum numbers as structural quasi-particles by means of the composite model with integral quantum numbers

    International Nuclear Information System (INIS)

    Tyapkin, A.A.

    1976-01-01

The problem of interpreting quarks with fractional quantum numbers as structural quasi-particles is raised. A new composite model is proposed on the basis of the fundamental triplet representation of fermions having integral quantum numbers

  11. An evaluation of Tsyganenko magnetic field model

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1991-01-01

A long-standing goal of magnetospheric physics has been to produce a model of the Earth's magnetic field that can accurately predict the field vector at all locations within the magnetosphere for all dipole tilt angles and for various solar wind or magnetic activity conditions. A number of models make such predictions, but some only for limited spatial regions, some only for zero tilt angle, and some only for average conditions. No models depend explicitly on solar wind conditions. A data set of more than 22,000 vector averages of the magnetosphere magnetic field over 0.5 R E regions is used to evaluate Tsyganenko's 1982 and 1987 magnetospheric magnetic field models. The magnetic field predicted by the model in various regions is compared to observations to find systematic discrepancies which future models might address. While agreement is generally good, discrepancies are noted which include: (1) a lack of adequate field line stretching in the tail and ring current regions; (2) an inability to predict weak enough fields in the polar cusps; and (3) a deficiency of Kp as a predictor of the field configuration

  12. Electrokinetic demonstration at Sandia National Laboratories: Use of transference numbers for site characterization and process evaluation

    International Nuclear Information System (INIS)

    Lindgren, E.R.; Mattson, E.D.

    1997-01-01

Electrokinetic remediation is generally an in situ method using direct current electric potentials to move ionic contaminants and/or water to collection electrodes. The method has been extensively studied for application in saturated clayey soils. Over the past few years, an electrokinetic extraction method specific for sandy, unsaturated soils has been developed and patented by Sandia National Laboratories. A RCRA RD&D-permitted demonstration of this technology for the in situ removal of chromate contamination from unsaturated soils in a former chromic acid disposal pit was operated during the summer and fall of 1996. This large-scale field test represents the first use of electrokinetics for the removal of heavy metal contamination from unsaturated soils in the United States and is part of the US EPA Superfund Innovative Technology Evaluation (SITE) Program. Guidelines for characterizing a site for electrokinetic remediation are lacking, especially for applications in unsaturated soil. The transference number of an ion is the fraction of the current carried by that ion in an electric field and represents the best measure of contaminant removal efficiency in most electrokinetic remediation processes. In this paper we compare the transference number of chromate initially present in the contaminated unsaturated soil with the transference number in the electrokinetic process effluent, to demonstrate the utility of evaluating this parameter
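The transference number itself follows from a standard definition: the fraction of the current carried by an ion, t_i = |z_i| c_i u_i / sum_j |z_j| c_j u_j. The sketch below computes it from assumed concentrations and mobilities; the values are illustrative, not site data.

```python
# Hedged sketch: transference (transport) numbers from the standard
# definition t_i = |z_i|*c_i*u_i / sum_j |z_j|*c_j*u_j. Values are
# illustrative placeholders, not measurements from the Sandia site.
def transference_numbers(ions):
    """ions: list of (name, z, c, u) with charge number z, concentration c
    (mol/m^3) and ionic mobility u (m^2/(V*s))."""
    contrib = {name: abs(z) * c * u for name, z, c, u in ions}
    total = sum(contrib.values())
    return {name: v / total for name, v in contrib.items()}

pore_water = [
    ("CrO4^2-", -2, 1.0e-1, 8.1e-8),   # chromate (illustrative numbers)
    ("Na+",     +1, 5.0e-1, 5.2e-8),
    ("Cl-",     -1, 3.0e-1, 7.9e-8),
]
# Chromate's share of the current is its removal efficiency proxy.
print(transference_numbers(pore_water))
```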

  13. Evaluation of Models of the Reading Process.

    Science.gov (United States)

    Balajthy, Ernest

    A variety of reading process models have been proposed and evaluated in reading research. Traditional approaches to model evaluation specify the workings of a system in a simplified fashion to enable organized, systematic study of the system's components. Following are several statistical methods of model evaluation: (1) empirical research on…

  14. An extension of compromise ranking method with interval numbers for the evaluation of renewable energy sources

    Directory of Open Access Journals (Sweden)

    M. Mousavi

    2014-06-01

Full Text Available Evaluating and prioritizing appropriate renewable energy sources is inevitably a complex decision process in which various pieces of information and conflicting attributes must be taken into account. For this purpose, multi-attribute decision making (MADM) methods can assist managers or decision makers in formulating renewable energy priorities by considering important objectives and attributes. In this paper, a new extension of the compromise ranking method with interval numbers, based on the similarity of the alternatives' performance to ideal solutions, is presented for the prioritization of renewable energy sources. To demonstrate the applicability of the proposed decision method, an application example is provided and the computational results are analyzed. Results illustrate that the presented method is viable in solving the evaluation and prioritization problem of renewable energy sources.
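For concreteness, the sketch below implements the crisp core of the compromise ranking (VIKOR) method that the paper extends to interval numbers: group utility S, individual regret R, and the compromise index Q. The interval-number arithmetic of the paper is omitted, and the decision matrix is hypothetical.

```python
# Hedged sketch: crisp VIKOR compromise ranking (benefit-type attributes).
# The paper's interval-number extension is not reproduced here.
import numpy as np

def vikor(X, w, v=0.5):
    f_best, f_worst = X.max(axis=0), X.min(axis=0)
    span = np.where(f_best == f_worst, 1.0, f_best - f_worst)
    d = (f_best - X) / span          # normalized distance from the ideal
    S = (w * d).sum(axis=1)          # group utility
    R = (w * d).max(axis=1)          # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min() + 1e-12) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12)
    return S, R, Q                   # rank alternatives by ascending Q

# Hypothetical renewable-energy alternatives scored on three attributes.
X = np.array([[0.7, 0.4, 0.9],
              [0.6, 0.8, 0.5],
              [0.9, 0.5, 0.6]])
w = np.array([0.4, 0.35, 0.25])
print("compromise index Q:", vikor(X, w)[2].round(3))
```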

  15. Optimization model using Markowitz model approach for reducing the number of dengue cases in Bandung

    Science.gov (United States)

    Yong, Benny; Chin, Liem

    2017-05-01

Dengue fever is one of the most serious diseases and can cause death. Currently, Indonesia is the country with the highest number of dengue cases in Southeast Asia. Bandung is one of the cities in Indonesia that is vulnerable to dengue. The sub-districts in Bandung have different levels of relative risk of dengue. Dengue is transmitted to people by the bite of an Aedes aegypti mosquito infected with a dengue virus. Prevention of dengue relies on controlling the vector mosquito, which can be done by various methods, one of which is fogging. The fogging efforts made by the Health Department of Bandung are constrained by limited funds, forcing the Health Department to be selective and fog only certain locations. As a result, many sub-districts are not handled properly because of the unequal distribution of activities to prevent the spread of dengue. Thus, a proper allocation of funds to each sub-district in Bandung is needed to prevent dengue transmission optimally. In this research, an optimization model using the Markowitz model approach is applied to determine the allocation of funds that should be given to each sub-district in Bandung. Some constraints are added to this model, and the numerical solution is obtained with the generalized reduced gradient method using Solver software. The expected result of this research is a proportion of funds for each sub-district in Bandung that corresponds to its level of dengue risk, so that the number of dengue cases in the city can be reduced significantly.
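A hedged sketch of the Markowitz-style allocation idea: minimise the variance of the outcome subject to a budget constraint and a required overall effect, here reinterpreted as sharing a fogging budget across sub-districts. All numbers are illustrative, and scipy's SLSQP stands in for the Solver-based generalized reduced gradient method used in the paper.

```python
# Hedged sketch: Markowitz-style budget allocation across sub-districts.
# Effects and their covariance are invented placeholders.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.8, 0.5, 0.3])        # expected risk reduction per unit fund
Sigma = np.diag([0.04, 0.02, 0.01])   # uncertainty (covariance) of the effect
target = 0.5                          # required overall risk reduction

def variance(w):
    return w @ Sigma @ w

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},       # spend the budget
        {"type": "ineq", "fun": lambda w: w @ mu - target}]   # meet the effect
bounds = [(0.0, 1.0)] * len(mu)
res = minimize(variance, x0=np.full(len(mu), 1 / 3),
               bounds=bounds, constraints=cons, method="SLSQP")
print("fund shares per sub-district:", res.x.round(3))
```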

  16. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  17. The relationship between trading volumes, number of transactions, and stock volatility in GARCH models

    Science.gov (United States)

    Takaishi, Tetsuya; Chen, Ting Ting

    2016-08-01

We examine the relationship between trading volumes, the number of transactions, and volatility using daily stock data from the Tokyo Stock Exchange. Following the mixture of distributions hypothesis, we use trading volumes and the number of transactions as proxies for the rate of information arrivals affecting stock volatility. The impact of trading volumes or the number of transactions on volatility is measured using the generalized autoregressive conditional heteroscedasticity (GARCH) model. We find that the GARCH effect, that is, the persistence of volatility, is not always removed by adding trading volumes or the number of transactions, indicating that trading volumes and the number of transactions do not adequately represent the rate of information arrivals.
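The variance specification described above can be sketched as a GARCH(1,1) model with volume entering the conditional variance, estimated by maximum likelihood. The data below are simulated placeholders; with real returns, a large surviving beta after adding the volume term corresponds to the paper's finding that the GARCH effect persists.

```python
# Hedged sketch: GARCH(1,1) with an exogenous volume term in the variance,
# sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1} + gamma*V_t,
# estimated by (quasi-)maximum likelihood on simulated stand-in data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T = 1000
vol = rng.lognormal(0.0, 0.3, T)        # stand-in for a volume proxy
r = rng.standard_normal(T) * 0.01       # stand-in for daily returns

def neg_loglik(params, r, vol):
    omega, alpha, beta, gamma = params
    s2 = np.empty_like(r)
    s2[0] = r.var()
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1] + gamma * vol[t]
    s2 = np.maximum(s2, 1e-12)          # guard against invalid variances
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + r ** 2 / s2)

res = minimize(neg_loglik, x0=[1e-5, 0.05, 0.9, 1e-6],
               args=(r, vol), method="Nelder-Mead")
omega, alpha, beta, gamma = res.x
print("persistence alpha+beta:", alpha + beta, " volume coeff:", gamma)
```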

  18. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM)

  19. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  20. Arbitrary Chern number generation in the three-band model from momentum space

    International Nuclear Information System (INIS)

    Lee, Soo-Yong; Go, Gyungchoon; Han, Jung Hoon; Park, Jin-Hong

    2015-01-01

A simple, general rule for generating a three-band model with arbitrary Chern numbers is given. The rule is based on the idea of monopole charge-changing unitary operations and can be realized by two types of simple unitary operations on the original Hamiltonian. A pair of monopole charges is required to produce the desired topological numbers in the three-band model. The set of rules presented here offers a way to produce lattice models of any desired Chern numbers for three-sublattice situations. (author)

  1. A Good Foundation for Number Learning for Five-Year-Olds? An Evaluation of the English Early Learning "Numbers" Goal in the Light of Research

    Science.gov (United States)

    Gifford, Sue

    2014-01-01

    This article sets out to evaluate the English Early Years Foundation Stage Goal for Numbers, in relation to research evidence. The Goal, which sets out to provide "a good foundation in mathematics", has greater breadth of content and higher levels of difficulty than previous versions. Research suggests that the additional expectations…

  2. An Instructional Model for Teaching Proof Writing in the Number Theory Classroom

    Science.gov (United States)

    Schabel, Carmen

    2005-01-01

    I discuss an instructional model that I have used in my number theory classes. Facets of the model include using small group work and whole class discussion, having students generate examples and counterexamples, and giving students the opportunity to write proofs and make conjectures in class. The model is designed to actively engage students in…

  3. Navigating the complexities of qualitative comparative analysis: case numbers, necessity relations, and model ambiguities.

    Science.gov (United States)

    Thiem, Alrik

    2014-12-01

    In recent years, the method of Qualitative Comparative Analysis (QCA) has been enjoying increasing levels of popularity in evaluation and directly neighboring fields. Its holistic approach to causal data analysis resonates with researchers whose theories posit complex conjunctions of conditions and events. However, due to QCA's relative immaturity, some of its technicalities and objectives have not yet been well understood. In this article, I seek to raise awareness of six pitfalls of employing QCA with regard to the following three central aspects: case numbers, necessity relations, and model ambiguities. Most importantly, I argue that case numbers are irrelevant to the methodological choice of QCA or any of its variants, that necessity is not as simple a concept as it has been suggested by many methodologists, and that doubt must be cast on the determinacy of virtually all results presented in past QCA research. By means of empirical examples from published articles, I explain the background of these pitfalls and introduce appropriate procedures, partly with reference to current software, that help avoid them. QCA carries great potential for scholars in evaluation and directly neighboring areas interested in the analysis of complex dependencies in configurational data. If users beware of the pitfalls introduced in this article, and if they avoid mechanistic adherence to doubtful "standards of good practice" at this stage of development, then research with QCA will gain in quality, as a result of which a more solid foundation for cumulative knowledge generation and well-informed policy decisions will also be created. © The Author(s) 2014.

  4. Refined open intersection numbers and the Kontsevich-Penner matrix model

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrov, Alexander [Center for Geometry and Physics, Institute for Basic Science (IBS),Pohang 37673 (Korea, Republic of); Centre de Recherches Mathématiques (CRM), Université de Montréal,Montréal (Canada); Department of Mathematics and Statistics, Concordia University,Montréal (Canada); Institute for Theoretical and Experimental Physics (ITEP),Moscow (Russian Federation); Buryak, Alexandr [Department of Mathematics, ETH Zurich, Zurich (Switzerland); Tessler, Ran J. [Institute for Theoretical Studies, ETH Zurich,Zurich (Switzerland)

    2017-03-23

    A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. An evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.

  5. Refined open intersection numbers and the Kontsevich-Penner matrix model

    International Nuclear Information System (INIS)

    Alexandrov, Alexander; Buryak, Alexandr; Tessler, Ran J.

    2017-01-01

    A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. An evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.

  6. Baryon-number generation in supersymmetric unified models: the effect of supermassive fermions

    International Nuclear Information System (INIS)

    Kolb, E.W.; Raby, S.

    1983-01-01

In supersymmetric unified models, baryon-number-violating reactions may be mediated by supermassive fermions in addition to the usual supermassive bosons. The effective low-energy baryon-number-violating cross section for fermion-mediated reactions is σ_ΔB ≈ g⁴/m², where g is a coupling constant and m is the supermassive fermion mass, as opposed to σ_ΔB ≈ g⁴s/m⁴ for scalar- or vector-mediated reactions (√s is the center-of-mass energy). Since the fermion-mediated cross section is larger at low energy, it is more effective at damping the baryon number produced in decay of the supermassive particles. In this paper we calculate baryon-number generation in models with fermion-mediated baryon-number-violating reactions, and discuss implications for supersymmetric model building

  7. [Decision modeling for economic evaluation of health technologies].

    Science.gov (United States)

    de Soárez, Patrícia Coelho; Soares, Marta Oliveira; Novaes, Hillegonda Maria Dutilh

    2014-10-01

Most economic evaluations that inform decision-making processes for the incorporation and financing of health technologies use decision models to assess the costs and benefits of the compared strategies. Despite the large number of economic evaluations conducted in Brazil, there is a pressing need for an in-depth methodological study of the types of decision models and their applicability in our setting. The objective of this literature review is to contribute to the knowledge and use of decision models in the national context of economic evaluations of health technologies. This article presents general definitions about models and concerns with their use; it describes the main models: decision trees, Markov chains, micro-simulation, and simulation of discrete and dynamic events; it discusses the elements involved in the choice of model; and it exemplifies the models addressed in national economic evaluation studies of diagnostic, therapeutic and preventive technologies and health programs.
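As a concrete example of the model types listed above, here is a minimal three-state Markov cohort model (Healthy, Sick, Dead) with discounted costs and QALYs. All transition probabilities, costs and utilities are illustrative assumptions, not values from any real evaluation.

```python
# Hedged sketch: a three-state Markov cohort model, the workhorse decision
# model in health economic evaluation. All inputs are invented.
import numpy as np

P = np.array([[0.90, 0.08, 0.02],     # from Healthy
              [0.00, 0.85, 0.15],     # from Sick
              [0.00, 0.00, 1.00]])    # Dead is absorbing
cost = np.array([100.0, 2000.0, 0.0]) # annual cost per state
qaly = np.array([1.00, 0.60, 0.0])    # annual utility per state

state = np.array([1.0, 0.0, 0.0])     # cohort starts healthy
total_cost = total_qaly = 0.0
discount = 0.035
for year in range(30):
    d = 1.0 / (1.0 + discount) ** year
    total_cost += d * state @ cost
    total_qaly += d * state @ qaly
    state = state @ P                 # advance the cohort one annual cycle

print(f"discounted cost: {total_cost:.0f}, QALYs: {total_qaly:.2f}")
```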

  8. Using Bar Representations as a Model for Connecting Concepts of Rational Number.

    Science.gov (United States)

    Middleton, James A.; van den Heuvel-Panhuizen, Marja; Shew, Julia A.

    1998-01-01

    Examines bar models as graphical representations of rational numbers and presents related real life problems. Concludes that, through pairing the fraction bars with ratio tables and other ways of teaching numbers, numeric strategies become connected with visual strategies that allow students with diverse ways of thinking to share their…

  9. Evaluation of Blended Wing-Body Combinations with Curved Plan Forms at Mach Numbers Up to 3.50

    Science.gov (United States)

    Holdaway, George H.; Mellenthin, Jack A.

    1960-01-01

This investigation is a continuation of the experimental and theoretical evaluation of the effects of wing plan-form variations on the aerodynamic performance characteristics of blended wing-body combinations. The present report compares previously tested straight-edged delta and arrow models which have leading-edge sweeps of 59.04 and 70.82 deg., respectively, with related models which have plan forms with curved leading and trailing edges designed to result in the same average sweeps in each case. All the models were symmetrical, without camber, and were generally similar having the same span, length, and aspect ratios. The wing sections had an average value of maximum thickness ratio of about 4 percent of the local wing chords in a streamwise direction. The wing sections were computed by varying their shapes along with the body radii (blending process) to match the selected area distribution and the given plan form. The models were tested with transition fixed at Reynolds numbers of roughly 4,000,000 to 9,000,000, based on the mean aerodynamic chord of the wing. The characteristic effect of the wing curvature of the delta and arrow models was an increase at subsonic and transonic speeds in the lift-curve slopes which was partially reflected in increased maximum lift-drag ratios. Curved edges were not evaluated on a diamond plan form because a preliminary investigation indicated that the curvature considered would increase the supersonic zero-lift wave drag. However, after the test program was completed, a suitable modification for the diamond plan form was discovered. The analysis presented in the appendix indicates that large reductions in the zero-lift wave drag would be obtained at supersonic Mach numbers if the leading- and trailing-edge sweeps are made to differ by indenting the trailing edge and extending the root of the leading edge.

  10. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  11. Supplier evaluation in manufacturing environment using compromise ranking method with grey interval numbers

    Directory of Open Access Journals (Sweden)

    Prasenjit Chatterjee

    2012-04-01

Full Text Available Evaluating a proper supplier is one of the most challenging problems for manufacturing organizations in a real-time manufacturing environment, owing to the wide variety of customer demands. Meeting the challenges of international competitiveness has become more and more complicated, as decision makers need to assess a wide range of alternative suppliers based on a set of conflicting criteria. Thus, the main objective of supplier selection is to select a highly potential supplier through which all the set goals regarding purchasing and manufacturing activity can be achieved. For these reasons, supplier selection has received considerable attention from academicians and researchers. This paper presents a combined multi-criteria decision making methodology for supplier evaluation in given industrial applications. The proposed methodology is based on a compromise ranking method combined with grey interval numbers, considering different cardinal and ordinal criteria and their relative importance. A 'supplier selection index' is also proposed to help evaluate and rank the alternative suppliers. Two examples are illustrated to demonstrate the potentiality and applicability of the proposed method.

  12. Dynamic model of cage induction motor with number of rotor bars as parameter

    Directory of Open Access Journals (Sweden)

    Gojko Joksimović

    2017-05-01

Full Text Available A dynamic mathematical model, with the number of rotor bars as a parameter, is derived for cage induction motors through the use of coupled circuits and the concept of winding functions. The exact MMF waveforms are accounted for by the model, which is derived in natural frames of reference. Given the initial motor parameters for an a priori adopted number of stator slots and rotor bars, the model allows the number of rotor bars to be changed, which results in new model parameters. During this process, the rated machine power, the number of stator slots and the stator winding scheme remain the same. Although the presented model has a potentially broad application area, it is primarily suitable for analysing the effect of different stator/rotor slot combinations on motor behaviour during transients or in the steady-state regime. The model is significant in its potential to provide analysis of dozens of different numbers of rotor bars in a few tens of minutes. A numerical example for a cage rotor induction motor illustrates this application, including three variants of the number of rotor bars.

  13. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Science.gov (United States)

    Chlumecký, Martin; Buchtele, Josef; Richta, Karel

    2017-10-01

The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists; therefore, fast and high-quality model calibration is a valuable improvement. This paper describes a novel methodology and software for the optimisation of rainfall-runoff modelling using a genetic algorithm (GA) with a newly prepared concept of a random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a quality random number generator. The new HRNG generates random numbers based on hydrological information, and it provides better numbers compared with pure software generators. The GA enhances the model calibration very well, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results we obtained indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and offers an improvement in rainfall-runoff modelling.
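The division of labour described above, a GA core with a pluggable random number generator, can be sketched as follows. The calibration objective is a hypothetical stand-in for a rainfall-runoff model's error function, and any generator exposing the numpy Generator interface (such as an HRNG wrapper) could be injected in place of default_rng.

```python
# Hedged sketch: a minimal selection-plus-mutation GA whose random number
# generator is injected, mirroring the idea of swapping in an HRNG.
import numpy as np

def calibration_error(params):
    # Hypothetical stand-in for a rainfall-runoff model's calibration error.
    return np.sum((params - np.array([0.3, 1.7, 0.9])) ** 2)

def ga(rng, pop=40, gens=60, n=3, lo=0.0, hi=2.0, mut=0.1):
    P = rng.uniform(lo, hi, (pop, n))
    for _ in range(gens):
        fit = np.apply_along_axis(calibration_error, 1, P)
        parents = P[np.argsort(fit)[: pop // 2]]          # keep the best half
        kids = parents[rng.integers(0, len(parents), pop - len(parents))]
        kids = kids + rng.normal(0.0, mut, kids.shape)    # Gaussian mutation
        P = np.clip(np.vstack([parents, kids]), lo, hi)
    return P[np.argmin(np.apply_along_axis(calibration_error, 1, P))]

# Any generator with the numpy Generator interface can be injected here;
# an HRNG would replace default_rng while the GA core stays unchanged.
print(ga(np.random.default_rng(42)))
```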

  14. Prediction Model of Interval Grey Numbers with a Real Parameter and Its Application

    Directory of Open Access Journals (Sweden)

    Bo Zeng

    2014-01-01

Full Text Available Grey prediction models have become common methods that are widely employed to solve problems with "small samples and poor information." However, the modeling objects of existing grey prediction models are limited to homogeneous data sequences that contain only one data type. This paper studies the methodology of building prediction models for interval grey number sequences containing a real parameter, which are grey heterogeneous data sequences. Firstly, the position of the real parameter in an interval grey number sequence is discussed, and the real number is expanded into an interval grey number by adopting the method of grey generation. On this basis, a prediction model of interval grey numbers with a real parameter is deduced and built. Finally, this novel model is successfully applied to forecast the concentration of the organic pollutant DDT in the atmosphere. The analysis and research results in this paper extend the object of grey prediction from homogeneous data sequences to grey heterogeneous data sequences. These research findings are of positive significance for enriching and improving the theoretical system of grey prediction models.
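As background for the interval extension, the sketch below implements the classical GM(1,1) grey prediction model for a crisp sequence, the building block that the paper generalises by forecasting interval bounds. The input series is illustrative.

```python
# Hedged sketch: classical GM(1,1) grey prediction for a real-valued series.
# The paper extends this idea to interval grey numbers (e.g. by modelling
# lower and upper bound sequences); that extension is not reproduced here.
import numpy as np

def gm11(x, horizon=3):
    x1 = np.cumsum(x)                           # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]   # develop coefficient a,
                                                      # grey input b
    k = np.arange(len(x) + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=0.0)         # inverse AGO: fit + forecast

series = np.array([2.87, 3.28, 3.34, 3.77, 3.81])
print(gm11(series))   # the last `horizon` values are the forecasts
```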

  15. Low Mach and Peclet number limit for a model of stellar tachocline and upper radiative zones

    Directory of Open Access Journals (Sweden)

    Donatella Donatelli

    2016-09-01

Full Text Available We study a hydrodynamical model describing the motion of internal stellar layers based on the compressible Navier-Stokes-Fourier-Poisson system. We suppose that the medium is electrically charged, we include energy exchanges through radiative transfer, and we assume that the system is rotating. We analyze the singular limit of this system when the Mach number, the Alfven number, the Peclet number and the Froude number approach zero in a certain way, and we prove convergence to a 3D incompressible MHD system with a stationary linear transport equation for the transport of radiation intensity. Finally, we show that the energy equation reduces to a steady equation for the temperature corrector.

  16. Baryon number fluctuations and the phase structure in the PNJL model

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Guo-yun; Tang, Zhan-duo; Gao, Xue-yan; He, Wei-bo [Xi' an Jiaotong University, School of Science, Xi' an, Shaanxi (China)

    2018-02-15

    We investigate the kurtosis and skewness of net-baryon number fluctuations in the Polyakov loop extended Nambu-Jona-Lasinio (PNJL) model, and discuss the relations between fluctuation distributions and the phase structure of quark-gluon matter. The calculation shows that the traces of chiral and deconfinement transitions can be effectively reflected by the kurtosis and skewness of net-baryon number fluctuations not only in the critical region but also in the crossover region. The contour plot of baryon number kurtosis derived in the PNJL model can qualitatively explain the behavior of net-proton number kurtosis in the STAR beam energy scan experiments. Moreover, the three-dimensional presentations of the kurtosis and skewness in this study are helpful to understand the relations between baryon number fluctuations and QCD phase structure. (orig.)
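The cumulant ratios discussed above can be illustrated generically: given a model pressure p/T^4 as a function of mu_B/T, the susceptibilities chi_n are its derivatives, and the skewness and kurtosis enter through chi3/chi2 and chi4/chi2. The pressure function below is a toy stand-in, not the PNJL expression.

```python
# Hedged sketch: net-baryon-number cumulant ratios from a model pressure via
# finite-difference derivatives, chi_n = d^n (p/T^4) / d(mu_B/T)^n.
import numpy as np
from math import comb

def pressure(mu_over_T):
    # Toy stand-in for the PNJL pressure as a function of mu_B/T.
    return 0.1 * np.cosh(mu_over_T) + 1.0

def chi(n, mu, h=1e-2):
    # n-th central finite-difference derivative of the pressure at mu.
    k = np.arange(n + 1)
    coeff = np.array([(-1) ** (n - i) * comb(n, i) for i in range(n + 1)])
    return float(np.sum(coeff * pressure(mu + (k - n / 2) * h)) / h ** n)

mu = 0.4
chi2, chi3, chi4 = chi(2, mu), chi(3, mu), chi(4, mu)
print("skewness ratio chi3/chi2:", chi3 / chi2)
print("kurtosis ratio chi4/chi2:", chi4 / chi2)
```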

  17. [Evaluation of variable number of tandem repeats (VNTR) isolates of Mycobacterium bovis in Algeria].

    Science.gov (United States)

    Sahraoui, Naima; Muller, Borna; Djamel, Yala; Fadéla, Boulahbal; Rachid, Ouzrout; Jakob, Zinsstag; Djamel, Guetarni

    2010-01-01

The discriminatory potency of variable number of tandem repeats (VNTR) typing, based on 7 loci (MIRU 26, MIRU 27 and five ETRs A, B, C, D, E), was assayed on Mycobacterium bovis strains obtained from tuberculosis samples in two slaughterhouses in Algeria. The MIRU-VNTR technique was evaluated on 88 strains of M. bovis and one strain of M. caprae and revealed 41 different profiles. Results showed that VNTR typing was highly discriminatory, with an allelic diversity of 0.930; four loci (ETR A, B, C and MIRU 27) were highly discriminatory (h > 0.25) and three loci (ETR D, ETR E and MIRU 26) were moderately discriminatory (0.11 < h < 0.25). The highly discriminatory VNTR loci appear adequate for a first proper differentiation of M. bovis strains in Algeria. The VNTR technique has proved a valuable tool for the further development and application of epidemiological research on tuberculosis transmission in Algeria.

  18. Evaluation of lymph node numbers for adequate staging of Stage II and III colon cancer

    Directory of Open Access Journals (Sweden)

    Bumpers Harvey L

    2011-05-01

Full Text Available Abstract Background Although evaluation of at least 12 lymph nodes (LNs) is recommended as the minimum number of nodes required for accurate staging of colon cancer patients, there is disagreement on what constitutes an adequate identification of such LNs. Methods To evaluate the minimum number of LNs for adequate staging of Stage II and III colon cancer, 490 patients were categorized into groups based on 1-6, 7-11, 12-19, and ≥ 20 LNs collected. Results For patients with Stage II or III disease, examination of 12 LNs was not significantly associated with recurrence or mortality. For Stage II (HR = 0.33; 95% CI, 0.12-0.91), but not for Stage III patients (HR = 1.59; 95% CI, 0.54-4.64), examination of ≥ 20 LNs was associated with a reduced risk of recurrence within 2 years. However, examination of ≥ 20 LNs had a 55% (Stage II, HR = 0.45; 95% CI, 0.23-0.87) and a 31% (Stage III, HR = 0.69; 95% CI, 0.38-1.26) decreased risk of mortality, respectively. For each six additional LNs examined in Stage III patients, there was a 19% increased probability of finding a positive LN (parameter estimate = 0.18510, p < …). Conclusions Thus, the 12 LN cut-off point cannot be supported as requisite in determining adequate staging of colon cancer based on current data. However, a minimum of 6 LNs should be examined for adequate staging of Stage II and III colon cancer patients.

  19. Model for modulated and chaotic waves in zero-Prandtl-number ...

    Indian Academy of Sciences (India)

    The effects of time-periodic forcing in a few-mode model for zero-Prandtl-number convection with rigid body rotation is investigated. The time-periodic modulation of the rotation rate about the vertical axis and gravity modulation are considered separately. In the presence of periodic variation of the rotation rate, the model ...

  20. Universal model of finite Reynolds number turbulent flow in channels and pipes

    NARCIS (Netherlands)

    L'vov, V.S.; Procaccia, I.; Rudenko, O.

    2008-01-01

In this Letter, we suggest a simple and physically transparent analytical model of pressure-driven turbulent wall-bounded flows at high but finite Reynolds numbers Re. The model provides an accurate quantitative description of the profiles of the mean velocity and Reynolds stresses (second-order statistics).

  1. Unsuppressed fermion-number violation at high temperature: An O(3) model

    International Nuclear Information System (INIS)

    Mottola, E.; Wipf, A.

    1989-01-01

    The O(3) nonlinear σ model in 1+1 dimensions, modified by an explicit symmetry-breaking term, is presented as a model for baryon- and lepton-number violation in the standard electroweak theory. Although arguments based on the Atiyah-Singer index theorem and instanton physics apply to the model, we show by explicit calculations that the rate of chiral fermion-number violation due to the axial anomaly is entirely unsuppressed at sufficiently high temperatures. Our results apply to unbroken gauge theories as well and may require reevaluation of the role of instantons in high-temperature QCD

  2. Educational game models: conceptualization and evaluation ...

    African Journals Online (AJOL)

The Game Object Model (GOM), which marries educational theory and game design, forms the basis for the development of the Persona Outlining ...

  3. Evaluation of uncertainties in femtoampere current measurement for the number concentration standard of aerosol nanoparticles

    International Nuclear Information System (INIS)

    Sakurai, Hiromu; Ehara, Kensei

    2011-01-01

    We evaluated uncertainties in current measurement by the electrometer at the current level on the order of femtoamperes. The electrometer was the one used in the Faraday-cup aerosol electrometer of the Japanese national standard for number concentration of aerosol nanoparticles in which the accuracy of the absolute current is not required, but the net current which is obtained as the difference in currents under two different conditions must be measured accurately. The evaluation was done experimentally at the current level of 20 fA, which was much smaller than the intervals between the electrometer's calibration points at +1, +0.5, −0.5 and −1 pA. The slope of the response curve for the relationship between the 'true' and measured current, which is crucial in the above measurement, was evaluated locally at many different points within the ±1 pA range for deviation from the slope determined by a linear regression of the calibration data. The sum of the current induced by a flow of charged particles and a bias current from a current-source instrument was measured by the electrometer while the particle current was toggled on and off. The net particle current was obtained as the difference in the measured currents between the toggling, while at the same time the current was estimated from the particle concentration read by a condensation particle counter. The local slope was calculated as the ratio of the measured to estimated currents at each bias current setting. The standard deviation of the local slope values observed at varied bias currents was about 0.003, which was calculated by analysis of variance (ANOVA) for the treatment of the bias current. The combined standard uncertainty of the slope, which was calculated from the uncertainty of the slope by linear regression and the variability of the slope, was calculated to be about 0.004

  4. Comparative study of measured and modelled number concentrations of nanoparticles in an urban street canyon

    DEFF Research Database (Denmark)

    Kumar, Prashant; Garmory, Andrew; Ketzel, Matthias

    2009-01-01

This study presents a comparison between measured and modelled particle number concentrations (PNCs) in the 10-300 nm size range at different heights in a street canyon. The PNCs were modelled using a simple modelling approach (a modified Box model, including vertical variation), the Operational Street Pollution Model (OSPM) and the Computational Fluid Dynamics (CFD) code FLUENT. All models disregarded any particle dynamics. CFD simulations were carried out in a simplified geometry of the selected street canyon, using four different sizes of emission sources to assess the effect of source size on mean PNC distributions in the street canyon. The measured PNCs were within a factor of two to three of those from the three models, suggesting that if the model inputs are chosen carefully, even a simplified approach can predict the PNCs as well as more complex models.

  5. The EMEFS model evaluation. An interim report

    Energy Technology Data Exchange (ETDEWEB)

    Barchet, W.R. [Pacific Northwest Lab., Richland, WA (United States); Dennis, R.L. [Environmental Protection Agency, Research Triangle Park, NC (United States); Seilkop, S.K. [Analytical Sciences, Inc., Durham, NC (United States); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. [Atmospheric Environment Service, Downsview, ON (Canada); Byun, D.; McHenry, J.N. [Computer Sciences Corp., Research Triangle Park, NC (United States); Karamchandani, P.; Venkatram, A. [ENSR Consulting and Engineering, Camarillo, CA (United States); Fung, C.; Misra, P.K. [Ontario Ministry of the Environment, Toronto, ON (Canada); Hansen, D.A. [Electric Power Research Inst., Palo Alto, CA (United States); Chang, J.S. [State Univ. of New York, Albany, NY (United States). Atmospheric Sciences Research Center

    1991-12-01

    The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.

  6. Evaluation of a decontamination model

    International Nuclear Information System (INIS)

    Rippin, D.W.T.; Hanulik, J.; Schenker, E.; Ullrich, G.

    1981-02-01

    In the scale-up of a laboratory decontamination process difficulties arise due to the limited understanding of the mechanisms controlling the process. This paper contains some initial proposals which may contribute to the quantitative understanding of the chemical and physical factors which influence decontamination operations. General features required in a mathematical model to describe a fluid-solid reaction are discussed, and initial work is presented with a simple model which has had some success in describing the observed laboratory behaviour. (Auth.)

  7. Evaluation of the Soil Conservation Service curve number methodology using data from agricultural plots

    Science.gov (United States)

    Lal, Mohan; Mishra, S. K.; Pandey, Ashish; Pandey, R. P.; Meena, P. K.; Chaudhary, Anubhav; Jha, Ranjit Kumar; Shreevastava, Ajit Kumar; Kumar, Yogendra

    2017-01-01

    The Soil Conservation Service curve number (SCS-CN) method, also known as the Natural Resources Conservation Service curve number (NRCS-CN) method, is popular for computing the volume of direct surface runoff for a given rainfall event. The performance of the SCS-CN method, based on large rainfall (P) and runoff (Q) datasets of United States watersheds, is evaluated using a large dataset of natural storm events from 27 agricultural plots in India. On the whole, the CN estimates from the National Engineering Handbook (chapter 4) tables do not match those derived from the observed P and Q datasets. As a result, runoff prediction using the former CNs was poor for the data of 22 (out of 24) plots. However, the match was a little better for higher CN values, consistent with the general notion that the existing SCS-CN method performs better for high rainfall-runoff (high CN) events. Infiltration capacity (fc) was the main explanatory variable for runoff (or CN) production in the study plots, as it exhibited the expected inverse relationship between CN and fc. The plot-data optimization yielded initial abstraction coefficient (λ) values from 0 to 0.659 for the ordered dataset and 0 to 0.208 for the natural dataset (with 0 as the most frequent value). Mean and median λ values were, respectively, 0.030 and 0 for the natural rainfall-runoff dataset and 0.108 and 0 for the ordered rainfall-runoff dataset. Runoff estimation was very sensitive to λ, and it improved consistently as λ changed from 0.2 to 0.03.
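
    The core computation evaluated in this record is compact enough to sketch. The following is a minimal sketch of the standard handbook form of the SCS-CN runoff equation, assuming depths in millimetres; the function name and sample values are illustrative. The initial abstraction coefficient lam is exposed so the study's observation (runoff estimates improving as λ moves from 0.2 towards 0.03) can be checked on any P-Q record.

      # Minimal SCS-CN direct-runoff sketch (all depths in mm).
      def scs_cn_runoff(P, CN, lam=0.2):
          """Direct surface runoff Q (mm) for storm rainfall P (mm)."""
          S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
          Ia = lam * S               # initial abstraction (mm)
          if P <= Ia:
              return 0.0
          return (P - Ia) ** 2 / (P - Ia + S)

      print(scs_cn_runoff(60.0, 75))             # conventional lambda = 0.2
      print(scs_cn_runoff(60.0, 75, lam=0.03))   # lambda favoured by the plot data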

  8. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used by finance practitioners for evaluating companies, through simple cross-section regression models that estimate the parameters associated with each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis segregating the sample companies by sector. Extrapolating the simple multiples evaluation standards of analysts at the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing-error reduction. The results found, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.

  9. The Influence of the Number of Different Stocks on the Levy-Levy-Solomon Model

    Science.gov (United States)

    Kohl, R.

    The stock market model of Levy, Levy, and Solomon is simulated for more than one stock to analyze its behavior for a large number of investors. Small markets can lead to realistic-looking prices for one or more stocks. A large number of investors leads to semi-regular price behavior when simulating one stock. With many stocks, three of the stocks are semi-regular and dominant, while the rest behave chaotically. In addition, we changed the utility function and checked the results.

  10. Statistical Modeling of the Trends Concerning the Number of Hospitals and Medical Centres in Romania

    Directory of Open Access Journals (Sweden)

    Gabriela OPAIT

    2017-04-01

    Full Text Available This study presents the technique used to derive the mathematical models that describe the distributions of the values for the number of Hospitals and, respectively, Medical Centres in our country over the time horizon 2005-2014. At the same time, it presents the algorithm applied to construct forecasts of the evolution of the number of Hospitals and Medical Centres in Romania.

  11. Spectral Elements Analysis for Viscoelastic Fluids at High Weissenberg Number Using Logarithmic Conformation Tensor Model

    Science.gov (United States)

    Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas

    2008-09-01

    This study discusses the capability of constitutive laws based on the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. High Weissenberg number problems (HWNP) usually produce a lack of convergence in numerical algorithms. Even though the question of whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model has remained somewhat of a mystery, it has been recognized that selecting an appropriate constitutive equation is a crucial step, although implementing a suitable numerical technique remains important for successful discrete modeling of non-Newtonian flows. The LCT formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied to the two-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg number. The numerical results are compared with the numerical solution of the standard constitutive equation.

  12. Empirically evaluating decision-analytic models.

    Science.gov (United States)

    Goldhaber-Fiebert, Jeremy D; Stout, Natasha K; Goldie, Sue J

    2010-08-01

    Model-based cost-effectiveness analyses support decision-making. To augment model credibility, evaluation via comparison to independent, empirical studies is recommended. We developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current model evaluation recommendations and practices. As an illustration, we applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5 to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was measured by model ranges overlapping study confidence intervals. The structured reporting format included: matching baseline characteristics and follow-up, reporting model and study uncertainty, and stating metrics of consistency for model and study results. Structured searches yielded 2963 articles with 67 meeting inclusion criteria and found variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported using the proposed format, showed a modeled cumulative risk of invasive cancer for inadequately treated women of 39.6% (30.9-49.7) at 30 years, compared with the study: 37.5% (28.4-48.3). For appropriately treated women, modeled risks were 1.0% (0.7-1.3) at 30 years, study: 1.5% (0.4-3.3). To support external and projective validity, cost-effectiveness models should be iteratively evaluated as new studies become available, with reporting standardized to facilitate assessment. Such evaluations are particularly relevant for models used to conduct comparative effectiveness analyses.

  13. Effect of hydration repulsion on nanoparticle agglomeration evaluated via a constant number Monte–Carlo simulation

    International Nuclear Information System (INIS)

    Liu, Haoyang Haven; Lanphere, Jacob; Walker, Sharon; Cohen, Yoram

    2015-01-01

    The effect of hydration repulsion on the agglomeration of nanoparticles in aqueous suspensions was investigated via the description of agglomeration by the Smoluchowski coagulation equation, using constant number Monte Carlo simulation and the classical DLVO theory extended to include the hydration repulsion energy. Evaluation of experimental DLS measurements for TiO2, CeO2, SiO2, and α-Fe2O3 (hematite) at high IS (up to 900 mM) or low |ζ-potential| (≥1.35 mV) demonstrated that the hydration repulsion energy can exceed the electrostatic repulsion energy, such that the increased overall repulsion energy can significantly lower the agglomerate diameter relative to the classical DLVO prediction. While the classical DLVO theory is reasonably applicable for agglomeration of NPs of high |ζ-potential| (above about 35 mV) in suspensions of low IS (below about 1 mM), it can overpredict agglomerate sizes by up to a factor of 5 at high IS or low |ζ-potential|. Given the potentially important role of hydration repulsion over a range of relevant conditions, there is merit in quantifying this repulsion energy over a wide range of conditions as part of the overall characterization of NP suspensions. Such information would be of relevance to an improved understanding of NP agglomeration in aqueous suspensions and its correlation with NP physicochemical and solution properties. (paper)
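
    A minimal sketch of the extended-DLVO pair energy described above, assuming textbook Derjaguin expressions for two equal spheres (van der Waals plus weak-overlap electrostatic terms) and an exponentially decaying hydration term; every numerical parameter below is illustrative, not taken from the paper.

      import numpy as np

      kT = 1.380649e-23 * 298.15   # thermal energy (J)
      R = 50e-9                    # particle radius (m), illustrative
      A_H = 1e-20                  # Hamaker constant (J), illustrative
      eps = 78.5 * 8.854e-12       # permittivity of water (F/m)
      psi = 0.015                  # surface potential (V), ~15 mV
      kappa = 1.0 / 3e-9           # inverse Debye length (1/m)
      W0 = 3e-3                    # hydration energy-per-area amplitude (J/m^2), illustrative
      lam0 = 0.6e-9                # hydration decay length (m)

      h = np.linspace(0.3e-9, 20e-9, 400)   # surface-surface separation (m)
      V_vdw = -A_H * R / (12.0 * h)                                # van der Waals
      V_edl = 2.0 * np.pi * eps * R * psi**2 * np.exp(-kappa * h)  # electrostatic
      V_hyd = np.pi * R * W0 * lam0 * np.exp(-h / lam0)            # hydration repulsion

      V_total = (V_vdw + V_edl + V_hyd) / kT   # total interaction energy in kT units
      print("repulsive barrier ~ %.1f kT" % V_total.max())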

  14. Site descriptive modelling - strategy for integrated evaluation

    International Nuclear Information System (INIS)

    Andersson, Johan

    2003-02-01

    The current document establishes the strategy to be used for achieving sufficient integration between disciplines in producing Site Descriptive Models during the Site Investigation stage. The Site Descriptive Model should be a multidisciplinary interpretation of geology, rock mechanics, thermal properties, hydrogeology, hydrogeochemistry, transport properties and ecosystems, using site investigation data from deep bore holes and from the surface as input. The modelling comprises the following iterative steps: evaluation of primary data, descriptive and quantitative modelling (in 3D), and overall confidence evaluation. Data are first evaluated within each discipline and then the evaluations are checked between the disciplines. Three-dimensional modelling (i.e. estimating the distribution of parameter values in space and its uncertainty) is made in a sequence, where the geometrical framework is taken from the geological model and in turn used by the rock mechanics, thermal and hydrogeological modelling etc. The three-dimensional description should present the parameters with their spatial variability over a relevant and specified scale, with the uncertainty included in this description. Different alternative descriptions may be required. After the individual discipline modelling and uncertainty assessment, a phase of overall confidence evaluation follows. Relevant members of the different modelling teams assess the suggested uncertainties and evaluate the feedback. These discussions should assess overall confidence by checking that all relevant data are used, checking that information in past model versions is considered, checking that the different kinds of uncertainty are addressed, checking if suggested alternatives make sense and if there is potential for additional alternatives, and by discussing, if appropriate, how additional measurements (i.e. more data) would affect confidence. The findings as well as the modelling results are to be documented in a Site Description

  15. Communicating about quantity without a language model: number devices in homesign grammar.

    Science.gov (United States)

    Coppola, Marie; Spaepen, Elizabet; Goldin-Meadow, Susan

    2013-01-01

    All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner's gesture system and, in this sense, linguistic. The number gestures produced by the homesigners' hearing communication partners displayed some, but not all, of the homesigners' linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child's number gestures displayed all of the properties found in the adult homesigners' gestures, but his mother's gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners' linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Society by Numbers : Studies on Model-Based Explanations in the Social Sciences

    OpenAIRE

    Kuorikoski, Jaakko

    2010-01-01

    The aim of this dissertation is to provide conceptual tools for the social scientist for clarifying, evaluating and comparing explanations of social phenomena based on formal mathematical models. The focus is on relatively simple theoretical models and simulations, not statistical models. These studies apply a theory of explanation according to which explanation is about tracing objective relations of dependence, knowledge of which enables answers to contrastive why and how-questions. This th...

  17. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    Science.gov (United States)

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…
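
    A toy sketch of the random walk account referenced above, assuming (purely for illustration) that evidence accumulates in unit steps with a drift that grows with the numerical distance between the digits; the threshold and drift values are invented, not the paper's fitted parameters. The sketch reproduces the distance effect: closer pairs need longer walks, i.e., slower responses.

      import random

      def mean_comparison_steps(d1, d2, threshold=10, trials=2000):
          """Mean walk length (a stand-in for RT) for comparing two digits."""
          p = 0.5 + 0.05 * abs(d1 - d2)   # drift toward the correct boundary
          total = 0
          for _ in range(trials):
              pos, steps = 0, 0
              while abs(pos) < threshold:
                  pos += 1 if random.random() < p else -1
                  steps += 1
              total += steps
          return total / trials

      print(mean_comparison_steps(3, 4))   # close pair: long walks (slow RT)
      print(mean_comparison_steps(1, 9))   # distant pair: short walks (fast RT)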

  18. Bayesian model to detect phenotype-specific genes for copy number data

    Directory of Open Access Journals (Sweden)

    González Juan R

    2012-06-01

    Full Text Available Abstract Background An important question in genetic studies is to determine those genetic variants, in particular CNVs, that are specific to different groups of individuals. This could help in elucidating differences in disease predisposition and response to pharmaceutical treatments. We propose a Bayesian model designed to analyze thousands of copy number variants (CNVs), where only a few of them are expected to be associated with a specific phenotype. Results The model is illustrated by analyzing three major human groups belonging to HapMap data. We also show how the model can be used to determine specific CNVs related to response to treatment in patients diagnosed with ovarian cancer. The model is also extended to address the problem of how to adjust for confounding covariates (e.g., population stratification). Through a simulation study, we show that the proposed model outperforms other approaches that are typically used to analyze these data when analyzing common copy-number polymorphisms (CNPs) or complex CNVs. We have developed an R package, called bayesGen, that implements the model and estimating algorithms. Conclusions Our proposed model is useful to discover specific genetic variants when different subgroups of individuals are analyzed. The model can address studies with or without a control group. By integrating all data in a unique model we can obtain a list of genes that are associated with a given phenotype as well as a different list of genes that are shared among the different subtypes of cases.

  19. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    Science.gov (United States)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
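
    Of the three generators, the Fourier-synthesis idea is easy to sketch: prescribe a flat amplitude spectrum, draw uniformly random phases, and invert to the time domain. The sketch below is an assumption-laden reconstruction, not the paper's implementation; the scaling is one common choice that gives an output variance of roughly sigma squared.

      import numpy as np

      def fourier_white_noise(n, sigma=1.0, rng=None):
          """Approximate Gaussian white noise of length n via Fourier synthesis."""
          rng = rng or np.random.default_rng()
          n_freq = n // 2 + 1
          amp = np.full(n_freq, sigma * np.sqrt(n))    # flat amplitude spectrum
          phase = rng.uniform(0.0, 2.0 * np.pi, n_freq)
          spec = amp * np.exp(1j * phase)
          spec[0] = 0.0                                # force zero mean
          return np.fft.irfft(spec, n=n)

      x = fourier_white_noise(601)    # 601 points, as in the sequences above
      print(x.mean(), x.std())        # ~0 and ~1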

  20. Modeling both of the number of pausibacillary and multibacillary leprosy patients by using bivariate Poisson regression

    Science.gov (United States)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

    Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important public health issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, with a contribution of 18,994 people (8.7% of the world total). This automatically places Indonesia as the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: pausibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of pausibacillary leprosy. This paper discusses modeling both the number of multibacillary and pausibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The experimental units are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the result indicates that all predictors have a significant influence.
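
    The bivariate Poisson structure mentioned above is commonly built by trivariate reduction, where the two counts share a common Poisson component; in the regression, each log-rate would then be a linear function of the environment, demography, and poverty predictors. A minimal simulation sketch with illustrative rates follows.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000
      lam_pb, lam_mb, lam_shared = 4.0, 9.0, 1.5   # illustrative rates

      y_shared = rng.poisson(lam_shared, n)        # common component
      pb = rng.poisson(lam_pb, n) + y_shared       # pausibacillary counts
      mb = rng.poisson(lam_mb, n) + y_shared       # multibacillary counts

      print(pb.mean(), mb.mean())     # ~ lam_pb + lam_shared, lam_mb + lam_shared
      print(np.cov(pb, mb)[0, 1])     # covariance ~ lam_shared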

  1. Rock mechanics models evaluation report: Draft report

    International Nuclear Information System (INIS)

    1985-10-01

    This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The end result of the KT analysis is a balanced, documented recommendation of the codes and models which are best suited to conceptual subsurface design for the salt repository. The various laws for modeling the creep of rock salt are also reviewed in this report. 37 refs., 1 fig., 7 tabs

  2. Droplet number uncertainties associated with CCN: an assessment using observations and a global model adjoint

    Directory of Open Access Journals (Sweden)

    R. H. Moore

    2013-04-01

    Full Text Available We use the Global Modelling Initiative (GMI) chemical transport model with a cloud droplet parameterisation adjoint to quantify the sensitivity of cloud droplet number concentration to uncertainties in predicting CCN concentrations. Published CCN closure uncertainties for six different sets of simplifying compositional and mixing state assumptions are used as proxies for modelled CCN uncertainty arising from application of those scenarios. It is found that cloud droplet number concentrations (Nd) are fairly insensitive to the number concentration (Na) of aerosol that act as CCN over the continents (∂lnNd/∂lnNa ~ 10–30%), but the sensitivities exceed 70% in pristine regions such as the Alaskan Arctic and remote oceans. This means that CCN concentration uncertainties of 4–71% translate into only 1–23% uncertainty in cloud droplet number, on average. Since most of the anthropogenic indirect forcing is concentrated over the continents, this work shows that the application of Köhler theory and attendant simplifying assumptions in models is not a major source of uncertainty in predicting cloud droplet number or anthropogenic aerosol indirect forcing for the liquid, stratiform clouds simulated in these models. However, it does highlight the sensitivity of some remote areas to pollution brought into the region via long-range transport (e.g., biomass burning) or from seasonal biogenic sources (e.g., phytoplankton as a source of dimethylsulfide in the southern oceans). Since these transient processes are not captured well by the climatological emissions inventories employed by current large-scale models, the uncertainties in aerosol-cloud interactions during these events could be much larger than those uncovered here. This finding motivates additional measurements in these pristine regions, for which few observations exist, to quantify the impact (and associated uncertainty) of transient aerosol processes on cloud properties.

  3. Evaluation of animal models of neurobehavioral disorders

    Directory of Open Access Journals (Sweden)

    Nordquist Rebecca E

    2009-02-01

    Full Text Available Abstract Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferentially based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria, which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of (systematic and extended) replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and considerations as to whether action is indicated to reduce the discomfort must accompany the scientific evaluation at any stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia. In a manner congruent to

  4. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the Sequential statistical modeling (SSM) approach, and the Kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used for demonstrating the performance of the obtained statistical models that use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
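
    A minimal sketch of an area-metric check of the kind described above, assuming the metric is the area between the empirical CDF of the observed data and the CDF of a fitted model; the normal fit below is an illustrative stand-in for the SSM/KDE models of the paper.

      import numpy as np
      from scipy.stats import norm

      def area_metric(samples, model_cdf, grid_size=2000):
          """Area between the empirical CDF of samples and a model CDF."""
          x = np.sort(np.asarray(samples, dtype=float))
          lo, hi = x[0] - 3.0 * x.std(), x[-1] + 3.0 * x.std()
          grid = np.linspace(lo, hi, grid_size)
          ecdf = np.searchsorted(x, grid, side="right") / x.size
          return np.trapz(np.abs(ecdf - model_cdf(grid)), grid)

      data = np.random.default_rng(1).normal(10.0, 2.0, size=29)  # 29 points, as above
      fit = norm(loc=data.mean(), scale=data.std(ddof=1))
      print(area_metric(data, fit.cdf))   # shrinks as data and model agree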

  5. Modelling the number of viable vegetative cells of Bacillus cereus passing through the stomach

    NARCIS (Netherlands)

    Wijnands, L.M.; Pielaat, A.; Dufrenne, J.B.; Zwietering, M.H.; Leusden, van F.M.

    2009-01-01

    Aims: Model the number of viable vegetative cells of B. cereus surviving the gastric passage after experiments in simulated gastric conditions. Materials and Methods: The inactivation of stationary and exponential phase vegetative cells of twelve different strains of Bacillus cereus, both mesophilic

  6. The type-reproduction number T in models for infectious disease control

    NARCIS (Netherlands)

    Heesterbeek, J.A.P.; Roberts, M.G.

    A ubiquitous quantity in epidemic modelling is the basic reproduction number R0. This became so popular in the 1990s that ‘All you need know is R0!’ became a familiar catch-phrase. The value of R0 defines, among other things, the control effort needed to eliminate the infection from a homogeneous

  7. Dependence of the number of dealers in a stochastic dealer model

    Science.gov (United States)

    Yamada, Kenta; Takayasu, Hideki; Takayasu, Misako

    2010-04-01

    We numerically analyze an artificial market model consisting of N dealers with a time-dependent stochastic strategy. Observing the change of market price statistics for different values of N, it is shown that the statistical properties are almost the same when the dealer number is larger than about 30.

  8. Reduction of the number of parameters needed for a polynomial random regression test-day model

    NARCIS (Netherlands)

    Pool, M.H.; Meuwissen, T.H.E.

    2000-01-01

    Legendre polynomials were used to describe the (co)variance matrix within a random regression test day model. The goodness of fit depended on the polynomial order of fit, i.e., the number of parameters to be estimated per animal, but is limited by computing capacity. Two aspects: incomplete lactation

  9. Modeling of low-capillary number segmented flows in microchannels using OpenFOAM

    NARCIS (Netherlands)

    Hoang, D.A.; Van Steijn, V.; Portela, L.M.; Kreutzer, M.T.; Kleijn, C.R.

    2012-01-01

    Modeling of low-Capillary number segmented flows in microchannels is important for the design of microfluidic devices. We present numerical validations of microfluidic flow simulations using the volume-of-fluid (VOF) method as implemented in OpenFOAM. Two benchmark cases were investigated to ensure

  10. Analysis of the relationship between the number of citations and the quality evaluated by experts in psychology journals.

    Science.gov (United States)

    Buela-Casal, Gualberto; Zych, Izabela

    2010-05-01

    The study analyzes the relationship between the number of citations, as calculated by the IN-RECS database, and quality as evaluated by experts. The articles published in journals of the Spanish Psychological Association between 1996 and 2008 and selected by the Editorial Board of Psychology in Spain were the subject of the study. Psychology in Spain is a journal that includes the best papers published throughout the previous year, chosen by an Editorial Board made up of fifty specialists of acknowledged prestige within Spanish psychology, and translated into English. The number of citations of the 140 original articles republished in Psychology in Spain was compared to the number of citations of 140 randomly selected articles. Additionally, the study searched for a relationship between the number of articles selected from each journal and their mean number of citations. The number of citations received by the best articles as evaluated by experts is significantly higher than the number of citations of the randomly selected articles. Also, the number of citations is higher for articles from the most frequently selected journals. A statistically significant relation between quality as evaluated by experts and the number of citations was found.

  11. On global and regional spectral evaluation of global geopotential models

    International Nuclear Information System (INIS)

    Ustun, A; Abbak, R A

    2010-01-01

    Spectral evaluation of global geopotential models (GGMs) is necessary to recognize the behaviour of the gravity signal and its error recorded in spherical harmonic coefficients and associated standard deviations. Results obtained in this way explain the whole contribution of gravity data of different kinds, representing various sections of the gravity spectrum. This method is more informative than accuracy assessment methods that use external data such as GPS-levelling. Comparative spectral evaluation of more than one model can be performed in both a global and a local sense using many spectral tools. The number of GGMs has grown with the increasing amount of data collected by the dedicated satellite gravity missions CHAMP, GRACE and GOCE. This fact makes it necessary to measure the differences between models and to monitor the improvements in gravity field recovery. In this paper, some of the satellite-only and combined models are examined at different scales, globally and regionally, in order to observe the advances in the modelling of GGMs and their strengths at various expansion degrees for geodetic and geophysical applications. The validation of the published errors of model coefficients is a part of this evaluation. All spectral tools explicitly reveal the superiority of the GRACE-based models when compared against models that comprise conventional satellite tracking data. The disagreement between models is large in local/regional areas if the data sets are different, as seen from the example of the Turkish territory.

  12. Construction of a voxel model from CT images with density derived from CT numbers

    International Nuclear Information System (INIS)

    Cheng Mengyun; Zeng Qin; Cao Ruifen; Li Gui; Zheng Huaqing; Huang Shanqing; Song Gang; Wu Yican

    2010-01-01

    The voxel models representing human anatomy have been developed to calculate dose distribution in the human body, and density is the most important physical property of a voxel model. Traditionally, when creating the Monte Carlo input files, the average tissue parameters recommended in ICRP reports were used to assign each voxel in the existing voxel models. However, as each tissue consists of many voxels that differ in density, the method of assigning average tissue parameters does not account for this voxel-to-voxel variation and cannot represent human anatomy faithfully. To represent human anatomy more faithfully, a method was implemented to assign each voxel a density derived from its CT number. In order to compare with the traditional method, we have constructed two models from the same cadaver specimen data set. A CT-based pelvic voxel model called the Pelvis-CT model was constructed, the densities of which were derived from the CT numbers. A color photograph-based pelvic voxel model called the Pelvis-Photo model was also constructed, the densities of which were taken from ICRP Publication. The CT images and color photographs were obtained from the same female cadaver specimen. The Pelvis-CT and Pelvis-Photo models were ported into the Monte Carlo code MCNP to calculate the conversion coefficients from kerma free-in-air to absorbed dose for external monoenergetic photon beams with energies of 0.1, 1 and 10 MeV under anterior-posterior (AP) geometries. The results were compared with those given in ICRP Publication 74. Differences of up to 50% were observed between the conversion coefficients of the Pelvis-CT and Pelvis-Photo models; moreover, the discrepancies decreased for photon beams with higher energies. The overall trend of the conversion coefficients of the Pelvis-CT model agreed well with the ICRP 74 data. (author)
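
    A minimal sketch of the voxel-density assignment idea, assuming a generic piecewise-linear CT-number-to-density calibration; the breakpoints and slopes below are illustrative, and the paper's actual conversion curve may differ.

      def hu_to_density(hu):
          """Approximate mass density (g/cm^3) from a CT number (HU)."""
          if hu <= 0:                       # air, lung, low-density tissue
              return max(0.001, 1.0 + hu / 1000.0)
          if hu <= 100:                     # soft tissue
              return 1.0 + hu / 1000.0
          return 1.017 + 0.000592 * hu      # bone branch, illustrative slope

      # One density per voxel, rather than one average density per tissue:
      print([hu_to_density(hu) for hu in (-1000, -500, 0, 60, 400, 1200)])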

  13. Construction of a voxel model from CT images with density derived from CT numbers

    International Nuclear Information System (INIS)

    Cheng Mengyun; Zeng Qin; Cao Ruifen; Li Gui; Zheng Huaqing; Huang Shanqing; Song Gang; Wu Yican

    2011-01-01

    The voxel models representing human anatomy have been developed to calculate dose distribution in the human body, and the density and elemental composition are the most important physical properties of a voxel model. Usually, when creating the Monte Carlo input files, the average tissue densities recommended in ICRP Publication were used to assign each voxel in the existing voxel models. As each tissue consists of many voxels with different densities, the conventional method of average tissue densities failed to account for this voxel-to-voxel variation, and therefore could not represent human anatomy faithfully. To represent human anatomy more faithfully, a method was implemented to assign each voxel a density derived from its CT number. In order to compare with the traditional method, we constructed two models from the cadaver specimen dataset. A CT-based pelvic voxel model called the Pelvis-CT model was constructed, the densities of which were derived from the CT numbers. A color photograph-based pelvic voxel model called the Pelvis-Photo model was also constructed, the densities of which were taken from ICRP Publication. The CT images and the color photographs were obtained from the same female cadaver specimen. The Pelvis-CT and Pelvis-Photo models were both ported into the Monte Carlo code MCNP to calculate the conversion coefficients from kerma free-in-air to absorbed dose for external monoenergetic photon beams with energies of 0.1, 1 and 10 MeV under anterior-posterior (AP) geometry. The results were compared with those given in ICRP Publication 74. Differences of up to 50% were observed between the conversion coefficients of the Pelvis-CT and Pelvis-Photo models; moreover, the discrepancies decreased for photon beams with higher energies. The overall trend of the conversion coefficients of the Pelvis-CT model agreed well with that of the ICRP Publication 74 data. (author)

  14. Evaluation of use of MPAD trajectory tape and number of orbit points for orbiter mission thermal predictions

    Science.gov (United States)

    Vogt, R. A.

    1979-01-01

    The application of the mission planning and analysis division (MPAD) common format trajectory data tape to predicting temperatures for preflight and postflight mission analysis is presented and evaluated. All of the analyses utilized the latest Space Transportation System 1 flight (STS-1) MPAD trajectory tape and the simplified '136-node' midsection/payload bay thermal math model. For the first 6.7 hours of the STS-1 flight profile, transient temperatures are presented for selected nodal locations with the current standard method and the trajectory tape method. Whether the differences are considered significant or not depends upon the viewpoint. Other transient temperature predictions are also presented. These results were obtained to investigate an initial concern that the predicted temperature differences between the two methods would not only be caused by the inaccuracies of the current method's assumed nominal attitude profile but also be affected by an insufficient number of orbit points in the current method. Comparison between 6, 12, and 24 orbit-point parameters showed a surprising insensitivity to the number of orbit points.

  15. Periodontal Dressing-containing Green Tea Epigallocatechin gallate Increases Fibroblast Number in Gingival Artificial Wound Model

    Directory of Open Access Journals (Sweden)

    Ardisa U. Pradita

    2014-04-01

    Full Text Available Green tea leaf (Camellia sinensis) is one of the herbal plants used in traditional medicine. Epigallocatechin gallate (EGCG) in green tea is the most potent polyphenol component and has the strongest biological activity. EGCG is known to have a potential effect on wound healing. Objective: This study aimed to determine the effect of adding green tea EGCG to periodontal dressing on the number of fibroblasts after gingival artificial wounding in an animal model. Methods: A gingival artificial wound model was created using a 2 mm punch biopsy on 24 rabbits (Oryctolagus cuniculus). The animals were divided into two groups. Periodontal dressing with EGCG and without EGCG was applied to the experimental and control group, respectively. Decapitation was scheduled at days 3, 5, and 7 after treatment. Histological analysis to count the number of fibroblasts was performed. Results: The number of fibroblasts increased significantly over time in the experimental group treated with EGCG periodontal dressing compared to control (p<0.05). Conclusion: EGCG periodontal dressing can increase the number of fibroblasts, and therefore plays a role in wound healing after periodontal surgery in an animal model. DOI: 10.14693/jdi.v20i3.197

  16. Numerical Calculations of Atmospheric Conditions over Tibetan Plateau by Using WRF Model

    International Nuclear Information System (INIS)

    Qian, Xuan; Yao, Yongqiang; Wang, Hongshuai; Liu, Liyong; Li, Junrong; Yin, Jia

    2015-01-01

    The wind field and precipitable water vapor over the Tibetan Plateau are analyzed by using the mesoscale numerical model WRF, and the aerosol is analyzed by using the WRF-CHEM model. The spatial and vertical distributions of the relevant atmospheric factors are summarized, providing evidence for selecting and further evaluating an astronomical site. It is shown that this method can provide a good evaluation of atmospheric conditions. This study serves as a further step towards astro-climate regionalization and provides an essential database for astronomical site surveys over the Tibetan Plateau. (paper)

  17. Individual model evaluation and probabilistic weighting of models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-01-01

    This note stresses the importance of trying to assess the accuracy of each model individually. Putting a Bayesian probability distribution on a population of models faces conceptual and practical complications, and apparently can come only after the work of evaluating the individual models. Moreover, the primary issue is "How good is this model?" Therefore, the individual evaluations are first in both chronology and importance. They are not easy, but some ideas are given here on how to perform them

  18. Evaluation of greenhouse gas emissions models.

    Science.gov (United States)

    2014-11-01

    The objective of the project is to evaluate the GHG emissions models used by transportation agencies and industry leaders. Factors in the vehicle operating environment that may affect modal emissions, such as external conditions, vehicle fleet c...

  19. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico; Kryshtafovych, Andriy; Tramontano, Anna

    2009-01-01

    established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic

  20. Revamping the Teacher Evaluation Process. Education Policy Brief. Volume 9, Number 4, Fall 2011

    Science.gov (United States)

    Whiteman, Rodney S.; Shi, Dingjing; Plucker, Jonathan A.

    2011-01-01

    This policy brief explores Senate Enrolled Act 001 (SEA 1), specifically the provisions for how teachers must be evaluated. After a short summary of SEA 1 and its direct changes to evaluation policies and practices, the brief reviews literature in teacher evaluation and highlights important issues for school corporations to consider when selecting…

  1. Metrics for Evaluation of Student Models

    Science.gov (United States)

    Pelanek, Radek

    2015-01-01

    Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…

  2. Modeling of isothermal bubbly flow with interfacial area transport equation and bubble number density approach

    Energy Technology Data Exchange (ETDEWEB)

    Sari, Salih [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey); Erguen, Sule [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey); Barik, Muhammet; Kocar, Cemil; Soekmen, Cemal Niyazi [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey)

    2009-03-15

    In this study, isothermal turbulent bubbly flow is mechanistically modeled. For the modeling, Fluent version 6.3.26 is used as the computational fluid dynamics solver. First, the mechanistic models that simulate the interphase momentum transfer between the gas (bubbles) and liquid (continuous) phases are investigated, and proper models for the known flow conditions are selected. Second, an interfacial area transport equation (IATE) solution is added to Fluent's solution scheme in order to model the interphase momentum transfer mechanisms. In addition to solving IATE, a bubble number density (BND) approach is also added to Fluent and this approach is also used in the simulations. Different source/sink models derived for the IATE and BND models are also investigated. The simulations of experiments based on the available data in the literature are performed by using IATE and BND models in two and three dimensions. The results show that the simulations performed by using IATE and BND models agree with each other and with the experimental data. The simulations performed in three dimensions give better agreement with the experimental data.

  3. Modeling of isothermal bubbly flow with interfacial area transport equation and bubble number density approach

    International Nuclear Information System (INIS)

    Sari, Salih; Erguen, Sule; Barik, Muhammet; Kocar, Cemil; Soekmen, Cemal Niyazi

    2009-01-01

    In this study, isothermal turbulent bubbly flow is mechanistically modeled. For the modeling, Fluent version 6.3.26 is used as the computational fluid dynamics solver. First, the mechanistic models that simulate the interphase momentum transfer between the gas (bubbles) and liquid (continuous) phases are investigated, and proper models for the known flow conditions are selected. Second, an interfacial area transport equation (IATE) solution is added to Fluent's solution scheme in order to model the interphase momentum transfer mechanisms. In addition to solving IATE, a bubble number density (BND) approach is also added to Fluent and this approach is also used in the simulations. Different source/sink models derived for the IATE and BND models are also investigated. The simulations of experiments based on the available data in the literature are performed by using IATE and BND models in two and three dimensions. The results show that the simulations performed by using IATE and BND models agree with each other and with the experimental data. The simulations performed in three dimensions give better agreement with the experimental data

  4. Unified theory to evaluate the effect of concentration difference and Peclet number on electroosmotic mobility error of micro electroosmotic flow

    KAUST Repository

    Wang, Wentao; Lee, Yi Kuen

    2012-01-01

    Both theoretical analysis and nonlinear 2D numerical simulations are used to study the concentration difference and Peclet number effect on the measurement error of electroosmotic mobility in microchannels. We propose a compact analytical model

  5. Orphan Drug Pricing: An Original Exponential Model Relating Price to the Number of Patients

    Directory of Open Access Journals (Sweden)

    Andrea Messori

    2016-10-01

    Full Text Available In managing drug prices at the national level, orphan drugs represent a special case because the price of these agents is higher than that determined according to value-based principles. A common practice is to set the orphan drug price in an inverse relationship with the number of patients, so that the price increases as the number of patients decreases. Determination of prices in this context generally has a purely empirical nature, but a theoretical basis would be needed. The present paper describes an original exponential model that manages the relationship between price and number of patients for orphan drugs. Three real examples are analysed in detail (eculizumab, bosentan, and a data set of 17 orphan drugs published in 2010). These analyses have been aimed at identifying some objective criteria to rationally inform this relationship between prices and patients and at converting these criteria into explicit quantitative rules.
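
    A minimal sketch of such an exponential price-patients rule, anchored to two illustrative (patients, price) points rather than to the paper's fitted parameters.

      import math

      def fit_exponential(n1, p1, n2, p2):
          """Return (a, b) such that price(n) = a * exp(-b * n)."""
          b = math.log(p1 / p2) / (n2 - n1)
          a = p1 * math.exp(b * n1)
          return a, b

      # Illustrative anchors: 500 patients at 300k and 10,000 patients at 30k per year.
      a, b = fit_exponential(500, 300_000, 10_000, 30_000)
      price = lambda n: a * math.exp(-b * n)
      print(round(price(2_000)))   # interpolated price for 2,000 patients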

  6. modelling for optimal number of line storage reservoirs in a water distribution system

    African Journals Online (AJOL)

    Anyata, B.U.

    RESERVOIRS IN A WATER DISTRIBUTION SYSTEM. By B.U. Anyata. Department ... water distribution systems, in order to balance the ... distribution line storage systems to meet peak demands at ... Evaluation Method. The criteria ... Pipe + Energy Cost (N). 191,772 ... Economic Planning Model for Distributed information ...

  7. Evaluating the change in fingerprint directional patterns under variation of rotation and number of regions

    CSIR Research Space (South Africa)

    Dorasamy, K

    2015-09-01

    Full Text Available Directional Patterns, which are formed by grouping regions of orientation fields falling within a specific range, vary under rotation and the number of regions. For fingerprint classification schemes, this can result in misclassification due...

  8. Linear programming models and methods of matrix games with payoffs of triangular fuzzy numbers

    CERN Document Server

    Li, Deng-Feng

    2016-01-01

    This book addresses two-person zero-sum finite games in which the payoffs in any situation are expressed with fuzzy numbers. The purpose of this book is to develop a suite of effective and efficient linear programming models and methods for solving matrix games with payoffs in fuzzy numbers. Divided into six chapters, it discusses the concepts of solutions of matrix games with payoffs of intervals, along with their linear programming models and methods. Furthermore, it is directly relevant to the research field of matrix games under uncertain economic management. The book offers a valuable resource for readers involved in theoretical research and practical applications from a range of different fields including game theory, operational research, management science, fuzzy mathematical programming, fuzzy mathematics, industrial engineering, business and social economics.

  9. Evaluation by electronic paramagnetic resonance of the number of free radicals produced in irradiated rat bone

    International Nuclear Information System (INIS)

    Marble, G.; Valderas, R.

    1966-01-01

    The number of long half-life free radicals created by gamma irradiation in the bones of the rat has been determined from the electron paramagnetic resonance spectrum. This number decreases slowly with time (calculated half-life: 24 days). It is proportional to the dose of gamma radiation given to the rat. The method could find interesting applications in the field of biological dosimetry. (authors) [fr

  10. An evaluation of BPMN modeling tools

    NARCIS (Netherlands)

    Yan, Z.; Reijers, H.A.; Dijkman, R.M.; Mendling, J.; Weidlich, M.

    2010-01-01

    Various BPMN modeling tools are available and it is close to impossible to understand their functional differences without simply trying them out. This paper presents an evaluation framework and presents the outcomes of its application to a set of five BPMN modeling tools. We report on various

  11. Estimation Curve Numbers using GIS and Hec-GeoHMS Model

    Directory of Open Access Journals (Sweden)

    Hayat Kareem Shukur

    2017-05-01

    Full Text Available Recently, the development and application of hydrological models based on Geographical Information Systems (GIS) has increased around the world. One of the most important applications of GIS is mapping the Curve Number (CN) of a catchment. In this research, three software tools, namely ArcView GIS 9.3 with ArcInfo, Arc Hydro Tool, and the Geospatial Hydrologic Modeling Extension (Hec-GeoHMS) for ArcView GIS 9.3, were used to calculate the CN of the 19,210 ha Salt Creek watershed (SC), which is located in Osage County, Oklahoma, USA. Multiple layers were combined and examined using the Environmental Systems Research Institute (ESRI) ArcMap 2009. These layers are the soil layer (Soil Survey Geographic, SSURGO), a 30 m x 30 m resolution Digital Elevation Model (DEM), the land use layer (LU), 'Look-Up tables', and other layers resulting from running the software. The Curve Number, which expresses a catchment's response to a storm event, was estimated in this study for each land parcel based on the LU layer and the soil layer within each parcel. The results showed that a CN of 100 (dark blue) indicates surface water. High curve numbers (100-81, blue and light blue), corresponding to urbanized areas, mean high runoff and low infiltration, whereas low curve numbers (77-58, brown and light brown), corresponding to forested areas, mean low runoff and high infiltration. Four classes of land cover have been identified: surface water, medium residential, forest and agriculture.

  12. Classification of human cancers based on DNA copy number amplification modeling

    Directory of Open Access Journals (Sweden)

    Knuutila Sakari

    2008-05-01

    Full Text Available Abstract Background DNA amplifications alter gene dosage in cancer genomes by multiplying the gene copy number. Amplifications are quintessential in a considerable number of advanced cancers of various anatomical locations. The aims of this study were to classify human cancers based on their amplification patterns, explore the biological and clinical fundamentals behind their amplification-pattern based classification, and understand the characteristics in human genomic architecture that associate with amplification mechanisms. Methods We applied a machine learning approach to model DNA copy number amplifications using a data set of binary amplification records at chromosome sub-band resolution from 4400 cases that represent 82 cancer types. Amplification data were fused with background data: clinical, histological and biological classifications, and cytogenetic annotations. Statistical hypothesis testing was used to mine associations between the data sets. Results Probabilistic clustering of each chromosome identified 111 amplification models and divided the cancer cases into clusters. The distribution of classification terms in the amplification-model based clustering of cancer cases revealed cancer classes that were associated with specific DNA copy number amplification models. Amplification patterns – finite or bounded descriptions of the ranges of the amplifications in the chromosome – were extracted from the clustered data and expressed according to the original cytogenetic nomenclature. This was achieved by maximal frequent itemset mining using the cluster-specific data sets. The boundaries of amplification patterns were shown to be enriched with fragile sites, telomeres, centromeres, and light chromosome bands. Conclusions Our results demonstrate that amplifications are non-random chromosomal changes that are specifically selected for in the tumor tissue microenvironment. Furthermore, statistical evidence showed that specific chromosomal features

  13. A new modeling and solution approach for the number partitioning problem

    Directory of Open Access Journals (Sweden)

    Bahram Alidaee

    2005-01-01

    Full Text Available The number partitioning problem has proven to be a challenging problem for both exact and heuristic solution methods. We present a new modeling and solution approach that consists of recasting the problem as an unconstrained quadratic binary program that can be solved by efficient metaheuristic methods. Our approach readily accommodates both the common two-subset partition case and the more general case of multiple subsets. Preliminary computational experience is presented, illustrating the attractiveness of the method.
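
    The recast is easy to make concrete for the two-subset case. With spins s_i = 2x_i - 1, the squared subset-sum difference (sum_i a_i s_i)^2 is an unconstrained quadratic binary objective whose minimum solves the partition; the sketch below pairs it with a plain one-flip local search standing in for the efficient metaheuristics the paper applies.

      import random

      def partition_local_search(a, restarts=50, seed=7):
          """Minimize |sum_i a_i s_i| over spins s_i in {-1, +1}."""
          rng = random.Random(seed)
          best = None
          for _ in range(restarts):
              s = [rng.choice((-1, 1)) for _ in a]
              diff = sum(ai * si for ai, si in zip(a, s))
              improved = True
              while improved:
                  improved = False
                  for i in range(len(a)):
                      new = diff - 2 * a[i] * s[i]   # effect of flipping s_i
                      if abs(new) < abs(diff):
                          s[i], diff, improved = -s[i], new, True
              if best is None or abs(diff) < abs(best[0]):
                  best = (diff, s[:])
          return best

      r = random.Random(1)
      nums = [r.randint(1, 100) for _ in range(30)]
      diff, spins = partition_local_search(nums)
      print("subset-sum difference:", abs(diff))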

  14. Determination of the Number of Fixture Locating Points for Sheet Metal By Grey Model

    Directory of Open Access Journals (Sweden)

    Yang Bo

    2017-01-01

    Full Text Available In the process of traditional fixture design for sheet metal parts based on the "N-2-1" locating principle, the number of fixture locating points is determined by trial and error or the experience of the designer. To that end, a new design method based on grey theory is proposed in this paper to determine the number of sheet metal fixture locating points. Firstly, the training sample set is generated by Latin hypercube sampling (LHS) and finite element analysis (FEA). Secondly, a GM(1,1) grey model is constructed from the established training sample set to approximate the mapping relationship between the number of fixture locating points and the maximum deformation of the sheet metal of concern. Thirdly, the final number of fixture locating points for sheet metal can be inversely calculated under the allowable maximum deformation. Finally, a sheet metal case study is conducted, and the results indicate that the proposed approach is effective and efficient in determining the number of fixture locating points for sheet metal.
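
    The GM(1,1) fit named above follows a standard recipe: accumulate the series, form background values, estimate the development coefficient and grey input by least squares, and invert the accumulation for predictions. A minimal sketch with illustrative deformation data follows.

      import numpy as np

      def gm11(x0):
          """Fit GM(1,1) to series x0; return a predictor for index k (k=0 first)."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                      # accumulated generating series
          z1 = 0.5 * (x1[1:] + x1[:-1])           # background values
          B = np.column_stack((-z1, np.ones_like(z1)))
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          def predict(k):
              if k == 0:
                  return x0[0]
              x1_k = (x0[0] - b / a) * np.exp(-a * k) + b / a
              x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
              return x1_k - x1_prev               # inverse accumulation
          return predict

      deform = [2.10, 1.65, 1.31, 1.05, 0.85]       # illustrative max deformations
      pred = gm11(deform)
      print([round(pred(k), 3) for k in range(7)])  # fitted values + 2-step forecast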

  15. Advanced Daily Prediction Model for National Suicide Numbers with Social Media Data.

    Science.gov (United States)

    Lee, Kyung Sang; Lee, Hyewon; Myung, Woojae; Song, Gil-Young; Lee, Kihwang; Kim, Ho; Carroll, Bernard J; Kim, Doh Kwan

    2018-04-01

    Suicide is a significant public health concern worldwide. Social media data have a potential role in identifying high suicide risk individuals and also in predicting suicide rate at the population level. In this study, we report an advanced daily suicide prediction model using social media data combined with economic/meteorological variables along with observed suicide data lagged by 1 week. The social media data were drawn from weblog posts. We examined a total of 10,035 social media keywords for suicide prediction. We made predictions of national suicide numbers 7 days in advance daily for 2 years, based on a daily moving 5-year prediction modeling period. Our model predicted the likely range of daily national suicide numbers with 82.9% accuracy. Among the social media variables, words denoting economic issues and mood status showed high predictive strength. Observed number of suicides one week previously, recent celebrity suicide, and day of week followed by stock index, consumer price index, and sunlight duration 7 days before the target date were notable predictors along with the social media variables. These results strengthen the case for social media data to supplement classical social/economic/climatic data in forecasting national suicide events.

  16. Evaluation of constitutive models for crushed salt

    International Nuclear Information System (INIS)

    Callahan, G.D.; Loken, M.C.; Hurtado, L.D.; Hansen, F.D.

    1996-01-01

    Three constitutive models are recommended as candidates for describing the deformation of crushed salt. These models are generalized to three-dimensional states of stress to include the effects of mean and deviatoric stress and modified to include effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant (WIPP) and southeastern New Mexico salt is used to determine material parameters for the models. To evaluate the capability of the models, parameter values obtained from fitting the complete database are used to predict the individual tests. Finite element calculations of a WIPP shaft with emplaced crushed salt demonstrate the model predictions

  17. A Simulation Based Analysis of Motor Unit Number Index (MUNIX) Technique Using Motoneuron Pool and Surface Electromyogram Models

    Science.gov (United States)

    Li, Xiaoyan; Rymer, William Zev; Zhou, Ping

    2013-01-01

    Motor unit number index (MUNIX) measurement has recently achieved increasing attention as a tool to evaluate the progression of motoneuron diseases. In our current study, the sensitivity of the MUNIX technique to changes in motoneuron and muscle properties was explored by a simulation approach utilizing variations on published motoneuron pool and surface electromyogram (EMG) models. Our simulation results indicate that, when keeping motoneuron pool and muscle parameters unchanged and varying the input motor unit numbers to the model, then MUNIX estimates can appropriately characterize changes in motor unit numbers. Such MUNIX estimates are not sensitive to different motor unit recruitment and rate coding strategies used in the model. Furthermore, alterations in motor unit control properties do not have a significant effect on the MUNIX estimates. Neither adjustment of the motor unit recruitment range nor reduction of the motor unit firing rates jeopardizes the MUNIX estimates. The MUNIX estimates closely correlate with the maximum M wave amplitude. However, if we reduce the amplitude of each motor unit action potential rather than simply reduce motor unit number, then MUNIX estimates substantially underestimate the motor unit numbers in the muscle. These findings suggest that the current MUNIX definition is most suitable for motoneuron diseases that demonstrate secondary evidence of muscle fiber reinnervation. In this regard, when MUNIX is applied, it is of much importance to examine a parallel measurement of motor unit size index (MUSIX), defined as the ratio of the maximum M wave amplitude to the MUNIX. However, there are potential limitations in the application of the MUNIX methods in atrophied muscle, where it is unclear whether the atrophy is accompanied by loss of motor units or loss of muscle fiber size. PMID:22514208
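
    The parallel measurement recommended above is straightforward to compute from the quantities named in the abstract. The sketch below encodes just that stated ratio; the numerical values are arbitrary examples.

    ```python
    def musix(max_m_wave_amplitude, munix):
        # Motor unit size index as defined in the abstract: the maximum
        # M wave (CMAP) amplitude divided by the MUNIX estimate.
        return max_m_wave_amplitude / munix

    # e.g. an 8 mV maximal M wave with a MUNIX of 160 (hypothetical values):
    print(musix(8.0, 160))  # 0.05 mV per index unit
    ```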

  18. Competencies evaluation based on single valued neutrosophic numbers and decision analysis schema

    Directory of Open Access Journals (Sweden)

    Evelyn Jazmín Henríquez Antepara

    2017-09-01

    Full Text Available Recently, neutrosophic sets and its application to decision making have become a topic of significant importance for researchers and practitioners. The present work addresses one of the most complex aspects of the formative process based on competencies: evaluation. In this paper, a new method for competencies evaluation is developed in a multicriteria framework.

  19. Modeling for Green Supply Chain Evaluation

    Directory of Open Access Journals (Sweden)

    Elham Falatoonitoosi

    2013-01-01

    Full Text Available Green supply chain management (GSCM has become a practical approach to develop environmental performance. Under strict regulations and stakeholder pressures, enterprises need to enhance and improve GSCM practices, which are influenced by both traditional and green factors. This study developed a causal evaluation model to guide selection of qualified suppliers by prioritizing various criteria and mapping causal relationships to find effective criteria to improve green supply chain. The aim of the case study was to model and examine the influential and important main GSCM practices, namely, green logistics, organizational performance, green organizational activities, environmental protection, and green supplier evaluation. In the case study, decision-making trial and evaluation laboratory technique is applied to test the developed model. The result of the case study shows only “green supplier evaluation” and “green organizational activities” criteria of the model are in the cause group and the other criteria are in the effect group.

  20. Determination model for cetane number of biodiesel at different fatty acid composition: a review

    Directory of Open Access Journals (Sweden)

    Michal Angelovič

    2014-05-01

    Full Text Available The most accepted definition of biodiesel is stated in the EU technical regulation EN 14214 (2008) or, in the USA, in ASTM 6751-02. As a result of this highly strict description, only methyl esters of fatty acids conform to these definitions; nevertheless, the term ‘‘biodiesel’’ is often extended to other alkyl fatty esters. Some countries have adopted bioethanol to replace methanol in biodiesel transesterification, thus assuring a fully biological fuel. Of course, such a position brings some problems in fulfilling the technical requirements of EN 14214 or ASTM 6751-02. Biodiesel is actually a less complex mixture than petrodiesel, but different feedstock origins and the effect of seasonality may impose difficulties in fuel quality control. Since biodiesel is an alternative diesel fuel derived from the transesterification of triacylglycerol-comprised materials, such as vegetable oils or animal fats, with simple alcohols to furnish the corresponding mono-alkyl esters, its composition depends on the raw material used, the cultivated area location, and harvest time. The choice of the raw material is usually the most important factor for fluctuations of biodiesel composition, because different vegetable oils and animal fats may contain different types of fatty acids. Important properties of this fuel vary significantly with the composition of the mixture. Cetane number, melting point, degree of saturation, density, cloud point, pour point, viscosity, and nitrogen oxides exhaust emission (NOx), for instance, deserve to be mentioned. One of the most important fuel quality indicators is the cetane number; however, its experimental determination may be an expensive and lengthy task. To make the situation concerning biodiesel worse, the availability of data in the literature is also scarce. In such a scenario, the use of reliable models to predict the cetane number or any other essential characteristic may be of great utility. We reviewed available literature to

  1. Conceptual modelling of human resource evaluation process

    Directory of Open Access Journals (Sweden)

    Negoiţă Doina Olivia

    2017-01-01

    Full Text Available Taking into account the highly diverse tasks which employees have to fulfil due to the complex requirements of nowadays consumers, the human resource within an enterprise has become a strategic element for developing and exploiting products which meet the market expectations. Therefore, organizations encounter difficulties when approaching the human resource evaluation process. Hence, the aim of the current paper is to design a conceptual model of the aforementioned process, which allows the enterprises to develop a specific methodology. In order to design the conceptual model, Business Process Modelling instruments were employed - Adonis Community Edition Business Process Management Toolkit using the ADONIS BPMS Notation. The conceptual model was developed based on an in-depth secondary research regarding the human resource evaluation process. The proposed conceptual model represents a generic workflow (sequential and/or simultaneous activities), which can be extended considering the enterprise’s needs and requirements when conducting a human resource evaluation process. Enterprises can benefit from using software instruments for business process modelling as they enable process analysis and evaluation (predefined/specific queries) and also model optimization (simulations).

  2. A website evaluation model by integration of previous evaluation models using a quantitative approach

    Directory of Open Access Journals (Sweden)

    Ali Moeini

    2015-01-01

    Full Text Available Given the growth of e-commerce, websites play an essential role in business success. Therefore, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of evaluation models make it difficult to integrate them into a single comprehensive model. In this paper a quantitative method has been used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment has no role in the integration of models, and the new model takes its validity from 93 previous models and a systematic quantitative approach.

  3. Comparison of formula and number-right scoring in undergraduate medical training: a Rasch model analysis.

    Science.gov (United States)

    Cecilio-Fernandes, Dario; Medema, Harro; Collares, Carlos Fernando; Schuwirth, Lambert; Cohen-Schotanus, Janke; Tio, René A

    2017-11-09

    Progress testing is an assessment tool used to periodically assess all students at the end-of-curriculum level. Because students cannot know everything, it is important that they recognize their lack of knowledge. For that reason, the formula-scoring method has usually been used. However, where partial knowledge needs to be taken into account, the number-right scoring method is used. Research comparing both methods has yielded conflicting results. As far as we know, in all these studies, Classical Test Theory or Generalizability Theory was used to analyze the data. In contrast to these studies, we will explore the use of the Rasch model to compare both methods. A 2 × 2 crossover design was used in a study where 298 students from four medical schools participated. A sample of 200 previously used questions from the progress tests was selected. The data were analyzed using the Rasch model, which provides fit parameters, reliability coefficients, and response option analysis. The fit parameters were in the optimal interval ranging from 0.50 to 1.50, and the means were around 1.00. The person and item reliability coefficients were higher in the number-right condition than in the formula-scoring condition. The response option analysis showed that the majority of dysfunctional items emerged in the formula-scoring condition. The findings of this study support the use of number-right scoring over formula scoring. Rasch model analyses showed that tests with number-right scoring have better psychometric properties than formula scoring. However, choosing the appropriate scoring method should depend not only on psychometric properties but also on self-directed test-taking strategies and metacognitive skills.
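
    For readers unfamiliar with the model named above, the dichotomous Rasch item response function gives the probability of a correct answer from person ability θ and item difficulty b. The sketch below is a textbook rendering, not the authors' analysis code.

    ```python
    import numpy as np

    def rasch_prob(theta, b):
        # Dichotomous Rasch model: P(correct) for a person of ability theta
        # on an item of difficulty b, both on the same logit scale.
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    # A person of average ability across items of increasing difficulty:
    for b in (-1.0, 0.0, 1.0):
        print(f"difficulty {b:+.1f}: P = {rasch_prob(0.0, b):.3f}")
    # difficulty -1.0: P = 0.731; 0.0: P = 0.500; +1.0: P = 0.269
    ```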

  4. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: large eddy simulation; wall layer modeling; synthetic inlet turbulence; swirl flows. Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.

  5. Evaluating Extensions to Coherent Mortality Forecasting Models

    Directory of Open Access Journals (Sweden)

    Syazreen Shair

    2017-03-01

    Full Text Available Coherent models were developed recently to forecast the mortality of two or more sub-populations simultaneously and to ensure long-term non-divergent mortality forecasts of sub-populations. This paper evaluates the forecast accuracy of two recently-published coherent mortality models, the Poisson common factor and the product-ratio functional models. These models are compared to each other and the corresponding independent models, as well as the original Lee–Carter model. All models are applied to age-gender-specific mortality data for Australia and Malaysia and age-gender-ethnicity-specific data for Malaysia. The out-of-sample forecast error of log death rates, male-to-female death rate ratios and life expectancy at birth from each model are compared and examined across groups. The results show that, in terms of overall accuracy, the forecasts of both coherent models are consistently more accurate than those of the independent models for Australia and for Malaysia, but the relative performance differs by forecast horizon. Although the product-ratio functional model outperforms the Poisson common factor model for Australia, the Poisson common factor is more accurate for Malaysia. For the ethnic groups application, ethnic-coherence gives better results than gender-coherence. The results provide evidence that coherent models are preferable to independent models for forecasting sub-populations’ mortality.
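
    All of the compared models extend the Lee-Carter decomposition log m(x,t) = a(x) + b(x)k(t). As background, a minimal SVD-based fit of that base model is sketched below; this is our illustration, and the coherent Poisson common factor and product-ratio extensions are not reproduced.

    ```python
    import numpy as np

    def lee_carter_fit(log_mx):
        # log_mx: (ages x years) matrix of log death rates.
        # Classical Lee-Carter: log m(x,t) = a(x) + b(x) * k(t), with a(x)
        # the age-specific mean and (b, k) taken from the leading singular
        # component, normalized so that sum_x b(x) = 1.
        ax = log_mx.mean(axis=1)
        U, S, Vt = np.linalg.svd(log_mx - ax[:, None], full_matrices=False)
        scale = U[:, 0].sum()
        bx = U[:, 0] / scale
        kt = S[0] * Vt[0] * scale
        return ax, bx, kt

    # usage: ax, bx, kt = lee_carter_fit(np.log(rates))
    # forecasting then reduces to extrapolating the time index k(t).
    ```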

  6. Multi-criteria evaluation of hydrological models

    Science.gov (United States)

    Rakovec, Oldrich; Clark, Martyn; Weerts, Albrecht; Hill, Mary; Teuling, Ryan; Uijlenhoet, Remko

    2013-04-01

    Over the last years, there has been a tendency in the hydrological community to move from simple conceptual models towards more complex, physically/process-based hydrological models. This is because conceptual models often fail to simulate the dynamics of the observations. However, there is little agreement on how much complexity needs to be considered within the complex process-based models. One way to proceed is to improve understanding of what is important and unimportant in the models considered. The aim of this ongoing study is to evaluate structural model adequacy using alternative conceptual and process-based models of hydrological systems, with an emphasis on understanding how model complexity relates to observed hydrological processes. Some of the models require considerable execution time, and computationally frugal sensitivity analysis, model calibration and uncertainty quantification methods are well-suited to providing important insights for models with lengthy execution times. The current experiment evaluates two versions of the Framework for Understanding Structural Errors (FUSE), which both enable running model inter-comparison experiments. One supports computationally efficient conceptual models, and the second supports more process-based models that tend to have longer execution times. The conceptual FUSE combines components of 4 existing conceptual hydrological models. The process-based framework consists of different forms of Richards' equation, numerical solutions, groundwater parameterizations and hydraulic conductivity distributions. The hydrological analysis of the model processes has evolved from focusing only on simulated runoff (final model output), to also including other criteria such as soil moisture and groundwater levels. Parameter importance and associated structural importance are evaluated using different types of sensitivity analysis techniques, making use of both robust global methods (e.g. Sobol') as well as several

  7. A dynamic response model for pressure sensors in continuum and high Knudsen number flows with large temperature gradients

    Science.gov (United States)

    Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.

    1996-01-01

    This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused model analyses to become inaccurate.

  8. Evaluation of R and D, volume 1 number 1 Fall 1992

    International Nuclear Information System (INIS)

    1992-01-01

    A newsletter on the evaluation of research and development in Canada. It is published every four months. This issue has information on a variety of topics including a new database for NSERC research grants available, national research and development expenditure targets, an assessment of Canada's biotechnology programs, the Manufacturing Research Corporation of Ontario assesses the research and development needs of industry plus a summary of the May 1992 Conference of the Canadian Evaluation Society

  10. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    Science.gov (United States)

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
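
    As a sketch of the procedure being evaluated: parallel analysis retains components whose observed eigenvalues exceed those obtained from random data, using either the mean or a percentile criterion as mentioned above. The PCA variant below is a generic illustration assuming standard-normal reference data, not the authors' simulation code.

    ```python
    import numpy as np

    def parallel_analysis_pca(data, n_sims=500, percentile=95, seed=0):
        # Retain components whose observed correlation-matrix eigenvalues
        # exceed the chosen percentile of eigenvalues from random normal
        # data of the same shape (percentile=50 approximates the mean rule).
        rng = np.random.default_rng(seed)
        n, p = data.shape
        obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        rand = np.empty((n_sims, p))
        for i in range(n_sims):
            sim = rng.standard_normal((n, p))
            rand[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
        threshold = np.percentile(rand, percentile, axis=0)
        return int(np.sum(obs > threshold))

    # Synthetic demo: 10 indicators driven by 2 latent factors plus noise.
    rng = np.random.default_rng(42)
    data = rng.standard_normal((300, 2)) @ rng.standard_normal((2, 10)) \
           + rng.standard_normal((300, 10))
    print(parallel_analysis_pca(data))  # typically 2 for this setup
    ```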

  11. Evaluation of Related Risk Factors in Number of Musculoskeletal Disorders Among Carpet Weavers in Iran

    Directory of Open Access Journals (Sweden)

    Nasim Karimi

    2016-12-01

    Conclusion: The results of this study indicate that occupational factors are associated with the number of MSDs developing among carpet weavers. Thus, using standard tools and decreasing hours of work per day can reduce the frequency of MSDs among carpet weavers.

  12. A variable turbulent Prandtl and Schmidt number model study for scramjet applications

    Science.gov (United States)

    Keistler, Patrick

    A turbulence model that allows for the calculation of the variable turbulent Prandtl (Prt) and Schmidt (Sct) numbers as part of the solution is presented. The model also accounts for the interactions between turbulence and chemistry by modeling the corresponding terms. Four equations are added to the baseline k-zeta turbulence model: two equations for enthalpy variance and its dissipation rate to calculate the turbulent diffusivity, and two equations for the concentration variance and its dissipation rate to calculate the turbulent diffusion coefficient. The underlying turbulence model already accounts for compressibility effects. The variable Prt/Sct turbulence model is validated and tuned by simulating a wide variety of experiments. Included in the experiments are two-dimensional, axisymmetric, and three-dimensional mixing and combustion cases. The combustion cases involved either hydrogen and air, or hydrogen, ethylene, and air. Two chemical kinetic models are employed for each of these situations. For the hydrogen and air cases, a seven species/seven reaction model where the reaction rates are temperature dependent and a nine species/nineteen reaction model where the reaction rates are dependent on both pressure and temperature are used. For the cases involving ethylene, a 15 species/44 reaction reduced model that is both pressure and temperature dependent is used, along with a 22 species/18 global reaction reduced model that makes use of the quasi-steady-state approximation. In general, fair to good agreement is indicated for all simulated experiments. The turbulence/chemistry interaction terms are found to have a significant impact on flame location for the two-dimensional combustion case, with excellent experimental agreement when the terms are included. In most cases, the hydrogen chemical mechanisms behave nearly identically, but for one case, the pressure dependent model would not auto-ignite at the same conditions as the experiment and the other

  13. Saphire models and software for ASP evaluations

    International Nuclear Information System (INIS)

    Sattison, M.B.

    1997-01-01

    Over the past three years, the Idaho National Engineering Laboratory (INEL) has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented

  14. Determination of critical nucleation number for a single nucleation amyloid-β aggregation model.

    Science.gov (United States)

    Ghosh, Preetam; Vaidya, Ashwin; Kumar, Amit; Rangachari, Vijayaraghavan

    2016-03-01

    Aggregates of amyloid-β (Aβ) peptide are known to be the key pathological agents in Alzheimer disease (AD). Aβ aggregates to form large, insoluble fibrils that deposit as senile plaques in AD brains. The process of aggregation is nucleation-dependent, in which the formation of a nucleus is the rate-limiting step that controls the physiochemical fate of the aggregates formed. Therefore, understanding the properties of the nucleus and pre-nucleation events will be significant in reducing the existing knowledge gap in AD pathogenesis. In this report, we have determined the plausible range of the critical nucleation number n*, the number of monomers associated within the nucleus, for a homogeneous aggregation model with a single unique nucleation event, by two independent methods: a reduced-order stability analysis and an ordinary differential equation based numerical analysis, supported by experimental biophysics. The results establish that the most likely range of n* is between 7 and 14 and that, within this range, n* = 12 closely supports the experimental data. These numbers are in agreement with those previously reported, and importantly, the report establishes a new modeling framework using two independent approaches towards a convergent solution in modeling complex aggregation reactions. Our model also suggests that the formation of large protofibrils is dependent on the nature of n*, further supporting the idea that pre-nucleation events are significant in controlling the fate of the larger aggregates formed. This report has re-opened an old problem with a new perspective and holds promise towards revealing the molecular events in amyloid pathologies in the future. Copyright © 2015 Elsevier Inc. All rights reserved.
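
    To make the nucleation-dependent picture concrete, here is a minimal ODE sketch of a single-nucleation aggregation scheme using the paper's best-supported critical nucleus size n* = 12. The kinetic form and the rate constants are our assumptions for illustration, not the authors' model equations.

    ```python
    from scipy.integrate import solve_ivp

    def aggregation_rhs(t, y, kn, kp, n_star):
        # Monomers m form nuclei at rate kn * m**n_star (rate-limiting step);
        # fibril mass M then grows by elongation at rate kp * m * P, where P
        # is the fibril number concentration. Total mass m + M is conserved.
        m, P, M = y
        nucleation = kn * m ** n_star
        elongation = kp * m * P
        return [-n_star * nucleation - elongation,
                nucleation,
                n_star * nucleation + elongation]

    sol = solve_ivp(aggregation_rhs, (0.0, 50.0), [1.0, 0.0, 0.0],
                    args=(1e-4, 5.0, 12))
    print(sol.y[2, -1])  # fibril mass at t = 50 (arbitrary units)
    ```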

  15. Modeling Energy and Development : An Evaluation of Models and Concepts

    NARCIS (Netherlands)

    Ruijven, Bas van; Urban, Frauke; Benders, René M.J.; Moll, Henri C.; Sluijs, Jeroen P. van der; Vries, Bert de; Vuuren, Detlef P. van

    2008-01-01

    Most global energy models are developed by institutes from developed countries, focusing primarily on issues that are important in industrialized countries. Evaluation of the results for Asia of the IPCC/SRES models shows that broad concepts of energy and development, the energy ladder and the

  16. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (see Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: (1) uncertainty in HETT is relatively small for early times and increases with transit times; (2) uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; (3) introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; (4) hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model

  17. Evaluation in the resonance range of nuclei with a mass number above 220

    International Nuclear Information System (INIS)

    Ribon, P.

    1970-01-01

    The author discusses the problems posed by the evaluation of neutron data for fissile or fertile nuclei in the range of resolved or unresolved resonances. It appears to take several years until the data of an experiment are used by the reactor physicists. If one wants to have recent data at one's disposal, one cannot have recourse to evaluated-data libraries. Moreover, the existing parameter sets are only fragmentary. A new evaluation is, therefore, necessary for nearly all of these nuclei, but it cannot be based upon different parameter sets; these are indeed contradictory, and the evaluator will have to go back to the original data. Using the set of σf data for 235U, the author shows that a careful comparison of the data reveals unsuspected local defects. Some examples illustrate the deviations between analyses carried out by different methods and between their results, as well as the divergences thus established. The parameters or cross-sections are far from being known with the precision one would desire. This fact gives rise to anomalies in the interpretation of data necessary for understanding and simulation in the range of unresolved resonances. However, the introduction of concepts connected with sub-threshold fission noticeably furthers this understanding. A comparison of the methods of analysis must therefore be accompanied by more and more accurate measurements (evaluation and correction of systematic errors). (author) [fr

  18. Coordination number constraint models for hydrogenated amorphous Si deposited by catalytic chemical vapour deposition

    Science.gov (United States)

    Kawahara, Toshio; Tabuchi, Norikazu; Arai, Takashi; Sato, Yoshikazu; Morimoto, Jun; Matsumura, Hideki

    2005-02-01

    We measured structure factors of hydrogenated amorphous Si by x-ray diffraction and analysed the obtained structures using a reverse Monte Carlo (RMC) technique. A small shoulder in the measured structure factor S(Q) was observed on the larger Q side of the first peak. The RMC results with an unconstrained model did not clearly show the small shoulder. Adding constraints for coordination numbers 2 and 3, the small shoulder was reproduced and the agreement with the experimental data became better. The ratio of the constrained coordination numbers was consistent with the ratio of Si-H and Si-H2 bonds which was estimated by the Fourier transformed infrared spectra of the same sample. This shoulder and the oscillation of the corresponding pair distribution function g(r) at large r seem to be related to the low randomness of cat-CVD deposited a-Si:H.

  20. An Architecturally Constrained Model of Random Number Generation and its Application to Modelling the Effect of Generation Rate

    Directory of Open Access Journals (Sweden)

    Nicholas J. Sexton

    2014-07-01

    Full Text Available Random number generation (RNG) is a complex cognitive task for human subjects, requiring deliberative control to avoid production of habitual, stereotyped sequences. Under various manipulations (e.g., speeded responding, transcranial magnetic stimulation, or neurological damage) the performance of human subjects deteriorates, as reflected in a number of qualitatively distinct, dissociable biases. For example, the intrusion of stereotyped behaviour (e.g., counting) increases at faster rates of generation. Theoretical accounts of the task postulate that it requires the integrated operation of multiple, computationally heterogeneous cognitive control ('executive') processes. We present a computational model of RNG, within the framework of a novel, neuropsychologically-inspired cognitive architecture, ESPro. Manipulating the rate of sequence generation in the model reproduced a number of key effects observed in empirical studies, including increasing sequence stereotypy at faster rates. Within the model, this was due to time limitations on the interaction of supervisory control processes, namely, task setting, proposal of responses, monitoring, and response inhibition. The model thus supports the fractionation of executive function into multiple, computationally heterogeneous processes.

  1. Investigation of the influence of turbulence models on the prediction of heat transfer to low Prandtl number fluids

    International Nuclear Information System (INIS)

    Thiele, R.; Ma, W.; Anglart, H.

    2011-01-01

    Despite many advances in computational fluid dynamics (CFD), heat transfer modeling and code validation for liquid metal flows need to be improved. This contribution aims to provide validation of several turbulence models implemented in OpenFOAM. 6 different low Reynolds number and 3 high Reynolds number turbulence models have been validated against experimental data for 3 different Reynolds numbers. The results show that most models are able to predict the temperature profile tendencies and that, in particular, the k-ω-SST model by Menter has good predictive capabilities. However, all turbulence models show deteriorating capabilities with decreasing Reynolds numbers. (author)

  2. Novel methods for evaluation of the Reynolds number of synthetic jets

    Czech Academy of Sciences Publication Activity Database

    Kordík, Jozef; Broučková, Zuzana; Vít, T.; Pavelka, Miroslav; Trávníček, Zdeněk

    2014-01-01

    Roč. 55, č. 6 (2014), 1757_1-1757_16 ISSN 0723-4864 R&D Projects: GA ČR GPP101/12/P556 Institutional support: RVO:61388998 Keywords : synthetic jet * synthetic jet actuator * Reynolds number Subject RIV: BK - Fluid Dynamics Impact factor: 1.670, year: 2014 http://link.springer.com/article/10.1007%2Fs00348-014-1757-x

  3. A model for estimating the minimum number of offspring to sample in studies of reproductive success.

    Science.gov (United States)

    Anderson, Joseph H; Ward, Eric J; Carlson, Stephanie M

    2011-01-01

    Molecular parentage permits studies of selection and evolution in fecund species with cryptic mating systems, such as fish, amphibians, and insects. However, there exists no method for estimating the number of offspring that must be assigned parentage to achieve robust estimates of reproductive success when only a fraction of offspring can be sampled. We constructed a 2-stage model that first estimated the mean (μ) and variance (v) in reproductive success from published studies on salmonid fishes and then sampled offspring from reproductive success distributions simulated from the μ and v estimates. Results provided strong support for modeling salmonid reproductive success via the negative binomial distribution and suggested that few offspring samples are needed to reject the null hypothesis of uniform offspring production. However, the sampled reproductive success distributions deviated significantly (χ2 goodness-of-fit test p value < 0.05) from the underlying reproductive success distribution at rates often >0.05 and as high as 0.24, even when hundreds of offspring were assigned parentage. In general, reproductive success patterns were less accurate when offspring were sampled from cohorts with larger numbers of parents and greater variance in reproductive success. Our model can be reparameterized with data from other species and will aid researchers in planning reproductive success studies by providing explicit sampling targets required to accurately assess reproductive success.
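
    The two-stage model is easy to emulate: given the mean μ and variance v estimated in stage one, stage two draws per-parent offspring counts from a negative binomial. The sketch below uses the standard gamma-Poisson construction of that distribution (valid for v > μ); the example values are hypothetical, not the published salmonid estimates.

    ```python
    import numpy as np

    def sample_reproductive_success(mu, var, n_parents, seed=1):
        # Negative binomial via its gamma-Poisson mixture, parameterized by
        # mean mu and variance var (var > mu): shape r = mu^2 / (var - mu),
        # success probability p = mu / var, gamma scale (1 - p) / p.
        r = mu ** 2 / (var - mu)
        p = mu / var
        rng = np.random.default_rng(seed)
        lam = rng.gamma(shape=r, scale=(1.0 - p) / p, size=n_parents)
        return rng.poisson(lam)

    # e.g. 50 parents with mean 5 offspring and variance 25 (hypothetical):
    offspring = sample_reproductive_success(5.0, 25.0, 50)
    print(offspring.mean(), offspring.var())
    ```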

  4. CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE

    Directory of Open Access Journals (Sweden)

    José Luis Bernal Agudo

    2015-06-01

    Full Text Available The evaluation model that the LOMCE projects sinks its roots into neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation at the center of the teaching-learning processes. The model reflects poor planning, since the theory that justifies it is not developed into coherent proposals; there is an excessive concern for excellence, and diversity is left out. A comprehensive way of understanding education should be recovered.

  5. Study on team evaluation. Team process model for team evaluation

    International Nuclear Information System (INIS)

    Sasou Kunihide; Ebisu, Mitsuhiro; Hirose, Ayako

    2004-01-01

    Several studies have been done to evaluate or improve team performance in the nuclear and aviation industries; crew resource management is the typical example. In addition, team evaluation has recently gathered interest for other teams such as lawyers, medical staff, accountants, psychiatrists, and executives. However, most evaluation methods focus on the results of team behavior that can be observed through training or actual business situations. What is expected of a team is not only resolving problems but also training the younger members destined to lead the next generation. Therefore, the authors set the final goal of this study as establishing a series of methods to evaluate and improve teams inclusively, covering decision making, motivation, staffing, etc. As the first step, this study develops a team process model describing viewpoints for the evaluation. A team process is defined as a kind of force that activates or inactivates the competencies of individuals, which are the components of the team's competency. To identify the team processes, the authors discussed the merits of team behavior with experienced training instructors and shift supervisors of nuclear/thermal power plants. The discussion identified four team merits and many components needed to realize those merits. Classifying those components into eight groups of team processes - 'Orientation', 'Decision Making', 'Power and Responsibility', 'Workload Management', 'Professional Trust', 'Motivation', 'Training' and 'Staffing' - the authors propose a Team Process Model with two to four sub-processes in each team process. In the future, the authors will develop methods to evaluate some of the team processes for nuclear/thermal power plant operation teams. (author)

  6. Sunflower petals: Some physical properties and modeling distribution of their number, dimensions, and mass

    Directory of Open Access Journals (Sweden)

    Amir Hossein Mirzabe

    2018-06-01

    Full Text Available The sunflower petal is a part of the sunflower that has drawn attention and found several applications in recent years. These applications justify gathering information about physical properties, mechanical properties, drying trends, etc. in order to design new machines and use new methods to harvest or dry the sunflower petals. For three varieties of sunflower, the picking force of petals was measured; the number of petals on each head was counted; the unit mass and 1000-unit mass of fresh petals were measured; and the length, width, and projected area of fresh petals were calculated based on an image processing technique. The frequency distributions of these parameters were modeled using statistical distribution models, namely Gamma, Generalized Extreme Value (G.E.V), Lognormal, and Weibull. Results showed that as the number of days after the first petal appeared on each head increased from 5 to 14 and the loading rate decreased from 150 g min−1 to 50 g min−1, the picking force decreased for all three varieties, but the diameter of the sunflower head had different effects on picking force for each variety. Length, width, and number of petals of the Dorsefid variety ranged from 38.52 to 95.44 mm, 3.80 to 9.28 mm and 29 to 89, respectively. The corresponding values ranged from 34.19 to 88.18 mm, 4.28 to 10.60 mm and 21 to 89, respectively, for the Shamshiri variety and from 44.47 to 114.63 mm, 7.03 to 20.31 mm and 29 to 89 for the Sirena variety. Results of frequency distribution modeling indicated that in most cases, G.E.V and Weibull distributions performed better than the other distributions. Keywords: Sunflower (Helianthus annuus L.) petal, Picking force, Image processing, Fibonacci sequence, Lucas sequence
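
    A generic way to reproduce the frequency-distribution modelling step is to fit the paper's four candidate families with scipy and rank them by a goodness-of-fit statistic. The sketch below uses the Kolmogorov-Smirnov statistic and synthetic petal lengths, since the measured data are not available here.

    ```python
    import numpy as np
    from scipy import stats

    def fit_and_rank(samples):
        # Fit each candidate family by maximum likelihood and rank by the
        # Kolmogorov-Smirnov statistic (smaller indicates a closer fit).
        candidates = ["gamma", "genextreme", "lognorm", "weibull_min"]
        results = []
        for name in candidates:
            dist = getattr(stats, name)
            params = dist.fit(samples)
            ks = stats.kstest(samples, name, args=params).statistic
            results.append((ks, name))
        return sorted(results)

    # Synthetic petal lengths (mm) standing in for the measured data:
    lengths = np.random.default_rng(0).normal(65.0, 12.0, 200)
    for ks, name in fit_and_rank(lengths):
        print(f"{name:12s} KS = {ks:.3f}")
    ```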

  7. A number-projected model with generalized pairing interaction in application to rotating nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Satula, W. [Warsaw Univ. (Poland)]|[Joint Institute for Heavy Ion Research, Oak Ridge, TN (United States)]|[Univ. of Tennessee, Knoxville, TN (United States)]|[Royal Institute of Technology, Stockholm (Sweden); Wyss, R. [Royal Institute of Technology, Stockholm (Sweden)

    1996-12-31

    A cranked mean-field model that takes into account both T=1 and T=0 pairing interactions is presented. The like-particle pairing interaction is described by means of a standard seniority force. The neutron-proton channel includes simultaneously correlations among particles moving in time reversed orbits (T=1) and identical orbits (T=0). The coupling between different pairing channels and nuclear rotation is taken into account self-consistently. Approximate number-projection is included by means of the Lipkin-Nogami method. The transitions between different pairing phases are discussed as a function of neutron/proton excess, Tz, and rotational frequency, ℏω.

  8. A supersymmetric matrix model: II. Exploring higher-fermion-number sectors

    CERN Document Server

    Veneziano, Gabriele

    2006-01-01

    Continuing our previous analysis of a supersymmetric quantum-mechanical matrix model, we study in detail the properties of its sectors with fermion number F=2 and 3. We confirm all previous expectations, modulo the appearance, at strong coupling, of two new bosonic ground states causing a further jump in Witten's index across a previously identified critical 't Hooft coupling λc. We are able to elucidate the origin of these new SUSY vacua by considering the λ → ∞ limit and a strong coupling expansion around it.

  9. Periodic matrix population models: growth rate, basic reproduction number, and entropy.

    Science.gov (United States)

    Bacaër, Nicolas

    2009-10-01

    This article considers three different aspects of periodic matrix population models. First, a formula for the sensitivity analysis of the growth rate lambda is obtained that is simpler than the one obtained by Caswell and Trevisan. Secondly, the formula for the basic reproduction number R0 in a constant environment is generalized to the case of a periodic environment. Some inequalities between lambda and R0 proved by Cushing and Zhou are also generalized to the periodic case. Finally, we add some remarks on Demetrius' notion of evolutionary entropy H and its relationship to the growth rate lambda in the periodic case.
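
    For the growth rate in a periodic environment, λ is the spectral radius of the product of the seasonal projection matrices over one full period. The sketch below illustrates that computation with hypothetical two-stage vital rates; the paper's R0 generalization and the simplified sensitivity formula are not reproduced.

    ```python
    import numpy as np

    def periodic_growth_rate(seasonal_matrices):
        # lambda for a periodic matrix population model: the spectral
        # radius of the one-period product, with seasons applied in order.
        A = np.eye(seasonal_matrices[0].shape[0])
        for B in seasonal_matrices:
            A = B @ A
        return max(abs(np.linalg.eigvals(A)))

    # Two-season, two-stage toy example (hypothetical vital rates):
    summer = np.array([[0.0, 2.0],
                       [0.5, 0.8]])
    winter = np.array([[0.1, 0.5],
                       [0.3, 0.6]])
    print(periodic_growth_rate([summer, winter]))  # lambda per full period
    ```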

  10. PERFORMANCE EVALUATION OF EMPIRICAL MODELS FOR VENTED LEAN HYDROGEN EXPLOSIONS

    OpenAIRE

    Anubhav Sinha; Vendra C. Madhav Rao; Jennifer X. Wen

    2017-01-01

    Explosion venting is a method commonly used to prevent or minimize damage to an enclosure caused by an accidental explosion. An estimate of the maximum overpressure generated through the explosion is an important parameter in the design of the vents. Various engineering models (Bauwens et al., 2012; Molkov and Bragin, 2015) and European (EN 14994) and USA standards (NFPA 68) are available to predict such overpressure. In this study, their performance is evaluated using a number of published exper...

  11. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    Science.gov (United States)

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.

  12. Unified theory to evaluate the effect of concentration difference and Peclet number on electroosmotic mobility error of micro electroosmotic flow

    KAUST Repository

    Wang, Wentao

    2012-03-01

    Both theoretical analysis and nonlinear 2D numerical simulations are used to study the concentration difference and Peclet number effect on the measurement error of electroosmotic mobility in microchannels. We propose a compact analytical model for this error as a function of normalized concentration difference and Peclet number in micro electroosmotic flow. The analytical predictions of the errors are consistent with the numerical simulations. © 2012 IEEE.

  13. An Efficient Dynamic Trust Evaluation Model for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhengwang Ye

    2017-01-01

    Full Text Available Trust evaluation is an effective method to detect malicious nodes and ensure security in wireless sensor networks (WSNs). In this paper, an efficient dynamic trust evaluation model (DTEM) for WSNs is proposed, which implements accurate, efficient, and dynamic trust evaluation by dynamically adjusting the weights of direct trust and indirect trust and the parameters of the update mechanism. To achieve accurate trust evaluation, the direct trust is calculated considering multitrust, including communication trust, data trust, and energy trust, with a punishment factor and regulating function. The indirect trust is evaluated conditionally by the trusted recommendations from a third party. Moreover, the integrated trust is measured by assigning dynamic weights for direct trust and indirect trust and combining them. Finally, we propose an update mechanism based on a sliding window and an induced ordered weighted averaging operator to enhance flexibility. We can dynamically adapt the parameters and the number of interaction-history windows according to the actual needs of the network to realize dynamic updating of the direct trust value. Simulation results indicate that the proposed model is an efficient, dynamic, and attack-resistant trust evaluation model. Compared with existing approaches, the proposed dynamic trust model performs better in defending against multiple malicious attacks.
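
    The integration step described above reduces to a weighted combination of direct and indirect trust, with direct trust refreshed over a sliding window of recent interactions. The sketch below is a simplified stand-in: fixed hand-set weights replace the paper's dynamically adjusted ones, and the weighted window average only gestures at the induced ordered weighted averaging operator.

    ```python
    def sliding_window_trust(history, window_weights):
        # Weighted average of the most recent direct-trust observations;
        # heavier weights on later entries emphasize recent behaviour.
        recent = history[-len(window_weights):]
        w = window_weights[-len(recent):]
        return sum(wi * ti for wi, ti in zip(w, recent)) / sum(w)

    def integrated_trust(direct, indirect, w_direct):
        # Integrated trust as a weighted combination of direct and
        # indirect trust (the paper adjusts w_direct dynamically).
        return w_direct * direct + (1.0 - w_direct) * indirect

    history = [0.9, 0.85, 0.4, 0.3]     # direct-trust samples, oldest first
    direct = sliding_window_trust(history, [0.1, 0.2, 0.3, 0.4])
    print(integrated_trust(direct, indirect=0.7, w_direct=0.6))  # 0.58
    ```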

  14. Evaluating the relationship between dental caries number and salivary level of IgA in adults

    Science.gov (United States)

    Haeri-Araghi, Hesam; Safarzadeh-Khosroshahi, Shadab; Mirzadeh, Monirsadat

    2018-01-01

    Background: Dental caries is the most common infectious disease of the mouth and also the most common chronic disease of childhood. Saliva plays different roles in the oral cavity; for example, salivary immunoglobulins play a significant role in bodily and oral immunity. Various studies have been conducted on the different effects of IgA on the oral cavity, especially dental caries, and reported controversial results. The current study aimed to compare salivary IgA levels at different stages of dental caries in adults. Material and Methods: A total of 40 adults, aged 20 to 40 years, referred to the department of oral medicine at Qazvin Faculty of Dentistry, were selected voluntarily based on the number of decayed teeth. Their unstimulated saliva was collected by the spitting method. The cases were assigned to 4 groups of 10 each, based on the number of decayed teeth, as follows: Group 1: caries free; Group 2: with 1 or 2 decayed teeth; Group 3: with 3 or 4 decayed teeth; and Group 4: with 5 or more decayed teeth. None of the cases had systemic diseases or a history of using medicines which affect the quality or quantity of saliva. The salivary IgA level of the cases was measured immunoturbidimetrically and analyzed by ANOVA and t test. Results: A significant difference was observed between groups 1 and 4, but there was no significant difference between the other groups. Conclusions: According to the results of the current study, salivary IgA can be considered an index of the function of the immune system, which may increase with the number of decayed teeth. In fact, the increase of salivary IgA is just the response of the immune system to the accumulation of microorganisms and may be the body's attempt to control them. Key words: Saliva, IgA, Dental caries. PMID:29670718

  15. Evaluation of R and D volume 2 number 3; Evaluation de la R-D. Volume 2, Numero 3

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, F; Cheah, C; Dalpe, R; O'Brecht, M [eds.]

    1994-12-31

    A Canadian newsletter on the evaluation of research and development. This issue contains an econometric assessment of the impact of Research and Development programs, the choice of location for pharmaceutical Research and Development, industry's scientific publications, standards as a strategic instrument, and how much future Research and Development an organization can justify.

  16. Evaluation of Workflow Management Systems - A Meta Model Approach

    Directory of Open Access Journals (Sweden)

    Michael Rosemann

    1998-11-01

    Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now, a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction on the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users in specifying their requirements for a workflow management system.

  17. Evaluating the AS-level Internet models: beyond topological characteristics

    International Nuclear Information System (INIS)

    Fan Zheng-Ping

    2012-01-01

    A large number of models have been proposed to model the Internet in the past decades. However, the question of which models best describe the Internet has remained open. By analysing the evolving dynamics of the Internet, we suggest that at the autonomous system (AS) level, a suitable Internet model should at least be heterogeneous and have a linearly growing mechanism. More importantly, we show that the roles of topological characteristics in evaluating and differentiating Internet models are apparently over-estimated from an engineering perspective. Also, we find that an assortative network is not necessarily more robust than a disassortative network and that a smaller average shortest path length does not necessarily mean a higher robustness, which is different from previous observations. Our analytic results are helpful not only for the Internet, but also for other general complex networks. (interdisciplinary physics and related areas of science and technology)

  18. Large boson number IBM calculations and their relationship to the Bohr model

    International Nuclear Information System (INIS)

    Thiamova, G.; Rowe, D.J.

    2009-01-01

    Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to the seniority vmax = 40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)) and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = vmax = 40 is sufficient to obtain the IBM results converged to its Bohr contraction limit. This will be done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states and at the behavior of the energy and B(E2) transition strength ratios with increasing seniority. (orig.)

  19. Improved pump turbine transient behaviour prediction using a Thoma number-dependent hillchart model

    International Nuclear Information System (INIS)

    Manderla, M; Koutnik, J; Kiniger, K

    2014-01-01

    Water hammer phenomena are important issues for high head hydro power plants. Especially, if several reversible pump-turbines are connected to the same waterways there may be strong interactions between the hydraulic machines. The prediction and coverage of all relevant load cases is challenging and difficult using classical simulation models. On the basis of a recent pump-storage project, dynamic measurements motivate an improved modeling approach making use of the Thoma number dependency of the actual turbine behaviour. The proposed approach is validated for several transient scenarios and turns out to increase correlation between measurement and simulation results significantly. By applying a fully automated simulation procedure broad operating ranges can be covered which provides a consistent insight into critical load case scenarios. This finally allows the optimization of the closing strategy and hence the overall power plant performance
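
    For reference, the Thoma number on which the proposed hill-chart model depends is the standard cavitation coefficient: net positive suction head divided by the machine head. The one-liner below encodes that textbook definition with made-up values; the paper's sigma-dependent hill-chart parameterization itself is not reproduced.

    ```python
    def thoma_number(npsh_m, head_m):
        # Standard Thoma cavitation coefficient: sigma = NPSH / H.
        return npsh_m / head_m

    print(thoma_number(npsh_m=22.0, head_m=400.0))  # sigma = 0.055
    ```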

  20. Improved pump turbine transient behaviour prediction using a Thoma number-dependent hillchart model

    Science.gov (United States)

    Manderla, M.; Kiniger, K.; Koutnik, J.

    2014-03-01

    Water hammer phenomena are important issues for high head hydro power plants. In particular, if several reversible pump-turbines are connected to the same waterways, there may be strong interactions between the hydraulic machines. The prediction and coverage of all relevant load cases is challenging and difficult using classical simulation models. On the basis of a recent pumped-storage project, dynamic measurements motivate an improved modeling approach making use of the Thoma number dependency of the actual turbine behaviour. The proposed approach is validated for several transient scenarios and turns out to increase the correlation between measurement and simulation results significantly. By applying a fully automated simulation procedure, broad operating ranges can be covered, which provides a consistent insight into critical load case scenarios. This finally allows the optimization of the closing strategy and hence the overall power plant performance.
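
    The records above do not reproduce the model equations; as a rough illustration of what a Thoma number-dependent hill chart lookup could involve, the following sketch (scipy assumed; grid axes and values are placeholders, not measured turbine data) interpolates a unit-flow characteristic over unit speed, guide-vane opening and the Thoma number:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Placeholder hill chart: unit flow q11 tabulated on a grid of Thoma
        # number sigma, unit speed n11 and guide-vane opening.
        sigma = np.array([0.05, 0.10, 0.20])
        n11 = np.linspace(60.0, 110.0, 6)
        gvo = np.linspace(10.0, 40.0, 4)
        rng = np.random.default_rng(0)
        q11 = rng.uniform(0.3, 1.2, size=(len(sigma), len(n11), len(gvo)))

        chart = RegularGridInterpolator((sigma, n11, gvo), q11)

        # A transient simulation step would query the chart at the machine's
        # current operating point, including its current Thoma number.
        print(chart([[0.08, 85.0, 25.0]]))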

  1. [IMSS in numbers. Evaluation of the performance of health institutions in Mexico, 2004].

    Science.gov (United States)

    2006-01-01

    The evaluation of the performance of health institutions in Mexico during 2004 was done using 29 indicators that describe intra-hospital mortality rates, productivity of health services, availability of health resources, quality of care, safety, investment and costs of health care, and the satisfaction of users of health services. This exercise describes the efficiency and organization of the health services provided by the different health institutions and allows the performance of each institution to be compared and weighed. Results indicate differences in the availability of resources, inequity in the financing of health care services, and inefficiency in the use of resources, but they also describe the level of efficacy of certain institutions and the satisfaction of different users with health services. The evaluation of the performance of all health institutions should provide the means to improve the whole process of health care and to increase the quality of care in all health institutions in the country.

  2. Retrieving infinite numbers of patterns in a spin-glass model of immune networks

    Science.gov (United States)

    Agliari, E.; Annibale, A.; Barra, A.; Coolen, A. C. C.; Tantari, D.

    2017-01-01

    The similarity between neural and (adaptive) immune networks has been known for decades, but the mechanism that allows the immune system, unlike associative neural networks, to recall and execute a large number of memorized defense strategies in parallel has so far not been understood. The explanation turns out to lie in the network topology. Neurons interact typically with a large number of other neurons, whereas interactions among lymphocytes in immune networks are very specific, and described by graphs with finite connectivity. In this paper we use replica techniques to solve a statistical mechanical immune network model with “coordinator branches” (T-cells) and “effector branches” (B-cells), and show how the finite connectivity enables the coordinators to manage an extensive number of effectors simultaneously, even above the percolation threshold (where clonal cross-talk is not negligible). A consequence of its underlying topological sparsity is that the adaptive immune system exhibits only weak ergodicity breaking, so that spontaneous switch-like effects such as bi-stabilities are also present: the latter may play a significant role in the maintenance of immune homeostasis.

  3. Mass number dependence of total neutron cross section; a discussion based on the semi-classical optical model

    International Nuclear Information System (INIS)

    Angeli, Istvan

    1990-01-01

    The dependence of the total neutron cross section on mass number can be calculated with the black nucleus formula, according to the optical model. The fine structure of the mass number dependence is studied, and a correction factor formula is given on the basis of a semi-classical optical model, yielding results in good agreement with experimental data. In addition to the mass number dependence, the neutron-energy dependence can also be calculated using this model. (K.A.)

  4. Evaluating Performances of Traffic Noise Models | Oyedepo ...

    African Journals Online (AJOL)

    Traffic noise levels in decibels dB(A) were measured at six locations using a 407780A Integrating Sound Level Meter, while spot speeds and traffic volumes were collected with a cine-camera. The predicted sound exposure level (SEL) was evaluated using the Burgess, British and FHWA models. The average noise levels obtained are 77.64 ...

  5. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
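
    The paper's Erlangian queuing analysis is not reproduced in the record; as a rough sketch of the kind of computation involved, the following Python snippet (illustrative arrival and service rates, not the paper's parameters) evaluates steady-state M/M/c waiting metrics with the Erlang-C formula:

        import math

        def mmc_metrics(c, lam, mu):
            """M/M/c steady state: P(wait), mean queueing delay, mean sojourn time."""
            a = lam / mu                      # offered load in Erlangs
            rho = a / c                       # utilisation, must be < 1
            p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                        + a**c / (math.factorial(c) * (1.0 - rho)))
            p_wait = a**c / (math.factorial(c) * (1.0 - rho)) * p0   # Erlang C
            wq = p_wait / (c * mu - lam)      # mean waiting time in the queue
            return p_wait, wq, wq + 1.0 / mu

        # e.g. 4 service desks, 300 requests/s arriving, 100 requests/s per desk
        print(mmc_metrics(4, 300.0, 100.0))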

  6. Evaluation of Usability Utilizing Markov Models

    Science.gov (United States)

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method based on Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest in this research correspond to the possible accesses of users…
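
    A minimal sketch of the quantitative idea (numpy assumed; the screens and transition probabilities are hypothetical, not taken from the study): model the learning system's screens as Markov states and derive the long-run fraction of visits each screen receives from the transition matrix:

        import numpy as np

        # Hypothetical usage model: rows/columns are screens of the system,
        # entries are estimated click-through probabilities.
        P = np.array([[0.1, 0.6, 0.3],    # Home  -> Home/Course/Help
                      [0.2, 0.7, 0.1],    # Course
                      [0.5, 0.4, 0.1]])   # Help

        # Stationary distribution: left eigenvector of P for eigenvalue 1.
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmax(np.real(vals))])
        pi /= pi.sum()
        print(dict(zip(["Home", "Course", "Help"], np.round(pi, 3))))

    A screen that soaks up a disproportionate share of visits (e.g. Help) can then be read as a usability red flag.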

  7. Performance Evaluation Model for Application Layer Firewalls.

    Directory of Open Access Journals (Sweden)

    Shichang Xuan

    Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.

  8. Credit Risk Evaluation : Modeling - Analysis - Management

    OpenAIRE

    Wehrspohn, Uwe

    2002-01-01

    An analysis and further development of the building blocks of modern credit risk management: definitions of default; estimation of default probabilities; exposures; recovery rates; pricing; concepts of portfolio dependence; time horizons for risk calculations; quantification of portfolio risk; estimation of risk measures; portfolio analysis and portfolio improvement; evaluation and comparison of credit risk models; analytic portfolio loss distributions. The thesis contributes to the evaluatio...

  9. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes
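
    Two of the report's ingredients, Latin hypercube sampling and multiplicative chains of independent lognormal inputs, can be combined in a short sketch (scipy assumed; the chain and its log-space parameters are invented for illustration):

        import numpy as np
        from scipy.stats import lognorm, qmc

        # Hypothetical multiplicative chain: output = source * transfer * uptake.
        params = [(0.0, 0.5), (-1.0, 0.8), (-2.0, 0.3)]   # (mu, sigma) in log space

        sampler = qmc.LatinHypercube(d=len(params), seed=0)
        u = sampler.random(n=1000)                         # stratified uniforms
        x = np.column_stack([lognorm.ppf(u[:, i], s=s, scale=np.exp(m))
                             for i, (m, s) in enumerate(params)])
        output = x.prod(axis=1)

        # A product of independent lognormals is lognormal with summed
        # log-moments, so the sampled median has an analytic check.
        print("LHS median:", np.median(output),
              "analytic median:", np.exp(sum(m for m, _ in params)))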

  10. Decimal representations are not distinct from natural number representations – Evidence from a combined eye-tracking and computational modelling approach

    Directory of Open Access Journals (Sweden)

    Stefan eHuber

    2014-04-01

    Full Text Available Decimal fractions comply with the base-10 notational system of natural Arabic numbers. Nevertheless, recent research suggested that decimal fractions may be represented differently than natural numbers because two number processing effects (i.e., semantic interference and compatibility effects) differed in size between decimal fractions and natural numbers. In the present study, we examined whether these differences indeed indicate that decimal fractions are represented differently from natural numbers. We provide an alternative explanation for the semantic congruity effect, namely a string length congruity effect. Moreover, we suggest that the smaller compatibility effect for decimal fractions compared to natural numbers was driven by differences in processing strategy (sequential vs. parallel). To evaluate this claim, we manipulated the tenths and hundredths digits in a magnitude comparison task with participants' eye movements recorded, while the unit digits remained identical. In addition, we evaluated whether our empirical findings could be simulated by an extended version of our computational model originally developed to simulate magnitude comparisons of two-digit natural numbers. In the eye-tracking study, we found evidence that participants processed decimal fractions more sequentially than natural numbers because of the identical leading digit. Importantly, our model was able to account for the smaller compatibility effect found for decimal fractions. Moreover, string length congruity was an alternative account for the prolonged reaction times for incongruent decimal pairs. Consequently, we suggest that representations of natural numbers and decimal fractions do not differ.

  11. Number of Children and Telomere Length in Women: A Prospective, Longitudinal Evaluation

    Science.gov (United States)

    Barha, Cindy K.; Hanna, Courtney W.; Salvante, Katrina G.; Wilson, Samantha L.; Robinson, Wendy P.; Altman, Rachel M.; Nepomnaschy, Pablo A.

    2016-01-01

    Life history theory (LHT) predicts a trade-off between reproductive effort and the pace of biological aging. Energy invested in reproduction is not available for tissue maintenance, thus having more offspring is expected to lead to accelerated senescence. Studies conducted in a variety of non-human species are consistent with this LHT prediction. Here we investigate the relationship between the number of surviving children born to a woman and telomere length (TL, a marker of cellular aging) over 13 years in a group of 75 Kaqchikel Mayan women. Contrary to LHT's prediction, women who had fewer children exhibited shorter TLs than those who had more children (p = 0.045) after controlling for TL at the onset of the 13-year study period. An “ultimate” explanation for this apparently protective effect of having more children may lie with humans' cooperative-breeding strategy. In a number of socio-economic and cultural contexts, having more children appears to be linked to an increase in social support for mothers (e.g., allomaternal care). Higher social support has been argued to reduce the costs of further reproduction. Lower reproductive costs may make more metabolic energy available for tissue maintenance, resulting in a slower pace of cellular aging. At a “proximate” level, the mechanisms involved may include the actions of the gonadal steroid estradiol, which increases dramatically during pregnancy. Estradiol is known to protect TL from the effects of oxidative stress as well as to increase telomerase activity, an enzyme that maintains TL. Future research should explore the potential role of social support, as well as that of estradiol and other potential biological pathways, in the trade-offs between reproductive effort and the pace of cellular aging within and among human as well as non-human populations. PMID:26731744

  12. Sheep numbers required for dry matter digestibility evaluations when fed fresh perennial ryegrass or forage rape.

    Science.gov (United States)

    Sun, Xuezhao; Krijgsman, Linda; Waghorn, Garry C; Kjestrup, Holly; Koolaard, John; Pacheco, David

    2017-03-01

    Research trials with fresh forages often require accurate and precise measurement of digestibility and of the variation in digestion between individuals, and the duration of measurement periods needs to be established to ensure reliable data are obtained. The variation is likely to be greater when freshly harvested feeds such as perennial ryegrass (Lolium perenne L.) and forage rape (Brassica napus L.) are given, because their nutrient composition changes over time and in response to weather conditions. Daily feed intake and faeces output data from a digestibility trial with these forages were used to calculate the effects of differing lengths of the measurement period and differing numbers of sheep on the precision of digestibility, with a view towards the development of a protocol. Sixteen lambs aged 8 months and weighing 33 kg at the commencement of the trial were fed either perennial ryegrass or forage rape (8 per treatment group) over 2 periods with 35 d between measurements. They had been acclimatised to the diets, having grazed them for 42 d prior to 11 days of indoor measurements. The sheep numbers required for a digestibility trial with different combinations of acclimatisation and measurement period lengths were subsequently calculated for 3 levels of imposed precision upon the estimate of mean dry matter (DM) digestibility. It is recommended that, if a standard error of the mean for digestibility of 5 g/kg DM or more is acceptable and the sheep are already used to a fresh perennial ryegrass or forage rape diet, a minimum of 6 animals is needed, with 4 days of acclimatisation to individual feeding in metabolic crates followed by 7 days of measurement.

  13. Development of a Watershed-Scale Long-Term Hydrologic Impact Assessment Model with the Asymptotic Curve Number Regression Equation

    Directory of Open Access Journals (Sweden)

    Jichul Ryu

    2016-04-01

    Full Text Available In this study, 52 asymptotic Curve Number (CN) regression equations were developed for combinations of representative land covers and hydrologic soil groups. In addition, to overcome the limitations of the original Long-term Hydrologic Impact Assessment (L-THIA) model when it is applied to larger watersheds, a watershed-scale L-THIA Asymptotic CN (ACN) regression equation model (watershed-scale L-THIA ACN model) was developed by integrating the asymptotic CN regressions and various modules for direct runoff/baseflow/channel routing. The watershed-scale L-THIA ACN model was applied to four watersheds in South Korea to evaluate the accuracy of its streamflow prediction. The coefficient of determination (R2) and Nash–Sutcliffe Efficiency (NSE) values for observed versus simulated streamflows over intervals of eight days were greater than 0.6 for all four of the watersheds. The watershed-scale L-THIA ACN model, including the asymptotic CN regression equation method, can simulate long-term streamflow sufficiently well with the ten parameters that have been added for the characterization of streamflow.
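
    The record's key ingredients can be sketched in a few lines (numpy assumed; the regression parameters are invented, while the retention and initial-abstraction constants are the standard SCS-CN ones in millimetres):

        import numpy as np

        def asymptotic_cn(p, cn_inf, k, cn0=100.0):
            """Hawkins-style asymptotic fit: CN decays from cn0 toward cn_inf
            as rainfall depth p (mm) increases."""
            return cn_inf + (cn0 - cn_inf) * np.exp(-k * p)

        def scs_runoff(p, cn):
            """SCS-CN direct runoff depth (mm) for rainfall depth p (mm)."""
            s = 25400.0 / cn - 254.0          # potential maximum retention
            ia = 0.2 * s                      # initial abstraction
            return np.where(p > ia, (p - ia) ** 2 / (p + 0.8 * s), 0.0)

        p = np.array([10.0, 25.0, 50.0, 100.0])        # storm depths (mm)
        cn = asymptotic_cn(p, cn_inf=70.0, k=0.05)     # illustrative parameters
        print(np.round(scs_runoff(p, cn), 2))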

  14. A multi-model assessment of the impact of sea spray geoengineering on cloud droplet number

    Directory of Open Access Journals (Sweden)

    K. J. Pringle

    2012-12-01

    Full Text Available Artificially increasing the albedo of marine boundary layer clouds by the mechanical emission of sea spray aerosol has been proposed as a geoengineering technique to slow the warming caused by anthropogenic greenhouse gases. A previous global model study (Korhonen et al., 2010) found that only modest increases (< 20%), and sometimes even decreases, in cloud drop number (CDN) concentrations would result from emission scenarios calculated using a wind-speed-dependent geoengineering flux parameterisation. Here we extend that work to examine the conditions under which decreases in CDN can occur, and use three independent global models to quantify maximum achievable CDN changes. We find that decreases in CDN can occur when at least three of the following conditions are met: the injected particle number is < 100 cm−3, the injected diameter is > 250–300 nm, the background aerosol loading is large (≥ 150 cm−3) and the in-cloud updraught velocity is low (< 0.2 m s−1). With lower background loadings and/or increased updraught velocity, significant increases in CDN can be achieved. None of the global models predict a decrease in CDN as a result of geoengineering, although there is considerable diversity in the calculated efficiency of geoengineering, which arises from the diversity in the simulated marine aerosol distributions. All three models show a small dependence of geoengineering efficiency on the injected particle size and the geometric standard deviation of the injected mode. However, the achievability of significant cloud drop enhancements is strongly dependent on the cloud updraught speed. With an updraught speed of 0.1 m s−1, a global mean CDN of 375 cm−3 (previously estimated to cancel the forcing caused by CO2 doubling) is achievable in only about 50% of grid boxes which have > 50% cloud cover, irrespective of the amount of aerosol injected. But at stronger updraught speeds (0 ...

  15. Random number generation

    International Nuclear Information System (INIS)

    Coveyou, R.R.

    1974-01-01

    The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators. (U.S.)
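
    In the spirit of the record's mathematical model, a generator can be treated as a deterministic sequence whose output should be statistically indistinguishable from uniform samples. A minimal sketch (scipy assumed; the generator constants are the widely quoted Numerical Recipes values, and the bin count is an arbitrary choice):

        import numpy as np
        from scipy.stats import chisquare

        def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
            """Classic linear congruential generator mapped to [0, 1)."""
            out, x = np.empty(n), seed
            for i in range(n):
                x = (a * x + c) % m
                out[i] = x / m
            return out

        # One simple evaluation criterion: equidistribution of the output,
        # checked with a chi-square test over 20 equal-width bins.
        u = lcg(seed=42, n=100_000)
        counts, _ = np.histogram(u, bins=20, range=(0.0, 1.0))
        print(chisquare(counts))   # a large p-value gives no evidence against uniformity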

  16. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
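
    A hedged sketch of the expert-model idea (hmmlearn assumed; the kinematic features are random stand-ins, and the topology is a guess rather than the paper's): fit a Gaussian HMM to expert sequences and score a new run by its normalized log-likelihood under that model:

        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(0)

        # Stand-ins for kinematic feature sequences (e.g. instrument-tip velocity
        # components) recorded on a simulator; each array is (frames, features).
        expert_runs = [rng.normal(0.0, 1.0, size=(200, 4)) for _ in range(10)]
        trainee_run = rng.normal(0.3, 1.2, size=(200, 4))

        # Train the "expert model" on the concatenated expert sequences.
        X = np.vstack(expert_runs)
        lengths = [len(r) for r in expert_runs]
        expert_model = hmm.GaussianHMM(n_components=5, covariance_type="diag",
                                       n_iter=50, random_state=0)
        expert_model.fit(X, lengths)

        # Metric: per-frame log-likelihood of a new run under the expert model;
        # higher means closer to expert gesture statistics.
        print(expert_model.score(trainee_run) / len(trainee_run))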

  17. Office for Analysis and Evaluation of Operational Data 1993 annual report: Volume 8, Number 1

    International Nuclear Information System (INIS)

    1994-11-01

    This annual report of the US Nuclear Regulatory Commission's Office for Analysis and Evaluation of Operational Data (AEOD) describes activities conducted during 1993. The report is published in two parts. NUREG-1272, Vol. 8, No. 1, covers power reactors and presents an overview of the operating experience of the nuclear power industry from the NRC perspective, including comments about the trends of some key performance measures. The report also includes the principal findings and issues identified in AEOD studies over the past year and summarizes information from such sources as licensee event reports, diagnostic evaluations, and reports to the NRC's Operations Center. NUREG-1272, Vol. 8, No. 2, covers nuclear materials and presents a review of the events and concerns during 1993 associated with the use of licensed material in nonreactor applications, such as personnel overexposures and medical misadministrations. Both reports also contain a discussion of the Incident Investigation Team program and summarize both the Incident Investigation Team and Augmented Inspection Team reports. Each volume contains a list of the AEOD reports issued from 1980 through 1993

  18. External beam radiotherapy of localized prostatic adenocarcinoma. Evaluation of conformal therapy, field number and target margins

    International Nuclear Information System (INIS)

    Lennernaes, B.; Rikner, G.; Letocha, H.; Nilsson, S.

    1995-01-01

    The purpose of the present study was to identify factors of importance in the planning of external beam radiotherapy of prostatic adenocarcinoma. Seven patients with urogenital cancers were planned for external radiotherapy of the prostate. Four different techniques were used, viz. a 4-field box technique and four-, five- or six-field conformal therapy set-ups combined with three different margins (1-3 cm). The evaluations were based on the doses delivered to the rectum and the urinary bladder. A normal tissue complication probability (NTCP) was calculated for each plan using Lyman's dose volume reduction method. The most important factors that resulted in a decrease of the dose delivered to the rectum and the bladder were the use of conformal therapy and smaller margins. Conformal therapy seemed more important for the dose distribution in the urinary bladder. Five- and six-field set-ups were not significantly better than those with four fields. NTCP calculations were in accordance with the evaluation of the dose volume histograms. To conclude, four-field conformal therapy utilizing reduced margins improves the dose distribution to the rectum and the urinary bladder in the radiotherapy of prostatic adenocarcinoma. (orig.)
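
    The NTCP computation referred to can be illustrated with the Lyman-Kutcher-Burman formulation (scipy assumed; the organ parameters below are generic illustrative values, not those used in the study):

        import numpy as np
        from scipy.stats import norm

        def lkb_ntcp(doses, volumes, td50=80.0, m=0.15, n=0.12):
            """LKB NTCP from a differential DVH: bin doses (Gy) and the
            fractional organ volume in each bin."""
            geud = np.sum(volumes * doses ** (1.0 / n)) ** n   # generalized EUD
            return norm.cdf((geud - td50) / (m * td50))

        # Toy differential DVH: 40% of the organ at 30 Gy, 35% at 50 Gy, 25% at 70 Gy.
        print(round(lkb_ntcp(np.array([30.0, 50.0, 70.0]),
                             np.array([0.40, 0.35, 0.25])), 3))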

  19. Spatially explicit models, generalized reproduction numbers and the prediction of patterns of waterborne disease

    Science.gov (United States)

    Rinaldo, A.; Gatto, M.; Mari, L.; Casagrandi, R.; Righetto, L.; Bertuzzo, E.; Rodriguez-Iturbe, I.

    2012-12-01

    still lacking. Here, we show that the requirement that all the local reproduction numbers R_0 be larger than unity is neither necessary nor sufficient for outbreaks to occur when local settlements are connected by networks of primary and secondary infection mechanisms. To determine onset conditions, we derive general analytical expressions for a reproduction matrix G_0 explicitly accounting for spatial distributions of human settlements and pathogen transmission via hydrological and human mobility networks. At disease onset, a generalized reproduction number Λ_0 (the dominant eigenvalue of G_0) must be larger than unity. We also show that geographical outbreak patterns in complex environments are linked to the dominant eigenvector and to spectral properties of G_0. Tests against data and computations for the 2010 Haiti and 2000 KwaZulu-Natal cholera outbreaks, as well as against computations for metapopulation networks, demonstrate that eigenvectors of G_0 provide a synthetic and effective tool for predicting the disease course in space and time. Networked connectivity models, describing the interplay between hydrology, epidemiology and social behavior sustaining human mobility, thus prove to be key tools for emergency management of waterborne infections.
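
    The onset criterion can be illustrated numerically (numpy assumed; the matrix below is an invented stand-in, not the paper's derivation of G_0): even with every local reproduction number below unity, coupling through transport and mobility can push the dominant eigenvalue above one:

        import numpy as np

        # Hypothetical 3-node metapopulation: local reproduction numbers on the
        # diagonal, hydrological/mobility coupling off the diagonal.
        R_local = np.diag([0.8, 0.9, 0.7])          # every local value < 1
        coupling = np.array([[0.0, 0.4, 0.1],
                             [0.3, 0.0, 0.4],
                             [0.1, 0.5, 0.0]])
        G0 = R_local + coupling                      # stand-in reproduction matrix

        vals, vecs = np.linalg.eig(G0)
        k = np.argmax(vals.real)
        print("Lambda_0 =", round(vals[k].real, 3))            # outbreak if > 1
        print("dominant eigenvector:", np.round(np.abs(vecs[:, k].real), 3))

    The dominant eigenvector then ranks the settlements by their expected share in the unfolding outbreak, mirroring the record's use of spectral properties.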

  20. Constrained minimization problems for the reproduction number in meta-population models.

    Science.gov (United States)

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number (R_0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9) reported an increase of 70% in R_0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (R_v), which consists of the partial derivatives of R_v with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions are obtained in the case of two sub-populations, and bounds for the optimal solutions are derived for a larger number of sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that, for general mixing schemes, both R_0 and R_v are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.

  1. COMPLEX EVALUATION OF THE NUMBER DYNAMICS OF COLONIAL WATERBIRD COMMUNITIES (THE CASE OF SOME ISLANDS OF SIVASH REGION)

    Directory of Open Access Journals (Sweden)

    Matsyura A.V.

    2011-12-01

    Full Text Available The problem of the mathematical analysis of the number dynamics of nesting waterbirds on the islands of the south of Ukraine is examined. An algorithm for evaluating changes in the numbers of island birds is proposed, and data from long-term monitoring of bird numbers were analyzed according to this algorithm. The necessity of implementing statistical indices together with a graphic representation of island bird turnover is demonstrated. Trends of population dynamics are determined for the key species. The discussed procedure of complex evaluation is proposed for the management planning of island bird species and their habitats. The performed analysis of the number dynamics of the keystone breeding island birds showed that, with the exception of the little tern, the population status and the number prognosis are sufficiently favorable. From the long-term monitoring data we concluded that island habitats have the carrying capacity to support additional breeding birds. Under unfavorable conditions, such as increased anthropogenic pressure, competitive interactions, deficiency of food resources or drastic reduction of breeding biotopes, the birds are capable of responding successfully through turnover, even without reducing their numbers or breeding success. The extinction rate of breeding bird species from island sites correlates directly with the number of breeding species. For species of equal abundance, the extinction probability is higher for birds whose numbers are unstable and characterized by significant fluctuations. This testifies to the urgency of constant monitoring and analysis of the number dynamics of breeding bird species in the region. The suggested procedure of analysis is recommended for drawing up management plans and forecasting the numbers of breeding island bird species. More detailed analysis with the use of ...

  2. Lifetime-Aware Cloud Data Centers: Models and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Luca Chiaraviglio

    2016-06-01

    Full Text Available We present a model to evaluate the server lifetime in cloud data centers (DCs. In particular, when the server power level is decreased, the failure rate tends to be reduced as a consequence of the limited number of components powered on. However, the variation between the different power states triggers a failure rate increase. We therefore consider these two effects in a server lifetime model, subject to an energy-aware management policy. We then evaluate our model in a realistic case study. Our results show that the impact on the server lifetime is far from negligible. As a consequence, we argue that a lifetime-aware approach should be pursued to decide how and when to apply a power state change to a server.
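
    A minimal sketch of the trade-off described (pure Python; the failure rates, dwell fractions and transition penalty are invented, not the paper's model): combine per-state operating failure rates with a penalty per power-state transition and compare the resulting effective rates:

        def effective_failure_rate(dwell_fractions, state_rates,
                                   transitions_per_hour, rate_per_transition):
            """Operating contribution plus a switching contribution (failures/hour)."""
            operating = sum(f * r for f, r in zip(dwell_fractions, state_rates))
            return operating + transitions_per_hour * rate_per_transition

        rates = [1e-5, 6e-6, 4e-6]   # full power and two low-power states
        always_on = effective_failure_rate([1.0, 0.0, 0.0], rates, 0.0, 0.0)
        managed = effective_failure_rate([0.5, 0.3, 0.2], rates, 2.0, 1e-6)
        print("lifetime ratio (managed vs. always-on):", round(always_on / managed, 2))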

  3. Model description and evaluation of model performance: DOSDIM model

    International Nuclear Information System (INIS)

    Lewyckyj, N.; Zeevaert, T.

    1996-01-01

    DOSDIM was developed to assess the impact on man from routine and accidental atmospheric releases. It is a compartmental, deterministic, radiological model. For an accidental release, dynamic transfer factors are used, in contrast to a routine release, for which equilibrium transfer factors are used. Parameter values were chosen to be conservative. Transfers between compartments are described by first-order differential equations. 2 figs
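
    The record's "first-order differential equations" structure can be sketched for a hypothetical three-compartment chain (scipy assumed; the compartments and rate constants are invented, not DOSDIM's actual parameterisation):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Hypothetical chain air -> pasture -> milk with first-order transfer
        # constants (1/day) and a common radioactive decay constant.
        k_dep, k_in, k_out, k_decay = 0.5, 0.2, 0.1, 0.05

        def rhs(t, y):
            air, pasture, milk = y
            return [-(k_dep + k_decay) * air,
                    k_dep * air - (k_in + k_decay) * pasture,
                    k_in * pasture - (k_out + k_decay) * milk]

        sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0, 0.0],
                        t_eval=np.linspace(0.0, 30.0, 7))
        print(np.round(sol.y, 4))   # activity per compartment over time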

  4. Review the number of accidents in Tehran over a two-year period and prediction of the number of events based on a time-series model

    Science.gov (United States)

    Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar

    2013-01-01

    Background: One of the significant dangers that threaten people's lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world, with more than 50% of that number being people who were not even passengers in the cars. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as a forecast for April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187, with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents, with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city differed significantly between the months of the year; more accidents occurred in March, July, August, and September, so more accidents occurred in the summer than in the other seasons. The number of accidents for April 2012 was predicted with an autoregressive moving average (ARMA) model. The number of accidents displayed a seasonal trend. The prediction for April 2012 indicated that a total of 4,459 accidents would occur, with a mean of 149 accidents per day. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents was different for different seasons of the year. PMID:26120405
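
    The forecasting step can be sketched with statsmodels (the daily counts below are synthetic stand-ins with a seasonal swing, since the study's data are not included in the record; the ARMA order is an arbitrary choice):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        days = pd.date_range("2010-01-01", "2011-12-31", freq="D")
        rng = np.random.default_rng(0)
        counts = (180 + 20 * np.sin(2 * np.pi * days.dayofyear / 365)
                  + rng.normal(0, 15, len(days)))
        series = pd.Series(counts, index=days)

        # Fit a low-order ARMA model and forecast the next 30 days.
        res = ARIMA(series, order=(2, 0, 1)).fit()
        forecast = res.forecast(steps=30)
        print("predicted total:", int(forecast.sum()),
              "mean/day:", round(float(forecast.mean()), 1))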

  5. Performance Evaluation and Modelling of Container Terminals

    Science.gov (United States)

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

    The present paper evaluates and analyzes the performance of 28 container terminals in South East Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method used to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of container terminals is carried out through response surface methodology (RSM).
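
    The DEA step can be sketched as a linear program (scipy assumed; the terminal data are toy numbers, and this input-oriented CCR multiplier form is one standard DEA variant, not necessarily the exact model of the paper):

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            """Input-oriented CCR efficiency of DMU o.
            X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
            n, m = X.shape
            s = Y.shape[1]
            c = np.concatenate([-Y[o], np.zeros(m)])           # maximize u'y_o
            A_ub = np.hstack([Y, -X])                          # u'y_j - v'x_j <= 0
            A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # v'x_o = 1
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                          bounds=(0, None), method="highs")
            return -res.fun

        # Toy data: inputs = (berth length, cranes), output = annual throughput.
        X = np.array([[300, 4], [500, 8], [400, 5], [600, 9], [350, 6]], float)
        Y = np.array([[200], [420], [320], [440], [260]], float)
        for o in range(len(X)):
            print("terminal", o, "efficiency =", round(ccr_efficiency(X, Y, o), 3))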

  6. Metrics and Evaluation Models for Accessible Television

    DEFF Research Database (Denmark)

    Li, Dongxiao; Looms, Peter Olaf

    2014-01-01

    The adoption of the UN Convention on the Rights of Persons with Disabilities (UN CRPD) in 2006 has provided a global framework for work on accessibility, including information and communication technologies and audiovisual content. One of the challenges facing the application of the UN CRPD ... number of platforms on which audiovisual content needs to be distributed, requiring very clear multiplatform architectures to facilitate interworking and assure interoperability. As a consequence, the regular evaluations of progress being made by signatories to the UN CRPD protocol are difficult ...

  7. Office for Analysis and Evaluation of Operational Data 1996 annual report. Volume 10, Number 1: Reactors

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-01

    This annual report of the US Nuclear Regulatory Commission's Office for Analysis and Evaluation of Operational Data (AEOD) describes activities conducted during 1996. The report is published in three parts. NUREG-1272, Vol. 10, No. 1, covers power reactors and presents an overview of the operating experience of the nuclear power industry from the NRC perspective, including comments about trends of some key performance measures. The report also includes the principal findings and issues identified in AEOD studies over the past year and summarizes information from such sources as licensee event reports and reports to the NRC's Operations Center. NUREG-1272, Vol. 10, No. 2, covers nuclear materials and presents a review of the events and concerns during 1996 associated with the use of licensed material in nonreactor applications, such as personnel overexposures and medical misadministrations. Both reports also contain a discussion of the Incident Investigation Team program and summarize both the Incident Investigation Team and Augmented Inspection Team reports. Each volume contains a list of the AEOD reports issued from CY 1980 through 1996. NUREG-1272, Vol. 10, No. 3, covers technical training and presents the activities of the Technical Training Center in support of the NRC's mission in 1996.

  8. Office for Analysis and Evaluation of Operational Data 1996 annual report. Volume 10, Number 1: Reactors

    International Nuclear Information System (INIS)

    1997-12-01

    This annual report of the US Nuclear Regulatory Commission's Office for Analysis and Evaluation of Operational Data (AEOD) describes activities conducted during 1996. The report is published in three parts. NUREG-1272, Vol. 10, No. 1, covers power reactors and presents an overview of the operating experience of the nuclear power industry from the NRC perspective, including comments about trends of some key performance measures. The report also includes the principal findings and issues identified in AEOD studies over the past year and summarizes information from such sources as licensee event reports and reports to the NRC's Operations Center. NUREG-1272, Vol. 10, No. 2, covers nuclear materials and presents a review of the events and concerns during 1996 associated with the use of licensed material in nonreactor applications, such as personnel overexposures and medical misadministrations. Both reports also contain a discussion of the Incident Investigation Team program and summarize both the Incident Investigation Team and Augmented Inspection Team reports. Each volume contains a list of the AEOD reports issued from CY 1980 through 1996. NUREG-1272, Vol. 10, No. 3, covers technical training and presents the activities of the Technical Training Center in support of the NRC's mission in 1996

  9. CTBT integrated verification system evaluation model supplement

    Energy Technology Data Exchange (ETDEWEB)

    EDENBURN, MICHAEL W.; BUNTING, MARCUS; PAYNE JR., ARTHUR C.; TROST, LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, "top-level" modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  10. CTBT integrated verification system evaluation model supplement

    International Nuclear Information System (INIS)

    EDENBURN, MICHAEL W.; BUNTING, MARCUS; PAYNE, ARTHUR C. JR.; TROST, LAWRENCE C.

    2000-01-01

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, "top-level" modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  11. Evaluating spatial patterns in hydrological modelling

    DEFF Research Database (Denmark)

    Koch, Julian

    the contiguous United States (10^6 km2). To this end, the thesis at hand applies a set of spatial performance metrics to various hydrological variables, namely land-surface temperature (LST), evapotranspiration (ET) and soil moisture. The inspiration for the applied metrics is found in related fields ... is not fully exploited by current modelling frameworks due to the lack of suitable spatial performance metrics. Furthermore, the traditional model evaluation using discharge is found unsuitable to lay confidence on the predicted catchment-inherent spatial variability of hydrological processes in a fully...

  12. Evaluation of Model Wheat/Hemp Composites

    Directory of Open Access Journals (Sweden)

    Ivan Švec

    2014-02-01

    Full Text Available Model cereal blends were prepared from commercial fine wheat flour and 5 samples of hemp flour (HF), including the fine type (2 of conventional form, 1 of organic form) and the wholemeal type (2 of conventional form). Wheat flour was substituted at 4 levels (5, 10, 15, 20%). HF addition increased protein content independently of the tested hemp flour form or type. Partial model cereal blends could be distinguished according to protein quality (Zeleny test values), especially between the fine and wholemeal HF types. Both flour types also affected amylolytic activity, for which a relationship between hemp addition and the determined Falling Number level was confirmed for all five model cereal blends. Solvent retention capacity (SRC) profiles of the partial models were influenced by both HF form and type, as well as by the addition level. Between both mentioned groups of quality features, significant correlations were proved - relationships among protein content/quality and lactic acid SRC were verifiable at p < 0.01 (-0.58 and 0.91, respectively). ANOVA demonstrated the possibility of distinguishing the HF form used in a model cereal blend according to the lactic acid SRC and the water SRC. Comparing partial cereal models containing the fine and wholemeal hemp types, the HF addition level demonstrated its impact on the sodium carbonate SRC and the water SRC.

  13. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-01-01

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.
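
    The Poisson-Gamma idea can be sketched compactly, since a Poisson rate with a Gamma prior yields a negative binomial coverage distribution (numpy/scipy assumed; the window counts and tail thresholds are a synthetic illustration, not the paper's pipeline):

        import numpy as np
        from scipy.stats import nbinom

        rng = np.random.default_rng(1)
        # Synthetic per-window read counts, overdispersed relative to Poisson.
        coverage = rng.negative_binomial(n=20, p=20 / (20 + 60), size=5000)
        coverage[1000:1010] *= 3          # plant a toy "amplification"

        # Method-of-moments fit of the negative binomial null.
        mean, var = coverage.mean(), coverage.var()
        r = mean**2 / (var - mean)
        p = r / (r + mean)

        # Flag windows in the extreme tails of the fitted null.
        lo, hi = nbinom.ppf(0.0005, r, p), nbinom.ppf(0.9995, r, p)
        print(np.where((coverage < lo) | (coverage > hi))[0][:12])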

  14. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data.

    Science.gov (United States)

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-02-26

    The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.

  15. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno

    2013-02-26

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.

  16. Comparative analysis of used car price evaluation models

    Science.gov (United States)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles. However, little has been published comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical analysis and a thorough comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
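
    The comparison can be reproduced in miniature with scikit-learn (the dealing records below are synthetic stand-ins with a deliberately nonlinear depreciation term; the features and coefficients are invented):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 5000
        X = np.column_stack([rng.uniform(0, 10, n),        # age (years)
                             rng.uniform(0, 200_000, n),   # mileage (km)
                             rng.uniform(1.0, 4.0, n),     # engine size (L)
                             rng.integers(0, 30, n)])      # make/series id
        price = (20_000 * np.exp(-0.15 * X[:, 0])          # nonlinear depreciation
                 - 0.02 * X[:, 1] + 3_000 * X[:, 2]
                 + 500 * X[:, 3] + rng.normal(0, 1_000, n))

        for model in (LinearRegression(),
                      RandomForestRegressor(n_estimators=200, random_state=0)):
            r2 = cross_val_score(model, X, price, cv=5, scoring="r2").mean()
            print(type(model).__name__, "mean R^2 =", round(r2, 3))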

  17. Transport properties site descriptive model. Guidelines for evaluation and modelling

    International Nuclear Information System (INIS)

    Berglund, Sten; Selroos, Jan-Olof

    2004-04-01

    This report describes a strategy for the development of Transport Properties Site Descriptive Models within the SKB Site Investigation programme. Similar reports have been produced for the other disciplines in the site descriptive modelling (Geology, Hydrogeology, Hydrogeochemistry, Rock mechanics, Thermal properties, and Surface ecosystems). These reports are intended to guide the site descriptive modelling, but also to provide the authorities with an overview of modelling work that will be performed. The site descriptive modelling of transport properties is presented in this report and in the associated 'Strategy for the use of laboratory methods in the site investigations programme for the transport properties of the rock', which describes laboratory measurements and data evaluations. Specifically, the objectives of the present report are to: Present a description that gives an overview of the strategy for developing Site Descriptive Models, and which sets the transport modelling into this general context. Provide a structure for developing Transport Properties Site Descriptive Models that facilitates efficient modelling and comparisons between different sites. Provide guidelines on specific modelling issues where methodological consistency is judged to be of special importance, or where there is no general consensus on the modelling approach. The objectives of the site descriptive modelling process and the resulting Transport Properties Site Descriptive Models are to: Provide transport parameters for Safety Assessment. Describe the geoscientific basis for the transport model, including the qualitative and quantitative data that are of importance for the assessment of uncertainties and confidence in the transport description, and for the understanding of the processes at the sites. Provide transport parameters for use within other discipline-specific programmes. Contribute to the integrated evaluation of the investigated sites. The site descriptive modelling of

  18. Variance of the number of tumors in a model for the induction of osteosarcoma by alpha radiation

    International Nuclear Information System (INIS)

    Groer, P.G.; Marshall, J.H.

    1976-01-01

    An earlier report on a model for the induction of osteosarcoma by alpha radiation gave differential equations for the mean numbers of normal, transformed, and malignant cells. In this report we show that for a constant dose rate the variance of the number of cells at each stage and time is equal to the corresponding mean, so the numbers of tumors predicted by the model have a Poisson distribution about their mean values

  19. Evaluation of CNN as anthropomorphic model observer

    Science.gov (United States)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

    Model observers (MOs) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards the optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNNs) as MOs. We compare the CNN MO to alternative MOs currently proposed and used, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of the CNN, and of other deep learning approaches, is rooted in the availability of large data sets, which is rarely the case in task-performance evaluation of medical imaging systems, we evaluate CNN performance on both large and small training data sets.
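
    A minimal sketch of a CNN acting as a model observer for a binary signal-present/absent task (PyTorch assumed; the architecture and the 32x32 patch size are illustrative choices, not the paper's network):

        import torch
        import torch.nn as nn

        class CNNObserver(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2))
                self.head = nn.Linear(16 * 8 * 8, 1)   # assumes 32x32 inputs

            def forward(self, x):
                return self.head(self.features(x).flatten(1))  # decision variable

        model = CNNObserver()
        patches = torch.randn(4, 1, 32, 32)   # noise stand-ins for image patches
        print(model(patches).shape)           # one rating (logit) per patch

    Trained with a binary cross-entropy loss on labelled patches, the scalar output plays the same role as the CHO's test statistic.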

  20. Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies Evaluation Number 16. Supplement to Evaluation 15: Update of Key Reactions

    Science.gov (United States)

    Sander, S. P.; Friedl, R. R.; Barker, J. R.; Golden, D. M.; Kurylo, M. J.; Wine, P. H.; Abbatt, J.; Burkholder, J. B.; Kolb, C. E.; Moortgat, G. K.; hide

    2009-01-01

    This is the supplement to the fifteenth in a series of evaluated sets of rate constants and photochemical cross sections compiled by the NASA Panel for Data Evaluation. The data are used primarily to model stratospheric and upper tropospheric processes, with particular emphasis on the ozone layer and its possible perturbation by anthropogenic and natural phenomena. Copies of this evaluation are available in electronic form and may be printed from the following Internet URL: http://jpldataeval.jpl.nasa.gov/.

  1. Small Animal Models for Evaluating Filovirus Countermeasures.

    Science.gov (United States)

    Banadyga, Logan; Wong, Gary; Qiu, Xiangguo

    2018-05-11

    The development of novel therapeutics and vaccines to treat or prevent disease caused by filoviruses, such as Ebola and Marburg viruses, depends on the availability of animal models that faithfully recapitulate clinical hallmarks of disease as it is observed in humans. In particular, small animal models (such as mice and guinea pigs) are historically and frequently used for the primary evaluation of antiviral countermeasures, prior to testing in nonhuman primates, which represent the gold-standard filovirus animal model. In the past several years, however, the filovirus field has witnessed the continued refinement of the mouse and guinea pig models of disease, as well as the introduction of the hamster and ferret models. We now have small animal models for most human-pathogenic filoviruses, many of which are susceptible to wild type virus and demonstrate key features of disease, including robust virus replication, coagulopathy, and immune system dysfunction. Although none of these small animal model systems perfectly recapitulates Ebola virus disease or Marburg virus disease on its own, collectively they offer a nearly complete set of tools in which to carry out the preclinical development of novel antiviral drugs.

  2. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Bioenergetics models to estimate numbers of larval lampreys consumed by smallmouth bass in Elk Creek, Oregon

    Science.gov (United States)

    Schultz, Luke; Heck, Michael; Kowalski, Brandon M; Eagles-Smith, Collin A.; Coates, Kelly C.; Dunham, Jason B.

    2017-01-01

    Nonnative fishes have been increasingly implicated in the decline of native fishes in the Pacific Northwest. Smallmouth Bass Micropterus dolomieu were introduced into the Umpqua River in southwest Oregon in the early 1960s. The spread of Smallmouth Bass throughout the basin coincided with a decline in counts of upstream-migrating Pacific Lampreys Entosphenus tridentatus. This suggested the potential for ecological interactions between Smallmouth Bass and Pacific Lampreys, as well as freshwater-resident Western Brook Lampreys Lampetra richardsoni. To evaluate the potential effects of Smallmouth Bass on lampreys, we sampled diets of Smallmouth Bass and used bioenergetics models to estimate consumption of larval lampreys in a segment of Elk Creek, a tributary to the lower Umpqua River. We captured 303 unique Smallmouth Bass (mean: 197 mm and 136 g) via angling in July and September. We combined information on Smallmouth Bass diet and energy density with other variables (temperature, body size, growth, prey energy density) in a bioenergetics model to estimate consumption of larval lampreys. Larval lampreys were found in 6.2% of diet samples, and model estimates indicated that the Smallmouth Bass we captured consumed 925 larval lampreys in this 2-month study period. When extrapolated to a population estimate of Smallmouth Bass in this segment, we estimated 1,911 larval lampreys were consumed between July and September. Although the precision of these estimates was low, this magnitude of consumption suggests that Smallmouth Bass may negatively affect larval lamprey populations.
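
    The extrapolation step is simple proportional scaling of per-bass consumption; a sketch using the numbers reported above (the bass population estimate is back-calculated from the reported totals, as it is not stated directly in the abstract):

        # Proportional extrapolation of larval lamprey consumption (a sketch).
        consumed_by_sample = 925                     # larvae eaten by the sampled bass
        n_sampled = 303                              # bass captured and modeled
        n_population = round(1911 * n_sampled / consumed_by_sample)  # ~626 (inferred)

        per_bass = consumed_by_sample / n_sampled    # ~3.05 larvae per bass
        total = per_bass * n_population              # ~1,911 larvae, July-September
        print(f"{per_bass:.2f} larvae/bass; ~{total:.0f} larvae consumed in total")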

  4. HYSOGs250m, global gridded hydrologic soil groups for curve-number-based runoff modeling.

    Science.gov (United States)

    Ross, C Wade; Prihodko, Lara; Anchang, Julius; Kumar, Sanath; Ji, Wenjie; Hanan, Niall P

    2018-05-15

    Hydrologic soil groups (HSGs) are a fundamental component of the USDA curve-number (CN) method for estimation of rainfall runoff; yet these data are not readily available in a format or spatial resolution suitable for regional- and global-scale modeling applications. We developed a globally consistent, gridded dataset defining HSGs from soil texture, bedrock depth, and groundwater. The resulting data product, HYSOGs250m, represents runoff potential at 250 m spatial resolution. Our analysis indicates that the global distribution of soil is dominated by moderately high runoff potential, followed by moderately low, high, and low runoff potential. Low runoff potential, sandy soils are found primarily in parts of the Sahara and Arabian Deserts. High runoff potential soils occur predominantly within tropical and sub-tropical regions. No clear pattern could be discerned for moderately low runoff potential soils, as they occur in arid and humid environments and at both high and low elevations. Potential applications of this data include CN-based runoff modeling, flood risk assessment, and as a covariate for biogeographical analysis of vegetation distributions.
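
    The CN method these groups feed is the standard NRCS runoff relation; a sketch in English units, assuming the conventional initial abstraction Ia = 0.2S:

        def scs_runoff_depth(precip_in: float, curve_number: float) -> float:
            """SCS/NRCS curve-number runoff depth (standard form, inches).

            S = potential maximum retention; Ia = initial abstraction
            (the conventional Ia = 0.2*S is assumed here).
            """
            s = 1000.0 / curve_number - 10.0
            ia = 0.2 * s
            if precip_in <= ia:
                return 0.0                       # all rainfall abstracted
            return (precip_in - ia) ** 2 / (precip_in - ia + s)

        # e.g. a 3-inch storm on a high-runoff-potential soil (CN ~ 85):
        print(scs_runoff_depth(3.0, 85))         # ~1.6 inches of runoff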

  5. Scaling and interaction of self-similar modes in models of high Reynolds number wall turbulence.

    Science.gov (United States)

    Sharma, A S; Moarref, R; McKeon, B J

    2017-03-13

    Previous work has established the usefulness of the resolvent operator that maps the terms nonlinear in the turbulent fluctuations to the fluctuations themselves. Further work has described the self-similarity of the resolvent arising from that of the mean velocity profile. The orthogonal modes provided by the resolvent analysis describe the wall-normal coherence of the motions and inherit that self-similarity. In this contribution, we present the implications of this similarity for the nonlinear interaction between modes with different scales and wall-normal locations. By considering the nonlinear interactions between modes, it is shown that much of the turbulence scaling behaviour in the logarithmic region can be determined from a single arbitrarily chosen reference plane. Thus, the geometric scaling of the modes is impressed upon the nonlinear interaction between modes. Implications of these observations on the self-sustaining mechanisms of wall turbulence, modelling and simulation are outlined. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
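
    A generic statement of the resolvent relation referred to above (standard form; the operator notation is an assumption, not quoted from the paper): the Fourier-transformed fluctuations are the image, under the resolvent, of the terms nonlinear in the fluctuations,

        \hat{u}(y; k_x, k_z, \omega)
        = \left( -i\omega I - \mathcal{L}_{k} \right)^{-1} \hat{f}(y; k_x, k_z, \omega)
        \equiv \mathcal{H}(k, \omega)\, \hat{f}(y; k_x, k_z, \omega),

    where L_k is the Navier-Stokes operator linearized about the mean velocity profile and f collects the nonlinear fluctuation terms.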

  6. A hybrid method of grey relational analysis and data envelopment analysis for evaluating and selecting efficient suppliers plus a novel ranking method for grey numbers

    Directory of Open Access Journals (Sweden)

    Mohsen Sayyah Markabi

    2014-10-01

    Full Text Available Purpose: Evaluation and selection of efficient suppliers is one of the key issues in supply chain management, which depends on a wide range of qualitative and quantitative criteria. The aim of this research is to develop a mathematical model for evaluating and selecting efficient suppliers when faced with supply and demand uncertainties. Design/methodology/approach: In this research, Grey Relational Analysis (GRA) and Data Envelopment Analysis (DEA) are used to evaluate and select efficient suppliers under uncertainties. Furthermore, a novel ranking method is introduced for units whose efficiencies are obtained in the form of interval grey numbers. Findings: The study indicates that the proposed model, in addition to providing satisfactory and acceptable results, avoids time-consuming computations and consequently reduces the solution time. Another advantage of the proposed model is that it enables decisions to be made based on different levels of risk. Originality/value: The paper presents a mathematical model for evaluating and selecting efficient suppliers in a stochastic environment, which companies can use in order to make better decisions.
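
    A sketch of the GRA half of the hybrid method, using the standard grey relational coefficient and grade (the DEA coupling and the interval-grey ranking step are not reproduced here; the supplier scores are made up):

        import numpy as np

        def grey_relational_grade(ref, alternatives, zeta=0.5):
            """Standard grey relational analysis (GRA) sketch.

            ref          : (n_criteria,) normalized reference series (ideal supplier)
            alternatives : (n_suppliers, n_criteria) normalized scores
            zeta         : distinguishing coefficient, conventionally 0.5
            """
            delta = np.abs(alternatives - ref)                   # deviation sequences
            dmin, dmax = delta.min(), delta.max()
            xi = (dmin + zeta * dmax) / (delta + zeta * dmax)    # relational coefficients
            return xi.mean(axis=1)                               # grade per supplier

        scores = np.array([[0.9, 1.0, 0.7],
                           [1.0, 0.6, 0.9]])
        print(grey_relational_grade(np.ones(3), scores))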

  7. Simulations, evaluations and models. Vol. 1

    International Nuclear Information System (INIS)

    Brehmer, B.; Leplat, J.

    1992-01-01

    Papers presented at the Fourth MOHAWC (Models of Human Activities in Work Context) workshop. The general theme was simulations, evaluations and models. The emphasis was on time in relation to the modelling of human activities in modern, high tech. work. Such work often requires people to control dynamic systems, and the behaviour and misbehaviour of these systems in time is a principle focus of work in, for example, a modern process plant. The papers report on microworlds and on their innovative uses, both in the form of experiments and in the form of a new form of use, that of testing a program which performs diagnostic reasoning. They present new aspects on the problem of time in process control, showing the importance of considering the time scales of dynamic tasks, both in individual decision making and in distributed decision making, and in providing new formalisms, both for the representation of time and for reasoning involving time in diagnosis. (AB)

  8. A methodology for spectral wave model evaluation

    Science.gov (United States)

    Siqueira, S. A.; Edwards, K. L.; Rogers, W. E.

    2017-12-01

    Model evaluation is accomplished by comparing bulk parameters (e.g., significant wave height, energy period, and mean square slope (MSS)) calculated from the model energy spectra with those calculated from buoy energy spectra. Quality control of the observed data and choice of the frequency range from which the bulk parameters are calculated are critical steps in ensuring the validity of the model-data comparison. The compared frequency range of each observation and the analogous model output must be identical, and the optimal frequency range depends in part on the reliability of the observed spectra. National Data Buoy Center 3-m discus buoy spectra are unreliable above 0.3 Hz due to a non-optimal buoy response function correction. As such, the upper end of the spectrum should not be included when comparing a model to these data. Biofouling of Waverider buoys must be detected, as it can harm the hydrodynamic response of the buoy at high frequencies, thereby rendering the upper part of the spectrum unsuitable for comparison. An important consideration is that the intentional exclusion of high frequency energy from a validation due to data quality concerns (above) can have major implications for validation exercises, especially for parameters such as the third and fourth moments of the spectrum (related to Stokes drift and MSS, respectively); final conclusions can be strongly altered. We demonstrate this by comparing outcomes with and without the exclusion, in a case where a Waverider buoy is believed to be free of biofouling. Determination of the appropriate frequency range is not limited to the observed spectra. Model evaluation involves considering whether all relevant frequencies are included. Guidance to make this decision is based on analysis of observed spectra. Two model frequency lower limits were considered. Energy in the observed spectrum below the model lower limit was calculated for each. For locations where long swell is a component of the wave
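
    A sketch of the bulk-parameter calculation under discussion, restricted to an explicit frequency band as the methodology requires; the band limits, the toy spectrum, and the deep-water mean-square-slope relation are illustrative assumptions:

        import numpy as np

        def bulk_parameters(freq, spec, f_lo=0.05, f_hi=0.3):
            """Bulk parameters from a 1-D wave spectrum S(f), restricted to a
            chosen band (here 0.05-0.3 Hz, e.g. to exclude the unreliable
            high-frequency tail of 3-m discus buoys)."""
            band = (freq >= f_lo) & (freq <= f_hi)
            f, s = freq[band], spec[band]
            m = lambda n: np.trapz(f**n * s, f)       # n-th spectral moment
            hs = 4.0 * np.sqrt(m(0))                  # significant wave height
            te = m(-1) / m(0)                         # energy period
            g = 9.81                                  # deep-water dispersion assumed
            mss = (2 * np.pi) ** 4 / g**2 * m(4)      # MSS from the 4th moment
            return hs, te, mss

        f = np.linspace(0.03, 0.6, 200)
        S = 0.5 * np.exp(-((f - 0.1) / 0.03) ** 2)    # toy swell peak
        print(bulk_parameters(f, S))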

  9. Intuitionistic fuzzy (IF) evaluations of multidimensional model

    International Nuclear Information System (INIS)

    Valova, I.

    2012-01-01

    There are different logical methods for data structuring, but none is perfect. The multidimensional (MD) model presents data in the form of a cube (also referred to as an info-cube or hypercube), or in the form of a 'star'-type scheme (referred to as a multidimensional scheme), by use of F-structures (Facts) and a set of D-structures (Dimensions), based on the notion of a hierarchy of D-structures. The data being analysed in a specific multidimensional model are located in a Cartesian space restricted by the D-structures. In practice, the data are either dispersed or concentrated, so the data cells are not distributed evenly within the respective space. The moment of occurrence of an event is difficult to predict, and the data are concentrated by time period, location of the performed business event, etc. To process such dispersed or concentrated data, various technical strategies are needed, and the basic methods for presenting such data must be selected. The approaches to data processing and the respective calculations are connected with different options for data representation. The use of intuitionistic fuzzy evaluations (IFE) provides new possibilities for alternative presentation and processing of the data analysed in any OLAP application. The use of IFE in the evaluation of multidimensional models has the following advantages: analysts have more complete information for processing and analysis of the respective data; managers benefit because the final decisions are more effective; and more functional multidimensional schemes can be designed. The purpose of this work is to apply intuitionistic fuzzy evaluations to a multidimensional model of data. (authors)
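
    An intuitionistic fuzzy evaluation attaches to an item a membership degree mu and a non-membership degree nu with mu + nu <= 1, the remainder being hesitancy; a minimal sketch (the cell rating below is hypothetical):

        from dataclasses import dataclass

        @dataclass
        class IFE:
            """Intuitionistic fuzzy evaluation: membership (mu),
            non-membership (nu), implied hesitancy pi = 1 - mu - nu."""
            mu: float
            nu: float

            def __post_init__(self):
                assert 0.0 <= self.mu and 0.0 <= self.nu and self.mu + self.nu <= 1.0

            @property
            def pi(self) -> float:
                return 1.0 - self.mu - self.nu

        # e.g. a sparsely populated cube cell rated <0.6, 0.3>:
        cell = IFE(0.6, 0.3)
        print(cell.pi)   # 0.1 hesitancy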

  10. Use of an operational model evaluation system for model intercomparison

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K. T., LLNL

    1998-03-01

    The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response system used to assess the impact from atmospheric releases of hazardous materials. As part of an ongoing development program, new three-dimensional diagnostic windfield and Lagrangian particle dispersion models will soon replace ARAC's current operational windfield and dispersion codes. A prototype model performance evaluation system has been implemented to facilitate the study of the capabilities and performance of early development versions of these new models relative to ARAC's current operational codes. This system provides tools for both objective statistical analysis using common performance measures and for more subjective visualization of the temporal and spatial relationships of model results relative to field measurements. Supporting this system is a database of processed field experiment data (source terms and meteorological and tracer measurements) from over 100 individual tracer releases.
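
    Typical "common performance measures" for dispersion-model evaluation are fractional bias, normalized mean square error, and the factor-of-two fraction; a sketch under the assumption that these are the measures meant (the report does not list them):

        import numpy as np

        def fb(obs, mod):    # fractional bias
            return 2.0 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())

        def nmse(obs, mod):  # normalized mean square error
            return np.mean((obs - mod) ** 2) / (obs.mean() * mod.mean())

        def fac2(obs, mod):  # fraction of predictions within a factor of 2
            r = mod / obs
            return np.mean((r >= 0.5) & (r <= 2.0))

        obs = np.array([1.0, 2.5, 4.0])   # paired tracer observations (made up)
        mod = np.array([1.2, 1.9, 5.1])   # model predictions at the same samplers
        print(fb(obs, mod), nmse(obs, mod), fac2(obs, mod))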

  11. Photovoltaic performance models: an evaluation with actual field data

    Science.gov (United States)

    TamizhMani, Govindasamy; Ishioye, John-Paul; Voropayev, Arseniy; Kang, Yi

    2008-08-01

    Prediction of energy production is crucial to the design and installation of building integrated photovoltaic systems. This prediction should be attainable based on commonly available parameters such as system size, orientation and tilt angle. Several commercially available as well as freely downloadable software tools exist to predict energy production. Six software models have been evaluated in this study: PV Watts, PVsyst, MAUI, Clean Power Estimator, Solar Advisor Model (SAM) and RETScreen. This evaluation has been done by comparing the monthly, seasonal and annual predicted data with the actual field data obtained over a one-year period on a large number of residential PV systems ranging between 2 and 3 kWdc. All the systems are located in Arizona, within the Phoenix metropolitan area, which lies at latitude 33° North and longitude 112° West, and are all connected to the electrical grid.

  12. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations in order to study the aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers 10,000, 24,000 and 60...... the Reynolds number, and the effect is visible even at a relatively low chord-Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit...

  13. Evaluation of models of waste glass durability

    International Nuclear Information System (INIS)

    Ellison, A.

    1995-01-01

    The main variable under the control of the waste glass producer is the composition of the glass; thus a need exists to establish functional relationships between the composition of a waste glass and measures of processability, product consistency, and durability. Many years of research show that the structure and properties of a glass depend on its composition, so it seems reasonable to assume that there is also a relationship between the composition of a waste glass and its resistance to attack by an aqueous solution. Several models have been developed to describe this dependence, and an evaluation of their predictive capabilities is the subject of this paper. The objective is to determine whether any of these models describe the 'correct' functional relationship between composition and corrosion rate. A more thorough treatment of the relationships between glass composition and durability has been presented elsewhere, and the reader is encouraged to consult it for a more detailed discussion. The models examined in this study are the free energy of hydration model, developed at the Savannah River Laboratory; the structural bond strength model, developed at the Vitreous State Laboratory at the Catholic University of America; and the Composition Variation Study, developed at Pacific Northwest Laboratory

  14. Evaluation of onset of nucleate boiling models

    Energy Technology Data Exchange (ETDEWEB)

    Huang, LiDong [Heat Transfer Research, Inc., College Station, TX (United States)], e-mail: lh@htri.net

    2009-07-01

    This article discusses available models and correlations for predicting the required heat flux or wall superheat for the Onset of Nucleate Boiling (ONB) on plain surfaces. It reviews ONB data in the open literature and discusses the continuing efforts of Heat Transfer Research, Inc. in this area. Our ONB database contains ten individual sources for ten test fluids and a wide range of operating conditions for different geometries, e.g., tube side and shell side flow boiling and falling film evaporation. The article also evaluates literature models and correlations based on the data: no single model in the open literature predicts all data well. The prediction uncertainty is especially higher in vacuum conditions. Surface roughness is another critical criterion in determining which model should be used. However, most models do not directly account for surface roughness, and most investigators do not provide surface roughness information in their published findings. Additional experimental research is needed to improve confidence in predicting the required wall superheats for nucleation boiling for engineering design purposes. (author)

  15. Evaluation of onset of nucleate boiling models

    International Nuclear Information System (INIS)

    Huang, LiDong

    2009-01-01

    This article discusses available models and correlations for predicting the required heat flux or wall superheat for the Onset of Nucleate Boiling (ONB) on plain surfaces. It reviews ONB data in the open literature and discusses the continuing efforts of Heat Transfer Research, Inc. in this area. Our ONB database contains ten individual sources for ten test fluids and a wide range of operating conditions for different geometries, e.g., tube side and shell side flow boiling and falling film evaporation. The article also evaluates literature models and correlations based on the data: no single model in the open literature predicts all data well. The prediction uncertainty is especially higher in vacuum conditions. Surface roughness is another critical criterion in determining which model should be used. However, most models do not directly account for surface roughness, and most investigators do not provide surface roughness information in their published findings. Additional experimental research is needed to improve confidence in predicting the required wall superheats for nucleation boiling for engineering design purposes. (author)

  16. Data assimilation and model evaluation experiment datasets

    Science.gov (United States)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  17. Evaluating Translational Research: A Process Marker Model

    Science.gov (United States)

    Trochim, William; Kane, Cathleen; Graham, Mark J.; Pincus, Harold A.

    2011-01-01

    Objective: We examine the concept of translational research from the perspective of evaluators charged with assessing translational efforts. One of the major tasks for evaluators involved in translational research is to help assess efforts that aim to reduce the time it takes to move research to practice and health impacts. Another is to assess efforts that are intended to increase the rate and volume of translation. Methods: We offer an alternative to the dominant contemporary tendency to define translational research in terms of a series of discrete “phases.” Results: We contend that this phased approach has been confusing and that it is insufficient as a basis for evaluation. Instead, we argue for the identification of key operational and measurable markers along a generalized process pathway from research to practice. Conclusions: This model provides a foundation for the evaluation of interventions designed to improve translational research and the integration of these findings into a field of translational studies. Clin Trans Sci 2011; Volume 4: 153–162 PMID:21707944

  18. Evaluation of the association between sexual dysfunction and demyelinating plaque location and number in female multiple sclerosis patients.

    Science.gov (United States)

    Solmaz, Volkan; Ozlece, Hatice Kose; Him, Aydın; Güneş, Ayfer; Cordano, Christian; Aksoy, Durdane; Çelik, Yahya

    2018-04-17

    Purpose To investigate the frequency of sexual dysfunction (SD) in female multiple sclerosis (MS) patients and to explore its association with the location and number of demyelinating lesions. Material and Methods We evaluated 42 female patients and 41 healthy subjects. All patients underwent neurological examination and 1.5 T brain and full spinal MRI. All subjects completed the female sexual function index (FSFI), Beck Depression Inventory (BDI), Beck Anxiety Inventory (BAI), and Short-Form 36 Quality of Life Scale (SF-36). All participants were also evaluated for serum thyroid stimulating hormone (TSH), T4, estradiol, and total testosterone. Results No statistically significant differences between the MS and control groups were found for age, body mass index (BMI), serum TSH, T4, E2, and total testosterone level. MS patients had a statistically significantly lower FSFI and SF-36 scores and higher BDI and BAI scores compared with healthy subjects. The location and number of demyelinating lesions were not associated with SD. Conclusion In our cohort, this difference in SD appears unrelated to the location and number of demyelinating lesions. These findings highlight the importance of the assessment and treatment of psychiatric comorbidities, such as depression and anxiety, in MS patients reporting SD.

  19. Maillard reaction products from highly heated food prevent mast cell number increase and inflammation in a mouse model of colitis.

    Science.gov (United States)

    Al Amir, Issam; Dubayle, David; Héron, Anne; Delayre-Orthez, Carine; Anton, Pauline M

    2017-12-01

    Links between food and inflammatory bowel diseases (IBDs) are often suggested, but the role of food processing has not been extensively studied. Heat treatment is known to cause the loss of nutrients and the appearance of neoformed compounds such as Maillard reaction products. Their involvement in gut inflammation is equivocal, as some may have proinflammatory effects, whereas other seem to be protective. As IBDs are associated with the recruitment of immune cells, including mast cells, we raised the hypothesis that dietary Maillard reaction products generated through heat treatment of food may limit the colitic response and its associated recruitment of mast cells. An experimental model of colitis was used in mice submitted to mildly and highly heated rodent food. Adult male mice were divided in 3 groups and received nonheated, mildly heated, or highly heated chow during 21 days. In the last week of the study, each group was split into 2 subgroups, submitted or not (controls) to dextran sulfate sodium (DSS) colitis. Weight variations, macroscopic lesions, colonic myeloperoxidase activity, and mucosal mast cell number were evaluated at the end of the experiment. Only highly heated chow significantly prevented DSS-induced weight loss, myeloperoxidase activity, and mast cell number increase in the colonic mucosa of DSS-colitic mice. We suggest that Maillard reaction products from highly heated food may limit the occurrence of inflammatory phases in IBD patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. A Novel Wake Oscillator Model for Vortex-Induced Vibrations Prediction of A Cylinder Considering the Influence of Reynolds Number

    Science.gov (United States)

    Gao, Xi-feng; Xie, Wu-de; Xu, Wan-hai; Bai, Yu-chuan; Zhu, Hai-tao

    2018-04-01

    It is well known that the Reynolds number has a significant effect on the vortex-induced vibrations (VIV) of cylinders. In this paper, a novel in-line (IL) and cross-flow (CF) coupling VIV prediction model for circular cylinders has been proposed, in which the influence of the Reynolds number was comprehensively considered. The Strouhal number linked with the vortex shedding frequency was calculated through a function of the Reynolds number. The coefficient of the mean drag force was fitted as a new piecewise function of the Reynolds number, and its amplification resulting from the CF VIV was also taken into account. The oscillating drag and lift forces were modelled with classical van der Pol wake oscillators, and their empirical parameters were determined based on the lock-in boundaries and the peak-amplitude formulas. A new peak-amplitude formula for the IL VIV was developed under the resonance condition with respect to the mass-damping ratio and the Reynolds number. When compared with the results from the experiments and some other prediction models, the present model gives good estimates of the vibration amplitudes and frequencies of the VIV for both elastically mounted rigid and long flexible cylinders. The present model considering the influence of the Reynolds number could generally provide better results than one neglecting the effect of the Reynolds number.
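
    A minimal cross-flow-only instance of the kind of coupled structure/van der Pol system described, using one common acceleration-coupled form; every numerical constant below is illustrative, not a parameter of the paper:

        import numpy as np
        from scipy.integrate import solve_ivp

        zeta, wn = 0.005, 1.0        # structural damping ratio, natural freq (rad/s)
        ws = 1.0                     # shedding frequency ~ 2*pi*St*U/D (lock-in here)
        eps, A, M = 0.3, 12.0, 0.05  # van der Pol and coupling constants (assumed)

        def rhs(t, z):
            y, ydot, q, qdot = z
            # structure forced by the wake variable q (fluctuating lift)
            yddot = -2.0 * zeta * wn * ydot - wn**2 * y + M * q
            # van der Pol wake oscillator, forced by cylinder acceleration
            qddot = -eps * ws * (q**2 - 1.0) * qdot - ws**2 * q + A * yddot
            return [ydot, yddot, qdot, qddot]

        sol = solve_ivp(rhs, (0.0, 500.0), [0.0, 0.0, 0.1, 0.0], max_step=0.05)
        print("steady CF amplitude ~", np.abs(sol.y[0, -2000:]).max())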

  1. REPFLO model evaluation, physical and numerical consistency

    International Nuclear Information System (INIS)

    Wilson, R.N.; Holland, D.H.

    1978-11-01

    This report contains a description of some suggested changes and an evaluation of the REPFLO computer code, which models ground-water flow and nuclear-waste migration in and about a nuclear-waste repository. The discussion contained in the main body of the report is supplemented by a flow chart, presented in the Appendix of this report. The suggested changes are of four kinds: (1) technical changes to make the code compatible with a wider variety of digital computer systems; (2) changes to fill gaps in the computer code, due to missing proprietary subroutines; (3) changes to (a) correct programming errors, (b) correct logical flaws, and (c) remove unnecessary complexity; and (4) changes in the computer code logical structure to make REPFLO a more viable model from the physical point of view

  2. Impact of Transport Zone Number in Simulation Models on Cost-Benefit Analysis Results in Transport Investments

    Science.gov (United States)

    Chmielewski, Jacek

    2017-10-01

    Nowadays, feasibility studies need to be prepared for all planned transport investments, mainly those co-financed with EU grants. One of the fundamental aspects of a feasibility study is the economic justification of an investment, evaluated in a so-called cost-benefit analysis (CBA). The main goal of the CBA calculation is to prove that a transport investment is really important for society and should be implemented as an economically efficient one. It can be said that the numbers of hours spent in trips (PH - passenger hours) and of kilometres travelled (PK - passenger kilometres) are the most important inputs for CBA results. The differences between PH and PK calculated for particular investment scenarios are the basis for benefit calculation. Typically, transport simulation models are the best source of such data. Transport simulation models are one of the most powerful tools for transport network planning. They make it possible to evaluate forecast traffic volume and passenger flows in a public transport system for defined scenarios of transport and area development. There are many different transport models. Their construction is often similar, and they differ mainly in their level of accuracy. Even models for the same area may differ in this matter. Typically, such differences come from the accuracy of the supply-side representation: road and public transport network representation. In many cases only main roads and a public transport network are represented, while local and service roads are eliminated as a simplification of reality. This also enables a faster and more effective calculation process. On the other hand, the demand side of these models, described through transport zones, is often left unchanged. Difficulties with data collection, mainly data on land use, have resulted in a lack of changes in the division of the analysed area into so-called transport zones. In this paper the author presents the influence of land division on the results of traffic analyses, and hence
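
    In its simplest form, the benefit calculation the author refers to reduces to monetizing the PH and PK differences between the base and investment scenarios; a sketch with placeholder unit values (not official appraisal rates):

        # Annual user benefit from model outputs (a sketch; all values hypothetical).
        ph_base, ph_invest = 1_250_000.0, 1_180_000.0   # passenger hours per year
        pk_base, pk_invest = 9.4e6, 9.1e6               # passenger kilometres per year
        value_of_time = 9.0      # EUR per passenger hour (placeholder)
        unit_km_cost = 0.10      # EUR per passenger kilometre (placeholder)

        annual_benefit = ((ph_base - ph_invest) * value_of_time
                          + (pk_base - pk_invest) * unit_km_cost)
        print(f"annual user benefit: EUR {annual_benefit:,.0f}")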

  3. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  4. Automated expert modeling for automated student evaluation.

    Energy Technology Data Exchange (ETDEWEB)

    Abbott, Robert G.

    2006-01-01

    The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real-world settings on an unprecedented scale. Researchers deploy ITSs in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITSs are best suited. At the same time, researchers explore how to expand and improve ITS/student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.

  5. CTBT Integrated Verification System Evaluation Model

    Energy Technology Data Exchange (ETDEWEB)

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level, modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.

  6. An evaluation framework for participatory modelling

    Science.gov (United States)

    Krueger, T.; Inman, A.; Chilvers, J.

    2012-04-01

    Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of beneficial outcomes of participation as suggested by the arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models. And we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics). We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in

  7. Medical Updates Number 5 to the International Space Station Probability Risk Assessment (PRA) Model Using the Integrated Medical Model

    Science.gov (United States)

    Butler, Doug; Bauman, David; Johnson-Throop, Kathy

    2011-01-01

    The Integrated Medical Model (IMM) Project has been developing a probabilistic risk assessment tool, the IMM, to help evaluate in-flight crew health needs and impacts to the mission due to medical events. This package is a follow-up to a data package provided in June 2009. The IMM currently represents 83 medical conditions and associated ISS resources required to mitigate medical events. IMM end state forecasts relevant to the ISS PRA model include evacuation (EVAC) and loss of crew life (LOCL). The current version of the IMM provides the basis for the operational version of IMM expected in the January 2011 timeframe. The objectives of this data package are: 1. To provide a preliminary understanding of medical risk data used to update the ISS PRA Model. The IMM has had limited validation and an initial characterization of maturity has been completed using NASA STD 7009 Standard for Models and Simulation. The IMM has been internally validated by IMM personnel but has not been validated by an independent body external to the IMM Project. 2. To support a continued dialogue between the ISS PRA and IMM teams. To ensure accurate data interpretation, and that IMM output format and content meets the needs of the ISS Risk Management Office and ISS PRA Model, periodic discussions are anticipated between the risk teams. 3. To help assess the differences between the current ISS PRA and IMM medical risk forecasts of EVAC and LOCL. Follow-on activities are anticipated based on the differences between the current ISS PRA medical risk data and the latest medical risk data produced by IMM.

  8. Determination of reliable force platform parameters and number of trial to evaluate sit-to-stand movement.

    Science.gov (United States)

    Chorin, Frédéric; Rahmani, Abderrahmane; Beaune, Bruno; Cornu, Christophe

    2015-08-01

    Sit-to-stand (STS) movement is useful for evaluating lower limb muscle function, especially from force platforms. Nevertheless, due to a lack of standardization of the STS movement (e.g., position, subject's instructions, etc.), it is difficult to compare results obtained in previous studies. The aim of the present study was to determine the most relevant condition, parameters, and number of trial to perform STS movements. In this study, STS mechanical (maximal and mean force, impulse) and temporal parameters were measured in the vertical, medio-lateral and antero-posterior axes using a force platform. Five STS conditions (i.e., with or without armrests, variation of the height of the chair and the movement speed) were analyzed to evaluate repeatability of different standardized procedures. Most of the mechanical and temporal parameters were influenced by the STS condition (p movement.
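
    A sketch of how the mechanical parameters named above can be extracted from a vertical ground-reaction-force trace; the net-impulse definition is the usual one, and the variable names and toy trace are assumptions:

        import numpy as np

        def sts_parameters(t, fz, body_mass, g=9.81):
            """Mechanical STS parameters from vertical ground reaction force.

            t  : (n,) time stamps in s; fz : (n,) vertical force in N.
            Returns maximal force, mean force, and net vertical impulse
            (N*s), i.e. the integral of (Fz - body weight) over the trial."""
            net = fz - body_mass * g
            return fz.max(), fz.mean(), np.trapz(net, t)

        t = np.linspace(0.0, 1.5, 1501)
        fz = 75.0 * 9.81 + 300.0 * np.sin(np.pi * t / 1.5)   # toy push-off trace
        print(sts_parameters(t, fz, body_mass=75.0))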

  9. Modeling the Aerodynamic Lift Produced by Oscillating Airfoils at Low Reynolds Number

    OpenAIRE

    Khalid, Muhammad Saif Ullah; Akhtar, Imran

    2014-01-01

    For the present study, setting the Strouhal number (St) as the control parameter, numerical simulations of flow past an oscillating NACA-0012 airfoil at a Reynolds number (Re) of 1,000 are performed. Temporal profiles of the unsteady forces (lift and thrust) and their spectral analysis clearly indicate the solution to be a period-1 attractor for low Strouhal numbers. This study reveals that the aerodynamic forces produced by the plunging airfoil are independent of the initial kinematic conditions of the airfoil, which proves the ex...
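
    For reference, the control and similarity parameters used here are conventionally defined as follows (taking A as the plunge amplitude is one common convention for plunging airfoils, assumed here rather than quoted from the paper):

        St = \frac{f\,A}{U_\infty}, \qquad Re = \frac{U_\infty\, c}{\nu},

    with f the plunging frequency, A the plunge amplitude, U_infty the freestream velocity, c the chord, and nu the kinematic viscosity.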

  10. Numerical Investigation of Transitional Flow over a Backward Facing Step Using a Low Reynolds Number k-ε Model

    DEFF Research Database (Denmark)

    Skovgaard, M.; Nielsen, Peter V.

    In this paper it is investigated whether it is possible to simulate and capture some of the low Reynolds number effects numerically using time-averaged momentum equations and a low Reynolds number k-ε model. The test case is the laminar to turbulent transitional flow over a backward facing step...

  11. Model Experiments with Low Reynolds Number Effects in a Ventilated Room

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Filholm, Claus; Topp, Claus

    the isothermal low Reynolds number flow from a slot inlet in the end wall of the room. The experiments are made at a scale of 1:5. Measurements indicate a low Reynolds number effect in the wall jet flow. The virtual origin of the wall jet moves forward in front of the opening at a small Reynolds number......, an effect that is also known from measurements on free jets. The growth rate of the jet, or the length scale, increases and the velocity decay factor decreases at small Reynolds numbers....

  12. Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System

    Science.gov (United States)

    Rezaei, Fatemeh; Yarmohammadian, Mohmmad H.; Haghshenas, Abbas; Fallah, Ali; Ferdosi, Masoud

    2018-01-01

    Background: The methodology of Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement by many organizations. For prioritizing failures, the index of “risk priority number (RPN)” is used, especially for its ease and its subjective evaluations of the occurrence, the severity and the detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a quantitative and qualitative approach in this research. In the qualitative domain, focused group discussion was used to collect data. A quantitative approach was used to calculate the RPN score. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined based on (1) defining inclusion criteria as severity of incident (clinical effect, claim consequence, waste of time and financial loss), occurrence of incident (time-unit occurrence and degree of exposure to risk) and preventability (degree of preventability and defensive barriers), and then (2) quantifying the risk priority criteria by using the RPN index (361 for the highest-rated failure). The improved RPN scores, reassessed by root cause analysis, showed some variations. Conclusions: We concluded that standard criteria should be developed consistent with clinical language and the specific scientific field. Therefore, cooperation and partnership of technical and clinical groups are necessary to modify these models. PMID:29441184
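
    A sketch of the conventional RPN index that the study takes as its starting point; the failure modes and ratings below are hypothetical, and averaging several sub-criteria into a single severity or occurrence rating is an assumed simplification of the revised scheme:

        # Conventional FMEA: RPN = S * O * D, each rated on a 1-10 ordinal scale.
        failures = {
            "wrong patient to OR": {"S": (9, 8, 6, 7), "O": (5, 6), "D": 3},
            "consent not signed":  {"S": (6, 9, 4, 5), "O": (4, 3), "D": 2},
        }

        def rpn(entry):
            s = sum(entry["S"]) / len(entry["S"])   # severity sub-criteria averaged
            o = sum(entry["O"]) / len(entry["O"])   # occurrence sub-criteria averaged
            return s * o * entry["D"]

        for name, entry in sorted(failures.items(), key=lambda kv: -rpn(kv[1])):
            print(f"{name}: RPN = {rpn(entry):.0f}")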

  13. Revised risk priority number in failure mode and effects analysis model from the perspective of healthcare system

    Directory of Open Access Journals (Sweden)

    Fatemeh Rezaei

    2018-01-01

    Full Text Available Background: The methodology of Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement by many organizations. For prioritizing failures, the index of “risk priority number (RPN)” is used, especially for its ease and its subjective evaluations of the occurrence, the severity and the detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a quantitative and qualitative approach in this research. In the qualitative domain, focused group discussion was used to collect data. A quantitative approach was used to calculate the RPN score. Results: We studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined based on (1) defining inclusion criteria as severity of incident (clinical effect, claim consequence, waste of time and financial loss), occurrence of incident (time-unit occurrence and degree of exposure to risk) and preventability (degree of preventability and defensive barriers), and then (2) quantifying the risk priority criteria by using the RPN index (361 for the highest-rated failure). The improved RPN scores, reassessed by root cause analysis, showed some variations. Conclusions: We concluded that standard criteria should be developed consistent with clinical language and the specific scientific field. Therefore, cooperation and partnership of technical and clinical groups are necessary to modify these models.

  14. Evaluating estimators for numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population

    Science.gov (United States)

    Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.

    2007-01-01

    Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size with bias decreasing as effort increased. Based on the simulation results we recommend the Chao estimator for model Mh be used to estimate the number of female bears with cubs-of-the-year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.
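
    A sketch of the recommended estimator, Chao's lower bound for model Mh, built from the counts of animals seen exactly once and exactly twice; the sighting data below are made up:

        from collections import Counter

        def chao_mh(capture_counts):
            """Chao's lower-bound estimator for model Mh.

            capture_counts : per-individual sighting counts (>= 1) for the
            bears actually seen. Uses N = S + f1^2 / (2*f2); a common
            bias-corrected variant is S + f1*(f1-1) / (2*(f2+1)).
            """
            f = Counter(capture_counts)
            s_obs = len(capture_counts)
            f1, f2 = f.get(1, 0), f.get(2, 0)
            if f2 == 0:                       # fall back to bias-corrected form
                return s_obs + f1 * (f1 - 1) / 2.0
            return s_obs + f1**2 / (2.0 * f2)

        # e.g. 20 females seen once, 6 seen twice, 4 seen three times:
        print(chao_mh([1] * 20 + [2] * 6 + [3] * 4))   # 30 + 400/12 ~ 63.3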

  15. Predicting the number and sizes of IBD regions among family members and evaluating the family size requirement for linkage studies.

    Science.gov (United States)

    Yang, Wanling; Wang, Zhanyong; Wang, Lusheng; Sham, Pak-Chung; Huang, Peng; Lau, Yu Lung

    2008-12-01

    With genotyping of high-density single nucleotide polymorphisms (SNPs) replacing that of microsatellite markers in linkage studies, it becomes possible to accurately determine the genomic regions shared identical by descent (IBD) by family members. In addition to evaluating the likelihood of linkage for a region with the underlying disease (the LOD score approach), an appropriate question to ask is what would be the expected number and sizes of IBD regions among the affecteds, as there could be more than one region reaching the maximum achievable LOD score for a given family. Here, we introduce a computer program to allow the prediction of the total number of IBD regions among family members and their sizes. Conversely, it can be used to predict the portion of the genome that can be excluded from consideration according to the family size and user-defined inheritance mode and penetrance. Such information has implications on the feasibility of conducting linkage analysis on a given family of certain size and structure or on a few small families when interfamily homogeneity can be assumed. It can also help determine the most relevant members to be genotyped for such a study. Simulation results showed that the IBD regions containing true mutations are usually larger than regions IBD due to random chance. We have made use of this feature in our program to allow evaluation of the identified IBD regions based on Bayesian probability calculation and simulation results.

  16. The effect of the number of seed variables on the performance of Cooke′s classical model

    International Nuclear Information System (INIS)

    Eggstaff, Justin W.; Mazzuchi, Thomas A.; Sarkani, Shahram

    2014-01-01

    In risk analysis, Cooke's classical model for aggregating expert judgment has been widely used for over 20 years. However, the validity of this model has been the subject of much debate. Critics assert that this model's scoring rule may unintentionally reward experts who manipulate their quantile estimates in order to receive a greater weight. In addition, the question of the number of seed variables required to ensure adequate performance of Cooke's classical model remains unanswered. In this study, we conduct a comprehensive examination of the model through an iterative, cross validation test to perform an out-of-sample comparison between Cooke's classical model and the equal-weight linear opinion pool method on almost all of the expert judgment studies compiled by Cooke and colleagues to date. Our results indicate that Cooke's classical model significantly outperforms equally weighting expert judgment, regardless of the number of seed variables used; however, there may, in fact, be a maximum number of seed variables beyond which Cooke's model cannot outperform an equally-weighted panel. - Highlights: • We examine Cooke's classical model through an iterative, cross validation test. • The performance-based and equally weighted decision makers are compared. • Results strengthen Cooke's argument for a two-fold cross-validation approach. • Accuracy test results show strong support in favor of Cooke's classical method. • There may be a maximum number of seed variables that ensures model performance
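
    For orientation, the calibration component of Cooke's model for an expert assessed at the 5/50/95% quantiles is conventionally computed as the tail probability of a chi-square statistic over the inter-quantile bins; a sketch of that published form (the bin probabilities below assume exactly three elicited quantiles):

        import numpy as np
        from scipy.stats import chi2

        def calibration_score(empirical, n_seeds, p=(0.05, 0.45, 0.45, 0.05)):
            """Cooke-style calibration score (statistical likelihood).

            empirical : observed fraction of seed realizations falling in
                        each inter-quantile bin of the expert's assessments.
            Score = P(chi2_{k-1} >= 2*N*I(s, p)), I = KL divergence."""
            s, p = np.asarray(empirical, float), np.asarray(p, float)
            mask = s > 0                                  # 0*log(0) taken as 0
            kl = np.sum(s[mask] * np.log(s[mask] / p[mask]))
            return chi2.sf(2.0 * n_seeds * kl, df=len(p) - 1)

        # a well-calibrated expert assessed on 10 seed variables:
        print(calibration_score([0.1, 0.4, 0.4, 0.1], 10))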

  17. A model for roll stall and the inherent stability modes of low aspect ratio wings at low Reynolds numbers

    Science.gov (United States)

    Shields, Matt

    The development of Micro Aerial Vehicles has been hindered by the poor understanding of the aerodynamic loading and stability and control properties of the low Reynolds number regime in which the inherent low aspect ratio (LAR) wings operate. This thesis experimentally evaluates the static and damping aerodynamic stability derivatives to provide a complete aerodynamic model for canonical flat plate wings of aspect ratios near unity at Reynolds numbers under 1 × 10^5. This permits the complete functionality of the aerodynamic forces and moments to be expressed and the equations of motion to be solved, thereby identifying the inherent stability properties of the wing. This provides a basis for characterizing the stability of full vehicles. The influence of the tip vortices during sideslip perturbations is found to induce a loading condition referred to as roll stall, a significant roll moment created by the spanwise induced velocity asymmetry related to the displacement of the vortex cores relative to the wing. Roll stall is manifested by a linearly increasing roll moment with low to moderate angles of attack and a subsequent stall event similar to a lift polar; this behavior is not experienced by conventional (high aspect ratio) wings. The resulting large magnitude of the roll stability derivative, Cl,beta, and lack of roll damping, Cl,p, create significant modal responses of the lateral state variables; a linear model used to evaluate these modes is shown to accurately reflect the solution obtained by numerically integrating the nonlinear equations. An unstable Dutch roll mode dominates the behavior of the wing for small perturbations from equilibrium, and in the presence of angle of attack oscillations a previously unconsidered coupled mode, referred to as roll resonance, is seen to develop and drive the bank angle away from equilibrium. Roll resonance requires a linear time-variant (LTV) model to capture the behavior of the bank angle, which is attributed to the
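
    A minimal instance of the kind of linear lateral-directional model used to extract such modes: assemble the state matrix for x = [beta, p, r, phi] and inspect its eigenvalues. Every derivative value below is a placeholder, not a result of the thesis; an unstable Dutch roll would appear as a complex pair with positive real part:

        import numpy as np

        # Lateral-directional small-perturbation model, x' = A x, with
        # x = [sideslip beta, roll rate p, yaw rate r, bank angle phi].
        Yb, g, U = -0.2, 9.81, 10.0        # side-force derivative, gravity, speed
        Lb, Lp, Lr = -15.0, -0.5, 1.0      # large |Lb| mimics "roll stall" stiffness
        Nb, Np, Nr = 4.0, -0.1, -0.3       # yaw derivatives (placeholders)

        A = np.array([[Yb / U, 0.0, -1.0, g / U],
                      [Lb,     Lp,   Lr,  0.0  ],
                      [Nb,     Np,   Nr,  0.0  ],
                      [0.0,    1.0,  0.0, 0.0  ]])
        for lam in np.linalg.eigvals(A):   # complex pairs = oscillatory modes
            print(f"{lam:.3f}")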

  18. Evaluation of Student's Environment by DEA Models

    Directory of Open Access Journals (Sweden)

    F. Moradi

    2016-11-01

    Full Text Available The important question here is: is there real evaluation of educational progress? In other words, if a student has been successful or unsuccessful in mathematics, is it possible to find the reasons behind his progress or weakness? If we want to respond to this significant question, it should be said that the factors of educational progress must be divided into five main groups: (1) family, (2) teacher, (3) student, (4) school, and (5) school management. It can then be said that a student's score does not depend on just one factor, as people have imagined. From this, it can be concluded that by using the DEA and SBM models, each student's efficiency can be investigated and the factors behind the student's strengths and weaknesses can be analyzed.
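
    A minimal input-oriented CCR DEA sketch of the kind of efficiency score discussed (the SBM variant is not reproduced here); the inputs, outputs, and data are hypothetical:

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, j0):
            """Input-oriented CCR efficiency of unit j0 (a sketch).

            X : (m, n) inputs (e.g. study hours, family support index)
            Y : (s, n) outputs (e.g. exam scores) for n students.
            Solves: min theta  s.t.  X @ lam <= theta * X[:, j0],
                                     Y @ lam >= Y[:, j0],  lam >= 0."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                 # minimize theta
            A_in = np.c_[-X[:, [j0]], X]                # X@lam - theta*x0 <= 0
            A_out = np.c_[np.zeros((s, 1)), -Y]         # -Y@lam <= -y0
            res = linprog(c, A_ub=np.r_[A_in, A_out],
                          b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                          bounds=[(0, None)] * (n + 1))
            return res.fun                              # efficiency in (0, 1]

        X = np.array([[4.0, 2.0, 3.0]])      # one input, three students
        Y = np.array([[60.0, 55.0, 70.0]])   # one output
        print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])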

  19. Fuzzy model to estimate the number of hospitalizations for asthma and pneumonia under the effects of air pollution.

    Science.gov (United States)

    Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha

    2017-06-22

    Predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. This is a computational model using fuzzy logic based on Mamdani's inference method. For the fuzzification of the input variables of particulate matter, ozone, sulfur dioxide and apparent temperature, we considered two membership functions for each variable with the linguistic labels good and bad. For the output variable, number of hospitalizations for asthma and pneumonia, we considered five membership functions: very low, low, medium, high and very high. DATASUS was our source for the number of hospitalizations in the year 2007, and the result provided by the model was correlated with the actual hospitalization data with lags from zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant and at those lags. In the year 2007, 1,710 hospitalizations for pneumonia and asthma were recorded in São José dos Campos, State of São Paulo, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output data showed a positive and significant correlation (r = 0.38) with the actual data; the accuracies evaluated for the model were higher for sulfur dioxide at lags 0 and 2 and for particulate matter at lag 1. Fuzzy modeling proved accurate for the approach of pollutant exposure effects and hospitalization for pneumonia and asthma.
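
    A Mamdani-style system of the kind described can be sketched in plain numpy: fuzzify the inputs, apply min for rule activation, aggregate with max, and defuzzify by centroid. The membership shapes, variable ranges, and the two rules below are illustrative assumptions, not the parameters fitted in the study:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Universe for the output: daily hospitalizations (range is an assumption).
y = np.linspace(0.0, 15.0, 301)
out_low  = tri(y, 0.0, 2.0, 5.0)
out_high = tri(y, 4.0, 9.0, 15.0)

def predict(pm10, o3):
    # Input memberships for "bad" air quality (shapes are assumptions).
    pm_bad = tri(pm10, 30.0, 80.0, 150.0)
    o3_bad = tri(o3, 40.0, 100.0, 200.0)
    # Rule 1: IF PM bad AND O3 bad THEN admissions high (min = Mamdani AND).
    w_high = min(pm_bad, o3_bad)
    # Rule 2: otherwise admissions low.
    w_low = 1.0 - w_high
    # Clip consequents, aggregate by max, defuzzify by centroid.
    agg = np.maximum(np.minimum(out_high, w_high), np.minimum(out_low, w_low))
    return np.trapz(agg * y, y) / np.trapz(agg, y)

print(round(predict(pm10=90.0, o3=120.0), 2))  # lands in the "high" region
```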

  20. RTMOD: Real-Time MODel evaluation

    International Nuclear Information System (INIS)

    Graziani, G; Galmarini, S.; Mikkelsen, T.

    2000-01-01

    The 1998-1999 RTMOD project was a system based on automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project, which involved about 50 models run in several institutes around the world to simulate two real tracer releases covering a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. real-time simulations of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with one day's advance warning. They then accessed the RTMOD web page for detailed information on the actual release, and as soon as possible they uploaded their predictions to the RTMOD server, after which they could start their inter-comparison analysis with other modelers. When additional forecast data arrived, already existing statistical results would be recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for real-time

  1. RIDE vs. CLASP Comparison and Evaluation: Models and Parameters

    Science.gov (United States)

    2007-04-01

    the model is declared "No Decision," indicating that neither QLRM nor LLRM provides a satisfactory fit. If p-value(χ²) > p-value(F), then our... evaluated at A = D = 100 (7); U_A/D(70,40) = 37.35 (8); U_A/D(70,100) = 22.93 (9); ∂U_A/D/∂D = 0 when evaluated at A = 70, D = 65.7. Note the number of...

  2. Model for modulated and chaotic waves in zero-Prandtl-number ...

    Indian Academy of Sciences (India)

    KCD) [20] for thermal convection in zero-Prandtl-number fluids in the presence of Coriolis force showed the possibility of self-tuned temporal quasiperiodic waves at the onset of thermal convection. However, the effect of modulation when the.

  3. World Integrated Nuclear Evaluation System: Model documentation

    International Nuclear Information System (INIS)

    1991-12-01

    The World Integrated Nuclear Evaluation System (WINES) is an aggregate demand-based partial equilibrium model used by the Energy Information Administration (EIA) to project long-term domestic and international nuclear energy requirements. WINES follows a top-down approach in which economic growth rates, delivered energy demand growth rates, and electricity demand are projected successively to ultimately forecast total nuclear generation and nuclear capacity. WINES could potentially be used to produce forecasts for any country or region in the world. Presently, WINES is being used to generate long-term forecasts for the United States, and for all countries with commercial nuclear programs in the world, excluding countries located in centrally planned economic areas. Projections for the United States are developed for the period from 2010 through 2030, and for other countries for the period starting in 2000 or 2005 (depending on the country) through 2010. EIA uses a pipeline approach to project nuclear capacity for the period between 1990 and the starting year for which the WINES model is used. This approach involves a detailed accounting of existing nuclear generating units and units under construction, their capacities, their actual or estimated time of completion, and the estimated dates of retirement. Further detail on this approach can be found in Appendix B of Commercial Nuclear Power 1991: Prospects for the United States and the World

  4. Evaluation of clinical information modeling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak

    2016-11-01

    Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed by that consensus and applied to evaluate the currently available tools. Of the 11 initiatives identified as developing tools for clinical information modeling, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria related to displaying semantic relationships between concepts and communication with terminology servers had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization.

  5. Rotating Square-Ended U-Bend Using Low-Reynolds-Number Models

    Directory of Open Access Journals (Sweden)

    Konstantinos-Stephen P. Nikas

    2005-01-01

    bend is better reproduced by the low-Re models. Turbulence levels within the rotating U-bend are underpredicted, but DSM models produce a more realistic distribution. Along the leading side, all models overpredict heat transfer levels just after the bend. Along the trailing side, the heat transfer predictions of the low-Re DSM with the NYap term are close to the measurements.

  6. Evaluation of substitution monopole models for tire noise sound synthesis

    Science.gov (United States)

    Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.

    2010-01-01

    Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.

  7. Tadpoles, anomaly cancellation and the expectation value of the number of the Higgs particles in the standard model

    International Nuclear Information System (INIS)

    El Naschie, M.S.

    2005-01-01

    We motivate the concept of infinitely large and hierarchical matrices in connection with the eight-dimensional super Riemannian tensor and the unification of all fundamental forces. Subsequently, we derive the number of particle-like states and the expectation value of the number of elementary particles in a minimally extended standard model using the total number of tadpoles and the anomaly cancellation condition: n_H + 29 n_T − n_V = R^(8) − N(SM) = 2ᾱ₀ − 1 = 273, where n_H is the number of hyper multiplets, n_T the number of tensor multiplets, n_V the number of vector multiplets, R^(8) the number of independent components of Riemann's curvature tensor in eight dimensions, N(SM) the number of elementary particles in the standard model, and ᾱ₀ the inverse fine structure constant. We can conclude that N(SM) = 66. Consequently, we conjecture that five Higgs particles should be involved in the standard model

  8. Evaluation of the Current State of Integrated Water Quality Modelling

    Science.gov (United States)

    Arhonditsis, G. B.; Wellen, C. C.; Ecological Modelling Laboratory

    2010-12-01

    Environmental policy and management implementation require robust methods for assessing the contribution of various point and non-point pollution sources to water quality problems as well as methods for estimating the expected and achieved compliance with the water quality goals. Water quality models have been widely used for creating the scientific basis for management decisions by providing a predictive link between restoration actions and ecosystem response. Modelling water quality and nutrient transport is challenging due to a number of constraints associated with the input data and existing knowledge gaps related to the mathematical description of landscape and in-stream biogeochemical processes. While enormous effort has been invested to make watershed models process-based and spatially-distributed, there has not been a comprehensive meta-analysis of model credibility in the watershed modelling literature. In this study, we evaluate the current state of integrated water quality modelling across the range of temporal and spatial scales typically utilized. We address several common modelling questions by providing a quantitative assessment of model performance and by assessing how model performance depends on model development. The data compiled represent a heterogeneous group of modelling studies, especially with respect to complexity, spatial and temporal scales and model development objectives. Beginning from 1992, the year when Beven and Binley published their seminal paper on uncertainty analysis in hydrological modelling, and ending in 2009, we selected over 150 papers fitting a number of criteria. These criteria involved publications that: (i) employed distributed or semi-distributed modelling approaches; (ii) provided predictions on flow and nutrient concentration state variables; and (iii) reported fit to measured data. Model performance was quantified with the Nash-Sutcliffe Efficiency, the relative error, and the coefficient of determination. Further, our
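
    Of the three fit statistics compiled, the Nash-Sutcliffe Efficiency is the most widely reported in this literature; a minimal implementation (with invented example values) is:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; values below 0 mean
    the model predicts worse than the mean of the observations."""
    o = np.asarray(observed, dtype=float)
    s = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

obs = np.array([3.1, 4.7, 6.2, 5.0, 4.1])   # measured flows (illustrative)
sim = np.array([2.9, 5.0, 5.8, 5.3, 3.8])   # modelled flows (illustrative)
print(round(nash_sutcliffe(obs, sim), 3))
```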

  9. Evaluating to Solve Educational Problems: An Alternative Model.

    Science.gov (United States)

    Friedman, Myles I.; Anderson, Lorin W.

    1979-01-01

    A 19-step general evaluation model is described through its four stages: identifying problems, prescribing program solutions, evaluating the operation of the program, and evaluating the effectiveness of the model. The role of the evaluator in decision making is also explored. (RAO)

  10. A Model for Evaluating Student Clinical Psychomotor Skills.

    Science.gov (United States)

    Fiel, Nicholas J.; And Others

    1979-01-01

    A long-range plan to evaluate medical students' physical examination skills was undertaken at the Ingham Family Medical Clinic at Michigan State University. The development of the psychomotor skills evaluation model to evaluate the skill of blood pressure measurement, tests of the model's reliability, and the use of the model are described. (JMD)

  11. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae) in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Full Text Available Introduction Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods Instars III/IV and pupae at a 9:1 ratio were placed in three types of containers, each one with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results The five-sweep netting technique was more suitable for drums and water-tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.

  12. Quantum phase crossovers with finite atom number in the Dicke model

    International Nuclear Information System (INIS)

    Hirsch, J G; Castaños, O; Nahmad-Achar, E; López-Peña, R

    2013-01-01

    Two-level atoms interacting with a one-mode cavity field at zero temperature have order parameters which reflect the presence of a quantum phase transition at a critical value of the atom–cavity coupling strength. Two popular examples are the number of photons inside the cavity and the number of excited atoms. Coherent states provide a mean field description, which becomes exact in the thermodynamic limit. Employing symmetry-adapted (SA) SU(2) coherent states the quantum crossover, precursor of the critical behavior, can be described for a finite number of atoms. A variation after projection treatment, involving a numerical minimization of the SA energy surface, associates the quantum crossover with a discontinuity in the order parameters, which originates from competition between two local minima in the SA energy surface. Although this discontinuity is not present in finite systems, it provides a good description of 1/N effects in the observables. (paper)

  13. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    Directory of Open Access Journals (Sweden)

    Takehisa Yamamoto

    Full Text Available Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
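
    The hierarchical bootstrap described — farms resampled with replacement, then isolates within each chosen farm, keeping the overall total at 12 samples — can be sketched as follows; the helper and its argument layout are our own illustration, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_prevalence(resist, farm_of, animals_per_farm, reps=10_000):
    """resist: 0/1 resistance flag per isolate; farm_of: farm index per
    isolate. Returns the SD and the 2.5-97.5 percentile interval of the
    bootstrapped resistance prevalence."""
    farms = np.unique(farm_of)
    n_farms = 12 // animals_per_farm       # keep the total sample at 12
    est = np.empty(reps)
    for b in range(reps):
        chosen = rng.choice(farms, size=n_farms, replace=True)
        draws = [rng.choice(resist[farm_of == f], size=animals_per_farm,
                            replace=True) for f in chosen]
        est[b] = np.concatenate(draws).mean()
    return est.std(), np.percentile(est, [2.5, 97.5])
```

    Running this for animals_per_farm in {1, 2, 3, 4, 6} reproduces the comparison of sampling strategies at a fixed overall sample size.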

  14. Implications of horizontal symmetries on baryon number violation in supersymmetric models

    International Nuclear Information System (INIS)

    Ben-Hamo, V.; Nir, Y.

    1994-08-01

    The smallness of the quark and lepton parameters and the hierarchy between them could be the result of selection rules due to a horizontal symmetry broken by a small parameter. The same selection rules apply to baryon number violating terms. Consequently, the problem of baryon number violation in supersymmetry may be solved naturally, without invoking any especially-designed extra symmetry. This mechanism is efficient enough even for low-scale flavor physics. Proton decay is likely to be dominated by the modes K⁺ν̄ᵢ or K⁰μ⁺(e⁺), and may proceed at observable rates. (authors). 15 refs

  15. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  16. [Evaluation of different sets of variable number of tandem repeats loci for genotyping Mycobacterium tuberculosis isolates in China].

    Science.gov (United States)

    Liu, Mei; Luo, Tao; Yang, Chongguang; Liu, Qingyun; Gao, Qian

    2015-10-01

    To identify a variable number of tandem repeats (VNTR) typing method that is suitable for molecular epidemiological studies of tuberculosis in China. We systematically evaluated the commonly used VNTR typing methods, including 4 methods (MIRU-12, VNTR-15/VNTR-24 and VNTR "24+4") proposed by foreign colleagues and 2 methods (VNTR-L15 and VNTR "9+3") developed by domestic researchers, using a population-based collection of 891 clinical isolates from 5 provinces across the country. The order (from high to low) of discriminatory power for the 6 VNTR typing methods was VNTR "24+4", VNTR "9+3", VNTR-24, VNTR-15, VNTR-L15 and MIRU-12. The discriminatory power of VNTR "9+3" was comparable with VNTR "24+4" and higher than that of VNTR-15/24. The concordance for defining clustered and unique genotypes between VNTR "9+3" and VNTR "24+4" was 96.59%. Our results suggest that VNTR "9+3" is a suitable method for molecular typing of M. tuberculosis in China, considering its high discriminatory power, high consistency with VNTR "24+4" and relatively small number of VNTR loci.

  17. Comparative Study of Fatigue Damage Models Using Different Number of Classes Combined with the Rainflow Method

    Directory of Open Access Journals (Sweden)

    S. Zengah

    2013-06-01

    Full Text Available Fatigue damage increases with applied load cycles in a cumulative manner. Fatigue damage models play a key role in the life prediction of components and structures subjected to random loading. The aim of this paper is to examine the performance of the previously proposed and validated "Damaged Stress Model" against other fatigue models under random loading, before and after reconstruction of the load histories. To achieve this objective, several linear and nonlinear models proposed for fatigue life estimation were considered, and a batch of specimens made of 6082-T6 aluminum alloy was subjected to random loading. The damage was cumulated by Miner's rule, the Damaged Stress Model (DSM), the Henry model and the Unified Theory (UT), and random cycles were counted with a rainflow algorithm. Experimental data on high-cycle fatigue under complex loading histories with different mean and amplitude stress values are analyzed for life calculation, and model predictions are compared.
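
    Miner's rule itself is a one-line accumulation once the random history has been rainflow-counted into (amplitude, count) blocks. The sketch below uses an illustrative Basquin-type S-N curve, not the 6082-T6 parameters of the paper:

```python
def basquin_life(s_amp, sigma_f=500.0, b=-0.1):
    """Cycles to failure from a Basquin-type S-N curve N = (S/sigma_f)^(1/b).
    The constants here are illustrative, not fitted material data."""
    return (s_amp / sigma_f) ** (1.0 / b)

def miner_damage(cycles):
    """cycles: iterable of (stress_amplitude_MPa, n_applied) pairs, e.g.
    as produced by a rainflow count of the random load history."""
    return sum(n / basquin_life(s) for s, n in cycles)

# Rainflow-counted blocks: (amplitude, count). Failure predicted at D >= 1.
blocks = [(180.0, 1.0e4), (220.0, 1.0e3), (260.0, 2.0e2)]
print(f"Miner damage D = {miner_damage(blocks):.3f}")
```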

  18. DTIC Review: Human, Social, Cultural and Behavior Modeling. Volume 9, Number 1 (CD-ROM)

    National Research Council Canada - National Science Library

    2008-01-01

    ...: Human, Social, Cultural and Behavior (HSCB) models are designed to help understand the structure, interconnections, dependencies, behavior, and trends associated with any collection of individuals...

  19. Modelling Problem-Solving Situations into Number Theory Tasks: The Route towards Generalisation

    Science.gov (United States)

    Papadopoulos, Ioannis; Iatridou, Maria

    2010-01-01

    This paper examines the way two 10th graders cope with a non-standard generalisation problem that involves elementary concepts of number theory (more specifically linear Diophantine equations) in the geometrical context of a rectangle's area. Emphasis is given on how the students' past experience of problem solving (expressed through interplay…

  20. Positioning and number of nutritional levels in dose-response trials to estimate the optimal level and the adjustment of the models

    Directory of Open Access Journals (Sweden)

    Fernando Augusto de Souza

    2014-07-01

    Full Text Available The aim of this research was to evaluate the influence of the number and position of nutrient levels used in dose-response trials on the estimation of the optimal level (OL) and the goodness of fit of the models: quadratic polynomial (QP), exponential (EXP), linear response plateau (LRP) and quadratic response plateau (QRP). Data from dose-response trials carried out at FCAV-Unesp, Jaboticabal, were used, considering homogeneity of variances and normal distribution. The fit of the models was evaluated using the following statistics: adjusted coefficient of determination (R²adj), coefficient of variation (CV) and the sum of squared deviations (SSD). It was verified for the QP and EXP models that small changes in the placement and distribution of the levels caused great changes in the estimation of the OL. The LRP model was deeply influenced by the absence or presence of a level between the response and stabilization phases (the change from the straight line to the plateau). The QRP needed more levels in the response phase, and the last level in the stabilization phase, to estimate the plateau correctly. It was concluded that the OL and the fit of the models depend on the positioning and the number of the levels and on the specific characteristics of each model, but levels defined near the true requirement and not too widely spaced are better for estimating the OL.
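
    The LRP model, whose optimal-level estimate is the break point of the fitted curve, can be fitted with a standard nonlinear least-squares routine. The data below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def lrp(x, a, b, x0):
    """Linear response plateau: linear rise up to the break point x0,
    constant plateau beyond it."""
    return np.where(x < x0, a + b * x, a + b * x0)

# Illustrative dose-response data: nutrient level vs. weight gain.
levels = np.array([0.5, 0.7, 0.9, 1.1, 1.3, 1.5])
gain   = np.array([310., 355., 398., 430., 433., 431.])

popt, _ = curve_fit(lrp, levels, gain, p0=[200.0, 150.0, 1.0])
a, b, x0 = popt
print(f"estimated optimal level (break point): {x0:.2f}")
```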

  1. Comparison of the 1981 INEL dispersion data with results from a number of different models

    Energy Technology Data Exchange (ETDEWEB)

    Lewellen, W S; Sykes, R I; Parker, S F

    1985-05-01

    The results from simulations by 12 different dispersion models are compared with observations from an extensive field experiment conducted by the Nuclear Regulatory Commission at the Idaho National Engineering Laboratory in July 1981. Comparisons were made on the basis of hourly SF₆ samples taken at the surface, out to approximately 10 km from the 46 m release tower, both during and following 7 different 8-hour releases. Comparisons are also made for total integrated doses collected out to approximately 40 km. Three classes of models are used. Within the limited range appropriate for Class A models, this data comparison shows that neither the puff models nor the transport and diffusion models agree with the data any better than the simple Gaussian plume models. The puff and transport and diffusion models do show a slight edge in performance in comparison with the total dose over the extended range appropriate for Class B models. The best model results for the hourly samples show approximately 40% calculated within a factor of two when a 15° uncertainty in plume position is permitted and it is assumed that higher data samples may occur at stations between the actual sample sites. This is increased to 60% for the 12-hour integrated dose and 70% for the total integrated dose when the same performance measure is used. None of the models reproduce the observed patchy dose patterns. This patchiness is consistent with the discussion of the inherent uncertainty associated with time-averaged plume observations contained in our companion reports on the scientific critique of available models.

  2. [Application of ARIMA model to predict number of malaria cases in China].

    Science.gov (United States)

    Hui-Yu, H; Hua-Qin, S; Shun-Xian, Z; Lin, A I; Yan, L U; Yu-Chun, C; Shi-Zhu, L I; Xue-Jiao, T; Chun-Li, Y; Wei, H U; Jia-Xu, C

    2017-08-15

    Objective To study the application of the autoregressive integrated moving average (ARIMA) model to predict the monthly reported malaria cases in China, so as to provide a reference for the prevention and control of malaria. Methods SPSS 24.0 software was used to construct ARIMA models based on the monthly reported malaria cases of the time series of 2006-2015 and 2011-2015, respectively. The data of malaria cases from January to December 2016 were used as validation data to compare the accuracy of the two ARIMA models. Results The models of the monthly reported cases of malaria in China were ARIMA(2,1,1)(1,1,0)₁₂ and ARIMA(1,0,0)(1,1,0)₁₂, respectively. The comparison between the predictions of the two models and the actual situation of malaria cases showed that the ARIMA model based on the data of 2011-2015 had a higher forecasting accuracy than the model based on the data of 2006-2015. Conclusion The establishment and prediction of an ARIMA model is a dynamic process, which needs to be adjusted continually according to the accumulated data; in addition, major changes in the epidemic characteristics of infectious diseases must be considered.
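
    A seasonal model of the form reported, e.g. ARIMA(1,0,0)(1,1,0)₁₂, can be reproduced with statsmodels; the sketch assumes the monthly counts are already loaded as a time-indexed series (loading the data is left out):

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_and_forecast(monthly_cases, steps=12):
    """monthly_cases: monthly reported malaria counts (e.g. a pandas
    Series covering 2011-2015); returns forecasts for the next year."""
    model = SARIMAX(monthly_cases,
                    order=(1, 0, 0),               # ARIMA(1,0,0)
                    seasonal_order=(1, 1, 0, 12))  # (1,1,0) with period 12
    fitted = model.fit(disp=False)
    return fitted.forecast(steps=steps)            # e.g. the 12 months of 2016
```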

  3. Optimal Number of States in Hidden Markov Models and its ...

    African Journals Online (AJOL)

    In this paper, Hidden Markov Model is applied to model human movements as to .... emit either discrete information or a continuous data derived from a Probability .... For each hidden state in the test set, the probability = ... by applying the Kullback-Leibler distance (Juang & Rabiner, 1985) which ..... One Size Does Not Fit.

  4. Pseudorandom numbers: evolutionary models in image processing, biology, and nonlinear dynamic systems

    Science.gov (United States)

    Yaroslavsky, Leonid P.

    1996-11-01

    We show that one can treat pseudo-random generators, evolutionary models of texture images, iterative local adaptive filters for image restoration and enhancement, and growth models in biology and material sciences in a unified way, as special cases of dynamic systems with nonlinear feedback.

  5. Two analytical models for evaluating performance of Gigabit Ethernet Hosts

    International Nuclear Information System (INIS)

    Salah, K.

    2006-01-01

    Two analytical models are developed to study the impact of interrupt overhead on the operating system performance of network hosts when subjected to Gigabit network traffic. Under heavy network traffic, system performance will be negatively affected due to interrupt overhead caused by incoming traffic. In particular, excessive latency and significant degradation in system throughput can be experienced. Also, user applications may livelock as the CPU power is mostly consumed by interrupt handling and protocol processing. In this paper we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queuing theory, while the second, which is more accurate but more complex, is a pure Markov process. For the most part both models give mathematically-equivalent closed-form solutions for a number of important system performance metrics. These metrics include throughput, latency, stability condition, CPU utilization of interrupt handling and protocol processing, and CPU availability for user applications. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems when subjected to light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance. The paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation. (author)
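
    The livelock behavior these models capture can be illustrated with a toy saturation calculation (an illustrative model, not the paper's closed-form solutions): interrupt handling preempts protocol processing, so past a critical arrival rate the delivered throughput collapses.

```python
import numpy as np

# Toy receive-livelock model: each packet costs t_int seconds of
# (preemptive) interrupt handling and t_proto seconds of protocol
# processing, out of one CPU-second per second. Costs are invented.
def goodput(arrival_rate, t_int=5e-6, t_proto=15e-6):
    cpu_for_interrupts = np.minimum(arrival_rate * t_int, 1.0)
    remaining = 1.0 - cpu_for_interrupts
    return np.minimum(arrival_rate, remaining / t_proto)

for lam in [10e3, 40e3, 60e3, 100e3, 200e3]:   # packets per second
    print(f"offered {lam:8.0f} pps -> delivered {goodput(lam):8.0f} pps")
# Past saturation, delivered throughput falls as interrupts consume the
# whole CPU -- the livelock behavior described in the abstract.
```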

  6. MODEL-OBSERVATION COMPARISONS OF ELECTRON NUMBER DENSITIES IN THE COMA OF 67P/CHURYUMOV–GERASIMENKO DURING 2015 JANUARY

    Energy Technology Data Exchange (ETDEWEB)

    Vigren, E.; Edberg, N. J. T.; Eriksson, A. I.; Johansson, F.; Odelstad, E. [Swedish Institute of Space Physics, Uppsala (Sweden); Altwegg, K.; Tzou, C.-Y. [Physikalisches Institut, University of Bern, Bern (Switzerland); Galand, M. [Department of Physics, Imperial College London, London (United Kingdom); Henri, P.; Valliéres, X., E-mail: erik.vigren@irfu.se [Laboratoire de Physique et Chimie de l’Environnement et de l’Espace, Orleans (France)

    2016-09-01

    During 2015 January 9–11, at a heliocentric distance of ∼2.58–2.57 au, the ESA Rosetta spacecraft resided at a cometocentric distance of ∼28 km from the nucleus of comet 67P/Churyumov–Gerasimenko, sweeping the terminator at northern latitudes of 43°N–58°N. Measurements by the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/Comet Pressure Sensor (ROSINA/COPS) provided neutral number densities. We have computed modeled electron number densities using the neutral number densities as input into a Field Free Chemistry Free model, assuming H₂O dominance and ion-electron pair formation by photoionization only. A good agreement (typically within 25%) is found between the modeled electron number densities and those observed from measurements by the Mutual Impedance Probe (RPC/MIP) and the Langmuir Probe (RPC/LAP), both being subsystems of the Rosetta Plasma Consortium. This indicates that ions along the nucleus-spacecraft line were strongly coupled to the neutrals, moving radially outward with about the same speed. Such a statement, we propose, can be further tested by observations of H₃O⁺/H₂O⁺ number density ratios and associated comparisons with model results.

  7. Modelling and evaluating against the violent insider

    International Nuclear Information System (INIS)

    Fortney, D.S.; Al-Ayat, R.A.; Saleh, R.A.

    1991-01-01

    The violent insider threat poses a special challenge to facilities protecting special nuclear material from theft or diversion. These insiders could potentially behave as nonviolent insiders to deceitfully defeat certain safeguards elements and use violence to forcefully defeat hardware or personnel. While several vulnerability assessment tools are available to deal with the nonviolent insider, very limited effort has been directed to developing analysis tools for the violent threat. In this paper, the authors present an approach using the results of a vulnerability assessment for nonviolent insiders to evaluate certain violent insider scenarios. Since existing tools do not explicitly consider violent insiders, the approach is intended for experienced safeguards analysts and relies on the analyst to brainstorm possible violent actions, to assign detection probabilities, and to ensure consistency. The authors then discuss our efforts in developing an automated tool for assessing the vulnerability against those violent insiders who are willing to use force against barriers, but who are unwilling to kill or be killed. Specifically, the authors discuss our efforts in developing databases for violent insiders penetrating barriers, algorithms for considering the entry of contraband, and modelling issues in considering the use of violence

  9. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors.

  10. Evaluation of economic and performance outcomes associated with the number of treatments after an initial diagnosis of bovine respiratory disease in commercial feeder cattle.

    Science.gov (United States)

    Cernicchiaro, Natalia; White, Brad J; Renter, David G; Babcock, Abram H

    2013-02-01

    To evaluate associations between economic and performance outcomes with the number of treatments after an initial diagnosis of bovine respiratory disease (BRD) in commercial feedlot cattle. 212,867 cattle arriving in a Midwestern feedlot between 2001 and 2006. An economic model was created to estimate net returns. Generalized linear mixed models were used to determine associations between the frequency of BRD treatments and other demographic variables with economic and performance outcomes. Net returns decreased with increasing number of treatments for BRD. However, the magnitude depended on the season during which cattle arrived at the feedlot, with significantly higher returns for cattle arriving during fall and summer than for cattle arriving during winter and spring. For fall arrivals, there were higher mean net returns for cattle that were never treated ($39.41) than for cattle treated once ($29.49), twice ($16.56), or ≥ 3 times (-$33.00). For summer arrivals, there were higher least squares mean net returns for cattle that were never treated ($31.83) than for cattle treated once ($20.22), twice ($6.37), or ≥ 3 times (-$42.56). Carcass traits pertaining to weight and quality grade were deemed responsible for differences in net returns among cattle receiving different numbers of treatments after an initial diagnosis of BRD. Differences in economic net returns and performance outcomes for feedlot cattle were determined on the basis of number of treatments after an initial diagnosis of BRD; the analysis accounted for the season of arrival, sex, and weight class.

  11. Baryon number and lepton universality violation in leptoquark and diquark models

    Directory of Open Access Journals (Sweden)

    Nima Assad

    2018-02-01

    Full Text Available We perform a systematic study of models involving leptoquarks and diquarks with masses well below the grand unification scale and demonstrate that a large class of them is excluded due to rapid proton decay. After singling out the few phenomenologically viable color triplet and sextet scenarios, we show that there exist only two leptoquark models which do not suffer from tree-level proton decay and which have the potential for explaining the recently discovered anomalies in B meson decays. Both of those models, however, contain dimension five operators contributing to proton decay and require a new symmetry forbidding them to emerge at a higher scale. This has a particularly nice realization for the model with the vector leptoquark (3,1)_{2/3}, which points to a specific extension of the Standard Model, namely the Pati–Salam unification model, where this leptoquark naturally arises as the new gauge boson. We explore this possibility in light of recent B physics measurements. Finally, we analyze also a vector diquark model, discussing its LHC phenomenology and showing that it has nontrivial predictions for neutron–antineutron oscillation experiments.

  12. On Spatial Resolution in Habitat Models: Can Small-scale Forest Structure Explain Capercaillie Numbers?

    Directory of Open Access Journals (Sweden)

    Ilse Storch

    2002-06-01

    Full Text Available This paper explores the effects of spatial resolution on the performance and applicability of habitat models in wildlife management and conservation. A Habitat Suitability Index (HSI) model for the Capercaillie (Tetrao urogallus) in the Bavarian Alps, Germany, is presented. The model was exclusively built on non-spatial, small-scale variables of forest structure and without any consideration of landscape patterns. The main goal was to assess whether a HSI model developed from small-scale habitat preferences can explain differences in population abundance at larger scales. To validate the model, habitat variables and indirect sign of Capercaillie use (such as feathers or feces) were mapped in six study areas based on a total of 2901 sample plots of 20 m radius (for habitat variables) and 5 m radius (for Capercaillie sign). First, the model's representation of Capercaillie habitat preferences was assessed. Habitat selection, as expressed by Ivlev's electivity index, was closely related to HSI scores, increased from poor to excellent habitat suitability, and was consistent across all study areas. Then, habitat use was related to HSI scores at different spatial scales. Capercaillie use was best predicted from HSI scores at the small scale. Lowering the spatial resolution of the model stepwise to 36-ha, 100-ha, 400-ha, and 2000-ha areas and relating Capercaillie use to aggregated HSI scores resulted in a deterioration of fit at larger scales. Most importantly, there were pronounced differences in Capercaillie abundance at the scale of study areas, which could not be explained by the HSI model. The results illustrate that even if a habitat model correctly reflects a species' smaller scale habitat preferences, its potential to predict population abundance at larger scales may remain limited.
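
    Ivlev's electivity index used in the validation is straightforward to compute; the example values are invented:

```python
def ivlev_electivity(use, availability):
    """Ivlev's index E = (r - p) / (r + p), where r is the proportion of
    use and p the proportion of availability of a habitat class; E runs
    from -1 (avoided) through 0 (used as available) to +1 (selected)."""
    return (use - availability) / (use + availability)

# Example: an HSI class holding 10% of the area but 25% of Capercaillie sign.
print(round(ivlev_electivity(0.25, 0.10), 2))  # 0.43 -> positive selection
```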

  13. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    OpenAIRE

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-01-01

    BACKGROUND: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poi...

  14. Airline service quality evaluation: A review on concepts and models

    OpenAIRE

    Navid Haghighat

    2017-01-01

    This paper reviews the major service quality concepts and models that led to great developments in evaluating service quality, focusing on the improvement process of the models through discussion of the criticisms of each model. Criticisms of these models are discussed to clarify the development steps of newer models that led to the improvement of airline service quality models. The precise and accurate evaluation of service quality requires a reliable concept with comprehensive crite...

  15. Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation

    Science.gov (United States)

    2018-01-01

    ARL-TR-8284 ● JAN 2018 ● US Army Research Laboratory ● Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation ... although some minor changes may be needed. The program processes a GTRAJ output text file that contains results from 2 or more simulations, where each

  16. Evaluating subjective cognitive impairment in the adult epilepsy clinic: Effects of depression, number of antiepileptic medications, and seizure frequency.

    Science.gov (United States)

    Feldman, Lauren; Lapin, Brittany; Busch, Robyn M; Bautista, Jocelyn F

    2018-04-01

    Subjective cognitive complaints are a frequent concern of patients with epilepsy. The Aldenkamp-Baker Neuropsychological Assessment Schedule (ABNAS) is a patient-reported scale validated to measure adverse cognitive effects of antiepileptic drugs (AEDs). The goals of this study were to identify predictors of patient-reported cognitive dysfunction and to assess the relationship between subjective and objective cognitive impairment. The Cleveland Clinic Knowledge Program Data Registry was used to identify adult patients seen in the outpatient epilepsy clinic from January to May 2015 who completed the following scales: ABNAS for subjective cognitive impairment, Patient Health Questionnaire (PHQ-9) for depression, Generalized Anxiety Disorder 7-item (GAD-7) scale, Quality of Life in Epilepsy (QOLIE-10), and EuroQOL five dimensions questionnaire (EQ-5D) for health-related quality of life. Topiramate (TPM) was considered a high-risk medication for cognitive impairment. Patients were categorized into groups based on total ABNAS score: subjective cognitive impairment (ABNAS>15; N=270) and no subjective cognitive impairment (ABNAS≤15; N=400). Multivariable logistic regression models were constructed to identify independent predictors of subjective cognitive impairment. In a subset of patients who had neuropsychological testing within 6 months of completing the ABNAS (N=60), Pearson correlations and multivariable logistic regression models, controlling for number of AEDs, depression, and anxiety, assessed the relationship between subjective cognitive impairment and objective cognitive performance on measures of intelligence, attention/working memory, verbal fluency, naming, processing speed, manual dexterity, visuomotor processing, and verbal memory. Forty percent of patients in the overall sample (N=270/670) reported cognitive impairment. The variables most strongly associated with subjective cognitive impairment were PHQ-9 score, number of AEDs, and seizure frequency. In

  17. Total cross sections of hadron interactions at high energies in low constituents number model

    International Nuclear Information System (INIS)

    Abramovskij, V.A.; Radchenko, N.V.

    2009-01-01

    We consider a QCD hadron interaction model in which the gluon density in the initial-state wave function is low in rapidity space and real hadrons are produced from the decay of color strings. In this model the behavior of the total cross sections of pp, p̄p, π±p, K±p, γp, and γγ interactions is well described. The value of the proton-proton total cross section at LHC energy is predicted

  18. UPTRANS: an incremental transport model with feedback for quick-response strategy evaluation

    CSIR Research Space (South Africa)

    Venter, C

    2009-07-01

    Full Text Available The paper describes the development of a prototype transport model to be used for high-level evaluation of a potentially large number of alternative land use-transport scenarios. It uses advanced logit modelling to capture travel behaviour change...

  19. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  20. In vivo evaluation of an anti-PSMA antibody conjugated with varying numbers of biotin molecules in a pretargeting protocol

    International Nuclear Information System (INIS)

    Wilbur, D.S.; Hamlin, D.K.; Quinn, J.; Vessella, R.L.

    2003-01-01

    An investigation has been conducted to determine the effect of varying the number of biotin molecules conjugated with an anti-PSMA antibody (mAb) as part of our studies to optimize biotinylated antibodies and radiolabeled streptavidin in pretargeting protocols for Targeted Radionuclide Therapy of prostate cancer. In the investigation, the anti-PSMA antibody 107-1A4 was biotinylated with varying amounts of biotinamidocaproate N-hydroxysuccinimide ester. This procedure resulted in obtaining 107-1A4 with 2.3, 4.5, and 6.8 biotins conjugated as measured by the standard HABA assay. The biotinylated 107-1A4 was radioiodinated and was evaluated in a pretargeting protocol in athymic mice bearing LNCaP human tumor xenografts. In the protocol, 50 μg biotinylated [¹²⁵I]107-1A4 was injected, followed 48 h later by 25 μg of avidin for blood clearance, and 1 h after that 20 μg of radiolabeled succinylated recombinant streptavidin ([¹³¹I]sSAv) was administered. The tumor localization and tissue distribution were evaluated at 24, 48, and 72 h post [¹³¹I]sSAv injection. With 2.3 biotin/mAb, an approximate 1:1 molar ratio (4-5 pmol/g) of sSAv/mAb was obtained at all three time points. With 4.5 biotin/mAb, a 1:1 ratio was observed at 24 h, but approximately 2:1 was observed at 48 and 72 h pi. With 6.8 biotin/mAb, sSAv/mAb ratios of approximately 1.5:1, 2:1, and 3:1 were obtained at 24, 48, and 72 h pi, respectively. The amount of sSAv localized in the tumor was nearly the same (4-5 pmol/g) when 107-1A4 had 2.3 or 4.5 biotins conjugated, but decreased to 3-4.5 pmol/g with 6.8 biotins conjugated. Because the highest levels of co-localized sSAv were found with the lowest number of biotin conjugates, the observed differences in ratios of sSAv/mAb may be best explained as differences in internalization and degradation of mAb and protease-resistant sSAv. In duplicate experiments, similar results were obtained with biotinylated 107-1A4 F(ab')₂, but not with an mAb to a non-internalizing antigen

  1. Modeling and designing of variable-period and variable-pole-number undulator

    Directory of Open Access Journals (Sweden)

    I. Davidyuk

    2016-02-01

    Full Text Available The concept of the permanent-magnet variable-period undulator (VPU) was proposed several years ago and has found few implementations so far. The VPUs have some advantages as compared with conventional undulators, e.g., a wider range of radiation wavelength tuning and the option to increase the number of poles for shorter periods. Both these advantages will be realized in the VPU under development now at Budker INP. In this paper, we present the results of 2D and 3D magnetic field simulations and discuss some design features of this VPU.

  2. Linear programming model for solution of matrix game with payoffs trapezoidal intuitionistic fuzzy number

    Directory of Open Access Journals (Sweden)

    Darunee Hunwisai

    2017-01-01

    Full Text Available In this work, we considered two-person zero-sum games with fuzzy payoffs and matrix games with payoffs of trapezoidal intuitionistic fuzzy numbers (TrIFNs). The concepts of TrIFNs and their arithmetic operations were used. The cut-set based method for matrix games with payoffs of TrIFNs was also considered. The interval-type value of any α-cut strategies was computed by the simplex method for linear programming. The proposed method is illustrated with a numerical example.
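
    For crisp payoffs, the value and an optimal mixed strategy of a zero-sum matrix game reduce to a single linear program; in the α-cut approach, one such LP is solved for each interval bound. A sketch with scipy, using an invented 2×2 payoff matrix:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Optimal mixed strategy and game value for the row player of
    payoff matrix A (crisp payoffs)."""
    m, n = A.shape
    # Variables: [x_1 .. x_m, v]; maximize v  <=>  minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # Column constraints: v - sum_i A[i, j] * x_i <= 0 for every column j.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one; v is unconstrained in sign.
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

A = np.array([[3.0, -1.0], [-2.0, 4.0]])
strategy, value = solve_zero_sum(A)
print(strategy, round(value, 3))   # -> [0.6 0.4], value 1.0
```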

  3. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography.

    Science.gov (United States)

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-04-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under the receiver operating characteristic curve (AUC), was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using the AUC from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.

  4. Investigation of the Effects of the Number of Categories on Psychometric Properties According to Mokken Homogeneity Model

    Directory of Open Access Journals (Sweden)

    Asiye ŞENGÜL AVŞAR

    2018-03-01

    Full Text Available The aim of the research was to examine the effects of the number of categories for polytomous items on psychometric properties in a nonparametric item response theory (NIRT) model. For the purpose of the study, data sets were generated with two different sample sizes (100 and 500) drawn from different sample distribution shapes (normal, positively skewed, and negatively skewed), two different test lengths (10 items and 30 items), and three different numbers of categories (three, five, and seven). The effects of the number of categories on the psychometric properties of polytomous items were analyzed with the Mokken Homogeneity Model (MHM) under the NIRT framework. The research was designed as basic research. R 3.4.0 software (via RStudio) was used in the generation and analysis of the data sets, and the mokken package was used for the analyses conducted with the MHM. According to scaling with the MHM, no specific pattern of item fit to the MHM was observed as the number of categories changed. In general, it was found that the number of categories has no effect on the reliability estimate. It was determined that the tests have weak fit to the MHM under the test conditions in the research.

  5. Evaluation of nonlinearity and validity of nonlinear modeling for complex time series.

    Science.gov (United States)

    Suzuki, Tomoya; Ikeguchi, Tohru; Suzuki, Masuo

    2007-10-01

    Even if an original time series exhibits nonlinearity, it is not always effective to approximate the time series by a nonlinear model because such nonlinear models have high complexity from the viewpoint of information criteria. Therefore, we propose two measures to evaluate both the nonlinearity of a time series and validity of nonlinear modeling applied to it by nonlinear predictability and information criteria. Through numerical simulations, we confirm that the proposed measures effectively detect the nonlinearity of an observed time series and evaluate the validity of the nonlinear model. The measures are also robust against observational noises. We also analyze some real time series: the difference of the number of chickenpox and measles patients, the number of sunspots, five Japanese vowels, and the chaotic laser. We can confirm that the nonlinear model is effective for the Japanese vowel /a/, the difference of the number of measles patients, and the chaotic laser.

  6. Nuclear safety culture evaluation model based on SSE-CMM

    International Nuclear Information System (INIS)

    Yang Xiaohua; Liu Zhenghai; Liu Zhiming; Wan Yaping; Peng Guojian

    2012-01-01

    Safety culture, which is of great significance for establishing safety objectives, characterizes the level of enterprise safety production and development. Traditional safety culture evaluation models emphasize the thinking and behavior of individuals and the organization, and pay attention to evaluation results while ignoring the process. Moreover, the determination of evaluation indicators lacks objective evidence. A novel multidimensional safety culture evaluation model, which is scientific and complete, is addressed by building a preliminary mapping between safety culture and the process areas and generic practices of the SSE-CMM (Systems Security Engineering Capability Maturity Model). The model focuses on evaluating the enterprise's system security engineering process and provides new ideas and scientific evidence for the study of safety culture. (authors)

  7. Presenting an Evaluation Model for the Cancer Registry Software.

    Science.gov (United States)

    Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh

    2017-12-01

    As cancer incidence continues to grow, cancer registration is of great importance as the core of cancer control programs, and many different software products have been designed for this purpose. Therefore, establishing a comprehensive evaluation model is essential for evaluating and comparing a wide range of such software. In this study, the criteria for cancer registry software were determined by studying the documentation and two functional software products in this field. The evaluation tool was a checklist, and in order to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the results of the validation, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, when the model was approved, the final version of the evaluation model for cancer registry software was presented. The evaluation model of this study comprises both an evaluation tool and an evaluation method. The evaluation tool is a checklist including the general and specific criteria of cancer registry software along with their sub-criteria. Based on the findings, a criteria-based evaluation method was chosen as the evaluation method of this study. The model of this study encompasses various dimensions of cancer registry software and a proper method for evaluating it. The strong point of this evaluation model is the separation between the general criteria and the specific ones, while preserving the comprehensiveness of the criteria. Since this model has been validated, it can be used as a standard to evaluate cancer registry software.

  8. Separate μ- and e-lepton numbers non-conservation in the Weinberg model with two higgs doublets

    International Nuclear Information System (INIS)

    Branco, G.C.

    1977-03-01

    It is shown that in the Weinberg-Salam model with two Higgs doublets, one is naturally led to the violation of separate μ- and e-lepton numbers. The branching ratio for μ → eγ is found to be comparable to the present experimental limit. (orig.) [de]

  9. Magnetic Helicity Estimations in Models and Observations of the Solar Magnetic Field. III. Twist Number Method

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y. [School of Astronomy and Space Science and Key Laboratory of Modern Astronomy and Astrophysics in Ministry of Education, Nanjing University, Nanjing 210023 (China); Pariat, E.; Moraitis, K. [LESIA, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Université, UPMC Univ. Paris 06, Univ. Paris Diderot, Sorbonne Paris Cité, F-92190 Meudon (France); Valori, G. [University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT (United Kingdom); Anfinogentov, S. [Institute of Solar-Terrestrial Physics SB RAS 664033, Irkutsk, P.O. box 291, Lermontov Street, 126a (Russian Federation); Chen, F. [Max-Plank-Institut für Sonnensystemforschung, D-37077 Göttingen (Germany); Georgoulis, M. K. [Research Center for Astronomy and Applied Mathematics of the Academy of Athens, 4 Soranou Efesiou Street, 11527 Athens (Greece); Liu, Y. [W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States); Thalmann, J. K. [Institute of Physics, Univeristy of Graz, Universitätsplatz 5/II, A-8010 Graz (Austria); Yang, S., E-mail: guoyang@nju.edu.cn [Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2017-05-01

    We study the writhe, twist, and magnetic helicity of different magnetic flux ropes, based on models of the solar coronal magnetic field structure. These include an analytical force-free Titov–Démoulin equilibrium solution, non-force-free magnetohydrodynamic simulations, and nonlinear force-free magnetic field models. The geometrical boundary of the magnetic flux rope is determined by the quasi-separatrix layer and the bottom surface, and the axis curve of the flux rope is determined by its overall orientation. The twist is computed by the Berger–Prior formula, which is suitable for arbitrary geometry and both force-free and non-force-free models. The magnetic helicity is estimated by the twist multiplied by the square of the axial magnetic flux. We compare the obtained values with those derived by a finite volume helicity estimation method. We find that the magnetic helicity obtained with the twist method agrees with the helicity carried by the purely current-carrying part of the field within uncertainties for most test cases. It is also found that the current-carrying part of the model field is relatively significant at the very location of the magnetic flux rope. This qualitatively explains the agreement between the magnetic helicity computed by the twist method and the helicity contributed purely by the current-carrying magnetic field.

  10. Modeling a support system for the evaluator

    International Nuclear Information System (INIS)

    Lozano Lima, B.; Ilizastegui Perez, F; Barnet Izquierdo, B.

    1998-01-01

    This work gives evaluators a tool they can employ to add rigour to their review of operational limits and conditions. The system establishes the most suitable method for carrying out the evaluation, as well as for assessing the bases of technical operational specifications. It also generates alternative questions to be supplied to the operating entity in support of its decision-making activities.

  11. A new model for Assessment and Optimization of Number of Spare Transformers and their Locations in Distribution Systems

    Directory of Open Access Journals (Sweden)

    M Sedaghati

    2015-12-01

    Full Text Available In this paper, a new model is presented to determine the number of spare transformers and their locations for distribution stations. The number of spare transformers must be chosen so as to require minimum investment, while remaining sufficient to replace transformers that have been damaged. For this reason, a new objective function is presented to maximize profit in the distribution company's budgeting and planning. To determine the number of spares that must be available in a stock room, this paper considers the number of spares and transformer failures at the same time. The number of spare transformers is determined so that at least one spare transformer will be available to replace any failed transformer. The paper takes into account the time required to purchase or repair a failed transformer when determining the number of required spares. Furthermore, as the number of spare units increases, the cost of maintenance increases, so an economic comparison must be made between the costs saved by reducing outage time and the costs added by holding spare transformers.
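
    The economic comparison described above can be illustrated with a simple expected-cost calculation. As a modelling convenience not stated in the abstract, failures during the purchase/repair lead time are assumed to be Poisson; every number below is hypothetical.

    ```python
    from math import exp, factorial

    fleet_size, failure_rate, lead_time = 200, 0.02, 0.5   # units, 1/yr, yr
    lam = fleet_size * failure_rate * lead_time            # mean failures per lead time

    holding_cost = 30_000    # per spare per period (capital + maintenance)
    outage_cost = 250_000    # per failure not covered by an available spare

    def expected_shortfall(s, lam):
        """Expected number of failures exceeding s available spares."""
        return sum((k - s) * exp(-lam) * lam**k / factorial(k)
                   for k in range(s + 1, 60))

    for spares in range(8):
        cost = spares * holding_cost + outage_cost * expected_shortfall(spares, lam)
        print(spares, round(cost))   # choose the number of spares minimising cost
    ```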

  12. Evaluation of long-range transport models in NOVANA

    International Nuclear Information System (INIS)

    Frohn, L.M.; Brandt, J.; Christensen, J.H.; Geels, C.; Hertel, O.; Skjoeth, C.A.; Ellemann, T.

    2007-01-01

    The Lagrangian model ACDEP, which was applied in BOP/NOVA/NOVANA during the period 1995-2004, has been replaced by the more modern Eulerian model DEHM. The new model has a number of advantages, such as a better description of the three-dimensional atmospheric transport, a larger domain, the possibility of high spatial resolution in the calculations, and a more detailed description of photochemical processes and dry deposition. In advance of the replacement, the results of the two models were compared and evaluated using European and Danish measurements. Calculations were performed with both models using the same meteorological and emission input, for Europe for the year 2000 as well as for Denmark for the period 2000-2003. The European measurements applied in the present evaluation were obtained through EMEP. Using these measurements, DEHM and ACDEP were compared with respect to daily and yearly mean concentrations of ammonia (NH3), ammonium (NH4+), the sum of NH3 and NH4+ (SNH), nitric acid (HNO3), nitrate (NO3-), the sum of HNO3 and NO3- (SNO3), nitrogen dioxide (NO2), ozone (O3), sulphur dioxide (SO2) and sulphate (SO4^2-), as well as the hourly mean and daily maximum concentrations of O3. Furthermore, the daily and yearly totals of precipitation and of wet deposition of NH4+, NO3- and SO4^2- were compared for the two models. The statistical parameters applied in the comparison are correlation, bias and fractional bias. The result of the comparison with the EMEP data is that DEHM achieves better correlation coefficients for all chemical parameters (16 parameters in total) when daily values are analysed, and for 15 out of 16 parameters when yearly values are considered. With respect to the fractional bias, the results obtained with DEHM are better than the corresponding results obtained with ACDEP for 11 out of 16 chemical parameters. In general, the performance of the DEHM model is at least as good as that of the ACDEP model.
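
    The three statistics used in the comparison are easy to reproduce. The abstract does not spell out its exact convention for fractional bias; the sketch below assumes the common definition FB = 2(mean_model - mean_obs)/(mean_model + mean_obs), and the sample values are invented.

    ```python
    import numpy as np

    def evaluation_stats(model, obs):
        """Correlation, bias and fractional bias of model vs. observations."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        corr = np.corrcoef(model, obs)[0, 1]
        bias = np.mean(model - obs)
        fb = 2.0 * bias / (np.mean(model) + np.mean(obs))
        return corr, bias, fb

    obs = np.array([1.2, 0.8, 2.1, 1.7, 0.9])    # e.g. daily NH3, made-up values
    dehm = np.array([1.1, 0.9, 1.9, 1.8, 1.0])
    print(evaluation_stats(dehm, obs))
    ```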

  13. Model and scenario variations in predicted number of generations of Spodoptera litura Fab. on peanut during future climate change scenario.

    Directory of Open Access Journals (Sweden)

    Mathukumalli Srinivasa Rao

    Full Text Available The present study features the estimation of the number of generations of the tobacco caterpillar, Spodoptera litura Fab., on the peanut crop at six locations in India using MarkSim, which provides General Circulation Model (GCM) projections of future daily maximum (T.max) and minimum (T.min) air temperatures from six models, viz., BCCR-BCM2.0, CNRM-CM3, CSIRO-Mk3.5, ECHams5, INCM-CM3.0 and MIROC3.2, along with an ensemble of the six, for three emission scenarios (A2, A1B and B1). These data were used to predict future pest scenarios following the growing degree days approach in four climate periods, viz., Baseline (1975), Near future (NF, 2020), Distant future (DF, 2050) and Very distant future (VDF, 2080). It is predicted that more generations would occur during the three future climate periods, with significant variation among scenarios and models. Among the seven models, 1-2 additional generations were predicted during DF and VDF due to higher future temperatures in the CNRM-CM3, ECHams5 and CSIRO-Mk3.5 models. The temperature projections of these models indicated that the generation time would decrease by 18-22% over the baseline. Analysis of variance (ANOVA) was used to partition the variation in the predicted number of generations and in the generation time of S. litura on peanut during the crop season. Geographical location explained 34% of the total variation in the number of generations, followed by time period (26%), model (1.74%) and scenario (0.74%); the remaining 14% of the variation was explained by interactions. The increased number of generations and the reduction in generation time across the six peanut-growing locations of India suggest that the incidence of S. litura may increase under the temperature increases projected for future climate change periods.
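
    A minimal sketch of the growing degree days bookkeeping behind such generation counts is shown below; the base temperature and the thermal constant per generation are placeholders, not the study's calibrated values for S. litura.

    ```python
    import numpy as np

    def generations_per_season(tmax, tmin, t_base, k_thermal):
        """Accumulated degree-days divided by the thermal constant per generation."""
        gdd = np.maximum((tmax + tmin) / 2.0 - t_base, 0.0)
        return gdd.sum() / k_thermal

    rng = np.random.default_rng(3)
    days = 120                                    # crop season length
    tmin = rng.normal(24.0, 2.0, days)            # made-up daily minima (deg C)
    tmax = tmin + rng.uniform(6.0, 12.0, days)    # made-up daily maxima

    # Placeholder thermal requirement per generation (degree-days above 10 C).
    print(generations_per_season(tmax, tmin, t_base=10.0, k_thermal=500.0))
    ```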

  14. Evaluation of the perceptual grouping parameter in the CTVA model

    Directory of Open Access Journals (Sweden)

    Manuel Cortijo

    2005-01-01

    Full Text Available The CODE Theory of Visual Attention (CTVA) is a mathematical model explaining the effects of grouping by proximity and distance upon reaction times and accuracy of response with regard to elements in the visual display. The predictions of the theory agree quite acceptably, in one and two dimensions (CTVA-2D), with the experimental results (reaction times and accuracy of response). The difference between reaction times for compatible and incompatible responses, known as the response-compatibility effect, is also acceptably predicted, except at small distances and high numbers of distractors. Further results using the same paradigm at even smaller distances have now been obtained, showing greater discrepancies. We have therefore introduced a method to evaluate the strength of sensory evidence (the eta parameter), which takes grouping by similarity into account and minimizes these discrepancies.

  15. Statistical models of shape optimisation and evaluation

    CERN Document Server

    Davies, Rhodri; Taylor, Chris

    2014-01-01

    Deformable shape models have wide application in computer vision and biomedical image analysis. This book addresses a key issue in shape modelling: establishment of a meaningful correspondence between a set of shapes. Full implementation details are provided.

  16. Cerebellar plasticity and motor learning deficits in a copy-number variation mouse model of autism.

    Science.gov (United States)

    Piochon, Claire; Kloth, Alexander D; Grasselli, Giorgio; Titley, Heather K; Nakayama, Hisako; Hashimoto, Kouichi; Wan, Vivian; Simmons, Dana H; Eissa, Tahra; Nakatani, Jin; Cherskov, Adriana; Miyazaki, Taisuke; Watanabe, Masahiko; Takumi, Toru; Kano, Masanobu; Wang, Samuel S-H; Hansel, Christian

    2014-11-24

    A common feature of autism spectrum disorder (ASD) is the impairment of motor control and learning, occurring in a majority of children with autism, consistent with perturbation in cerebellar function. Here we report alterations in motor behaviour and cerebellar synaptic plasticity in a mouse model (patDp/+) for the human 15q11-13 duplication, one of the most frequently observed genetic aberrations in autism. These mice show ASD-resembling social behaviour deficits. We find that in patDp/+ mice delay eyeblink conditioning--a form of cerebellum-dependent motor learning--is impaired, and observe deregulation of a putative cellular mechanism for motor learning, long-term depression (LTD) at parallel fibre-Purkinje cell synapses. Moreover, developmental elimination of surplus climbing fibres--a model for activity-dependent synaptic pruning--is impaired. These findings point to deficits in synaptic plasticity and pruning as potential causes for motor problems and abnormal circuit development in autism.

  17. Naturalness and lepton number/flavor violation in inverse seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University,1060, Nishikawatsu, Matsue, Shimane (Japan); Ishida, Hiroyuki [Graduate School of Science and Engineering, Shimane University,1060, Nishikawatsu, Matsue, Shimane (Japan); Physics Division, National Center for Theoretical Sciences,101, Section 2 Kuang Fu Road, Hsinchu, 300 Taiwan (China); Yamaguchi, Yuya [Graduate School of Science and Engineering, Shimane University,1060, Nishikawatsu, Matsue, Shimane (Japan); Department of Physics, Faculty of Science, Hokkaido University,Kita 9 Nishi 8, Kita-ku, Sapporo, Hokkaido (Japan)

    2016-11-02

    We introduce three right-handed neutrinos and three sterile neutrinos, and consider an inverse seesaw mechanism for neutrino mass generation. From the naturalness point of view, their Majorana masses should be small, while this induces a large neutrino Yukawa coupling. Then, the neutrinoless double beta decay rate can be enhanced, and a sizable Higgs mass correction is inevitable. We find that the enhancement can be more than a factor of ten compared with the standard prediction from the light neutrino contribution alone, and we derive an analytic form of the heavy neutrino contributions to the Higgs mass correction. In addition, we numerically analyze the model, and find that almost all of its parameter space can be complementarily searched by future experiments on neutrinoless double beta decay and μ→e conversion.

  18. Baryon number generation in a flipped SU(5) x U(1) model

    International Nuclear Information System (INIS)

    Campbell, B.; Hagelin, J.; Nanopoulos, D.V.; Olive, K.A.

    1988-01-01

    We consider the possibilities for generating a baryon asymmetry in the early universe in a flipped SU(5) x U(1) model inspired by the superstring. Depending on the temperature of the radiation background after inflation we can distinguish between two scenarios for baryogenesis: (1) After reheating the original SU(5) x U(1) symmetry is restored, or there was no inflation at all; (2) reheating after inflation is rather weak and SU(5) x U(1) is broken. In either case the asymmetry is generated by the out-of-equilibrium decays of a massive SU(3) x SU(2) x U(1) singlet field φ_m. In the flipped SU(5) x U(1) model, gauge symmetry breaking is triggered by strong coupling phenomena, and is in general accompanied by the production of entropy. We examine constraints on the reheating temperature and the strong coupling scale in each of the scenarios. (orig.)

  19. Evaluation of EOR Processes Using Network Models

    DEFF Research Database (Denmark)

    Winter, Anatol; Larsen, Jens Kjell; Krogsbøll, Anette

    1998-01-01

    The report consists of the following parts: 1) Studies of wetting properties of model fluids and fluid mixtures aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels), including estimation of their "petrophysical" properties (e.g. absolute permeability). 3) Mathematical modelling and computer studies of multiphase transport through pore space using mathematical network models. 4) Investigation of the link between pore-scale and macroscopic recovery mechanisms.

  20. The Use of AMET and Automated Scripts for Model Evaluation

    Science.gov (United States)

    The Atmospheric Model Evaluation Tool (AMET) is a suite of software designed to facilitate the analysis and evaluation of meteorological and air quality models. AMET matches the model output for particular locations to the corresponding observed values from one or more networks ...

  1. Modelling of diesel spray flame under engine-like conditions using an accelerated eulerian stochastic fields method: A convergence study of the number of stochastic fields

    DEFF Research Database (Denmark)

    Pang, Kar Mun; Jangi, Mehdi; Bai, X.-S.

    The use of transported Probability Density Function (PDF) methods allows a single model to compute the autoignition, premixed mode and diffusion flame of diesel combustion under engine-like conditions [1,2]. The Lagrangian particle based transported PDF models have been validated across a wide range of conditions and have generated similar results. The principal motivation for the Eulerian Stochastic Fields (ESF) method compared to Lagrangian particle based PDF is the relative ease of implementation of the former into Eulerian computational fluid dynamics (CFD) codes [5]. Several works have attempted to implement the ESF model for the simulation of diesel spray combustion under engine-like conditions. The current work aims to further evaluate the performance of the ESF model in this application, with an emphasis on examining the convergence of the number of stochastic fields, nsf. Five test conditions are considered, covering both conventional diesel combustion and low-temperature combustion.

  2. Socioeconophysics:. Opinion Dynamics for Number of Transactions and Price, a Trader Based Model

    Science.gov (United States)

    Tuncay, Çağlar

    Incorporating the effects of media, opinion leaders and other agents on the opinions of individuals in a market society, a trader-based model is developed and used to simulate price via supply and demand. The pronounced effects are considered with several weights, and some personal differences between traders are taken into account. The resulting time series and probability distribution function for price, involving a power law, come out similar to the real ones.

  3. Application of low Reynolds number k-{epsilon} turbulence models to the study of turbulent wall jets

    Energy Technology Data Exchange (ETDEWEB)

    Kechiche, Jamel; Mhiri, Hatem [Laboratoire de Mecanique des Fluides et Thermique, Ecole Nationale d' Ingenieurs de Monastir, route de Ouardanine, 5000, Monastir (Tunisia); Le Palec, Georges; Bournot, Philippe [Institut de Mecanique de Marseille, 60, rue Joliot-Curie, Technopole de Chateau-Gombert, 13453 cedex 13, Marseille (France)

    2004-02-01

    In this work, we use closure models called "low Reynolds number k-ε models", which are self-adapting and use different damping functions, in order to explore the computed behavior of turbulent plane two-dimensional wall jets. The jet may be either isothermal or subjected to various wall boundary conditions (a uniform temperature or a uniform heat flux) in the forced convection regime. A finite difference method, using a staggered grid, is employed to solve the coupled governing equations with the inlet and boundary conditions. The predictions of the various low Reynolds number k-ε models with standard or modified C_μ adopted in this work are presented and compared with measurements and numerical results found in the literature. (authors)
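
    As an example of the damping functions such models rely on, the sketch below implements the widely used Launder-Sharma form, which depends on the turbulence Reynolds number rather than a wall coordinate; it is shown for illustration and is not necessarily among the variants tested in this work.

    ```python
    import numpy as np

    def f_mu_launder_sharma(k, eps, nu):
        """Launder-Sharma damping of the eddy-viscosity coefficient C_mu."""
        re_t = k**2 / (nu * eps)                  # turbulence Reynolds number
        return np.exp(-3.4 / (1.0 + re_t / 50.0) ** 2)

    def eddy_viscosity(k, eps, nu, c_mu=0.09):
        return c_mu * f_mu_launder_sharma(k, eps, nu) * k**2 / eps

    print(eddy_viscosity(k=0.05, eps=0.8, nu=1.5e-5))   # illustrative values only
    ```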

  4. Evaluation of global climate models for Indian monsoon climatology

    International Nuclear Information System (INIS)

    Kodra, Evan; Ganguly, Auroop R; Ghosh, Subimal

    2012-01-01

    The viability of global climate models for forecasting the Indian monsoon is explored. Evaluation and intercomparison of model skills are employed to assess the reliability of individual models and to guide model selection strategies. Two dominant and unique patterns of Indian monsoon climatology are trends in maximum temperature and periodicity in total rainfall observed after 30 yr averaging over India. An examination of seven models and their ensembles reveals that no single model or model selection strategy outperforms the rest. The single-best model for the periodicity of Indian monsoon rainfall is the only model that captures a low-frequency natural climate oscillator thought to dictate the periodicity. The trend in maximum temperature, which most models are thought to handle relatively better, is best captured through a multimodel average compared to individual models. The results suggest a need to carefully evaluate individual models and model combinations, in addition to physical drivers where possible, for regional projections from global climate models. (letter)

  5. Issues in Value-at-Risk Modeling and Evaluation

    NARCIS (Netherlands)

    J. Daníelsson (Jón); C.G. de Vries (Casper); B.N. Jorgensen (Bjørn); P.F. Christoffersen (Peter); F.X. Diebold (Francis); T. Schuermann (Til); J.A. Lopez (Jose); B. Hirtle (Beverly)

    1998-01-01

    Discusses the issues in value-at-risk modeling and evaluation: the value of value at risk; horizon problems and extreme events in financial risk management; methods of evaluating value-at-risk estimates.

  6. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation.

    Science.gov (United States)

    Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco

    2017-10-01

    The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts as determined by the ISO method after only 3 hours of incubation. One linear model, for cases where the number of plaques detected after 3 hours was between 4 and 26 PFU, had a linear fit of (1.48 × Counts_3h + 1.97); the other, for values >26 PFU, had a fit of (1.18 × Counts_3h + 2.95). If the number of plaques detected after 3 hours was <4 PFU, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has a reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
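
    The two reported regression equations translate directly into a small decision helper; the branch below 4 PFU follows the re-incubation recommendation above.

    ```python
    def predicted_pfu(count_3h):
        """Predict the overnight plaque count from a 3-hour reading using the
        two linear fits reported above."""
        if count_3h < 4:
            return None                      # re-incubate for (18 +/- 3) hours
        if count_3h <= 26:
            return 1.48 * count_3h + 1.97
        return 1.18 * count_3h + 2.95

    for c in (2, 10, 40):
        print(c, predicted_pfu(c))
    ```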

  7. Autism spectrum disorder model mice: Focus on copy number variation and epigenetics.

    Science.gov (United States)

    Nakai, Nobuhiro; Otsuka, Susumu; Myung, Jihwan; Takumi, Toru

    2015-10-01

    Autism spectrum disorder (ASD) is attracting growing concern in socially developed countries. ASD is a neuropsychiatric disorder of genetic origin with a high prevalence of 1%-2%. Patients with ASD characteristically show impaired social skills. Today, many genetic studies have identified numerous susceptibility genes and genetic loci associated with ASD. Although some genetic factors can lead to abnormal brain function linked to ASD phenotypes, the pathogenic mechanism of ASD is still unclear. Here, we discuss a new mouse model for ASD as an advanced tool to understand the mechanism of ASD.

  8. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Czech Academy of Sciences Publication Activity Database

    Chlumecký, M.; Buchtele, Josef; Richta, K.

    2017-01-01

    Roč. 553, October (2017), s. 350-355 ISSN 0022-1694 Institutional support: RVO:67985874 Keywords : genetic algorithm * optimisation * rainfall-runoff modeling * random generator Subject RIV: DA - Hydrology ; Limnology OBOR OECD: Hydrology Impact factor: 3.483, year: 2016 https://ac.els-cdn.com/S0022169417305516/1-s2.0-S0022169417305516-main.pdf?_tid=fa1bad8a-bd6a-11e7-8567-00000aab0f27&acdnat=1509365462_a1335d3d997e9eab19e23b1eee977705

  9. Cerebellar Plasticity and Motor Learning Deficits in a Copy Number Variation Mouse Model of Autism

    Science.gov (United States)

    Piochon, Claire; Kloth, Alexander D; Grasselli, Giorgio; Titley, Heather K; Nakayama, Hisako; Hashimoto, Kouichi; Wan, Vivian; Simmons, Dana H; Eissa, Tahra; Nakatani, Jin; Cherskov, Adriana; Miyazaki, Taisuke; Watanabe, Masahiko; Takumi, Toru; Kano, Masanobu; Wang, Samuel S-H; Hansel, Christian

    2014-01-01

    A common feature of autism spectrum disorder (ASD) is the impairment of motor control and learning, occurring in a majority of children with autism, consistent with perturbation in cerebellar function. Here we report alterations in motor behavior and cerebellar synaptic plasticity in a mouse model (patDp/+) for the human 15q11-13 duplication, one of the most frequently observed genetic aberrations in autism. These mice show ASD-resembling social behavior deficits. We find that in patDp/+ mice delay eyeblink conditioning—a form of cerebellum-dependent motor learning—is impaired, and observe deregulation of a putative cellular mechanism for motor learning, long-term depression (LTD) at parallel fiber-Purkinje cell synapses. Moreover, developmental elimination of surplus climbing fibers—a model for activity-dependent synaptic pruning—is impaired. These findings point to deficits in synaptic plasticity and pruning as potential causes for motor problems and abnormal circuit development in autism. PMID:25418414

  10. Models of economic geography: dynamics, estimation and policy evaluation

    OpenAIRE

    Knaap, Thijs

    2004-01-01

    In this thesis we look at economic geography models from a number of angles. We started by placing the theory in a context of preceding theories, both earlier work on spatial economics and other children of the monopolistic competition 'revolution.' Next, we looked at the theoretical properties of these models, especially when we allow firms to have different demand functions for intermediate goods. We estimated the model using a dataset on US states, and computed a number of counterfactuals.

  11. Low-Reynolds number k-ε turbulence model for calculation of fast-reactor-channel flows

    International Nuclear Information System (INIS)

    Mikhin, V.I.

    2000-01-01

    For calculating turbulent flows in the complex-geometry channels typical of nuclear reactor installation elements, a low-Reynolds-number k-ε turbulence model is proposed whose model functions do not contain a wall-distance coordinate such as y+. Such a coordinate is usually used to model the near-wall turbulence correctly. The model, tested on developed flow of an incompressible fluid in a plane channel, correctly describes the transition from the laminar regime to the turbulent one. The calculated skin friction coefficients obey the well-known Dean and Zarbi-Reynolds laws. The mean velocity distributions are close to those obtained from the empirical three-layer Karman model. (author)

  12. MARKET EVALUATION MODEL: TOOL FORBUSINESS DECISIONS

    OpenAIRE

    Porlles Loarte, José; Yenque Dedios, Julio; Lavado Soto, Aurelio

    2014-01-01

    In the present work, the concepts of potential market and global market are analyzed as the basis for long-term strategic market decisions when the establishment of a business in a given geographic area is evaluated. Within this conceptual frame, a methodological tool is proposed for evaluating a commercial decision, taking as reference the case of the brewing industry in Peru, considering that this industry faces entrepreneurial reorderings in the region...

  13. A Regional Climate Model Evaluation System

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop a packaged data management infrastructure for the comparison of generated climate model output to existing observational datasets that includes capabilities...

  14. Repeated rat-forced swim test: reducing the number of animals to evaluate gradual effects of antidepressants.

    Science.gov (United States)

    Mezadri, T J; Batista, G M; Portes, A C; Marino-Neto, J; Lino-de-Oliveira, C

    2011-02-15

    The forced swim test (FST) is a preclinical test of short- and long-term treatment with antidepressant drugs (ADT) that requires between-subject designs. Herein, a modified protocol of the FST using a within-subject design (repeated rat-FST) was evaluated. Male Wistar rats were submitted to 15 min of swimming (Day 1: pretest) followed by three subsequent 5-min swimming tests one week apart (Day 2: test, Day 7: retest 1, Day 14: retest 2). To determine the temporal and factorial characteristics of the variables scored in the repeated rat-FST, the protocol was carried out in untreated animals (E1). To validate the method, daily injections of fluoxetine (FLX, 2.5 mg/kg, i.p.) or saline were given over a 2-week period (E2). Tests and retests were videotaped for subsequent scoring of the latency, frequency and duration of behaviors. Over retesting, the latency to immobility decreased, whereas the duration of immobility tended to increase. Factorial analysis revealed that the test, retest 1 and retest 2 all have variables suitable for the detection of antidepressant-like effects of ADT. Compared to saline, chronically administered FLX reduced the duration of immobility and increased the duration of swimming in retest 2. The data suggest that the repeated rat-FST detected the gradual increase in the efficacy of low doses of FLX over time. Therefore, the repeated rat-FST seems suitable for detecting short- and long-term effects of selective serotonin reuptake inhibitors, or other ADT, thus reducing the number of animals used in screenings of this type of compound. © 2010 Elsevier B.V. All rights reserved.

  15. Increased numbers of orexin/hypocretin neurons in a genetic rat depression model

    DEFF Research Database (Denmark)

    Mikrouli, Elli; Wörtwein, Gitta; Soylu, Rana

    2011-01-01

    The Flinders Sensitive Line (FSL) rat is a genetic animal model of depression that displays characteristics similar to those of depressed patients, including lower body weight, decreased appetite and reduced REM sleep latency. Hypothalamic neuropeptides such as orexin/hypocretin, melanin-concentrating hormone (MCH) and cocaine- and amphetamine-regulated transcript (CART), which are involved in the regulation of both energy metabolism and sleep, have recently been implicated also in depression. We therefore hypothesized that alterations in these neuropeptide systems may play a role in the development of the FSL phenotype, with its depressive-like behavior, metabolic abnormalities and sleep disturbances. In this study, we first confirmed that the FSL rats displayed increased immobility in the Porsolt forced swim test compared to their control strain, the Flinders Resistant Line (FRL), which is indicative of depression-like behavior.

  16. QUALITY OF AN ACADEMIC STUDY PROGRAMME - EVALUATION MODEL

    Directory of Open Access Journals (Sweden)

    Mirna Macur

    2016-01-01

    Full Text Available The quality of an academic study programme is evaluated by many: by employees (internal evaluation) and by external evaluators: experts, agencies and organisations. Internal and external evaluation of an academic programme follow a written structure that resembles one of the quality models. We believe the quality models (mostly derived from the EFQM excellence model) do not fit very well into non-profit activities, policies and programmes, because these are much more complex than the environment from which the quality models derive (for example, an assembly line). The quality of an academic study programme is very complex and understood differently by various stakeholders, so we present dimensional evaluation in this article. Dimensional evaluation, as opposed to component and holistic evaluation, is a form of analytical evaluation in which the quality or value of the evaluand is determined by looking at its performance on multiple dimensions of merit or evaluation criteria. First, the stakeholders of a study programme and their views, expectations and interests are presented, followed by the evaluation criteria. Both are then joined into the evaluation model, revealing which evaluation criteria can and should be evaluated by which stakeholder. The main research questions are posed, and the research method for each dimension is listed.

  17. An Integrated Decision-Making Model for Transformer Condition Assessment Using Game Theory and Modified Evidence Combination Extended by D Numbers

    Directory of Open Access Journals (Sweden)

    Lingjie Sun

    2016-08-01

    Full Text Available The power transformer is one of the most critical and expensive components for the stable operation of the power system. Hence, how to obtain the health condition of transformers is of great importance for power utilities. Multi-attribute decision-making (MADM), due to its ability to solve multi-source information problems, has become a quite effective tool for evaluating the health condition of transformers. Currently, the analytic hierarchy process (AHP) and Dempster–Shafer theory are two popular methods for solving MADM problems; however, these techniques rarely consider the one-sidedness of a single weighting method or the exclusiveness hypothesis of Dempster–Shafer theory. To overcome these limitations, this paper introduces a novel decision-making model, which integrates the merits of fuzzy set theory, game theory and a modified evidence combination extended by D numbers, to evaluate the health condition of transformers. A four-level framework, which includes three factors and seventeen sub-factors, is put forward to facilitate the evaluation model. The model proceeds as follows: First, fuzzy set theory is employed to obtain the original basic probability assignments for all indices. Second, the subjective and objective weights of the indices, calculated by fuzzy AHP and entropy weighting respectively, are integrated to generate comprehensive weights based on game theory. Finally, based on the above two steps, the modified evidence combination extended by D numbers, which avoids the limitation of the exclusiveness hypothesis in the application of Dempster–Shafer theory, is proposed to obtain the final assessment results for the transformers. Case studies are given to demonstrate the proposed modelling process. The results show the effectiveness and engineering practicability of the model in transformer condition assessment.
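
    One common formulation of the game-theory weighting step is sketched below: the combination coefficients solve a small linear system obtained by minimising the distance between the combined weight vector and each basic weighting. The subjective and objective weight vectors are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Hypothetical subjective (fuzzy-AHP) and objective (entropy) index weights.
    w_subj = np.array([0.40, 0.35, 0.25])
    w_obj = np.array([0.30, 0.30, 0.40])
    W = np.vstack([w_subj, w_obj])

    # First-order conditions of min ||sum_k a_k w_k - w_i||^2 for each i:
    # (W W^T) a = diag(W W^T); normalise a afterwards.
    G = W @ W.T
    a = np.linalg.solve(G, np.diag(G))
    a = a / a.sum()

    w_comb = a @ W
    print(w_comb / w_comb.sum())   # comprehensive index weights
    ```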

  18. Time-dependent occupation numbers in reduced-density-matrix-functional theory: Application to an interacting Landau-Zener model

    International Nuclear Information System (INIS)

    Requist, Ryan; Pankratov, Oleg

    2011-01-01

    We prove that if the two-body terms in the equation of motion for the one-body reduced density matrix are approximated by ground-state functionals, the eigenvalues of the one-body reduced density matrix (occupation numbers) remain constant in time. This deficiency is related to the inability of such an approximation to account for relative phases in the two-body reduced density matrix. We derive an exact differential equation giving the functional dependence of these phases in an interacting Landau-Zener model and study their behavior in short- and long-time regimes. The phases undergo resonances whenever the occupation numbers approach the boundaries of the interval [0,1]. In the long-time regime, the occupation numbers display correlation-induced oscillations and the memory dependence of the functionals assumes a simple form.

  19. The percentage of macrophage numbers in rat model of sciatic nerve crush injury

    Directory of Open Access Journals (Sweden)

    Satrio Wicaksono

    2016-02-01

    Full Text Available Excessive accumulation of macrophages in sciatic nerve fascicles inhibits the regeneration of peripheral nerves. The aim of this study was to determine the percentage of macrophages inside and outside the fascicles at the proximal segment, at the site of injury, and at the distal segment in a rat model of sciatic nerve crush injury. Thirty male Wistar rats (3 months old, 200-230 g) were divided into a sham-operation group and a crush injury group. Termination was performed on days 3, 7, and 14 after crush injury. Immunohistochemical examination was done using anti-CD68 antibody. Immunopositive and immunonegative cells were counted in three representative fields in the extrafascicular and intrafascicular areas of the proximal, injury and distal segments. The data were presented as the percentage of immunopositive cells. The percentage of macrophages was significantly increased in the crush injury group compared to the sham-operated group in all segments of the peripheral nerve. While the percentage of macrophages outside the fascicles in all segments of the sciatic nerve, and within the fascicles in the proximal segment, reached its peak on day 3, the percentage of macrophages within the fascicles at the site of injury and in the distal segment reached its peak later, on day 7. In conclusion, accumulation of macrophages outside the nerve fascicles occurs at the beginning of the injury and is followed later by the accumulation of macrophages within the nerve fascicles.

  20. Biomechanical modelling and evaluation of construction jobs for performance improvement.

    Science.gov (United States)

    Parida, Ratri; Ray, Pradip Kumar

    2012-01-01

    Occupational risk factors, such as awkward posture, repetition, lack of rest, insufficient illumination and heavy workload, related to construction manual material handling (MMH) activities may cause musculoskeletal disorders and poor performance of the workers. Ergonomic design of construction worksystems was therefore a critical need for improving their health and safety, wherein dynamic biomechanical models were required to be empirically developed and tested at a construction site of Tata Steel, the largest private-sector steel maker in India. In this study, a comprehensive framework is proposed for the biomechanical evaluation of shovelling and grinding under diverse work environments. The benefit of such an analysis lies in its usefulness in setting guidelines for designing such jobs to minimize the risk of musculoskeletal disorders (MSDs) and in promoting correct methods of carrying out the jobs, leading to reduced fatigue and physical stress. Data based on direct observations and videography were collected for the shovellers and grinders over a number of work cycles. Compressive forces and moments for a number of segments and joints were computed with respect to joint flexion and extension. The results indicate that moments and compressive forces at the L5/S1 link are significant for shovellers, while moments at the elbow and wrist are significant for grinders.
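
    As a highly simplified static counterpart to the dynamic models used in the study, the sketch below computes the moment about a joint such as L5/S1 from segment weights, their horizontal distances and a handled load; all anthropometric numbers are invented.

    ```python
    G = 9.81  # m/s^2

    def static_moment(segments, load_mass, load_dist):
        """Static moment about a joint: segment-weight moments plus the moment
        of the handled load. segments: (mass_kg, horizontal_distance_m) pairs."""
        return sum(m * G * d for m, d in segments) + load_mass * G * load_dist

    # Hypothetical upper-body anthropometry for a stooped shovelling posture.
    upper_body = [(28.0, 0.18), (8.0, 0.35), (4.5, 0.45)]  # trunk, arms, head
    print(static_moment(upper_body, load_mass=6.0, load_dist=0.55), "N.m")
    ```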

  1. The design and implementation of an operational model evaluation system. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K.T.

    1995-06-01

    The complete evaluation of an atmospheric transport and diffusion model typically includes a study of the model's operational performance. Such a study very often attempts to compare the model's calculations of an atmospheric pollutant's temporal and spatial distribution with field experiment measurements. However, these comparisons tend to use data from a small number of experiments and are very often limited to producing the commonly quoted statistics based on the differences between model calculations and the experimental measurements (fractional bias, fractional scatter, etc.). This paper presents initial efforts to develop a model evaluation system geared for both the objective statistical analysis and the subjective visualization of the interrelationships between a model's calculations and the appropriate field measurement data.

  2. Evaluating Energy Efficiency Policies with Energy-Economy Models

    Energy Technology Data Exchange (ETDEWEB)

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  3. Phase relations in a forced turbulent boundary layer: implications for modelling of high Reynolds number wall turbulence.

    Science.gov (United States)

    Duvvuri, Subrahmanyam; McKeon, Beverley

    2017-03-13

    Phase relations between specific scales in a turbulent boundary layer are studied here by highlighting the associated nonlinear scale interactions in the flow. This is achieved through an experimental technique that allows for targeted forcing of the flow through the use of a dynamic wall perturbation. Two distinct large-scale modes with well-defined spatial and temporal wavenumbers were simultaneously forced in the boundary layer, and the resulting nonlinear response from their direct interactions was isolated from the turbulence signal for the study. This approach advances the traditional studies of large- and small-scale interactions in wall turbulence by focusing on the direct interactions between scales with triadic wavenumber consistency. The results are discussed in the context of modelling high Reynolds number wall turbulence.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  4. Influence of Discussion Rating in Cooperative Learning Type Numbered Head Together on Learning Results Students VII MTSN Model Padang

    Science.gov (United States)

    Sasmita, E.; Edriati, S.; Yunita, A.

    2018-04-01

    The first-semester mathematics scores of class VII students at MTsN Model Padang were low, with many scores below the minimum mastery criterion (KKM). One reason is that students feel insufficiently involved in the learning process because the teacher does not assess the discussions. The proposed solution is discussion assessment in the Numbered Head Together (NHT) type of cooperative learning. This study aims to determine whether discussion assessment in NHT affects the learning outcomes of class VII students of MTsN Model Padang. The instruments used in this study were the discussion assessment and final tests. The data analysis technique used was simple linear regression analysis. The hypothesis test results show Fcount greater than Ftable, so the hypothesis in this study is accepted. It is concluded that discussion assessment in NHT affects the learning outcomes of class VII students of MTsN Model Padang.
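
    The hypothesis test described above reduces to comparing the regression F statistic (Fcount) with the tabulated critical value (Ftable); a minimal sketch with made-up scores follows.

    ```python
    import numpy as np
    from scipy import stats

    def simple_regression_f(x, y):
        """F statistic for H0: slope = 0 in simple linear regression."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = x.size
        b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
        b0 = y.mean() - b1 * x.mean()
        fitted = b0 + b1 * x
        ss_reg = np.sum((fitted - y.mean()) ** 2)
        ss_err = np.sum((y - fitted) ** 2)
        return ss_reg / (ss_err / (n - 2)), stats.f.ppf(0.95, 1, n - 2)

    discussion = [70, 75, 80, 65, 85, 90, 60, 78]   # made-up assessment scores
    final_test = [68, 74, 82, 66, 88, 91, 63, 80]   # made-up outcomes
    f_count, f_table = simple_regression_f(discussion, final_test)
    print(f_count > f_table)    # True -> the hypothesis is accepted
    ```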

  5. Bayesian Nonparametric Hidden Markov Models with application to the analysis of copy-number-variation in mammalian genomes.

    Science.gov (United States)

    Yau, C; Papaspiliopoulos, O; Roberts, G O; Holmes, C

    2011-01-01

    We consider the development of Bayesian Nonparametric methods for product partition models such as Hidden Markov Models and change point models. Our approach uses a Mixture of Dirichlet Process (MDP) model for the unknown sampling distribution (likelihood) for the observations arising in each state and a computationally efficient data augmentation scheme to aid inference. The method uses novel MCMC methodology which combines recent retrospective sampling methods with the use of slice sampler variables. The methodology is computationally efficient, both in terms of MCMC mixing properties, and robustness to the length of the time series being investigated. Moreover, the method is easy to implement requiring little or no user-interaction. We apply our methodology to the analysis of genomic copy number variation.
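
    The emission prior underlying an MDP model can be sketched with a truncated stick-breaking construction; the paper's actual retrospective/slice sampler is more involved, and the truncation level here is purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def stick_breaking(alpha, n_atoms):
        """Truncated stick-breaking weights of a Dirichlet process."""
        v = rng.beta(1.0, alpha, size=n_atoms)
        w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
        return w / w.sum()               # renormalise the truncation

    # One hidden state's emission density: a DP mixture of Gaussians.
    weights = stick_breaking(alpha=2.0, n_atoms=25)
    means = rng.normal(0.0, 2.0, size=25)      # atoms from the base measure

    atom = rng.choice(25, p=weights)           # draw one observation
    print(rng.normal(means[atom], 0.3))
    ```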

  6. The path-integral analysis of an associative memory model storing an infinite number of finite limit cycles

    International Nuclear Information System (INIS)

    Mimura, Kazushi; Kawamura, Masaki; Okada, Masato

    2004-01-01

    An exact solution of the transient dynamics of an associative memory model storing an infinite number of limit cycles with l finite steps is shown by means of the path-integral analysis. Assuming the Maxwell construction ansatz, we have succeeded in deriving the stationary state equations of the order parameters from the macroscopic recursive equations with respect to the finite-step sequence processing model which has retarded self-interactions. We have also derived the stationary state equations by means of the signal-to-noise analysis (SCSNA). The signal-to-noise analysis must assume that crosstalk noise of an input to spins obeys a Gaussian distribution. On the other hand, the path-integral method does not require such a Gaussian approximation of crosstalk noise. We have found that both the signal-to-noise analysis and the path-integral analysis give completely the same result with respect to the stationary state in the case where the dynamics is deterministic, when we assume the Maxwell construction ansatz. We have shown the dependence of the storage capacity (α_c) on the number of patterns per limit cycle (l). At l = 1, the storage capacity is α_c = 0.138, as in the Hopfield model. The storage capacity monotonically increases with the number of steps, and converges to α_c = 0.269 at l ≅ 10. The original properties of the finite-step sequence processing model appear as long as the number of steps of the limit cycle is of order l = O(1).

  7. evaluation of models for assessing groundwater vulnerability

    African Journals Online (AJOL)

    DR. AMINU

    applied models for groundwater vulnerability assessment mapping. The approaches ... The overall 'pollution potential' or DRASTIC index is established by applying the formula: DRASTIC Index = DrDw + RrRw + ArAw + SrSw + TrTw + IrIw + CrCw, i.e. the sum of rating times weight for each of the seven hydrogeological parameters ... affected by the structure of the soil surface.
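
    The DRASTIC index is a plain weighted sum, as the formula shows. The sketch below uses the standard DRASTIC parameter weights; the site ratings are invented for illustration.

    ```python
    # Standard DRASTIC weights for the seven hydrogeological parameters.
    weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}
    ratings = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 9, "I": 4, "C": 2}  # site-specific

    drastic_index = sum(weights[p] * ratings[p] for p in weights)
    print(drastic_index)   # higher values indicate greater vulnerability
    ```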

  8. Models for Evaluating and Improving Architecture Competence

    National Research Council Canada - National Science Library

    Bass, Len; Clements, Paul; Kazman, Rick; Klein, Mark

    2008-01-01

    ... producing high-quality architectures. This report lays out the basic concepts of software architecture competence and describes four models for explaining, measuring, and improving the architecture competence of an individual...

  9. The CREATIVE Decontamination Performance Evaluation Model

    National Research Council Canada - National Science Library

    Shelly, Erin E

    2008-01-01

    The project objective is to develop a semi-empirical, deterministic model to characterize and predict laboratory-scale decontaminant efficacy and hazards for a range of: chemical agents (current focus on HD...

  10. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  11. Inclusive integral evaluation for mammograms using the hierarchical fuzzy integral (HFI) model

    International Nuclear Information System (INIS)

    Amano, Takashi; Yamashita, Kazuya; Arao, Shinichi; Kitayama, Akira; Hayashi, Akiko; Suemori, Shinji; Ohkura, Yasuhiko

    2000-01-01

    Physical factors (physically evaluated values) and psychological factors (fuzzy measures) of breast x-ray images were comprehensively evaluated by applying the breast x-ray images to an extended stratum-type fuzzy integrating model. In addition, the x-ray images were evaluated collectively by integrating the quality (sharpness, graininess, and contrast) of the x-ray images and three representative shadows (fibrosis, calcification, tumor) in the breast x-ray images. Using this method, we selected the most appropriate system for radiography of the breast from three kinds of intensifying screen-film systems, and investigated the relationship between the breast x-ray images and the noise-equivalent quantum number (the overall physical evaluation method), as well as between the breast x-ray images and psychological evaluation by a visual system with the stratum-type fuzzy integrating model. We obtained a linear relationship between the breast x-ray image and the noise-equivalent quantum number, and linearity between the breast x-ray image and the psychological evaluation by the visual system. Therefore, the determination of fuzzy measures, which provide a scale for fuzzy evaluation of the psychological factors of the observer, together with physically evaluated values in a stratum-type fuzzy integrating model, enabled us to make a comprehensive evaluation of x-ray images that includes both psychological and physical aspects. (author)
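
    The building block of such models can be illustrated with a single-level Sugeno fuzzy integral over two image-quality criteria; the paper's stratum-type (hierarchical) model composes such integrals across levels, and the fuzzy measure values below are invented.

    ```python
    import numpy as np

    def sugeno_integral(scores, measure):
        """Sugeno integral: max over i of min(h(x_(i)), mu(A_(i))), with the
        scores sorted in decreasing order and A_(i) the top-i criteria set."""
        order = np.argsort(scores)[::-1]
        best = 0.0
        for i in range(len(scores)):
            subset = frozenset(int(k) for k in order[: i + 1])
            best = max(best, min(scores[order[i]], measure[subset]))
        return best

    scores = np.array([0.8, 0.5])            # e.g. sharpness, graininess
    measure = {frozenset([0]): 0.6, frozenset([1]): 0.5, frozenset([0, 1]): 1.0}
    print(sugeno_integral(scores, measure))  # 0.6
    ```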

  12. MIRAGE: Model description and evaluation of aerosols and trace gases

    Science.gov (United States)

    Easter, Richard C.; Ghan, Steven J.; Zhang, Yang; Saylor, Rick D.; Chapman, Elaine G.; Laulainen, Nels S.; Abdul-Razzak, Hayder; Leung, L. Ruby; Bian, Xindi; Zaveri, Rahul A.

    2004-10-01

    The Model for Integrated Research on Atmospheric Global Exchanges (MIRAGE) modeling system, designed to study the impacts of anthropogenic aerosols on the global environment, is described. MIRAGE consists of a chemical transport model coupled online with a global climate model. The chemical transport model simulates trace gases, aerosol number, and aerosol chemical component mass (sulfate, methane sulfonic acid (MSA), organic matter, black carbon (BC), sea salt, and mineral dust) for four aerosol modes (Aitken, accumulation, coarse sea salt, and coarse mineral dust) using the modal aerosol dynamics approach. Cloud-phase and interstitial aerosol are predicted separately. The climate model, based on Community Climate Model, Version 2 (CCM2), has physically based treatments of aerosol direct and indirect forcing. Stratiform cloud water and droplet number are simulated using a bulk microphysics parameterization that includes aerosol activation. Aerosol and trace gas species simulated by MIRAGE are presented and evaluated using surface and aircraft measurements. Surface-level SO2 in North American and European source regions is higher than observed. SO2 above the boundary layer is in better agreement with observations, and surface-level SO2 at marine locations is somewhat lower than observed. Comparison with other models suggests insufficient SO2 dry deposition; increasing the deposition velocity improves simulated SO2. Surface-level sulfate in North American and European source regions is in good agreement with observations, although the seasonal cycle in Europe is stronger than observed. Surface-level sulfate at high-latitude and marine locations, and sulfate above the boundary layer, are higher than observed. This is attributed primarily to insufficient wet removal; increasing the wet removal improves simulated sulfate at remote locations and aloft. Because of the high sulfate bias, radiative forcing estimates for anthropogenic sulfur given in 2001 by S. J. Ghan and

  13. Evaluation model development for sprinkler irrigation uniformity ...

    African Journals Online (AJOL)


  14. Evaluation model development for sprinkler irrigation uniformity ...

    African Journals Online (AJOL)

    A new evaluation method with accompanying software was developed to precisely calculate uniformity from catch-can test data, assuming sprinkler distribution data to be a continuous variable. Two interpolation steps are required to compute unknown water application depths at grid distribution points from radial ...

  15. Generalization techniques to reduce the number of volume elements for terrain effect calculations in fully analytical gravitational modelling

    Science.gov (United States)

    Benedek, Judit; Papp, Gábor; Kalmár, János

    2018-04-01

    Beyond the rectangular prism, the polyhedron can also be used as a discrete volume element to model the density distribution inside 3D geological structures. Evaluating the closed formulae given for its gravitational potential and higher-order derivatives, however, requires roughly twice the runtime of the corresponding rectangular-prism computations. Although the principle "the more detailed, the better" is generally accepted, it strictly holds only for error-free data. As soon as errors are present, any forward gravitational calculation from the model is only one possible realization of the true force field at the significance level determined by those errors. So if the reliability of the input data is really taken into account, "less" can sometimes be equivalent to "more" in the statistical sense. As a consequence, the processing time of the related complex formulae can be reduced significantly by optimizing the number of volume elements based on accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides a single optimized model for all points. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts with the maximum-resolution polyhedral model defined by three points per grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it also works for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are presented too. The efficiency of the static approaches may provide even more than 90% reduction in computation time in favourable

  16. Prospective study to evaluate the number and the location of biopsies in rapid urease test for diagnosis of Helicobacter Pylori

    Directory of Open Access Journals (Sweden)

    Antoine Abou Rached

    2017-11-01

    Helicobacter pylori (H. pylori) can cause a wide variety of illnesses such as peptic ulcer disease, gastric adenocarcinoma and mucosa-associated lymphoid tissue (MALT) lymphoma. The diagnosis and eradication of H. pylori are crucial. Diagnosis is usually based on the rapid urease test (RUT) and gastric antral biopsy for histology. The aim of this study is to evaluate the number of biopsies needed, and their location (antrum/fundus), to obtain the optimal result for the diagnosis of H. pylori. Three hundred fifty consecutive patients were recruited; 210 fulfilled the inclusion criteria and had nine gastric biopsies for the detection of H. pylori infection: two antral for the first RUT (RUT1), one antral and one fundic for the second (RUT2), one antral for the third (RUT3), and two antral plus two fundic for histology (HES, Giemsa, PAS). The three RUTs were read at 1 hour, 3 hours and 24 hours, and the biopsies were read by two experienced pathologists blinded to the RUT results. A patient was considered H. pylori positive if the bacterium was found on histology of at least one biopsy. RUT1 at 1 h, 3 h and 24 h had a sensitivity of 72%, 82% and 89% and a specificity of 100%, 99% and 87%, respectively. The positive predictive value (PPV) was 100%, 99% and 85%, respectively, and the negative predictive value (NPV) 81%, 87% and 90%. RUT2 at 1 h, 3 h and 24 h had a sensitivity of 86%, 87% and 91% and a specificity of 99%, 97% and 90%, respectively. The PPV was 99%, 96% and 88% and the NPV 89%, 90% and 94%. RUT3 at 1 h, 3 h and 24 h had a sensitivity of 70%, 74% and 84% and a specificity of 99%, 99% and 94%, respectively. The PPV was 99%, 99% and 92% and the NPV 79%, 81% and 87%. The best sensitivity and specificity were obtained for RUT1 read at 3 h, for RUT2 read at 1 h and 3 h, and for RUT3 read at 24 h. This study demonstrates that the best sensitivity and specificity of the rapid urease test is obtained when fundic plus antral biopsy
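    The accuracy figures quoted above follow from the standard 2 × 2 diagnostic-accuracy definitions against the histology gold standard. A minimal sketch (the counts are hypothetical, chosen only to show the computation, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy measures."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for one RUT reading versus histology
sens, spec, ppv, npv = diagnostic_metrics(tp=82, fp=1, fn=18, tn=109)
print(f"sens={sens:.2f} spec={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```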

  17. Numerically Simulated Impact of Gas Prandtl Number and Flow Model on Efficiency of the Machine-less Energetic Separation Device

    Directory of Open Access Journals (Sweden)

    K. S. Egorov

    2015-01-01

    The presented paper examines, using numerical modeling in ANSYS software, the influence of one of the similarity criteria - the gas Prandtl number (Pr) - on the efficiency of the machine-less energetic separation device (Leontiev pipe). Like the Ranque-Hilsch and Hartmann-Sprenger pipes, this device is designed to separate one gas flow into two flows with different temperatures: one flow (supersonic) leaves the pipe with a temperature higher than the initial one, and the other (subsonic) leaves with a temperature lower than the initial one. This direction of energetic separation holds when the Prandtl number is less than 1, which is the case for gases. The Prandtl number affects the efficiency of the Leontiev pipe indirectly, both through the temperature difference, on which the temperature recovery factor has an impact, and through the thermal conductivity coefficient, which governs the intensity of heat transfer between the gas and the solid wall. The Prandtl number ranged from 0.1 to 0.7 in this study. A value of 0.7 corresponds to air or pure gases (for example, the inert gas argon), while 0.2 corresponds to mixtures of inert gases such as helium-xenon. Numerical modeling of the supersonic flow at Mach number 2.0 shows that the efficiency of the machine-less energetic separation device increases approximately twofold as the Prandtl number decreases from 0.7 to 0.2; for the counter-flow scheme this effect is slightly larger, owing to its greater heat-exchange efficiency compared with the straight-flow scheme. The research also shows that the main obstacle to a further increase in the efficiency of the Leontiev pipe is the small value of the thermal conductivity coefficient, which calls for intensification of the heat exchange, especially in the supersonic flow. This can be achieved, for example, by using a system of oblique shock waves in the supersonic channel.

  18. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

    Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...

  19. A qualitative evaluation approach for energy system modelling frameworks

    DEFF Research Database (Denmark)

    Wiese, Frauke; Hilpert, Simon; Kaldemeyer, Cord

    2018-01-01

    properties define how useful it is in regard to the existing challenges. For energy system models, evaluation methods exist, but we argue that many decisions upon properties are rather made on the model generator or framework level. Thus, this paper presents a qualitative approach to evaluate frameworks...

  20. Simulation of electric power conservation strategies: model of economic evaluation

    International Nuclear Information System (INIS)

    Pinhel, A.C.C.

    1992-01-01

    A methodology is presented for the economic evaluation model for energy conservation programs to be executed by the National Program of Electric Power Conservation. From data such as forecasts of conserved energy, tariffs, energy costs, and budget, the model calculates economic indexes for the programs, allowing the evaluation of economic impacts on the electric sector. (C.G.C.)

  1. The Use of AMET & Automated Scripts for Model Evaluation

    Science.gov (United States)

    Brief overview of EPA's new CMAQ website to be launched publicly in June 2017. Details on the upcoming release of the Atmospheric Model Evaluation Tool (AMET) and the creation of automated scripts for post-processing and evaluating air quality model data.

  2. Modelling in Evaluating a Working Life Project in Higher Education

    Science.gov (United States)

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between higher education, a care home and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  3. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized poisson regression model.

    Science.gov (United States)

    Islam, Mohammad Mafijul; Alam, Morshed; Tariquzaman, Md; Kabir, Mohammad Alamgir; Pervin, Rokhsona; Begum, Munni; Khan, Md Mobarak Hossain

    2013-01-08

    Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. To our knowledge, most of the available studies that address malnutrition among under-five children consider categorical (dichotomous/polychotomous) outcome variables and apply logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition (outcome) variable is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find predictors of this outcome variable. The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007; briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for this outcome variable because of its under-dispersion (variance less than the mean). Significant predictors include mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. The consistency of our findings with many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on the significant predictors may improve the nutritional status of children in Bangladesh.
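    For readers who want to reproduce this kind of analysis, statsmodels ships a generalized Poisson regression estimator. The sketch below fits a GP-1 model to simulated counts; the covariates and coefficients are invented stand-ins for the BDHS predictors, not the study's data:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import GeneralizedPoisson

rng = np.random.default_rng(0)
n = 500
# Hypothetical covariates standing in for mother's education, wealth index, ...
X = sm.add_constant(rng.normal(size=(n, 2)))
mu = np.exp(X @ np.array([0.2, -0.4, 0.3]))
y = rng.poisson(mu)   # toy counts of malnourished children per family

model = GeneralizedPoisson(y, X, p=1)   # GP-1: variance = mu * (1 + alpha)^2
res = model.fit(disp=False)
print(res.summary())                    # fitted alpha < 0 indicates under-dispersion
```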

  4. Evaluation of biological models using Spacelab

    Science.gov (United States)

    Tollinger, D.; Williams, B. A.

    1980-01-01

    Biological models of hypogravity effects are described, including the cardiovascular-fluid shift, musculoskeletal, embryological and space sickness models. These models predict such effects as loss of extracellular fluid and electrolytes, decrease in red blood cell mass, and the loss of muscle and bone mass in weight-bearing portions of the body. Experimentation in Spacelab by the use of implanted electromagnetic flow probes, by fertilizing frog eggs in hypogravity and fixing the eggs at various stages of early development, and by assessing the role of the vestibulo-ocular reflex arc in space sickness is suggested. It is concluded that the use of small animals eliminates the uncertainties caused by corrective or preventive measures employed with human subjects.

  5. Shock circle model for ejector performance evaluation

    International Nuclear Information System (INIS)

    Zhu, Yinhai; Cai, Wenjian; Wen, Changyun; Li, Yanzhong

    2007-01-01

    In this paper, a novel shock circle model for the prediction of ejector performance at critical-mode operation is proposed. By introducing the 'shock circle' at the entrance of the constant-area chamber, a 2D exponential expression for the velocity distribution is adopted to approximate the viscous flow near the ejector inner wall. The advantage of the 'shock circle' analysis is that the calculation of ejector performance is independent of the flows in the constant-area chamber and diffuser. Consequently, the calculation is even simpler than many 1D modeling methods and can predict the performance of critical-mode ejectors much more accurately. The effectiveness of the method is validated against two experimental results reported earlier. The proposed modeling method, using two coefficients, is shown to produce the entrainment ratio, efficiency and coefficient of performance (COP) accurately and much closer to experimental results than those of 1D analysis methods

  6. Empirical models of Jupiter's interior from Juno data. Moment of inertia and tidal Love number k2

    Science.gov (United States)

    Ni, Dongdong

    2018-05-01

    Context. The Juno spacecraft has significantly improved the accuracy of gravitational harmonic coefficients J4, J6 and J8 during its first two perijoves. However, there are still differences in the interior model predictions of core mass and envelope metallicity because of the uncertainties in the hydrogen-helium equations of state. New theoretical approaches or observational data are hence required in order to further constrain the interior models of Jupiter. A well constrained interior model of Jupiter is helpful for understanding not only the dynamic flows in the interior, but also the formation history of giant planets. Aims: We present the radial density profiles of Jupiter fitted to the Juno gravity field observations. Also, we aim to investigate our ability to constrain the core properties of Jupiter using its moment of inertia and tidal Love number k2 which could be accessible by the Juno spacecraft. Methods: In this work, the radial density profile was constrained by the Juno gravity field data within the empirical two-layer model in which the equations of state are not needed as an input model parameter. Different two-layer models are constructed in terms of core properties. The dependence of the calculated moment of inertia and tidal Love number k2 on the core properties was investigated in order to discern their abilities to further constrain the internal structure of Jupiter. Results: The calculated normalized moment of inertia (NMOI) ranges from 0.2749 to 0.2762, in reasonable agreement with the other predictions. There is a good correlation between the NMOI value and the core properties including masses and radii. Therefore, measurements of NMOI by Juno can be used to constrain both the core mass and size of Jupiter's two-layer interior models. For the tidal Love number k2, the degeneracy of k2 is found and analyzed within the two-layer interior model. In spite of this, measurements of k2 can still be used to further constrain the core mass and size

  7. Splitting turbulence algorithm for mixing parameterization embedded in the ocean climate model. Examples of data assimilation and Prandtl number variations.

    Science.gov (United States)

    Moshonkin, Sergey; Gusev, Anatoly; Zalesny, Vladimir; Diansky, Nikolay

    2017-04-01

    A series of experiments was performed with a three-dimensional, free-surface, sigma-coordinate, eddy-permitting ocean circulation model for the Atlantic (from 30°S) - Arctic and Bering Sea domain (0.25 degree resolution, Institute of Numerical Mathematics Ocean Model, INMOM), using vertical grid refinement in the zone of fully developed turbulence (40 sigma-levels). The model variables are the horizontal velocity components, potential temperature, and salinity, as well as free surface height. For the parameterization of viscosity and diffusivity, an original splitting turbulence algorithm (STA) is used, in which the total evolutionary equations for the turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF) are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, an analytical solution was obtained for TKE and TDF as functions of the buoyancy and velocity shear frequencies (BF and VSF). The proposed model with STA is similar to contemporary differential turbulence models in its physical formulation, while its algorithm has high computational efficiency. For the simulation of mixing in the zone of turbulence decay, two kinds of numerical experiments were carried out: with assimilation of the annual mean climatic buoyancy frequency, and with variation of the Prandtl number as a function of BF, VSF, TKE and TDF. The CORE-II data for 1948-2009 were used for the experiments. The quality of the simulated temperature (T) and salinity (S) structure is estimated by comparing model monthly T and S profiles averaged over 1980-2009 with monthly T and S data from the World Ocean Atlas 2013. The form of the coefficients in the TKE and TDF equations at the generation-dissipation stage makes it possible to assimilate the annual mean climatic buoyancy frequency to a varying degree, which radically improves the adequacy of the model results to climatic data over the whole analysed model domain. The numerical experiments with modified

  8. How Many Model Evaluations Are Required To Predict The AEP Of A Wind Power Plant?

    DEFF Research Database (Denmark)

    Murcia Leon, Juan Pablo; Réthoré, Pierre-Elouan; Natarajan, Anand

    2015-01-01

    (AEP) predictions expensive. The objective of the present paper is to minimize the number of model evaluations required to capture the wind power plant's AEP using stationary wind farm flow models. Polynomial chaos techniques are proposed based on arbitrary Weibull-distributed wind speed and von Mises-distributed wind direction. The correlation between wind direction and wind speed is captured by defining the Weibull parameters as functions of wind direction. In order to evaluate the accuracy of these methods, the expectation and variance of the wind farm power distributions are compared against...... the traditional binning method with trapezoidal and Simpson's integration rules. The wind farm flow model used in this study is the semi-empirical wake model developed by Larsen [1]. Three test cases are studied: a single turbine, a simple and a real offshore wind power plant. A reduced number of model...
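    The core idea - replacing dense binning with a handful of well-placed model evaluations - can be shown with plain quadrature in probability space (the paper itself uses polynomial chaos; the toy power curve and Weibull parameters below are assumptions):

```python
import numpy as np

def power_curve(u):
    """Toy wind-turbine power curve [kW]; stands in for the wind-farm flow model."""
    p = np.clip((u - 3.0) / (12.0 - 3.0), 0.0, 1.0) ** 3 * 2000.0
    return np.where(u < 25.0, p, 0.0)   # cut-out above 25 m/s

A, k = 9.0, 2.0                          # assumed Weibull scale and shape
inv_cdf = lambda q: A * (-np.log(1.0 - q)) ** (1.0 / k)

# Gauss-Legendre nodes mapped to (0,1): a few dozen model evaluations suffice
x, w = np.polynomial.legendre.leggauss(32)
q = 0.5 * (x + 1.0)
mean_power_gl = 0.5 * np.sum(w * power_curve(inv_cdf(q)))

# Brute-force Monte Carlo reference needs orders of magnitude more evaluations
u = inv_cdf(np.random.default_rng(1).uniform(size=200_000))
print(mean_power_gl, power_curve(u).mean())
```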

  9. A Model for Telestroke Network Evaluation

    DEFF Research Database (Denmark)

    Storm, Anna; Günzel, Franziska; Theiss, Stephan

    2011-01-01

    analysis lacking, current telestroke reimbursement by third-party payers is limited to special contracts and not included in the regular billing system. Based on a systematic literature review and expert interviews with health care economists, third-party payers and neurologists, a Markov model...... was developed from the third-party payer perspective. In principle, it enables telestroke networks to conduct cost-effectiveness studies, because the majority of the required data can be extracted from health insurance companies’ databases and the telestroke network itself. The model presents a basis...

  10. p-values for model evaluation

    International Nuclear Information System (INIS)

    Beaujean, F.; Caldwell, A.; Kollar, D.; Kroeninger, K.

    2011-01-01

    Deciding whether a model provides a good description of data is often based on a goodness-of-fit criterion summarized by a p-value. Although there is considerable confusion concerning the meaning of p-values, leading to their misuse, they are nevertheless of practical importance in common data analysis tasks. We motivate their application using a Bayesian argumentation. We then describe commonly and less commonly known discrepancy variables and how they are used to define p-values. The distributions of these are then extracted for examples modeled on typical data analysis tasks, and comments on their usefulness for determining goodness-of-fit are given.
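    A minimal example of the usual discrepancy-variable-to-p-value pipeline, using the Pearson χ² statistic on hypothetical binned data (a scipy-based sketch, not code from the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical binned observations and model prediction
observed = np.array([28, 35, 18, 12, 7])
expected = np.array([30, 32, 20, 11, 7], dtype=float)

chi2 = np.sum((observed - expected) ** 2 / expected)  # Pearson discrepancy variable
ndof = len(observed) - 1                              # reduce further for fitted parameters
p_value = stats.chi2.sf(chi2, ndof)
print(chi2, p_value)   # a small p-value flags a poor description of the data
```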

  11. Center for Integrated Nanotechnologies (CINT) Chemical Release Modeling Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Stirrup, Timothy Scott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-12-20

    This evaluation documents the methodology and results of chemical release modeling for operations at Building 518, Center for Integrated Nanotechnologies (CINT) Core Facility. This evaluation is intended to supplement an update to the CINT [Standalone] Hazards Analysis (SHA). This evaluation also updates the original [Design] Hazards Analysis (DHA) completed in 2003 during the design and construction of the facility; since the original DHA, additional toxic materials have been evaluated and modeled to confirm the continued low hazard classification of the CINT facility and operations. This evaluation addresses the potential catastrophic release of the current inventory of toxic chemicals at Building 518 based on a standard query in the Chemical Information System (CIS).

  12. Statistical modeling for visualization evaluation through data fusion.

    Science.gov (United States)

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference has been lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies has made electroencephalogram (EEG) signals, eye movements, and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and an application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study, based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, and to other user-centered design evaluation and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Modelling operator cognitive interactions in nuclear power plant safety evaluation

    International Nuclear Information System (INIS)

    Senders, J.W.; Moray, N.; Smiley, A.; Sellen, A.

    1985-08-01

    The overall objectives of the study were to review methods which are applicable to the analysis of control room operator cognitive interactions in nuclear plant safety evaluations and to indicate where future research effort in this area should be directed. This report is based on an exhaustive search and review of the literature on NPP (Nuclear Power Plant) operator error, human error, human cognitive function, and on human performance. A number of methods which have been proposed for the estimation of data for probabilistic risk analysis have been examined and have been found wanting. None addresses the problem of diagnosis error per se. Virtually all are concerned with the more easily detected and identified errors of action. None addresses underlying cause and mechanism. It is these mechanisms which must be understood if diagnosis errors and other cognitive errors are to be controlled and predicted. We have attempted to overcome the deficiencies of earlier work and have constructed a model/taxonomy, EXHUME, which we consider to be exhaustive. This construct has proved to be fruitful in organizing our thinking about the kinds of error that can occur and the nature of self-correcting mechanisms, and has guided our thinking in suggesting a research program which can provide the data needed for quantification of cognitive error rates and of the effects of mitigating efforts. In addition a preliminary outline of EMBED, a causal model of error, is given based on general behavioural research into perception, attention, memory, and decision making. 184 refs

  14. Simplified Entropic Model for the Evaluation of Suspended Load Concentration

    Directory of Open Access Journals (Sweden)

    Domenica Mirauda

    2018-03-01

    Suspended sediment concentration is a key aspect in forecasting river evolution dynamics, as well as in water quality assessment, evaluation of reservoir impacts, and management of water resources. The estimation of suspended load often relies on empirical models, whose efficiency is limited by their analytic structure or by the need for calibration parameters. The present work deals with a simplified, fully analytical formulation of the so-called entropic model for reproducing the vertical distribution of sediment concentration. The simplification consists of a leading-order expansion of the generalized spatial coordinate of the entropic velocity profile which, strictly speaking, applies to the near-bed region but provides acceptable results also near the free surface. The proposed closed-form solution, which highlights the interplay among channel morphology, stream power, secondary flows, and suspended transport features, reduces the number of field measurements needed and, therefore, the time spent on field activities. Its accuracy and robustness were successfully tested by comparison with laboratory data reported in the literature.

  15. Effects of non-LTE multiplet dynamics on lumped-state modelling in moderate to high atomic number plasmas

    International Nuclear Information System (INIS)

    Whitney, K G; Dasgupta, A; Davis, J; Coverdale, C A

    2007-01-01

    Two atomic models of the population dynamics of substates within the n = 4 and n = 3 multiplets of nickel-like tungsten and beryllium-like iron, respectively, are described in this paper. The flexible atomic code (FAC) is used to calculate the collisional and radiative couplings and energy levels of the excited states within these ionization stages. These atomic models are then placed within larger principal-quantum-number-based ionization dynamics models of both tungsten and iron plasmas. Collisional-radiative equilibrium calculations are then carried out using these models, demonstrating how the multiplet substates depart from local thermodynamic equilibrium (LTE) as a function of ion density. The effect of these deviations from LTE on the radiative and collisional deexcitation rates of lumped 3s, 3p, 3d, 4s, 4p, 4d and 4f states is then calculated, and least-squares fits to the density dependence of these lumped-state rate coefficients are obtained. The calculations show that, with the use of lumped-state models (which are in common use), one can accurately model the L- and M-shell ionization dynamics occurring in present-day Z-pinch experiments only through the addition of these extra, non-LTE-induced, rate-coefficient density dependences. However, the derivation and use of low-order polynomial fits to these density dependences makes lumped-state modelling both viable and of value for post-processing analyses

  16. Random forest predictive modeling of mineral prospectivity with small number of prospects and data with missing values in Abra (Philippines)

    Science.gov (United States)

    Carranza, Emmanuel John M.; Laborte, Alice G.

    2015-01-01

    Machine learning methods that have been used in data-driven predictive modeling of mineral prospectivity (e.g., artificial neural networks) invariably require a large number of training prospects/locations and are unable to handle missing values in certain evidential data. The Random Forests (RF) algorithm, a machine learning method, has recently been applied to data-driven predictive mapping of mineral prospectivity, and so it is instructive to further study its efficacy in this particular field. This case study, carried out using data from Abra (Philippines), examines whether RF modeling can be used for data-driven modeling of mineral prospectivity in areas with only a few prospects and with missing values in individual layers of evidential data. Furthermore, RF modeling can handle missing values in evidential data through an RF-based imputation technique, whereas in WofE modeling missing values are simply represented by zero weights. Therefore, the RF algorithm is potentially more useful than existing methods that are currently used for data-driven predictive mapping of mineral prospectivity. In particular, it is not a purely black-box method like artificial neural networks in the context of data-driven predictive modeling of mineral prospectivity. However, further testing of the method in other areas with a few mineral occurrences is needed to fully investigate its usefulness in data-driven predictive modeling of mineral prospectivity.
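    A compact sketch of the workflow described here - RF posterior probabilities as prospectivity scores, with imputation of missing evidential values - using scikit-learn. The data are synthetic, and simple median imputation stands in for the paper's RF-based imputation technique:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))             # six synthetic evidential layers
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 1).astype(int)
X[rng.random(X.shape) < 0.1] = np.nan     # 10% missing evidential values

# Median imputation used here as a stand-in for RF-based imputation
clf = make_pipeline(
    SimpleImputer(strategy="median"),
    RandomForestClassifier(n_estimators=500, random_state=0),
)
clf.fit(X, y)
prospectivity = clf.predict_proba(X)[:, 1]  # RF posterior as a prospectivity score
print(prospectivity[:5])
```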

  17. Evaluation of consumer satisfaction using the tetra-class model.

    Science.gov (United States)

    Clerfeuille, Fabrice; Poubanne, Yannick; Vakrilova, Milena; Petrova, Guenka

    2008-09-01

    A number of studies have shown the importance of consumers' satisfaction toward pharmacy services. The measurement of patient satisfaction through different elements of services provided is challenging within the context of a dynamic economic environment. Patient satisfaction is the result of long-term established habits and expectations to the pharmacy as an institution. Few studies to date have attempted to discern whether these changes have led to increased patient satisfaction and loyalty, particularly within developing nations. The objective of this study was to evaluate the elements of the services provided in Bulgarian pharmacies and their contribution to consumer satisfaction using a tetra-class model. Three main hypotheses were tested in pharmacies to validate the model in the case of complex services. Additionally, the contribution of the different service elements to the clients' satisfaction was studied. The analysis was based on a survey of customers in central and district pharmacies in Sofia, Bulgaria. The data were analyzed through a correspondence analysis which was applied to the results of the 752 distributed questionnaires. It was observed that different dimensions of the pharmacies contribute uniquely to customer satisfaction, with consumer gender contributing greatly toward satisfaction, with type/location of pharmacy, consumer age, and educational degree also playing a part. The duration of time over which the consumers have been clients at a given pharmacy influences the subsequent service categorization. This research demonstrated that the tetra-class model is suitable for application in the pharmaceutical sector. The model results could be beneficial for both researchers and pharmacy managers.

  18. Evaluation Model of Tea Industry Information Service Quality

    OpenAIRE

    Shi , Xiaohui; Chen , Tian’en

    2015-01-01

    According to the characteristics of tea industry information service, this paper builds a service quality evaluation index system for tea industry information service quality; R-cluster analysis and multiple regression are used together to construct an evaluation model with high practicality and credibility. As proved by experiment, the evaluation model of information service quality has good precision, which to a certain extent provides guidance to e...

  19. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

    This presentation is part of a panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically to focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance and on the evaluation of different strategies for humans performing tasks with mixed-initiative (human-automation) systems. I will also discuss issues with how to provide human performance modeling data to support decisions on acceptability and tradeoffs in the design of safety-critical systems. I will conclude with challenges for the future.

  20. Model visualization for evaluation of biocatalytic processes

    DEFF Research Database (Denmark)

    Law, HEM; Lewis, DJ; McRobbie, I

    2008-01-01

    Biocatalysis offers great potential as an additional, and in some cases an alternative, synthetic tool for organic chemists, especially as a route to introduce chirality. However, the implementation of scalable biocatalytic processes nearly always requires the introduction of process and/or bi......, S-EDDS), a biodegradable chelant, and is characterised by the use of model visualization using "windows of operation".

  1. Evaluating a Model of Youth Physical Activity

    Science.gov (United States)

    Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary

    2010-01-01

    Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…

  2. An evaluation of uncertainties in radioecological models

    International Nuclear Information System (INIS)

    Hoffmann, F.O.; Little, C.A.; Miller, C.W.; Dunning, D.E. Jr.; Rupp, E.M.; Shor, R.W.; Schaeffer, D.L.; Baes, C.F. III

    1978-01-01

    The paper presents results of analyses for seven selected parameters commonly used in environmental radiological assessment models, assuming that the available data are representative of the true distribution of parameter values and that their respective distributions are lognormal. Estimates of the most probable, median, mean, and 99th percentile for each parameter are given and compared to U.S. NRC default values. The regulatory default values are generally greater than the median values for the selected parameters, but some are associated with percentiles significantly less than the 50th. The largest uncertainties appear to be associated with aquatic bioaccumulation factors for fresh water fish. Approximately one order of magnitude separates median values and values of the 99th percentile. The uncertainty is also estimated for the annual dose rate predicted by a multiplicative chain model for the transport of molecular iodine-131 via the air-pasture-cow-milk-child's thyroid pathway. The value for the 99th percentile is ten times larger than the median value of the predicted dose normalized for a given air concentration of 131I2. About 72% of the uncertainty in this model is contributed by the dose conversion factor and the milk transfer coefficient. Considering the difficulties in obtaining a reliable quantification of the true uncertainties in model predictions, methods for taking these uncertainties into account when determining compliance with regulatory statutes are discussed. (orig./HP)

  3. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects and/or concepts. To find out the suitability and performance of different models in evaluating concept similarities, this paper compares four main types of models: the geometric model, the feature model, the network model, and the transformational model. The fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that the correlations between these models are very high, possibly because all these models are designed to simulate the similarity judgement of the human mind.

  4. A MULTILAYER BIOCHEMICAL DRY DEPOSITION MODEL 2. MODEL EVALUATION

    Science.gov (United States)

    The multilayer biochemical dry deposition model (MLBC) described in the accompanying paper was tested against half-hourly eddy correlation data from six field sites under a wide range of climate conditions with various plant types. Modeled CO2, O3, SO2 ...

  5. Bounds on the number of bound states in the transfer matrix spectrum for some weakly correlated lattice models

    International Nuclear Information System (INIS)

    O’Carroll, Michael

    2012-01-01

    We consider the interaction of particles in weakly correlated lattice quantum field theories. In the imaginary-time functional integral formulation of these theories there is a relative-coordinate lattice Schroedinger operator H which approximately describes the interaction of these particles. Scalar and vector spin, QCD and Gross-Neveu models are included in these theories. In the weakly correlated regime H = H_0 + W, where H_0 = -γΔ_l, 0 < γ << 1, and Δ_l is the d-dimensional lattice Laplacian; γ = β, the inverse temperature, for spin systems, and γ = κ^3, where κ is the hopping parameter, for QCD. W is a self-adjoint potential operator which may have non-local contributions but obeys the bound ||W(x, y)|| <= c exp(-a(||x|| + ||y||)), with a large: exp(-a) = (β/β_0)^(1/2) (κ/κ_0) for spin (QCD) models. H_0, W, and H act in l^2(Z^d), d >= 1. The spectrum of H below zero is known to be discrete and we obtain bounds on the number of states below zero. This number depends on the short-range properties of W, i.e., the long-range tail does not increase the number of states.

  6. RTMOD: Real-Time MODel evaluation

    DEFF Research Database (Denmark)

    Graziani, G.; Galmarini, S.; Mikkelsen, Torben

    2000-01-01

    the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, already existing statistical results.... At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained...... during the ETEX exercises suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration...

  7. A Descriptive Evaluation of Software Sizing Models

    Science.gov (United States)

    1987-09-01

    (Only table-of-contents fragments are available in place of an abstract; the report covers sizing approaches including SPQR Sizer/FP, the QSM Size Planner (Function Points), the application of the SPQR SIZER/FP approach to the CATSS sensitivity model, and ASSET-R.)

  8. Comparison of INAR(1)-Poisson model and Markov prediction model in forecasting the number of DHF patients in west java Indonesia

    Science.gov (United States)

    Ahdika, Atina; Lusiyana, Novyan

    2017-02-01

    The World Health Organization (WHO) has noted Indonesia as the country with the most dengue hemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF, so one of the efforts that can be made by both the government and residents is prevention. In statistics, there are several methods to predict the number of DHF cases that can be used as a reference for prevention. In this paper, a discrete time series model, specifically the INAR(1)-Poisson model, and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The results show that MPM is the better model, since it has the smaller values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
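    The model comparison rests on two error measures that are easy to restate in code (the case counts and forecasts below are hypothetical, chosen only to illustrate the computation):

```python
import numpy as np

def mae(actual, predicted):
    return np.mean(np.abs(np.asarray(actual, float) - np.asarray(predicted, float)))

def mape(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Hypothetical monthly DHF case counts and two competing forecasts
cases = [120, 95, 143, 160, 110]
inar1 = [110, 100, 150, 150, 120]
mpm   = [118, 97, 140, 158, 112]
for name, pred in [("INAR(1)-Poisson", inar1), ("Markov", mpm)]:
    print(name, mae(cases, pred), mape(cases, pred))
```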

  9. Enhancement in Evaluating Small Group Work in Courses with Large Number of Students. Machine Theory at Industrial Engineering Degrees

    Science.gov (United States)

    Jordi Nebot, Lluïsa; Pàmies-Vilà, Rosa; Català Calderon, Pau; Puig-Ortiz, Joan

    2013-01-01

    This article examines new tutoring evaluation methods to be adopted in the course, Machine Theory, in the Escola Tècnica Superior d'Enginyeria Industrial de Barcelona (ETSEIB, Universitat Politècnica de Catalunya). These new methods have been developed in order to facilitate teaching staff work and include students in the evaluation process.…

  10. The effect of numbered heads together (NHT) cooperative learning model on the cognitive achievement of students with different academic ability

    Science.gov (United States)

    Leasa, Marleny; Duran Corebima, Aloysius

    2017-01-01

    Learning models and academic ability may affect students' achievement in science. This study thus aimed to investigate the effect of the numbered heads together (NHT) cooperative learning model on elementary students' cognitive achievement in natural science. The study employed a quasi-experimental design with pretest-posttest non-equivalent control groups in a 2 x 2 factorial: two learning models were compared (NHT and conventional) across two levels of academic ability (high and low). The results of an ANCOVA test confirmed a difference in the students' cognitive achievement based on learning model and general academic ability; however, the interaction between learning model and academic ability did not affect the students' cognitive achievement. In conclusion, teachers are strongly recommended to be more creative in designing learning using other types of cooperative learning models. Also, schools should create a better, more cooperative learning environment to avoid unfair competition among students in the classroom and, as a result, improve the students' academic ability. Further research is needed to explore the contribution of other aspects of cooperative learning to the cognitive achievement of students with different academic ability.

  11. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. The Effect of Sample Size and Data Numbering on Precision of Calibration Model to predict Soil Properties

    Directory of Open Access Journals (Sweden)

    H Mohamadi Monavar

    2017-10-01

    Introduction: Precision agriculture (PA) is a technology that measures and manages within-field variability, such as the physical and chemical properties of soil. Nondestructive and rapid VIS-NIR technology has detected significant correlations between reflectance spectra and the physical and chemical properties of soil. Quantitative prediction of soil factors such as nitrogen, carbon, cation exchange capacity and clay content is, moreover, very important in precision farming. The emphasis of this paper is on comparing different techniques for choosing calibration samples: random selection, selection based on chemical data, and selection based on PCA. Since increasing the number of samples is usually time-consuming and costly, this study sought the best sampling approach among the available methods for building calibration models. In addition, the effect of sample size on the accuracy of the calibration and validation models was analysed. Materials and Methods: Two hundred and ten soil samples were collected from a cultivated farm located in Avarzaman, Hamedan province, Iran. The crop rotation was mostly potato and wheat. Samples were collected from a depth of 20 cm, passed through a 2 mm sieve, and air dried at room temperature. Chemical analysis was performed in the soil science laboratory of the Faculty of Agriculture Engineering, Bu-Ali Sina University, Hamadan, Iran. Two spectrometers (AvaSpec-ULS 2048 UV-VIS and FT-NIR100N) were used to measure the spectral bands covering the UV-Vis and NIR region (220-2200 nm). Each soil sample was uniformly tiled in a petri dish and scanned 20 times. The pre-processing methods of multiplicative scatter correction (MSC) and baseline correction (BC) were then applied to the raw signals using Unscrambler software. The samples were divided into two groups: one group of 105 samples for calibration, and the second group for validation. Each time, 15 samples were selected randomly and tested the accuracy of

  13. Effects of the virtual particle number on the S matrix of the (φ^4)_{1+1} model

    International Nuclear Information System (INIS)

    Kroeger, H.; Girard, R.; Dufour, G.

    1987-01-01

    We present results for the S matrix in the (φ^4)_{1+1} model obtained by a nonperturbative calculation using a momentum-space discretization technique. First, we calculate the two-body S matrix in the strong-coupling regime (up to λ_eff = 3), with the restriction of taking into account only two-body virtual particle states. We find agreement with standard perturbation theory obtained by summing up the corresponding graphs to infinite order. We also estimate the effect of mass renormalization. Second, we investigate the effect of including higher virtual particle numbers in two-particle scattering in the cases λ_eff = 1/6 and λ_eff = 1. In both cases we find convergence of the S matrix with respect to increasing the virtual-particle-number cutoff

  14. Influence of Coloured Correlated Noises on Probability Distribution and Mean of Tumour Cell Number in the Logistic Growth Model

    Institute of Scientific and Technical Information of China (English)

    HAN Li-Bo; GONG Xiao-Long; CAO Li; WU Da-Jin

    2007-01-01

    An approximate Fokker-Planck equation for the logistic growth model driven by coloured correlated noises is derived by applying the Novikov theorem and the Fox approximation. The steady-state probability distribution (SPD) and the mean of the tumour cell number are analysed. It is found that the SPD is a single-extremum configuration when the degree of correlation between the multiplicative and additive noises, λ, is in -1 < λ ≤ 0, and can be a double-extrema configuration for 0 < λ < 1. A configuration transition occurs as the noise parameters vary. A minimum appears in the curve of the mean steady-state tumour cell number, 〈x〉, versus λ; the position and the value of the minimum are controlled by the noise correlation times.
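    A direct stochastic simulation offers a useful cross-check on such Fokker-Planck results. The sketch below integrates a logistic model driven by two correlated Ornstein-Uhlenbeck (coloured) noises; all parameter values and the specific noise construction are assumptions for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = 1.0, 0.1          # assumed logistic growth and decay rates
D, alpha = 0.3, 0.1      # multiplicative / additive noise intensities
tau, lam = 0.5, 0.5      # noise correlation time and cross-correlation lambda
dt, nstep, npath = 2e-3, 5000, 500

x = np.full(npath, 5.0)  # initial tumour cell number
xi = np.zeros(npath)     # multiplicative OU noise
eta = np.zeros(npath)    # additive OU noise
for _ in range(nstep):
    # correlated Wiener increments with correlation lambda
    dW1 = rng.normal(size=npath) * np.sqrt(dt)
    dW2 = lam * dW1 + np.sqrt(1 - lam**2) * rng.normal(size=npath) * np.sqrt(dt)
    xi += (-xi / tau) * dt + np.sqrt(2 * D) / tau * dW1
    eta += (-eta / tau) * dt + np.sqrt(2 * alpha) / tau * dW2
    x += x * (a - b * x) * dt + x * xi * dt + eta * dt
    x = np.maximum(x, 0.0)   # cell number stays non-negative
print("steady-state mean <x> ~", x.mean())
```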

  15. Studies of the Effects of Perfluorocarbon Emulsions on Platelet Number and Function in Models of Critical Battlefield Injury

    Science.gov (United States)

    2016-09-01

    (Only standard report-documentation fragments were recoverable in place of an abstract.) Associated publications: Jiepei Zhu, MD PhD, Bruce D. Spiess, MD. Evaluation of Noninvasive Cardiac Output Monitoring in Sheep with Hemodynamic Instability. Aug 2016, Military...; Zhu, J., Holloway, K.L., Parsons, J.T. Intracerebral hematoma incidence in a coagulopathic sheep model of deep brain stimulation (DBS) surgery.

  16. A model for photothermal responses of flowering in rice. II. Model evaluation.

    NARCIS (Netherlands)

    Yin, X.; Kropff, M.J.; Nakagawa, H.; Horie, T.; Goudriaan, J.

    1997-01-01

    A detailed nonlinear model, the 3s-Beta model, for the photothermal responses of flowering in rice (Oryza sativa L.) was evaluated for predicting rice flowering date under field conditions. This model was compared with three other models: a three-plane linear model and two nonlinear models, viz., the

  17. Evaluating the double Poisson generalized linear model.

    Science.gov (United States)

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
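    The DP normalizing constant can also be obtained by brute-force truncated summation of Efron's unnormalized density, which serves as a numerical reference for any closed-form approximation (the paper's own approximation method is not reproduced here):

```python
import numpy as np
from scipy.special import gammaln

def dp_log_kernel(y, mu, theta):
    """Unnormalized log-density of the double Poisson (Efron, 1986)."""
    y = np.asarray(y, dtype=float)
    base = 0.5 * np.log(theta) - theta * mu
    with np.errstate(divide="ignore", invalid="ignore"):
        t = (-y + y * np.log(y) - gammaln(y + 1)
             + theta * y * (1 + np.log(mu) - np.log(y)))
    return base + np.where(y > 0, t, 0.0)   # y = 0 term reduces to the base

def dp_normalizing_constant(mu, theta, ymax=1000):
    """1/c by direct truncated summation over the support."""
    y = np.arange(ymax + 1)
    return 1.0 / np.exp(dp_log_kernel(y, mu, theta)).sum()

c = dp_normalizing_constant(mu=4.0, theta=0.6)   # theta < 1: over-dispersion
# Compare with Efron's classic approximation 1/c ~ 1 + (1-theta)/(12*mu*theta)*(1+1/(mu*theta))
print(c)
```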

  18. Formal Process Modeling to Improve Human Decision-Making in Test and Evaluation Acoustic Range Control

    Science.gov (United States)

    2017-09-01

    MODELING TO IMPROVE HUMAN DECISION-MAKING DURING TEST AND EVALUATION RANGE CONTROL by William Carlson September 2017 Thesis Advisor...the Office of Management and Budget, Paperwork Reduction Project (0704-0188) Washington, DC 20503. 1. AGENCY USE ONLY (Leave blank) 2. REPORT...MAKING DURING TEST AND EVALUATION RANGE CONTROL 5. FUNDING NUMBERS 6. AUTHOR(S) William Carlson 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES

  19. Design Concept Evaluation Using System Throughput Model

    International Nuclear Information System (INIS)

    Sequeira, G.; Nutt, W. M.

    2004-01-01

    The U.S. Department of Energy (DOE) Office of Civilian Radioactive Waste Management (OCRWM) is currently developing the technical bases to support the submittal of a license application for construction of a geologic repository at Yucca Mountain, Nevada to the U.S. Nuclear Regulatory Commission. The Office of Repository Development (ORD) is responsible for developing the design of the proposed repository surface facilities for the handling of spent nuclear fuel and high level nuclear waste. Preliminary design activities are underway to sufficiently develop the repository surface facilities design for inclusion in the license application. The design continues to evolve to meet mission needs and to satisfy both regulatory and program requirements. A system engineering approach is being used in the design process since the proposed repository facilities are dynamically linked by a series of sub-systems and complex operations. In addition, the proposed repository facility is a major system element of the overall waste management process being developed by the OCRWM. Such an approach includes iterative probabilistic dynamic simulation as an integral part of the design evolution process. A dynamic simulation tool helps to determine if: (1) the mission and design requirements are complete, robust, and well integrated; (2) the design solutions under development meet the design requirements and mission goals; (3) opportunities exist where the system can be improved and/or optimized; and (4) proposed changes to the mission, and design requirements have a positive or negative impact on overall system performance and if design changes may be necessary to satisfy these changes. This paper will discuss the type of simulation employed to model the waste handling operations. It will then discuss the process being used to develop the Yucca Mountain surface facilities model. The latest simulation model and the results of the simulation and how the data were used in the design

  20. A two-angle model of dynamic wetting in microscale capillaries under low capillary numbers with experiments.

    Science.gov (United States)

    Lei, Da; Lin, Mian; Li, Yun; Jiang, Wenbin

    2018-06-15

    An accurate model of the dynamic contact angle θ_d is critical for the calculation of capillary force in applications like enhanced oil recovery, where the capillary number Ca ranges from 10^-10 to 10^-5 and the Bond number Bo is less than 10^-4. The rate dependence of the dynamic contact angle under such conditions remains blurred, and is the main target of this study. Featuring pressure control and interface tracking, the innovative experimental system presented in this work achieves the desired ranges of Ca and Bo, and enables direct optical measurement of dynamic contact angles in capillaries as tiny as 40 × 20 (width × height) μm and 80 × 20 μm. The advancing and receding processes of wetting and nonwetting liquids were tested. The dynamic contact angle was confirmed to be velocity-independent over the tested range (contact line velocity V = 0.135-490 μm/s), and it can be described by a two-angle model with desirable accuracy. A modified two-angle model was developed and an empirical form was obtained from experiments. For different liquids contacting the same surface, the advancing angle θ_adv approximately equals the static contact angle θ_o. The receding angle θ_rec was found to be a linear function of θ_adv, in good agreement with our experiments and others from the literature. Copyright © 2018 Elsevier Inc. All rights reserved.
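    In code, a two-angle model is essentially a rate-independent switch between two angles. The sketch below uses an assumed linear receding-angle relation with placeholder coefficients, not the paper's fitted empirical form:

```python
def dynamic_contact_angle(theta_adv, advancing, a=0.9, b=5.0):
    """Two-angle model: theta_d is rate-independent at low Ca and takes one of
    two values. Coefficients a, b of the linear receding-angle relation are
    hypothetical placeholders, not the paper's fitted values."""
    theta_rec = a * theta_adv - b   # assumed form of the empirical linear relation
    return theta_adv if advancing else theta_rec

# For the same surface, theta_adv is taken approximately equal to the static angle
print(dynamic_contact_angle(theta_adv=60.0, advancing=True))    # advancing meniscus
print(dynamic_contact_angle(theta_adv=60.0, advancing=False))   # receding meniscus
```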

  1. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    Science.gov (United States)

    Xue, Xiaofeng

    2017-11-01

    In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph C_n with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of the three states 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbour at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there are no removed vertices and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions H_S(ψ_t), H_V(ψ_t) for t ≥ 0 and show that for any t ≥ 0, H_S(ψ_t) is the limit proportion of susceptible vertices and H_V(ψ_t) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbour at moment t as n grows to infinity.
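    The limit theorem can be illustrated numerically: for large n, the simulated susceptible fraction should stabilize around the deterministic limit H_S(ψ_t). A discrete-time Monte Carlo sketch of the weighted SIR dynamics (all rates and parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, theta, gamma = 2000, 0.005, 0.02, 1.0  # size, edge prob, initial infection, removal rate
rho = rng.exponential(1.0, size=n)           # i.i.d. positive vertex weights

# Erdos-Renyi adjacency: each edge kept independently with probability p
adj = np.triu(rng.random((n, n)) < p, k=1)
adj = adj | adj.T

state = np.where(rng.random(n) < theta, 1, 0)  # 0 = S, 1 = I, 2 = R
dt, beta = 0.01, 1.0                           # time step; infection rate scale (assumed)
for _ in range(int(10 / dt)):
    infective = state == 1
    # infection pressure on each vertex: sum over infective neighbours of rho_i * rho_j
    pressure = beta * rho * (adj @ (infective * rho))
    new_inf = (state == 0) & (rng.random(n) < 1 - np.exp(-pressure * dt))
    removed = infective & (rng.random(n) < 1 - np.exp(-gamma * dt))
    state[new_inf], state[removed] = 1, 2
print("final susceptible fraction:", np.mean(state == 0))
```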

  2. Number projection method

    International Nuclear Information System (INIS)

    Kaneko, K.

    1987-01-01

    A relationship between the number projection and the shell model methods is investigated in the case of a single-j shell. We can find a one-to-one correspondence between the number projected and the shell model states

  3. Integer linear models with a polynomial number of variables and constraints for some classical combinatorial optimization problems

    Directory of Open Access Journals (Sweden)

    Nelson Maculan

    2003-01-01

    Full Text Available We present integer linear models with a polynomial number of variables and constraints for combinatorial optimization problems in graphs: optimum elementary cycles, optimum elementary paths, and optimum tree problems.
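
    As an illustration of a compact formulation in this spirit, the sketch below builds a polynomial-size integer linear model for an optimum elementary s-t path, using flow conservation plus MTZ-style ordering variables; the graph, costs, and the PuLP modelling layer are assumptions of this example, not the authors' formulation.

```python
# Compact (polynomial-size) ILP for an optimum elementary s-t path:
# flow conservation plus MTZ ordering variables to forbid disjoint cycles.
import pulp

nodes = range(5)
arcs = {(0, 1): 2, (1, 2): 2, (0, 2): 5, (2, 3): 1, (3, 4): 2, (1, 4): 9}
s, t, n = 0, 4, len(nodes)

prob = pulp.LpProblem("elementary_path", pulp.LpMinimize)
x = {a: pulp.LpVariable(f"x_{a[0]}_{a[1]}", cat=pulp.LpBinary) for a in arcs}
u = {i: pulp.LpVariable(f"u_{i}", lowBound=0, upBound=n - 1) for i in nodes}

prob += pulp.lpSum(c * x[a] for a, c in arcs.items())     # path cost
for i in nodes:            # one unit of flow from s to t (O(n) constraints)
    out = pulp.lpSum(x[a] for a in arcs if a[0] == i)
    inn = pulp.lpSum(x[a] for a in arcs if a[1] == i)
    prob += out - inn == (1 if i == s else -1 if i == t else 0)
for i, j in arcs:          # MTZ ordering keeps the path elementary (O(|E|))
    prob += u[j] >= u[i] + 1 - n * (1 - x[(i, j)])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([a for a in arcs if x[a].value() == 1])   # the cheapest elementary path
```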

  4. Tomography of atomic number and density of materials using dual-energy imaging and the Alvarez and Macovski attenuation model

    Energy Technology Data Exchange (ETDEWEB)

    Paziresh, M.; Kingston, A. M., E-mail: andrew.kingston@anu.edu.au; Latham, S. J.; Fullagar, W. K.; Myers, G. M. [Department of Applied Mathematics, Research School of Physics and Engineering, The Australian National University, Canberra 2601 (Australia)

    2016-06-07

    Dual-energy computed tomography and the Alvarez and Macovski [Phys. Med. Biol. 21, 733 (1976)] transmitted intensity (AMTI) model were used in this study to estimate maps of the density (ρ) and atomic number (Z) of mineralogical samples. In this method, the attenuation coefficients are represented [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)] in terms of the two most important interactions of X-rays with atoms, that is, photoelectric absorption (PE) and Compton scattering (CS). This enables material discrimination, as PE and CS are, respectively, dependent on the atomic number (Z) and density (ρ) of materials [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)]. Dual-energy imaging is able to identify sample materials even if the materials have similar attenuation coefficients in a single-energy spectrum. We use the full model rather than one of the several simplified forms applied elsewhere [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976); Siddiqui et al., SPE Annual Technical Conference and Exhibition (Society of Petroleum Engineers, 2004); Derzhi, U.S. patent application 13/527,660 (2012); Heismann et al., J. Appl. Phys. 94, 2073–2079 (2003); Park and Kim, J. Korean Phys. Soc. 59, 2709 (2011); Abudurexiti et al., Radiol. Phys. Technol. 3, 127–135 (2010); and Kaewkhao et al., J. Quant. Spectrosc. Radiat. Transfer 109, 1260–1265 (2008)]. This paper describes the tomographic reconstruction of ρ and Z maps of mineralogical samples using the AMTI model. The full model requires precise knowledge of the X-ray energy spectra and calibration of the PE and CS constants and of the exponents of atomic number and energy, which were estimated based on fits to simulations and calibration measurements. The estimated ρ and Z images of the samples used in this paper yield average relative errors of 2.62% and 1.19% and maximum relative errors of 2.64% and 7.85%, respectively. Furthermore, we demonstrate that the method accounts for the beam hardening effect in density (
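
    A schematic of the two-basis (photoelectric plus Compton) decomposition at the heart of the AMTI model; the Klein-Nishina expression is standard, while the energies, measurements, and calibration constants here are invented for illustration.

```python
# Alvarez-Macovski two-basis decomposition sketch: attenuation is written as
# a photoelectric term (~ Z^m / E^3) plus a Compton term given by the
# Klein-Nishina function. Energies and coefficients below are illustrative.
import numpy as np

def klein_nishina(E_keV):
    a = E_keV / 510.975          # photon energy over electron rest energy
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)

def basis(E_keV):
    return np.array([1.0 / E_keV**3, klein_nishina(E_keV)])

E = np.array([50.0, 90.0])       # keV, assumed effective spectrum energies
mu = np.array([0.40, 0.25])      # 1/cm, made-up dual-energy measurements

A = np.stack([basis(e) for e in E])       # 2x2 linear system
a_pe, a_c = np.linalg.solve(A, mu)        # PE and Compton components
# a_pe scales with rho * Z^m (m ~ 3-4) and a_c with electron density, so
# rho and an effective Z can be recovered after calibration.
print(a_pe, a_c)
```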

  5. Dynamic analysis of an SDOF helicopter model featuring skid landing gear and an MR damper by considering the rotor lift factor and a Bingham number

    Science.gov (United States)

    Saleh, Muftah; Sedaghati, Ramin; Bhat, Rama

    2018-06-01

    The present study addresses the performance of a skid landing gear (SLG) system of a rotorcraft impacting the ground at a vertical sink rate of up to 4.5 m s⁻¹. The impact attitude is assumed to be level, as per Chapter 527 of the Airworthiness Manual of Transport Canada Civil Aviation and Part 27 of the Federal Aviation Regulations of the US Federal Aviation Administration. A single degree of freedom helicopter model is investigated under different values of the rotor lift factor, L. In this study, three SLG versions are evaluated: (a) a standalone conventional SLG; (b) an SLG equipped with a passive viscous damper; and (c) an SLG incorporating a magnetorheological energy absorber (MREA). The non-dimensional solutions of the helicopter models show that the two former SLG systems suffer adaptability issues with variations in the impact velocity and the rotor lift factor. Therefore, the successful alternative is to employ the MREA. Two different optimum Bingham numbers, for the compression and rebound strokes, are defined. A new chart, called the optimum Bingham number versus rotor lift factor (Bi_o-L) chart, is introduced in this study to correlate the optimum Bingham numbers to the variation in the rotor lift factor and to provide more accessibility from the perspective of control design. The chart shows that the optimum Bingham number for the compression stroke decreases linearly as the rotor lift factor increases. This alleviates the impact force on the system and reduces the amount of magnetorheological yield force that would be generated. By contrast, the optimum Bingham number for the rebound stroke is found to increase linearly with the rotor lift factor. This ensures controllable attenuation of the restoring force of the linear spring element. This idea can be exploited to generate charts for different landing attitudes and sink rates. In this article, the response of the helicopter equipped with the conventional undamped, damped

  6. A random walk model to evaluate autism

    Science.gov (United States)

    Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.

    2018-02-01

    A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we study here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and the restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (and not imitate it with probability 1 - f). The diffusion regime, measured by the Hurst exponent (H), is then obtained, whose changes may indicate a different degree of autism.
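
    A sketch of two coupled elephant random walks in the spirit of this model, with an imitation probability f and a crude Hurst-exponent estimate; the memory parameter, f, and the estimator are illustrative choices, not the paper's settings.

```python
# Two coupled ERWs: at each step walker B copies walker A's current step with
# probability f, otherwise draws from its own memory as a standard ERW.
import numpy as np

def coupled_erw(T=20000, p=0.75, f=0.3, seed=1):
    rng = np.random.default_rng(seed)
    a, b = np.empty(T), np.empty(T)
    a[0] = b[0] = 1.0                         # first steps
    for t in range(1, T):
        past_a = a[rng.integers(t)]           # ERW memory: recall a past step
        a[t] = past_a if rng.random() < p else -past_a
        if rng.random() < f:
            b[t] = a[t]                       # imitation of the other walker
        else:
            past_b = b[rng.integers(t)]
            b[t] = past_b if rng.random() < p else -past_b
    return np.cumsum(a), np.cumsum(b)

xa, xb = coupled_erw()
# Crude Hurst estimate from the growth of the mean-squared displacement.
ts = np.unique(np.logspace(1, 4, 20).astype(int))
msd = [np.mean((xb[t:] - xb[:-t]) ** 2) for t in ts]
H = np.polyfit(np.log(ts), np.log(msd), 1)[0] / 2
print(f"estimated Hurst exponent H = {H:.2f}")
```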

  7. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator-driven systems for nuclear waste incineration. The efficiency and safety of such future facilities are dependent on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component of these methods, namely the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, neglecting this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows the magnitude of the model deficiency to be estimated explicitly. Both features have been missing from available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the
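
    A schematic of the Gaussian-process model-defect idea: a deficient model curve is corrected by a GP inferred from residuals against experiment. The toy model, kernel, and hyperparameters below are invented for illustration.

```python
# Model defect as a Gaussian process: observable = model(x) + delta(x),
# delta ~ GP(0, k). Standard GP regression on the residuals estimates delta.
import numpy as np

def rbf(x1, x2, amp=0.1, length=2.0):
    return amp**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / length**2)

x_exp = np.linspace(1.0, 10.0, 8)        # e.g. incident energies (toy grid)
sigma = 0.02                             # assumed experimental uncertainty
model_pred = 1.0 / x_exp                 # deficient model curve (toy)
data = (model_pred + 0.05 * np.sin(x_exp)
        + sigma * np.random.default_rng(2).normal(size=x_exp.size))

x_new = np.linspace(1.0, 10.0, 100)
K = rbf(x_exp, x_exp) + sigma**2 * np.eye(x_exp.size)
alpha = np.linalg.solve(K, data - model_pred)        # residuals drive the defect
delta_mean = rbf(x_new, x_exp) @ alpha
corrected = 1.0 / x_new + delta_mean                 # model + estimated defect
# The posterior variance of delta quantifies the magnitude of model deficiency.
var = np.diag(rbf(x_new, x_new)
              - rbf(x_new, x_exp) @ np.linalg.solve(K, rbf(x_exp, x_new)))
print(float(np.abs(delta_mean).max()), float(var.max()))
```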

  8. Promoting Excellence in Nursing Education (PENE): Pross evaluation model.

    Science.gov (United States)

    Pross, Elizabeth A

    2010-08-01

    The purpose of this article is to examine the Promoting Excellence in Nursing Education (PENE) Pross evaluation model. A conceptual evaluation model, such as the one described here, may be useful to nurse academicians in the ongoing evaluation of educational programs, especially those with goals of excellence. Frameworks for evaluating nursing programs are necessary because they offer a way to systematically assess the educational effectiveness of complex nursing programs. This article describes the conceptual framework and its tenets of excellence. Copyright 2009 Elsevier Ltd. All rights reserved.

  9. Innovations in Tertiary Education Financing: A Comparative Evaluation of Allocation Mechanisms. Education Working Paper Series. Number 4

    Science.gov (United States)

    Salmi, Jamil; Hauptman, Arthur M.

    2006-01-01

    In recent decades, a growing number of countries have sought innovative solutions to the substantial challenges they face in financing tertiary education. One of the principal challenges is that the demand for education beyond the secondary level in most countries around the world is growing far faster than the ability or willingness of…

  10. ECOPATH: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Bergstroem, U.; Nordlinder, S.

    1996-01-01

    The model is based upon compartment theory and is run in combination with a statistical error propagation method (PRISM, Gardner et al. 1983). It is intended to be generic for application to other sites by simply changing parameter values. It was constructed especially for this scenario. However, it is based upon an earlier model designed for calculating relations between released amounts of radioactivity and doses to critical groups (used for Swedish regulations concerning annual reports of released radioactivity from routine operation of Swedish nuclear power plants (Bergstroem and Nordlinder, 1991)). The model handles exposure from deposition on terrestrial areas as well as deposition on lakes, starting from deposition values. 14 refs, 16 figs, 7 tabs
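
    As a toy illustration of the compartment-theory structure such models share (the compartments and rate constants below are invented, not ECOPATH's):

```python
# Linear compartment model: dc/dt = K c, with first-order transfers between
# compartments. All rate constants here are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

# Compartments: 0 = soil, 1 = vegetation, 2 = lake water, 3 = ingestion proxy
K = np.array([
    [-0.020,  0.000,  0.000, 0.0],   # soil losses
    [ 0.010, -0.100,  0.000, 0.0],   # soil -> vegetation, vegetation losses
    [ 0.002,  0.000, -0.050, 0.0],   # runoff to lake, lake outflow
    [ 0.000,  0.050,  0.010, 0.0],   # transfer into the ingestion pathway
])

def rhs(t, c):
    return K @ c

c0 = np.array([1.0, 0.0, 0.0, 0.0])            # unit deposition on soil
sol = solve_ivp(rhs, (0.0, 100.0), c0)
print(sol.y[:, -1])                            # compartment contents at t = 100
```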

  11. Evaluation of atmospheric dispersion/consequence models supporting safety analysis

    International Nuclear Information System (INIS)

    O'Kula, K.R.; Lazaro, M.A.; Woodard, K.

    1996-01-01

    Two DOE Working Groups have completed an evaluation of the accident phenomenology and consequence methodologies used to support DOE facility safety documentation. The independent evaluations each concluded that no one computer model adequately addresses all accident and atmospheric release conditions. MACCS2, MATHEW/ADPIC, TRAC RA/HA, and COSYMA are adequate for most radiological dispersion and consequence needs. ALOHA, DEGADIS, HGSYSTEM, TSCREEN, and SLAB are recommended for chemical dispersion and consequence applications. Additional work is suggested, principally in the evaluation of new models, targeting certain models for continued development and training, and establishing a Web page for guidance to safety analysts

  12. Comparison between a sire model and an animal model for genetic evaluation of fertility traits in Danish Holstein population

    DEFF Research Database (Denmark)

    Sun, C; Madsen, P; Nielsen, U S

    2009-01-01

    Comparisons between a sire model, a sire-dam model, and an animal model were carried out to evaluate the ability of the models to predict breeding values of fertility traits, based on data including 471,742 records from the first lactation of Danish Holstein cows, covering insemination years from 1995 to 2004. The traits in the analysis were days from calving to first insemination, calving interval, days open, days from first to last insemination, number of inseminations per conception, and nonreturn rate within 56 d after first service. The correlations between sire estimated breeding value … The results suggest that the animal model, rather than the sire model, should be used for genetic evaluation of fertility traits.

  13. A simple testable model of baryon number violation: Baryogenesis, dark matter, neutron-antineutron oscillation and collider signals

    Science.gov (United States)

    Allahverdi, Rouzbeh; Dev, P. S. Bhupal; Dutta, Bhaskar

    2018-04-01

    We study a simple TeV-scale model of baryon number violation which explains the observed proximity of the dark matter and baryon abundances. The model has constraints arising from both low- and high-energy processes and, in particular, predicts a sizable rate for neutron-antineutron (n-n̄) oscillation at low energy and a monojet signal at the LHC. We find an interesting complementarity among the constraints arising from the observed baryon asymmetry, the ratio of dark matter and baryon abundances, the n-n̄ oscillation lifetime, and the LHC monojet signal. There are regions in the parameter space where the n-n̄ oscillation lifetime is found to be more constraining than the LHC constraints, which illustrates the importance of the next-generation n-n̄ oscillation experiments.

  14. The effect of the number of condensed phases modeled on aerosol behavior during an induced steam generator tube rupture sequence

    International Nuclear Information System (INIS)

    Bixler, N.E.; Schaperow, J.H.

    1998-06-01

    VICTORIA is a mechanistic computer code designed to analyze fission product behavior within a nuclear reactor coolant system (RCS) during a severe accident. It provides detailed predictions of the release of radioactive and nonradioactive materials from the reactor core and transport and deposition of these materials within the RCS. A recently completed independent peer review of VICTORIA, while confirming the overall adequacy of the code, recommended a number of modeling improvements. One of these recommendations, to model three rather than a single condensed phase, is the focus of the work reported here. The recommendation has been implemented as an option so that either a single or three condensed phases can be treated. Both options have been employed in the study of fission product behavior during an induced steam generator tube rupture sequence. Differences in deposition patterns and mechanisms predicted using these two options are discussed

  15. Analysis of the physical properties of trehalose-water-lithium iodide based on the bond strength coordination number fluctuation model

    International Nuclear Information System (INIS)

    Sahara; Jean L Ndeugueu; Masaru Aniya

    2010-01-01

    The temperature dependence of the viscosity of the trehalose-water-lithium iodide system has been investigated by means of the Bond Strength Coordination Number Fluctuation (BSCNF) model. The results indicate that increasing the trehalose content, while keeping the LiI content constant, decreases the fragility due to the increase of the connectivity between the structural units. Our analysis also suggests that the fragility of the system is controlled by the amount of water in the composition. By increasing the water content, the total bond strength decreases and its fluctuation increases, resulting in an increase of the fragility. Based on the analysis of the obtained parameters of the BSCNF model, a physical interpretation of the VFT parameters reported in a previous study is given. (author)
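
    For reference, the Vogel-Fulcher-Tammann (VFT) relation referred to above is conventionally written as follows (standard form; the parameters are fitted per system, and no values from the paper are implied):

```latex
% VFT viscosity law: eta_0, B, and T_0 are the fitted parameters.
\eta(T) = \eta_0 \exp\!\left(\frac{B}{T - T_0}\right)
```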

  16. FARMLAND: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Attwood, C.; Fayers, C.; Mayall, A.; Brown, J.; Simmonds, J.R.

    1996-01-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs

  17. FARMLAND: Model description and evaluation of model performance

    Energy Technology Data Exchange (ETDEWEB)

    Attwood, C; Fayers, C; Mayall, A; Brown, J; Simmonds, J R [National Radiological Protection Board, Chilton (United Kingdom)

    1996-09-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs.

  18. Mixing the Green-Ampt model and Curve Number method as an empirical tool for rainfall excess estimation in small ungauged catchments.

    Science.gov (United States)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-04-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN method is a simple and valuable approach for estimating the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model in a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), that distributes in time the information provided by the SCS-CN method so as to provide estimates of sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears to be an encouraging tool for predicting net rainfall peak and duration values and has shown, at least for the test cases considered in this study, better agreement with observed hydrographs than the classic SCS-CN method.
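
    A sketch of the CN4GA idea: the SCS-CN storm total is used to calibrate the Green-Ampt conductivity, which then distributes rainfall excess in time. The hyetograph, CN, and soil parameters below are invented for illustration, and the initial-abstraction handling is simplified.

```python
# SCS-CN storm total calibrates the Green-Ampt conductivity K by bisection;
# Green-Ampt then yields a sub-daily rainfall-excess hyetograph.
import numpy as np

def scs_cn_runoff(P_mm, CN):
    S = 25400.0 / CN - 254.0              # potential retention (mm)
    Ia = 0.2 * S                          # initial abstraction (mm)
    return 0.0 if P_mm <= Ia else (P_mm - Ia) ** 2 / (P_mm - Ia + S)

def green_ampt_excess(rain_mm_h, dt_h, K, psi_dtheta=50.0):
    """Explicit step on f = K (1 + psi*dtheta / F), F = cumulative infiltration."""
    F, excess = 1e-6, []
    for r in rain_mm_h:
        f = min(K * (1.0 + psi_dtheta / F), r)   # infiltration capacity cap
        F += f * dt_h
        excess.append((r - f) * dt_h)
    return np.array(excess)

rain = np.array([2.0, 10.0, 25.0, 15.0, 5.0])    # mm/h, hypothetical storm
dt = 0.5                                         # h
target = scs_cn_runoff(rain.sum() * dt, CN=80)   # SCS-CN net rainfall (mm)

lo, hi = 1e-3, 50.0                              # bisection on K (mm/h)
for _ in range(60):
    K = 0.5 * (lo + hi)
    lo, hi = (lo, K) if green_ampt_excess(rain, dt, K).sum() < target else (K, hi)
print(f"calibrated K = {K:.2f} mm/h,",
      "excess hyetograph (mm):", np.round(green_ampt_excess(rain, dt, K), 2))
```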

  19. Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model

    Science.gov (United States)

    Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela

    2014-05-01

    The aim of current efforts in Europe is the protection of soil organic matter, which is included in all relevant documents related to the protection of soil. Modelling of organic carbon stocks under anticipated climate change, or under particular land management, can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied over time periods of several years to centuries and has been tested in long-term experiments within a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge of the carbon pool sizes is essential. Pool size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger-scale simulations. Due to this complexity, we searched for new ways to simplify and accelerate the process. The paper presents a comparison of two approaches to SOC stock modelling in the same area. The modelling was carried out on the basis of unique inputs of land use, management and soil data for each simulation unit separately. We modelled 1617 simulation units on a 1 × 1 km grid over the territory of the agroclimatic region Žitný ostrov in the southwest of Slovakia. The first approach is the creation of groups of simulation units based on the evaluation of results for simulation units with similar input values. The groups were created after testing and validating the modelling results for individual simulation units against the results of modelling with the average input values for the whole group. Tests of the equilibrium model for intervals of 5 t ha⁻¹ around the initial SOC stock showed minimal differences compared with the result for the average value of the whole interval. Management input data on plant residues and farmyard manure for modelling carbon turnover were also the same for several simulation units. Combining these groups (intervals of initial

  20. Evaluating energy saving system of data centers based on AHP and fuzzy comprehensive evaluation model

    Science.gov (United States)

    Jiang, Yingni

    2018-03-01

    Due to the high energy consumption of communication systems, energy saving in data centers must be enforced. However, the lack of evaluation mechanisms has restrained progress on the energy-saving construction of data centers. In this paper, an energy-saving evaluation index system for data centers was constructed on the basis of clarifying the influencing factors. Based on the evaluation index system, the analytic hierarchy process was used to determine the weights of the evaluation indexes. Subsequently, a three-grade fuzzy comprehensive evaluation model was constructed to evaluate the energy-saving systems of data centers.
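
    A minimal sketch of the AHP-plus-fuzzy-comprehensive-evaluation pipeline described; the pairwise-comparison and membership matrices are invented examples, not the paper's index system.

```python
# AHP weights from the principal eigenvector of a pairwise-comparison matrix,
# followed by a fuzzy comprehensive evaluation B = w . R.
import numpy as np

A = np.array([[1.0,   3.0, 5.0],     # pairwise comparisons of 3 indexes
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # AHP weights

CI = (eigvals.real[k] - len(A)) / (len(A) - 1)
CR = CI / 0.58                       # random index RI = 0.58 for a 3x3 matrix
assert CR < 0.1, "pairwise matrix too inconsistent"

# Membership of each index in the grades (good, fair, poor), from expert scoring:
R = np.array([[0.6, 0.3, 0.1],
              [0.4, 0.4, 0.2],
              [0.2, 0.5, 0.3]])
B = w @ R                            # fuzzy comprehensive evaluation vector
print("weights:", np.round(w, 3), "grade memberships:", np.round(B, 3))
```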

  1. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    Campbell and Shiller (1987) proposed a graphical technique for the present value model, which consists of plotting estimates of the spread and the theoretical spread as calculated from the cointegrated vector autoregressive model without imposing the restrictions implied by the present value model. … In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectations models and give a general definition of spread

  2. Evaluation of SMN protein, transcript, and copy number in the biomarkers for spinal muscular atrophy (BforSMA) clinical study.

    Directory of Open Access Journals (Sweden)

    Thomas O Crawford

    Full Text Available The universal presence of a gene (SMN2) nearly identical to the mutated SMN1 gene responsible for Spinal Muscular Atrophy (SMA) has proved an enticing incentive to therapeutics development. Early disappointments from putative SMN-enhancing agent clinical trials have increased interest in improving the assessment of SMN expression in blood as an early "biomarker" of treatment effect. A cross-sectional, single-visit, multi-center design assessed SMN transcript and protein in 108 SMA and 22 age- and gender-matched healthy control subjects, while motor function was assessed by the Modified Hammersmith Functional Motor Scale (MHFMS). Enrollment selectively targeted a broad range of SMA subjects to permit maximum power to distinguish the relative influence of SMN2 copy number, SMA type, present motor function, and age. SMN2 copy number and levels of full-length SMN2 transcripts correlated with SMA type and, like SMN protein levels, were lower in SMA subjects compared to controls. No measure of SMN expression correlated strongly with MHFMS. A key finding is that SMN2 copy number and levels of transcript and protein showed no correlation with each other. This is a prospective study that uses the most advanced techniques of SMN transcript and protein measurement in a large, selectively recruited cohort of individuals with SMA. There is a relationship between measures of SMN expression in blood and SMA type, but not a strong correlation with motor function as measured by the MHFMS. Low SMN transcript and protein levels in the SMA subjects relative to controls suggest that these measures of SMN in accessible tissues may be amenable to an "early look" for target engagement in clinical trials of putative SMN-enhancing agents. Full-length SMN transcript abundance may provide insight into the molecular mechanism of phenotypic variation as a function of SMN2 copy number. Clinicaltrials.gov NCT00756821.

  3. The combined use of Green-Ampt model and Curve Number method as an empirical tool for loss estimation

    Science.gov (United States)

    Petroselli, A.; Grimaldi, S.; Romano, N.

    2012-12-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed, including the Green-Ampt (GA) infiltration model and aiming to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is here applied to a real case study, and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.

  4. Linear multivariate evaluation models for spatial perception of soundscape.

    Science.gov (United States)

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu

    2015-11-01

    Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. The case of spatial perception is significant to soundscape. However, previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments for subjective spatial perception (SSP), an analysis of the relations among semantic parameters, the inter-aural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamics (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness the listeners perceived, the worse their spatial awareness, while the closer and more directional the sound-source image variations, dynamics, and numbers of sound sources in the soundscape are, the better the spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamics, and the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness also seems to slightly promote listener preference. Finally, setting SSPs as functions of the semantic parameters and of Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.
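
    As a minimal illustration of such a linear multivariate model, SSP can be regressed on Leq, D, and IACC by ordinary least squares; the data below are invented, not the study's samples.

```python
# OLS fit of SSP = b0 + b1*Leq + b2*D + b3*IACC on made-up observations.
import numpy as np

X = np.array([[62.0,  8.0, 0.45],
              [55.0,  5.0, 0.60],
              [70.0, 12.0, 0.30],
              [48.0,  4.0, 0.72],
              [66.0,  9.0, 0.38]])        # columns: Leq (dBA), D, IACC
y = np.array([2.8, 3.6, 2.1, 4.2, 2.5])   # subjective spatial perception scores

Xd = np.column_stack([np.ones(len(X)), X])        # add intercept column
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
print("b0..b3 =", np.round(coef, 3))
```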

  5. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance.

    Science.gov (United States)

    Kim, Augustine Yongwhi; Ha, Jin Gwan; Choi, Hoduk; Moon, Hyeonjoon

    2018-01-01

    The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies need a taste descriptive word lexicon, and they are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, words in the large amount of review text are converted into vectors. Based on these words represented as vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by comparison with the results from an actual consumer preference taste evaluation.
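
    A sketch of the skip-gram pipeline described: train word vectors on tokenized review text, then compare product words with sensory words. The toy corpus and the gensim (>= 4.0) API usage are assumptions of this example, not the authors' implementation.

```python
# Skip-gram word embeddings over review text, then sensory-word similarity.
from gensim.models import Word2Vec

reviews = [
    ["spicy", "broth", "rich", "seafood", "flavor"],
    ["noodles", "chewy", "smell", "smoky", "good"],
    ["too", "salty", "but", "deep", "seafood", "taste"],
]  # tokenized SNS reviews (toy stand-in for the scraped corpus)

model = Word2Vec(sentences=reviews, vector_size=50, window=3,
                 min_count=1, sg=1, epochs=50)   # sg=1 selects skip-gram

# Similarity between a product-related word and a sensory anchor word:
print(model.wv.similarity("seafood", "taste"))
print(model.wv.most_similar("seafood", topn=3))
```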

  6. Automated Text Analysis Based on Skip-Gram Model for Food Evaluation in Predicting Consumer Acceptance

    Directory of Open Access Journals (Sweden)

    Augustine Yongwhi Kim

    2018-01-01

    Full Text Available The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers’ online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies need a taste descriptive word lexicon, and they are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers’ reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, words in the large amount of review text are converted into vectors. Based on these words represented as vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by comparison with the results from an actual consumer preference taste evaluation.

  7. Comparative evaluation of direct plating and most probable number for enumeration of low levels of Listeria monocytogenes in naturally contaminated ice cream products.

    Science.gov (United States)

    Chen, Yi; Pouillot, Régis; S Burall, Laurel; Strain, Errol A; Van Doren, Jane M; De Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Sheth, Ishani; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Brown, Eric W; Parish, Mickey; Zink, Donald L; Datta, Atin R; Hammack, Thomas S; Macarisin, Dumitru

    2017-01-16

    A precise and accurate method for the enumeration of low levels of Listeria monocytogenes in foods is critical to a variety of studies. In this study, a paired comparison of most probable number (MPN) and direct plating enumeration of L. monocytogenes was conducted on a total of 1730 outbreak-associated ice cream samples that were naturally contaminated with low levels of L. monocytogenes. MPN was performed on all 1730 samples. Direct plating was performed on all samples using RAPID'L.mono (RLM) agar (1600 samples) and agar Listeria Ottaviani and Agosti (ALOA; 130 samples). A probabilistic analysis with a Bayesian inference model was used to compare paired direct plating and MPN estimates of L. monocytogenes in ice cream samples, because assumptions implicit in ordinary least squares (OLS) linear regression analyses were not met for such a comparison. The probabilistic analysis revealed good agreement between the MPN and direct plating estimates, and this agreement showed that the MPN schemes and the direct plating schemes using ALOA or RLM evaluated in the present study were suitable for enumerating low levels of L. monocytogenes in these ice cream samples. The statistical analysis further revealed that OLS linear regression analyses of direct plating and MPN data did introduce bias that incorrectly characterized systematic differences between estimates from the two methods. Published by Elsevier B.V.
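
    For context, the MPN estimate underlying such comparisons is a maximum-likelihood computation over the positive-tube pattern; the sketch below uses a generic three-dilution design, not the study's actual scheme.

```python
# Maximum-likelihood MPN: find the concentration lambda maximizing the
# likelihood of the observed positive-tube pattern across dilutions.
import numpy as np
from scipy.optimize import minimize_scalar

v = np.array([10.0, 1.0, 0.1])    # g of sample per tube at each dilution (assumed)
n = np.array([3, 3, 3])           # tubes per dilution
pos = np.array([3, 2, 0])         # positive tubes observed

def neg_log_lik(lam):
    p = 1.0 - np.exp(-lam * v)    # P(tube positive | lambda)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(pos * np.log(p) + (n - pos) * np.log(1.0 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
print(f"MPN = {res.x:.2f} per g")
```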

  8. Biology learning evaluation model in Senior High Schools

    Directory of Open Access Journals (Sweden)

    Sri Utari

    2017-06-01

    Full Text Available The study aimed to develop a Biology learning evaluation model for senior high schools, referring to the research and development model of Borg & Gall and the logic model. The evaluation model included the components of input, activities, output and outcomes. The development procedure involved a preliminary study in the form of observation and theoretical review of Biology learning evaluation in senior high schools. Product development was carried out by designing an evaluation model, designing an instrument, testing the instrument and performing implementation. The instrument test involved teachers and Grade XII students from senior high schools located in the city of Yogyakarta. For the data gathering techniques and instruments, the researchers implemented an observation sheet, a questionnaire and a test. The questionnaire was applied to obtain information regarding teacher performance, learning performance, classroom atmosphere and scientific attitude; the test was applied to obtain information regarding Biology concept mastery. For the analysis of the instrument constructs, the researchers performed confirmatory factor analysis by means of LISREL 8.80 software, and the results of this analysis showed that the evaluation instrument was valid and reliable. The construct validity was between 0.43 and 0.79, while the reliability of the measurement model was between 0.88 and 0.94. Last but not least, the model feasibility test showed that the theoretical model was supported by the empirical data.

  9. Reduced Synapse and Axon Numbers in the Prefrontal Cortex of Rats Subjected to a Chronic Stress Model for Depression

    Science.gov (United States)

    Csabai, Dávid; Wiborg, Ove; Czéh, Boldizsár

    2018-01-01

    Stressful experiences can induce structural changes in neurons of the limbic system. These cellular changes contribute to the development of stress-induced psychopathologies like depressive disorders. In the prefrontal cortex of chronically stressed animals, reduced dendritic length and spine loss have been reported. This loss of dendritic material should consequently result in synapse loss as well, because of the reduced dendritic surface. But so far, no one has studied synapse numbers in the prefrontal cortex of chronically stressed animals. Here, we examined synaptic contacts in rats subjected to an animal model for depression, where animals are exposed to a chronic stress protocol. Our hypothesis was that long-term stress should reduce the number of axo-spinous synapses in the medial prefrontal cortex. Adult male rats were exposed to daily stress for 9 weeks, and afterward we performed a post mortem quantitative electron microscopic analysis to quantify the number and morphology of synapses in the infralimbic cortex. We analyzed asymmetric (Type I) and symmetric (Type II) synapses in all cortical layers in control and stressed rats. We also quantified axon numbers and measured the volume of the infralimbic cortex. In our systematic unbiased analysis, we examined 21,000 axon terminals in total. We found the following numbers in the infralimbic cortex of control rats: 1.15 × 10⁹ asymmetric synapses, 1.06 × 10⁸ symmetric synapses and 1.00 × 10⁸ myelinated axons. The density of asymmetric synapses was 5.5/μm³ and the density of symmetric synapses was 0.5/μm³. The average synapse membrane length was 207 nm and the average axon terminal membrane length was 489 nm. Stress reduced the number of synapses and myelinated axons in the deeper cortical layers, while synapse membrane lengths were increased. These stress-induced ultrastructural changes indicate that neurons of the infralimbic cortex have reduced cortical network connectivity. Such reduced network connectivity is likely

  10. Reduced Synapse and Axon Numbers in the Prefrontal Cortex of Rats Subjected to a Chronic Stress Model for Depression

    Directory of Open Access Journals (Sweden)

    Dávid Csabai

    2018-01-01

    Full Text Available Stressful experiences can induce structural changes in neurons of the limbic system. These cellular changes contribute to the development of stress-induced psychopathologies like depressive disorders. In the prefrontal cortex of chronically stressed animals, reduced dendritic length and spine loss have been reported. This loss of dendritic material should consequently result in synapse loss as well, because of the reduced dendritic surface. But so far, no one has studied synapse numbers in the prefrontal cortex of chronically stressed animals. Here, we examined synaptic contacts in rats subjected to an animal model for depression, where animals are exposed to a chronic stress protocol. Our hypothesis was that long-term stress should reduce the number of axo-spinous synapses in the medial prefrontal cortex. Adult male rats were exposed to daily stress for 9 weeks, and afterward we performed a post mortem quantitative electron microscopic analysis to quantify the number and morphology of synapses in the infralimbic cortex. We analyzed asymmetric (Type I) and symmetric (Type II) synapses in all cortical layers in control and stressed rats. We also quantified axon numbers and measured the volume of the infralimbic cortex. In our systematic unbiased analysis, we examined 21,000 axon terminals in total. We found the following numbers in the infralimbic cortex of control rats: 1.15 × 10⁹ asymmetric synapses, 1.06 × 10⁸ symmetric synapses and 1.00 × 10⁸ myelinated axons. The density of asymmetric synapses was 5.5/μm³ and the density of symmetric synapses was 0.5/μm³. The average synapse membrane length was 207 nm and the average axon terminal membrane length was 489 nm. Stress reduced the number of synapses and myelinated axons in the deeper cortical layers, while synapse membrane lengths were increased. These stress-induced ultrastructural changes indicate that neurons of the infralimbic cortex have reduced cortical network connectivity. Such reduced network

  11. Using Models of Cognition in HRI Evaluation and Design

    National Research Council Canada - National Science Library

    Goodrich, Michael A

    2004-01-01

    ...) guide the construction of experiments. In this paper, we present an information processing model of cognition that we have used extensively in designing and evaluating interfaces and autonomy modes...

  12. Evaluation of one dimensional analytical models for vegetation canopies

    Science.gov (United States)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  13. Determining the Number of Participants Needed for the Usability Evaluation of E-Learning Resources: A Monte Carlo Simulation

    Science.gov (United States)

    Davids, Mogamat Razeen; Harvey, Justin; Halperin, Mitchell L.; Chikte, Usuf M. E.

    2015-01-01

    The usability of computer interfaces has a major influence on learning. Optimising the usability of e-learning resources is therefore essential. However, this may be neglected because of time and monetary constraints. User testing is a common approach to usability evaluation and involves studying typical end-users interacting with the application…
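
    A sketch of the kind of Monte Carlo reasoning involved: simulate per-problem detection probabilities and ask how many participants are needed to discover most problems. The distribution and thresholds are assumptions of this example, not the study's settings.

```python
# Monte Carlo estimate of usability-problem discovery as a function of the
# number of test participants.
import numpy as np

rng = np.random.default_rng(7)
m, trials = 30, 5000                 # problems and MC replications (assumed)

def discovered_fraction(n_users):
    p = rng.beta(1.0, 4.0, size=(trials, m))    # per-problem detectabilities
    found = 1.0 - (1.0 - p) ** n_users          # P(problem found by n users)
    return (rng.random((trials, m)) < found).mean(axis=1)

for n in (3, 5, 8, 12, 20):
    frac = discovered_fraction(n)
    print(f"{n:2d} users: mean {frac.mean():.2f} of problems found, "
          f"P(>=80% found) = {(frac >= 0.8).mean():.2f}")
```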

  14. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model based on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering ...

  15. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    Science.gov (United States)

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

    To consider the methods available to model Alzheimer's disease (AD) progression over time, in order to inform the structure and development of model-based evaluations and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with the models relate to model structure, the limited characterization of disease progression, and the use of a limited number of health states to capture events related to disease progression over time. None of the available models has been able to present a comprehensive model of the natural history of AD. Although helpful, the methods available to model the progression of AD over time have serious limitations. Advances are needed to better model the progression of AD and the effects of the disease on peoples' lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent-variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  16. A Universal Model for the Normative Evaluation of Internet Information.

    NARCIS (Netherlands)

    Spence, E.H.

    2009-01-01

    Beginning with the initial premise that the Internet has a global character, the paper will argue that the normative evaluation of digital information on the Internet necessitates an evaluative model that is itself universal and global in character (I agree, therefore, with Gorniak-Kocikowska's

  17. DETRA: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Suolanen, V.

    1996-01-01

    The computer code DETRA is a generic tool for environmental transfer analyses of radioactive or stable substances. The code has been applied for various purposes, mainly problems related to the biospheric transfer of radionuclides both in safety analyses of disposal of nuclear wastes and in consideration of foodchain exposure pathways in the analyses of off-site consequences of reactor accidents. For each specific application an individually tailored conceptual model can be developed. The biospheric transfer analyses performed by the code are typically carried out for terrestrial, aquatic and food chain applications. 21 refs, 35 figs, 15 tabs

  18. A Convergent Participation Model for Evaluation of Learning Objects

    Directory of Open Access Journals (Sweden)

    John Nesbit

    2002-10-01

    Full Text Available The properties that distinguish learning objects from other forms of educational software - global accessibility, metadata standards, finer granularity and reusability - have implications for evaluation. This article proposes a convergent participation model for learning object evaluation in which representatives from stakeholder groups (e.g., students, instructors, subject matter experts, instructional designers, and media developers) converge toward more similar descriptions and ratings through a two-stage process supported by online collaboration tools. The article reviews evaluation models that have been applied to educational software and media, considers models for gathering and meta-evaluating individual user reviews that have recently emerged on the Web, and describes the peer review model adopted for the MERLOT repository. The convergent participation model is assessed in relation to other models and with respect to its support for eight goals of learning object evaluation: (1) aid for searching and selecting, (2) guidance for use, (3) formative evaluation, (4) influence on design practices, (5) professional development and student learning, (6) community building, (7) social recognition, and (8) economic exchange.

  19. Nanoparticle filtration performance of NIOSH-certified particulate air-purifying filtering facepiece respirators: evaluation by light scattering photometric and particle number-based test methods.

    Science.gov (United States)

    Rengasamy, Samy; Eimer, Benjamin C

    2012-01-01

    National Institute for Occupational Safety and Health (NIOSH) certification test methods employ charge-neutralized NaCl or dioctyl phthalate (DOP) aerosols to measure filter penetration levels of air-purifying particulate respirators photometrically, using a TSI 8130 automated filter tester at 85 L/min. A previous study in our laboratory found that widely different filter penetration levels were measured for nanoparticles depending on whether a particle number (count)-based detector or a photometric detector was used. The purpose of this study was to better understand the influence of key test parameters, including filter media type, challenge aerosol size range, and detector system. Initial penetration levels for 17 models of NIOSH-approved N-, R-, and P-series filtering facepiece respirators were measured using the TSI 8130 photometric method and compared with the particle number-based penetration (obtained using two ultrafine condensation particle counters) for the same challenge aerosols generated by the TSI 8130. In general, the penetration obtained by the photometric method was less than the penetration obtained with the number-based method. Filter penetration was also measured for ambient room aerosols. Penetration measured by the TSI 8130 photometric method was lower than the number-based ambient aerosol penetration values. Number-based monodisperse NaCl aerosol penetration measurements showed that the most penetrating particle size was in the 50 nm range for all respirator models tested, with the exception of one model at ~200 nm. Respirator models containing electrostatic filter media also showed lower penetration values with the TSI 8130 photometric method than the number-based penetration obtained for the most penetrating monodisperse particles. Results suggest that, to provide a more challenging respirator filter test method than what is currently used for respirators containing electrostatic media, the test method should utilize a sufficient number

  20. Experience of childhood abuse and later number of remaining teeth in older Japanese: a life-course study from Japan Gerontological Evaluation Study project.

    Science.gov (United States)

    Matsuyama, Yusuke; Fujiwara, Takeo; Aida, Jun; Watt, Richard G; Kondo, Naoki; Yamamoto, Tatsuo; Kondo, Katsunori; Osaka, Ken

    2016-12-01

    From a life-course perspective, adverse childhood experiences (ACEs) such as childhood abuse are known risk factors for adult diseases and death throughout life. ACEs could also cause poor dental health in later life, because they can induce poor dental health in childhood, initiate unhealthy behaviors, and lower immune and physiological functions. However, it is not known whether ACEs have a longitudinal adverse effect on dental health in older age. This study aimed to investigate the association between experience of childhood abuse up to the age of 18 and the current number of remaining teeth among a sample of older Japanese adults. A retrospective cohort study was conducted using data from the Japan Gerontological Evaluation Study (JAGES), a large-scale self-reported survey conducted in 2013 and including 27,525 community-dwelling Japanese aged ≥65 years (response rate = 71.1%). The outcome, current number of remaining teeth, was treated categorically: ≥20, 10-19, 5-9, 1-4, and no teeth. Childhood abuse was defined as any experience of physical abuse, psychological abuse, or psychological neglect up to the age of 18 years. Ordered logistic regression models were applied. Of the 25,189 respondents who indicated their number of remaining teeth (mean age: 73.9; male: 46.5%), 14.8% had experience of childhood abuse. The distributions of ≥20, 10-19, 5-9, 1-4, and no teeth were 46.6%, 22.0%, 11.4%, 8.2%, and 11.8% among respondents with childhood abuse, and 52.3%, 21.3%, 10.3%, 6.6%, and 9.5% among respondents without childhood abuse. Childhood abuse was significantly associated with fewer remaining teeth after adjusting for covariates including socioeconomic status (odds ratio = 1.14; 95% confidence interval: 1.06, 1.22). Childhood abuse could have a longitudinal adverse effect on dental health in older age. This study emphasizes the importance of early-life experiences for dental health throughout later life. © 2016 John Wiley & Sons A/S.