WorldWideScience

Sample records for model selection procedures

  1. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Science.gov (United States)

    2013-04-03

    NUCLEAR REGULATORY COMMISSION [NRC-2013-0062]: Regulatory Guide (RG) 4.4, ``Reporting Procedure for Mathematical Models Selected to Predict Heated Effluent Dispersion...,'' describes a procedure acceptable to the NRC staff for providing summary details of mathematical modeling methods used in...

  2. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    Science.gov (United States)

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. It typically involves selecting the equation order (linear or quadratic) and the weighting factor that correctly model the data. Mis-selection of the calibration model degrades quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. Its success rate is on average 40% higher than that of the traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test comparing the variances of replicate measurements at the upper and lower limits of quantification. When weighting was required, the choice between 1/x and 1/x² was made by calculating which option generated the smaller spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramér-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures, and outcomes of the scheme using real LC-MS-MS results for the quantification of cocaine and naltrexone.
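
    The weighting decision described above can be sketched numerically. Below is a minimal, hypothetical Python illustration (the paper's own implementation is an R script in Supplemental Data 3; the helper name and replicate values here are invented):

    ```python
    # Hypothetical sketch of the weighting F-test described above; replicate
    # values and the helper name are invented for illustration.
    import numpy as np
    from scipy import stats

    def needs_weighting(lloq_replicates, uloq_replicates, alpha=0.05):
        """One-sided F-test: is the ULOQ replicate variance larger than the LLOQ's?

        Variance growing with concentration (heteroscedasticity) indicates that
        a weighted calibration model (1/x or 1/x^2) is needed.
        """
        s2_low = np.var(lloq_replicates, ddof=1)
        s2_high = np.var(uloq_replicates, ddof=1)
        f_stat = s2_high / s2_low
        df_num = len(uloq_replicates) - 1
        df_den = len(lloq_replicates) - 1
        p_value = 1.0 - stats.f.cdf(f_stat, df_num, df_den)
        return p_value < alpha

    # Example: five replicate peak-area ratios at each calibration extreme
    lloq = [0.051, 0.048, 0.050, 0.053, 0.049]
    uloq = [10.2, 9.6, 10.9, 11.4, 9.1]
    print(needs_weighting(lloq, uloq))  # True -> fit a weighted model
    ```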

  3. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Assessing the human placental barrier permeability of drugs is very important for guaranteeing drug safety during pregnancy. The quantitative structure-activity relationship (QSAR) method is an effective tool for studying the placental transfer of drugs, while in vitro human placental perfusion is the most widely used experimental method. In this study, the partial least squares (PLS) variable selection and modeling procedure was used to pick optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance index (CI). The model was subjected to internal validation by cross-validation and y-randomization, and to external validation by predicting the CI values of 19 compounds. The model proved robust, with good predictive potential (r² = 0.9064, RMSE = 0.09, q² = 0.7323, rp² = 0.7656, RMSEP = 0.14). A mechanistic interpretation of the final model is given via the descriptors' high variable-importance-in-projection values. Using the PLS procedure, optimal descriptors can be selected rapidly and effectively, yielding a model with good stability and predictability. This analysis can provide an effective tool for high-throughput screening of the placental barrier permeability of drugs.
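
    A rough sketch of PLS-based descriptor selection of the kind described, using scikit-learn; the VIP formula is the standard one, while the data, component count, and threshold are placeholders rather than the study's actual values:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def vip_scores(pls, X):
        """Variable importance in projection for a fitted single-response PLS model."""
        t = pls.transform(X)                     # component scores (n, A)
        w = pls.x_weights_                       # weights (p, A)
        q = pls.y_loadings_.ravel()              # response loadings (A,)
        ssy = np.sum(t ** 2, axis=0) * q ** 2    # variance of y explained per component
        w_unit = w / np.linalg.norm(w, axis=0)
        return np.sqrt(w.shape[0] * (w_unit ** 2 @ ssy) / ssy.sum())

    rng = np.random.default_rng(0)
    X = rng.normal(size=(65, 620))               # 65 compounds x 620 candidate descriptors
    y = X[:, 5] - 0.5 * X[:, 17] + 0.1 * rng.normal(size=65)   # synthetic clearance index

    pls = PLSRegression(n_components=3).fit(X, y)
    vip = vip_scores(pls, X)
    print(np.argsort(-vip)[:10])                 # top descriptors; VIP > 1 is a common cutoff
    ```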

  4. Penalized regression procedures for variable selection in the potential outcomes framework.

    Science.gov (United States)

    Ghosh, Debashis; Zhu, Yeying; Coffman, Donna L

    2015-05-10

    A recent topic of much interest in causal inference is model selection. In this article, we describe a framework in which to consider penalized regression approaches to variable selection for causal effects. The framework leads to a simple 'impute, then select' class of procedures that is agnostic to the type of imputation algorithm as well as to the penalized regression used. It also clarifies how model selection involves a multivariate regression model for causal inference problems, and how these methods can be applied to identify subgroups in which treatment effects are homogeneous. Analogies and links with the literature on machine learning methods, missing data, and imputation are drawn. A difference least absolute shrinkage and selection operator algorithm is defined, along with its multiple imputation analogs. The procedures are illustrated using a well-known right-heart catheterization dataset.
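
    A minimal sketch of the 'impute, then select' idea under assumptions: outcome-model imputation in each arm followed by a LASSO selection step. The estimators and simulated data are illustrative, not the authors' code:

    ```python
    # Sketch: 1) impute each subject's missing potential outcome with per-arm
    # outcome models, 2) run a penalized regression of the imputed individual
    # treatment effects on covariates. All data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n, p = 500, 20
    X = rng.normal(size=(n, p))
    T = rng.integers(0, 2, size=n)
    y = X[:, 0] * T + X[:, 1] + rng.normal(size=n)   # effect modified by X[:, 0]

    # Step 1: impute the unobserved potential outcome
    m1 = RandomForestRegressor(random_state=0).fit(X[T == 1], y[T == 1])
    m0 = RandomForestRegressor(random_state=0).fit(X[T == 0], y[T == 0])
    tau_hat = np.where(T == 1, y - m0.predict(X), m1.predict(X) - y)

    # Step 2: penalized regression of imputed effects on covariates
    sel = LassoCV(cv=5).fit(X, tau_hat)
    print(np.nonzero(sel.coef_)[0])   # covariates driving effect heterogeneity
    ```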

  5. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the Box-Cox Power Exponential model, implemented within the framework of generalized additive models for location, scale, and shape. Applying the Box-Cox Power Exponential model to test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with one of the generalized Akaike information criteria was the most efficient model selection procedure (i.e., it required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.
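
    The GAMLSS machinery is R software; as a language-agnostic stand-in, the sketch below compares candidate polynomial mean models by a generalized AIC with penalty k. The study's Box-Cox Power Exponential models and stepwise procedures are more elaborate, and all data here are synthetic:

    ```python
    # Stand-in for GAIC-based model comparison; the actual study fits Box-Cox
    # Power Exponential models with the R package gamlss.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    age = rng.uniform(6, 16, 400)
    score = 20 + 3 * age - 0.08 * age ** 2 + rng.normal(0, 2, 400)

    def gaic(fit, k=3.0):
        # GAIC = -2 log-likelihood + k * (parameter count); k = 2 gives AIC,
        # k = log(n) gives BIC, k = 3 gives the GAIC(3) mentioned above
        n_params = fit.params.size + 1          # + 1 for the residual variance
        return -2 * fit.llf + k * n_params

    fits = {}
    for degree in range(1, 5):
        design = np.vander(age, degree + 1)     # polynomial terms plus intercept
        fits[degree] = sm.OLS(score, design).fit()

    best = min(fits, key=lambda d: gaic(fits[d]))
    print("selected polynomial degree:", best)   # expect 2 for this simulation
    ```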

  6. A Proposed Model for Selecting Measurement Procedures for the Assessment and Treatment of Problem Behavior.

    Science.gov (United States)

    LeBlanc, Linda A; Raetz, Paige B; Sellers, Tyra P; Carr, James E

    2016-03-01

    Practicing behavior analysts frequently assess and treat problem behavior as part of their ongoing job responsibilities. Effective measurement of problem behavior is critical to success in these activities because some measures of problem behavior provide more accurate and complete information about the behavior than others. However, not every measurement procedure is appropriate for every problem behavior and therapeutic circumstance. We summarize the most commonly used measurement procedures, describe the contexts for which they are most appropriate, and propose a clinical decision-making model for selecting measurement procedures given certain features of the behavior and constraints of the therapeutic environment.

  7. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

    Background: Several malaria risk maps have been developed in recent years, many from the prevalence-of-infection data collated by the MARA (Mapping Malaria Risk in Africa) project and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results: Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike information criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated stepwise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through stepwise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion using further stepwise bootstrap procedures, resulting in the exclusion of another variable. Finally, a Bayesian geostatistical model using Markov chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables: summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion: We have…
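
    The bootstrap selection-frequency step can be sketched as follows. This Python stand-in uses L1-penalized logistic regression in place of classical stepwise selection, and the data are synthetic:

    ```python
    # Sketch of ranking covariates by how often an automated selector retains
    # them across bootstrap resamples (stand-in selector, synthetic data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils import resample

    rng = np.random.default_rng(7)
    n, p = 300, 14
    X = rng.normal(size=(n, p))
    prob = 1 / (1 + np.exp(-(X[:, 0] - 0.8 * X[:, 3])))
    y = rng.binomial(1, prob)

    counts = np.zeros(p)
    for b in range(1000):
        Xb, yb = resample(X, y, random_state=b)
        fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(Xb, yb)
        counts += (fit.coef_.ravel() != 0)

    order = np.argsort(-counts)
    print(list(order))       # covariates ranked by bootstrap selection frequency
    ```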

  8. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
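
    A sketch of the permutation idea in a plausible form (not the authors' exact rule): permuting the response breaks any real signal, so the penalty at which the first variable enters the LASSO path under permutation provides a null reference level:

    ```python
    # Permutation-based choice of the LASSO penalty (assumed form, synthetic data).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    n, p = 200, 50
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] + rng.normal(size=n)

    entry_alphas = []
    for _ in range(100):
        y_perm = rng.permutation(y)
        # in scikit-learn's scaling, the first variable enters at max|X'y|/n
        entry_alphas.append(np.max(np.abs(X.T @ (y_perm - y_perm.mean()))) / n)

    alpha_perm = np.quantile(entry_alphas, 0.95)   # null 95th percentile
    fit = Lasso(alpha=alpha_perm).fit(X, y)
    print(np.nonzero(fit.coef_)[0])                # variables surviving the null penalty
    ```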

  9. Site selection procedure for high level radioactive waste disposal in Bulgaria

    International Nuclear Information System (INIS)

    Evstatiev, D.; Vachev, B.

    1993-01-01

    A combined site selection approach is implemented. Bulgaria's territory has been classified into three categories, presented on a 1:500000 scale map. The number of suitable sites has been reduced to 20 using the method of successive screening. The formulated site selection problem is a typical discrete multi-criteria decision making problem under uncertainty. A 5-level procedure using Expert Choice Rating and relative models is created. It is part of a common procedure for the evaluation and choice of variants for high level radwaste disposal construction. On this basis, the 7-8 most preferable sites are identified. New knowledge and information are gained about the relative importance of the criteria and their subsets, and about the level of criteria uncertainty and reliability. This is very useful for planning and managing the final stages of the site selection procedure. 7 figs., 8 refs., 4 suppls. (author)

  10. The effects of predictor method factors on selection outcomes: A modular approach to personnel selection procedures.

    Science.gov (United States)

    Lievens, Filip; Sackett, Paul R

    2017-01-01

    Past reviews and meta-analyses typically conceptualized and examined selection procedures as holistic entities. We draw on the product design literature to propose a modular approach as a complementary perspective to conceptualizing selection procedures. A modular approach means that a product is broken down into its key underlying components. Therefore, we start by presenting a modular framework that identifies the important measurement components of selection procedures. Next, we adopt this modular lens for reviewing the available evidence regarding each of these components in terms of affecting validity, subgroup differences, and applicant perceptions, as well as for identifying new research directions. As a complement to the historical focus on holistic selection procedures, we posit that the theoretical contributions of a modular approach include improved insight into the isolated workings of the different components underlying selection procedures and greater theoretical connectivity among different selection procedures and their literatures. We also outline how organizations can put a modular approach into operation to increase the variety in selection procedures and to enhance the flexibility in designing them. Overall, we believe that a modular perspective on selection procedures will provide the impetus for programmatic and theory-driven research on the different measurement components of selection procedures.

  11. Eye bank procedures: donor selection criteria.

    Science.gov (United States)

    Sousa, Sidney Júlio de Faria E; Sousa, Stella Barretto de Faria E

    2018-01-01

    Eye banks use sterile procedures to manipulate the eye, antiseptic measures for ocular surface decontamination, and rigorous criteria for donor selection to minimize the possibility of disease transmission due to corneal grafting. Donor selection focuses on analysis of medical records and specific post-mortem serological tests. To guide and standardize procedures, eye bank associations and government agencies provide lists of absolute and relative contraindications for use of the tissue based on donor health history. These lists are guardians of the Hippocratic principle "primum non nocere." However, each transplantation carries risk of transmission of potentially harmful agents to the recipient. The aim of the procedures is not to eliminate risk, but limit it to a reasonable level. The balance between safety and corneal availability needs to be maintained by exercising prudence without disproportionate rigor.

  12. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with random 0-1 weights, we propose a connection between the two theories, in the frequentist approach, by taking the selection procedure into account when performing model averaging. We illustrate the point by simulating a simple linear regression model.
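
    The paper's point that no PMSE dominates in risk can be illustrated by simulation. A minimal sketch comparing AIC- and BIC-based post-selection estimators of the coefficients against the full-model fit (all values synthetic):

    ```python
    # Risk simulation for post-model-selection estimators in linear regression.
    import numpy as np

    rng = np.random.default_rng(11)
    n, beta = 100, np.array([1.0, 0.3, 0.0])

    def fit_subset(X, y, cols):
        b = np.zeros(X.shape[1])
        b[cols] = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
        return b

    risks = {"AIC": 0.0, "BIC": 0.0, "full": 0.0}
    subsets = [[0], [0, 1], [0, 2], [0, 1, 2]]
    for _ in range(2000):
        X = rng.normal(size=(n, 3))
        y = X @ beta + rng.normal(size=n)
        for crit, k in (("AIC", 2.0), ("BIC", np.log(n))):
            # information criterion: n*log(RSS/n) + penalty * model size
            best = min(subsets, key=lambda s: n * np.log(
                np.mean((y - X @ fit_subset(X, y, s)) ** 2)) + k * len(s))
            risks[crit] += np.sum((fit_subset(X, y, best) - beta) ** 2)
        risks["full"] += np.sum((fit_subset(X, y, [0, 1, 2]) - beta) ** 2)

    print({name: total / 2000 for name, total in risks.items()})
    ```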

  13. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that this first-step LASSO-penalized estimator can reduce the model from ultra-high dimensional to one whose size has the same order as that of the true model, with the selected model covering the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
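
    A sketch of the two-step scheme, substituting scikit-learn's L1-penalized QuantileRegressor for the paper's estimators; penalty levels and data are illustrative:

    ```python
    # Two-step quantile-regression selection: LASSO screening, then adaptive LASSO.
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(5)
    n, p = 150, 300                      # p >> n
    X = rng.normal(size=(n, p))
    y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.standard_t(df=3, size=n)

    # Step 1: L1-penalized median regression screens to a manageable model
    step1 = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs").fit(X, y)
    kept = np.nonzero(step1.coef_)[0]

    # Step 2: adaptive LASSO on the reduced model; rescaling each kept column
    # by |initial coefficient| penalizes strong covariates less
    Xr = X[:, kept] * np.abs(step1.coef_[kept])
    step2 = QuantileRegressor(quantile=0.5, alpha=0.02, solver="highs").fit(Xr, y)
    print(kept[np.nonzero(step2.coef_)[0]])   # final selected covariates
    ```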

  14. Selection procedures in sports: Improving predictions of athletes’ future performance

    NARCIS (Netherlands)

    den Hartigh, Jan Rudolf; Niessen, Anna; Frencken, Wouter; Meijer, Rob R.

    The selection of athletes has been a central topic in sports sciences for decades. Yet, little consideration has been given to the theoretical underpinnings and predictive validity of the procedures. In this paper, we evaluate current selection procedures in sports given what we know from the…

  15. Analogous Mechanisms of Selection and Updating in Declarative and Procedural Working Memory: Experiments and a Computational Model

    Science.gov (United States)

    Oberauer, Klaus; Souza, Alessandra S.; Druey, Michel D.; Gade, Miriam

    2013-01-01

    The article investigates the mechanisms of selecting and updating representations in declarative and procedural working memory (WM). Declarative WM holds the objects of thought available, whereas procedural WM holds representations of what to do with these objects. Both systems consist of three embedded components: activated long-term memory, a…

  16. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which makes LAeq prediction models quite complex and costly in terms of time and resources to apply to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are (i) correlation-based feature-subset selection (CFS) and (ii) wrapper for feature-subset selection (WFS), and the data-reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as the regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)).
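
    A compact sketch of the wrapper-plus-regressor combination (WFS with GPR) using scikit-learn; the dataset is synthetic, and sequential forward selection stands in for the wrapper search used in the paper:

    ```python
    # Wrapper feature selection around Gaussian process regression (stand-in data).
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=200, n_features=25, n_informative=6,
                           noise=5.0, random_state=0)

    gpr = GaussianProcessRegressor(normalize_y=True)
    wrapper = SequentialFeatureSelector(gpr, n_features_to_select=5, cv=3).fit(X, y)
    X_sel = wrapper.transform(X)

    r2 = cross_val_score(gpr, X_sel, y, cv=5, scoring="r2").mean()
    print(wrapper.get_support(indices=True), round(r2, 2))
    ```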

  17. The alternative site selection procedure as covered in the report by the Repository Site Selection Procedures Working Group

    International Nuclear Information System (INIS)

    Brenner, M.

    2005-01-01

    The 2002 Act on the Regulated Termination of the Use of Nuclear Power for Industrial Electricity Generation declared Germany's opting out of the peaceful uses of nuclear power. The problem of the permanent management of radioactive residues is becoming more and more important also in the light of that political decision. At the present time, there are no repositories offering the waste management capacities required. Such facilities need to be created. At the present stage, eligible repository sites are the Konrad mine, a former iron ore mine near Salzgitter, and the Gorleben salt dome. While the fate of the Konrad mine as a repository for waste generating negligible amounts of heat continues to be uncertain, despite a plan approval decision of June 2002, the Gorleben repository is still in the planning phase, at present in a dormant state, so to speak. The federal government expressed doubt about the suitability of the Gorleben site. Against this backdrop, the Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety in February 1999 established AkEnd, the Working Group on Repository Site Selection Procedures. The Group was charged with developing, based on sound scientific criteria, a transparent site selection procedure in order to facilitate the search for repository sites. The Working Group presented its final report in December 2002 after approximately four years of work. The Group's proposals about alternative site selection procedures are explained in detail and, above all, reviewed critically. (orig.)

  18. Procedures for Selecting Items for Computerized Adaptive Tests.

    Science.gov (United States)

    Kingsbury, G. Gage; Zara, Anthony R.

    1989-01-01

    Several classical approaches and alternative approaches to item selection for computerized adaptive testing (CAT) are reviewed and compared. The study also describes procedures for constrained CAT that may be added to classical item selection approaches to allow them to be used for applied testing. (TJH)

  19. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, additional problems arise due to the correlation of the two stages (selection and inference). In this paper, model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking the selection procedure into account could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
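
    One concrete averaging scheme of the kind compared here is information-criterion weighting. A minimal sketch with two nested mean models (all data synthetic):

    ```python
    # Akaike-weight model averaging of two nested mean models for normal data.
    import numpy as np

    def aic_weights(aics):
        d = np.asarray(aics) - np.min(aics)
        w = np.exp(-0.5 * d)
        return w / w.sum()

    rng = np.random.default_rng(2)
    y = rng.normal(loc=0.3, scale=1.0, size=50)
    n = y.size

    def aic(rss, k):
        return n * np.log(rss / n) + 2 * k

    # candidate models for the mean: fixed at 0 vs. estimated
    estimates = np.array([0.0, y.mean()])
    aics = [aic(np.sum(y ** 2), 0), aic(np.sum((y - y.mean()) ** 2), 1)]
    w = aic_weights(aics)
    print("averaged estimate:", w @ estimates)   # smooths over the model choice
    ```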

  20. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection, as well as physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. The poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high…
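
    A sketch of one of the compared selection routes, ranking predictors by random-forest importance before fitting the percentile-flow regression; predictor names and data are invented for illustration:

    ```python
    # Random-forest importance ranking as a variable selector (synthetic basins).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n = 918                                   # basins, as in the study
    names = ["mean_elev", "rain", "forest_pct", "slope", "geology_idx",
             "noise1", "noise2"]
    X = rng.normal(size=(n, len(names)))
    q95 = 2.0 * X[:, 1] + X[:, 4] + rng.normal(size=n)   # synthetic percentile flow

    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, q95)
    top = np.argsort(-rf.feature_importances_)[:3]
    print([names[i] for i in top])

    r2 = cross_val_score(LinearRegression(), X[:, top], q95, cv=5).mean()
    print(round(r2, 2))
    ```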

  1. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  2. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR).
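
    A minimal sketch of the parallel-analysis idea extended to kernel PCA, assuming an RBF kernel at a fixed scale; the paper also tunes the scale, and the permutation threshold below is a plausible form rather than the authors' exact rule:

    ```python
    # Parallel analysis for kernel PCA: keep components whose eigenvalue exceeds
    # the 95th percentile of eigenvalues from column-permuted (decoupled) data.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(8)
    n = 200
    t = rng.uniform(0, 2 * np.pi, n)
    X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(n, 2))  # noisy circle

    def kpca_eigs(data, gamma, k=10):
        return KernelPCA(n_components=k, kernel="rbf", gamma=gamma).fit(data).eigenvalues_

    gamma = 1.0
    obs = kpca_eigs(X, gamma)
    null = np.array([kpca_eigs(np.apply_along_axis(rng.permutation, 0, X), gamma)
                     for _ in range(20)])
    threshold = np.quantile(null, 0.95, axis=0)
    print("retained components:", int(np.sum(obs > threshold)))
    ```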

  3. Modeling and Solving the Liner Shipping Service Selection Problem

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Balakrishnan, Anant

    We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes to accurately model transshipment costs and incorporate routing policies such as maximum transit time, maritime cabotage rules, and operational alliances. Our hop-indexed arc flow model is smaller and easier to solve than path flow models. We outline a preprocessing procedure that exploits both the routing requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB.

  4. Modeling and Experimental Validation of the Electron Beam Selective Melting Process

    Directory of Open Access Journals (Sweden)

    Wentao Yan

    2017-10-01

    Electron beam selective melting (EBSM) is a promising additive manufacturing (AM) technology. The EBSM process consists of three major procedures: ① spreading a powder layer, ② preheating to slightly sinter the powder, and ③ selectively melting the powder bed. The highly transient multi-physics phenomena involved in these procedures pose a significant challenge for in situ experimental observation and measurement. To advance the understanding of the physical mechanisms in each procedure, we leverage high-fidelity modeling and post-process experiments. The models resemble the actual fabrication procedures, including ① a powder-spreading model using the discrete element method (DEM), ② a phase field (PF) model of powder sintering (solid-state sintering), and ③ a powder-melting (liquid-state sintering) model using the finite volume method (FVM). Comprehensive insights into all the major procedures are provided, which have rarely been reported. Preliminary simulation results (including powder particle packing within the powder bed, sintering neck formation between particles, and single-track defects) agree qualitatively with experiments, demonstrating the ability to understand the mechanisms and to guide the design and optimization of the experimental setup and manufacturing process.

  5. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.

  6. Establishment of selected acute pulmonary thromboembolism model in experimental sheep

    International Nuclear Information System (INIS)

    Fan Jihai; Gu Xiulian; Chao Shengwu; Zhang Peng; Fan Ruilin; Wang Li'na; Wang Lulu; Wang Ling; Li Bo; Chen Taotao

    2010-01-01

    Objective: To establish a selected acute pulmonary thromboembolism model in experimental sheep suitable for animal experiments. Methods: Using Seldinger's technique, a catheter sheath was placed in both the femoral vein and femoral artery in ten sheep. Under C-arm DSA guidance the catheter was inserted through the sheath into the pulmonary artery, and an appropriate amount of autologous blood clot was injected via the catheter into the selected pulmonary arteries, establishing the selected acute pulmonary thromboembolism model. Pulmonary angiography was performed to check the results. The pulmonary arterial pressure, femoral artery pressure, heart rate and partial pressure of oxygen in arterial blood (PaO2) were determined both before and after the procedure; the post-procedure values were compared with the baseline values, and the model quality was evaluated. Results: The baseline pulmonary arterial pressure was (27.30 ± 9.58) mmHg, femoral artery pressure was (126.4 ± 13.72) mmHg, heart rate was (103 ± 15) bpm and PaO2 was (87.7 ± 12.04) mmHg. Sixty minutes after the injection of (30 ± 5) ml of thrombotic agglomerates, the pulmonary arterial pressure rose to (52 ± 49) mmHg and the femoral artery pressure dropped to (100 ± 21) mmHg. The heart rate went up to (150 ± 26) bpm and the PaO2 fell to (25.3 ± 11.2) mmHg. After the procedure, all of the above parameters were significantly different from the baseline values in all ten animals (P < 0.01). Pulmonary arteriography clearly demonstrated that the selected pulmonary arteries were successfully embolized. Conclusion: The anatomy of the sheep's femoral veins, vena cava system, pulmonary artery and right heart system is suitable for establishing the catheter passage; for this reason, a selected acute pulmonary thromboembolism model can easily be created in experimental sheep. The technique is feasible and the model…

  7. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. The deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first stage of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic; the highest rate of selection occurs at values of NDVI less than the maximum observed. Results for the land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a…

  8. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
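
    For orientation, the classical (non-robust) Heckman two-step baseline that the robust framework starts from can be sketched as follows; the simulated design, including the exclusion restriction, is illustrative:

    ```python
    # Heckman two-step: probit selection equation, then OLS on the observed
    # outcomes augmented with the inverse Mills ratio (synthetic data).
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    n = 2000
    z = rng.normal(size=(n, 2))            # selection covariates; z[:, 1] is excluded
    x = z[:, 0]                            # outcome covariate
    u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
    observed = (0.5 + z @ np.array([1.0, 1.0]) + u[:, 0]) > 0
    y = 1.0 + 2.0 * x + u[:, 1]

    # Step 1: probit for the selection indicator; fittedvalues is the linear predictor
    probit = sm.Probit(observed.astype(float), sm.add_constant(z)).fit(disp=0)
    idx = probit.fittedvalues
    mills = norm.pdf(idx) / norm.cdf(idx)  # inverse Mills ratio

    # Step 2: OLS on the selected sample with the Mills ratio as a control
    Xo = sm.add_constant(np.c_[x, mills])[observed]
    ols = sm.OLS(y[observed], Xo).fit()
    print(ols.params)                      # [intercept, slope, selection term]
    ```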

  10. Model selection for contingency tables with algebraic statistics

    NARCIS (Netherlands)

    Krampe, A.; Kuhnt, S.; Gibilisco, P.; Riccimagno, E.; Rogantin, M.P.; Wynn, H.P.

    2009-01-01

    Goodness-of-fit tests based on chi-square approximations are commonly used in the analysis of contingency tables. Results from algebraic statistics combined with MCMC methods provide alternatives to the chi-square approximation. However, within a model selection procedure usually a large number of…

  11. Augmented Self-Modeling as a Treatment for Children with Selective Mutism.

    Science.gov (United States)

    Kehle, Thomas J.; Madaus, Melissa R.; Baratta, Victoria S.; Bray, Melissa A.

    1998-01-01

    Describes the treatment of three children experiencing selective mutism. The procedure incorporated self-modeling, mystery motivators, self-reinforcement, stimulus fading, spacing, and antidepressant medication. All three children evidenced a complete cessation of selective mutism and maintained their treatment gains at follow-up.

  12. Selectivity assessment of an arsenic sequential extraction procedure for evaluating mobility in mine wastes

    International Nuclear Information System (INIS)

    Drahota, Petr; Grösslová, Zuzana; Kindlová, Helena

    2014-01-01

    Highlights: • Extraction efficiency and selectivity of phosphate and oxalate were tested. • Pure As-bearing mineral phases and mine wastes were used. • The reagents were found to be specific and selective for most major forms of As. • An optimized sequential extraction scheme for mine wastes has been developed. • It has been tested on model mineral mixtures and natural mine waste materials. - Abstract: An optimized sequential extraction (SE) scheme for mine waste materials has been developed and tested for As partitioning over a range of pure As-bearing mineral phases, their model mixtures, and natural mine waste materials. This optimized SE procedure employs five extraction steps: (1) nitrogen-purged deionized water, 10 h; (2) 0.01 M NH4H2PO4, 16 h; (3) 0.2 M NH4-oxalate in the dark, pH 3, 2 h; (4) 0.2 M NH4-oxalate, pH 3, 80 °C, 4 h; (5) KClO3/HCl/HNO3 digestion. Selectivity and specificity tests on natural mine wastes and major pure As-bearing mineral phases showed that these As fractions appear to be primarily associated with: (1) readily soluble As; (2) adsorbed As; (3) amorphous and poorly-crystalline arsenates, oxides and hydroxosulfates of Fe; (4) well-crystalline arsenates, oxides, and hydroxosulfates of Fe; and (5) sulfides and arsenides. The specificity and selectivity of the extractants, and the reproducibility of the optimized SE procedure, were further verified on artificial model mineral mixtures and different natural mine waste materials. Partitioning data for extraction steps 3, 4, and 5 showed good agreement with those calculated for the model mineral mixtures (<15% difference), as well as with those expected in different natural mine waste materials. The sum of the As recovered in the different extractant pools was not significantly different (89–112%) from the results of acid digestion. This suggests that the optimized SE scheme can reliably be employed for As partitioning in mine waste materials.

  13. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Market segmentation is one of the key concepts of modern marketing. Its main goal is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and thereby gain short- or long-term competitive advantage on the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used to select, from an initial pool of eleven variables, those that are statistically significant for explaining the dependent variable. The selected variables were then included in the CHAID procedure, which generated the predictive market segmentation model. The model results are presented for a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, and risk and classification tables.
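
    A sketch of the two-stage scheme with stand-ins: logistic regression screens the significant predictors, and a CART tree replaces CHAID (which scikit-learn does not implement); data and variable counts are synthetic:

    ```python
    # Stage 1: logistic regression screening; stage 2: tree-based segmentation.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(9)
    n, p = 800, 11
    X = rng.normal(size=(n, p))
    buy = rng.binomial(1, 1 / (1 + np.exp(-(1.2 * X[:, 2] - 0.9 * X[:, 5]))))

    logit = sm.Logit(buy, sm.add_constant(X)).fit(disp=0)
    keep = np.where(logit.pvalues[1:] < 0.05)[0]      # skip the intercept's p-value

    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X[:, keep], buy)
    print("retained:", keep)
    print(export_text(tree))                          # the segment-defining splits
    ```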

  14. A Procedure for Modeling Photovoltaic Arrays under Any Configuration and Shading Conditions

    Directory of Open Access Journals (Sweden)

    Daniel Gonzalez Montoya

    2018-03-01

    Photovoltaic (PV) arrays can be connected following regular or irregular connection patterns to form regular configurations (e.g., series-parallel, total cross-tied, bridge-linked, etc.) or irregular configurations, respectively. Several reported works propose models for a single configuration; hence, evaluating arrays with different configurations is a considerably time-consuming task. Moreover, if the PV array adopts an irregular configuration, the classical models cannot be used for its analysis. This paper proposes a modeling procedure for PV arrays connected in any configuration and operating under uniform or partial shading conditions. The procedure divides the array into smaller arrays, named sub-arrays, which can be solved independently. The modeling procedure selects the mesh-current solution or the node-voltage solution depending on the topology of each sub-array; therefore, the proposed approach analyzes the PV array using the least number of nonlinear equations. The proposed solution is validated through simulation and experimental results, which demonstrate the model's capacity to reproduce the electrical behavior of PV arrays connected in any configuration.

  15. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and compare double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality with smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.

  16. Procedural Modeling for Digital Cultural Heritage

    Directory of Open Access Journals (Sweden)

    Simon Haegler

    2009-01-01

    The rapid development of computer graphics and imaging provides the modern archeologist with several tools to realistically model and visualize archeological sites in 3D. This, however, creates a tension between veridical and realistic modeling. Visually compelling models may lead people to falsely believe that very precise knowledge exists about the past appearance of a site. In order to make the underlying uncertainty visible, it has been proposed to encode this uncertainty with different levels of transparency in the rendering, or by decoloration of the textures. We argue that procedural modeling technology based on shape grammars provides an interesting alternative to such measures, which tend to spoil the experience for the observer. Both its efficiency and its compactness make procedural modeling a tool for producing multiple models, which together sample the space of possibilities. Variations between the different models express levels of uncertainty implicitly, while letting each individual model keep its realistic appearance. The underlying structural description makes the uncertainty explicit. Additionally, procedural modeling yields the flexibility to incorporate changes as knowledge of an archeological site is refined. Annotations explaining modeling decisions can be included. We demonstrate our procedural modeling implementation with several recent examples.

  17. Doubly sparse factor models for unifying feature transformation and feature selection

    International Nuclear Information System (INIS)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato; Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko

    2010-01-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  18. Doubly sparse factor models for unifying feature transformation and feature selection

    Energy Technology Data Exchange (ETDEWEB)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato [ERATO, Okanoya Emotional Information Project, Japan Science Technology Agency, Saitama (Japan); Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko, E-mail: okada@k.u-tokyo.ac.j [Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki (Japan)

    2010-06-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  19. The procedure of alternative site selection within the report of the study group on the radioactive waste final repository selection process (AKEnd)

    International Nuclear Information System (INIS)

    Brenner, M.

    2005-01-01

    The paper discusses the results of the report of the study group on the radioactive waste final repository selection process with respect to the alternative site selection procedure. Key points of the report are long-term safety, the availability of alternative sites and the concept of a single repository. Criticism of the report focuses on the topics of site selection and licensing procedures, public participation, the time factor and the question of cost.

  20. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…

  1. Applicant Personality and Procedural Justice Perceptions of Group Selection Interviews.

    Science.gov (United States)

    Bye, Hege H; Sandal, Gro M

    2016-01-01

    We investigated how job applicants' personalities influence perceptions of the structural and social procedural justice of group selection interviews (i.e., a group of several applicants being evaluated simultaneously). We especially addressed trait interactions between neuroticism and extraversion (the affective plane) and between extraversion and agreeableness (the interpersonal plane). Data on personality (pre-interview) and justice perceptions (post-interview) were collected in a field study among job applicants (N = 97) attending group selection interviews for positions as teachers in a Norwegian high school. Interaction effects in hierarchical regression analyses showed that perceptions of social and structural justice increased with levels of extraversion among high scorers on neuroticism. Among emotionally stable applicants, however, being introverted or extraverted did not matter to justice perceptions. Extraversion did not impact the perception of social justice for applicants low in agreeableness. Agreeable applicants, however, experienced the group interview as more socially fair when they were also extraverted. The impact of applicant personality on justice perceptions may be underestimated if trait interactions are not considered. Procedural fairness ratings for the group selection interview were high, contrary to the negative reactions predicted by other researchers. There was no indication that applicants with desirable traits (i.e., traits predictive of job performance) reacted negatively to this selection tool. Despite the widespread use of interviews in selection, previous studies of applicant personality and fairness reactions have not included interviews. The study demonstrates the importance of previously ignored trait interactions in understanding applicant reactions.

  2. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues, such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize the results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in…
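
    The backward-elimination variant examined in the paper can be sketched as follows; the data are synthetic, and the study's implementation and stopping rules differ:

    ```python
    # Backward elimination for a random forest, scored by the out-of-bag
    # estimate, dropping the least important predictor each round.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1365, n_features=50, n_informative=8,
                               random_state=0)
    cols = list(range(X.shape[1]))
    history = []

    while len(cols) > 5:
        rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
        rf.fit(X[:, cols], y)
        history.append((len(cols), rf.oob_score_))
        cols.pop(int(np.argmin(rf.feature_importances_)))  # drop the weakest predictor

    best_size = max(history, key=lambda t: t[1])
    print(best_size)        # (number of predictors, OOB accuracy) at the optimum
    ```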

  3. A materials selection procedure for sandwiched beams via parametric optimization with applications in automotive industry

    International Nuclear Information System (INIS)

    Aly, Mohamed F.; Hamza, Karim T.; Farag, Mahmoud M.

    2014-01-01

    Highlights: • Sandwich panels optimization model. • Sandwich panels design procedure. • Study of sandwich panels for automotive vehicle flooring. • Study of sandwich panels for truck cabin exterior. - Abstract: The future of the automotive industry faces many challenges in meeting increasingly strict restrictions on emissions, energy usage and recyclability of components, alongside the need to maintain cost competitiveness. Weight reduction through innovative design of components and proper material selection can have a profound impact towards attaining such goals, since most of the lifecycle energy usage occurs during the operation phase of a vehicle. In electric and hybrid vehicles, weight reduction has the additional important effect of extending the driving range in electric mode between charging stops or before switching to gasoline mode. This paper adopts parametric models for design optimization and material selection of sandwich panels, with the objective of weight and cost minimization subject to structural integrity constraints such as strength, stiffness and buckling resistance. The proposed design procedure employs a pre-compiled library of candidate sandwich panel material combinations, for which optimization of the layered thicknesses is conducted and the best one is reported. Example demonstration studies from the automotive industry are presented for the replacement of aluminum and steel panels with polypropylene-filled sandwich panel alternatives.

  4. Site selection under the underground geologic store plan. Procedures of selecting underground geologic stores as disputed by society, science, and politics. Site selection rules

    International Nuclear Information System (INIS)

    Aebersold, M.

    2008-01-01

    The new Nuclear Power Act and the Nuclear Power Ordinance of 2005 are used in Switzerland to select a site for an underground geologic store for radioactive waste in a substantive planning procedure. The "Underground Geologic Store" Substantive Plan is to ensure the possibility to build underground geologic stores in an independent, transparent and fair procedure. The Federal Office for Energy (BFE) is the agency responsible for this procedure. The "Underground Geologic Store" Substantive Plan comprises these principles: - The long-term protection of people and the environment enjoys priority. Aspects of regional planning, economics and society are of secondary importance. - Site selection is based on the waste volumes arising from the five nuclear power plants currently existing in Switzerland. The Substantive Plan is no precedent for or against future nuclear power plants. - A transparent and fair procedure is an indispensable prerequisite for achieving the objectives of a Substantive Plan, i.e., finding accepted sites for underground geologic stores. The Underground Geologic Stores Substantive Plan is arranged in two parts, a conceptual part defining the rules of the selection process, and an implementation part documenting the selection process step by step and, in the end, naming specific sites of underground geologic stores in Switzerland. The objective is to be able to commission underground geologic stores in 25 or 35 years' time. In principle, two sites are envisaged, one for low and intermediate level waste, and one for high level waste. The Swiss Federal Council approved the conceptual part on April 2, 2008. This marks the beginning of the implementation phase and the site selection process proper. (orig.)

  5. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    Science.gov (United States)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    The procedure of selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features). However, linear features, due to their unique characteristics, are of interest in this investigation. This method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. The ones indicate the presence of a pair of conjugate lines as a GCL and zeros specify the absence. The chromosome length is considered equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until reaching a stopping criterion. The number and position of ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, a GeoEye and an IKONOS image are used over different areas of Iran. Comparing the results obtained by the proposed method in a traditional RFM with those of conventional methods that use all conjugate lines as GCLs shows a fivefold accuracy improvement (pixel-level accuracy) as well as the strength of the proposed method. To prevent an over-parametrization error in a traditional RFM due to the selection of a high number of improper correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, reliability, and reducing systematic
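
    The chromosome encoding described above lends itself to a compact illustration. The following sketch is a loose, hypothetical analogue (a binary GA choosing rows of a simulated linear system in place of conjugate lines, with RMSE at held-out check points as fitness), not the OWDIS implementation or its sensor model.

```python
# Hedged toy version of GA-based control-information subset selection:
# each bit of a chromosome switches one candidate observation on or off.
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_params, n_checks = 60, 6, 20
X = rng.normal(size=(n_candidates, n_params))        # stand-ins for GCL equations
p_true = rng.normal(size=n_params)
y = X @ p_true + rng.normal(scale=0.05, size=n_candidates)
Xc = rng.normal(size=(n_checks, n_params))           # check points
yc = Xc @ p_true

def fitness(mask):
    if mask.sum() < n_params:                        # under-determined: reject
        return np.inf
    p, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    return np.sqrt(np.mean((Xc @ p - yc) ** 2))      # RMSE at check points

pop = rng.random((40, n_candidates)) < 0.5           # random binary population
for gen in range(100):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)]                    # keep elites in the top half
    children = []
    while len(children) < len(pop) // 2:
        a, b = pop[rng.integers(0, 10)], pop[rng.integers(0, 10)]
        cut = rng.integers(1, n_candidates)          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_candidates) < 0.02       # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop[len(pop) // 2:] = children

best = pop[0]
print("selected observations:", int(best.sum()), "RMSE:", round(fitness(best), 4))
```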

  6. Weighted overlap dominance – a procedure for interactive selection on multidimensional interval data

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Nielsen, Kurt

    2011-01-01

    We present an outranking procedure that supports selection of alternatives represented by multiple attributes with interval valued data. The procedure is interactive in the sense that the decision maker directs the search for preferred alternatives by providing weights of the different attributes...

  7. A procedure for building product models

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin

    2001-01-01

    This article presents a procedure for building product models to support the specification processes dealing with sales, design of product variants and production preparation. The procedure includes, as the first phase, an analysis and redesign of the business processes which are to be supported with product models. The next phase includes an analysis of the product assortment, and the set up of a so-called product master. Finally the product model is designed and implemented using object oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit for the business processes they support, and properly structured and documented, in order to facilitate that the systems can be maintained continually and further developed. The research has been carried out at the Centre for Industrialisation of Engineering, Department of Manufacturing Engineering, Technical...

  8. A Rapid Selection Procedure for Simple Commercial Implementation of omega-Transaminase Reactions

    DEFF Research Database (Denmark)

    Gundersen Deslauriers, Maria; Tufvesson, Pär; Rackham, Emma J.

    2016-01-01

    A stepwise selection procedure is presented to quickly evaluate whether a given omega-transaminase reaction is suitable for a so-called "simple" scale-up for fast industrial implementation. Here "simple" is defined as a system without the need for extensive process development or specialized ..., and (3) determination of product inhibition. The method is exemplified with experimental work focused on two products: 1-(4-bromophenyl)ethylamine and (S)-(+)3-amino-1-Boc-piperidine, synthesized from their corresponding pro-chiral ketones, each with two alternative amine donors, propan-2-amine and 1-phenylethylamine. Each step of the method has a threshold value, which must be surpassed to allow "simple" implementation, helping select suitable combinations of substrates, enzymes, and donors. One reaction pair, 1-Boc-3-piperidone with propan-2-amine, met the criteria of the three-step selection procedure...

  9. ILK statement on the recommendations by the working group on procedures for the selection of repository sites

    International Nuclear Information System (INIS)

    Anon.

    2003-01-01

    The Working Group on Procedures for the Selection of Repository Sites (AkEnd) had been appointed by the German Federal Ministry for the Environment (BMU) to develop procedures and criteria for the search for, and selection of, a repository site for all kinds of radioactive waste in deep geologic formations in Germany. ILK in principle welcomes the attempt on the part of AkEnd to develop a systematic procedure. On the other hand, ILK considers the two constraints imposed by BMU inappropriate: AkEnd was not to take into account the two existing sites of Konrad and Gorleben and, instead, was to work from a so-called white map of Germany. ILK recommends performing a comprehensive safety analysis of Gorleben, defining a selection procedure that takes the facts about Gorleben into account and, in addition, commissioning the Konrad repository as soon as possible. The one-repository concept established as a precondition by BMU greatly restricts the selection procedure. There are no technical or scientific reasons for such a concept. ILK recommends planning for separate repositories, which would also correspond to international practice. The geoscientific criteria proposed by AkEnd should be examined and revised. With respect to the proposed site selection procedure, ILK feels that the procedure is unable to define a targeted approach. Great importance must be attributed to public participation. The final site selection must be made under the responsibility of the government or the parliament. (orig.) [de]

  10. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan; Lau, Cheryl; Mü ller, Pascal; Wonka, Peter; Pauly, Mark

    2017-01-01

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  12. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  13. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
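
    The subsampling scheme at the core of stability selection is easy to sketch. The code below is a hedged illustration in an ordinary linear model; the entry's contribution is extending the same scheme, with suitably chosen Λ and λmin, to the Cox model. All sizes, grids and the 0.8 threshold are assumptions.

```python
# Minimal stability-selection sketch: selection frequencies of lasso over
# many half-sample subsamples; variables are kept if selected often enough.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, n_sub = 200, 50, 100
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                                       # five truly active variables
y = X @ beta + rng.normal(size=n)

lambdas = np.logspace(-2, 0, 20)                     # stand-in for the region Lambda
freq = np.zeros(p)
for _ in range(n_sub):
    idx = rng.choice(n, size=n // 2, replace=False)  # subsample half the data
    selected = np.zeros(p, dtype=bool)
    for lam in lambdas:
        fit = Lasso(alpha=lam, max_iter=5000).fit(X[idx], y[idx])
        selected |= fit.coef_ != 0                   # selected anywhere on the grid
    freq += selected
freq /= n_sub

stable = np.where(freq >= 0.8)[0]                    # stability threshold pi = 0.8
print("stable variables:", stable, "frequencies:", freq[stable].round(2))
```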

  14. A Heckman selection model for the safety analysis of signalized intersections.

    Directory of Open Access Journals (Sweden)

    Xuecai Xu

    The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. This study explores a Heckman selection model of the crash rate and severity simultaneously at different levels, and a two-step procedure is used to investigate the crash rate and severity levels. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury and killed or seriously injured (KSI) crashes, respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on the traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during two years. The results of the proposed two-step Heckman selection model illustrate the necessity of different crash rates for different crash severity levels. A comparison with the existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance at signalized intersections.
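
    The classic two-step estimator referenced here can be assembled from standard tools. The sketch below uses simulated placeholder data rather than the paper's intersection data: a probit selection equation, then an outcome regression augmented with the inverse Mills ratio.

```python
# Hedged sketch of a two-step Heckman selection estimator.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 555
z = sm.add_constant(rng.normal(size=(n, 2)))         # selection covariates
x = sm.add_constant(rng.normal(size=(n, 2)))         # outcome covariates
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
select = (z @ np.array([0.2, 1.0, -0.8]) + u[:, 0]) > 0
y = x @ np.array([1.0, 2.0, -1.0]) + u[:, 1]         # observed only when selected

# Step 1: probit model for the selection process.
probit = sm.Probit(select.astype(int), z).fit(disp=0)
xb = z @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)                  # inverse Mills ratio

# Step 2: OLS on the selected sample with the Mills ratio as extra regressor.
X2 = np.column_stack([x[select], mills[select]])
ols = sm.OLS(y[select], X2).fit()
print(ols.params)                                    # last coefficient ~ rho * sigma
```

    A coefficient on the Mills ratio that differs from zero signals selection effects that plain OLS on the observed sample would miss.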

  15. Objective ARX Model Order Selection for Multi-Channel Human Operator Identification

    NARCIS (Netherlands)

    Roggenkämper, N; Pool, D.M.; Drop, F.M.; van Paassen, M.M.; Mulder, M.

    2016-01-01

    In manual control, the human operator primarily responds to visual inputs but may elect to make use of other available feedback paths such as physical motion, adopting a multi-channel control strategy. Human operator identification procedures generally require a priori selection of the model

  16. A Survey on Procedural Modelling for Virtual Worlds

    NARCIS (Netherlands)

    Smelik, R.M.; Tutenel, T.; Bidarra, R.; Benes, B.

    2014-01-01

    Procedural modelling deals with (semi-)automatic content generation by means of a program or procedure. Among other advantages, its data compression and the potential to generate a large variety of detailed content with reduced human intervention, have made procedural modelling attractive for

  17. Decision support model for selecting and evaluating suppliers in the construction industry

    Directory of Open Access Journals (Sweden)

    Fernando Schramm

    2012-12-01

    A structured evaluation of the construction industry's suppliers, considering aspects which make their quality and credibility evident, can be a strategic tool to manage this specific supply chain. This study proposes a multi-criteria decision model for supplier selection in the construction industry, as well as an efficient evaluation procedure for the selected suppliers. The model is based on the SMARTER (Simple Multi-Attribute Rating Technique Exploiting Ranks) method and its main contribution is a new approach to structuring the process of supplier selection, establishing explicit strategic policies on which the company's management system relies to make the supplier selection. This model was applied to a civil construction company in Brazil and the main results demonstrate the efficiency of the proposed model. This study allowed the development of an approach for the construction industry which was able to provide a better relationship among its managers, suppliers and partners.
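
    SMARTER-style models typically replace elicited weights with surrogate weights derived only from the rank order of the criteria. The sketch below is a hedged illustration of that idea using rank-order centroid (ROC) weights; the criteria, suppliers and scores are invented placeholders, not the paper's case data.

```python
# Rank-order centroid (ROC) weighting and a simple weighted-sum ranking,
# as used in SMARTER-style supplier selection; all inputs are illustrative.
import numpy as np

criteria = ["quality", "delivery reliability", "price", "financial health"]
K = len(criteria)                                    # ranked most to least important

# ROC weights: w_k = (1/K) * sum_{i=k..K} 1/i  (they sum to 1 by construction)
roc = np.array([np.sum(1.0 / np.arange(k, K + 1)) / K for k in range(1, K + 1)])

# Suppliers scored 0-100 on each criterion (rows = suppliers).
scores = np.array([[80, 70, 90, 60],
                   [60, 95, 70, 80],
                   [90, 60, 50, 90]])
overall = scores @ roc
for name, s in zip(["supplier A", "supplier B", "supplier C"], overall):
    print(f"{name}: {s:.1f}")
print("ROC weights:", roc.round(3))
```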

  18. Selecting aesthetic gynecologic procedures for plastic surgeons: a review of target methodology.

    Science.gov (United States)

    Ostrzenski, Adam

    2013-04-01

    The objective of this article was to assist cosmetic-plastic surgeons in selecting aesthetic cosmetic gynecologic-plastic surgical interventions. Target methodological analyses of pertinent evidence-based scientific papers and anecdotal information linked to surgical techniques for the cosmetic-plastic female external genitalia were examined. A search of the existing literature from 1900 through June 2011 was performed by utilizing electronic and manual databases. A total of 87 articles related to cosmetic-plastic gynecologic surgeries were identified in peer-reviewed journals. Anecdotal information was identified in three sources (Barwijuk, Obstet Gynecol J 9(3):2178-2179, 2011; Benson, 5th annual congress on aesthetic vaginal surgery, Tucson, AZ, USA, November 14-15, 2010; Scheinberg, Obstet Gynecol J 9(3):2191, 2011). Among the reviewed articles on cosmetic-plastic gynecologic surgical technique, three met the criteria for evidence-based medicine level II, one was level II-1 and two were level II-2. The remaining papers were classified as level III. The pertinent 25 papers met the inclusion criteria and were analyzed. There was no documentation on the safety and effectiveness of cosmetic-plastic gynecologic procedures in the scientific literature. Not all published surgical interventions are suitable for a cosmetic-plastic practice. The absence of documentation on safety and effectiveness related to cosmetic-plastic gynecologic procedures prevents the establishment of a standard of practice. Traditional gynecologic surgical procedures cannot be labeled and used as cosmetic-plastic procedures; doing so is a deceptive practice. Obtaining legal trademarks on traditional gynecologic procedures and creating a business model that tries to control clinical-scientific knowledge dissemination is unethical. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings

  19. Selecting locations for landing of various formations of helicopters using spatial modelling

    International Nuclear Information System (INIS)

    Kovarik, V; Rybansky, M

    2014-01-01

    During crisis situations such as floods, landslides, humanitarian crises and even military clashes, there are situations when it is necessary to send helicopters to the crisis areas. To facilitate the process of searching for sites suitable for landing, it is possible to use the tools of spatial modelling. The paper describes a procedure for selecting areas potentially suitable for landing of particular formations of helicopters. It lists natural and man-made terrain features that represent the obstacles that can prevent helicopters from landing. It also states specific requirements of the NATO documents that have to be respected when selecting the areas for landing. These requirements relate to the slope of the ground and the obstruction angle on approach and exit paths. Creating the knowledge base and graphical models in ERDAS IMAGINE is then described. In the first step of the procedure, the areas generally suitable for landing are selected. Then the different configurations of landing points that form the landing sites are created and corresponding outputs are generated. Finally, several tactical requirements are incorporated

  1. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through thickness stress distributions by validation to experimental measurements. Three austenitic stainless steel butt-welds cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure, and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available then a mixed hardening model should be used

  2. Expatriates Selection: An Essay of Model Analysis

    Directory of Open Access Journals (Sweden)

    Rui Bártolo-Ribeiro

    2015-03-01

    The business expansion to other geographical areas, with cultures different from those in which organizations were created and developed, leads to the expatriation of employees to these destinations. Recruitment and selection procedures for expatriates do not always have the intended success, leading to an early return of these professionals with consequent organizational disorders. In this study, several articles published in the last five years were analyzed in order to identify the most frequently mentioned dimensions in the selection of expatriates in terms of success and failure. The characteristics in the selection process that may improve prediction of expatriates' adaptation to the new cultural contexts of the organization were studied according to the KSAOs model. Few references were found concerning the Knowledge, Skills and Abilities dimensions in the analyzed papers. There was a strong predominance of the evaluation of Other Characteristics, and more importance was given to dispositional factors than to situational factors in promoting the integration of the expatriates.

  3. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    Directory of Open Access Journals (Sweden)

    Ryan P Franckowiak

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
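
    The mechanism behind the bias is easy to reproduce in miniature: n sites yield n(n-1)/2 non-independent pairwise distances, so an information criterion computed on the distance vectors behaves as if the sample were far larger than it is. The toy simulation below is a hedged illustration, not the authors' simulation design; all sizes and effects are assumptions.

```python
# Toy MRM setup: regress a genetic-distance vector on environmental and
# spurious distance vectors, then compare AIC values on the inflated sample.
import numpy as np
import statsmodels.api as sm
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n = 50
env = rng.normal(size=(n, 1))                        # one causal site variable
genetic = 0.8 * env[:, 0] + rng.normal(scale=0.5, size=n)

d_gen = pdist(genetic[:, None])                      # n*(n-1)/2 = 1225 pairs
d_env = pdist(env)
d_spurious = np.column_stack([pdist(rng.normal(size=(n, 1))) for _ in range(3)])

base = sm.OLS(d_gen, sm.add_constant(d_env)).fit()
full = sm.OLS(d_gen, sm.add_constant(np.column_stack([d_env, d_spurious]))).fit()
print("AIC, true model:", round(base.aic, 1))
print("AIC, + 3 spurious distance vectors:", round(full.aic, 1))
# With 1225 pseudo-replicated 'observations', the criterion can favor the
# over-fitted model even though only 50 independent sites were sampled.
```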

  4. 45 CFR 660.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-10-01

    Title 45 (Public Welfare), National Science Foundation: Intergovernmental Review of the National Science Foundation Programs and Activities, § 660.6: What procedures apply to the selection of programs and activities under these regulations?

  5. Procedures for the selection of stopping power ratios for electron beams: Comparison of IAEA TRS procedures and of DIN procedures with Monte Carlo results

    International Nuclear Information System (INIS)

    Roos, M.; Christ, G.

    2000-01-01

    In the International Code of Practice IAEA TRS-381, the water/air stopping power ratios are selected according to the half-value depth and the depth of measurement. In the German Standard DIN 6800-2 a different procedure is recommended which, in addition, takes the practical electron range into account; the stopping power data for monoenergetic beams from IAEA TRS-381 are used. Both procedures are compared with recent Monte Carlo calculations carried out for various beams of clinical accelerators. It is found that the DIN procedure shows a slightly better agreement. In addition, the stopping power ratios in IAEA TRS-381 are compared with those in DIN 6800-2 for the reference conditions of the beams from the PTB linac; the maximum deviation is not larger than 0.6%. (author)

  6. National HIV prevalence estimates for sub-Saharan Africa: controlling selection bias with Heckman-type selection models

    Science.gov (United States)

    Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till

    2012-01-01

    Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which unlike conventional imputation can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need for increasing participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342

  7. Procedural advice on self-assessment and task selection in learner-controlled education

    NARCIS (Netherlands)

    Taminiau, Bettine; Corbalan, Gemma; Kester, Liesbeth; Van Merriënboer, Jeroen; Kirschner, Paul A.

    2011-01-01

    Taminiau, E. M. C., Corbalan, G., Kester, L., Van Merriënboer, J. J. G., & Kirschner, P. A. (2010, March). Procedural advice on self-assessment and task selection in learner-controlled education. Presentation at the ICO Springschool, Niederalteich, Germany.

  8. The alternative site selection procedure as covered in the report by the Repository Site Selection Procedures Working Group; Das Verfahren der alternativen Standortsuche im Bericht des Arbeitskreises Auswahlverfahren Endlagerstandorte

    Energy Technology Data Exchange (ETDEWEB)

    Brenner, M. [Jena Univ. (Germany). Juristische Fakultaet

    2005-01-01

    The 2002 Act on the Regulated Termination of the Use of Nuclear Power for Industrial Electricity Generation declared Germany's opting out of the peaceful uses of nuclear power. The problem of the permanent management of radioactive residues is becoming more and more important also in the light of that political decision. At the present time, there are no repositories offering the waste management capacities required. Such facilities need to be created. At the present stage, eligible repository sites are the Konrad mine, a former iron ore mine near Salzgitter, and the Gorleben salt dome. While the fate of the Konrad mine as a repository for waste generating negligible amounts of heat continues to be uncertain, despite a plan approval decision of June 2002, the Gorleben repository is still in the planning phase, at present in a dormant state, so to speak. The federal government expressed doubt about the suitability of the Gorleben site. Against this backdrop, the Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety in February 1999 established AkEnd, the Working Group on Repository Site Selection Procedures. The Group was charged with developing, based on sound scientific criteria, a transparent site selection procedure in order to facilitate the search for repository sites. The Working Group presented its final report in December 2002 after approximately four years of work. The Group's proposals about alternative site selection procedures are explained in detail and, above all, reviewed critically. (orig.)

  9. Typical NRC inspection procedures for model plant

    International Nuclear Information System (INIS)

    Blaylock, J.

    1984-01-01

    A summary of NRC inspection procedures for a model LEU fuel fabrication plant is presented. Procedures and methods for combining inventory data, seals, measurement techniques, and statistical analysis are emphasized

  10. Recruitment and Selection of Foreign Professionals In the South African Job Market: Procedures and Processes

    Directory of Open Access Journals (Sweden)

    Chao Nkhungulu Mulenga

    2007-07-01

    This study investigated procedures and processes used in the selection of prospective foreign applicants by recruitment agencies in South Africa. An electronic survey was distributed to the accessible population of 244 agencies on a national employment website, yielding 57 respondents. The results indicate that the recruitment industry does not have standard, well-articulated procedures for identifying and selecting prospective foreign employees, and that it considers processing foreign applicants difficult. Difficulties with the Department of Home Affairs were a major hindrance to recruiting foreign applicants.

  11. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

  12. Target-matched insertion gain derived from three different hearing aid selection procedures.

    Science.gov (United States)

    Punch, J L; Shovels, A H; Dickinson, W W; Calder, J H; Snead, C

    1995-11-01

    Three hearing aid selection procedures were compared to determine if any one was superior in producing prescribed real-ear insertion gain. For each of three subject groups, 12 in-the-ear style hearing aids with Class D circuitry and similar dispenser controls were ordered from one of three manufacturers. Subject groups were classified based on the type of information included on the hearing aid order form: (1) the subject's audiogram, (2) a three-part matrix specifying the desired maximum output, full-on gain, and frequency response slope of the hearing aid, or (3) the desired 2-cc coupler full-on gain of the hearing aid, based on real-ear coupler difference (RECD) measurements. Following electroacoustic adjustments aimed at approximating a commonly used target insertion gain formula, results revealed no significant differences among any of the three selection procedures with respect to obtaining acceptable insertion gain values.

  13. Generalizability of a composite student selection procedure at a university-based chiropractic program

    DEFF Research Database (Denmark)

    O'Neill, Lotte Dyhrberg; Korsholm, Lars; Wallstedt, Birgitta

    2009-01-01

    PURPOSE: Non-cognitive admission criteria are typically used in chiropractic student selection to supplement grades. The reliability of non-cognitive student admission criteria in chiropractic education has not previously been examined. In addition, very few studies have examined the overall test ... test, and an admission interview. METHODS: Data from 105 chiropractic applicants from the 2007 admission at the University of Southern Denmark were available for analysis. Each admission parameter was double scored using two random, blinded, and independent raters. Variance components for applicant, rater and residual effects were estimated for a mixed model with the restricted maximum likelihood method. The reliability of obtained applicant ranks (generalizability coefficients) was calculated for the individual admission criteria and for the composite admission procedure. RESULTS: Very good...

  14. Procedural 3D Modelling for Traditional Settlements. The Case Study of Central Zagori

    Science.gov (United States)

    Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2017-02-01

    Over the last decades, 3D modelling has been a fast-growing field in Geographic Information Science, extensively applied in various domains including the reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for the modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method based on the concept of algorithmic modelling. It is utilized for the generation of accurate 3D models and composite facade textures from sets of rules which are called Computer Generated Architecture grammars (CGA grammars), defining the objects' detailed geometry, rather than altering or editing the model manually. In this paper, procedural modelling tools have been exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models were derived from the application of shape grammars to selected footprints, and the process resulted in a final 3D model optimally describing the built environment of Central Zagori in three Levels of Detail (LoD). The final 3D scene was exported and published as a 3D web scene which can be viewed with the 3D CityEngine viewer, giving a walkthrough of the whole model, as in virtual reality or game environments. This research work addresses issues regarding texture precision, LoD for 3D objects and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks that derive from procedural modelling techniques in the field of cultural heritage and, more specifically, the 3D modelling of traditional settlements.

  15. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy in the research effort is "Stop – Start – Continue", i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue elements was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  16. Procedural advice on self-assessment and task selection in learner-controlled education

    NARCIS (Netherlands)

    Taminiau, Bettine; Kester, Liesbeth; Corbalan, Gemma; Van Merriënboer, Jeroen; Kirschner, Paul A.

    2010-01-01

    Taminiau, E. M. C., Kester, L., Corbalan, G., Van Merriënboer, J. J. G., & Kirschner, P. A. (2010, July). Procedural advice on self-assessment and task selection in learner-controlled education. Paper presented at the Junior Researchers of EARLI Conference 2010, Frankfurt, Germany.

  17. A baseline-free procedure for transformation models under interval censorship.

    Science.gov (United States)

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of the regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.

  18. The procedure of alternative site selection within the report of the study group on the radioactive waste final repository selection process (AKEnd)

    International Nuclear Information System (INIS)

    Nies, A.

    2005-01-01

    The study group on the selection procedures for radioactive waste final repository sites presented its report in December 2002. The author discusses the consequences of this report with respect to site selection, focusing on two topics: the search for the best possible site and the prevention of prejudices.

  19. Does excellence have a gender? A national research on recruitment and selection procedures for professional appointments in the Netherlands

    NARCIS (Netherlands)

    Brink, M.C.L. van den; Brouns, M.L.M.; Waslander, S.

    2006-01-01

    Purpose – The purpose of this research is to show that the upward mobility of female academics through regular selection procedures is evolving extremely slowly, especially in the Netherlands. This paper aims at a more profound understanding of professorial recruitment and selection procedures in relation to

  20. Item selection via Bayesian IRT models.

    Science.gov (United States)

    Arima, Serena

    2015-02-10

    With reference to a questionnaire that aimed to assess the quality of life for dysarthric speakers, we investigate the usefulness of a model-based procedure for reducing the number of items. We propose a mixed cumulative logit model, which is known in the psychometrics literature as the graded response model: responses to different items are modelled as a function of individual latent traits and as a function of item characteristics, such as their difficulty and their discrimination power. We jointly model the discrimination and the difficulty parameters by using a k-component mixture of normal distributions. Mixture components correspond to disjoint groups of items. Items that belong to the same groups can be considered equivalent in terms of both difficulty and discrimination power. According to decision criteria, we select a subset of items such that the reduced questionnaire is able to provide the same information that the complete questionnaire provides. The model is estimated by using a Bayesian approach, and the choice of the number of mixture components is justified according to information criteria. We illustrate the proposed approach on the basis of data that are collected for 104 dysarthric patients by local health authorities in Lecce and in Milan. Copyright © 2014 John Wiley & Sons, Ltd.

  1. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  2. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
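
    Information-based ranking, the idea underlying the PMI method above, can be illustrated compactly. The sketch below is a hedged simplification: it uses plain mutual information from scikit-learn as a stand-in for partial mutual information, and the candidate variables and target are synthetic placeholders rather than Coriolis flowmeter signals.

```python
# Rank candidate inputs by estimated mutual information with the target;
# a data-driven model would then be trained on the top-ranked subset.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 1000
candidates = rng.normal(size=(n, 8))                 # candidate input variables
# Target depends on inputs 0 and 3 only (e.g., a flowrate-like quantity).
target = (np.sin(candidates[:, 0]) + 0.5 * candidates[:, 3] ** 2
          + 0.1 * rng.normal(size=n))

mi = mutual_info_regression(candidates, target, random_state=0)
for i in np.argsort(mi)[::-1]:
    print(f"input {i}: estimated MI = {mi[i]:.3f}")
```

    Unlike plain MI, the PMI procedure re-estimates relevance conditional on the variables already selected, which is what lets it discard redundant inputs.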

  3. Procedure for Selection of Suitable Resources in Interactions in Complex Dynamic Systems Using Artificial Immunity

    Directory of Open Access Journals (Sweden)

    Naors Y. anadalsaleem

    2017-03-01

    Using results from the theory of artificial immune systems, the dynamic optimization procedure for a multidimensional vector function of a system, whose state is interpreted as an adaptable immune cell, is considered. Procedures for the estimation of monitoring results are discussed. A procedure for assessing the entropy is recommended as a general recursive estimation algorithm. The results are focused on solving the optimization problems of cognitive selection of suitable physical resources, which expands the scope of electromagnetic compatibility.

  4. Procedure to select test organisms for environmental risk assessment of genetically modified crops in aquatic systems.

    Science.gov (United States)

    Hilbeck, Angelika; Bundschuh, Rebecca; Bundschuh, Mirco; Hofmann, Frieder; Oehen, Bernadette; Otto, Mathias; Schulz, Ralf; Trtikova, Miluse

    2017-11-01

    For a long time, the environmental risk assessment (ERA) of genetically modified (GM) crops focused mainly on terrestrial ecosystems. This changed when it was scientifically established that aquatic ecosystems are exposed to GM crop residues that may negatively affect aquatic species. To assist the risk assessment process, we present a tool to identify ecologically relevant species usable in tiered testing prior to authorization or for biological monitoring in the field. The tool is derived from a selection procedure for terrestrial ecosystems with substantial but necessary changes to adequately consider the differences in the type of ecosystems. By using available information from the Water Framework Directive (2000/60/EC), the procedure can draw upon existing biological data on aquatic systems. The proposed procedure for aquatic ecosystems was tested for the first time during an expert workshop in 2013, using the cultivation of Bacillus thuringiensis (Bt) maize as the GM crop and 1 stream type as the receiving environment in the model system. During this workshop, species executing important ecological functions in aquatic environments were identified in a stepwise procedure according to predefined ecological criteria. By doing so, we demonstrated that the procedure is practicable with regard to its goal: From the initial long list of 141 potentially exposed aquatic species, 7 species and 1 genus were identified as the most suitable candidates for nontarget testing programs. Integr Environ Assess Manag 2017;13:974-979. © 2017 SETAC.

  5. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Science.gov (United States)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models. Each alternative conceptual model is assigned a weight which represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from low-likelihood to high-likelihood regions, an evolution carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it could be feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, in order to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
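
    The nested sampling loop itself is short; what varies between implementations is how the constrained replacement draw is made. The sketch below is a hedged toy version (2-D Gaussian likelihood, uniform prior, brute-force rejection for the constrained draw); production NSE codes replace that draw with M-H or DREAMzs moves, which is exactly the improvement discussed above.

```python
# Minimal nested sampling on a toy problem, accumulating the evidence
# Z ~ sum_i L_i * (X_{i-1} - X_i) with prior volumes X_i ~ exp(-i / n_live).
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def loglike(theta):                                  # toy 2-D Gaussian likelihood
    return -0.5 * np.sum((theta / 0.1) ** 2)

n_live, n_iter = 200, 1000
live = rng.uniform(-1, 1, size=(n_live, 2))          # prior: uniform on [-1, 1]^2
live_logl = np.array([loglike(t) for t in live])

logz_terms = []
for i in range(1, n_iter + 1):
    worst = np.argmin(live_logl)
    logl_min = live_logl[worst]
    log_w = -i / n_live + np.log(np.expm1(1.0 / n_live))   # log(X_{i-1} - X_i)
    logz_terms.append(logl_min + log_w)
    # Replace the worst live point by a prior draw with L > L_min
    # (brute force here; NSE-style samplers use M-H or DREAMzs moves instead).
    while True:
        cand = rng.uniform(-1, 1, size=2)
        if loglike(cand) > logl_min:
            break
    live[worst], live_logl[worst] = cand, loglike(cand)

# Add the evidence still carried by the remaining live points.
live_term = -n_iter / n_live + logsumexp(live_logl) - np.log(n_live)
logz = np.logaddexp(logsumexp(logz_terms), live_term)
print("log evidence ~", round(logz, 2))              # analytic: log(2*pi*0.01/4) ~ -4.15
```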

  6. Endovascular repair of abdominal aortic aneurysms: vascular anatomy, device selection, procedure, and procedure-specific complications.

    Science.gov (United States)

    Bryce, Yolanda; Rogoff, Philip; Romanelli, Donald; Reichle, Ralph

    2015-01-01

    Abdominal aortic aneurysm (AAA) is abnormal dilatation of the aorta, carrying a substantial risk of rupture and thereby marked risk of death. Open repair of AAA involves lengthy surgery time, anesthesia, and substantial recovery time. Endovascular aneurysm repair (EVAR) provides a safer option for patients with advanced age and pulmonary, cardiac, and renal dysfunction. Successful endovascular repair of AAA depends on correct selection of patients (on the basis of their vascular anatomy), choice of the correct endoprosthesis, and familiarity with the technique and procedure-specific complications. The type of aneurysm is defined by its location with respect to the renal arteries, whether it is a true or false aneurysm, and whether the common iliac arteries are involved. Vascular anatomy can be divided more technically into aortic neck, aortic aneurysm, pelvic perfusion, and iliac morphology, with grades of difficulty with respect to EVAR, aortic neck morphology being the most common factor to affect EVAR appropriateness. When choosing among the devices available on the market, one must consider the patient's vascular anatomy and choose between devices that provide suprarenal fixation versus those that provide infrarenal fixation. A successful technique can be divided into preprocedural imaging, ancillary procedures before AAA stent-graft placement, the procedure itself, postprocedural medical therapy, and postprocedural imaging surveillance. Imaging surveillance is important in assessing complications such as limb thrombosis, endoleaks, graft migration, enlargement of the aneurysm sac, and rupture. Last, one must consider the issue of radiation safety with regard to EVAR. © RSNA, 2015.

  7. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    International Nuclear Information System (INIS)

    Louit, D.M.; Pascual, R.; Jardine, A.K.S.

    2009-01-01

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for model selection to represent the failure process for a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed to analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards the discrimination between the use of statistical distributions to represent the time to failure ('renewal approach') and the use of stochastic point processes ('repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
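
    A typical first gate in such a framework is a trend test on the failure history. The snippet below is a hedged sketch of one common choice, the Laplace test for a time-truncated observation window; the failure times are illustrative, not the backhoe fleet data.

```python
# Laplace trend test: U is approximately standard normal under a stationary
# (homogeneous Poisson) process; U > 0 suggests deterioration, U < 0 growth.
import numpy as np
from scipy.stats import norm

failure_times = np.array([210.0, 480, 700, 890, 1050, 1190, 1300, 1390])
T = 1500.0                                           # end of observation window
n = len(failure_times)

u = (failure_times.mean() - T / 2) / (T * np.sqrt(1.0 / (12 * n)))
p_value = 2 * (1 - norm.cdf(abs(u)))
print(f"Laplace U = {u:.2f}, two-sided p = {p_value:.3f}")
```

    If no trend is detected, fitting a time-to-failure distribution (the renewal approach) is defensible; a significant trend instead points to a non-stationary point process such as a power-law NHPP.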

  8. 5 CFR 335.106 - Special selection procedures for certain veterans under merit promotion.

    Science.gov (United States)

    2010-01-01

    ... veterans under merit promotion. 335.106 Section 335.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PROMOTION AND INTERNAL PLACEMENT General Provisions § 335.106 Special selection procedures for certain veterans under merit promotion. Preference eligibles or veterans who have...

  9. A Procedure for Identification of Appropriate State Space and ARIMA Models Based on Time-Series Cross-Validation

    Directory of Open Access Journals (Sweden)

    Patrícia Ramos

    2016-11-01

    In this work, a cross-validation procedure is used to identify an appropriate Autoregressive Integrated Moving Average (ARIMA) model and an appropriate state space model for a time series. A minimum size for the training set is specified. The procedure is based on one-step forecasts and uses different training sets, each containing one more observation than the previous one. All possible state space models and all ARIMA models whose orders are allowed to range over reasonable values are fitted, considering raw data and log-transformed data with regular differencing (up to second-order differences) and, if the time series is seasonal, seasonal differencing (up to first-order differences). The root mean squared error of each model is calculated from the one-step forecasts obtained. Among all the ARIMA and state space models considered, the selected model is the one with the lowest root mean squared error that also passes the Ljung-Box test on all of the available data at a reasonable significance level. The procedure is exemplified in this paper with a case study of retail sales of different categories of women's footwear from a Portuguese retailer, and its accuracy is compared with three reliable forecasting approaches. The results show that our procedure consistently forecasts more accurately than the other approaches and that the improvements in accuracy are significant.
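
    The expanding-window, one-step-ahead part of the procedure can be sketched as follows, here restricted to ARIMA models via statsmodels for brevity (the state space models, log transformation and seasonal differencing of the full procedure are omitted). The file name, minimum training size and candidate orders are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

def one_step_cv_rmse(y, order, min_train=40):
    """Expanding-window one-step-ahead forecasts: each training set contains
    one more observation than the previous one, as in the procedure above."""
    errors = []
    for t in range(min_train, len(y)):
        fit = ARIMA(y[:t], order=order).fit()
        errors.append(y[t] - fit.forecast(steps=1)[0])
    return float(np.sqrt(np.mean(np.square(errors))))

y = np.loadtxt("retail_sales.csv")   # hypothetical single-column series
candidates = [(p, d, q) for p in range(3) for d in range(2) for q in range(3)]
scores = {order: one_step_cv_rmse(y, order) for order in candidates}

best = min(scores, key=scores.get)   # lowest one-step RMSE ...
resid = ARIMA(y, order=best).fit().resid
print(best, scores[best])
print(acorr_ljungbox(resid, lags=[10]))  # ... and it must pass the Ljung-Box test
```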

  10. Model checking as an aid to procedure design

    International Nuclear Information System (INIS)

    Zhang, Wenhu

    2001-01-01

    The OECD Halden Reactor Project has been actively working on computer-assisted operating procedures for many years. The objective of the research has been to provide computerised assistance for procedure design, verification and validation, implementation and maintenance. For verification purposes, the application of formal methods has been considered in several reports. The recent formal verification activity conducted at the Halden Project is based on applying model checking to the verification of procedures. This report presents verification approaches based on different model checking techniques and tools for the formalization and verification of operating procedures. Possible problems and the relative merits of the different approaches are discussed. A case study of one of the approaches is presented to show the practical application of formal verification. Applying formal verification in the traditional procedure design process can reduce the human resources involved in reviews and simulations, and hence reduce the cost of verification and validation. A discussion of the integration of formal verification with the traditional procedure design process is given at the end of this report. (Author)

  11. 49 CFR 542.2 - Procedures for selecting low theft light duty truck lines with a majority of major parts...

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Procedures for selecting low theft light duty... TRUCK LINES TO BE COVERED BY THE THEFT PREVENTION STANDARD § 542.2 Procedures for selecting low theft... a low theft rate have major parts interchangeable with a majority of the covered major parts of a...

  12. New robust statistical procedures for the polytomous logistic regression models.

    Science.gov (United States)

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through classical influence function analysis. Appropriate real-life examples are presented to justify the need for suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  13. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. This allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
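
    For orientation, the sketch below implements the classical two-step (selection-normal) Heckman estimator on simulated data with correlated errors; the selection-t model of the abstract would instead fit the full likelihood with Student's t errors. All variable names and simulation settings are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(y, X, Z, observed):
    """Classical Heckit: probit selection equation on Z, then OLS of y on
    [X, inverse Mills ratio] over the selected (observed) sample."""
    probit = sm.Probit(observed.astype(float), sm.add_constant(Z)).fit(disp=0)
    zg = sm.add_constant(Z) @ probit.params
    imr = norm.pdf(zg) / norm.cdf(zg)             # inverse Mills ratio
    Xo = np.column_stack([sm.add_constant(X[observed]), imr[observed]])
    ols = sm.OLS(y[observed], Xo).fit()
    return probit, ols                            # last OLS coef: selection term

# Simulated data with correlated errors -> sample selection bias.
rng = np.random.default_rng(1)
n = 2000
Z = rng.normal(size=(n, 1))
X = rng.normal(size=(n, 1))
e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
observed = (0.5 + Z[:, 0] + e[:, 0]) > 0
y = 1.0 + 2.0 * X[:, 0] + e[:, 1]
_, ols = heckman_two_step(y, X, Z, observed)
print(ols.params)   # slope near 2; nonzero Mills coefficient signals selection
```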

  14. Price adjustment for traditional Chinese medicine procedures: Based on a standardized value parity model.

    Science.gov (United States)

    Wang, Haiyin; Jin, Chunlin; Jiang, Qingwu

    2017-11-20

    Traditional Chinese medicine (TCM) is an important part of China's medical system. Due to the prolonged low prices of TCM procedures and the lack of an effective mechanism for dynamic price adjustment, the development of TCM has markedly lagged behind Western medicine. The World Health Organization (WHO) has emphasized the need to enhance the development of alternative and traditional medicine when creating national health care systems. The establishment of scientific and appropriate mechanisms to adjust the price of medical procedures in TCM is crucial to promoting the development of TCM. This study examined the incorporation of value indicators (data on basic manpower expended, time spent, technical difficulty, and degree of risk) into the latest standards for the price of medical procedures in China, and it also offers a price adjustment model with the relative price ratio as a key index. This study examined 144 TCM procedures and found that the prices of TCM procedures were mainly based on the value of the medical care provided; on average, medical care provided accounted for 89% of the price. Current price levels were generally low; on average, the current price accounted for 56% of the standardized value of a procedure. Current price levels accounted for a markedly lower share of the standardized value of acupuncture, moxibustion, special treatment with TCM, and comprehensive TCM procedures. This study selected a total of 79 procedures and adjusted them by priority. The relationship between the price of TCM procedures and the suggested price was significantly optimized. Price adjustment based on a standardized value parity model is a scientific and suitable method of price adjustment that can serve as a reference for other provinces and municipalities in China and for other countries and regions that mainly have fee-for-service (FFS) medical care.

  15. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines established workflows from photogrammetry and procedural modeling in order to exploit the distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time-effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding features such as road and rail networks. The resulting models meet the increasing needs of urban environments for planning, inventory, and analysis

  16. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to an erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach and sometimes an inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before selecting a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper considers only single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards discriminating between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach'), when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.

  17. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  18. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Sustainable development of science and technology is also conditioned by the continuous development of the means of production, which play a key role in the structure of each production system. The mechanical nature of the means of production is complemented by controlling and electronic devices in the context of intelligent industry. The selection of production machines for a technological process or project has so far been resolved in practice often only intuitively. With increasing intelligence, the number of variable parameters that have to be considered when choosing a production device is also increasing. It is necessary to use computing techniques and decision-making methods based on heuristic methods and more precise methodological procedures during the selection. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  19. Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis

    Directory of Open Access Journals (Sweden)

    Gustavo Rech

    2013-03-01

    Separation procedures in drug distribution centers (DCs) are manual activities prone to failures such as shipping exchanged, expired, or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of the potential failure modes incurred by the selected operators. This article integrates Learning Curves (LC) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. The LC parameters enable generating an index to identify the operators recommended to perform the procedures. FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. It also deploys the traditional FMEA severity index into two sub-indexes related to financial issues and damage to the company's image in order to characterize failure severity. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
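
    A minimal numeric sketch of the deployed severity index described above: the traditional severity score is split into financial and image sub-indexes, recombined (here with an assumed equal weighting), and used in the usual risk priority number. All scale values are hypothetical.

```python
# Hypothetical FMEA scoring for one failure mode of the drug separation
# procedure, with severity deployed into financial and image sub-indexes
# (all values on an illustrative 1-10 scale).
occurrence = 6                      # frequency of "wrong drug picked"
detection = 4                       # chance the failure escapes detection
sev_financial = 7                   # cost of returns / write-offs
sev_image = 9                       # damage to the company's image
severity = 0.5 * sev_financial + 0.5 * sev_image   # assumed equal weighting

rpn = occurrence * detection * severity             # risk priority number
print(f"severity = {severity:.1f}, RPN = {rpn:.0f}")
```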

  20. Use of Maximum Likelihood-Mixed Models to select stable reference genes: a case of heat stress response in sheep

    Directory of Open Access Journals (Sweden)

    Salces Judit

    2011-08-01

    Background: Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software packages have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software package, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure but one of the most extensively used. In the present work, a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm software packages. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results: A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance across gene by treatment levels, was selected using goodness of fit and predictive ability criteria among a variety of models. The Mean Square Error obtained under the selected model was used as an indicator of gene expression stability. The genes ranked at the top and bottom by the three approaches were similar; however, notable differences were found for the best pair of genes selected by each method and for the remaining genes in the rankings. Differences among the normalized expression values of targets were also found for each statistical approach. Conclusions: The optimal statistical properties of Maximum Likelihood estimation, joined to the flexibility of mixed models, allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental

  1. A simple but accurate procedure for solving the five-parameter model

    International Nuclear Information System (INIS)

    Mares, Oana; Paulescu, Marius; Badescu, Viorel

    2015-01-01

    Highlights: • A new procedure for extracting the parameters of the one-diode model is proposed. • Only the basic information listed in the datasheets of PV modules is required. • Results demonstrate a simple, robust and accurate procedure. - Abstract: The current–voltage characteristic of a photovoltaic module is typically evaluated using a model based on the solar cell equivalent circuit. The complexity of the procedure applied for extracting the model parameters depends on the data available in the manufacturer's datasheet. Since datasheets are not detailed enough, simplified models have to be used in many cases. This paper proposes a new procedure for extracting the parameters of the one-diode model under standard test conditions, using only the basic data listed by all manufacturers in the datasheet (short-circuit current, open-circuit voltage and maximum power point). The procedure is validated using manufacturers' data for six commercial crystalline silicon photovoltaic modules. Comparing the computed and measured current–voltage characteristics, the determination coefficient is in the range 0.976–0.998. Thus, the proposed procedure represents a feasible tool for solving the five-parameter model applied to crystalline silicon photovoltaic modules. The procedure is described in detail, to guide potential users in deriving similar models for other types of photovoltaic modules.
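
    A generic numeric route to the one-diode model, for orientation only (the paper's own extraction procedure is not reproduced here): fixing the ideality factor reduces the problem to four unknowns, solved from the single-diode equation at the short-circuit, open-circuit and maximum power points plus the zero-derivative condition of power at the MPP; the full five-parameter fit adds one more condition. The datasheet values below are hypothetical, and scipy's fsolve stands in for whatever solver a given procedure prescribes.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical datasheet values for a 54-cell crystalline module at STC.
Isc, Voc, Imp, Vmp, Ns = 8.21, 32.9, 7.61, 26.3, 54
n_ideal = 1.3                    # ideality factor fixed, leaving 4 unknowns
a = n_ideal * Ns * 0.02585       # modified ideality factor n*Ns*kT/q at 25 C

def residuals(x):
    Iph, logI0, Rs, Rsh = x
    I0 = np.exp(logI0)           # solve in log space: I0 is tiny
    def f(V, I):                 # implicit single-diode equation, = 0 on the curve
        return Iph - I0 * (np.exp((V + I * Rs) / a) - 1) - (V + I * Rs) / Rsh - I
    def dIdV(V, I):              # implicit derivative dI/dV along the curve
        g = I0 / a * np.exp((V + I * Rs) / a)
        return -(g + 1 / Rsh) / (1 + Rs * g + Rs / Rsh)
    return [f(0.0, Isc),                    # short-circuit point
            f(Voc, 0.0),                    # open-circuit point
            f(Vmp, Imp),                    # maximum power point
            Imp + Vmp * dIdV(Vmp, Imp)]     # dP/dV = 0 at the MPP

x0 = [Isc, np.log(Isc) - Voc / a, 0.2, 300.0]   # rough physical starting guesses
Iph, logI0, Rs, Rsh = fsolve(residuals, x0)
print(f"Iph={Iph:.3f} A  I0={np.exp(logI0):.2e} A  Rs={Rs:.3f} ohm  Rsh={Rsh:.1f} ohm")
```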

  2. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Directory of Open Access Journals (Sweden)

    Fengqing Zhang

    2016-10-01

    Background: Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods: We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not too simple to lose the information from the alternative hypothesis. The first step is to transform the distributions of different test statistics (e.g., t, chi-square or F) to the distributions of the corresponding p-values. We then use a step function to approximate each p-value distribution by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results: The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions: The proposed model is easy to implement and preserves the information from the alternative hypothesis.
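
    The first step of the method, transforming the distribution of a test statistic under the alternative into the distribution of its p-value, can be sketched for a one-sided z-test as below; the step-function approximation itself would then be fitted to the computed mean and variance. The effect size and number of tests are illustrative.

```python
import numpy as np
from scipy.stats import norm

def pvalue_cdf_under_alt(u, delta):
    """CDF of the one-sided z-test p-value when the true standardized effect
    is delta: F(u) = P(p <= u) = Phi(delta - z_{1-u})."""
    return norm.cdf(delta - norm.ppf(1.0 - u))

def pvalue_mean_var(delta, grid=100000):
    # Mean and variance of the p-value under the alternative, by quadrature;
    # these are the two moments a step-function approximation would match.
    u = np.linspace(1e-9, 1 - 1e-9, grid)
    F = pvalue_cdf_under_alt(u, delta)
    mean = np.trapz(1.0 - F, u)           # E[p] = integral of survival function
    second = 2 * np.trapz(u * (1.0 - F), u)
    return mean, second - mean**2

delta = 2.5                               # assumed standardized effect size
m = 20                                    # assumed number of simultaneous tests
print("E[p], Var[p]:", pvalue_mean_var(delta))
print("power at Bonferroni threshold:", pvalue_cdf_under_alt(0.05 / m, delta))
```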

  3. 41 CFR 60-3.6 - Use of selection procedures which have not been validated.

    Science.gov (United States)

    2010-07-01

    ... EMPLOYMENT OPPORTUNITY, DEPARTMENT OF LABOR 3-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... techniques contemplated by these guidelines usually should be followed if technically feasible. Where the...

  4. Single center experience in selecting the laparoscopic Frey procedure for chronic pancreatitis.

    Science.gov (United States)

    Tan, Chun-Lu; Zhang, Hao; Li, Ke-Zhou

    2015-11-28

    To share our experience regarding the laparoscopic Frey procedure for chronic pancreatitis (CP) and patient selection. All consecutive patients undergoing duodenum-preserving pancreatic head resection from July 2013 to July 2014 were reviewed, and those undergoing the Frey procedure for CP were included in this study. Data on age, gender, body mass index (BMI), American Society of Anesthesiologists score, imaging findings, inflammatory index (white blood cells, interleukin (IL)-6, and C-reactive protein), visual analogue scale score during hospitalization and outpatient visits, history of CP, operative time, estimated blood loss, and postoperative data (postoperative mortality and morbidity, postoperative length of hospital stay) were obtained for patients undergoing laparoscopic surgery. The open surgery cases in this study were analyzed for risk factors related to extensive bleeding, which was the major reason for conversion during the laparoscopic procedure. Age, gender, etiology, imaging findings, amylase level, complications due to pancreatitis, functional insufficiency, and history of CP were assessed in these patients. Nine laparoscopic and 37 open Frey procedures were analyzed. Of the 46 patients, 39 were male (85%) and seven were female (15%). The etiology of CP was alcohol in 32 patients (70%) and idiopathic in 14 patients (30%). Stones were found in 38 patients (83%). An inflammatory mass was found in five patients (11%). The time from diagnosis of CP to the Frey procedure was 39 ± 19 (9-85) mo. The BMI of patients in the laparoscopic group was 20.4 ± 1.7 (17.8-22.4) kg/m(2) and was 20.6 ± 2.9 (15.4-27.7) kg/m(2) in the open group. All patients required analgesic medication for abdominal pain. Frequent acute pancreatitis or severe abdominal pain due to acute exacerbation occurred in 20 patients (43%). Pre-operative complications due to pancreatitis were observed in 18 patients (39%). Pancreatic functional insufficiency was observed in 14 patients (30%).

  5. Rejecting escape events in large volume Ge detectors by a pulse shape selection procedure

    International Nuclear Information System (INIS)

    Del Zoppo, A.; Agodi, C.; Alba, R.; Bellia, G.; Coniglione, R.; Loukachine, K.; Maiolino, C.; Migneco, E.; Piattelli, P.; Santonocito, D.; Sapienza, P.

    1993-01-01

    The dependence of the response to γ-rays of a large volume Ge detector on the interval width of a selected initial rise pulse slope is investigated. The number of escape events associated with a small pulse slope is found to be greater than the corresponding number of full energy events. An escape event rejection procedure based on the observed correlation between energy deposition and pulse shape is discussed. Such a procedure seems particularly suited for the design of highly granular large volume Ge detector arrays. (orig.)

  6. Occupational exposures from selected interventional radiological procedures

    International Nuclear Information System (INIS)

    Janeczek, J.; Beal, A.; James, D.

    2001-01-01

    The number of radiological and cardiological interventional procedures has significantly increased in recent years due to better diagnostic equipment, resulting in an increase in the radiation dose to staff and patients. The assessment of staff doses was performed for cardiac catheterization and for three other non-cardiac procedures. The scattered radiation distribution resulting from the cardiac catheterization procedure was measured prior to the staff dose measurements. Staff dose measurements included the left shoulder, eye, thyroid and hand doses of the cardiologist. In non-cardiac procedures, doses to the hands of the radiologist were measured for nephrostomy, fistulogram and percutaneous transluminal angioplasty procedures. Doses to the radiologist or cardiologist were found to be relatively high when correct protection was not observed. (author)

  7. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability

    NARCIS (Netherlands)

    Vermeulen, M.I.; Tromp, F.; Zuithoff, N.P.; Pieters, R.H.; Damoiseaux, R.A.; Kuyvenhoven, M.M.

    2014-01-01

    Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training.

  8. Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.

    Science.gov (United States)

    Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H

    2018-01-01

    Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input (ARX) models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
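
    The structure of such a criterion can be sketched as follows: fit feedback-only and feedback-plus-feedforward ARX models by least squares and compare BIC values in which the complexity term carries an extra weight. The data file, model structures and penalty weight lam are illustrative assumptions; the paper determines the appropriate weighting by Monte Carlo simulation.

```python
import numpy as np

def bic_modified(rss, n, k, lam=1.0):
    """BIC with an extra weight lam on model complexity: lam = 1 recovers the
    classical BIC; lam > 1 suppresses false-positive feedforward detection."""
    return n * np.log(rss / n) + lam * k * np.log(n)

def fit_arx(y, regressors):
    """Least-squares fit of an ARX model; returns RSS and parameter count."""
    X = np.column_stack(regressors)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2), X.shape[1]

# y: HC output; u_fb: feedback error; u_ff: target signal (hypothetical file
# holding a (3, n) array).
y, u_fb, u_ff = np.load("tracking_run.npy")
n = len(y)
rss_fb, k_fb = fit_arx(y[1:], [y[:-1], u_fb[1:]])
rss_ff, k_ff = fit_arx(y[1:], [y[:-1], u_fb[1:], u_ff[1:]])

lam = 2.0   # assumed extra weighting; the paper tunes this via simulation
feedforward = (bic_modified(rss_ff, n - 1, k_ff, lam)
               < bic_modified(rss_fb, n - 1, k_fb, lam))
print("feedforward detected:", feedforward)
```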

  9. On dynamic selection of households for direct marketing based on Markov chain models with memory

    NARCIS (Netherlands)

    Otter, Pieter W.

    A simple, dynamic selection procedure is proposed, based on conditional expected profits using Markov chain models with memory. The method is easy to apply; only frequencies and mean values have to be calculated or estimated. The method is empirically illustrated using a data set from a charitable organization.
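
    A memoryless sketch of the idea (the paper's models add memory by enlarging the state with past behavior): households are selected for a mailing when their conditional expected profit over a short horizon, computed from the transition matrix and mean donations, exceeds the mailing cost. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical 3-state recency model: state 0 = donated recently ... 2 = lapsed.
P = np.array([[0.6, 0.3, 0.1],      # one-step transition probabilities
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
donation = np.array([25.0, 10.0, 2.0])   # mean gift by next-period state
cost = 3.0                               # cost of one mailing

def expected_profit(state, horizon=3):
    """Conditional expected profit of mailing a household in `state` now,
    summing expected donations over the next `horizon` transitions."""
    p = np.zeros(len(P))
    p[state] = 1.0
    total = 0.0
    for _ in range(horizon):
        p = p @ P
        total += p @ donation
    return total - cost

selected = [s for s in range(3) if expected_profit(s) > 0]
print("mail households in states:", selected)
```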

  10. Drawing-Based Procedural Modeling of Chinese Architectures.

    Science.gov (United States)

    Fei Hou; Yue Qi; Hong Qin

    2012-01-01

    This paper presents a novel modeling framework to build 3D models of Chinese architectures from elevation drawings. Our algorithm integrates the capability of automatic drawing recognition with powerful procedural modeling to extract production rules from an elevation drawing. First, departing from previous symbol-based floor plan recognition, and based on the novel concept of repetitive pattern trees, small horizontal repetitive regions of the elevation drawing are clustered in a bottom-up manner to form architectural components with maximum repetition, which collectively serve as building blocks for 3D model generation. Second, to discover the global architectural structure and its components' interdependencies, the components are structured into a shape tree in a top-down subdivision manner and recognized hierarchically at each level of the shape tree based on Markov Random Fields (MRFs). Third, shape grammar rules can be derived to construct a 3D semantic model and its possible variations with the help of a 3D component repository. The salient contribution lies in the novel integration of procedural modeling with elevation drawings, with a unique application to Chinese architectures.

  11. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    Science.gov (United States)

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICCs of PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.

  12. Model selection in periodic autoregressions

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1994-01-01

    This paper focuses on the issue of periodic autoregressive (PAR) time series model selection in practice. One aspect of model selection is the choice of the appropriate PAR order, which can be of interest for the evaluation of economic models. Further, the appropriate PAR order is important

  13. Pareto genealogies arising from a Poisson branching evolution model with selection.

    Science.gov (United States)

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α, the large-N limit leads either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  14. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and it is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Evaluation of 'out-of-specification' CliniMACS CD34-selection procedures of hematopoietic progenitor cell-apheresis products.

    NARCIS (Netherlands)

    Braakman, E.; Schuurhuis, G.J.; Preijers, F.W.M.B.; Voermans, C.; Theunissen, K.; Riet, I. van; Fibbe, W.E.; Slaper-Cortenbach, I.C.M.

    2008-01-01

    BACKGROUND: Immunomagnetic selection of CD34(+) hematopoietic progenitor cells (HPC) using CliniMACS CD34 selection technology is widely used to provide high-purity HPC grafts. However, the number of nucleated cells and CD34+ cells recommended by the manufacturer for processing in a single procedure

  16. Evaluation of 'out-of-specification' CliniMACS CD34-selection procedures of hematopoietic progenitor cell-apheresis products

    NARCIS (Netherlands)

    Braakman, E.; Schuurhuis, G. J.; Preijers, F. W. M. B.; Voermans, C.; Theunissen, K.; van Riet, I.; Fibbe, W. E.; Slaper-Cortenbach, I.

    2008-01-01

    BACKGROUND: Immunomagnetic selection of CD34(+) hematopoietic progenitor cells (HPC) using CliniMACS CD34 selection technology is widely used to provide high-purity HPC grafts. However, the number of nucleated cells and CD34+ cells recommended by the manufacturer for processing in a single procedure

  17. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  18. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Directory of Open Access Journals (Sweden)

    Erin Scott

    2016-01-01

    The Stepwise Fitting Procedure automates the testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observed reference data (Mackinson et al. 2009). The calibration of EwE model predictions against observed data is important for evaluating any model that will be used for ecosystem-based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically 'best fit' model. The novel fitting procedure automates this manual procedure, thereby producing accurate results and letting the modeller concentrate on investigating the 'best fit' model for ecological accuracy.

  19. New Inference Procedures for Semiparametric Varying-Coefficient Partially Linear Cox Models

    Directory of Open Access Journals (Sweden)

    Yunbei Ma

    2014-01-01

    In biomedical research, one major objective is to identify risk factors and study their risk impacts, as this identification can help clinicians make appropriate decisions and increase the efficiency of treatments and resource allocation. A two-step penalty-based procedure is proposed to select linear regression coefficients for the linear components and to identify significant nonparametric varying-coefficient functions for semiparametric varying-coefficient partially linear Cox models. It is shown that the resulting penalty-based estimators of the linear regression coefficients are asymptotically normal and have oracle properties, and that the resulting estimators of the varying-coefficient functions have optimal convergence rates. A simulation study and an empirical example are presented for illustration.

  20. A skin abscess model for teaching incision and drainage procedures.

    Science.gov (United States)

    Fitch, Michael T; Manthey, David E; McGinnis, Henderson D; Nicks, Bret A; Pariyadath, Manoj

    2008-07-03

    Skin and soft tissue infections are increasingly prevalent clinical problems, and it is important for health care practitioners to be well trained in how to treat skin abscesses. A realistic model of abscess incision and drainage will allow trainees to learn and practice this basic physician procedure. We developed a realistic model of skin abscess formation to demonstrate the technique of incision and drainage for educational purposes. The creation of this model is described in detail in this report. This model has been successfully used to develop and disseminate a multimedia video production for teaching this medical procedure. Clinical faculty and resident physicians find this model to be a realistic method for demonstrating abscess incision and drainage. This manuscript provides a detailed description of our model of abscess incision and drainage for medical education. Clinical educators can incorporate this model into skills labs or demonstrations for teaching this basic procedure.

  1. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on so-called martingale residuals, whose analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is still developing. We present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Our main concern is then the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  2. Quantile selection procedure and associated distribution of ratios of order statistics from a restricted family of probability distributions

    International Nuclear Information System (INIS)

    Gupta, S.S.; Panchapakesan, S.

    1975-01-01

    A quantile selection procedure in reliability problems pertaining to a restricted family of probability distributions is discussed. This family is assumed to be star-ordered with respect to the standard normal distribution folded at the origin. Motivation for this formulation of the problem is described. Both exact and asymptotic results dealing with the distribution of the maximum of ratios of order statistics from such a family are obtained and tables of the appropriate constants, percentiles of this statistic, are given in order to facilitate the use of the selection procedure

  3. Procedural Personas for Player Decision Modeling and Procedural Content Generation

    DEFF Research Database (Denmark)

    Holmgård, Christoffer

    2016-01-01

    ." These methods for constructing procedural personas are then integrated with existing procedural content generation systems, acting as critics that shape the output of these systems, optimizing generated content for different personas and by extension, different kinds of players and their decision making styles......How can player models and artificially intelligent (AI) agents be useful in early-stage iterative game and simulation design? One answer may be as ways of generating synthetic play-test data, before a game or level has ever seen a player, or when the sampled amount of play test data is very low....... This thesis explores methods for creating low-complexity, easily interpretable, generative AI agents for use in game and simulation design. Based on insights from decision theory and behavioral economics, the thesis investigates how player decision making styles may be defined, operationalised, and measured...

  4. Radiation load of the extremities and eye lenses of the staff during selected interventional radiology procedures

    International Nuclear Information System (INIS)

    Nikodemova, Denisa; Trosanova, Dominika

    2010-01-01

    The Slovak Medical University in Bratislava is involved in the ORAMED (Optimization of Radiation Protection for Medical Staff) research project, aimed at developing a unified methodology for a more accurate assessment of professional exposure of interventional radiology staff, with focus on extremity and eye lens dosimetry in selected procedures. Three cardiac procedures and 5 angiography examinations were selected: all technical parameters were monitored and the dose equivalent levels were measured by TL dosimetry at 9 anatomic sites of the body. Preliminary results were obtained for the radiation burden of the eyes and extremities during digital subtraction angiography of the lower limbs, collected from 7 hospital departments in partner EU states. Correlations between the evaluated data and the influence of some parameters are shown

  5. Procedural Skills Education – Colonoscopy as a Model

    Directory of Open Access Journals (Sweden)

    Maitreyi Raman

    2008-01-01

    Traditionally, surgical and procedural apprenticeship has been an assumed activity of students, without a formal educational context. With increasing barriers to patient and operating room access, such as shorter resident work weeks and operating room and endoscopy time at a premium, alternative strategies for maximizing procedural skill development are being considered. Recently, the traditional surgical apprenticeship model has been challenged, with greater emphasis on the need for surgical and procedural skills training to be more transparent and for alternatives to patient-based training to be considered. Colonoscopy performance is a complex psychomotor skill requiring practitioners to integrate multiple sensory inputs, and it involves higher cortical centres for optimal performance. Colonoscopy skills involve mastery in the cognitive, technical and process domains. In the present review, we propose a model for teaching colonoscopy to the novice trainee based on educational theory.

  6. State Token Petri Net modeling method for formal verification of computerized procedure including operator's interruptions of procedure execution flow

    International Nuclear Information System (INIS)

    Kim, Yun Goo; Seong, Poong Hyun

    2012-01-01

    The Computerized Procedure System (CPS) is one of the primary operating support systems in the digital main control room. The CPS displays the procedure on the computer screen in the form of a flow chart, along with plant operating information and procedure instructions. It also supports operator decision making by providing a system decision. A procedure flow should be correct and reliable, as an error could lead to operator misjudgement and inadequate control. In this paper we present a modeling method for the CPS that enables formal verification based on Petri nets. The proposed State Token Petri Nets (STPN) also support modeling of a procedure flow that has various interruptions by the operator, according to the plant condition. STPN modeling is compared with Coloured Petri Nets when both are applied to an emergency operating computerized procedure. A program converting a Computerized Procedure (CP) to an STPN has also been developed. Formal verification and validation of CPs with STPN increases the safety of a nuclear power plant and provides the digital quality assurance means that are needed as the role and function of the CPS grow.
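
    For orientation, the sketch below plays the 'token game' on a plain place/transition net for a toy procedure flow; verifying that the end place is always reachable (no deadlock) is the kind of property model checking of a CP would establish. This is a generic Petri net, not the STPN extension of the paper, and the toy net is illustrative.

```python
# Minimal place/transition net: a transition fires when all of its input
# places hold a token; firing moves tokens from inputs to outputs.
from typing import Dict, List, Tuple

Transition = Tuple[List[str], List[str]]   # (input places, output places)

def enabled(marking: Dict[str, int], t: Transition) -> bool:
    return all(marking.get(p, 0) > 0 for p in t[0])

def fire(marking: Dict[str, int], t: Transition) -> Dict[str, int]:
    m = dict(marking)
    for p in t[0]:
        m[p] -= 1
    for p in t[1]:
        m[p] = m.get(p, 0) + 1
    return m

# Toy procedure: check a condition, then either continue automatically or
# branch to a manual action that must rejoin the flow before the end step.
net: Dict[str, Transition] = {
    "check":  (["start"],   ["checked"]),
    "auto":   (["checked"], ["done"]),
    "manual": (["checked"], ["action"]),
    "rejoin": (["action"],  ["done"]),
}

marking = {"start": 1}
while True:
    ts = [t for t in net.values() if enabled(marking, t)]
    if not ts:
        break
    marking = fire(marking, ts[0])     # deterministic: fire first enabled
print("end reached:", marking.get("done", 0) > 0)
```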

  7. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
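
    A minimal sketch of such a criterion: for each candidate embedding dimension, predict one step ahead with a k-nearest-neighbor regressor on delay vectors, score held-out points by their negative log predictive density under a local Gaussian model, and pick the dimension with the lowest score. The paper's nonparametric estimator is not reproduced; the k-NN predictor, the AR(2) test signal and all constants are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def delay_embed(x, dim):
    """Rows are delay vectors (x_t, ..., x_{t+dim-1}); targets are x_{t+dim}."""
    X = np.column_stack([x[i:len(x) - dim + i] for i in range(dim)])
    return X, x[dim:]

def neg_log_pred_likelihood(x, dim, k=20, split=0.7):
    """k-NN one-step predictor with a local Gaussian residual model; returns
    the mean negative log predictive density over the held-out segment."""
    n_tr = int(split * len(x))
    Xtr, ytr = delay_embed(x[:n_tr], dim)
    Xte, yte = delay_embed(x[n_tr:], dim)
    nlpl = []
    for xq, yq in zip(Xte, yte):
        d = np.linalg.norm(Xtr - xq, axis=1)
        nb = ytr[np.argsort(d)[:k]]        # targets of the k nearest neighbors
        nlpl.append(-norm.logpdf(yq, nb.mean(), nb.std() + 1e-8))
    return float(np.mean(nlpl))

rng = np.random.default_rng(2)
x = np.zeros(2000)                 # noisy AR(2): the true memory is two steps
for t in range(2, len(x)):
    x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2] + 0.1 * rng.normal()
scores = {dim: neg_log_pred_likelihood(x, dim) for dim in range(1, 6)}
print(min(scores, key=scores.get), scores)    # expect dimension 2 to win
```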

  8. A Quality Function Deployment-Based Model for Cutting Fluid Selection

    Directory of Open Access Journals (Sweden)

    Kanika Prasad

    2016-01-01

    Cutting fluid is applied for numerous reasons while machining a workpiece, such as increasing tool life, minimizing workpiece thermal deformation, enhancing surface finish, and flushing away chips from the cutting surface. Hence, choosing a proper cutting fluid for a specific machining application is important for the enhanced efficiency and effectiveness of a manufacturing process. Cutting fluid selection is a complex procedure, as the decision depends on many complicated interactions, including the work material's machinability, rigorousness of the operation, cutting tool material, metallurgical, chemical and human compatibility, reliability and stability of the fluid, and cost. In this paper, a decision-making model based on the quality function deployment technique is developed to respond to the complex character of the cutting fluid selection problem and facilitate judicious selection of a cutting fluid from a comprehensive list of available alternatives. In the first example, HD-CUTSOL is recognized as the most suitable cutting fluid for drilling holes in titanium alloy with a tungsten carbide tool, and in the second example, for performing a honing operation on stainless steel alloy with a cubic boron nitride tool, CF5 emerges as the best honing fluid. Implementation of this model would result in cost reduction through decreased manpower requirements, enhanced workforce efficiency, and efficient information exploitation.
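
    The scoring step at the core of a QFD-based selection model reduces to weighting the house-of-quality relationship matrix by the customer requirement weights, as sketched below. All weights, scores and fluid names are hypothetical.

```python
import numpy as np

# Customer requirement weights (e.g., tool life, surface finish, cost, safety).
weights = np.array([0.35, 0.25, 0.20, 0.20])

# Relationship scores of each candidate cutting fluid against each requirement
# (1 = weak ... 9 = strong), as elicited in the QFD house-of-quality matrix.
fluids = ["Fluid A", "Fluid B", "Fluid C"]
scores = np.array([[9, 3, 3, 9],
                   [3, 9, 9, 3],
                   [9, 9, 1, 3]])

priority = scores @ weights            # weighted technical priority per fluid
best = fluids[int(np.argmax(priority))]
print(dict(zip(fluids, priority.round(2))), "->", best)
```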

  9. Heuristic and probabilistic wind power availability estimation procedures: Improved tools for technology and site selection

    Energy Technology Data Exchange (ETDEWEB)

    Nigim, K.A. [University of Waterloo, Waterloo, Ont. (Canada). Department of Electrical and Computer Engineering; Parker, Paul [University of Waterloo, Waterloo, Ont. (Canada). Department of Geography, Environmental Studies

    2007-04-15

    The paper describes two investigative procedures for estimating wind power from measured wind velocities. Wind velocity data are manipulated to visualize the site potential by investigating the probable wind power availability and its capacity to meet a targeted demand. The first, the availability procedure, examines the wind characteristics and the probable energy-capture profile, which enables the probable maximum operating wind velocity profile for a selected wind turbine design to be predicted. The structured procedure allows for the subsequent adjustment, sorting and grouping of measured wind velocity data taken at different time intervals and hub heights. The second, the adequacy procedure, investigates the probable degree of availability and its consequences for the application. Both procedures are programmed using MathCAD symbolic mathematical software. This math tool is used to generate visual interpolations of the data as well as numerical results from extensive data sets that exceed the capacity of conventional spreadsheet tools. Two sites located in Southern Ontario, Canada are investigated using the procedures. Successful implementation of the procedures supports informed decision making; a hill site is shown to have much higher wind potential than that measured at the local airport. The process is suitable for a wide spectrum of users who are considering the energy potential of either a grid-tied or off-grid wind energy system. (author)
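
    A common way to express the availability idea numerically is to combine a fitted Weibull wind-speed distribution with a turbine power curve, as sketched below; the paper's MathCAD procedures work from the measured velocity records instead. The Weibull parameters and the idealized power curve are illustrative assumptions.

```python
import numpy as np

# Hypothetical site: Weibull wind-speed distribution (k shape, c scale, m/s).
k, c = 2.0, 7.5
v = np.linspace(0.0, 30.0, 3001)
pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

def power_curve(v, cut_in=3.5, rated_v=13.0, cut_out=25.0, rated_p=2000.0):
    """Idealized turbine power curve in kW: cubic ramp between cut-in and
    rated speed, constant at rated power, zero outside the operating range."""
    p = np.where((v >= cut_in) & (v < rated_v),
                 rated_p * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3), 0.0)
    return np.where((v >= rated_v) & (v <= cut_out), rated_p, p)

mean_power = np.trapz(power_curve(v) * pdf, v)        # expected output, kW
op = (v >= 3.5) & (v <= 25.0)
availability = np.trapz(pdf[op], v[op])               # fraction of time operating
print(f"capacity factor ~ {mean_power / 2000.0:.2f}, time available ~ {availability:.2f}")
```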

  10. Selective Sequential Zero-Base Budgeting Procedures Based on Total Factor Productivity Indicators

    OpenAIRE

    A. Ishikawa; E. F. Sudit

    1981-01-01

    The authors' purpose in this paper is to develop productivity-based sequential budgeting procedures designed to expedite the identification of major problem areas in budgetary performance, as well as to reduce the costs associated with comprehensive zero-base analyses. The concept of total factor productivity is reviewed and its relations to ordinary and zero-based budgeting are discussed in detail. An outline for a selective sequential analysis based on monitoring of three key indicators of (a) i...

  11. On the general procedure for modelling complex ecological systems

    International Nuclear Information System (INIS)

    He Shanyu.

    1987-12-01

    In this paper, the principle of a general procedure for modelling complex ecological systems, the Adaptive Superposition Procedure (ASP), is briefly stated. The result of applying ASP in a national project for ecological regionalization is also described. (author). 3 refs.

  12. On Realism of Architectural Procedural Models

    Czech Academy of Sciences Publication Activity Database

    Beneš, J.; Kelly, T.; Děchtěrenko, Filip; Křivánek, J.; Müller, P.

    2017-01-01

    Vol. 36, No. 2 (2017), pp. 225-234 ISSN 0167-7055 Grant - others: AV ČR(CZ) StrategieAV21/14 Program: StrategieAV Institutional support: RVO:68081740 Keywords: realism * procedural modeling * architecture Subject RIV: IN - Informatics, Computer Science OBOR OECD: Cognitive sciences Impact factor: 1.611, year: 2016

  13. Preoperative testing and risk assessment: perspectives on patient selection in ambulatory anesthetic procedures

    Directory of Open Access Journals (Sweden)

    Stierer TL

    2015-08-01

    Full Text Available Tracey L Stierer,1,2 Nancy A Collop3,41Department of Anesthesiology, 2Department of Critical Care Medicine, Otolaryngology Head and Neck Surgery, Johns Hopkins Medicine, Baltimore, MD, USA; 3Department of Medicine, 4Department of Neurology, Emory University, Emory Sleep Center, Wesley Woods Center, Atlanta, GA, USAAbstract: With recent advances in surgical and anesthetic technique, there has been a growing emphasis on the delivery of care to patients undergoing ambulatory procedures of increasing complexity. Appropriate patient selection and meticulous preparation are vital to the provision of a safe, quality perioperative experience. It is not unusual for patients with complex medical histories and substantial systemic disease to be scheduled for discharge on the same day as their surgical procedure. The trend to “push the envelope” by triaging progressively sicker patients to ambulatory surgical facilities has resulted in a number of challenges for the anesthesia provider who will assume their care. It is well known that certain patient diseases are associated with increased perioperative risk. It is therefore important to define clinical factors that warrant more extensive testing of the patient and medical conditions that present a prohibitive risk for an adverse outcome. The preoperative assessment is an opportunity for the anesthesia provider to determine the status and stability of the patient’s health, provide preoperative education and instructions, and offer support and reassurance to the patient and the patient’s family members. Communication between the surgeon/proceduralist and the anesthesia provider is critical in achieving optimal outcome. A multifaceted approach is required when considering whether a specific patient will be best served having their procedure on an outpatient basis. Not only should the patient's comorbidities be stable and optimized, but details regarding the planned procedure and the resources available

  14. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially for human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. The procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less-studied application of computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

  15. Communication and Procedural Models of the E-Commerce Systems

    Directory of Open Access Journals (Sweden)

    Petr SUCHÁNEK

    2009-06-01

    Full Text Available E-commerce systems have become a standard interface between sellers (or suppliers) and customers. One basic condition for an e-commerce system to be efficient is the correct definition and description of all internal and external processes. Everything is targeted at the customers' needs and requirements. The optimal and most exact way to find an optimal solution for an e-commerce system and its process structure in companies is modeling and simulation. In this article the author shows a basic model of communication between customers and sellers in connection with customer feedback, together with procedural models of e-commerce systems in terms of e-shops. The procedural model was built with the aid of a definition of SOA.

  16. Factors selection in landslide susceptibility modelling on large scale following the gis matrix method: application to the river Beiro basin (Spain)

    Directory of Open Access Journals (Sweden)

    D. Costanzo

    2012-02-01

    Full Text Available A procedure to select the controlling factors connected to slope instability has been defined. It allowed us to assess the landslide susceptibility in the Rio Beiro basin (about 10 km2) over the northeastern area of the city of Granada (Spain). Field and remote (Google Earth™) recognition techniques allowed us to generate a landslide inventory consisting of 127 phenomena. To discriminate between stable and unstable conditions, a diagnostic area was chosen, limited to the crown and the toe of the scarp of the landslide. Fifteen controlling or determining factors were defined considering the available topographic, geologic, geomorphologic and pedologic data. Univariate tests, using both association coefficients and validation results of single-variable susceptibility models, allowed us to select the best predictors, which were combined for the unique-conditions analysis. For each of the five recognised landslide typologies, susceptibility maps for the best models were prepared. In order to verify both the goodness of fit and the prediction skill of the susceptibility models, two different validation procedures were applied and compared. Both procedures are based on a random partition of the landslide archive to produce a test and a training subset. The first method is based on the analysis of the shape of the success and prediction rate curves, which are quantitatively analysed by exploiting two morphometric indexes. The second method is based on the analysis of the degree of fit, considering the relative error between the target landslides intersected by each of the susceptibility classes into which the study area was partitioned. Both validation procedures confirmed a very good predictive performance of the susceptibility models and of the procedure followed to select the controlling factors.
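
    The second validation method above reduces to building a success/prediction-rate curve on a random train/test split of the landslide archive. As an illustration only (not the authors' code), the Python sketch below builds such a curve from a toy susceptibility map; the arrays, seed and hit rate are invented for the example.

```python
import numpy as np

def success_rate_curve(susceptibility, landslide_mask):
    """Success/prediction-rate curve: cumulative fraction of landslide
    cells captured vs. fraction of study area classified, scanning
    cells from the most to the least susceptible."""
    order = np.argsort(-susceptibility.ravel())
    hits = landslide_mask.ravel()[order].astype(float)
    x = np.arange(1, hits.size + 1) / hits.size   # area fraction classified
    y = np.cumsum(hits) / hits.sum()              # landslide fraction captured
    return x, y

rng = np.random.default_rng(0)
score = rng.random(10_000)                  # toy susceptibility map
truth = rng.random(10_000) < 0.05 * score   # toy landslide cells
x, y = success_rate_curve(score, truth)
auc = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))
print("area under the success-rate curve ≈", round(auc, 3))
```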

  17. Equation of State Selection for Organic Rankine Cycle Modeling Under Uncertainty

    DEFF Research Database (Denmark)

    Frutiger, Jerome; O'Connell, John; Abildskov, Jens

    In recent years there has been a great interest in the design and selection of working fluids for low-temperature Organic Rankine Cycles (ORC), to efficiently produce electrical power from waste heat from chemical engineering applications, as well as from renewable energy sources such as biomass...... cycle, all influence the model output uncertainty. The procedure is highlighted for an ORC with a low-temperature heat source from exhaust gas from a marine diesel engine.[1] Saleh B, Koglbauer G, Wendland M, Fischer J. Working fluids for low-temperature organic Rankine cycles. Energy 2007...

  18. Wind scatterometry with improved ambiguity selection and rain modeling

    Science.gov (United States)

    Draper, David Willis

    Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent wind than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous

  19. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
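
    The selection behaviour described here rests on componentwise boosting: every base learner is fitted to the current residuals, but only the best-fitting one is updated per step. A minimal Python sketch of that mechanism follows, deliberately simplified to plain linear base learners in place of the penalized splines used in the paper.

```python
import numpy as np

def componentwise_boosting(X, y, steps=200, nu=0.1):
    """Minimal componentwise L2-boosting: at each step every base
    learner (here, one linear effect per column) is fitted to the
    current residuals, but only the best one is updated -- the
    mechanism through which boosting selects variables."""
    resid = y - y.mean()
    coef = np.zeros(X.shape[1])
    for _ in range(steps):
        betas = (X * resid[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)
        sse = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
        j = int(sse.argmin())                 # best-fitting component
        coef[j] += nu * betas[j]              # small step toward its fit
        resid -= nu * betas[j] * X[:, j]
    return y.mean(), coef

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = 1.5 * X[:, 0] - 2.0 * X[:, 4] + rng.normal(size=300)
_, coef = componentwise_boosting(X, y)
print(coef.round(2))   # coefficient mass concentrates on columns 0 and 4
```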

  20. Methods for selecting fixed-effect models for heterogeneous codon evolution, with comments on their application to gene and genome data.

    Science.gov (United States)

    Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P

    2007-02-08

    Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori
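
    For illustration, a minimal Python sketch of the AIC/AICc comparison step, with invented log-likelihoods and parameter counts standing in for real fitted codon models; the formulas are the standard AIC and the small-sample-corrected AICc.

```python
import math

def aic(loglik, k):
    # Akaike information criterion: 2k - 2 ln L
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    # Small-sample correction; n = number of observations (e.g. sites)
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical fits: model name -> (log-likelihood, free parameters)
fits = {
    "one_partition":    (-10234.7, 5),
    "two_partitions":   (-10101.2, 9),
    "three_partitions": (-10095.8, 13),
}
n_sites = 400   # hypothetical alignment length

scores = {name: round(aicc(ll, k, n_sites), 1) for name, (ll, k) in fits.items()}
print(scores)
print("selected:", min(scores, key=scores.get))
```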

  1. Parent-Implemented Procedural Modification of Escape Extinction in the Treatment of Food Selectivity in a Young Child with Autism

    Science.gov (United States)

    Tarbox, Jonathan; Schiff, Averil; Najdowski, Adel C.

    2010-01-01

    Food selectivity is characterized by the consumption of an inadequate variety of foods. The effectiveness of behavioral treatment procedures, particularly nonremoval of the spoon, is well validated by research. The role of parents in the treatment of feeding disorders and the feasibility of behavioral procedures for parent implementation in the…

  2. The procedure of alternative site selection within the report of the study group on the radioactive waste final repository selection process (AKEnd); Das Verfahren der alternativen Standortsuche im Bericht des Arbeitskreises Auswahlverfahren Endlagerstandorte (AKEnd)

    Energy Technology Data Exchange (ETDEWEB)

    Brenner, M. [Jena Univ. (Germany)

    2005-07-01

    The paper discusses the results of the report of the study group on the radioactive waste final repository selection process with respect to the alternative site selection procedure. Key points of the report are long-term safety, the comparison of alternative sites and the concept of a single repository. The critique of the report focuses on the topics of site selection and licensing procedures, civil participation, the factor of time and the question of cost.

  3. A model to determine payments associated with radiology procedures.

    Science.gov (United States)

    Mabotuwana, Thusitha; Hall, Christopher S; Thomas, Shiby; Wald, Christoph

    2017-12-01

    Across the United States, there is a growing number of patients in Accountable Care Organizations and under risk contracts with commercial insurance. This is due to the proliferation of new value-based payment models and care delivery reform efforts. In this context, the business model of radiology within a hospital or health system context is shifting from a primary profit-center to a cost-center with a goal of cost savings. Radiology departments increasingly need to understand how the transactional nature of the business relates to financial rewards. The main challenge with current reporting systems is that the information is presented only at an aggregated level, and often not broken down further, for instance, by type of exam. As such, the primary objective of this research is to provide better visibility into payments associated with individual radiology procedures in order to better calibrate the expense/capital structure of the imaging enterprise to the actual revenue or value-add to the organization it belongs to. We propose a methodology that can be used to determine technical payments at a procedure level. We use a proportion-based model to allocate payments to individual radiology procedures based on total charges (which also include non-radiology related charges). Using a production dataset containing 424,250 radiology exams, we calculated the overall average technical charge for radiology to be $873.08 per procedure and the corresponding average payment to be $326.43 (range: $48.27 for XR to $2750.11 for PET/CT), resulting in an average payment percentage of 37.39% across all exams. We describe how the charges associated with a procedure can be used to approximate technical payments at a more granular level, with a focus on radiology. The methodology is generalizable to approximate payments for other services as well. Understanding the payments associated with each procedure can be useful during strategic practice planning. The charge-to-total-charge ratio can be used to
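
    A proportion-based allocation of this kind takes only a few lines. The Python fragment below is a sketch under assumed data (the item names and dollar amounts are hypothetical, not from the paper's dataset): an account-level payment is split across line items in proportion to each item's share of total charges.

```python
def allocate_payment(total_payment, charges):
    """Allocate an account-level payment to line items in proportion
    to each item's share of the total charges on the account."""
    total_charges = sum(charges.values())
    return {item: round(total_payment * charge / total_charges, 2)
            for item, charge in charges.items()}

# Hypothetical encounter mixing radiology and non-radiology charges
charges = {"CT_abdomen": 1200.0, "XR_chest": 150.0, "lab_panel": 300.0}
print(allocate_payment(650.0, charges))
# {'CT_abdomen': 472.73, 'XR_chest': 59.09, 'lab_panel': 118.18}
```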

  4. Expectancy bias in a selective conditioning procedure: trait anxiety increases the threat value of a blocked stimulus.

    Science.gov (United States)

    Boddez, Yannick; Vervliet, Bram; Baeyens, Frank; Lauwers, Stephanie; Hermans, Dirk; Beckers, Tom

    2012-06-01

    In a blocking procedure, a single conditioned stimulus (CS) is paired with an unconditioned stimulus (US), such as electric shock, in the first stage. During the subsequent stage, the CS is presented together with a second CS and this compound is followed by the same US. Fear conditioning studies in non-human animals have demonstrated that fear responding to the added second CS typically remains low, despite its being paired with the US. Accordingly, the blocking procedure is well suited as a laboratory model for studying (deficits in) selective threat appraisal. The present study tested the relation between trait anxiety and blocking in human aversive conditioning. Healthy participants filled in a trait anxiety questionnaire and underwent blocking treatment in the human aversive conditioning paradigm. Threat appraisal was measured through shock expectancy ratings and skin conductance. As hypothesized, trait anxiety was positively associated with shock expectancy ratings to the blocked stimulus. In skin conductance responding, no significant effects of stimulus type could be detected during blocking training or testing. The current study does not allow strong claims to be made regarding the theoretical process underlying the expectancy bias we observed. The observed shock expectancy bias might be one of the mechanisms leading to non-specific fear in individuals at risk for developing anxiety disorders. A deficit in blocking, or a deficit in selective threat appraisal at the more general level, indeed results in fear becoming non-specific and disconnected from the most likely causes or predictors of danger. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. The effect of mis-specification on mean and selection between the Weibull and lognormal models

    Science.gov (United States)

    Jia, Xiang; Nadarajah, Saralees; Guo, Bo

    2018-02-01

    The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models. Then, the impact is evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if certain special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present published data to illustrate the study in this paper.
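
    The MLE/QMLE comparison can be reproduced in miniature. The Python sketch below (with illustrative parameter values, not the paper's exact study design) draws lognormal samples, estimates the mean under the correct lognormal model and under a mis-specified Weibull fit, and reports the resulting biases and MSEs.

```python
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 1.0, 0.5, 50, 500
true_mean = math.exp(mu + sigma**2 / 2)

bias_mle = bias_qmle = mse_mle = mse_qmle = 0.0
for _ in range(reps):
    x = rng.lognormal(mu, sigma, n)
    # MLE of the mean under the correctly specified lognormal model
    m, s = np.log(x).mean(), np.log(x).std()
    est_mle = math.exp(m + s**2 / 2)
    # Quasi-MLE of the mean under a mis-specified Weibull fit
    shape, _, scale = stats.weibull_min.fit(x, floc=0)
    est_qmle = scale * math.gamma(1 + 1 / shape)
    bias_mle += (est_mle - true_mean) / reps
    bias_qmle += (est_qmle - true_mean) / reps
    mse_mle += (est_mle - true_mean) ** 2 / reps
    mse_qmle += (est_qmle - true_mean) ** 2 / reps

print(f"bias: MLE={bias_mle:+.4f}  QMLE={bias_qmle:+.4f}")
print(f"MSE:  MLE={mse_mle:.4f}  QMLE={mse_qmle:.4f}  ratio={mse_qmle/mse_mle:.2f}")
```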

  6. Office-based deep sedation for pediatric ophthalmologic procedures using a sedation service model.

    Science.gov (United States)

    Lalwani, Kirk; Tomlinson, Matthew; Koh, Jeffrey; Wheeler, David

    2012-01-01

    Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62-100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.

  7. Office-Based Deep Sedation for Pediatric Ophthalmologic Procedures Using a Sedation Service Model

    Directory of Open Access Journals (Sweden)

    Kirk Lalwani

    2012-01-01

    Full Text Available Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62–100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.

  8. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    Science.gov (United States)

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  9. Variable Selection via Partial Correlation.

    Science.gov (United States)

    Li, Runze; Liu, Jingyuan; Lou, Lejia

    2017-07-01

    A partial-correlation-based variable selection method was proposed for normal linear regression models by Bühlmann, Kalisch and Maathuis (2010) as a comparable alternative to regularization methods for variable selection. This paper addresses two important issues related to partial-correlation-based variable selection: (a) whether the method is sensitive to the normality assumption, and (b) whether the method is valid when the dimension of the predictor increases at an exponential rate of the sample size. To address issue (a), we systematically study the method for elliptical linear regression models. Our finding indicates that the original proposal may lead to inferior performance when the marginal kurtosis of the predictor is not close to that of the normal distribution. Our simulation results further confirm this finding. To ensure the superior performance of the partial-correlation-based variable selection procedure, we propose a thresholded partial correlation (TPC) approach to select significant variables in linear regression models. We establish the selection consistency of the TPC in the presence of ultrahigh-dimensional predictors. Since the TPC procedure includes the original proposal as a special case, our theoretical results address issue (b) directly. As a by-product, the sure screening property of the first step of the TPC is obtained. The numerical examples also illustrate that the TPC is competitively comparable to the commonly used regularization methods for variable selection.
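
    As a toy illustration of thresholding partial correlations (a low-dimensional simplification; the TPC itself targets ultrahigh-dimensional settings), the Python sketch below conditions each predictor on all the others via least-squares residuals and keeps those whose partial correlation with the response exceeds a threshold in absolute value.

```python
import numpy as np

def partial_corr(y, x, Z):
    """Correlation between y and x after removing the linear effect
    of the conditioning set Z (via least-squares residuals)."""
    def resid(v):
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    return np.corrcoef(resid(y), resid(x))[0, 1]

def tpc_select(X, y, threshold):
    """Keep predictors whose partial correlation with y, given all
    other predictors, exceeds the threshold in absolute value."""
    n, p = X.shape
    keep = []
    for j in range(p):
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        if abs(partial_corr(y, X[:, j], Z)) > threshold:
            keep.append(j)
    return keep

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=200)
print(tpc_select(X, y, threshold=0.1))   # expect [0, 3]
```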

  10. Procedural Content Graphs for Urban Modeling

    Directory of Open Access Journals (Sweden)

    Pedro Brandão Silva

    2015-01-01

    Full Text Available Massive procedural content creation, for example, for virtual urban environments, is a difficult, yet important challenge. While shape grammars are a popular example of effectiveness in architectural modeling, they have clear limitations regarding readability, manageability, and expressive power when addressing a variety of complex structural designs. Moreover, shape grammars aim at geometry specification and do not facilitate integration with other types of content, such as textures or light sources, which could rather accompany the generation process. We present procedural content graphs, a graph-based solution for procedural generation that addresses all these issues in a visual, flexible, and more expressive manner. Besides integrating handling of diverse types of content, this approach introduces collective entity manipulation as lists, seamlessly providing features such as advanced filtering, grouping, merging, ordering, and aggregation, essentially unavailable in shape grammars. Hereby, separated entities can be easily merged or just analyzed together in order to perform a variety of context-based decisions and operations. The advantages of this approach are illustrated via examples of tasks that are either very cumbersome or simply impossible to express with previous grammar approaches.

  11. An evolutionary algorithm for model selection

    Energy Technology Data Exchange (ETDEWEB)

    Bicker, Karl [CERN, Geneva (Switzerland); Chung, Suh-Urk; Friedrich, Jan; Grube, Boris; Haas, Florian; Ketzer, Bernhard; Neubert, Sebastian; Paul, Stephan; Ryabchikov, Dimitry [Technische Univ. Muenchen (Germany)

    2013-07-01

    When performing partial-wave analyses of multi-body final states, the choice of the fit model, i.e. the set of waves to be used in the fit, can significantly alter the results of the partial wave fit. Traditionally, the models were chosen based on physical arguments and by observing the changes in log-likelihood of the fits. To reduce possible bias in the model selection process, an evolutionary algorithm was developed based on a Bayesian goodness-of-fit criterion which takes into account the model complexity. Starting from systematically constructed pools of waves which contain significantly more waves than the typical fit model, the algorithm yields a model with an optimal log-likelihood and with a number of partial waves which is appropriate for the number of events in the data. Partial waves with small contributions to the total intensity are penalized and likely to be dropped during the selection process, as are models where excessive correlations between single waves occur. Due to the automated nature of the model selection, a much larger part of the model space can be explored than would be possible in a manual selection. In addition, the method allows one to assess the dependence of the fit result on the fit model, which is an important contribution to the systematic uncertainty.
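
    A skeletal version of such an evolutionary search is sketched below in Python: candidate models are binary wave-inclusion masks, and the fitness is a log-likelihood penalized for model size (a BIC-like stand-in for the Bayesian goodness-of-fit criterion; the toy likelihood and all constants are invented for illustration).

```python
import math
import random

random.seed(0)
POOL = 30          # candidate waves in the systematically built pool
N_EVENTS = 10_000  # events in the data sample

def loglik(mask):
    # Toy stand-in for an expensive partial-wave fit: only the
    # first 8 waves carry real intensity, the rest add noise
    return sum(120.0 if i < 8 else 1.0 for i, m in enumerate(mask) if m)

def fitness(mask):
    # BIC-like criterion: a wave must improve the likelihood by
    # more than its complexity penalty to survive
    return loglik(mask) - 0.5 * sum(mask) * math.log(N_EVENTS)

def mutate(mask, rate=0.05):
    return [1 - m if random.random() < rate else m for m in mask]

population = [[random.randint(0, 1) for _ in range(POOL)] for _ in range(40)]
for _ in range(200):                          # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                 # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(30)]

best = max(population, key=fitness)
print("selected waves:", [i for i, m in enumerate(best) if m])
```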

  12. From Reactionary to Responsive: Applying the Internal Environmental Scan Protocol to Lifelong Learning Strategic Planning and Operational Model Selection

    Science.gov (United States)

    Downing, David L.

    2009-01-01

    This study describes and implements a necessary preliminary strategic planning procedure, the Internal Environmental Scanning (IES), and discusses its relevance to strategic planning and university-sponsored lifelong learning program model selection. Employing a qualitative research methodology, a proposed lifelong learning-centric IES process…

  13. New non-cognitive procedures for medical applicant selection: a qualitative analysis in one school.

    Science.gov (United States)

    Katz, Sara; Vinker, Shlomo

    2014-11-07

    Recent data have called into question the reliability and predictive validity of standard admission procedures to medical schools. Eliciting non-cognitive attributes of medical school applicants using qualitative tools and methods has thus become a major challenge. 299 applicants aged 18-25 formed the research group. A set of six research tools was developed in addition to the two existing ones. These included: a portfolio task, an intuitive task, a cognitive task, a personal task, an open self-efficacy questionnaire and field-notes. The criteria-based methodology design used constant comparative analysis and grounded theory techniques to produce a personal attributes profile per participant, scored on a 5-point scale holistic rubric. Qualitative validity of data gathering was checked by comparing the profiles elicited from the existing interview against the profiles elicited from the other tools, and by comparing two profiles of each of the applicants who handed in two portfolio tasks. Qualitative validity of data analysis was checked by comparing researcher results with those of an external rater (n =10). Differences between aggregated profile groups were checked by the Npar Wilcoxon Signed Ranks Test and by Spearman Rank Order Correlation Test. All subjects gave written informed consent to their participation. Privacy was protected by using code numbers. A concept map of 12 personal attributes emerged, the core constructs of which were motivation, sociability and cognition. A personal profile was elicited. Inter-rater agreement was 83.3%. Differences between groups by aggregated profiles were found significant (p < .05, p < .01, p < .001).A random sample of sixth year students (n = 12) underwent the same admission procedure as the research group. Rank order was different; and arrogance was a new construct elicited in the sixth year group. This study suggests a broadening of the methodology for selecting medical school applicants. This methodology

  14. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity of including software reliability and/or safety in the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure for software reliability growth models (RGMs), which are the most widely used means to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined whether a software reliability growth model can be applied to the NPP PSA before its actual application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA.

  15. Sensitivity and uncertainty studies of the CRAC2 code for selected meteorological models and parameters

    International Nuclear Information System (INIS)

    Ward, R.C.; Kocher, D.C.; Hicks, B.B.; Hosker, R.P. Jr.; Ku, J.Y.; Rao, K.S.

    1985-01-01

    We have studied the sensitivity of results from the CRAC2 computer code, which predicts health impacts from a reactor-accident scenario, to uncertainties in selected meteorological models and parameters. The sources of uncertainty examined include the models for plume rise and wet deposition and the meteorological bin-sampling procedure. An alternative plume-rise model usually had little effect on predicted health impacts. In an alternative wet-deposition model, the scavenging rate depends only on storm type, rather than on rainfall rate and atmospheric stability class as in the CRAC2 model. Use of the alternative wet-deposition model in meteorological bin-sampling runs decreased predicted mean early injuries by as much as a factor of 2-3 and, for large release heights and sensible heat rates, decreased mean early fatalities by nearly an order of magnitude. The bin-sampling procedure in CRAC2 was expanded by dividing each rain bin into four bins that depend on rainfall rate. Use of the modified bin structure in conjunction with the CRAC2 wet-deposition model changed all predicted health impacts by less than a factor of 2.

  16. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    International Nuclear Information System (INIS)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias; Zhang, Jie

    2017-01-01

    Highlights: • An ensemble model is developed to produce both deterministic and probabilistic wind forecasts. • A deep feature selection framework is developed to optimally determine the inputs to the forecasting methodology. • The developed ensemble methodology has improved the forecasting accuracy by up to 30%. - Abstract: With the growing wind penetration into the power system worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by first layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. This developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated to provide 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves the forecasting accuracy by up to 30%.
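
    The two-layer design maps naturally onto a stacking ensemble. The Python sketch below, using scikit-learn on synthetic data (the features and learners are placeholders, not the paper's configuration), shows a first layer of statistically different learners blended by a second-layer regressor.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LassoCV, Ridge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Placeholder features, e.g. lagged wind speeds and met variables
X = rng.normal(size=(500, 12))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0.0, 0.3, 500)

# First layer: statistically different learners; second layer: blender
ensemble = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("svr", SVR(C=1.0)),
        ("lasso", LassoCV()),  # its sparse weights crudely mimic feature selection
    ],
    final_estimator=Ridge(),
)
ensemble.fit(X[:400], y[:400])
print("holdout R^2:", round(ensemble.score(X[400:], y[400:]), 3))
```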

  17. PROCRU: A model for analyzing crew procedures in approach to landing

    Science.gov (United States)

    Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.

    1980-01-01

    A model for analyzing crew procedures in approach to landing is developed. The model employs the information-processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multi-task environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until that information can be processed.

  18. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes for such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial-differential-equations-based model with runtimes up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and represents a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another factor relevant to model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
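
    A minimal numerical sketch of the idea, with invented runtimes, budget and likelihood values: under a fixed budget the slower model affords fewer Monte Carlo samples, so its bootstrapped BME error is larger and its evidence correspondingly less significant.

```python
import numpy as np

rng = np.random.default_rng(0)
BUDGET_S = 3600.0   # assumed total computing budget in seconds

def bme_with_error(likelihoods, n_boot=500):
    """Arithmetic-mean Monte Carlo estimate of Bayesian model evidence
    from prior-sample likelihoods, plus a cheap bootstrap estimate of
    its sampling error."""
    boot = np.array([rng.choice(likelihoods, likelihoods.size).mean()
                     for _ in range(n_boot)])
    return likelihoods.mean(), boot.std()

# Hypothetical per-run times: a fast AR-type model vs. a slow PDE model
for name, runtime_s in {"fast model": 0.5, "slow model": 60.0}.items():
    n_runs = int(BUDGET_S / runtime_s)        # affordable sample size
    likes = rng.lognormal(mean=-2.0, sigma=1.0, size=n_runs)  # toy values
    bme, err = bme_with_error(likes)
    print(f"{name}: n={n_runs:5d}  BME≈{bme:.3f} ± {err:.3f}")
```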

  19. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  20. Automated procedure for selection of optimal refueling policies for light water reactors

    International Nuclear Information System (INIS)

    Lin, B.I.; Zolotar, B.; Weisman, J.

    1979-01-01

    An automated procedure for determining a minimum-cost refueling policy has been developed for light water reactors. The procedure is an extension of the equilibrium core approach previously devised for pressurized water reactors (PWRs). Use of 1 1/2-group theory has improved the accuracy of the nuclear model and eliminated tedious fitting of albedos. A simple heuristic algorithm for locating a good starting policy has materially reduced PWR computing time. Inclusion of void effects and use of the Haling principle for axial flux calculations extended the nuclear model to boiling water reactors (BWRs). A good initial estimate of the refueling policy is obtained by recognizing that a nearly uniform distribution of reactivity provides low power peaking. The initial estimate is improved upon by interchanging groups of four assemblies and is subsequently refined by interchanging individual assemblies. The method yields very favorable results, is simpler than previously proposed BWR fuel optimization schemes, and retains power cost as the objective function.
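
    The interchange refinement is, in essence, a pairwise-swap local search. The Python sketch below illustrates that search on a toy objective (a crude invented stand-in for power peaking), not on the actual 1 1/2-group nuclear model.

```python
import itertools
import random

random.seed(0)

def peaking_proxy(layout):
    # Toy surrogate for power peaking: adjacent high-reactivity
    # assemblies raise the local power peak
    return max(layout[i] + layout[i + 1] for i in range(len(layout) - 1))

def improve_by_interchange(layout, objective):
    """First-improvement local search over pairwise interchanges:
    keep any swap of two assemblies that lowers the objective."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(layout)), 2):
            before = objective(layout)
            layout[i], layout[j] = layout[j], layout[i]
            if objective(layout) < before:
                improved = True
            else:
                layout[i], layout[j] = layout[j], layout[i]  # undo swap
    return layout

# Hypothetical assembly reactivities (arbitrary units), shuffled start
layout = random.sample([round(0.80 + 0.05 * k, 2) for k in range(12)], 12)
print("initial peaking proxy:", peaking_proxy(layout))
layout = improve_by_interchange(layout, peaking_proxy)
print("final peaking proxy:  ", peaking_proxy(layout))
```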

  1. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  2. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tencate, Alister J. [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); White, Alexander J. [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, Terre Haute, IN 47803 (United States)

    2016-05-19

    New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also assessed using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of
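
    The two non-supervised fusion rules (sum and median of ranks) are easy to sketch. In the Python fragment below the quality matrix is invented, and every measure is assumed oriented so that smaller is better; SRD, being a supervised rule, is not reproduced here.

```python
import numpy as np

# Rows: candidate tuning-parameter combinations; columns: model
# quality measures, all oriented so that smaller is better (assumed)
quality = np.array([
    [0.12, 0.30, 0.45],   # model A
    [0.10, 0.35, 0.40],   # model B
    [0.20, 0.25, 0.50],   # model C
    [0.15, 0.28, 0.35],   # model D
])

# Rank candidates on each measure separately (0 = best on that measure)
ranks = quality.argsort(axis=0).argsort(axis=0)

sum_rule = ranks.sum(axis=1)                # sum fusion
median_rule = np.median(ranks, axis=1)      # median fusion
print("sum rule winner:    model", "ABCD"[int(sum_rule.argmin())])
print("median rule winner: model", "ABCD"[int(median_rule.argmin())])
```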

  3. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis

    International Nuclear Information System (INIS)

    Tencate, Alister J.; Kalivas, John H.; White, Alexander J.

    2016-01-01

    New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also assessed using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of

  4. [The emphases and basic procedures of genetic counseling in psychotherapeutic model].

    Science.gov (United States)

    Zhang, Yuan-Zhi; Zhong, Nanbert

    2006-11-01

    The emphases and basic procedures of genetic counseling in the psychotherapeutic model differ from those in older models. In the psychotherapeutic model, genetic counseling focuses not only on counselees' genetic disorders and birth defects, but also on their psychological problems. "Client-centered therapy", as termed by Carl Rogers, plays an important role in the genetic counseling process. The basic procedures of the psychotherapeutic model of genetic counseling include 7 steps: initial contact, introduction, agendas, inquiry into family history, presenting information, closing the session and follow-up.

  5. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's competitiveness in the global marketplace. Improper selection and evaluation of potential vendors can degrade an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model is applied using a real-life case study to assess its effectiveness. In addition, the what-if analysis technique is used for model validation purposes.
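
    The AHP half of such a hybrid can be illustrated compactly: priority weights are the normalized principal eigenvector of a pairwise comparison matrix, sanity-checked by a consistency ratio. The Python sketch below uses a hypothetical three-criterion comparison matrix; the criteria and judgments are invented.

```python
import numpy as np

# Hypothetical pairwise comparisons of three vendor criteria
# (cost, quality, delivery) on Saaty's 1-9 scale; judgments invented
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))          # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalized priority weights

lam = eigvals[k].real
ci = (lam - len(A)) / (len(A) - 1)        # consistency index
cr = ci / 0.58                            # random index RI = 0.58 for n = 3
print("weights:", w.round(3), " consistency ratio:", round(cr, 3))
```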

  6. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing unobserved variables that affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.

  7. A RENORMALIZATION PROCEDURE FOR TENSOR MODELS AND SCALAR-TENSOR THEORIES OF GRAVITY

    OpenAIRE

    SASAKURA, NAOKI

    2010-01-01

    Tensor models are more-index generalizations of the so-called matrix models, and provide models of quantum gravity with the idea that spaces and general relativity are emergent phenomena. In this paper, a renormalization procedure for the tensor models whose dynamical variable is a totally symmetric real three-tensor is discussed. It is proven that configurations with certain Gaussian forms are the attractors of the three-tensor under the renormalization procedure. Since these Gaussian config...

  8. Customer Order Decoupling Point Selection Model in Mass Customization Based on MAS

    Institute of Scientific and Technical Information of China (English)

    XU Xuanguo; LI Xiangyang

    2006-01-01

    Mass customization relates to the ability to provide individually designed products or services to customers with high process flexibility or integration. The literature on mass customization has focused on the mechanisms of MC, but little on customer order decoupling point selection. The aim of this paper is to present a model for customer order decoupling point selection based on domain knowledge interactions between enterprises and customers in mass customization. Based on an analysis of other researchers' achievements, combined with the demand problems of customers and enterprises, a group-decision model for customer order decoupling point selection is constructed based on quality function deployment and a multi-agent system. By considering the decision makers of independent functional departments as independent decision agents, a decision agent set is added as a third dimension to the house of quality, forming a cubic quality function deployment. The decision-making consists of two procedures: the first is to build a plane house of quality in each functional department to express its opinions; the other is to evaluate and aggregate the foregoing sub-decisions through a new plane quality function deployment. Thus, departmental decision-making can make good use of domain knowledge via ontology, and the overall decision-making is kept simple by avoiding too many customer requirements.

  9. Expert and non-expert groups perception of LILW repository site selection procedure

    International Nuclear Information System (INIS)

    Zeleznik, N.; Polic, M.

    2001-01-01

    Slovenia is now in the process of site selection for a low and intermediate level radioactive waste (LILW) repository. Earlier searches for a LILW repository site confronted the Agency for Radwaste Management (ARAO) with a number of problems, mainly concerning contacts with the local communities and their willingness to accept the repository. The Agency therefore started with a new, so-called mixed-mode approach to site selection, in which the special role of a mediator is introduced. The mediator represents the link between the investor and the local community, and facilitates the communication and negotiations between both. In this study we try to find out how people perceive the mediating process and the conditions under which a LILW repository would be accepted in a local community. A special survey was therefore conducted. The results showed some of the conditions under which participants would possibly accept the LILW repository. Differences in perception between non-expert and expert groups were demonstrated and analysed, especially in the assessment of the consequences of LILW repository construction on the environment. The socio-psychological influences of the LILW repository were also noted and examined. Consequences and recommendations for future work on the site selection procedure were prepared on the basis of the research results. (author)

  10. Developing Characterization Procedures for Qualifying both Novel Selective Laser Sintering Polymer Powders and Recycled Powders

    Energy Technology Data Exchange (ETDEWEB)

    Bajric, Sendin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-12

    Selective laser sintering (SLS) is an additive technique which is showing great promise over conventional manufacturing techniques. SLS requires certain key material properties for a polymer powder to be successfully processed into an end-use part, and therefore limited selection of materials are available. Furthermore, there has been evidence of a powder’s quality deteriorating following each SLS processing cycle. The current investigation serves to build a path forward in identifying new SLS powder materials by developing characterization procedures for identifying key material properties as well as for detecting changes in a powder’s quality. Thermogravimetric analyses, differential scanning calorimetry, and bulk density measurements were investigated.

  11. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  12. Review and selection of unsaturated flow models

    International Nuclear Information System (INIS)

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-01-01

    Since the 1960's, ground-water flow models have been used for analysis of water resources problems. In the 1970's, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970's and well into the 1980's focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  13. Purposeful selection of variables in logistic regression

    Directory of Open Access Journals (Sweden)

    Williams David Keith

    2008-12-01

    Full Text Available Abstract Background The main problem in many model-building situations is to choose from a large set of covariates those that should be included in the "best" model. A decision to keep a variable in the model might be based on the clinical or statistical significance. There are several variable selection algorithms in existence. Those methods are mechanical and as such carry some limitations. Hosmer and Lemeshow describe a purposeful selection of covariates within which an analyst makes a variable selection decision at each step of the modeling process. Methods In this paper we introduce an algorithm which automates that process. We conduct a simulation study to compare the performance of this algorithm with three well-documented variable selection procedures in SAS PROC LOGISTIC: FORWARD, BACKWARD, and STEPWISE. Results We show that the advantage of this approach is when the analyst is interested in risk factor modeling and not just prediction. In addition to significant covariates, this variable selection procedure has the capability of retaining important confounding variables, resulting potentially in a slightly richer model. Application of the macro is further illustrated with the Hosmer and Lemeshow Worcester Heart Attack Study (WHAS) data. Conclusion If an analyst is in need of an algorithm that will help guide the retention of significant covariates as well as confounding ones, they should consider this macro as an alternative tool.
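
    A compact sketch of this purposeful-selection loop, written in Python with statsmodels rather than SAS (a simplified reading of the macro's logic, with invented data and thresholds): a liberal univariable screen, then backward removal of non-significant covariates unless dropping one shifts the remaining coefficients enough to flag it as a confounder.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def purposeful_selection(X, y, p_enter=0.25, p_keep=0.10, delta=0.20):
    """Simplified purposeful selection: liberal univariable screen,
    then backward removal of non-significant covariates, retaining
    any whose removal shifts a remaining coefficient by > delta
    (treated as a confounder)."""
    keep = [c for c in X.columns
            if sm.Logit(y, sm.add_constant(X[[c]])).fit(disp=0).pvalues[c] < p_enter]
    while keep:
        fit = sm.Logit(y, sm.add_constant(X[keep])).fit(disp=0)
        worst = fit.pvalues.drop("const").idxmax()
        if fit.pvalues[worst] <= p_keep:
            break                              # all remaining are significant
        reduced = [c for c in keep if c != worst]
        if not reduced:
            return []
        fit_red = sm.Logit(y, sm.add_constant(X[reduced])).fit(disp=0)
        shift = ((fit_red.params[reduced] - fit.params[reduced]).abs()
                 / fit.params[reduced].abs()).max()
        if shift >= delta:
            break                              # 'worst' confounds: keep it
        keep = reduced
    return keep

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 5)), columns=list("abcde"))
y = pd.Series((X["a"] + 0.5 * X["b"] + rng.normal(size=300) > 0).astype(int))
print(purposeful_selection(X, y))   # typically ['a', 'b']
```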

  14. Role of maturity timing in selection procedures and in the specialisation of playing positions in youth basketball

    NARCIS (Netherlands)

    te Wierike, Sanne Cornelia Maria; Elferink-Gemser, Marije Titia; Tromp, Eveline Jenny Yvonne; Vaeyens, Roel; Visscher, Chris

    2015-01-01

    This study investigated the role of maturity timing in selection procedures and in the specialisation of playing positions in youth male basketball. Forty-three talented Dutch players (14.66 ± 1.09 years) participated in this study. Maturity timing (age at peak height velocity), anthropometric,

  15. Selecting a Risk-Based SQC Procedure for a HbA1c Total QC Plan.

    Science.gov (United States)

    Westgard, Sten A; Bayat, Hassan; Westgard, James O

    2017-09-01

    Recent US practice guidelines and laboratory regulations for quality control (QC) emphasize the development of QC plans and the application of risk management principles. The US Clinical Laboratory Improvement Amendments (CLIA) now includes an option to comply with QC regulations by developing an individualized QC plan (IQCP) based on a risk assessment of the total testing process. The Clinical and Laboratory Standards Institute (CLSI) has provided new practice guidelines for application of risk management to QC plans and statistical QC (SQC). We describe an alternative approach for developing a total QC plan (TQCP) that includes a risk-based SQC procedure. CLIA compliance is maintained by analyzing at least 2 levels of controls per day. A Sigma-Metric SQC Run Size nomogram provides a graphical tool to simplify the selection of risk-based SQC procedures. Current HbA1c method performance, as demonstrated by published method validation studies, is estimated to be 4-Sigma quality at best. Optimal SQC strategies require more QC than the CLIA minimum requirement of 2 levels per day. More complex control algorithms, more control measurements, and a bracketed mode of operation are needed to assure the intended quality of results. A total QC plan with a risk-based SQC procedure provides a simpler alternative to an individualized QC plan. A Sigma-Metric SQC Run Size nomogram provides a practical tool for selecting appropriate control rules, numbers of control measurements, and run size (or frequency of SQC). Applications demonstrate the need for continued improvement of analytical performance of HbA1c laboratory methods.
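
    The sigma-metric that drives the run-size nomogram is simple to compute: sigma = (TEa - |bias|) / CV, with all terms in percent. A minimal sketch follows; the TEa, bias and CV figures are illustrative placeholders, not values from the cited validation studies.

        def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
            """Sigma-metric of an assay: (allowable total error - |bias|) / CV."""
            return (tea_pct - abs(bias_pct)) / cv_pct

        # Hypothetical HbA1c example: TEa = 6%, bias = 1%, CV = 1.25%
        print(sigma_metric(6.0, 1.0, 1.25))   # -> 4.0, i.e. "4-Sigma" quality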

  16. Procedural Optimization Models for Multiobjective Flexible JSSP

    Directory of Open Access Journals (Sweden)

    Elena Simona NICOARA

    2013-01-01

    The most challenging issues related to manufacturing efficiency occur if the jobs to be scheduled are structurally different, if these jobs allow flexible routings on the equipment, and if multiple objectives are required. This framework, called Multi-objective Flexible Job Shop Scheduling Problems (MOFJSSP), applicable to many real processes, has been less reported in the literature than the JSSP framework, which has been extensively formalized, modeled and analyzed from many perspectives. MOFJSSP lie, like many other NP-hard problems, in a tedious place where the vast optimization theory meets the real-world context. The paper brings to discussion the optimization models most suited to MOFJSSP and analyzes in detail genetic algorithms and agent-based models as the most appropriate procedural models.

  17. An incremental procedure model for e-learning projects at universities

    Directory of Open Access Journals (Sweden)

    Pahlke, Friedrich

    2006-11-01

    E-learning projects at universities are produced under different conditions than in industry. The main characteristic of many university projects is that they are realized essentially as a solo effort. In contrast, in private industry the different, interdisciplinary skills that are necessary for the development of e-learning are typically supplied by a multimedia agency. A specific procedure tailored for use at universities is therefore required to facilitate mastering the amount and complexity of the tasks. In this paper an incremental procedure model is presented, which describes the proceeding in every phase of the project. It allows a high degree of flexibility and emphasizes the didactical concept instead of the technical implementation. In the second part, we illustrate the practical use of the theoretical procedure model based on the project "Online training in Genetic Epidemiology".

  18. The applicability of fair selection models in the South African context

    Directory of Open Access Journals (Sweden)

    G. K. Huysamen

    1995-06-01

    This article reviews several models that are aimed at achieving fair selection in situations in which underrepresented groups tend to obtain lower scores on selection tests. Whereas predictive bias is a statistical concept that refers to systematic errors in the prediction of individuals' criterion scores, selection fairness pertains to the extent to which selection results meet certain socio-political demands. The regression and equal-risk models adjust for differences in the criterion-on-test regression lines of different groups. The constant ratio, conditional probability and equal probability models manipulate the test cutoff scores of different groups so that certain ratios formed between different selection outcomes (correct acceptances, correct rejections, incorrect acceptances, incorrect rejections) are the same for such groups. The decision-theoretic approach requires that utilities be attached to these different outcomes for different groups. These procedures are not only eminently suited to accommodate calls for affirmative action, but they also serve the cause of transparency.

  19. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing

    2016-09-20

    A new procedure to develop accurate lumped kinetic models for complex fuels is proposed and applied to experimental data for heavy fuel oil measured by thermogravimetry. The new procedure is based on pseudo-components representing different reaction stages, which are determined by a systematic optimization process to ensure that the different reaction stages are separated with the highest accuracy. The procedure was implemented and the model prediction compared against that from a conventional method, yielding significantly improved agreement with the experimental data. © 2016 American Chemical Society.
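
    To make the pseudo-component idea concrete, a mass-loss-rate (DTG) curve can be fitted as a weighted sum of independent first-order Arrhenius reactions. The sketch below is a generic illustration under assumed conditions (constant heating rate, first-order kinetics, two components), not the authors' optimized model.

        import numpy as np
        from scipy.integrate import cumulative_trapezoid
        from scipy.optimize import curve_fit

        R = 8.314           # J/(mol K)
        BETA = 10.0 / 60.0  # heating rate: 10 K/min expressed in K/s

        def dtg_model(T, *params):
            """Sum of first-order pseudo-component rates; params packs
            (weight, log10 A, E) triples, one per pseudo-component."""
            total = np.zeros_like(T)
            for w, logA, E in np.reshape(params, (-1, 3)):
                k = 10.0 ** logA * np.exp(-E / (R * T))       # 1/s
                # conversion from d(alpha)/dT = k (1 - alpha) / beta
                alpha = 1.0 - np.exp(-cumulative_trapezoid(k / BETA, T,
                                                           initial=0.0))
                total += w * k * (1.0 - alpha) / BETA
            return total

        # With T (K) and dtg (normalized mass-loss rate) from thermogravimetry:
        # p0 = [0.6, 5.0, 9.0e4,  0.4, 7.0, 1.4e5]   # two assumed components
        # popt, _ = curve_fit(dtg_model, T, dtg, p0=p0)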

  20. CHAIN-WISE GENERALIZATION OF ROAD NETWORKS USING MODEL SELECTION

    Directory of Open Access Journals (Sweden)

    D. Bulatov

    2017-05-01

    Streets are essential entities of urban terrain and their automated extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process is a set of so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using circlePeucker; then model selection is used to decide whether two neighboring segments should be fused into a new geometric entity. Thereby, we consider both variance-covariance analysis of residuals and model complexity. The results on a complex dataset with many traffic roundabouts indicate the benefits of the proposed procedure.
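
    The fuse-or-not decision can be illustrated with a generic information criterion standing in for the paper's variance-covariance test: fit both a straight line and a circle to the merged points and keep whichever wins after penalizing the extra parameter. A hedged numpy sketch:

        import numpy as np

        def bic(rss, n, k):
            """Bayesian information criterion under Gaussian residuals."""
            return n * np.log(rss / n) + k * np.log(n)

        def line_rss(pts):
            """Residual sum of squares of a total-least-squares line fit."""
            s = np.linalg.svd(pts - pts.mean(axis=0), compute_uv=False)
            return s[-1] ** 2          # energy orthogonal to the best line

        def circle_rss(pts):
            """Residual sum of squares of an algebraic (Kasa) circle fit."""
            x, y = pts[:, 0], pts[:, 1]
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            (cx, cy, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
            r = np.sqrt(c + cx**2 + cy**2)
            return np.sum((np.hypot(x - cx, y - cy) - r) ** 2)

        def prefer_circle(pts):
            """True if one circle arc describes the merged chain better than
            a line, given the extra degree of freedom (2 vs 3 parameters)."""
            n = len(pts)
            return bic(circle_rss(pts), n, 3) < bic(line_rss(pts), n, 2)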

  1. Addressing selected problems of the modelling of digital control systems

    International Nuclear Information System (INIS)

    Sedlak, J.

    2004-12-01

    The introduction of digital systems to practical activities at nuclear power plants brings about new requirements for their modelling for the purposes of reliability analyses required for plant licensing as well as for inclusion into PSA studies and subsequent use in applications for the assessment of events, limits and conditions, and risk monitoring. It is very important to assess, both qualitatively and quantitatively, the effect of this change on operational safety. The report describes selected specific features of reliability analysis of digital systems and recommends methodological procedures. The chapters of the report are as follows: (1) Flexibility and multifunctionality of the system. (2) General framework of reliability analyses (Understanding the system; Qualitative analysis; Quantitative analysis; Assessment of results, comparison against criteria; Documenting system reliability analyses; Asking for comments and their evaluation); and (3) Suitable reliability models (Reliability models of basic events; Monitored components with repair immediately following defect or failure; Periodically tested components; Constant unavailability (probability of failure on demand); Application of reliability models for electronic components; Example of failure rate decomposition; Example modified for diagnosis successfulness; Transfer of reliability analyses to PSA; Common cause failures - CCF; Software backup and CCF-type failures, software versus hardware). (P.A.)

  2. Quality Quandaries: Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
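
    Parsimony can be made operational by scoring candidate orders with an information criterion, which penalizes every extra parameter. A minimal statsmodels sketch (the Internet-server series itself is not reproduced; y is any univariate series):

        import itertools
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def best_arma(y, max_p=3, max_q=3):
            """Return the ARMA(p, q) order with the lowest AIC. Restricting
            the search to pure AR models (q = 0) typically ends in larger,
            less parsimonious fits than a mixed ARMA search."""
            best_aic, best_order = np.inf, None
            for p, q in itertools.product(range(max_p + 1), range(max_q + 1)):
                if p == q == 0:
                    continue
                try:
                    res = ARIMA(y, order=(p, 0, q)).fit()
                except Exception:
                    continue              # skip orders that fail to converge
                if res.aic < best_aic:
                    best_aic, best_order = res.aic, (p, q)
            return best_order, best_aic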

  3. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.

  4. A power set-based statistical selection procedure to locate susceptible rare variants associated with complex traits with sequencing data.

    Science.gov (United States)

    Sun, Hokeun; Wang, Shuang

    2014-08-15

    Existing association methods for rare variants from sequencing data have focused on aggregating variants in a gene or a genetic region because analysing individual rare variants is underpowered. However, these existing rare variant detection methods are not able to identify which of all the rare variants in a gene or a genetic region are associated with the complex disease or trait. Once phenotypic associations of a gene or a genetic region are identified, the natural next step in an association study with sequencing data is to locate the susceptible rare variants within the gene or the genetic region. In this article, we propose a power set-based statistical selection procedure that is able to identify the locations of the potentially susceptible rare variants within a disease-related gene or a genetic region. The selection performance of the proposed selection procedure was evaluated through simulation studies, where we demonstrated its feasibility and superior power over several comparable existing methods. In particular, the proposed method is able to handle the mixed effects when both risk and protective variants are present in a gene or a genetic region. The proposed selection procedure was also applied to the sequence data on the ANGPTL gene family from the Dallas Heart Study to identify potentially susceptible rare variants within the trait-related genes. An R package 'rvsel' can be downloaded from http://www.columbia.edu/∼sw2206/ and http://statsun.pusan.ac.kr. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Chaotic Dynamical State Variables Selection Procedure Based Image Encryption Scheme

    Directory of Open Access Journals (Sweden)

    Zia Bashir

    2017-12-01

    Nowadays, in the modern digital era, the use of computer technologies such as smartphones, tablets and the Internet, together with the enormous quantity of confidential information being converted into digital form, has raised security issues. This, in turn, has led to rapid developments in cryptography, due to the imminent need for system security. Low-dimensional chaotic systems have low complexity and a small key space, yet they achieve high encryption speed. An image encryption scheme is proposed that, without compromising security, uses reasonable resources. We introduce a chaotic dynamic state variables selection procedure (CDSVSP) to use all state variables of a hyper-chaotic four-dimensional dynamical system. As a result, fewer iterations of the dynamical system are required and resources are saved, thus making the algorithm fast and suitable for practical use. The simulation results of security and other miscellaneous tests demonstrate that the suggested algorithm excels at robustness, security and high-speed encryption.

  6. 28 CFR 30.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-07-01

    Section 30.6, Judicial Administration, DEPARTMENT OF JUSTICE: ... consult with local elected officials. (b) Each state that adopts a process shall notify the Attorney ...

  7. 49 CFR 17.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-10-01

    Section 17.6, Office of the Secretary of Transportation, INTERGOVERNMENTAL REVIEW OF DEPARTMENT OF TRANSPORTATION PROGRAMS AND ACTIVITIES. § 17.6 What ...

  8. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  9. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages.
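
    The single-model idea amounts to stacking all segments and runs into one design matrix and solving a single least-squares problem. The sketch below, with two linear segments joined by a hinge term and an additive per-run offset, is a simplified illustration of the approach, not the Thomas-Liebetrau extension itself.

        import numpy as np

        def single_model_fit(h, v, run_id, knot):
            """One regression for a two-segment calibration curve v(h) with a
            per-run additive offset. h: liquid heights, v: measured volumes,
            run_id: integer run labels, knot: height of the segment boundary."""
            h, v, run_id = map(np.asarray, (h, v, run_id))
            seg2 = np.clip(h - knot, 0.0, None)   # hinge: zero below the knot
            runs = np.unique(run_id)
            # columns: intercept, segment-1 slope, extra segment-2 slope,
            # then one offset column per run beyond the first
            X = np.column_stack(
                [np.ones_like(h), h, seg2]
                + [(run_id == r).astype(float) for r in runs[1:]])
            beta, *_ = np.linalg.lstsq(X, v, rcond=None)
            return beta   # curve is continuous at the knot by construction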

  10. Selective Intra-procedural AAA sac Embolization During EVAR Reduces the Rate of Type II Endoleak.

    Science.gov (United States)

    Mascoli, C; Freyrie, A; Gargiulo, M; Gallitto, E; Pini, R; Faggioli, G; Serra, C; De Molo, C; Stella, A

    2016-05-01

    The pre-treatment presence of at least six efferent patent vessels (EPV) from the AAA sac and/or AAA thrombus volume ratio (VR%) AAA sac embolization (Group A, 2012-2013) were retrospectively selected and compared with a control group of patients with the same p-MRF, who underwent EVAR without intra-procedural sac embolization (Group B, 2008-2010). The presence of ELIIp was evaluated by duplex ultrasound at 0 and 6 months, and by contrast enhanced ultrasound at 12 months. The association of AAA diameter, age, COPD, smoking, anticoagulant therapy, and AAA sac embolization with ELIIp was evaluated using multiple logistic regression. The primary endpoint was the effectiveness of the intra-procedural AAA sac embolization for ELIIp prevention. Secondary endpoints were AAA sac evolution and freedom from ELIIp- and embolization-related re-interventions at 6-12 months. Seventy patients were analyzed: 26 in Group A and 44 in Group B; the groups were homogeneous in clinical/morphological characteristics. In Group A the median number of coils positioned in the AAA sac was 4.1 (IQR 1). There were no complications related to the embolization procedures. A significantly lower number of ELIIp was detected in Group A than in Group B (8/26 vs. 33/44, respectively, p AAA sac embolization was the only factor independently associated with freedom from ELIIp at 6 (OR 0.196, 95% CI 0.06-0.63; p = .007) and 12 months (OR 0.098, 95% CI 0.02-0.35; p AAA sac diameter shrinkage were detected between the two groups at 6-12 months (p = .42 and p = .58, respectively). Freedom from ELIIp-related and embolization-related re-interventions was 100% in both groups, at 6 and 12 months. Selective intra-procedural AAA sac embolization in patients with p-MRF is safe and could be an effective method to reduce ELIIp. Further studies are mandatory to support these results at long-term follow-up. Copyright © 2015 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  11. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from filtering direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
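
    The training step of the first procedure can be sketched generically; here scikit-learn's MLPRegressor stands in for the authors' network, and the inputs are whatever filtered quantities the optimal-estimator analysis selects (all settings below are assumptions).

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def train_sgs_model(X: np.ndarray, y: np.ndarray):
            """Fit an ANN surrogate for the subgrid scalar flux.
            X: filtered-DNS input features, shape (n_samples, n_features);
            y: exact subgrid flux computed by filtering the DNS fields."""
            model = make_pipeline(
                StandardScaler(),
                MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                             max_iter=2000, random_state=0))
            model.fit(X, y)
            return model   # callable surrogate for use inside the LES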

  12. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.; Genton, Marc G.

    2012-01-01

    for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical

  13. Mutation-selection models of codon substitution and their use to estimate selective strengths on codon usage

    DEFF Research Database (Denmark)

    Yang, Ziheng; Nielsen, Rasmus

    2008-01-01

    Current models of codon substitution are formulated at the level of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we implement a few population genetics models of codon substitution that explicitly consider mutation bias and natural selection at the DNA level. Selection on codon usage is modeled by introducing codon-fitness parameters, which, together with mutation-bias parameters, predict optimal codon frequencies. The models are applied to compare mutation bias and selection as possible explanations for codon usage in mammals. Estimates of selection coefficients nevertheless suggest that selection on codon usage is weak and most mutations are nearly neutral. The sensitivity of the analysis to the assumed mutation model is discussed.

  14. 49 CFR 542.1 - Procedures for selecting new light duty truck lines that are likely to have high or low theft rates.

    Science.gov (United States)

    2010-10-01

    Section 542.1, Department of Transportation, Procedures for Selecting Light Duty Truck Lines To Be Covered by the Theft Prevention Standard: (a) Scope. This section sets forth the procedures for motor vehicle manufacturers ...

  15. A Computational Model of Selection by Consequences

    Science.gov (United States)

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  16. A smart Monte Carlo procedure for production costing and uncertainty analysis

    International Nuclear Information System (INIS)

    Parker, C.; Stremel, J.

    1996-01-01

    Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party to the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
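
    The difference between the first two outage-selection methods can be sketched in a few lines: Latin Hypercube sampling stratifies the availability draws for each unit, spreading samples evenly over probability space so that fewer replications are needed for stable cost estimates. The forced-outage rates below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        FOR = np.array([0.05, 0.08, 0.12, 0.10])  # assumed forced-outage rates

        def brute_force_outages(n_runs):
            """Plain Monte Carlo: a unit is on outage when its draw < FOR."""
            return rng.random((n_runs, FOR.size)) < FOR

        def lhs_outages(n_runs):
            """Latin Hypercube: one draw per equal-probability stratum per
            unit, shuffled independently across units."""
            strata = np.arange(n_runs)[:, None]
            u = (strata + rng.random((n_runs, FOR.size))) / n_runs
            for j in range(FOR.size):
                rng.shuffle(u[:, j])      # decorrelate strata across units
            return u < FOR

        # Each row is one scenario of unit availabilities to be fed to the
        # chronological production-costing model.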

  17. The plant operating procedure information modeling system for creation and maintenance of procedures

    International Nuclear Information System (INIS)

    Fanto, S.V.; Petras, D.S.; Reiner, R.T.; Frost, D.R.; Orendi, R.G.

    1990-01-01

    This paper reports that, as a result of the accident at Three Mile Island, regulatory requirements were issued to upgrade Emergency Operating Procedures (EOPs) for nuclear power plants. The use of human-factored, function-oriented EOPs was mandated to improve human reliability and to mitigate the consequences of a broad range of initiating events, subsequent failures and operator errors, without having to first diagnose the specific events. The Westinghouse Owners Group responded by developing the Emergency Response Guidelines (ERGs) in a human-factored, two-column format to aid the transfer of the improved technical information to the operator during transients and accidents. The ERGs are a network of 43 interrelated guidelines which specify operator actions to be taken during plant emergencies to restore the plant to a safe and stable condition. Each utility then translates these guidelines into plant-specific EOPs. The creation and maintenance of this large web of interconnecting ERGs/EOPs is an extremely complex task. To aid procedure documentation specialists with this time-consuming and tedious task, the Plant Operating Procedure Information Modeling system was developed to provide a controlled and consistent means of building and maintaining the ERGs/EOPs and their supporting documentation.

  18. Label fusion based brain MR image segmentation via a latent selective model

    Science.gov (United States)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective approach, increasingly popular for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirement for higher accuracy, faster segmentation, and robustness is always a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch latent selective model to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and regarded as an isolated label, so that the background has the same status as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  19. Partner Selection in a Virtual Enterprise: A Group Multiattribute Decision Model with Weighted Possibilistic Mean Values

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2013-01-01

    This paper proposes an extended technique for order preference by similarity to ideal solution (TOPSIS) for partner selection in a virtual enterprise (VE). The imprecise and fuzzy information of the partner candidate and the risk preferences of decision makers are both considered in the group multiattribute decision-making model. The weighted possibilistic mean values are used to handle triangular fuzzy numbers in the fuzzy environment. A ranking procedure for partner candidates is developed to help decision makers with varying risk preferences select the most suitable partners. Numerical examples are presented to reflect the feasibility and efficiency of the proposed TOPSIS. Results show that the varying risk preferences of decision makers play a significant role in the partner selection process in a VE under a fuzzy environment.
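
    A small sketch makes the defuzzification step concrete. Under the commonly used weighting function f(gamma) = 2*gamma, the weighted possibilistic mean of a triangular fuzzy number (l, m, u) reduces to (l + 4m + u)/6, after which the crisp matrix can go through standard TOPSIS. All ratings and weights below are placeholders, not values from the paper's examples.

        import numpy as np

        def crisp(tfn):
            """Weighted possibilistic mean of a triangular fuzzy number."""
            l, m, u = tfn
            return (l + 4 * m + u) / 6.0

        def topsis(X, weights, benefit):
            """Standard TOPSIS on a crisp decision matrix (rows = candidates);
            returns relative closeness to the ideal (higher is better)."""
            V = X / np.linalg.norm(X, axis=0) * weights
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - nadir, axis=1)
            return d_neg / (d_pos + d_neg)

        # Hypothetical fuzzy ratings of three partners on two benefit criteria:
        fuzzy = [[(3, 5, 7), (5, 7, 9)],
                 [(1, 3, 5), (7, 9, 9)],
                 [(5, 7, 9), (3, 5, 7)]]
        X = np.array([[crisp(t) for t in row] for row in fuzzy])
        print(topsis(X, np.array([0.6, 0.4]), np.array([True, True])))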

  20. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  1. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing; Elbaz, Ayman M.; Roberts, William L.; Im, Hong G.

    2016-01-01

    A new procedure to develop accurate lumped kinetic models for complex fuels is proposed, and applied to the experimental data of the heavy fuel oil measured by thermogravimetry. The new procedure is based on the pseudocomponents representing

  2. THE DEVELOPMENT OF A NOVEL MODEL FOR MINING METHOD SELECTION IN A FUZZY ENVIRONMENT; CASE STUDY: TAZAREH COAL MINE, SEMNAN PROVINCE, IRAN

    Directory of Open Access Journals (Sweden)

    Fatemeh Asadi Ooriad

    2018-01-01

    Mining method selection (MMS) for mineral resources is one of the most significant steps in mining production management. Due to the high costs involved and the environmental problems, it is usually not possible to change the coal mining method after planning and starting the operation. In most cases, MMS can be considered an irreversible process. Selecting a mining method mainly depends on the geological and geometrical properties of the resource, the environmental impacts of exploration, the impacts of hazardous activities, and land use management. This paper seeks to develop a novel model for mining method selection in order to achieve a stable production rate and to reduce environmental problems. The model is illustrated by implementing it for the Tazareh coal mine. Given the disadvantages of previous models for selecting a coal mining method, the purpose of this research is to modify the previous models and offer a comprehensive one. In this respect, the TOPSIS method is used as a powerful multi-attribute decision-making procedure in a fuzzy environment. After implementation of the presented model for the Tazareh coal mine, the longwall mining method was selected as the most appropriate.

  3. A sequential extraction procedure to determine Ra and U isotopes by alpha-particle spectrometry in selective leachates

    International Nuclear Information System (INIS)

    Aguado, J.L.; Bolivar, J.P.; San-Miguel, E.G.; Garcia-Tenorio, R.

    2003-01-01

    A radiochemical sequential extraction procedure has been developed in our laboratory to determine 226Ra and 234,238U by alpha spectrometry in environmental samples. The method has been validated for both radionuclides by comparing, in selected samples, the values obtained through its application with the results obtained by applying alternative procedures. The recoveries obtained, counting periods applied and background levels found in the alpha spectra give detection limits suitable for Ra and U determination in the operational forms defined in contaminated riverbed sediments. Results obtained in these speciation studies show that the 226Ra and 234,238U contamination tends to be associated with the precipitated forms of the sediments. (author)

  4. Visual perception of procedural textures: identifying perceptual dimensions and predicting generation models.

    Science.gov (United States)

    Liu, Jun; Dong, Junyu; Cai, Xiaoxu; Qi, Lin; Chantler, Mike

    2015-01-01

    Procedural models are widely used in computer graphics for generating realistic, natural-looking textures. However, these mathematical models are not perceptually meaningful, whereas the users, such as artists and designers, would prefer to make descriptions using intuitive and perceptual characteristics like "repetitive," "directional," "structured," and so on. To make up for this gap, we investigated the perceptual dimensions of textures generated by a collection of procedural models. Two psychophysical experiments were conducted: free-grouping and rating. We applied Hierarchical Cluster Analysis (HCA) and Singular Value Decomposition (SVD) to discover the perceptual features used by the observers in grouping similar textures. The results suggested that existing dimensions in literature cannot accommodate random textures. We therefore utilized isometric feature mapping (Isomap) to establish a three-dimensional perceptual texture space which better explains the features used by humans in texture similarity judgment. Finally, we proposed computational models to map perceptual features to the perceptual texture space, which can suggest a procedural model to produce textures according to user-defined perceptual scales.
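
    The embedding step maps directly onto a standard library call. A minimal scikit-learn sketch, where features stands in for whatever per-texture representation is derived from the psychophysical data, and the neighborhood size is an assumption:

        import numpy as np
        from sklearn.manifold import Isomap

        def perceptual_space(features: np.ndarray, n_neighbors: int = 8):
            """Embed textures into a 3-D perceptual space via Isomap.
            features: (n_textures, n_features) matrix."""
            return Isomap(n_neighbors=n_neighbors,
                          n_components=3).fit_transform(features)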

  5. Visual perception of procedural textures: identifying perceptual dimensions and predicting generation models.

    Directory of Open Access Journals (Sweden)

    Jun Liu

    Procedural models are widely used in computer graphics for generating realistic, natural-looking textures. However, these mathematical models are not perceptually meaningful, whereas the users, such as artists and designers, would prefer to make descriptions using intuitive and perceptual characteristics like "repetitive," "directional," "structured," and so on. To make up for this gap, we investigated the perceptual dimensions of textures generated by a collection of procedural models. Two psychophysical experiments were conducted: free-grouping and rating. We applied Hierarchical Cluster Analysis (HCA) and Singular Value Decomposition (SVD) to discover the perceptual features used by the observers in grouping similar textures. The results suggested that existing dimensions in the literature cannot accommodate random textures. We therefore utilized isometric feature mapping (Isomap) to establish a three-dimensional perceptual texture space which better explains the features used by humans in texture similarity judgment. Finally, we proposed computational models to map perceptual features to the perceptual texture space, which can suggest a procedural model to produce textures according to user-defined perceptual scales.

  6. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
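
    The same class of model is easy to express outside SPSS as well. For readers working in Python, a hedged equivalent of a random-intercept, random-slope growth model in statsmodels (column names are hypothetical):

        import pandas as pd
        import statsmodels.formula.api as smf

        def fit_growth_model(df: pd.DataFrame):
            """LMM for long-format longitudinal data: one row per subject per
            occasion, with columns 'score' (continuous outcome), 'time', and
            'subject' (grouping id)."""
            model = smf.mixedlm("score ~ time", data=df,
                                groups=df["subject"], re_formula="~time")
            return model.fit(reml=True)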

  7. Path analysis and multi-criteria decision making: an approach for multivariate model selection and analysis in health.

    Science.gov (United States)

    Vasconcelos, A G; Almeida, R M; Nobre, F F

    2001-08-01

    This paper introduces an approach that includes non-quantitative factors in the selection and assessment of multivariate complex models in health. A goodness-of-fit-based methodology combined with a fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service and agglomeration factors as endogenous variables. Five PA models were developed and accepted by statistical criteria of goodness-of-fit. These models were then submitted to a group of experts, seeking to characterize their preferences according to predefined criteria that evaluate model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model was superior to ("dominated") the others. The best-ranked model explained over 90% of the variation in the endogenous variables, and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.

  8. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  9. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets. Model performance was good under both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  10. Interactive Procedural Modelling of Coherent Waterfall Scenes

    OpenAIRE

    Emilien , Arnaud; Poulin , Pierre; Cani , Marie-Paule; Vimont , Ulysse

    2015-01-01

    Combining procedural generation and user control is a fundamental challenge for the interactive design of natural scenery. This is particularly true for modelling complex waterfall scenes where, in addition to taking charge of geometric details, an ideal tool should also provide a user with the freedom to shape the running streams and falls, while automatically maintaining physical plausibility in terms of flow network, embedding into the terrain, and visual aspects of...

  11. The Use of a Probit Model for the Validation of Selection Procedures.

    Science.gov (United States)

    Dagenais, Denyse L.

    1984-01-01

    After a review of the disadvantages of linear models for estimating the probability of academic success from previous school records and admission test results, the use of a probit model is proposed. The model is illustrated with admissions data from the Ecole des Hautes Etudes Commerciales in Montreal. (Author/BW)

  12. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  13. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method
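
    A compact version of genetic-search feature selection: individuals are binary masks over the feature set, and fitness is the cross-validated accuracy of a model trained on the masked features. This is a generic sketch (not the authors' exact operators or affective models), using scikit-learn.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def ga_select(X, y, pop=20, gens=30, p_mut=0.05):
            """Genetic search over boolean feature masks."""
            n = X.shape[1]
            P = rng.random((pop, n)) < 0.5

            def fitness(mask):
                if not mask.any():
                    return 0.0
                clf = LogisticRegression(max_iter=1000)
                return cross_val_score(clf, X[:, mask], y, cv=3).mean()

            for _ in range(gens):
                scores = np.array([fitness(m) for m in P])
                P = P[np.argsort(scores)[::-1]]              # best masks first
                children = []
                while len(children) < pop - pop // 2:
                    a, b = P[rng.integers(0, pop // 2, size=2)]  # top half
                    cut = rng.integers(1, n)                 # 1-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    child ^= rng.random(n) < p_mut           # bit-flip mutation
                    children.append(child)
                P[pop // 2:] = children                      # keep elite half
            scores = np.array([fitness(m) for m in P])
            return P[scores.argmax()]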

  14. SNP calling using genotype model selection on high-throughput sequencing data

    KAUST Repository

    You, Na

    2012-01-16

    Motivation: A review of the available single nucleotide polymorphism (SNP) calling procedures for Illumina high-throughput sequencing (HTS) platform data reveals that most rely mainly on base-calling and mapping qualities as sources of error when calling SNPs. Thus, errors not involved in base-calling or alignment, such as those in genomic sample preparation, are not accounted for. Results: A novel method of consensus and SNP calling, Genotype Model Selection (GeMS), is given which accounts for the errors that occur during the preparation of the genomic sample. Simulations and real data analyses indicate that GeMS has the best performance balance of sensitivity and positive predictive value among the tested SNP callers. © The Author 2012. Published by Oxford University Press. All rights reserved.

  15. Selective versus routine patch metal allergy testing to select bar material for the Nuss procedure in 932 patients over 10 years.

    Science.gov (United States)

    Obermeyer, Robert J; Gaffar, Sheema; Kelly, Robert E; Kuhn, M Ann; Frantz, Frazier W; McGuire, Margaret M; Paulson, James F; Kelly, Cynthia S

    2018-02-01

    The aim of the study was to determine the role of patch metal allergy testing in selecting bar material for the Nuss procedure. An IRB-approved (11-04-WC-0098) single-institution retrospective cohort study comparing selective versus routine patch metal allergy testing to select stainless steel or titanium bars for Nuss repair was performed. In Cohort A (9/2004-1/2011), selective patch testing was performed based on clinical risk factors. In Cohort B (2/2011-9/2014), all patients were patch tested. The cohorts were compared for incidence of bar allergy and resultant premature bar loss. Risk factors for stainless steel allergy or a positive patch test were evaluated. Cohort A had 628 patients with 63 (10.0%) selected for patch testing, while all 304 patients in Cohort B were tested. Over 10 years, 15 (1.8%) of the 842 stainless steel Nuss repairs resulted in a bar allergy, and 5 had a negative preoperative patch test. The incidence of stainless steel bar allergy (1.8% vs 1.7%, p=0.57) and resultant bar loss (0.5% vs 1.3%, p=0.23) was not statistically different between cohorts. An allergic reaction to a stainless steel bar or a positive patch test was more common in females (OR=2.3, p ...). Bar allergies occur at a low incidence with either routine or selective patch metal allergy testing. If selective testing is performed, it is advisable in females and patients with a personal or family history of metal sensitivity. A negative preoperative patch metal allergy test does not preclude the possibility of a postoperative stainless steel bar allergy. Level III Treatment Study and Study of Diagnostic Test. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Modeling HIV-1 drug resistance as episodic directional selection.

    Science.gov (United States)

    Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L; Scheffler, Konrad

    2012-01-01

    The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection are able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  17. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection are able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  18. The Use of a Fresh-Tissue Cadaver Model for the Instruction of Dermatological Procedures: A Laboratory Study for Training Medical Students.

    Science.gov (United States)

    Cervantes, Jose A; Costello, Collin M; Maarouf, Melody; McCrary, Hilary C; Zeitouni, Nathalie C

    2017-09-01

    A realistic model for the instruction of basic dermatologic procedural skills was developed, while simultaneously increasing medical student exposure to the field of dermatology. The primary purpose of the authors' study was to evaluate the utilization of a fresh-tissue cadaver model (FTCM) as a method for the instruction of common dermatologic procedures. The authors' secondary aim was to assess students' perceived clinical skills and overall perception of the field of dermatology after the lab. Nineteen first- and second-year medical students were pre- and post-tested on their ability to perform punch and excisional biopsies on a fresh-tissue cadaver. Students were then surveyed on their experience. Assessment of the cognitive knowledge gain and technical skills revealed a statistically significant improvement in all categories (p < .001). An analysis of the survey demonstrated that 78.9% were more interested in selecting dermatology as a career and 63.2% of participants were more likely to refer their future patients to a Mohs surgeon. An FTCM is a viable method for the instruction and training of dermatologic procedures. In addition, the authors conclude that an FTCM provides realistic instruction for common dermatologic procedures and enhances medical students' early exposure and interest in the field of dermatology.

  19. Safety analysis procedures for PHWR

    International Nuclear Information System (INIS)

    Min, Byung Joo; Kim, Hyoung Tae; Yoo, Kun Joong

    2004-03-01

    The methodology of safety analyses for CANDU reactors in Canada, the vendor country, uses a combination of best-estimate physical models and conservative input parameters so as to minimize the uncertainty of the plant behavior predictions. By using conservative input parameters, the safety analyses are assured of meeting the regulatory requirements for the public dose, the integrity of the fuel and fuel channels, the integrity of the containment and reactor structures, etc. However, there are no comprehensive and systematic procedures for safety analyses of CANDU reactors in Korea. In this regard, safety analysis procedures for CANDU reactors are being developed, not only to establish a safety analysis system but also to enhance the quality assurance of the safety assessment. In the first phase of this study, the general procedures of the deterministic safety analyses were developed. These general procedures cover the specification of the initiating event, selection of the methodology and accident sequences, computer codes, safety analysis procedures, verification of errors and uncertainties, etc. Finally, these general procedures are applied to the Large Break Loss Of Coolant Accident (LBLOCA) in the Final Safety Analysis Report (FSAR) for Wolsong units 2, 3 and 4.

  20. A Dynamic Model for Limb Selection

    NARCIS (Netherlands)

    Cox, R.F.A; Smitsman, A.W.

    2008-01-01

    Two experiments and a model on limb selection are reported. In Experiment 1 left-handed and right-handed participants (N = 36) repeatedly used one hand for grasping a small cube. After a clear switch in the cube’s location, perseverative limb selection was revealed in both handedness groups. In...

  1. A Procedure for Building Product Models in Intelligent Agent-based Operations Management

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin

    2003-01-01

    This article presents a procedure for building product models to support the specification processes dealing with sales, design of product variants and production preparation. The procedure includes, as the first phase, an analysis and redesign of the business processes that are to be supported b...

  2. Modelling the Determinants of Winning in Public Tendering Procedures Based on the Activity of a Selected Company

    Directory of Open Access Journals (Sweden)

    Maciej Malara

    2012-01-01

    Full Text Available The purpose of this article is to identify the factors influencing the probability of winning in public procurement procedures and to assess the strength of their impact from the perspectives of both the bidder and the procurer. The research was conducted with the use of a series of quantitative methods: binary logistic regression, discriminant analysis and cluster analysis. It was based on a sample of public tenders in which the examined company acted as a bidder. Thus, the research process was aimed both at identifying the factors of success and at estimating the probability of achieving it, where it was possible to obtain probabilities. The main idea of this research is to answer questions about the utility of various methods of quantitative analysis in the case of analyzing determinants of success. Results of the research are presented in the following sequence of sections: characteristics of the examined material, the process of modelling the probability of winning, evaluation of the quality of the results obtained. (original abstract)
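
    The role of binary logistic regression in such an analysis can be illustrated with a short, hedged sketch; the predictors below (relative bid price, number of bidders) are invented stand-ins, not the study's actual variables.

        # Minimal sketch of modelling win probability in public tenders with
        # logistic regression (scikit-learn). Feature names are hypothetical
        # illustrations, not the variables used in the cited study.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        n = 200
        # Hypothetical predictors: bid price relative to estimate, number of bidders
        rel_price = rng.uniform(0.7, 1.3, n)
        n_bidders = rng.integers(2, 12, n)
        X = np.column_stack([rel_price, n_bidders])
        # Synthetic outcome: cheaper bids and fewer competitors win more often
        logit = -8.0 * (rel_price - 1.0) - 0.3 * (n_bidders - 6)
        won = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        model = make_pipeline(StandardScaler(), LogisticRegression())
        model.fit(X, won)
        # Estimated probability of winning a new tender
        print(model.predict_proba([[0.95, 4]])[0, 1])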

  3. Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure

    Science.gov (United States)

    Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu

    2006-01-01

    Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models that aim at predicting streamflow from the knowledge of precipitation over a catchment have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging its performance against observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new approach to the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order preference by similarity to ideal solution) for the Xinanjiang model, is presented. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose obvious disadvantage was that it split the whole procedure into two parts, making it difficult to grasp the overall best behavior of the model during the calibration procedure. The current method integrates the two parts of the Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and demonstrating the intrinsic behavior of the observed data as a whole. Comparison of results with the two-step procedure shows that the current methodology gives similar results to the previous method and is also feasible and robust, but simpler and easier to apply in practice.
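
    The TOPSIS ranking step lends itself to a compact illustration. The following sketch implements the standard TOPSIS procedure in Python/NumPy; the criteria, weights and scores are made up for illustration and are not taken from the Xinanjiang calibration study.

        # A minimal TOPSIS ranking sketch (NumPy). In a GA+TOPSIS calibration,
        # each row would be a candidate parameter set and each column one
        # calibration criterion; the numbers below are made up for illustration.
        import numpy as np

        def topsis(matrix, weights, benefit):
            """Rank alternatives (rows) on criteria (columns).
            benefit[j] is True if larger values of criterion j are better."""
            m = np.asarray(matrix, dtype=float)
            # 1. Vector-normalize each criterion column
            norm = m / np.sqrt((m ** 2).sum(axis=0))
            # 2. Apply criterion weights
            v = norm * np.asarray(weights, dtype=float)
            # 3. Ideal and anti-ideal solutions per criterion
            best = np.where(benefit, v.max(axis=0), v.min(axis=0))
            worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
            # 4. Euclidean distances to ideal and anti-ideal
            d_best = np.linalg.norm(v - best, axis=1)
            d_worst = np.linalg.norm(v - worst, axis=1)
            # 5. Relative closeness to the ideal; higher is better
            return d_worst / (d_best + d_worst)

        # Three candidate parameter sets scored on Nash-Sutcliffe efficiency
        # (benefit criterion) and volume bias (cost criterion):
        scores = topsis([[0.85, 0.10], [0.80, 0.04], [0.90, 0.20]],
                        weights=[0.6, 0.4], benefit=[True, False])
        print(scores.argsort()[::-1])  # indices from best to worst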

  4. A testing procedure for wind turbine generators based on the power grid statistical model

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter

    2017-01-01

    In this study, a comprehensive test procedure is developed to test wind turbine generators with a hardware-in-the-loop setup. The procedure employs a statistical model of the power grid, considering the restrictions of the test facility and the system dynamics. Given the model in the latent space...

  5. Cost-benefit analysis model: A tool for area-wide fruit fly management. Procedures manual

    International Nuclear Information System (INIS)

    Enkerlin, W.; Mumford, J.; Leach, A.

    2007-03-01

    The Generic Fruit Fly Cost-Benefit Analysis Model assists in economic decision making associated with area-wide fruit fly control options. The FRUIT FLY COST-BENEFIT ANALYSIS PROGRAM (available on 1 CD-ROM from the Joint FAO/IAEA Programme of Nuclear Techniques in Food and Agriculture) is an Excel 2000 Windows-based program, for which all standard Windows and Excel conventions apply. The Model is user-friendly and thus largely self-explanatory. Nevertheless, it includes a procedures manual that has been prepared to guide the user and should thus be used together with the software. Please note that the table presenting the pest management options on the introductory page of the model is controlled by spin buttons and click boxes. These controls are linked to macros that hide non-relevant tables and boxes. N.B. It is important that the medium level of security is selected from the Tools menu of Excel; to do this, go to Tools|Macros|Security and select Medium. When the file is opened, a form will appear containing three buttons; click on the middle button, 'Enable Macros', so that the macros may be used. Ideally the model should be used as a support tool by working groups aiming at assessing the economic returns of different fruit fly control options (suppression, eradication, containment and prevention). The working group should include professionals in agriculture with experience in area-wide implementation of integrated pest management programmes, an economist or at least someone with basic knowledge of economics, and, if relevant, an entomologist with some background in the application of the sterile insect technique (SIT).

  6. The photon identification loophole in EPRB experiments: computer models with single-wing selection

    Science.gov (United States)

    De Raedt, Hans; Michielsen, Kristel; Hess, Karl

    2017-11-01

    Recent Einstein-Podolsky-Rosen-Bohm experiments [M. Giustina et al. Phys. Rev. Lett. 115, 250401 (2015); L. K. Shalm et al. Phys. Rev. Lett. 115, 250402 (2015)] that claim to be loophole free are scrutinized. The combination of a digital computer and discrete-event simulation is used to construct a minimal but faithful model of the most perfected realization of these laboratory experiments. In contrast to prior simulations, all photon selections are strictly made, as they are in the actual experiments, at the local station and no other "post-selection" is involved. The simulation results demonstrate that a manifestly non-quantum model that identifies photons in the same local manner as in these experiments can produce correlations that are in excellent agreement with those of the quantum theoretical description of the corresponding thought experiment, in conflict with Bell's theorem which states that this is impossible. The failure of Bell's theorem is possible because of our recognition of the photon identification loophole. Such identification measurement-procedures are necessarily included in all actual experiments but are not included in the theory of Bell and his followers.

  7. Comparative Effectiveness of Echoic and Modeling Procedures in Language Instruction With Culturally Disadvantaged Children.

    Science.gov (United States)

    Stern, Carolyn; Keislar, Evan

    In an attempt to explore a systematic approach to language expansion and improved sentence structure, echoic and modeling procedures for language instruction were compared. Four hypotheses were formulated: (1) children who use modeling procedures will produce better structured sentences than children who use echoic prompting, (2) both echoic and…

  8. Are the results of questionnaires measuring non-cognitive characteristics during the selection procedure for medical school application biased by social desirability?

    Science.gov (United States)

    Obst, Katrin U; Brüheim, Linda; Westermann, Jürgen; Katalinic, Alexander; Kötter, Thomas

    2016-01-01

    Introduction: A stronger consideration of non-cognitive characteristics in Medical School application procedures is desirable. Psychometric tests could be used as an economic supplement to face-to-face interviews which are frequently conducted during university internal procedures for Medical School applications (AdH, Auswahlverfahren der Hochschulen). This study investigates whether the results of psychometric questionnaires measuring non-cognitive characteristics such as personality traits, empathy, and resilience towards stress are vulnerable to distortions of social desirability when used in the context of selection procedures at Medical Schools. Methods: This study took place during the AdH of Lübeck University in August 2015. The following questionnaires have been included: NEO-FFI, SPF, and AVEM. In a 2x1 between-subject experiment we compared the answers from an alleged application condition and a control condition. In the alleged application condition we told applicants that these questionnaires were part of the application procedure. In the control condition applicants were informed about the study prior to completing the questionnaires. Results: All included questionnaires showed differences which can be regarded as social-desirability effects. These differences did not affect the entire scales but, rather, single subscales. Conclusion: These results challenge the informative value of these questionnaires when used for Medical School application procedures. Future studies may investigate the extent to which the differences influence the actual selection of applicants and what implications can be drawn from them for the use of psychometric questionnaires as part of study-place allocation procedures at Medical Schools.

  9. Are the results of questionnaires measuring non-cognitive characteristics during the selection procedure for medical school application biased by social desirability?

    Directory of Open Access Journals (Sweden)

    Obst, Katrin U.

    2016-11-01

    Full Text Available Introduction: A stronger consideration of non-cognitive characteristics in Medical School application procedures is desirable. Psychometric tests could be used as an economic supplement to face-to-face interviews which are frequently conducted during university internal procedures for Medical School applications (AdH, Auswahlverfahren der Hochschulen). This study investigates whether the results of psychometric questionnaires measuring non-cognitive characteristics such as personality traits, empathy, and resilience towards stress are vulnerable to distortions of social desirability when used in the context of selection procedures at Medical Schools. Methods: This study took place during the AdH of Lübeck University in August 2015. The following questionnaires have been included: NEO-FFI, SPF, and AVEM. In a 2x1 between-subject experiment we compared the answers from an alleged application condition and a control condition. In the alleged application condition we told applicants that these questionnaires were part of the application procedure. In the control condition applicants were informed about the study prior to completing the questionnaires. Results: All included questionnaires showed differences which can be regarded as social-desirability effects. These differences did not affect the entire scales but, rather, single subscales. Conclusion: These results challenge the informative value of these questionnaires when used for Medical School application procedures. Future studies may investigate the extent to which the differences influence the actual selection of applicants and what implications can be drawn from them for the use of psychometric questionnaires as part of study-place allocation procedures at Medical Schools.

  10. Improving observational study estimates of treatment effects using joint modeling of selection effects and outcomes: the case of AAA repair.

    Science.gov (United States)

    O'Malley, A James; Cotterill, Philip; Schermerhorn, Marc L; Landon, Bruce E

    2011-12-01

    When 2 treatment approaches are available, there are likely to be unmeasured confounders that influence choice of procedure, which complicates estimation of the causal effect of treatment on outcomes using observational data. To estimate the effect of endovascular (endo) versus open surgical (open) repair, including possible modification by institutional volume, on survival after treatment for abdominal aortic aneurysm, accounting for observed and unobserved confounding variables. Observational study of data from the Medicare program using a joint model of treatment selection and survival given treatment to estimate the effects of type of surgery and institutional volume on survival. We studied 61,414 eligible repairs of intact abdominal aortic aneurysms during 2001 to 2004. The outcome, perioperative death, is defined as in-hospital death or death within 30 days of operation. The key predictors are use of endo, transformed endo and open volume, and endo-volume interactions. There is strong evidence of nonrandom selection of treatment with potential confounding variables including institutional volume and procedure date, variables not typically adjusted for in clinical trials. The best fitting model included heterogeneous transformations of endo volume for endo cases and open volume for open cases as predictors. Consistent with our hypothesis, accounting for unmeasured selection reduced the mortality benefit of endo. The effect of endo versus open surgery varies nonlinearly with endo and open volume. Accounting for institutional experience and unmeasured selection enables better decision-making by physicians making treatment referrals, investigators evaluating treatments, and policy makers.

  11. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we look at previous competitions and their outcomes, use qualitative interviews with top performers from Kaggle, and use previous personal experiences from competing in Kaggle competitions. The stated hypotheses about feature engineering, ensembling, overfitting, model complexity and evaluation metrics give indications and guidelines on how to select a proper model for performing well...

  12. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and two types of employees, smokers and non-smokers, taking into account that the consumers’ decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System (GAMS) software: one without taxes on tobacco consumption and another one with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  13. Automated sample plan selection for OPC modeling

    Science.gov (United States)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving the quality of the data collected, with regard to how well the data represent the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Framing pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of equivalent quality to the traditional plan of record (POR) set, but in less time.

  14. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not account for such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Boards and the Selection Procedures Post Gender Quotas

    DEFF Research Database (Denmark)

    Arna Arnardóttir, Auður; Sigurjonsson, Olaf; Terjesen, Siri

    Purpose: The director selection process can greatly affect a board’s behavior and effectiveness, and ultimately the firm’s performance and outcomes. Director selection practices are hence an important and yet under-researched topic, especially the practices applied in the wake of gender quota legislation. The purpose of this paper is to contribute to the extant literature by gaining greater understanding of how new female board members are recruited and selected when demand for one gender is high. Design/methodology/approach: A mixed research methodology was applied. A questionnaire (N=260) and in-depth interviews (N=20) were conducted with Icelandic non-executive board directors to identify the selection criteria that are deemed most important when selecting new female director candidates taking seats on boards in the wake of gender quota legislation, and to compare those practices with previous selection...

  16. Extended reviewing or the role of potential siting cantons in the ongoing Swiss site selection procedure ('Sectoral Plan')

    International Nuclear Information System (INIS)

    Flueeler, Thomas

    2014-01-01

    The disposition of nuclear waste in Switzerland has a long-standing and sinuous history reflecting its complex socio-technical nature (Flueeler, 2006). Upon the twofold failure to site a repository for low- and intermediate-level radioactive waste at Wellenberg during the 1990s and 2000s, it was recognised that the respective site selections had not been fully transparent. The Swiss government, the Federal Council, accepted the lesson and, after an extensive nationwide consultation, established a new site selection process 'from scratch': a systematic, stepwise, traceable, fair and binding procedure with a safety-first approach, yet extensively participatory. The so-called Sectoral Plan for Deep Geological Repositories guarantees the inclusion of the affected and concerned cantons and communities, as well as the relevant authorities in neighbouring countries, from an early stage (Swiss Nuclear Energy Act, 2003; BFE, 2008). This contribution shares experience and insights into the ongoing procedure from a cantonal point of view, that is, an intermediate position between national needs and regional concerns, and with technical regulatory expertise between highly specialised experts and involved publics. (authors)

  17. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
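
    A minimal illustration of the p >> n setting with a cross-validated LASSO (scikit-learn) is sketched below; the synthetic data are purely illustrative and are not part of the lecture.

        # Minimal p >> n sparse regression sketch with the LASSO
        # (scikit-learn); synthetic data for illustration only.
        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p = 50, 500                      # many more variables than samples
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]   # only 5 true predictors
        y = X @ beta + 0.5 * rng.standard_normal(n)

        # Regularization strength chosen by cross-validation
        lasso = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(lasso.coef_)
        print("selected variables:", selected)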

  18. Repetition priming in selective attention

    DEFF Research Database (Denmark)

    Ásgeirsson, Árni Gunnar; Kristjánsson, Árni; Bundesen, Claus

    2015-01-01

    ... Repeating target colors enhanced performance for all 12 observers. As predicted, this was only true under conditions that required selection of a target among distractors, but not when a target was presented alone. Model fits by TVA were obtained with a trial-by-trial maximum likelihood estimation procedure...

  19. Used-habitat calibration plots: A new procedure for validating species distribution, resource selection, and step-selection models

    Science.gov (United States)

    Fieberg, John R.; Forester, James D.; Street, Garrett M.; Johnson, Douglas H.; ArchMiller, Althea A.; Matthiopoulos, Jason

    2018-01-01

    “Species distribution modeling” was recently ranked as one of the top five “research fronts” in ecology and the environmental sciences by ISI's Essential Science Indicators (Renner and Warton 2013), reflecting the importance of predicting how species distributions will respond to anthropogenic change. Unfortunately, species distribution models (SDMs) often perform poorly when applied to novel environments. Compounding this problem is the shortage of methods for evaluating SDMs (hence, we may be getting our predictions wrong and not even know it). Traditional methods for validating SDMs quantify a model's ability to classify locations as used or unused. Instead, we propose to focus on how well SDMs can predict the characteristics of used locations. This subtle shift in viewpoint leads to a more natural and informative evaluation and validation of models across the entire spectrum of SDMs. Through a series of examples, we show how simple graphical methods can help with three fundamental challenges of habitat modeling: identifying missing covariates, non-linearity, and multicollinearity. Identifying habitat characteristics that are not well predicted by the model can provide insights into variables affecting the distribution of species, suggest appropriate model modifications, and ultimately improve the reliability and generality of conservation and management recommendations.
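
    The idea behind a used-habitat calibration check can be sketched in a few lines: compare the covariate distribution observed at used locations with the distribution the fitted model implies. The following sketch assumes a simple logistic resource-selection function on synthetic data; it illustrates the concept rather than reproducing the authors' procedure.

        # Minimal sketch of a used-habitat calibration check for a fitted
        # resource-selection function: compare the covariate distribution at
        # observed used points with the distribution implied by the model.
        # Synthetic data; not the authors' code.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        elev = rng.uniform(0, 1, 5000)               # one habitat covariate
        X = elev.reshape(-1, 1)
        p_use = 1 / (1 + np.exp(-(-2 + 3 * elev)))   # true selection strength
        used = rng.random(5000) < p_use

        rsf = LogisticRegression().fit(X, used)
        w = np.exp(X @ rsf.coef_.T).ravel()          # relative selection weights
        w /= w.sum()
        # Model-implied used-habitat sample: resample availability by weight
        sim_used = rng.choice(elev, size=int(used.sum()), p=w)

        # Calibration check: quantiles of observed vs simulated used-habitat
        # covariate values should roughly agree if the model is calibrated
        q = np.linspace(0.05, 0.95, 10)
        print(np.quantile(elev[used], q).round(2))
        print(np.quantile(sim_used, q).round(2))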

  20. Evaluating procedural modelling for 3D models of informal settlements in urban design activities

    Directory of Open Access Journals (Sweden)

    Victoria Rautenbach

    2015-11-01

    Full Text Available Three-dimensional (3D modelling and visualisation is one of the fastest growing application fields in geographic information science. 3D city models are being researched extensively for a variety of purposes and in various domains, including urban design, disaster management, education and computer gaming. These models typically depict urban business districts (downtown or suburban residential areas. Despite informal settlements being a prevailing feature of many cities in developing countries, 3D models of informal settlements are virtually non-existent. 3D models of informal settlements could be useful in various ways, e.g. to gather information about the current environment in the informal settlements, to design upgrades, to communicate these and to educate inhabitants about environmental challenges. In this article, we described the development of a 3D model of the Slovo Park informal settlement in the City of Johannesburg Metropolitan Municipality, South Africa. Instead of using time-consuming traditional manual methods, we followed the procedural modelling technique. Visualisation characteristics of 3D models of informal settlements were described and the importance of each characteristic in urban design activities for informal settlement upgrades was assessed. Next, the visualisation characteristics of the Slovo Park model were evaluated. The results of the evaluation showed that the 3D model produced by the procedural modelling technique is suitable for urban design activities in informal settlements. The visualisation characteristics and their assessment are also useful as guidelines for developing 3D models of informal settlements. In future, we plan to empirically test the use of such 3D models in urban design projects in informal settlements.

  1. Conference report: 2012 Repository Symposium. Final storage in Germany. New start - ways and consequences of the site selection procedure

    International Nuclear Information System (INIS)

    Kettler, John

    2012-01-01

    The Aachen Institute for Nuclear Training invited participants to the 3-day '2012 Repository Symposium - Final Storage in Germany' held in Bonn. The subtitle of the event, 'New Start - Ways and Consequences of the Site Selection Procedure,' expressed the organizers' summary that the Repository Finding Act currently under discussion did not give rise to any expectation of a repository for high-level radioactive waste before 2080. The symposium was attended by more than 120 persons from Germany and abroad. They discussed the basic elements of the site selection procedure and its consequences on the basis of the draft so far known to the public. While extensive public participation is envisaged for the stage of finding a repository, this does not apply to the draft legislation in the same way. The legal determinations are negotiated in a small circle by the political parties and the state governments. Michael Sailer (Oeko-Institut e.V.) holds that agreement on a repository finding act is urgent. Prof. Dr. Bruno Thomauske (RWTH Aachen) arrives at the conclusion mentioned above, that no repository for high-level radioactive waste can start operation before 2080 on the basis of the Repository Finding Act. Dr. Bettina Keienburg, attorney at law, in her paper drew attention to the points of dispute in the draft legislation with regard to changes in competency of public authorities. The draft law indicated a clear shift of competency for finding a repository from the Federal Office for Radiation Protection to a federal agency yet to be set up. Prof. Dr. Christoph Moench outlined the deficiencies of the draft legislation in matters of refinancing and the polluter-pays principle. Among the tentative solutions discussed it was above all the Swedish model which was acclaimed most widely. (orig.)

  2. The Econometric Procedures of Specific Transaction Identification

    Directory of Open Access Journals (Sweden)

    Doszyń Mariusz

    2017-06-01

    Full Text Available The paper presents econometric procedures for identifying specific transactions, in which atypical conditions or attributes may occur. These procedures are based on studentized and predictive residuals of accordingly specified econometric models. The dependent variable is the unit transaction price, and the explanatory variables are both the real properties' attributes and accordingly defined artificial binary variables. The utility of the proposed method has been verified by means of a real market database. The proposed procedures can be helpful during the property valuation process, making it possible to reject real properties that are specific (both from the point of view of the transaction conditions and of the properties' attributes) and, consequently, to select an appropriate set of similar attributes that are essential for the valuation process.
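
    A hedged sketch of the residual-based screening idea: fit a hedonic price model and flag transactions whose externally studentized residuals are large. The variables (area, age) and the |2| cutoff are hypothetical illustrations, not the paper's specification.

        # Minimal sketch of flagging atypical transactions with externally
        # studentized residuals from a hedonic price model (statsmodels).
        # Variable names are illustrative, not the paper's specification.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import OLSInfluence

        rng = np.random.default_rng(0)
        n = 120
        area = rng.uniform(30, 120, n)             # property attributes
        age = rng.uniform(0, 60, n)
        price = 5000 - 20 * age + 10 * area + rng.normal(0, 150, n)
        price[:3] += 1500                          # plant a few "specific" deals

        X = sm.add_constant(np.column_stack([area, age]))
        fit = sm.OLS(price, X).fit()
        stud = OLSInfluence(fit).resid_studentized_external
        # Transactions with |studentized residual| > 2 are candidates for
        # exclusion as specific (atypical conditions or attributes)
        print(np.flatnonzero(np.abs(stud) > 2))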

  3. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method used to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which are coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution of a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers across. In turn, the semi-analytical thermal model allows a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
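
    The line-heat-source building block can be sketched numerically. The snippet below evaluates the classic infinite-medium solution for a continuous line source, ΔT = q'/(4πk) · E1(r²/(4αt)), doubled by an image source for an insulated surface; this is a textbook simplification, not the paper's exact complementary-field treatment, and the material values are illustrative.

        # Hedged sketch of the temperature rise around a continuous line heat
        # source, the building block superposed for laser scan vectors. Uses
        # the classic infinite-medium solution
        #   dT(r, t) = q' / (4*pi*k) * E1(r^2 / (4*alpha*t)),
        # doubled by an image source for a semi-infinite body with an
        # insulated surface (a simplifying assumption, not the paper's exact
        # complementary-field treatment).
        import numpy as np
        from scipy.special import exp1

        def line_source_dT(r, t, q_line, k, alpha, semi_infinite=True):
            """Temperature rise at radial distance r (m) after time t (s)
            for a line source of strength q_line (W/m) in material with
            conductivity k (W/m/K) and diffusivity alpha (m^2/s)."""
            dT = q_line / (4 * np.pi * k) * exp1(r ** 2 / (4 * alpha * t))
            return 2 * dT if semi_infinite else dT

        # Illustrative values loosely typical of steel powder-bed fusion
        r = np.array([20e-6, 100e-6, 500e-6])   # distances from the scan line
        print(line_source_dT(r, t=1e-3, q_line=100.0, k=20.0, alpha=5e-6))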

  4. A PROCEDURAL SOLUTION TO MODEL ROMAN MASONRY STRUCTURES

    Directory of Open Access Journals (Sweden)

    V. Cappellini

    2013-07-01

    Full Text Available The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac – PAM, developed by IGN (Paris). We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation on buildings (requiring quantitative aspects) and metric measures for restoration purposes.

  5. a Procedural Solution to Model Roman Masonry Structures

    Science.gov (United States)

    Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.

    2013-07-01

    The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac - PAM, developed by IGN (Paris). We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation on buildings (requiring quantitative aspects) and metric measures for restoration purposes.

  6. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using a background model and posterior probability criteria to make the modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
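
    A toy version of the n-gram idea: train a bigram model over pitch intervals from known melody tracks and score candidate tracks by average log-probability. This is a hedged illustration of the approach, not the authors' implementation; the smoothing scheme and vocabulary size are arbitrary choices.

        # Minimal sketch of scoring a MIDI track's "melodic degree" with a
        # bigram model over pitch intervals, in the spirit of the n-gram
        # approach described above (toy counts, not the authors' model).
        from collections import Counter
        from math import log

        def train_bigram(melodies):
            """melodies: list of pitch sequences known to be melody tracks."""
            bi, uni = Counter(), Counter()
            for seq in melodies:
                intervals = [b - a for a, b in zip(seq, seq[1:])]
                uni.update(intervals)
                bi.update(zip(intervals, intervals[1:]))
            return bi, uni

        def melodic_score(seq, bi, uni, vocab=49):
            """Average log-probability of a track under the bigram model
            (add-one smoothing); higher means more melody-like."""
            intervals = [b - a for a, b in zip(seq, seq[1:])]
            pairs = list(zip(intervals, intervals[1:]))
            if not pairs:
                return float("-inf")
            lp = sum(log((bi[p] + 1) / (uni[p[0]] + vocab)) for p in pairs)
            return lp / len(pairs)

        bi, uni = train_bigram([[60, 62, 64, 65, 67, 65, 64, 62, 60]])
        tracks = {"track A": [60, 62, 64, 62, 60], "track B": [36, 36, 43, 36]}
        # Select the track with the highest melodic score
        print(max(tracks, key=lambda t: melodic_score(tracks[t], bi, uni)))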

  7. Site selection under the underground geologic store plan. Procedures of selecting underground geologic stores as disputed by society, science, and politics. Site selection rules

    Energy Technology Data Exchange (ETDEWEB)

    Aebersold, M. [Bundesamt fuer Energie BFE, Sektion Entsorgung Radioaktive Abfaelle, Bern (Switzerland)

    2008-10-15

    The new Nuclear Power Act and the Nuclear Power Ordinance of 2005 are used in Switzerland to select a site for an underground geologic store for radioactive waste in a substantive planning procedure. The "Underground Geologic Store" Substantive Plan is to ensure the possibility to build underground geologic stores in an independent, transparent and fair procedure. The Federal Office for Energy (BFE) is the agency responsible for this procedure. The "Underground Geologic Store" Substantive Plan comprises these principles: - The long-term protection of people and the environment enjoys priority. Aspects of regional planning, economics and society are of secondary importance. - Site selection is based on the waste volumes arising from the five nuclear power plants currently existing in Switzerland. The Substantive Plan is no precedent for or against future nuclear power plants. - A transparent and fair procedure is an indispensable prerequisite for achieving the objective of the Substantive Plan, i.e., finding accepted sites for underground geologic stores. The Underground Geologic Stores Substantive Plan is arranged in two parts: a conceptual part defining the rules of the selection process, and an implementation part documenting the selection process step by step and, in the end, naming specific sites for underground geologic stores in Switzerland. The objective is to be able to commission underground geologic stores in 25 or 35 years' time. In principle, two sites are envisaged: one for low- and intermediate-level waste, and one for high-level waste. The Swiss Federal Council approved the conceptual part on April 2, 2008. This marks the beginning of the implementation phase and the site selection process proper. (orig.)

  8. Penerapan Model Pembelajaran Conceptual Understanding Procedures (CUPS sebagai Upaya Mengatasi Miskonsepsi Matematis Siswa

    Directory of Open Access Journals (Sweden)

    Asri Gita

    2018-01-01

    Full Text Available Errors in understanding concepts are one of the factors that cause misconceptions in mathematics. Misconceptions about plane figures arise when students merely memorize the basic shapes without understanding the relationships between the figures and their properties. One way to address such misconceptions is to apply constructivist learning, and one constructivist learning model is Conceptual Understanding Procedures (CUPs). The purpose of this study was to investigate the application of the CUPs learning model as a means of overcoming students' mathematical misconceptions about the properties of quadrilaterals. The subjects were 12 junior high school students who held misconceptions about the properties of quadrilaterals. Data were collected through tests, video recordings, observations, and interviews; the validity and reliability of the data were established through credibility, dependability, transferability, and confirmability. The results show that applying the CUPs learning model, which consists of an individual phase, a triplet-group phase, and a whole-class interpretation phase, can overcome students' misconceptions about the properties of quadrilaterals. The change in students' misconceptions is also reflected in their test results, which improved from the pre-test to the post-test. Keywords: Conceptual Understanding Procedures (CUPs), misconception, quadrilateral.

  9. Bootstrap procedure in the quasinuclear quark model

    International Nuclear Information System (INIS)

    Anisovich, V.V.; Gerasyuta, S.M.; Keltuyala, I.V.

    1983-01-01

    The scattering amplitude for quarks (dressed quarks of a single flavour and three colours) is obtained by means of a bootstrap procedure, with the introduction of an initial point-wise interaction due to heavy gluon exchange. The resulting quasi-nuclear model (effective short-range interaction in the S-wave states) has reasonable properties: there exist colourless meson states J^P = 0^-, 1^-; there are no bound states in coloured channels; and a virtual diquark level J^P = 1^+ appears in the coloured anti-triplet state 3̄_c.

  10. Calibration procedure for a potato crop growth model using information from across Europe

    DEFF Research Database (Denmark)

    Heidmann, Tove; Tofteng, Charlotte; Abrahamsen, Per

    2008-01-01

    In the FertOrgaNic EU project, 3 years of field experiments with drip irrigation and fertigation were carried out at six different sites across Europe, involving seven different varieties of potato. The Daisy model, which simulates plant growth together with water and nitrogen dynamics, was used ... for adaptation of the Daisy model to new potato varieties or for the improvement of the existing parameter set. The procedure is then, as a starting point, to focus the calibration process on the recommended list of parameters to change. We demonstrate this approach by showing the procedure for recalibrating ... three varieties using all relevant data from the sites. We believe these new parameterisations to be more robust, because they were indirectly based on information from the six different sites. We claim that this procedure combines both local and specific modeller expertise in a way that results in more...

  11. Semantic Modeling of Administrative Procedures from a Spanish Regional Public Administration

    Directory of Open Access Journals (Sweden)

    Francisco José Hidalgo López

    2018-02-01

    Full Text Available Over the past few years, Public Administrations have been providing systems for the electronic processing of procedures and files to ensure compliance with regulations and provide public services to citizens. Although each administration provides similar services to its citizens, these systems usually differ from the internal information management point of view, since they usually come from different products and manufacturers. The common framework that regulations demand, and that Public Administrations must respect when processing electronic files, provides a unique opportunity for the development of intelligent agents in the field of administrative processes. However, for this development to be truly effective and applicable to the public sector, it is necessary to have a common representation model for these administrative processes. Although a lot of work has already been done in the development of public information reuse initiatives and the standardization of common vocabularies, this has not been carried out at the process level. In this paper, we propose a semantic representation model for Public Administrations covering both process models and process instances: the administrative procedures and files. The goal is to improve public administration open data initiatives and help to develop their sustainability policies, such as improving decision-making procedures and administrative management sustainability. As a case study, we modelled public administrative processes and files in collaboration with a Regional Public Administration in Spain, the Principality of Asturias, which granted access to its information systems, helping the evaluation of our approach.
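
    The flavor of such a semantic representation can be sketched with RDF triples. The vocabulary below (namespace, class and property names) is invented for illustration and is not the model published by the authors.

        # Minimal sketch of representing an administrative procedure and one
        # of its files as RDF triples with rdflib. The namespace and property
        # names are invented for illustration, not the paper's vocabulary.
        from rdflib import Graph, Literal, Namespace, RDF

        ADM = Namespace("http://example.org/adm#")   # hypothetical vocabulary
        g = Graph()
        g.bind("adm", ADM)

        proc = ADM["GrantApplicationProcedure"]      # a procedure (model)
        g.add((proc, RDF.type, ADM.Procedure))
        g.add((proc, ADM.hasStep, ADM.SubmitForm))
        g.add((proc, ADM.hasStep, ADM.ReviewDocuments))

        file1 = ADM["file-2018-001"]                 # an administrative file
        g.add((file1, RDF.type, ADM.File))
        g.add((file1, ADM.instanceOf, proc))
        g.add((file1, ADM.currentStep, ADM.ReviewDocuments))
        g.add((file1, ADM.applicant, Literal("Jane Doe")))

        print(g.serialize(format="turtle"))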

  12. Monte Carlo modeling of time-resolved fluorescence for depth-selective interrogation of layered tissue.

    Science.gov (United States)

    Pfefer, T Joshua; Wang, Quanzeng; Drezek, Rebekah A

    2011-11-01

    Computational approaches for simulation of light-tissue interactions have provided extensive insight into biophotonic procedures for diagnosis and therapy. However, few studies have addressed simulation of time-resolved fluorescence (TRF) in tissue and none have combined Monte Carlo simulations with standard TRF processing algorithms to elucidate approaches for cancer detection in layered biological tissue. In this study, we investigate how illumination-collection parameters (e.g., collection angle and source-detector separation) influence the ability to measure fluorophore lifetime and tissue layer thickness. Decay curves are simulated with a Monte Carlo TRF light propagation model. Multi-exponential iterative deconvolution is used to determine lifetimes and fractional signal contributions. The ability to detect changes in mucosal thickness is optimized by probes that selectively interrogate regions superficial to the mucosal-submucosal boundary. Optimal accuracy in simultaneous determination of lifetimes in both layers is achieved when each layer contributes 40-60% of the signal. These results indicate that depth-selective approaches to TRF have the potential to enhance disease detection in layered biological tissue and that modeling can play an important role in probe design optimization. Published by Elsevier Ireland Ltd.
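
    The multi-exponential processing step can be illustrated with a simple bi-exponential fit; the sketch below uses SciPy on synthetic decay data and stands in for, rather than reproduces, the iterative deconvolution used in the study.

        # Minimal sketch of recovering lifetimes and fractional contributions
        # from a simulated two-layer fluorescence decay by fitting a
        # bi-exponential model (SciPy); an illustration of the processing
        # step, not the paper's iterative deconvolution code.
        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(t, a1, tau1, a2, tau2):
            return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

        t = np.linspace(0, 20, 400)                   # time axis in ns
        true = biexp(t, 0.7, 1.5, 0.3, 6.0)           # two-layer decay
        rng = np.random.default_rng(0)
        signal = true + rng.normal(0, 0.005, t.size)  # measurement noise

        p0 = [0.5, 1.0, 0.5, 5.0]                     # rough initial guesses
        (a1, tau1, a2, tau2), _ = curve_fit(biexp, t, signal, p0=p0)
        f1 = a1 * tau1 / (a1 * tau1 + a2 * tau2)      # fractional signal (area)
        print(f"tau1={tau1:.2f} ns, tau2={tau2:.2f} ns, fraction={f1:.2f}")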

  13. A selective overview of feature screening for ultrahigh-dimensional data.

    Science.gov (United States)

    Liu, Jingyuan; Zhong, Wei; Li, Runze

    2015-10-01

    High-dimensional data have frequently been collected in many scientific areas including genomewide association study, biomedical imaging, tomography, tumor classifications, and finance. Analysis of high-dimensional data poses many challenges for statisticians. Feature selection and variable selection are fundamental for high-dimensional data analysis. The sparsity principle, which assumes that only a small number of predictors contribute to the response, is frequently adopted and deemed useful in the analysis of high-dimensional data. Following this general principle, a large number of variable selection approaches via penalized least squares or likelihood have been developed in the recent literature to estimate a sparse model and select significant variables simultaneously. While the penalized variable selection methods have been successfully applied in many high-dimensional analyses, modern applications in areas such as genomics and proteomics push the dimensionality of data to an even larger scale, where the dimension of data may grow exponentially with the sample size. This has been called ultrahigh-dimensional data in the literature. This work aims to present a selective overview of feature screening procedures for ultrahigh-dimensional data. We focus on insights into how to construct marginal utilities for feature screening on specific models and motivation for the need of model-free feature screening procedures.
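
    The simplest marginal utility mentioned above, the absolute sample correlation used in sure independence screening, can be sketched directly; the data below are synthetic, and the screening size d = n/log(n) is a conventional choice rather than a universal rule.

        # Minimal sketch of marginal (sure independence) screening for
        # ultrahigh-dimensional data: rank predictors by the magnitude of
        # their marginal correlation with the response and keep the top d.
        # Synthetic data for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 100, 5000                      # dimension far exceeds sample size
        X = rng.standard_normal((n, p))
        y = 2 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2] + rng.standard_normal(n)

        # Marginal utility: absolute sample correlation of each column with y
        Xc = (X - X.mean(axis=0)) / X.std(axis=0)
        yc = (y - y.mean()) / y.std()
        utility = np.abs(Xc.T @ yc) / n

        d = int(n / np.log(n))               # a common screening size choice
        keep = np.argsort(utility)[::-1][:d]
        print("top screened variables:", keep[:5])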

  14. A general U-block model-based design procedure for nonlinear polynomial control systems

    Science.gov (United States)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model first appeared (not rigorously defined) in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone work - using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. For analysing the feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users with interest in their ad hoc applications. Formally, this is the first paper to present the U-model-oriented control system design in a rigorous way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving from the intuitive/heuristic stage to rigorous, formal and comprehensive studies.

  15. The Selection of Procedures in One-stage Urethroplasty for Treatment of Coexisting Urethral Strictures in Anterior and Posterior Urethra.

    Science.gov (United States)

    Lv, XiangGuo; Xu, Yue-Min; Xie, Hong; Feng, Chao; Zhang, Jiong

    2016-07-01

    To explore the selection of procedures in one-stage urethroplasty for the treatment of coexisting urethral strictures in the anterior and posterior urethra. Between 2008 and 2014, a total of 27 patients with strictures existing simultaneously in the anterior and posterior urethra were treated in our hospital. Two types of procedures were selected for treatment of the anterior urethral strictures: a penile skin flap and the lingual mucosa were used for augmented urethroplasty in 20 and 7 cases, respectively. Three types of procedures, namely non-transecting end-to-end urethral anastomosis (n = 3), traditional end-to-end urethral anastomosis (n = 17), and other graft substitution urethroplasty, including pedicle scrotal skin urethroplasty (n = 2) and lingual mucosal graft urethroplasty (n = 5), were utilized in the treatment of posterior urethral strictures. The patients were followed up for a mean of 30 months with an overall success rate of 88.9%. The majority of the patients exhibited wide patent urethras on retrograde urethrography, and the patients' urinary peak flow ranged from 14.2 to 37.9 ml/s. Complications developed in 3 patients (11.1%). Of the 17 patients who underwent traditional urethral end-to-end anastomosis, urethral strictures occurred in 2 patients at 4 and 6 months after the operation. These patients achieved satisfactory voiding function after salvage pedicle scrotal skin urethroplasty. A urethral pseudodiverticulum was observed in another patient 9 months after pedicle penile flap urethroplasty; after a salvage procedure, he regained excellent voiding function. Synchronous anterior and posterior strictures can be successfully reconstructed with a combination of substitution and anastomotic urethroplasty techniques. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Considerations for the selection of an applicable energy efficiency test procedure for electric motors in Malaysia: Lessons for other developing countries

    International Nuclear Information System (INIS)

    Yanti, P.A.A.; Mahlia, T.M.I.

    2009-01-01

    Electric motors are a major energy-consuming appliance in the industrial sector. According to a survey, electric motors accounted for more than 70% of the growth in this sector's electricity consumption in Malaysia from 1991 to 2004. To reduce electricity consumption, Malaysia should consider resetting the minimum energy efficiency standards for electric motors sometime in the coming year. The first step towards adopting energy efficiency standards is the creation of a procedure for testing and rating equipment. An energy test procedure is the technical foundation for all energy efficiency standards, energy labels and other related programs. The test conditions in the test procedure must represent the conditions of the country. This paper presents the process for the selection of an energy test procedure for electric motors in Malaysia based on the country's conditions and requirements. The adoption of test procedures for electric motors by several countries internationally is also discussed in this paper. Even though the paper only discusses the test procedure for electric motors in Malaysia, the methods can be directly applied in other countries without major modifications.

  17. Genomic selection models for directional dominance: an example for litter size in pigs.

    Science.gov (United States)

    Varona, Luis; Legarra, Andrés; Herring, William; Vitezica, Zulma G

    2018-01-26

    The quantitative genetics theory argues that inbreeding depression and heterosis are founded on the existence of directional dominance. However, most procedures for genomic selection that have included dominance effects have assumed symmetric prior distributions. To address this, two alternatives can be considered: (1) assume the mean of dominance effects to be different from zero, and (2) use skewed distributions for the regularization of dominance effects. The aim of this study was to compare these approaches using two pig datasets and to confirm the presence of directional dominance. Four alternative models were implemented for two datasets of pig litter size that consisted of 13,449 and 11,581 records from 3631 and 2612 sows, respectively, genotyped with the Illumina PorcineSNP60 BeadChip. The models evaluated included (1) a model that does not consider directional dominance (Model SN), (2) a model with a covariate b for the average individual homozygosity (Model SC), (3) a model with a parameter λ that reflects asymmetry in the context of skewed Gaussian distributions (Model AN), and (4) a model that includes both b and λ (Model Full). The results of the analysis showed that the posterior probabilities of a negative b or a positive λ under Models SC and AN were higher than 0.99, which indicates positive directional dominance. This was confirmed by the predictions of inbreeding depression under Models Full, SC and AN, which were higher than under the SN Model. In spite of differences in posterior estimates of variance components between models, comparison of models based on LogCPO and DIC indicated that Model SC provided the best fit for the two datasets analyzed. Our results confirmed the presence of positive directional dominance for pig litter size and suggest that it should be taken into account when dominance effects are included in genomic evaluation procedures. The consequences of ignoring directional dominance may affect predictions of breeding values and can lead to biased...

  18. Combining epidemiologic and biostatistical tools to enhance variable selection in HIV cohort analyses.

    Directory of Open Access Journals (Sweden)

    Christopher Rentsch

    Full Text Available BACKGROUND: Variable selection is an important step in building a multivariate regression model for which several methods and statistical packages are available. A comprehensive approach for variable selection in complex multivariate regression analyses within HIV cohorts is explored by utilizing both epidemiological and biostatistical procedures. METHODS: Three different methods for variable selection were illustrated in a study comparing survival time between subjects in the Department of Defense's National History Study and the Atlanta Veterans Affairs Medical Center's HIV Atlanta VA Cohort Study. The first two methods were stepwise selection procedures, based either on significance tests (Score test) or on information theory (Akaike Information Criterion), while the third method employed a Bayesian argument (Bayesian Model Averaging). RESULTS: All three methods resulted in a similar parsimonious survival model. Three of the covariates previously used in the multivariate model were not included in the final model suggested by the three approaches. When comparing the parsimonious model to the previously published model, there was evidence of less variance in the main survival estimates. CONCLUSIONS: The variable selection approaches considered in this study allowed building a model based on significance tests, on an information criterion, and on averaging models using their posterior probabilities. A parsimonious model that balanced these three approaches was found to provide a better fit than the previously reported model.
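
    A minimal sketch of one of the three strategies, forward stepwise selection by AIC, is given below using statsmodels; it uses an ordinary linear model on synthetic data, whereas the study applied the idea to survival models.

        # Minimal sketch of stepwise (forward) variable selection by AIC with
        # statsmodels. Synthetic data; a real analysis would use the cohort's
        # survival model instead of this ordinary linear model.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n, p = 200, 8
        X = rng.standard_normal((n, p))
        y = 1.5 * X[:, 0] - 1.0 * X[:, 3] + rng.standard_normal(n)

        def forward_aic(X, y):
            remaining, chosen = set(range(X.shape[1])), []
            best_aic = sm.OLS(y, np.ones((len(y), 1))).fit().aic
            while remaining:
                # AIC of each candidate model that adds one more variable
                aics = {j: sm.OLS(y, sm.add_constant(X[:, chosen + [j]])).fit().aic
                        for j in remaining}
                j, aic = min(aics.items(), key=lambda kv: kv[1])
                if aic >= best_aic:      # no candidate improves AIC; stop
                    break
                best_aic = aic
                chosen.append(j)
                remaining.discard(j)
            return chosen

        print("selected columns:", forward_aic(X, y))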

  19. Computational Modelling in Development of a Design Procedure for Concrete Road

    Directory of Open Access Journals (Sweden)

    B. Novotný

    2000-01-01

    Full Text Available Computational modelling plays a decisive part in the development of a new design procedure for concrete pavement by quantifying the impacts of individual design factors. In the present paper, the emphasis is placed on modelling the structural response of the jointed concrete pavement as a system of interacting rectangular slabs transferring wheel loads into an elastic layered subgrade. The finite element plate analysis is combined with the assumption of a linear contact stress variation over triangular elements of the contact region division. Linking forces are introduced to model the load transfer across the joints. The unknown contact stress nodal intensities as well as the unknown linking forces are determined in an iterative way to fulfil slab/foundation and slab/slab contact conditions. Temperature effects are also considered, and space is reserved for modelling of inelastic and additional environmental effects. It is pointed out that pavement design should be based on full data of pavement stressing, in contrast to procedures accounting only for axle-load-induced stresses.

  20. The use of flow models for design of plant operating procedures

    International Nuclear Information System (INIS)

    Lind, M.

    1982-03-01

    The report describes a systematic approach to the design of operating procedures or sequence automation for process plant control. It is shown how flow models representing the topology of mass and energy flows on different levels of function provide plant information which is important for the considered design problem. The modelling methodology leads to the definition of three categories of control tasks. Two tasks relate to the regulation and control of changes of levels and flows of mass and energy in a system within a defined mode of operation. The third type relates to the control actions necessary for switching operations involved in changes of operating mode. These control tasks are identified for a given plant as part of the flow modelling activity. It is discussed how the flow model deals with the problem of assigning control task precedence in time, e.g. during start-up or shut-down operations. The method may be a basis for providing automated procedure support to the operator in unforeseen situations or may be a tool for control design. (auth.)

  1. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that in general BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
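
    The comparison being run is easy to illustrate: compute AIC and BIC for a simple and a more complex candidate when the data come from the simple model. Generic nested regressions stand in for the standard and complex price transmission models; everything below is simulated and illustrative.

```python
# Sketch: AIC vs BIC on nested models when the true process is the simple one.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)          # data generated by the simple model

def ic(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    k = X.shape[1]
    ll = -0.5 * n * (np.log(2 * np.pi * resid.var()) + 1)   # Gaussian log-lik
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll           # AIC, BIC

X_simple = np.column_stack([np.ones(n), x])
X_complex = np.column_stack([np.ones(n), x, x**2, x**3])    # spurious extra terms
for name, X in [("simple", X_simple), ("complex", X_complex)]:
    aic, bic = ic(y, X)
    print(f"{name:8s} AIC={aic:8.1f} BIC={bic:8.1f}")
# BIC penalizes the extra parameters more heavily, mirroring its higher
# recovery rate when the true process is the standard (simpler) model.
```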

  2. EBTR design-point selection

    International Nuclear Information System (INIS)

    Krakowski, R.A.; Bathke, C.G.

    1981-01-01

    The procedure used to select the design point for the ELMO Bumpy Torus Reactor (EBTR) study is described. The models used in each phase of the selection process are described, with an emphasis placed on the parametric design curves produced by each model. The tradeoffs related to burn physics, stability/equilibrium, electron-ring physics, and magnetics design are discussed. The resulting design point indicates a plasma with a 35-m major radius and a 1-m minor radius operating at an average core-plasma beta of 0.17, which at approx. 30 keV produces an average neutron wall loading of 1.4 MW/m² while maintaining key magnet (< 10 T) and total power (≤ 4000 MWt) constraints.

  3. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    Full Text Available This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years as the Internet has come to play an important role in business management. Companies have to concentrate their efforts on their core activities, and the other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than considering cost as a single factor.
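
    As a sketch of the multi-criteria flavour of such a model, the simplest variant is a weighted-sum score over normalized criteria. The vendors, criteria and weights below are invented for illustration; the paper's actual method is richer than this.

```python
# Sketch: weighted-sum multi-criteria supplier scoring (illustrative only).
suppliers = {
    "vendor_a": {"cost": 0.7, "quality": 0.9, "delivery": 0.6, "reputation": 0.8},
    "vendor_b": {"cost": 0.9, "quality": 0.6, "delivery": 0.8, "reputation": 0.5},
    "vendor_c": {"cost": 0.5, "quality": 0.8, "delivery": 0.9, "reputation": 0.9},
}
weights = {"cost": 0.35, "quality": 0.30, "delivery": 0.20, "reputation": 0.15}

def score(criteria):
    # Scores are normalized to [0, 1]; higher is better on every criterion here.
    return sum(weights[c] * v for c, v in criteria.items())

ranked = sorted(suppliers, key=lambda s: score(suppliers[s]), reverse=True)
for s in ranked:
    print(s, round(score(suppliers[s]), 3))
```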

  4. Bias in the Listeria monocytogenes enrichment procedure: Lineage 2 strains outcompete lineage 1 strains in University of Vermont selective enrichments

    DEFF Research Database (Denmark)

    Bruhn, Jesper Bartholin; Vogel, Birte Fonnesbech; Gram, Lone

    2005-01-01

    compounds in UVM I and II influenced this bias. The results of the present study demonstrate that the selective procedures used for isolation of L. monocytogenes may not allow a true representation of the types present in foods. Our results could have a significant impact on epidemiological studies...

  5. Probabilistic Modelling of Timber Material Properties

    DEFF Research Database (Denmark)

    Nielsen, Michael Havbro Faber; Köhler, Jochen; Sørensen, John Dalsgaard

    2001-01-01

    The probabilistic modeling of timber material characteristics is considered with special emphasis to the modeling of the effect of different quality control and selection procedures used as means for grading of timber in the production line. It is shown how statistical models may be established … on the basis of the same type of information which is normally collected as a part of the quality control procedures and furthermore, how the efficiency of different control procedures may be compared. The tail behavior of the probability distributions of timber material characteristics play an important role … such that they may readily be applied in structural reliability analysis and the format appears to be appropriate for codification purposes of quality control and selection for grading procedures…

  6. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  7. PREFACE: Special section featuring selected papers from the 3rd International Workshop on Numerical Modelling of High Temperature Superconductors Special section featuring selected papers from the 3rd International Workshop on Numerical Modelling of High Temperature Superconductors

    Science.gov (United States)

    Granados, Xavier; Sánchez, Àlvar; López-López, Josep

    2012-10-01

    The development of superconducting applications and superconducting engineering requires the support of consistent tools which can provide models for obtaining a good understanding of the behaviour of the systems and predict novel features. These models aim to compute the behaviour of the superconducting systems, design superconducting devices and systems, and understand and test the behavior of the superconducting parts. 50 years ago, in 1962, Charles Bean provided the superconducting community with a model efficient enough to allow the computation of the response of a superconductor to external magnetic fields and currents flowing through in an understandable way: the so called critical-state model. Since then, in addition to the pioneering critical-state approach, other tools have been devised for designing operative superconducting systems, allowing integration of the superconducting design in nearly standard electromagnetic computer-aided design systems by modelling the superconducting parts with consideration of time-dependent processes. In April 2012, Barcelona hosted the 3rd International Workshop on Numerical Modelling of High Temperature Superconductors (HTS), the third in a series of workshops started in Lausanne in 2010 and followed by Cambridge in 2011. The workshop reflected the state-of-the-art and the new initiatives of HTS modelling, considering mathematical, physical and technological aspects within a wide and interdisciplinary scope. Superconductor Science and Technology is now publishing a selection of papers from the workshop which have been selected for their high quality. The selection comprises seven papers covering mathematical, physical and technological topics which contribute to an improvement in the development of procedures, understanding of phenomena and development of applications. We hope that they provide a perspective on the relevance and growth that the modelling of HTS superconductors has achieved in the past 25 years.

  8. Weak Galilean invariance as a selection principle for coarse-grained diffusive models.

    Science.gov (United States)

    Cairoli, Andrea; Klages, Rainer; Baule, Adrian

    2018-05-29

    How does the mathematical description of a system change in different reference frames? Galilei first addressed this fundamental question by formulating the famous principle of Galilean invariance. It prescribes that the equations of motion of closed systems remain the same in different inertial frames related by Galilean transformations, thus imposing strong constraints on the dynamical rules. However, real world systems are often described by coarse-grained models integrating complex internal and external interactions indistinguishably as friction and stochastic forces. Since Galilean invariance is then violated, there is seemingly no alternative principle to assess a priori the physical consistency of a given stochastic model in different inertial frames. Here, starting from the Kac-Zwanzig Hamiltonian model generating Brownian motion, we show how Galilean invariance is broken during the coarse-graining procedure when deriving stochastic equations. Our analysis leads to a set of rules characterizing systems in different inertial frames that have to be satisfied by general stochastic models, which we call "weak Galilean invariance." Several well-known stochastic processes are invariant in these terms, except the continuous-time random walk for which we derive the correct invariant description. Our results are particularly relevant for the modeling of biological systems, as they provide a theoretical principle to select physically consistent stochastic models before a validation against experimental data.

  9. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that simultaneously optimize more than one predefined objective. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.

  10. A Collective Case Study of Secondary Students' Model-Based Inquiry on Natural Selection through Programming in an Agent-Based Modeling Environment

    Science.gov (United States)

    Xiang, Lin

    incomplete and many relationships among the model ideas had not been well established by the end of the study. Most of them did not treat the natural selection model as a whole but only focused on some ideas within the model. Very few of them could scientifically apply the natural selection model to interpret other evolutionary phenomena. The findings about participating students' programming processes revealed these processes were composed of consecutive programming cycles. The cycle typically included posing a task, constructing and running program codes, and examining the resulting simulation. Students held multiple ideas and applied various programming strategies in these cycles. Students were involved in MBI at each step of a cycle. Three types of ideas, six programming strategies and ten MBI actions were identified out of the processes. The relationships among these ideas, strategies and actions were also identified and described. Findings suggested that ABPM activities could support MBI by (1) exposing students' personal models and understandings, (2) provoking and supporting a series of model-based inquiry activities, such as elaborating target phenomena, abstracting patterns, and revising conceptual models, and (3) provoking and supporting tangible and productive conversations among students, as well as between the instructor and students. Findings also revealed three programming behaviors that appeared to impede productive MBI, including (1) solely phenomenon-orientated programming, (2) transplanting program codes, and (3) blindly running procedures. Based on the findings, I propose a general modeling process in ABPM activities, summarize the ways in which MBI can be supported in ABPM activities and constrained by multiple factors, and suggest the implications of this study in the future ABPM-assisted science instructional design and research.

  11. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  12. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  13. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
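
    The Gaussian pseudolikelihood criterion itself admits a compact sketch: score each candidate working correlation structure by the Gaussian log-density of the residuals and keep the best. A toy version on simulated exchangeable data follows; this is not the authors' implementation, which couples the criterion with GEE fitting.

```python
# Sketch: scoring exchangeable working correlations by Gaussian pseudolikelihood.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
n_groups, t = 200, 4
rho_true = 0.5
cov_true = rho_true + (1 - rho_true) * np.eye(t)   # exchangeable, unit variance
resid = rng.multivariate_normal(np.zeros(t), cov_true, size=n_groups)

def pseudo_loglik(rho):
    R = rho + (1 - rho) * np.eye(t)                # candidate working covariance
    return multivariate_normal(np.zeros(t), R).logpdf(resid).sum()

for rho in [0.0, 0.25, 0.5, 0.75]:
    print(f"rho={rho:4.2f} pseudo-loglik={pseudo_loglik(rho):10.1f}")
# The candidate closest to the true correlation attains the highest
# pseudolikelihood, which is the basis for working-model selection.
```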

  14. Algorithm for Video Summarization of Bronchoscopy Procedures

    Directory of Open Access Journals (Sweden)

    Leszczuk Mikołaj I

    2011-12-01

    Full Text Available Abstract Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or education value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions
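
    One standard way to flag the blurred, "non-informative" frames the abstract mentions is the variance-of-Laplacian sharpness measure. The sketch below uses that generic proxy, which is not necessarily the criterion implemented in the paper; the video file name and the threshold are hypothetical.

```python
# Sketch: dropping blurry frames via variance of the Laplacian (OpenCV).
import cv2

cap = cv2.VideoCapture("bronchoscopy_recording.avi")  # hypothetical file
kept, dropped, threshold = 0, 0, 100.0                # threshold is a tuning choice
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var() # low variance => blurry
    if sharpness >= threshold:
        kept += 1            # candidate for the summary
    else:
        dropped += 1         # likely blurred/unfocused, "non-informative"
cap.release()
print(f"kept {kept} frames, dropped {dropped} as non-informative")
```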

  15. Applying Four Different Risk Models in Local Ore Selection

    International Nuclear Information System (INIS)

    Richmond, Andrew

    2002-01-01

    Given the uncertainty in grade at a mine location, a financially risk-averse decision-maker may prefer to incorporate this uncertainty into the ore selection process. A FORTRAN program, risksel, is presented to calculate local risk-adjusted optimal ore selections using a negative exponential utility function and three dominance models: mean-variance, mean-downside risk, and stochastic dominance. All four methods are demonstrated in a grade control environment. In the case study, the optimal selections vary with the magnitude of financial risk that a decision-maker is prepared to accept. Except for the stochastic dominance method, the risk models reassign material from higher cost to lower cost processing options as the aversion to financial risk increases. The stochastic dominance model was usually unable to determine the optimal local selection
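
    The negative exponential utility rule is easy to sketch: per block, pick the processing option with the highest expected utility over simulated grades. The payoff functions and risk-aversion grid below are invented for illustration, and the three dominance models implemented by risksel are omitted here.

```python
# Sketch: risk-adjusted ore selection with a negative exponential utility.
import numpy as np

rng = np.random.default_rng(4)
grades = rng.lognormal(mean=0.0, sigma=0.4, size=5000)  # simulated block grades

def profit(grade, option):
    # Hypothetical payoffs: milling recovers more but costs more.
    if option == "mill":
        return 12.0 * grade - 8.0
    if option == "leach":
        return 6.0 * grade - 3.0
    return np.zeros_like(grade)      # waste dump: no revenue, no cost

def expected_utility(option, risk_aversion):
    # Negative exponential utility: u(x) = 1 - exp(-a * x)
    return np.mean(1.0 - np.exp(-risk_aversion * profit(grades, option)))

for a in [0.01, 0.1, 0.5]:           # increasing aversion to financial risk
    best = max(["mill", "leach", "waste"], key=lambda o: expected_utility(o, a))
    print(f"risk aversion {a:4.2f}: choose {best}")
# With growing aversion the ranking can shift toward lower-variance options,
# the qualitative behaviour reported in the study.
```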

  16. Detecting consistent patterns of directional adaptation using differential selection codon models.

    Science.gov (United States)

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.

  17. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of selecting true ODE models, the efficiency of the parameter estimation compared to simply using the full and true models, and coverage probabilities of the estimated confidence intervals for ODE parameters, all of which have satisfactory performances. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system among some well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.

  18. Adapting a Markov Monte Carlo simulation model for forecasting the number of Coronary Artery Revascularisation Procedures in an era of rapidly changing technology and policy

    Directory of Open Access Journals (Sweden)

    Knuiman Matthew

    2008-06-01

    Full Text Available Abstract Background Treatments for coronary heart disease (CHD) have evolved rapidly over the last 15 years with considerable change in the number and effectiveness of both medical and surgical treatments. This period has seen the rapid development and uptake of statin drugs and coronary artery revascularization procedures (CARPs) that include Coronary Artery Bypass Graft procedures (CABGs) and Percutaneous Coronary Interventions (PCIs). It is difficult in an era of such rapid change to accurately forecast requirements for treatment services such as CARPs. In a previous paper we have described and outlined the use of a Markov Monte Carlo simulation model for analyzing and predicting the requirements for CARPs for the population of Western Australia (Mannan et al., 2007). In this paper, we expand on the use of this model for forecasting CARPs in Western Australia with a focus on the lack of adequate performance of the (standard) model for forecasting CARPs in a period during the mid 1990s when there were considerable changes to CARP technology and implementation policy, and an exploration and demonstration of how the standard model may be adapted to achieve better performance. Methods Selected key CARP event model probabilities are modified based on information relating to changes in the effectiveness of CARPs from clinical trial evidence and an awareness of trends in policy and practice of CARPs. These modified model probabilities and the ones obtained by standard methods are used as inputs in our Markov simulation model. Results The projected numbers of CARPs in the population of Western Australia over 1995–99 only improve marginally when modifications to model probabilities are made to incorporate an increase in effectiveness of PCI procedures. However, the projected numbers improve substantially when, in addition, further modifications are incorporated that relate to the increased probability of a PCI procedure and the reduced probability of a CABG

  19. Adapting a Markov Monte Carlo simulation model for forecasting the number of coronary artery revascularisation procedures in an era of rapidly changing technology and policy.

    Science.gov (United States)

    Mannan, Haider R; Knuiman, Matthew; Hobbs, Michael

    2008-06-25

    Treatments for coronary heart disease (CHD) have evolved rapidly over the last 15 years with considerable change in the number and effectiveness of both medical and surgical treatments. This period has seen the rapid development and uptake of statin drugs and coronary artery revascularization procedures (CARPs) that include Coronary Artery Bypass Graft procedures (CABGs) and Percutaneous Coronary Interventions (PCIs). It is difficult in an era of such rapid change to accurately forecast requirements for treatment services such as CARPs. In a previous paper we have described and outlined the use of a Markov Monte Carlo simulation model for analyzing and predicting the requirements for CARPs for the population of Western Australia (Mannan et al., 2007). In this paper, we expand on the use of this model for forecasting CARPs in Western Australia with a focus on the lack of adequate performance of the (standard) model for forecasting CARPs in a period during the mid 1990s when there were considerable changes to CARP technology and implementation policy and an exploration and demonstration of how the standard model may be adapted to achieve better performance. Selected key CARP event model probabilities are modified based on information relating to changes in the effectiveness of CARPs from clinical trial evidence and an awareness of trends in policy and practice of CARPs. These modified model probabilities and the ones obtained by standard methods are used as inputs in our Markov simulation model. The projected numbers of CARPs in the population of Western Australia over 1995-99 only improve marginally when modifications to model probabilities are made to incorporate an increase in effectiveness of PCI procedures. However, the projected numbers improve substantially when, in addition, further modifications are incorporated that relate to the increased probability of a PCI procedure and the reduced probability of a CABG procedure stemming from changed CARP preference

  20. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.
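
    A minimal sketch of the ingredients, not the authors' exact procedure: fit mixed-effects models to all repeated measurements and choose between mean structures with a likelihood-based comparison, instead of t-testing only the last visit. All data below are simulated.

```python
# Sketch: likelihood-based comparison of mixed-effects mean structures.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)
n_subj, n_visits = 60, 4
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)
treat = np.repeat(rng.integers(0, 2, n_subj), n_visits)
y = (0.3 * time + 0.4 * treat * time            # treatment effect grows with time
     + rng.normal(0, 1, n_subj)[subj]           # random intercepts
     + rng.normal(0, 0.5, n_subj * n_visits))

df = pd.DataFrame({"y": y, "time": time, "treat": treat, "subj": subj})
full = sm.MixedLM.from_formula("y ~ time + treat + time:treat", df,
                               groups="subj").fit(reml=False)
null = sm.MixedLM.from_formula("y ~ time + treat", df,
                               groups="subj").fit(reml=False)
lr = 2 * (full.llf - null.llf)                  # likelihood-ratio statistic
print("LRT p-value:", stats.chi2.sf(lr, df=1))  # tests the time-varying effect
```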

  1. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
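
    The tutorial's R code is not reproduced here; instead, a compact Python analogue of the same idea, where a 0/1 chromosome encodes a variable subset and the fitness is the AIC of the corresponding logistic model. Population size, rates and data are all invented for illustration.

```python
# Sketch: genetic-algorithm variable selection for a logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, p = 300, 10
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 4]))))

def fitness(mask):
    if not mask.any():
        return np.inf
    fit = sm.Logit(y, sm.add_constant(X[:, mask.astype(bool)])).fit(disp=0)
    return fit.aic                                # lower is better

pop = rng.integers(0, 2, size=(20, p))            # random initial population
for gen in range(30):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[:10]]        # truncation selection
    cut = rng.integers(1, p, 10)                  # one-point crossover
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])
    mut = rng.random(children.shape) < 0.05       # bit-flip mutation
    children = np.where(mut, 1 - children, children)
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(c) for c in pop])]
print("selected variables:", np.flatnonzero(best))
```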

  2. MODELING IN MAPLE AS THE RESEARCHING MEANS OF FUNDAMENTAL CONCEPTS AND PROCEDURES IN LINEAR ALGEBRA

    Directory of Open Access Journals (Sweden)

    Vasil Kushnir

    2016-05-01

-th degree of a square matrix, to calculate the matrix exponent, etc. The author creates four basic forms of canonical models of matrices and shows how to design matrices of similar transformations to these four forms. We introduce programs-procedures for the construction of square matrices based on the selected models of canonical matrices. A certain number of various square matrices can then be created from the canonical matrix models, which allows the use of individual learning technologies. The use of Maple technology allows the automation of the cumbersome and complex procedures for finding the transformation matrices of the canonical form of a matrix, values of matrix functions, etc., which not only saves time but also focuses attention and effort on understanding the above-mentioned fundamental concepts of linear algebra and the procedures for investigating their properties. All these create favorable conditions for the use of fundamental concepts of linear algebra in the scientific and research work of students and undergraduates using Maple technology

  3. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model, in terms of the sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through results in simulations that attest to its good finite-sample performance.
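
    For orientation, here is the special case the paper generalizes, a plain Gaussian process regression, in a few lines. scikit-learn is used purely for illustration on simulated data; the paper's RSGPR adds the sample-selection and robustness layers on top of this.

```python
# Sketch: ordinary GPR, the special case underlying the RSGPR model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)
X = np.sort(rng.uniform(0, 6, 40))[:, None]
y = np.sin(X).ravel() + rng.normal(0, 0.2, 40)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)

X_new = np.linspace(0, 6, 5)[:, None]
mean, std = gpr.predict(X_new, return_std=True)   # posterior mean and spread
print(np.round(mean, 2), np.round(std, 2))
```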

  4. Equilibrium and nonequilibrium attractors for a discrete, selection-migration model

    Science.gov (United States)

    James F. Selgrade; James H. Roberds

    2003-01-01

    This study presents a discrete-time model for the effects of selection and immigration on the demographic and genetic compositions of a population. Under biologically reasonable conditions, it is shown that the model always has an equilibrium. Although equilibria for similar models without migration must have real eigenvalues, for this selection-migration model we...
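
    A hedged sketch of the kind of recursion such models take: a one-locus viability-selection step followed by immigration at rate m. The parameter values are invented, and this is not the paper's exact system.

```python
# Sketch: discrete-time selection-migration recursion for one allele.
def next_p(p, w_AA=1.0, w_Aa=0.95, w_aa=0.9, m=0.1, p_m=0.2):
    # Viability selection on the resident population
    w_bar = p * p * w_AA + 2 * p * (1 - p) * w_Aa + (1 - p) ** 2 * w_aa
    p_sel = (p * p * w_AA + p * (1 - p) * w_Aa) / w_bar
    # Immigration mixes in the migrant allele frequency p_m
    return (1 - m) * p_sel + m * p_m

p = 0.5
for t in range(200):
    p = next_p(p)
print("equilibrium allele frequency ~", round(p, 4))
```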

  5. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    Energy Technology Data Exchange (ETDEWEB)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08 (France); Brousmiche, Sébastien [Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Romero, Edward; Vila Oliva, Marc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08, France and Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Kellner, Daniel; Deutschmann, Heinz; Keuschnigg, Peter; Steininger, Philipp [Institute for Research and Development on Advanced Radiation Technologies, Paracelsus Medical University, Salzburg 5020 (Austria)

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: first the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in

  6. A Gambler's Model of Natural Selection.

    Science.gov (United States)

    Nolan, Michael J.; Ostrovsky, David S.

    1996-01-01

    Presents an activity that highlights the mechanism and power of natural selection. Allows students to think in terms of modeling a biological process and instills an appreciation for a mathematical approach to biological problems. (JRH)

  7. 47 CFR 1.1604 - Post-selection hearings.

    Science.gov (United States)

    2010-10-01

    47 CFR 1.1604 (2010): Federal Communications Commission, General Practice and Procedure, Random Selection Procedures for Mass Media Services, General Procedures. § 1.1604 Post-selection hearings. (a) Following the random...

  8. Motivation of medical students: selection by motivation or motivation by selection.

    Science.gov (United States)

    Wouters, Anouk; Croiset, Gerda; Galindo-Garre, Francisca; Kusurkar, Rashmi A

    2016-01-29

    Medical schools try to implement selection procedures that will allow them to select the most motivated students for their programs. Though there is a general feeling that selection stimulates student motivation, conclusive evidence for this is lacking. The current study aims to use the perspective of Self-Determination Theory (SDT) of motivation as a lens to examine how medical students' motivation differs in relation to different selection procedures. The hypotheses were that 1) selected students report higher strength and autonomous motivation than non-selected students, and 2) recently selected students report higher strength and autonomous motivation than non-selected students and students who were selected longer ago. First- (Y1) and fourth-year (Y4) medical students in the six-year regular programme and first-year students in the four-year graduate entry programme (GE) completed questionnaires measuring motivation strength and type (autonomous-AM, controlled-CM). Scores were compared between students admitted based on selection, lottery or top pre-university GPA (top GPA) using ANCOVAs. Selected students' answers on open-ended questions were analysed using inductive thematic analysis to identify reasons for changes in motivation. The response rate was 61.4 % (n = 357). Selected students (Y1, Y4 and GE) reported a significantly higher strength of motivation than non-selected students (Y1 and Y4 lottery and top GPA). Students reported enhanced motivation as they felt autonomous, competent and that they belonged to a special group. These reported reasons are in alignment with the basic psychological needs described by Self-Determination Theory as important in enhancing autonomous motivation. A comprehensive selection procedure, compared to less demanding admission procedures, does not seem to yield a student population which stands out in terms of autonomous motivation. The current findings indicate that selection might temporarily enhance students' motivation. The mechanism

  9. GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY

    Directory of Open Access Journals (Sweden)

    F. Biljecki

    2016-09-01

    Full Text Available The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence on GitHub at http://github.com/tudelft3d/Random3Dcity.

  10. Evaluation and comparison of alternative fleet-level selective maintenance models

    International Nuclear Information System (INIS)

    Schneider, Kellie; Richard Cassady, C.

    2015-01-01

    Fleet-level selective maintenance refers to the process of identifying the subset of maintenance actions to perform on a fleet of repairable systems when the maintenance resources allocated to the fleet are insufficient for performing all desirable maintenance actions. The original fleet-level selective maintenance model is designed to maximize the probability that all missions in a future set are completed successfully. We extend this model in several ways. First, we consider a cost-based optimization model and show that a special case of this model maximizes the expected value of the number of successful missions in the future set. We also consider the situation in which one or more of the future missions may be canceled. These models and the original fleet-level selective maintenance optimization models are nonlinear. Therefore, we also consider an alternative model in which the objective function can be linearized. We show that the alternative model is a good approximation to the other models. - Highlights: • Investigate nonlinear fleet-level selective maintenance optimization models. • A cost based model is used to maximize the expected number of successful missions. • Another model is allowed to cancel missions if reliability is sufficiently low. • An alternative model has an objective function that can be linearized. • We show that the alternative model is a good approximation to the other models
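
    The core decision problem can be sketched in a few lines: with a maintenance budget too small to repair everything, enumerate the feasible repair subsets and keep the one that maximizes the expected number of successful missions. The cost and probability numbers below are invented, and the paper solves larger instances with optimization models rather than enumeration.

```python
# Sketch: brute-force fleet-level selective maintenance under a budget.
from itertools import combinations

# (repair cost, success prob. if repaired, success prob. if not repaired)
systems = [(3, 0.95, 0.70), (2, 0.90, 0.80), (4, 0.97, 0.60), (1, 0.85, 0.75)]
budget = 5

best_value, best_set = -1.0, ()
for r in range(len(systems) + 1):
    for subset in combinations(range(len(systems)), r):
        cost = sum(systems[i][0] for i in subset)
        if cost > budget:
            continue                               # infeasible repair plan
        value = sum(systems[i][1] if i in subset else systems[i][2]
                    for i in range(len(systems)))  # expected successful missions
        if value > best_value:
            best_value, best_set = value, subset
print("repair systems:", best_set, "expected successes:", round(best_value, 2))
```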

  11. Probabilistic Modeling of Graded Timber Material Properties

    DEFF Research Database (Denmark)

    Faber, M. H.; Köhler, J.; Sørensen, John Dalsgaard

    2004-01-01

    The probabilistic modeling of timber material characteristics is considered with special emphasis to the modeling of the effect of different quality control and selection procedures used as means for quality grading in the production line. It is shown how statistical models may be established … on the basis of the same type of information which is normally collected as a part of the quality control procedures and furthermore, how the efficiency of different control procedures may be quantified and compared. The tail behavior of the probability distributions of timber material characteristics plays … such that they may readily be applied in structural reliability analysis and their format appears to be appropriate for codification purposes of quality control and selection for grading procedures…

  12. Selective Retrograde Venous Revascularization of the Myocardium when PCI or CABG Is Impossible: Investigation in a Porcine Model

    DEFF Research Database (Denmark)

    Møller, Christian H; Nørgaard, Martin A; Gøtze, Jens P

    2008-01-01

    We investigated the possibility of nourishing the myocardium through selective retrograde coronary venous bypass grafting (CVBG) with an off-pump technique and evaluated various methods of monitoring the physiological effects of this procedure. In a porcine model, the left internal mammary artery...... tension decreased, but with time some recovery was seen. Cardiac troponin T was elevated. Histological analysis showed ischemic changes. In control pigs, microdialysis was performed for 1.5 hours up to LAD artery ligation, after which all pigs died in ventricular fibrillation arrest. No increase...

  13. Exploration and safety evaluations of salt formations and site selection procedures; Erkundung und Sicherheitsbewertung von Salzformationen und Standortauswahlverfahren

    Energy Technology Data Exchange (ETDEWEB)

    Krapf, Eva Barbara

    2016-12-12

    In 2011 the final decision to withdraw from the nuclear energy program was taken in the Federal Republic of Germany. The majority of the radioactive waste produced originates from the operation as well as the decommissioning and dismantling of nuclear facilities. The long-term containment of especially heat-generating, high-level waste in an underground disposal facility is pursued. The Site Selection Act (StandAG), passed in 2013, defined further procedural steps as well as responsibilities and the form of public participation during site selection. In this context the newly founded Commission Storage of Highly Radioactive Waste was assigned the task of giving relevant recommendations based on its investigation of specific aspects and fundamental questions. The objective of this procedure is the selection of the site that can provide the best possible safety for humans and the environment during the defined period of one million years. The Commission's final report was published in July 2016. In this thesis a possible approach for exploring sites in connection with safety investigations is recommended. The site selection procedure described in the StandAG represents the basis for the considerations. Geoscientific exclusion criteria, minimum requirements as well as weighing criteria can be developed regarding the relevant geoscientific and climatic changes during the defined period of one million years. In contrast to the recommendations made by the Commission Storage of Highly Radioactive Waste, no previously existing report has been revised and adapted. Rather, all issues relevant for the long-term containment of radioactive waste in a disposal facility were newly developed. The considerations relate to salt domes as the host rock. Furthermore, according to the StandAG, preliminary safety investigations are required in every step of the site selection. The recommendations made in this thesis concerning content and feasibility of

  14. Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model

    Directory of Open Access Journals (Sweden)

    Jiping Yang

    2017-09-01

    Full Text Available Yang and Qiu proposed and then recently improved an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived an expected utility term plus a constant multiplied by the Shannon entropy as the representation of risky choices, further demonstrating the reasonability of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in the portfolios. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficients are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolios of 7 (10) stocks selected by the EU-E decision model have almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both the expected utility and Shannon entropy together when making risky decisions, further demonstrating the importance of Shannon entropy as the measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.

  15. Spiral model of procedural cycle of educational process management

    Directory of Open Access Journals (Sweden)

    Bezrukov Valery I.

    2016-01-01

    Full Text Available The article analyzes the nature and characteristics of a spiral model of the procedural cycle of educational process management. The authors identify patterns linking the development of information and communication technologies with the transformation of the education management process, characterize the concepts of "information literacy" and "media education", consider the design function, and assess its potential for changing the traditional educational paradigm to a new, information-based one.

  16. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    Science.gov (United States)

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods including conventional LASSO, Bolasso, stepwise and stability selection models were evaluated using intensive simulation. In addition, methods were compared by using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. In total, the two newly proposed procedures were stable with respect to various scenarios of simulation, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In empirical analysis, the proposed procedures yielding a sparse set of hepatitis B infection-relevant factors gave the best predictive performance and showed that the procedures were able to select a more stringent set of factors. The individual history of hepatitis B vaccination, family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions The newly proposed procedures improve the identification of
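
    The bootstrap ranking idea admits a compact sketch: refit an L1-penalized model on resamples and rank variables by how often they are selected. This is the generic stability idea rather than the authors' exact two-stage algorithm; the data and the 0.8 threshold are illustrative choices.

```python
# Sketch: bootstrap ranking of variables under a LASSO-type (L1) penalty.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 400, 15
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 1.2 * X[:, 3]))))

n_boot = 100
counts = np.zeros(p)
for _ in range(n_boot):
    idx = rng.integers(0, n, n)                   # bootstrap resample
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    lasso.fit(X[idx], y[idx])
    counts += (lasso.coef_.ravel() != 0)          # which coefficients survived

freq = counts / n_boot
print("selection frequencies:", np.round(freq, 2))
print("stable set:", np.flatnonzero(freq >= 0.8))
```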

  17. A Comparison of Exposure Control Procedures in CAT Systems Based on Different Measurement Models for Testlets

    Science.gov (United States)

    Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven

    2013-01-01

    This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…

  18. Generalisability of a composite student selection programme

    DEFF Research Database (Denmark)

    O'Neill, Lotte Dyhrberg; Korsholm, Lars; Wallstedt, Birgitta

    2009-01-01

    format); general knowledge (multiple-choice test), and a semi-structured admission interview. The aim of this study was to estimate the generalisability of a composite selection. METHODS: Data from 307 applicants who participated in the admission to medicine in 2007 were available for analysis. Each...... admission parameter was double-scored using two random, blinded and independent raters. Variance components for applicant, rater and residual effects were estimated for a mixed model with the restricted maximum likelihood (REML) method. The reliability of obtained applicant ranks (G coefficients......) was calculated for individual admission criteria and for composite admission procedures. RESULTS: A pre-selection procedure combining qualification and motivation scores showed insufficient generalisability (G = 0.45). The written motivation in particular, displayed low generalisability (G = 0.10). Good...

  19. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref

  20. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong

  1. An Evaluation of the Use of Statistical Procedures in Soil Science

    Directory of Open Access Journals (Sweden)

    Laene de Fátima Tavares

    2016-01-01

    Full Text Available ABSTRACT Experimental statistical procedures used in almost all scientific papers are fundamental for clearer interpretation of the results of experiments conducted in agrarian sciences. However, incorrect use of these procedures can lead the researcher to incorrect or incomplete conclusions. Therefore, the aim of this study was to evaluate the characteristics of the experiments and quality of the use of statistical procedures in soil science in order to promote better use of statistical procedures. For that purpose, 200 articles, published between 2010 and 2014, involving only experimentation and studies by sampling in the soil areas of fertility, chemistry, physics, biology, use and management were randomly selected. A questionnaire containing 28 questions was used to assess the characteristics of the experiments, the statistical procedures used, and the quality of selection and use of these procedures. Most of the articles evaluated presented data from studies conducted under field conditions and 27 % of all papers involved studies by sampling. Most studies did not mention testing to verify normality and homoscedasticity, and most used the Tukey test for mean comparisons. Among studies with a factorial structure of the treatments, many had ignored this structure, and data were compared assuming the absence of factorial structure, or the decomposition of interaction was performed without showing or mentioning the significance of the interaction. Almost none of the papers that had split-block factorial designs considered the factorial structure, or they considered it as a split-plot design. Among the articles that performed regression analysis, only a few of them tested non-polynomial fit models, and none reported verification of the lack of fit in the regressions. The articles evaluated thus reflected poor generalization and, in some cases, wrong generalization in experimental design and selection of procedures for statistical analysis.

  2. Short-Run Asset Selection using a Logistic Model

    Directory of Open Access Journals (Sweden)

    Walter Gonçalves Junior

    2011-06-01

    Full Text Available Investors constantly look for significant predictors and accurate models to forecast future results, whose occasional efficacy ends up being neutralized by market efficiency. Regardless, such predictors are widely used in the search for better (and more unique) perceptions. This paper aims to investigate to what extent some of the most notorious indicators have discriminatory power to select stocks, and whether it is feasible with such variables to build models that could anticipate the stocks with good performance. To that end, logistic regressions were conducted on stocks traded at Bovespa, using the selected indicators as explanatory variables. Among the indicators investigated in this study, Bovespa Index returns, liquidity, the Sharpe ratio, ROE, MB, size and age evidenced to be significant predictors. Half-year logistic models were also fitted and adjusted in order to check whether their discriminatory power was acceptable for asset selection.
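
    The core of the approach can be sketched with scikit-learn: a logistic regression scores stocks by the probability of good future performance given the indicators. The indicator columns and the performance label below are synthetic stand-ins for the paper's Bovespa data:

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        df = pd.DataFrame({                  # hypothetical indicator values
            "liquidity": rng.normal(size=200),
            "sharpe": rng.normal(size=200),
            "roe": rng.normal(size=200),
        })
        # Label: 1 if the stock performed well in the following half-year.
        df["good"] = (0.8 * df["sharpe"] + rng.normal(size=200) > 0).astype(int)

        cols = ["liquidity", "sharpe", "roe"]
        model = LogisticRegression().fit(df[cols], df["good"])
        df["p_good"] = model.predict_proba(df[cols])[:, 1]
        shortlist = df.sort_values("p_good", ascending=False).head(20)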

  3. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    Science.gov (United States)

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex setting. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer's Assessment of Alertness/Sedation scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with objective function value, corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the predicted probability was consistent with the observed response. The effect-site concentrations tested ranged from 1 to 76 ng/mL and from 5 to 80 ng/mL for midazolam and alfentanil, respectively. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The reduced Greco and the fixed alfentanil C50 (the concentration required for 50% of the patients to achieve targeted response) Hierarchy models performed better, with comparable predictive strength. The reduced Greco model had the lowest AICc with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γalf in the Hierarchy model improved model fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures
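
    For orientation, a Greco-type response surface reduces to a few lines once its parameters have been fitted; the functional form below is a common parameterization and the numbers are illustrative, not the study's estimates:

        def greco_prob(c_mid, c_alf, c50_mid, c50_alf, alpha, gamma):
            """Probability of adequate sedation under a Greco-type surface:
            drug effects are normalized by their C50s and an interaction
            term alpha > 0 encodes synergy."""
            u = (c_mid / c50_mid) + (c_alf / c50_alf) \
                + alpha * (c_mid / c50_mid) * (c_alf / c50_alf)
            return u**gamma / (1.0 + u**gamma)

        # Illustrative (not fitted) values for a colonoscopy phase:
        print(greco_prob(c_mid=40, c_alf=30, c50_mid=60, c50_alf=50,
                         alpha=1.5, gamma=3.0))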

  4. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    Full Text Available This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance for the Brazilian economy. The model enables the phases of selecting and evaluating suppliers to be integrated, which is important so that a company can build partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by the decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  5. Selection of Celebrity Endorsers

    DEFF Research Database (Denmark)

    Hollensen, Svend; Schimmelpfennig, Christian

    2013-01-01

    Purpose – This research aims at shedding some light on the various avenues marketers can undertake until finally an endorsement contract is signed. The focus of the study lies on verifying the generally held assumption that endorser selection is usually taken care of by creative agencies, vetting several candidates by means of subtle evaluation procedures. Design/methodology/approach – A case study research has been carried out among companies experienced in celebrity endorsements to learn more about the endorser selection process in practice. Based on these cases theory is inductively developed...... Findings – Our research suggests that the generally held assumption of endorsers being selected and thoroughly vetted by a creative agency may not be universally valid. A normative model to illustrate the continuum of the selection process in practice is suggested and the two polar case studies (Swiss brand...

  6. Materials selection in mechanical design

    International Nuclear Information System (INIS)

    Ashby, M.F.; Cebon, D.

    1993-01-01

    A novel materials-selection procedure has been developed and implemented in software. The procedure makes use of Materials Selection Charts: a new way of displaying material property data; and performance indices: combinations of material properties which govern performance. Optimisation methods are employed for simultaneous selection of both material and shape. (orig.)

  7. Materials selection in mechanical design

    OpenAIRE

    Ashby, M.; Cebon, D.

    1993-01-01

    A novel materials-selection procedure has been developed and implemented in software. The procedure makes use of Materials Selection Charts: a new way of displaying material property data; and performance indices: combinations of material properties which govern performance. Optimisation methods are employed for simultaneous selection of both material and shape.

  8. Radiographic implications of procedures involving cardiac implantable electronic devices (CIEDs) – Selected aspects

    Directory of Open Access Journals (Sweden)

    Roman Steckiewicz

    2017-06-01

    Full Text Available Background: Some cardiac implantable electronic device (CIED) implantation procedures require the use of X-rays, which is reflected by such parameters as total fluoroscopy time (TFT) and dose-area product (DAP) – defined as the absorbed dose multiplied by the area irradiated. Material and Methods: This retrospective study evaluated 522 CIED implantation procedures (424 de novo and 98 device upgrade and new lead placement procedures) in 176 women and 346 men (mean age 75±11 years) over the period 2012–2015. The recorded procedure-related parameters TFT and DAP were evaluated in the subgroups specified below. The group of 424 de novo procedures included 203 pacemaker (PM) and 171 implantable cardioverter-defibrillator (ICD) implantation procedures, separately stratified by single-chamber and dual-chamber systems. Another subgroup of de novo procedures involved 50 cardiac resynchronization therapy (CRT) devices. The evaluated parameters in the group of 98 upgrade procedures were compared between 2 subgroups: CRT only, and combined PM and ICD implantation procedures. Results: We observed differences in TFT and DAP values between procedure types, with PM-related procedures showing the lowest, ICD – intermediate (with values for single-chamber systems considerably lower than those for dual-chamber systems), and CRT implantation procedures – the highest X-ray exposure. Upgrades to CRT were associated with 4 times higher TFT and DAP values in comparison to other upgrade procedures. Cardiac resynchronization therapy de novo implantation procedures and upgrades to CRT showed similar mean values of the evaluated parameters. Conclusions: Total fluoroscopy time and DAP values correlated progressively with CIED implantation procedure complexity, with CRT-related procedures showing the highest values of both parameters. Med Pr 2017;68(3):363–374

  9. Model Selection in Historical Research Using Approximate Bayesian Computation

    Science.gov (United States)

    Rubio-Campillo, Xavier

    2016-01-01

    Formal Models and History Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study This work examines an alternate approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester’s laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
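
    A rejection-ABC sketch of the model selection step: draw parameters from each model's prior, simulate a battle-outcome summary, and compare acceptance rates near the observed value (with equal model priors, the ratio of acceptance rates approximates the Bayes factor). The toy simulators below only illustrate the mechanics, not Lanchester's actual equations:

        import numpy as np

        rng = np.random.default_rng(0)

        def abc_acceptance(observed, simulator, prior, n_draws=100_000, tol=0.05):
            """Fraction of prior draws whose simulated summary statistic
            lands within tol of the observed one (rejection ABC)."""
            theta = prior(n_draws)
            return np.mean(np.abs(simulator(theta) - observed) < tol)

        prior = lambda n: rng.uniform(0.1, 2.0, n)
        linear = lambda th: th + rng.normal(0.0, 0.1, th.size)      # toy 'linear law'
        squared = lambda th: th**2 + rng.normal(0.0, 0.1, th.size)  # toy 'square law'

        rates = [abc_acceptance(0.8, m, prior) for m in (linear, squared)]
        print(rates[0] / rates[1])   # crude Bayes factor, model 1 vs model 2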

  10. Measures and limits of models of fixation selection.

    Directory of Open Access Journals (Sweden)

    Niklas Wilming

    Full Text Available Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between-subject consistency respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allows a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
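
    Both recommended measures are a few lines in standard libraries; the smoothing used below for the empirical fixation map is a naive placeholder for the small-sample correction the paper derives:

        import numpy as np
        from scipy.stats import entropy
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)
        saliency = rng.random((20, 20))                 # model's prediction map
        fy, fx = rng.integers(0, 20, 50), rng.integers(0, 20, 50)  # fixations
        idx = np.ravel_multi_index((fy, fx), saliency.shape)

        # AUC: can the map separate fixated pixels from the rest?
        labels = np.zeros(saliency.size, dtype=int)
        labels[idx] = 1
        auc = roc_auc_score(labels, saliency.ravel())

        # KL-divergence between empirical and predicted fixation densities.
        emp = np.bincount(idx, minlength=saliency.size) + 1e-9
        kl = entropy(emp / emp.sum(), saliency.ravel() / saliency.sum())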

  11. Procedural violation in the licensing procedure and possible legal consequences; Verfahrensmaengel im Konzessionierungsverfahren und etwaige Rechtsfolgen

    Energy Technology Data Exchange (ETDEWEB)

    Meyer-Hetling, Astrid; Probst, Matthias Ernst; Wolkenhauer, Soeren [Kanzlei Becker Buettner Held (BBH), Berlin (Germany)

    2012-07-15

    Under Section 46 paras. 2 to 4 of the EnWG (Energy Economy Law), communities are required to carry out a publication and competition procedure ('licensing procedure') for the new award of easement agreements for the operation of local power supply systems and natural gas supply systems. The specific design of the selection process is regulated by law only in rudimentary form. Nevertheless, old concessionaires increasingly refuse the statutory grid transfer to the new concessionaires, relying on supposed errors in the selection process. The unclear legal situation and the inconsistent, sometimes unreasonably strict case law and decision practice of antitrust as well as regulatory authorities have resulted in considerable legal uncertainty among communities and grid operators. Unless the legislature establishes the necessary legal clarity, the competent courts and authorities are called upon to act with moderation in the examination of licensing procedures.

  12. Control Configuration Selection for Multivariable Descriptor Systems

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza; Stoustrup, Jakob

    2012-01-01

    Control configuration selection is the procedure of choosing the appropriate input and output pairs for the design of SISO (or block) controllers. This step is an important prerequisite for a successful industrial control strategy. In industrial practices it is often the case that the system, whi...... is that it can be used to propose a richer sparse or block diagonal controller structure. The interaction measure is used for control configuration selection of the linearized CSTR model in descriptor form.

  13. Selected Tether Applications Cost Model

    Science.gov (United States)

    Keeley, Michael G.

    1988-01-01

    Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.

  14. Modeling the effect of selection history on pop-out visual search.

    Directory of Open Access Journals (Sweden)

    Yuan-Chi Tseng

    Full Text Available While attentional effects in visual selection tasks have traditionally been assigned "top-down" or "bottom-up" origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task.
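
    The mechanics can be sketched by simulating single diffusion trials in which selection history enters as a shift of the starting point toward the likely target, one plausible reading of the bias parameter the modelling identified:

        import numpy as np

        def ddm_trial(drift, bias=0.0, threshold=1.0, dt=1e-3, noise=1.0, rng=None):
            """One diffusion trial: evidence starts at `bias` and accumulates
            with Gaussian noise until it hits +threshold (target) or
            -threshold (distractor). Returns (chose target?, decision time)."""
            rng = rng or np.random.default_rng()
            x, t = bias, 0.0
            while abs(x) < threshold:
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return x > 0, t

        # A positive starting bias (e.g. after target-colour repeats) speeds
        # correct selections, mimicking Priming of Pop-out.
        rng = np.random.default_rng(2)
        trials = [ddm_trial(drift=1.2, bias=0.3, rng=rng) for _ in range(1000)]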

  15. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially.

  16. Ultrasound in the diagnosis and treatment of developmental dysplasia of the hip. Evaluation of a selective screening procedure

    DEFF Research Database (Denmark)

    Strandberg, C.; Konradsen, L.A.; Ellitsgaard, N.

    2008-01-01

    INTRODUCTION: With the intention of reducing the treatment frequency of Developmental Dysplasia of the Hip (DDH), two hospitals in Copenhagen implemented a screening and treatment procedure based on selective referral to ultrasonography of the hip (US). This paper describes and evaluates...... 0.03%. No relationship was seen between morphological parameters at the first US and the outcome of hips classified as minor dysplastic or not fully developed (NFD). A statistically significant relationship was seen between the degree of dysplasia and the time until US normalization of the hips (p......= 0.02). There was no relapse of dysplasia after treatment. The median duration of treatment was six, eight and nine weeks for mild, moderate and severe dysplasia respectively. CONCLUSION: The procedure resulted in a low rate of treatment and a small number of late diagnosed cases. Prediction...

  17. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  18. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon; Maadooliat, Mehdi; Arellano-Valle, Reinaldo B.; Genton, Marc G.

    2015-01-01

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.
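
    The selection mechanism underlying the skew-normal case has a compact stochastic representation (condition a latent normal on being positive); the sketch below generates such factor scores, while the full model's loadings, error terms and ECME estimation are omitted:

        import numpy as np

        def skew_normal_factors(n, delta, rng=None):
            """F = delta*|Z0| + sqrt(1 - delta^2)*Z1 with Z0, Z1 iid N(0,1):
            the standard selection representation of a skew-normal variate."""
            rng = rng or np.random.default_rng()
            z0, z1 = rng.standard_normal(n), rng.standard_normal(n)
            return delta * np.abs(z0) + np.sqrt(1.0 - delta**2) * z1

        f = skew_normal_factors(10_000, delta=0.8)   # right-skewed factor scores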

  19. Recreation of architectural structures using procedural modeling based on volumes

    Directory of Open Access Journals (Sweden)

    Santiago Barroso Juan

    2013-11-01

    Full Text Available While the procedural modeling of buildings and other architectural structures has evolved very significantly in recent years, there is a noticeable absence of high-level tools that allow a designer, an artist or a historian to recreate important buildings or architectonic structures of a particular city. In this paper we present a tool for creating buildings in a simple and clear way, following rules expressed in the language and methodology used to create the buildings themselves, while hiding from the user the algorithmic details of the creation of the model.

  20. Selective Cooperation in Early Childhood - How to Choose Models and Partners.

    Directory of Open Access Journals (Sweden)

    Jonas Hermes

    Full Text Available Cooperation is essential for human society, and children engage in cooperation from early on. It is unclear, however, how children select their partners for cooperation. We know that children choose selectively whom to learn from (e.g. preferring reliable over unreliable models) on a rational basis. The present study investigated whether children (and adults) also choose their cooperative partners selectively and what model characteristics they regard as important for cooperative partners and for informants about novel words. Three- and four-year-old children (N = 64) and adults (N = 14) saw contrasting pairs of models differing either in physical strength or in accuracy (in labeling known objects). Participants then performed different tasks (cooperative problem solving and word learning) requiring the choice of a partner or informant. Both children and adults chose their cooperative partners selectively. Moreover they showed the same pattern of selective model choice, regarding a wide range of model characteristics as important for cooperation (preferring both the strong and the accurate model for strength-requiring cooperation tasks), but only prior knowledge as important for word learning (preferring the knowledgeable but not the strong model for word learning tasks). Young children's selective model choice thus reveals an early rational competence: They infer characteristics from past behavior and flexibly consider what characteristics are relevant for certain tasks.

  1. 47 CFR 1.1602 - Designation for random selection.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...

  2. Behavioural Procedural Models – a multipurpose mechanistic account

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2012-05-01

    Full Text Available In this paper we outline an epistemological defence of what we call Behavioural Procedural Models (BPMs), which represent the processes of individual decisions that lead to relevant economic patterns as psychologically (rather than rationally) driven. Their general structure, and the way in which they may be incorporated into a multipurpose view of models, where the representational and interventionist goals are combined, is shown. It is argued that BPMs may provide “mechanistic-based explanations” in the sense defended by Hedström and Ylikoski (2010), which involve invariant regularities in Woodward’s sense. Such mechanisms provide a causal sort of explanation of anomalous economic patterns, which allows for extra-market intervention and manipulability in order to correct and improve some key individual decisions. This capability sets the basis for the so-called libertarian paternalism (Sunstein and Thaler 2003).

  3. Evaluation of alternative surface runoff accounting procedures using the SWAT model

    Science.gov (United States)

    For surface runoff estimation in the Soil and Water Assessment Tool (SWAT) model, the curve number (CN) procedure is commonly adopted to calculate surface runoff by utilizing antecedent soil moisture condition (SCSI) in field. In the recent version of SWAT (SWAT2005), an alternative approach is ava...
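
    For reference, the curve number relationship at the heart of the CN procedure fits in a few lines (depths in millimetres, with the conventional initial abstraction of 0.2·S):

        def scs_runoff(p_mm, cn, ia_frac=0.2):
            """SCS curve number runoff: S is the potential retention,
            Ia the initial abstraction; runoff is zero until rainfall
            exceeds Ia."""
            s = 25400.0 / cn - 254.0        # retention parameter (mm)
            ia = ia_frac * s
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        print(scs_runoff(p_mm=50.0, cn=75))   # event runoff depth in mm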

  4. SURVEY OF SELECTED PROCEDURES FOR THE INDIRECT DETERMINATION OF THE GROUP REFRACTIVE INDEX OF AIR

    Directory of Open Access Journals (Sweden)

    Filip Dvořáček

    2018-02-01

    Full Text Available The main aim of the research was to evaluate numerical procedures for the indirect determination of the group refractive index of air and to choose those suitable for the requirements of ordinary and high-accuracy distance measurement in geodesy and length metrology. For this purpose, 10 existing computation methods were derived from various authors’ original publications and all were analysed over wide intervals of wavelengths and atmospheric parameters. The determination of the phase and group refractive indices is an essential part of the evaluation of the first velocity corrections of laser interferometers and electronic distance meters. The validity of modern procedures was tested with respect to the updated CIPM-2007 equations for the density of air. The refraction model of the Leica AT401 laser tracker was also analysed.

  5. 47 CFR 1.1603 - Conduct of random selection.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...

  6. Ion-selective electrode reviews

    CERN Document Server

    Thomas, J D R

    1982-01-01

    Ion-Selective Electrode Reviews, Volume 3, provides a review of articles on ion-selective electrodes (ISEs). The volume begins with an article on methods based on titration procedures for surfactant analysis, which have been developed for discrete batch operation and for continuous AutoAnalyser use. Separate chapters deal with detection limits of ion-selective electrodes; the possibility of using inorganic ion-exchange materials as ion-sensors; and the effect of solvent on potentials of cells with ion-selective electrodes. Also included is a chapter on advances in calibration procedures, the d

  7. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  8. Pan endoscopic approach "hysterolaparoscopy" as an initial procedure in selected infertile women.

    Science.gov (United States)

    Vaid, Keya; Mehra, Sheila; Verma, Mita; Jain, Sandhya; Sharma, Abha; Bhaskaran, Sruti

    2014-02-01

    normal uterine cavity. When these 112 women (58.03%) with normal HSG reports were further subjected to hysterolaparoscopy, only 35/193 (18.13%) of them actually had normal tubes and uterus; the remaining 77 women (39.89%) benefited from the one-step procedure of hysterolaparoscopic evaluation and intervention, with further treatment given. The hysterolaparoscopic (pan-endoscopic) approach is better than HSG and should be encouraged as the first and final procedure in selected infertile women.

  9. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: ’Are we actually dealing with a convolutive mixture?’. We try to answer this question for EEG data.

  10. Qualitative mechanism models and the rationalization of procedures

    Science.gov (United States)

    Farley, Arthur M.

    1989-01-01

    A qualitative, cluster-based approach to the representation of hydraulic systems is described and its potential for generating and explaining procedures is demonstrated. Many ideas are formalized and implemented as part of an interactive, computer-based system. The system allows for designing, displaying, and reasoning about hydraulic systems. The interactive system has an interface consisting of three windows: a design/control window, a cluster window, and a diagnosis/plan window. A qualitative mechanism model for the ORS (Orbital Refueling System) is presented to coordinate with ongoing research on this system being conducted at NASA Ames Research Center.

  11. Evaluation of selection procedures of an international school | O ...

    African Journals Online (AJOL)

    Consequently the current admission procedures used by a southern African international school were ... The Culture-Fair Intelligence Test (Scale 2 Form A) appeared to have more predictive value than the MAT-SF for academic achievement.

  12. 5 CFR 720.206 - Selection guidelines.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Selection guidelines. 720.206 Section 720... guidelines. This subpart sets forth requirements for a recruitment program, not a selection program... procedures and criteria must be consistent with the Uniform Guidelines on Employee Selection Procedures (43...

  13. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Daele, Timothy, Van; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration...... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring......) identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort.

  14. Expert System Model for Educational Personnel Selection

    Directory of Open Access Journals (Sweden)

    Héctor A. Tabares-Ospina

    2013-06-01

    Full Text Available Staff selection is a difficult task due to the subjectivity that evaluation involves. The process can be complemented using a decision support system. This paper presents the implementation of an expert system to systematize the selection process for professors. The management of the software development is divided into 4 parts: requirements, design, implementation and commissioning. The proposed system models specific knowledge through relationships between evidence variables and objectives.

  15. Rule-Governed Imitative Verbal Behavior as a Function of Modeling Procedures

    Science.gov (United States)

    Clinton, LeRoy; Boyce, Kathleen D.

    1975-01-01

    Investigated the effectiveness of modeling procedures alone and complemented by the appropriate rule statement on the production of plurals. Subjects were 20 normal and 20 retarded children who were randomly assigned to one of two learning conditions and who received either affective or informative social reinforcement. (Author/SDH)

  16. Designing Organizational Effectiveness Model of Selected Iraq’s Sporting Federations Based on Competing Values Framework

    Directory of Open Access Journals (Sweden)

    Hossein Eydi

    2013-01-01

    Full Text Available The aim of the present study was to design an effectiveness model for selected Iraqi sport federations based on the competing values framework. The statistical population of the study included 221 subjects: chairmen, expert staff, national adolescent athletes, and national referees. Of these, 180 subjects (81.4 percent) answered the standard questionnaire of Eydi et al (2011) with a five-point Likert scale. Content and face validity of this tool was confirmed by 12 academic professors and its reliability was validated by Cronbach's alpha (r = 0.97). Results of a Structural Equation Model (SEM) based on the path analysis method showed that the factors of expert human resources (0.88), organizational interaction (0.88), productivity (0.87), employees' cohesion (0.84), planning (0.84), organizational stability (0.81), flexibility (0.78), and organizational resources (0.74) had the greatest effects on organizational effectiveness. Also, findings of factor analysis showed that the patterns of internal procedures and rational goals were the main patterns of the competing values framework and determinants of organizational effectiveness in Iraq's selected sport federations. Moreover, the federations of football, track and field, weightlifting, and basketball had the highest mean organizational effectiveness, respectively. Hence, Iraqi sport federations mainly focused on organizational control and internal attention as indices of OE.

  17. Diversified models for portfolio selection based on uncertain semivariance

    Science.gov (United States)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since the financial markets are complex, sometimes the future security returns are represented mainly based on experts' estimations due to lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given subjectively by experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. At last, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
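
    The uncertain-variable machinery and the 99-method are beyond a short sketch, but the mean-semivariance tradeoff at the models' core can be illustrated on sampled return scenarios; the risk-aversion weight below is an arbitrary assumption:

        import numpy as np

        def semivariance(returns, weights):
            """Downside (below-mean) variance of the portfolio return."""
            port = np.asarray(returns) @ np.asarray(weights)
            downside = np.minimum(port - port.mean(), 0.0)
            return np.mean(downside**2)

        rng = np.random.default_rng(3)
        scenarios = rng.normal(0.01, 0.05, size=(500, 4))   # sampled returns
        w = np.array([0.4, 0.3, 0.2, 0.1])
        objective = scenarios.mean(axis=0) @ w - 2.0 * semivariance(scenarios, w)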

  18. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  19. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
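
    Readers outside SPSS can reproduce the same kind of random-intercept growth model with statsmodels; the data below are a synthetic stand-in for the six-wave project data:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(6)
        n, waves = 100, 6
        df = pd.DataFrame({
            "subject": np.repeat(np.arange(n), waves),
            "wave": np.tile(np.arange(waves), n),
        })
        df["score"] = (50 + 1.5 * df["wave"]
                       + np.repeat(rng.normal(0, 4, n), waves)  # subject intercepts
                       + rng.normal(0, 2, len(df)))             # residual noise

        # Random-intercept growth model: subjects as grouping factor.
        fit = smf.mixedlm("score ~ wave", df, groups=df["subject"]).fit()
        print(fit.params["wave"])   # estimated average change per wave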

  20. A rational procedure for the selection of appropriate procurement ...

    African Journals Online (AJOL)

    Construction work is procured via a number of systems, such as being Open and ... routes for construction projects as a means to enhance the quality of management ... systems to decision-making procedures in the construction industry.

  1. A Quantile Regression Approach to Estimating the Distribution of Anesthetic Procedure Time during Induction.

    Directory of Open Access Journals (Sweden)

    Hsin-Lun Wu

    Full Text Available Although procedure time analyses are important for operating room management, it is not easy to extract useful information from clinical procedure time data. A novel approach was proposed to analyze procedure time during anesthetic induction. A two-step regression analysis was performed to explore influential factors of anesthetic induction time (AIT). Linear regression with stepwise model selection was used to select significant correlates of AIT and then quantile regression was employed to illustrate the dynamic relationships between AIT and the selected variables at distinct quantiles. A total of 1,060 patients were analyzed. The first- and second-year residents (R1-R2) required longer AIT than the third- and fourth-year residents and attending anesthesiologists (p = 0.006). Factors prolonging AIT included American Society of Anesthesiologists physical status ≧ III, arterial, central venous and epidural catheterization, and use of bronchoscopy. Presence of the surgeon before induction decreased AIT (p < 0.001). Type of surgery also had a significant influence on AIT. Quantile regression satisfactorily estimated the extra time needed to complete induction for each influential factor at distinct quantiles. Our analysis of AIT demonstrated the benefit of quantile regression analysis in providing a more comprehensive view of the relationships between procedure time and related factors. This novel two-step regression approach has potential applications to procedure time analysis in operating room management.
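
    The two-step scheme maps directly onto standard tooling: an OLS fit for step one, then quantile regressions across several quantiles for step two. Variable names and the synthetic data below are illustrative stand-ins for the study's covariates:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        df = pd.DataFrame({
            "asa3": rng.integers(0, 2, 300),        # ASA status >= III
            "junior": rng.integers(0, 2, 300),      # R1-R2 resident
        })
        df["ait"] = (8 + 3 * df["asa3"] + 2 * df["junior"]
                     + rng.gamma(2.0, 1.5, 300))    # right-skewed AIT (minutes)

        ols_fit = smf.ols("ait ~ asa3 + junior", df).fit()  # step 1 (selection omitted)
        for q in (0.25, 0.5, 0.75, 0.9):                    # step 2
            fit = smf.quantreg("ait ~ asa3 + junior", df).fit(q=q)
            print(q, round(fit.params["asa3"], 2))  # extra minutes at this quantile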

  2. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision making method is used to obtain the financial quality score of each asset based upon the investor's rating on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India, to demonstrate the effectiveness of the proposed methodology.

  3. Information Overload in Multi-Stage Selection Procedures

    NARCIS (Netherlands)

    S.S. Ficco (Stefano); V.A. Karamychev (Vladimir)

    2004-01-01

    The paper studies information processing imperfections in a fully rational decision-making network. It is shown that imperfect information transmission and imperfect information acquisition in a multi-stage selection game yield information overload. The paper analyses the mechanisms

  4. Fisher-Wright model with deterministic seed bank and selection.

    Science.gov (United States)

    Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel

    2017-04-01

    Seed banks are common characteristics to many plant species, which allow storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection coupled with a deterministic seed bank assuming the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path of approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection onto the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached. Copyright © 2016 Elsevier Inc. All rights reserved.
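
    A minimal forward simulation conveys the structure, assuming uniform germination weights over the last m generations (the paper's diffusion limit and exact weighting are more general):

        import numpy as np

        def wf_seedbank(n=1000, s=0.02, m=5, gens=2000, p0=0.1, seed=0):
            """Wright-Fisher sketch with haploid selection and a deterministic
            m-generation seed bank: each generation germinates from a uniform
            mixture of the last m post-selection allele frequencies."""
            rng = np.random.default_rng(seed)
            hist = [p0] * m
            for _ in range(gens):
                p = np.mean(hist[-m:])                   # germination from the bank
                p_sel = p * (1 + s) / (1 + s * p)        # selection
                hist.append(rng.binomial(n, p_sel) / n)  # drift
            return np.array(hist)

        # Longer banks (larger m) damp drift and slow the response to selection.
        traj = wf_seedbank(m=5)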

  5. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    Science.gov (United States)

    Harwati

    2017-06-01

    Supplier selection is a decision involving many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Having so many criteria in a supplier selection model sometimes makes it difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in the company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four easy and simple criteria can be used to select suppliers: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real case simulation shows that the simple model provides the same decision as a more complex model.
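
    With the reported weights, the resulting selection rule is a plain weighted sum over normalized criterion scores; the supplier ratings below are illustrative:

        # AHP weights as reported in the paper.
        WEIGHTS = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

        def supplier_score(ratings):
            """ratings: criterion -> score in [0, 1], higher is better
            (e.g. a cheaper price earns a higher 'price' score)."""
            return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

        suppliers = {
            "A": {"price": 0.9, "shipment": 0.6, "quality": 0.7, "service": 0.5},
            "B": {"price": 0.6, "shipment": 0.9, "quality": 0.8, "service": 0.9},
        }
        best = max(suppliers, key=lambda s: supplier_score(suppliers[s]))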

  6. A BAYESIAN NONPARAMETRIC MIXTURE MODEL FOR SELECTING GENES AND GENE SUBNETWORKS.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Yu, Tianwei

    2014-06-01

    It is very challenging to select informative features from tens of thousands of measured features in high-throughput data analysis. Recently, several parametric/regression models have been developed utilizing the gene network information to select genes or pathways strongly associated with a clinical/biological outcome. Alternatively, in this paper, we propose a nonparametric Bayesian model for gene selection incorporating network information. In addition to identifying genes that have a strong association with a clinical outcome, our model can select genes with particular expressional behavior, in which case the regression models are not directly applicable. We show that our proposed model is equivalent to an infinite mixture model, for which we develop a posterior computation algorithm based on Markov chain Monte Carlo (MCMC) methods. We also propose two fast computing algorithms that approximate the posterior simulation with good accuracy but relatively low computational cost. We illustrate our methods on simulation studies and the analysis of Spellman yeast cell cycle microarray data.

  7. Selection of climate change scenario data for impact modelling

    DEFF Research Database (Denmark)

    Sloth Madsen, M; Fox Maule, C; MacKellar, N

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study...... illustrates how the projected climate change signal of important variables as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make...... the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented...

  8. Implications of the Declarative/Procedural Model for Improving Second Language Learning: The Role of Memory Enhancement Techniques

    Science.gov (United States)

    Ullman, Michael T.; Lovelett, Jarrett T.

    2018-01-01

    The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till…

  9. A model selection support system for numerical simulations of nuclear thermal-hydraulics

    International Nuclear Information System (INIS)

    Gofuku, Akio; Shimizu, Kenji; Sugano, Keiji; Yoshikawa, Hidekazu; Wakabayashi, Jiro

    1990-01-01

    In order to efficiently execute a dynamic simulation of a large-scale engineering system such as a nuclear power plant, it is necessary to develop an intelligent simulation support system for all phases of the simulation. This study is concerned with intelligent support for the program development phase and addresses an adequate model selection support method, applying AI (Artificial Intelligence) techniques to execute a simulation consistent with its purpose and conditions. A prototype expert system to support model selection for numerical simulations of nuclear thermal-hydraulics in the case of a cold leg small break loss-of-coolant accident of a PWR plant is now under development on a personal computer. The steps to support the selection of both the fluid model and the constitutive equations for the drift flux model have been developed. Several cases of model selection were carried out and reasonable model selection results were obtained. (author)

  10. Characterizing a porous road pavement using surface impedance measurement: a guided numerical inversion procedure.

    Science.gov (United States)

    Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel

    2013-12-01

    This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete and using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
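
    The guided inversion step can be sketched with an off-the-shelf annealer: minimize the misfit between measured and modelled impedance over preset parameter ranges. The two-parameter toy model below stands in for the Zwikker and Kosten model, whose real parameters (porosity, resistivity, tortuosity, ...) would each get their own bounds:

        import numpy as np
        from scipy.optimize import dual_annealing

        freqs = np.linspace(200, 2000, 50)

        def model_impedance(params, f):
            """Toy two-parameter stand-in for an equivalent fluid model."""
            a, b = params
            return a / np.sqrt(f) + b

        observed = model_impedance((30.0, 1.2), freqs) \
                   + np.random.default_rng(7).normal(0, 0.05, freqs.size)

        result = dual_annealing(
            lambda p: np.sum((model_impedance(p, freqs) - observed) ** 2),
            bounds=[(1.0, 100.0), (0.0, 5.0)])
        print(result.x)   # recovered parameters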

  11. Modeling selective pressures on phytoplankton in the global ocean.

    Directory of Open Access Journals (Sweden)

    Jason G Bragg

    Full Text Available Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying

  12. Modeling selective pressures on phytoplankton in the global ocean.

    Science.gov (United States)

    Bragg, Jason G; Dutkiewicz, Stephanie; Jahn, Oliver; Follows, Michael J; Chisholm, Sallie W

    2010-03-10

    Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying ocean processes and

  13. Modeling Directional Selectivity Using Self-Organizing Delay-Adaptation Maps

    OpenAIRE

    Tversky, Tal; Miikkulainen, Risto

    2002-01-01

    Using a delay-adaptation learning rule, we model the activity-dependent development of directionally selective cells in the primary visual cortex. Based on input stimuli, the learning rule shifts delays to create synchronous arrival of spikes at cortical cells. As a result, delays become tuned, creating a smooth cortical map of direction selectivity. This result demonstrates how delay adaptation can serve as a powerful abstraction for modeling temporal learning in the brain.

  14. Robotic vascular resections during Whipple procedure

    OpenAIRE

    Allan, Bassan J.; Novak, Stephanie M.; Hogg, Melissa E.; Zeh, Herbert J.

    2018-01-01

    Indications for resection of pancreatic cancers have evolved to include selected patients with involvement of peri-pancreatic vascular structures. Open Whipple procedures have been the standard approach for patients requiring reconstruction of the portal vein (PV) or superior mesenteric vein (SMV). Recently, high-volume centers are performing minimally invasive Whipple procedures with portovenous resections. Our institution has performed seventy robotic Whipple procedures with concomitant vascular resections. This report outlines our technique.

  15. Development of a diagnosis- and procedure-based risk model for 30-day outcome after pediatric cardiac surgery.

    Science.gov (United States)

    Crowe, Sonya; Brown, Kate L; Pagel, Christina; Muthialu, Nagarajan; Cunningham, David; Gibbs, John; Bull, Catherine; Franklin, Rodney; Utley, Martin; Tsang, Victor T

    2013-05-01

    The study objective was to develop a risk model incorporating diagnostic information to adjust for case-mix severity during routine monitoring of outcomes for pediatric cardiac surgery. Data from the Central Cardiac Audit Database for all pediatric cardiac surgery procedures performed in the United Kingdom between 2000 and 2010 were included: 70% for model development and 30% for validation. Units of analysis were 30-day episodes after the first surgical procedure. We used logistic regression for 30-day mortality. Risk factors considered included procedural information based on Central Cardiac Audit Database "specific procedures," diagnostic information defined by 24 "primary" cardiac diagnoses and "univentricular" status, and other patient characteristics. Of the 27,140 30-day episodes in the development set, 25,613 were survivals, 834 were deaths, and 693 were of unknown status (mortality, 3.2%). The risk model includes procedure, cardiac diagnosis, univentricular status, age band (neonate, infant, child), continuous age, continuous weight, presence of non-Down syndrome comorbidity, bypass, and year of operation 2007 or later (because of decreasing mortality). A risk score was calculated for 95% of cases in the validation set (weight missing in 5%). The model discriminated well; the C-index for validation set was 0.77 (0.81 for post-2007 data). Removal of all but procedural information gave a reduced C-index of 0.72. The model performed well across the spectrum of predicted risk, but there was evidence of underestimation of mortality risk in neonates undergoing operation from 2007. The risk model performs well. Diagnostic information added useful discriminatory power. A future application is risk adjustment during routine monitoring of outcomes in the United Kingdom to assist quality assurance. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  16. Experiences with a procedure for modeling product knowledge

    DEFF Research Database (Denmark)

    Hansen, Benjamin Loer; Hvam, Lars

    2002-01-01

    This paper presents experiences with a procedure for building configurators. The procedure has been used in an American company producing custom-made precision air-conditioning equipment. The paper describes experiences with the use of the procedure and with the project in general.

  17. Heuristic procedures for transmission planning in competitive electricity markets

    International Nuclear Information System (INIS)

    Lu, Wene; Bompard, Ettore; Napoli, Roberto; Jiang, Xiuchen

    2007-01-01

    The network structure of the power system, in an electricity market under the pool model, may have severe impacts on market performance, reducing market efficiency considerably, especially when producers bid strategically. In this context network reinforcement plays a major role and proper strategies of transmission planning need to be devised. This paper presents, for pool-model electricity markets, two heuristic procedures to select the most effective subset of lines that would reduce the impacts on the market, from a set of predefined candidate lines and within the allowed budget for network expansion. A set of indices that account for the economic impacts of reinforcing the candidate lines, both in terms of construction cost and market efficiency, are proposed and used as sensitivity indices in the heuristic procedure. The proposed methods are applied and compared with reference to an 18-bus test system. (author)

  18. A Generic Procedure for the Assessment of the Effect of Concrete Admixtures on the Sorption of Radionuclides on Cement: Concept and Selected Results

    International Nuclear Information System (INIS)

    Glaus, M.A.; Laube, A.; Van Loon, L.R.

    2004-01-01

    A screening procedure is proposed for the assessment of the effect of concrete admixtures on the sorption of radionuclides by cement. The procedure is both broad and generic, and can thus be used as input for the assessment of concrete admixtures which might be used in the future. The experimental feasibility and significance of the screening procedure are tested using selected concrete admixtures: i.e. sulfonated naphthalene-formaldehyde condensates, lignosulfonates, and a plasticiser used at PSI for waste conditioning. Their effect on the sorption properties of Ni(II), Eu(III) and Th(IV) in cement is investigated using crushed Hardened Cement Paste (HCP), as well as cement pastes prepared in the presence of these admixtures. Strongly adverse effects on the sorption of the radionuclides tested are observed only in single cases, and under extreme conditions: i.e. at high ratios of concrete admixtures to HCP, and at low ratios of HCP to cement pore water. Under realistic conditions, both radionuclide sorption and the sorption of isosaccharinic acid (a strong complexant produced in cement-conditioned wastes containing cellulose) remain unaffected by the presence of concrete admixtures, which can be explained by the sorption of the admixtures onto the HCP. The pore-water concentrations of the concrete admixtures tested are thereby reduced to levels at which the formation of radionuclide complexes is no longer of importance. Further, the Langmuir sorption model, proposed for the sorption of concrete admixtures on HCP, suggests that the HCP surface does not become saturated, at least for those concrete admixtures tested. (author)

  19. A Generic Procedure for the Assessment of the Effect of Concrete Admixtures on the Sorption of Radionuclides on Cement: Concept and Selected Results

    Energy Technology Data Exchange (ETDEWEB)

    Glaus, M.A.; Laube, A.; Van Loon, L.R

    2004-03-01

    A screening procedure is proposed for the assessment of the effect of concrete admixtures on the sorption of radionuclides by cement. The procedure is both broad and generic, and can thus be used as input for the assessment of concrete admixtures which might be used in the future. The experimental feasibility and significance of the screening procedure are tested using selected concrete admixtures: i.e. sulfonated naphthalene-formaldehyde condensates, lignosulfonates, and a plasticiser used at PSI for waste conditioning. Their effect on the sorption properties of Ni(II), Eu(III) and Th(IV) in cement is investigated using crushed Hardened Cement Paste (HCP), as well as cement pastes prepared in the presence of these admixtures. Strongly adverse effects on the sorption of the radionuclides tested are observed only in single cases, and under extreme conditions: i.e. at high ratios of concrete admixtures to HCP, and at low ratios of HCP to cement pore water. Under realistic conditions, both radionuclide sorption and the sorption of isosaccharinic acid (a strong complexant produced in cement-conditioned wastes containing cellulose) remain unaffected by the presence of concrete admixtures, which can be explained by the sorption of the admixtures onto the HCP. The pore-water concentrations of the concrete admixtures tested are thereby reduced to levels at which the formation of radionuclide complexes is no longer of importance. Further, the Langmuir sorption model, proposed for the sorption of concrete admixtures on HCP, suggests that the HCP surface does not become saturated, at least for those concrete admixtures tested. (author)

  20. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on a model with optimal hyperparameters. The computational cost of SVM model selection makes it difficult to apply in face recognition. To overcome this shortcoming, we utilize the advantages of uniform design--space-filling designs and uniform scattering theory--to search for optimal SVM hyperparameters. We then propose a face recognition scheme based on SVM with the optimal model obtained by replacing the grid and gradient-based methods with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
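
    As an illustration of the idea, here is a minimal sketch in which a simple space-filling point set over log-scaled (C, gamma) stands in for a proper uniform design table, and the scikit-learn digits dataset stands in for face images; none of this reproduces the authors' exact scheme.

```python
# A minimal sketch of design-based SVM model selection: evaluate a
# space-filling scatter of hyperparameter candidates instead of a full grid.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

rng = np.random.default_rng(0)
n_points = 20
# Lattice-style scatter of candidate hyperparameters in log space,
# standing in for a uniform design table.
u = (np.arange(n_points) + rng.random(n_points)) / n_points
v = ((np.arange(n_points) * 7) % n_points + rng.random(n_points)) / n_points
C_candidates = 10.0 ** (u * 4 - 1)        # C in [0.1, 1000]
gamma_candidates = 10.0 ** (v * 4 - 5)    # gamma in [1e-5, 0.1]

best_score, best_params = -np.inf, None
for C, gamma in zip(C_candidates, gamma_candidates):
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, (C, gamma)

print(f"best CV accuracy {best_score:.3f} at C={best_params[0]:.3g}, "
      f"gamma={best_params[1]:.3g}")
```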

  1. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    Full Text Available This paper presents an integrated supplier selection and inventory management approach using the grey relationship model (GRM) as well as a multi-objective decision making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.

  2. Selection, calibration, and validation of models of tumor growth.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory

  3. ERP Software Selection Model using Analytic Network Process

    OpenAIRE

    Lesmana , Andre Surya; Astanti, Ririn Diar; Ai, The Jin

    2014-01-01

    During the implementation of Enterprise Resource Planning (ERP) in any company, one of the most important issues is the selection of ERP software that can satisfy the needs and objectives of the company. This issue is crucial since it may affect the duration of ERP implementation and the costs incurred for the ERP implementation. This research tries to construct a model of the selection of ERP software that is beneficial to the company in order to carry out the selection of the right ERP software...

  4. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  5. Gorleben. Waste management site based on an appropriate selection procedure

    International Nuclear Information System (INIS)

    Tiggemann, Anselm

    2010-01-01

    On February 22, 1977, the Lower Saxony state government decided in favor of Gorleben as a ''preliminary'' site of a ''potential'' facility for managing the back end of the fuel cycle of the nuclear power plants in the Federal Republic of Germany. The Lower Saxony files, closed until recently, now allow both the factual basis and the political background to be reconstructed comprehensively. The first selection procedure for the site of a ''nuclear waste management center,'' financed by the federal government and conducted by Kernbrennstoff-Wiederaufarbeitungsgesellschaft (KEWA) in 1974, had not considered Gorleben in any detail. As early as the winter of 1975/76, Gorleben and a number of other potential sites were indicated to KEWA by the Lower Saxony State Ministry of Economics. A new finding is KEWA's 1976 conclusion that Gorleben surpassed all potential sites examined so far in terms of suitability. As a consequence, Gorleben was regarded as an alternative alongside the 3 sites favored before, i.e. Wahn, Lutterloh, and Lichtenhorst, when the 3 Federal Ministers, Hans Matthoefer (SPD), Werner Maihofer (F.D.P.), and Hans Friderichs (F.D.P.), discussed the nuclear waste management project with Minister President Albrecht (CDU) in November 1976. The Lower Saxony State Cabinet commissioned an interministerial working party (IMAK) to find other potential sites besides Wahn, Lutterloh, Lichtenhorst, and Gorleben. IMAK proposed Gorleben, Lichtenhorst, Mariaglueck, and Wahn for further examination. In a further proposal, IMAK recommended to the State Cabinet that either Gorleben or Lichtenhorst be earmarked. (orig.)

  6. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro-fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). Inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, general bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among the 16. Each of these is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The best-model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model with four triangular MF is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.

  7. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the extensive calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
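
    The TOPSIS ranking step is easy to sketch. The decision matrix and weights below are made-up numbers (in the proposed model the weights would come from ANP), and all criteria are treated as benefit-type.

```python
# A minimal sketch of the TOPSIS ranking step, assuming criteria weights
# already obtained (e.g., from ANP); data are illustrative only.
import numpy as np

# Rows: candidate suppliers; columns: criteria (all benefit-type here).
X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 9.0]])
weights = np.array([0.5, 0.3, 0.2])

# Vector-normalize each criterion, then apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

ideal, anti_ideal = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)      # distance to ideal solution
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)        # relative closeness

print("closeness to ideal:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness))
```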

  8. Methodological development for selection of significant predictors explaining fatal road accidents.

    Science.gov (United States)

    Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco

    2016-05-01

    Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an open research question. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection issues. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator experienced the largest reduction internationally during those years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.

  9. Modeling of Clostridium tyrobutyricum for Butyric Acid Selectivity in Continuous Fermentation

    Directory of Open Access Journals (Sweden)

    Jianjun Du

    2014-04-01

    Full Text Available A mathematical model was developed to describe batch and continuous fermentation of glucose to organic acids with Clostridium tyrobutyricum. A modified Monod equation was used to describe cell growth, and a Luedeking-Piret equation was used to describe the production of butyric and acetic acids. Using the batch fermentation equations, models predicting butyric acid selectivity for continuous fermentation were also developed. The model showed that butyric acid production was a strong function of cell mass, while acetic acid production was a function of cell growth rate. Further, it was found that at high acetic acid concentrations, acetic acid was metabolized to butyric acid and that this conversion could be modeled. In batch fermentation, high butyric acid selectivity occurred at high initial cell or glucose concentrations. In continuous fermentation, decreased dilution rate improved selectivity; at a dilution rate of 0.028 h−1, the selectivity reached 95.8%. The model and experimental data showed that at total cell recycle, the butyric acid selectivity could reach 97.3%. This model could be used to optimize butyric acid production using C. tyrobutyricum in a continuous fermentation scheme. This is the first study that mathematically describes batch, steady state, and dynamic behavior of C. tyrobutyricum for butyric acid production.
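
    A minimal sketch of a batch model of this general kind (Monod-type growth plus Luedeking-Piret production terms) is shown below; all parameter values are illustrative, not the fitted values from the study.

```python
# A minimal sketch of batch fermentation: Monod growth with Luedeking-Piret
# acid production; parameter values are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks = 0.3, 2.0           # 1/h, g/L
Yxs = 0.15                      # g cells / g glucose
alpha_b, beta_b = 2.0, 0.05     # butyrate: growth- and non-growth-associated
alpha_a, beta_a = 0.5, 0.01     # acetate terms

def batch(t, y):
    X, S, B, A = y              # cells, glucose, butyric acid, acetic acid
    mu = mu_max * S / (Ks + S) if S > 0 else 0.0
    dX = mu * X
    dS = -dX / Yxs
    dB = alpha_b * dX + beta_b * X   # Luedeking-Piret form
    dA = alpha_a * dX + beta_a * X
    return [dX, dS, dB, dA]

sol = solve_ivp(batch, (0, 48), [0.1, 50.0, 0.0, 0.0])
X, S, B, A = sol.y[:, -1]
print(f"final butyric acid selectivity: {B / (B + A):.2%}")
```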

  10. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
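
    A minimal sketch of repeated nested cross-validation with scikit-learn follows; it stands in for the authors' algorithms and cloud implementation, and a toy dataset replaces the QSAR sets.

```python
# A minimal sketch of repeated nested cross-validation: hyperparameter
# tuning happens in the inner loop, assessment in the outer loop, and the
# whole procedure is repeated over different splits.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
param_grid = {"C": [0.1, 1, 10], "gamma": [1e-4, 1e-3, 1e-2]}

scores = []
for repeat in range(5):  # repetition averages out the split-induced variance
    inner = KFold(n_splits=3, shuffle=True, random_state=repeat)
    outer = KFold(n_splits=5, shuffle=True, random_state=100 + repeat)
    model = GridSearchCV(SVC(), param_grid, cv=inner)  # tuning inside
    scores.append(cross_val_score(model, X, y, cv=outer).mean())

print(f"nested CV accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```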

  11. Does attentional selectivity in the flanker task improve discretely or gradually?

    Directory of Open Access Journals (Sweden)

    Ronald eHübner

    2012-10-01

    Full Text Available An important question is whether attentional selectivity improves discretely or continuously during stimulus processing. In a recent study, Hübner, et al. (2010 found that the discrete DSTP model accounted better for flanker-task data than various continuous improvement models. However, in a subsequent study, White, et al. (2011 introduced the continuous SSP model and showed that it was superior to the DSTP model. From this result they concluded that attentional selectivity improves continuously rather than discretely. Because different stimuli and procedures were used in these two studies, though, we questioned that the superiority of the SSP model holds generally. Therefore, we fit the SSP model to Hübner et al.’s data and found that the DSTP model was again superior. A series of four experiments revealed that model superiority depends on the response-stimulus interval (RSI. Together, our results demonstrate that methodological details can be crucial for model selection, and that further comparisons between the models are needed before it can be decided whether attentional selectivity improves continuously or discretely.

  12. Comparison of automatic procedures in the selection of peaks over threshold in flood frequency analysis: A Canadian case study in the context of climate change

    Science.gov (United States)

    Durocher, M.; Mostofi Zadeh, S.; Burn, D. H.; Ashkar, F.

    2017-12-01

    Floods are one of the most costly hazards, and frequency analysis of river discharges is an important part of the tools at our disposal to evaluate their inherent risks and to provide an adequate response. In comparison to the common examination of annual streamflow maximums, peaks over threshold (POT) is an interesting alternative that makes better use of the available information by including more than one flood event per year (on average). However, a major challenge is the selection of a satisfactory threshold above which peaks are assumed to respect certain conditions necessary for an adequate estimation of the risk. Additionally, studies have shown that POT is also a valuable approach to investigate the evolution of flood regimes in the context of climate change. Recently, automatic procedures for the selection of the threshold have been suggested to guide that important choice, which otherwise relies on graphical tools and expert judgment. Furthermore, having an automatic procedure that is objective allows for quickly repeating the analysis on a large number of samples, which is useful in the context of large databases or for uncertainty analysis based on a resampling approach. This study investigates the impact of considering such procedures in a case study including many sites across Canada. A simulation study is conducted to evaluate the bias and predictive power of the automatic procedures in similar conditions as well as investigating the power of derived nonstationarity tests. The results obtained are also evaluated in the light of expert judgments established in a previous study. Ultimately, this study provides a thorough examination of the considerations that need to be addressed when conducting POT analysis using automatic threshold selection.
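
    One simple flavour of such a procedure can be sketched as follows: fit a generalized Pareto distribution (GPD) to the exceedances of each candidate threshold and keep the lowest threshold whose fit passes a goodness-of-fit test. This illustrates the family of methods, not the specific procedures compared in the study, and the data are synthetic.

```python
# A minimal sketch of automatic threshold selection for POT analysis:
# scan candidate thresholds and keep the lowest one whose GPD fit to the
# exceedances is not rejected by a KS test. Data are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
peaks = rng.gamma(shape=2.0, scale=50.0, size=2000)  # stand-in flood peaks

for threshold in np.quantile(peaks, [0.5, 0.6, 0.7, 0.8, 0.9]):
    exceedances = peaks[peaks > threshold] - threshold
    c, loc, scale = stats.genpareto.fit(exceedances, floc=0)
    pvalue = stats.kstest(exceedances, "genpareto", args=(c, loc, scale)).pvalue
    print(f"threshold {threshold:7.1f}: {exceedances.size:4d} peaks, "
          f"KS p-value {pvalue:.3f}")
    if pvalue > 0.05:  # first (lowest) acceptable threshold wins
        print("selected threshold:", round(float(threshold), 1))
        break
```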

  13. Mean field theory for a balanced hypercolumn model of orientation selectivity in primary visual cortex

    CERN Document Server

    Lerchner, A; Hertz, J; Ahmadi, M

    2004-01-01

    We present a complete mean field theory for a balanced state of a simple model of an orientation hypercolumn. The theory is complemented by a description of a numerical procedure for solving the mean-field equations quantitatively. With our treatment, we can determine self-consistently both the firing rates and the firing correlations, without being restricted to specific neuron models. Here, we solve the analytically derived mean-field equations numerically for integrate-and-fire neurons. Several known key properties of orientation selective cortical neurons emerge naturally from the description: Irregular firing with statistics close to -- but not restricted to -- Poisson statistics; an almost linear gain function (firing frequency as a function of stimulus contrast) of the neurons within the network; and a contrast-invariant tuning width of the neuronal firing. We find that the irregularity in firing depends sensitively on synaptic strengths. If Fano factors are bigger than 1, then they are so for all stimuli.

  14. Robotic vascular resections during Whipple procedure.

    Science.gov (United States)

    Allan, Bassan J; Novak, Stephanie M; Hogg, Melissa E; Zeh, Herbert J

    2018-01-01

    Indications for resection of pancreatic cancers have evolved to include selected patients with involvement of peri-pancreatic vascular structures. Open Whipple procedures have been the standard approach for patients requiring reconstruction of the portal vein (PV) or superior mesenteric vein (SMV). Recently, high-volume centers are performing minimally invasive Whipple procedures with portovenous resections. Our institution has performed seventy robotic Whipple procedures with concomitant vascular resections. This report outlines our technique.

  15. Automating an integrated spatial data-mining model for landfill site selection

    Science.gov (United States)

    Abujayyab, Sohaib K. M.; Ahamad, Mohd Sanusi S.; Yahya, Ahmad Shukri; Ahmad, Siti Zubaidah; Aziz, Hamidi Abdul

    2017-10-01

    An integrated programming environment represents a robust approach to building a valid model for landfill site selection. One of the main challenges in an integrated model is the complicated processing and modelling required across the programming stages, together with several limitations. An automation process helps avoid these limitations and improves the interoperability between integrated programming environments. This work targets the automation of a spatial data-mining model for landfill site selection by integrating a spatial programming environment (Python-ArcGIS) with a non-spatial environment (MATLAB). The model was constructed using neural networks and is divided into nine stages distributed between MATLAB and Python-ArcGIS. A case study was taken from the northern part of Peninsular Malaysia. 22 criteria were selected as input data and used to build the training and testing datasets. The outcomes show a high accuracy of 98.2% on the testing dataset using 10-fold cross-validation. The automated spatial data-mining model provides a solid platform for decision makers performing landfill site selection and planning operations on a regional scale.

  16. Dynamical modeling procedure of a Li-ion battery pack suitable for real-time applications

    International Nuclear Information System (INIS)

    Castano, S.; Gauchia, L.; Voncila, E.; Sanz, J.

    2015-01-01

    Highlights: • Dynamical modeling of a 50 A h battery pack composed of 56 cells. • Detailed analysis of SOC tests at realistic performance range imposed by BMS. • We propose an electrical circuit that improves how the battery capacity is modeled. • The model is validated in the SOC range using a real-time experimental setup. - Abstract: This paper presents the modeling of a 50 A h battery pack composed of 56 cells, taking into account real battery performance conditions imposed by the BMS control. The modeling procedure starts with a detailed analysis of experimental charge and discharge SOC tests. Results from these tests are used to obtain the battery model parameters at a realistic performance range (20–80% SOC). The model topology aims to better describe the finite charge contained in a battery pack. The model has been validated at three different SOC values in order to verify the model response at real battery pack operation conditions. The validation tests show that the battery pack model is able to simulate the real battery response with excellent accuracy in the range tested. The proposed modeling procedure is fully applicable to any Li-ion battery pack, regardless of the number of series or parallel cells or its rated capacity
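
    A minimal sketch of a discrete-time equivalent-circuit update of the general kind used for such packs (SOC-dependent open-circuit voltage, series resistance, one RC branch) is given below; the parameter values and OCV curve are illustrative, not those identified for this pack.

```python
# A minimal sketch of a discrete-time equivalent-circuit battery model:
# open-circuit voltage plus a series resistance and one RC branch.
# All parameter values and the OCV curve are illustrative placeholders.
import numpy as np

Q = 50.0 * 3600.0                   # capacity in coulombs (50 Ah)
R0, R1, C1 = 0.010, 0.005, 3000.0   # ohms, ohms, farads
dt = 1.0                            # time step in seconds

def ocv(soc):
    """Toy open-circuit voltage curve over the 20-80% SOC window."""
    return 180.0 + 30.0 * soc

soc, v_rc = 0.8, 0.0
current = 25.0  # A, constant-current discharge
decay = np.exp(-dt / (R1 * C1))     # exact discretization of the RC branch
for _ in range(3600):               # one hour of discharge
    soc -= current * dt / Q
    v_rc = v_rc * decay + current * R1 * (1 - decay)
    v_terminal = ocv(soc) - current * R0 - v_rc

print(f"SOC after 1 h: {soc:.2f}, terminal voltage: {v_terminal:.1f} V")
```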

  17. Joint Variable Selection and Classification with Immunohistochemical Data

    Directory of Open Access Journals (Sweden)

    Debashis Ghosh

    2009-01-01

    Full Text Available To determine if candidate cancer biomarkers have utility in a clinical setting, validation using immunohistochemical methods is typically done. Most analyses of such data have not incorporated the multivariate nature of the staining profiles. In this article, we consider modelling such data using recently developed ideas from the machine learning community. In particular, we consider the joint goals of feature selection and classification. We develop estimation procedures for the analysis of immunohistochemical profiles using the least absolute shrinkage and selection operator (lasso). These lead to novel and flexible models and algorithms for the analysis of compositional data. The techniques are illustrated using data from a cancer biomarker study.
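
    A minimal sketch of the joint selection-and-classification idea follows, using an L1-penalized (lasso-type) logistic regression; the synthetic "staining profiles" and all settings are illustrative, not the study's data or exact procedure.

```python
# A minimal sketch of joint feature selection and classification via an
# L1 penalty: the penalty zeroes out uninformative markers while fitting
# the classifier. Data are synthetic stand-ins for staining profiles.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)

selected = np.flatnonzero(clf.coef_[0])  # the lasso zeroes out the rest
print(f"{selected.size} of {X.shape[1]} markers kept:", selected)
print(f"training accuracy: {clf.score(X, y):.3f}")
```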

  18. A Conceptual Framework for Procurement Decision Making Model to Optimize Supplier Selection: The Case of Malaysian Construction Industry

    Science.gov (United States)

    Chuan, Ngam Min; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Yong, Lee Choon; Ghazali, Azrul; Ezanee Rusli, Mohd; Itam, Zarina Binti; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

    This paper examines the current state of the procurement system in Malaysia, specifically in the construction industry, with respect to supplier selection. It proposes a comprehensive study of the supplier selection metrics for infrastructure building, weights the importance of each assigned metric, and examines the relationships between the metrics among initiators, decision makers, buyers, and users. With a metrics hierarchy of criteria importance, a supplier selection process can be defined, repeated, and audited with fewer complications or difficulties. This will help the field of procurement improve, as this research develops and redefines the policies and procedures that have been set in supplier selection. Developing this systematic process will enable optimization of supplier selection and thus increase the value for every stakeholder, as the process of selection is greatly simplified. A newly redefined policy and procedure not only increases the company's effectiveness and profit, but also makes it possible for the company to reach greater heights in the advancement of procurement in Malaysia.

  19. Multilevel selection in a resource-based model

    Science.gov (United States)

    Ferreira, Fernando Fagundes; Campos, Paulo R. A.

    2013-07-01

    In the present work we investigate the emergence of cooperation in a multilevel selection model that assumes limiting resources. Following the work by R. J. Requejo and J. Camacho [Phys. Rev. Lett. 108, 038701 (2012)], the interaction among individuals is initially ruled by a prisoner's dilemma (PD) game. The payoff matrix may change, influenced by the resource availability, and hence may also evolve to a non-PD game. Furthermore, one assumes that the population is divided into groups, whose local dynamics is driven by the payoff matrix, whereas an intergroup competition results from the nonuniformity of the growth rate of groups. We study the probability that a single cooperator can invade and establish in a population initially dominated by defectors. Cooperation is strongly favored when group sizes are small. We observe the existence of a critical group size beyond which cooperation becomes counterselected. Although the critical size depends on the parameters of the model, it is seen that a saturation value for the critical group size is achieved. The results conform to the thought that the evolutionary history of life repeatedly involved transitions from smaller selective units to larger selective units.

  20. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  1. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    Science.gov (United States)

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  2. Optimal Sensor Selection for Health Monitoring Systems

    Science.gov (United States)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
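
    The optimization can be illustrated with a greedy forward-selection sketch under a generic merit metric; the metric below (variance of a health signal explained by the chosen subset) is a simple stand-in for the S4 merit metric, and all data are synthetic.

```python
# A minimal sketch of greedy forward sensor selection under a generic merit
# metric; the metric and data here are illustrative stand-ins, not S4.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 12, 500
readings = rng.normal(size=(n_samples, n_sensors))
health = readings[:, [1, 4, 7]] @ np.array([1.0, -0.5, 0.8])  # hidden truth

def merit(subset):
    """R^2 of a least-squares fit of the health signal on the subset."""
    A = readings[:, sorted(subset)]
    coef, *_ = np.linalg.lstsq(A, health, rcond=None)
    residual = health - A @ coef
    return 1.0 - residual.var() / health.var()

chosen = set()
budget = 4  # maximum sensor count
while len(chosen) < budget:
    # Add the sensor that most improves the merit of the current suite.
    best = max(set(range(n_sensors)) - chosen, key=lambda s: merit(chosen | {s}))
    chosen.add(best)
    print(f"added sensor {best}, merit now {merit(chosen):.3f}")
```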

  3. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
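
    The model-selection step can be sketched with the least-squares form of AIC; the residual sums of squares and parameter counts below are illustrative numbers, not W-phase results.

```python
# A minimal sketch of AIC-based choice between a single- and double-source
# fit, using the least-squares form AIC = n*ln(RSS/n) + 2k.
# All numbers are illustrative placeholders, not W-phase inversion output.
import numpy as np

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit."""
    return n * np.log(rss / n) + 2 * k

n = 400                            # number of waveform samples fit
rss_single, k_single = 52.0, 6     # e.g., one moment-tensor solution
rss_double, k_double = 38.0, 12    # two point sources double the parameters

aic_1 = aic(rss_single, n, k_single)
aic_2 = aic(rss_double, n, k_double)
print(f"AIC single: {aic_1:.1f}, AIC double: {aic_2:.1f}")
print("preferred model:", "double" if aic_2 < aic_1 else "single")
```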

  4. Model Building – A Circular Approach to Evaluate Multidimensional Patterns and Operationalized Procedures

    Directory of Open Access Journals (Sweden)

    Franz HAAS

    2017-12-01

    Full Text Available Managers operate in highly different fields. Decision-making can be based on models that partly reflect these differences. The challenge is to connect the respective models without too great a disruption. A threefold procedural approach is proposed, chaining a modeling scheme for a complex field to an operationalized model and then to statistical multivariate methods. Multivariate pattern-detecting methods offer the chance to partly evaluate patterns within the complex field. This step completes the research cycle, and improved models can be used in a further cycle.

  5. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has varying activation (or signal support) largely resulting from initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to or combined with conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
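
    A minimal sketch of the core idea, lexical selection as a race between noisy evidence accumulators, is shown below; drift rates, threshold, and step size are illustrative choices rather than fitted values.

```python
# A minimal sketch of lexical selection as a race between noisy evidence
# accumulators: the first candidate word to reach threshold is selected.
# Drifts, threshold, and step size are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
drifts = np.array([0.9, 0.6, 0.4])  # activation/support per candidate word
threshold, dt, noise = 1.0, 0.01, 0.35

evidence = np.zeros_like(drifts)
t = 0.0
while evidence.max() < threshold:
    evidence += drifts * dt + noise * np.sqrt(dt) * rng.normal(size=drifts.size)
    evidence = np.maximum(evidence, 0.0)  # activations stay non-negative
    t += dt

print(f"selected candidate {evidence.argmax()} after {t * 1000:.0f} ms")
```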

  6. Nonmathematical models for evolution of altruism, and for group selection (peck order-territoriality-ant colony-dual-determinant model-tri-determinant model).

    Science.gov (United States)

    Darlington, P J

    1972-02-01

    Mathematical biologists have failed to produce a satisfactory general model for evolution of altruism, i.e., of behaviors by which "altruists" benefit other individuals but not themselves; kin selection does not seem to be a sufficient explanation of nonreciprocal altruism. Nonmathematical (but mathematically acceptable) models are now proposed for evolution of negative altruism in dual-determinant and of positive altruism in tri-determinant systems. Peck orders, territorial systems, and an ant society are analyzed as examples. In all models, evolution is primarily by individual selection, probably supplemented by group selection. Group selection is differential extinction of populations. It can act only on populations preformed by selection at the individual level, but can either cancel individual selective trends (effecting evolutionary homeostasis) or supplement them; its supplementary effect is probably increasingly important in the evolution of increasingly organized populations.

  7. A network society communicative model for optimizing the Refugee Status Determination (RSD procedures

    Directory of Open Access Journals (Sweden)

    Andrea Pacheco Pacífico

    2013-01-01

    Full Text Available This article recommends a new way to improve Refugee Status Determination (RSD) procedures by proposing a network-society communicative model based on active involvement and dialogue among all implementing partners. This model, built on proposals from Castells, Habermas, Apel, Chimni, and Betts, would be mediated by the United Nations High Commissioner for Refugees (UNHCR), whose role would be modeled after the practice of the International Committee of the Red Cross (ICRC).

  8. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed, with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
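
    A minimal sketch of AAKR with a toy min-max style prototype selection follows; the Gaussian-kernel bandwidth and the selection heuristic are illustrative simplifications of the methods compared in the paper.

```python
# A minimal sketch of auto-associative kernel regression (AAKR): a query is
# reconstructed as a kernel-weighted average of prototype training vectors.
# The bandwidth and toy min-max prototype selection are illustrative.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 4))  # historical plant observations

# Min-max style selection: keep vectors containing any per-channel extreme,
# then pad with a random sample of the remainder.
extremes = np.unique(np.r_[train.argmin(axis=0), train.argmax(axis=0)])
rest = rng.choice(np.setdiff1d(np.arange(len(train)), extremes), 96,
                  replace=False)
prototypes = train[np.r_[extremes, rest]]

def aakr(query, h=0.8):
    dist2 = ((prototypes - query) ** 2).sum(axis=1)
    w = np.exp(-dist2 / (2 * h ** 2))        # Gaussian kernel weights
    return w @ prototypes / w.sum()          # auto-associative estimate

query = rng.normal(size=4)
print("query:         ", np.round(query, 2))
print("reconstruction:", np.round(aakr(query), 2))
```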

  9. Unexpected effects of computer presented procedures

    International Nuclear Information System (INIS)

    Blackman, H.S.; Nelson, W.R.

    1988-01-01

    Results from experiments conducted at the Idaho National Engineering Laboratory are presented regarding the computer presentation of procedural information. The results come from the experimental evaluation of an expert system which presented procedural instructions to be performed by a nuclear power plant operator. Lessons learned and implications from the study are discussed, as well as design issues that should be considered to avoid some of the pitfalls in computer-presented or computer-selected procedures

  10. Fixation probability in a two-locus intersexual selection model.

    Science.gov (United States)

    Durand, Guillermo; Lessard, Sabin

    2016-06-01

    We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. A concurrent optimization model for supplier selection with fuzzy quality loss

    International Nuclear Information System (INIS)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-01-01

    The purpose of this research is to develop a concurrent supplier selection model to minimize the purchasing cost and fuzzy quality loss considering process capability and assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss in the model to concurrently solve the decision making in the detailed design stage and the manufacturing stage. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerance of the components. The model balances the purchasing cost and fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from the suppliers. Fuzzy quality loss is integrated in the supplier selection model to allow for the vagueness in final assembly by grouping the assemblies into several grades according to the resulting assembly tolerance.

  12. A concurrent optimization model for supplier selection with fuzzy quality loss

    Energy Technology Data Exchange (ETDEWEB)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-07-01

    The purpose of this research is to develop a concurrent supplier selection model to minimize the purchasing cost and fuzzy quality loss considering process capability and assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss in the model to concurrently solve the decision making in the detailed design stage and the manufacturing stage. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerance of the components. The model balances the purchasing cost and fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from the suppliers. Fuzzy quality loss is integrated in the supplier selection model to allow for the vagueness in final assembly by grouping the assemblies into several grades according to the resulting assembly tolerance.

  13. ILK statement on the recommendations by the working group on procedures for the selection of repository sites; ILK-Stellungnahme zu den Empfehlungen des Arbeitskreises Auswahlverfahren Endlagerstandorte Internationale (AkEnd)

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2003-11-01

    The Working Group on Procedures for the Selection of Repository Sites (AkEnd) had been appointed by the German Federal Ministry for the Environment (BMU) to develop procedures and criteria for the search for, and selection of, a repository site for all kinds of radioactive waste in deep geologic formations in Germany. ILK in principle welcomes the attempt on the part of AkEnd to develop a systematic procedure. On the other hand, ILK considers the two constraints imposed by BMU inappropriate: AkEnd was not to take into account the two existing sites of Konrad and Gorleben and, instead, was to work from a so-called white map of Germany. ILK recommends to perform a comprehensive safety analysis of Gorleben and define a selection procedure including the facts about Gorleben and, in addition, to commission the Konrad repository as soon as possible. The one-repository concept established as a precondition by BMU greatly restricts the selection procedure. There are no technical or scientific reasons for such a concept. ILK recommends to plan for separate repositories, which would also correspond to international practice. The geoscientific criteria proposed by AkEnd should be examined and revised. With respect to the site selection procedure proposed, ILK feels that the procedure is unable to define a targeted approach. Great importance must be attributed to public participation. The final site selection must be made under the responsibility of the government or the parliament. (orig.)

  14. Probabilistic wind power forecasting with online model selection and warped gaussian process

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Feng; Gao, Lin

    2014-01-01

    Highlights: • A new online ensemble model for the probabilistic wind power forecasting. • Quantifying the non-Gaussian uncertainties in wind power. • Online model selection that tracks the time-varying characteristic of wind generation. • Dynamically altering the input features. • Recursive update of base models. - Abstract: Based on the online model selection and the warped Gaussian process (WGP), this paper presents an ensemble model for the probabilistic wind power forecasting. This model provides the non-Gaussian predictive distributions, which quantify the non-Gaussian uncertainties associated with wind power. In order to follow the time-varying characteristics of wind generation, multiple time dependent base forecasting models and an online model selection strategy are established, thus adaptively selecting the most probable base model for each prediction. WGP is employed as the base model, which handles the non-Gaussian uncertainties in wind power series. Furthermore, a regime switch strategy is designed to modify the input feature set dynamically, thereby enhancing the adaptiveness of the model. In an online learning framework, the base models should also be time adaptive. To achieve this, a recursive algorithm is introduced, thus permitting the online updating of WGP base models. The proposed model has been tested on the actual data collected from both single and aggregated wind farms
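
    The online selection idea can be sketched with trivial stand-in forecasters: at each step, issue the forecast of the base model with the best recent track record. The base models below are not warped Gaussian processes, and the data are synthetic.

```python
# A minimal sketch of online model selection: several base forecasters
# compete, and the one with the lowest recent error is selected at each
# step. The base models are trivial stand-ins, not warped GPs.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
power = np.abs(np.sin(np.arange(300) / 20)) + 0.1 * rng.normal(size=300)

models = {
    "persistence": lambda hist: hist[-1],
    "mean-3": lambda hist: np.mean(hist[-3:]),
    "mean-10": lambda hist: np.mean(hist[-10:]),
}
errors = {name: deque(maxlen=24) for name in models}  # sliding error window

picks = []
for t in range(10, len(power) - 1):
    hist = power[:t]
    # Select the base model with the lowest recent mean absolute error.
    best = min(models, key=lambda m: np.mean(errors[m]) if errors[m] else 0.0)
    picks.append(best)
    for name, f in models.items():
        errors[name].append(abs(f(hist) - power[t]))

print("selection counts:", {m: picks.count(m) for m in models})
```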

  15. Augmented Self-Modeling as an Intervention for Selective Mutism

    Science.gov (United States)

    Kehle, Thomas J.; Bray, Melissa A.; Byer-Alcorace, Gabriel F.; Theodore, Lea A.; Kovac, Lisa M.

    2012-01-01

    Selective mutism is a rare disorder that is difficult to treat. It is often associated with oppositional defiant behavior, particularly in the home setting, social phobia, and, at times, autism spectrum disorder characteristics. The augmented self-modeling treatment has been relatively successful in promoting rapid diminishment of selective mutism…

  16. The photon identification loophole in EPRB experiments: computer models with single-wing selection

    Directory of Open Access Journals (Sweden)

    De Raedt Hans

    2017-11-01

    Full Text Available Recent Einstein-Podolsky-Rosen-Bohm experiments [M. Giustina et al., Phys. Rev. Lett. 115, 250401 (2015); L. K. Shalm et al., Phys. Rev. Lett. 115, 250402 (2015)] that claim to be loophole free are scrutinized. The combination of a digital computer and discrete-event simulation is used to construct a minimal but faithful model of the most perfected realization of these laboratory experiments. In contrast to prior simulations, all photon selections are strictly made, as they are in the actual experiments, at the local station and no other “post-selection” is involved. The simulation results demonstrate that a manifestly non-quantum model that identifies photons in the same local manner as in these experiments can produce correlations that are in excellent agreement with those of the quantum theoretical description of the corresponding thought experiment, in conflict with Bell’s theorem, which states that this is impossible. The failure of Bell’s theorem is possible because of our recognition of the photon identification loophole. Such identification measurement-procedures are necessarily included in all actual experiments but are not included in the theory of Bell and his followers.

  17. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

    Aims & Scope: In this research, eight variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets, comprising 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so for each data set nine CoMFA models were built. The results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying five variable selection approaches, namely FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS runs need only a few seconds. Also, applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS preserves CoMFA contour map information for both fields. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  18. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model specifying the correct set of m relevant exogenous variables, x_t, is embedded within the larger set of m+k candidate variables (x_t, w_t), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant…
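
    A minimal numerical illustration of the strategy is easy to set up: the theory variables x_t are always retained, and only the candidate w_t block is screened by significance. The Python sketch below uses assumed synthetic data and statsmodels OLS; it is an illustration of the retention idea, not the authors' implementation.

        import numpy as np
        import statsmodels.api as sm

        # Sketch: keep theory variables x unconditionally, select candidates w
        # by statistical significance. Data and coefficients are made up.
        rng = np.random.default_rng(0)
        T, m, k = 200, 2, 5
        x = rng.normal(size=(T, m))            # relevant theory variables
        w = rng.normal(size=(T, k))            # irrelevant candidate variables
        y = x @ np.array([1.0, -0.5]) + rng.normal(size=T)

        full = sm.OLS(y, sm.add_constant(np.hstack([x, w]))).fit()
        tvals = full.tvalues[1 + m:]           # t-statistics of the w-block only
        keep_w = np.where(np.abs(tvals) > 1.96)[0]

        final = sm.OLS(y, sm.add_constant(np.hstack([x, w[:, keep_w]]))).fit()
        print(final.params[1:1 + m])           # theory estimates are retained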

  19. Lumping procedure for a kinetic model of catalytic naphtha reforming

    Directory of Open Access Journals (Sweden)

    H. M. Arani

    2009-12-01

    Full Text Available A lumping procedure is developed for obtaining kinetic and thermodynamic parameters of catalytic naphtha reforming. All kinetic and deactivation parameters are estimated from industrial data and thermodynamic parameters are calculated from derived mathematical expressions. The proposed model contains 17 lumps that include the C6 to C8+ hydrocarbon range and 15 reaction pathways. Hougen-Watson Langmuir-Hinshelwood type reaction rate expressions are used for kinetic simulation of catalytic reactions. The kinetic parameters are benchmarked with several sets of plant data and estimated by the SQP optimization method. After calculation of deactivation and kinetic parameters, plant data are compared with model predictions and only minor deviations between experimental and calculated data are generally observed.
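
    The estimation step, fitting rate constants to plant data with SQP, can be sketched on a toy two-step lump network; the real model has 17 lumps, 15 pathways and Hougen-Watson Langmuir-Hinshelwood rate forms. All names and values below are illustrative assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        # Toy illustration of the estimation step: fit rate constants of a
        # small lumped network A -> B -> C to (synthetic) plant data with SQP.
        def rhs(t, c, k1, k2):
            a, b, c_ = c
            return [-k1 * a, k1 * a - k2 * b, k2 * b]

        t_obs = np.linspace(0.0, 5.0, 20)
        true = (0.8, 0.3)
        obs = solve_ivp(rhs, (0, 5), [1, 0, 0], t_eval=t_obs, args=true).y

        def sse(k):
            sim = solve_ivp(rhs, (0, 5), [1, 0, 0], t_eval=t_obs, args=tuple(k)).y
            return np.sum((sim - obs) ** 2)

        fit = minimize(sse, x0=[0.5, 0.5], method="SLSQP",
                       bounds=[(1e-6, 10.0)] * 2)
        print(fit.x)   # recovers approximately (0.8, 0.3)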

  20. 28 CFR 104.31 - Procedure for claims evaluation.

    Science.gov (United States)

    2010-07-01

    ... COMPENSATION FUND OF 2001 Claim Intake, Assistance, and Review Procedures § 104.31 Procedure for claims..., described herein as “Track A” and “Track B,” selected by the claimant on the Personal Injury Compensation Form or Death Compensation Form. (1) Procedure for Track A. The Claims Evaluator shall determine...

  1. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  2. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  3. Adverse Selection Models with Three States of Nature

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-02-01

    Full Text Available In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.

  4. Unexpected effects of computer presented procedures

    International Nuclear Information System (INIS)

    Blackman, H.S.; Nelson, W.R.

    1988-01-01

    Results from experiments conducted at the Idaho National Engineering Laboratory will be presented regarding the computer presentation of procedural information. The results come from the experimental evaluation of an expert system which presented procedural instructions to be performed by a nuclear power plant operator. Lessons learned and implications from the study will be discussed as well as design issues that should be considered to avoid some of the pitfalls in computer presented or selected procedures. 1 ref., 1 fig

  5. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for the multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
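
    The final selection step reduces to a small numerical exercise: each Pareto-optimal (sensitivity, specificity) pair receives a utility and the highest-utility solution is kept. In the sketch below, a plain weighted sum stands in for the evidential-reasoning aggregation of the pre-set rules, and the weights are assumed.

        import numpy as np

        # Sketch of the final selection step: score each Pareto-optimal
        # (sensitivity, specificity) pair with a utility and keep the best.
        # A simple weighted sum stands in for the evidential reasoning
        # aggregation of the pre-set selection rules used by SMOLER.
        pareto = np.array([[0.95, 0.60], [0.88, 0.75], [0.80, 0.85], [0.70, 0.92]])
        weights = np.array([0.5, 0.5])           # assumed rule weights

        utility = pareto @ weights
        best = pareto[np.argmax(utility)]
        print(best)                              # -> [0.80 0.85], the balanced pair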

  6. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for the multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  7. Fermentation process tracking through enhanced spectral calibration modeling.

    Science.gov (United States)

    Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah

    2007-06-15

    The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), in which windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. However, because the selected windows are not unique when the algorithm is executed repeatedly, multiple models are constructed and then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
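
    A rough sketch of the window-selection-plus-combination idea, using scikit-learn PLS on synthetic spectra: candidate wavelength windows are scored by cross-validation and the models built on the best windows are combined. A simple average is used here for brevity, whereas the paper uses stacking, which weights the component models; all data below are assumed.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        # Sketch: score contiguous wavelength windows by cross-validated PLS
        # and average the predictions of the models on the best windows.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 200))                # spectra: 60 samples, 200 wavelengths
        y = X[:, 40:60].mean(axis=1) + 0.1 * rng.normal(size=60)

        width, step = 20, 10
        windows = [(s, s + width) for s in range(0, 200 - width + 1, step)]
        scores = [cross_val_score(PLSRegression(n_components=2),
                                  X[:, a:b], y, cv=5).mean() for a, b in windows]

        top = np.argsort(scores)[-3:]                 # keep the 3 best windows
        models = [PLSRegression(n_components=2).fit(X[:, windows[i][0]:windows[i][1]], y)
                  for i in top]
        combined = np.mean([m.predict(X[:, windows[i][0]:windows[i][1]]).ravel()
                            for m, i in zip(models, top)], axis=0)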

  8. A new procedure for implementing a geological disposal

    International Nuclear Information System (INIS)

    Anon.

    2014-01-01

    The British government has launched a new procedure for selecting and implementing a geological disposal facility. This procedure is based on long-term cooperation with municipalities that wish to host the facility. In a preliminary, two-year-long step, a national geological survey will be performed in order to determine regions that are suitable to host a geological disposal facility. Then discussions between volunteering municipalities and the enterprise in charge of developing the project will begin. Municipalities will receive an investment of up to 1 million pounds a year in the first years of the selection procedure, and then 2.5 million pounds a year when discussions become more formal. British authorities consider that the procedure for selecting a site may last up to 20 years. A previous attempt to find a site failed in 2013 when two regions that had been interested in the project since 2008 were finally rebuffed by the regional council, which opposed the project. Scotland and Wales have their own strategies for the management of radioactive waste. (A.C.)

  9. Economic assessment model architecture for AGC/AVLIS selection

    International Nuclear Information System (INIS)

    Hoglund, R.L.

    1984-01-01

    The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint

  10. A Common Capacity Limitation for Response and Item Selection in Working Memory

    Science.gov (United States)

    Janczyk, Markus

    2017-01-01

    Successful completion of any cognitive task requires selecting a particular action and the object the action is applied to. Oberauer (2009) suggested a working memory (WM) model comprising a declarative and a procedural part with analogous structures. One important assumption of this model is that both parts work independently of each other, and…

  11. An evaluation of in vivo desensitization and video modeling to increase compliance with dental procedures in persons with mental retardation.

    Science.gov (United States)

    Conyers, Carole; Miltenberger, Raymond G; Peterson, Blake; Gubin, Amber; Jurgens, Mandy; Selders, Andrew; Dickinson, Jessica; Barenz, Rebecca

    2004-01-01

    Fear of dental procedures deters many individuals with mental retardation from accepting dental treatment. This study was conducted to assess the effectiveness of two procedures, in vivo desensitization and video modeling, for increasing compliance with dental procedures in participants with severe or profound mental retardation. Desensitization increased compliance for all 5 participants, whereas video modeling increased compliance for only 1 of 3 participants.

  12. SITE-94. Discrete-feature modelling of the Aespoe site: 4. Source data and detailed analysis procedures

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J E [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    Specific procedures and source data are described for the construction and application of discrete-feature hydrological models for the vicinity of Aespoe. Documentation is given for all major phases of the work, including: statistical analyses to develop and validate discrete-fracture network models; preliminary evaluation, construction, and calibration of the site-scale model based on the SITE-94 structural model of Aespoe; simulation of multiple realizations of the integrated model, and variations, to predict groundwater flow; and evaluation of near-field and far-field parameters for performance assessment calculations. Procedures are documented in terms of the computer batch files and executable scripts that were used to perform the main steps in these analyses, to provide for traceability of results that are used in the SITE-94 performance assessment calculations. 43 refs.

  13. SITE-94. Discrete-feature modelling of the Aespoe site: 4. Source data and detailed analysis procedures

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    Specific procedures and source data are described for the construction and application of discrete-feature hydrological models for the vicinity of Aespoe. Documentation is given for all major phases of the work, including: statistical analyses to develop and validate discrete-fracture network models; preliminary evaluation, construction, and calibration of the site-scale model based on the SITE-94 structural model of Aespoe; simulation of multiple realizations of the integrated model, and variations, to predict groundwater flow; and evaluation of near-field and far-field parameters for performance assessment calculations. Procedures are documented in terms of the computer batch files and executable scripts that were used to perform the main steps in these analyses, to provide for traceability of results that are used in the SITE-94 performance assessment calculations. 43 refs

  14. Whipple procedure: patient selection and special considerations

    Directory of Open Access Journals (Sweden)

    Tan-Tam C

    2016-07-01

    Full Text Available Clara Tan-Tam,1 Maja Segedi,2 Stephen W Chung2 1Department of Surgery, Bassett Healthcare, Columbia University, Cooperstown, New York, NY, USA; 2Department of Hepatobiliary and Pancreatic Surgery and Liver Transplant, Vancouver General Hospital, University of British Columbia, Vancouver, BC, Canada Abstract: At the inception of pancreatic surgery by Dr Whipple in the 1930s, the mortality and morbidity risk was more than 20%. With further understanding of disease processes and improvements in pancreas resection techniques, the mortality risk has decreased to less than 5%. Age and chronic illnesses are no longer a contraindication to surgical treatment. Life expectancy and quality of life at a later age have improved, making older patients more likely to receive pancreatic surgery, thereby also putting emphasis on operative patient selection to minimize complications. This review summarizes the benign and malignant illnesses that are treated with pancreas operations, and innovations and improvements in pancreatic surgery and perioperative care, and describes the careful selection process for patients who would benefit from an operation. These indications are not reserved for the Whipple operation only, but apply to pancreatectomies as well. Keywords: pancreaticoduodenectomy, mortality, morbidity, cancer, trauma, pancreatitis

  15. Ant colony optimization as a descriptor selection in QSPR modeling: Estimation of the λmax of anthraquinones-based dyes

    Directory of Open Access Journals (Sweden)

    Morteza Atabati

    2016-09-01

    Full Text Available Quantitative structure–property relationship (QSPR) studies based on ant colony optimization (ACO) were carried out for the prediction of λmax of 9,10-anthraquinone derivatives. ACO is a meta-heuristic algorithm derived from the observation of real ants and proposed here for feature selection. After optimization of the 3D geometry of the structures by semi-empirical quantum-chemical calculation at the AM1 level, different descriptors were calculated with the HyperChem and Dragon software packages (1514 descriptors). A major problem of QSPR is the high dimensionality of the descriptor space; therefore, descriptor selection is the most important step. In this paper, an ACO algorithm was used to select the best descriptors. The selected descriptors were then applied for model development using multiple linear regression. The average absolute relative deviation and correlation coefficient for the calibration set were obtained as 3.3% and 0.9591, respectively, while the average absolute relative deviation and correlation coefficient for the prediction set were obtained as 5.0% and 0.9526, respectively. The results showed that the applied procedure is suitable for the prediction of λmax of 9,10-anthraquinone derivatives.
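
    A simplified sketch of ACO-style descriptor selection: each "ant" samples a descriptor subset with probabilities given by pheromone levels, subsets are scored by cross-validated regression, and pheromone evaporates and is reinforced on the best subset. The data and parameters below are synthetic stand-ins for the 1514-descriptor problem, not the authors' configuration.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        # Sketch of ant-colony descriptor selection with MLR scoring.
        rng = np.random.default_rng(2)
        X = rng.normal(size=(80, 50))
        y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=80)

        tau = np.full(50, 0.5)                        # pheromone per descriptor
        for _ in range(30):                           # colony iterations
            ants = rng.random((20, 50)) < tau         # 20 ants pick subsets
            scores = [cross_val_score(LinearRegression(), X[:, a], y, cv=5).mean()
                      if a.sum() >= 2 else -np.inf for a in ants]
            best = ants[int(np.argmax(scores))]
            tau = 0.9 * tau + 0.1 * best              # evaporation + reinforcement
        print(np.argsort(tau)[-5:])                   # descriptors with most pheromone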

  16. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we have focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods in the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts of a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder places a burden on the calculations carried out by the genetic algorithm.

  17. The selection pressures induced non-smooth infectious disease model and bifurcation analysis

    International Nuclear Information System (INIS)

    Qin, Wenjie; Tang, Sanyi

    2014-01-01

    Highlights: • A non-smooth infectious disease model to describe selection pressure is developed. • The effect of selection pressure on infectious disease transmission is addressed. • The key factors related to the threshold value are determined. • The stabilities and bifurcations of the model are revealed in detail. • Strategies for the prevention of emerging infectious disease are proposed. - Abstract: Mathematical models can assist in the design of strategies to control emerging infectious disease. This paper deduces a non-smooth infectious disease model induced by selection pressures. Analysis of this model reveals rich dynamics, including local and global stability of equilibria and local sliding bifurcations. Model solutions ultimately stabilize at either one real equilibrium or the pseudo-equilibrium on the switching surface of the model, depending on the threshold value determined by some related parameters. Our main results show that reducing the threshold value to an appropriate level could contribute to the efficacy of prevention and treatment of emerging infectious disease, which indicates that selection pressures can be beneficial to preventing emerging infectious disease under medical resource limitation
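
    The non-smooth structure can be illustrated with a toy Filippov-type SIR simulation in which control terms switch on only when the infected fraction exceeds a threshold. All parameter values below are assumptions for illustration, not the paper's model.

        import numpy as np

        # Toy piecewise-smooth SIR: reduced transmission and extra treatment
        # are applied only while the infected level I exceeds a threshold Ic.
        beta, gamma, extra = 0.5, 0.1, 0.15
        Ic = 0.1
        S, I = 0.99, 0.01
        dt, T = 0.01, 200.0
        for _ in range(int(T / dt)):
            on = I > Ic                               # selection pressure active?
            b = beta * (0.6 if on else 1.0)           # reduced contact rate
            g = gamma + (extra if on else 0.0)        # enhanced treatment
            dS = -b * S * I
            dI = b * S * I - g * I
            S += dt * dS
            I += dt * dI
        print(round(I, 4))   # trajectories may settle near the switching surface I = Ic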

  18. A new approach to the LILW repository site selection

    International Nuclear Information System (INIS)

    Mele, I.; Zeleznik, N.

    1998-01-01

    After the failure of the site selection performed between 1990 and 1993, the Agency for Radwaste Management was urged to start a new site selection process for low and intermediate level waste (LILW). Since this is the most sensitive and delicate phase of the whole disposal project, extensive analyses of foreign and domestic experience in siting were performed. Three different models were studied and discussed at a workshop on the preparation of the siting procedure for the LILW repository. The participants invited to the workshop supported the combined approach to site selection, which is presented in this paper. (author)

  19. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    … of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset, the approach taken is to use theory on the value of travel time as guidance… the question for a broader class of models. It is shown that the original result may be somewhat generalised. Another question investigated is whether mode choice operates as a self-selection mechanism in the estimation of the value of travel time. The results show that self-selection can at least partly explain counterintuitive results in value of travel time estimation. However, the results also point at the difficulty of finding suitable instruments for the selection mechanism. Taste heterogeneity is another important aspect of discrete choice modelling. Mixed logit models are designed to capture…

  20. Infant speech-sound discrimination testing: effects of stimulus intensity and procedural model on measures of performance.

    Science.gov (United States)

    Nozza, R J

    1987-06-01

    Performance of infants in a speech-sound discrimination task (/ba/ vs /da/) was measured at three stimulus intensity levels (50, 60, and 70 dB SPL) using the operant head-turn procedure. The procedure was modified so that data could be treated as though from a single-interval (yes-no) procedure, as is commonly done, as well as if from a sustained attention (vigilance) task. Discrimination performance changed significantly with increase in intensity, suggesting caution in the interpretation of results from infant discrimination studies in which only single stimulus intensity levels within this range are used. The assumptions made about the underlying methodological model did not change the performance-intensity relationships. However, infants demonstrated response decrement, typical of vigilance tasks, which supports the notion that the head-turn procedure is represented best by the vigilance model. Analysis then was done according to a method designed for tasks with undefined observation intervals [C. S. Watson and T. L. Nichols, J. Acoust. Soc. Am. 59, 655-668 (1976)]. Results reveal that, while group data are reasonably well represented across levels of difficulty by the fixed-interval model, there is a variation in performance as a function of time following trial onset that could lead to underestimation of performance in some cases.

  1. Uncertain programming models for portfolio selection with uncertain returns

    Science.gov (United States)

    Zhang, Bo; Peng, Jin; Li, Shengguo

    2015-10-01

    In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment, in which security returns cannot be well reflected by historical data but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed to provide a general method for solving the new models in general cases. Finally, two numerical examples are provided to show the performance and applications of the models and algorithm.
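
    A crisp mean-variance analogue of the expected-variance style formulation can be sketched directly, with sample moments standing in for the uncertain-variable expected values and variances; the paper's hybrid intelligent algorithm is replaced here by an off-the-shelf solver, and all numbers are assumed.

        import numpy as np
        from scipy.optimize import minimize

        # Sketch: maximise expected return subject to a variance cap and a
        # budget constraint, as a crisp stand-in for the uncertain models.
        mu = np.array([0.08, 0.12, 0.10])
        cov = np.array([[0.04, 0.01, 0.00],
                        [0.01, 0.09, 0.02],
                        [0.00, 0.02, 0.06]])
        v_max = 0.05

        res = minimize(lambda w: -mu @ w, x0=np.ones(3) / 3, method="SLSQP",
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                                    {"type": "ineq", "fun": lambda w: v_max - w @ cov @ w}],
                       bounds=[(0.0, 1.0)] * 3)
        print(res.x.round(3))   # optimal weights under the variance cap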

  2. A comparative assessment of alternative waste management procedures for selected reprocessing wastes

    International Nuclear Information System (INIS)

    Hickford, G.E.; Plews, M.J.

    1983-07-01

    This report, which has been prepared by Associated Nuclear Services for the Department of the Environment, presents the results of a study and comparative assessment of management procedures for low and intermediate level solid waste streams arising from current and future fuel reprocessing operations on the Sellafield site. The characteristics and origins of the wastes under study are discussed and a reference waste inventory is presented, based on published information. Waste management strategy in the UK and its implications for waste conditioning, packaging and disposal are discussed. Wastes currently arising which are not suitable for Drigg burial or sea dumping are stored in an untreated form. Work is in hand to provide additional and improved disposal facilities which will accommodate all the waste streams under study. For each waste stream viable procedures are identified for further assessment. The procedures comprise a series of on-site operations: recovery from storage, pre-treatment, treatment, encapsulation, and packaging, prior to storage or disposal of the conditioned waste form. Assessments and comparisons of each procedure for each waste are presented. These address various process, operational, economic, radiological and general safety factors. The results are presented in a series of tables with supporting text. For the majority of wastes direct encapsulation with minimal treatment appears to be a viable procedure. Occupational exposure and general safety are not identified as significant factors governing the choice of procedures. The conditioned wastes meet the general requirements for safe handling during storage and transportation. The less active wastes suitable for disposal by currently available routes meet the appropriate disposal criteria. It is not possible to consider in detail the suitability for disposal of the more active wastes for which disposal facilities are not yet available. (Author)

  3. 7 CFR 983.152 - Failed lots/rework procedure.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Failed lots/rework procedure. 983.152 Section 983.152..., ARIZONA, AND NEW MEXICO Rules and Regulations § 983.152 Failed lots/rework procedure. (a) Inshell rework procedure for aflatoxin. If inshell rework is selected as a remedy to meet the aflatoxin regulations of this...

  4. Selection of antibiotics in detection procedure of Escherichia coli O157:H7 in vegetables

    Science.gov (United States)

    Hoang, Hoang A.; Nhung, Nguyen T. T.

    2017-09-01

    Detection of Escherichia coli O157:H7 in ready-to-eat fresh vegetables is important, since this bacterium is considered one of the most important pathogens in relation to public health. However, detecting initially low concentrations of E. coli O157:H7 in such samples can be a major challenge. In this study, the selection of antibiotics that suppress the growth of background bacteria, and thus enable detection of E. coli O157:H7 in ready-to-eat fresh vegetables, was investigated. First, different combinations of two antibiotics, i.e., novobiocin (N) and vancomycin (V), in BHI broth were tested. The three antibiotic combinations were preliminarily examined for their effect on the growth of E. coli O157:H7 and Bacillus spp. in broth, based on OD600nm measurement. The combination of both antibiotics was selected to examine its ability to support detection of E. coli O157:H7 in vegetables. The approach was successful: with both antibiotics, E. coli O157:H7 was detected at a concentration as low as 2 CFU per gram of lettuce. The use of these antibiotics in the detection procedure is simple and cheap, and could be applied to other types of ready-to-eat fresh vegetables popular in Vietnam.

  5. Optimal Procedure for siting of Nuclear Power Plant

    International Nuclear Information System (INIS)

    Aziuddin, Khairiah Binti; Park, Seo Yeon; Roh, Myung Sub

    2013-01-01

    This study discusses a simulation approach for sensitivity analysis of the weights of multi-criteria decision models. The simulation procedures can also be used to aid the actual decision process, particularly when the task is to select a subset of superior alternatives. The aim of this study is to identify the criteria or parameters that are sensitive to the weighting factors and can affect the results of the decision-making process used to determine the optimal site for a nuclear power plant (NPP). To perform this study, we adhere to IAEA NS-R-3 and DS 433. The siting process for a nuclear installation consists of site survey and site selection stages. The siting process generally consists of an investigation of a large region to select one or more candidate sites by surveying the sites. After comparison within the ROI, the two candidate sites, Wolsong and Kori, are compared for the final determination. Some assumptions are made owing to limitations and constraints encountered throughout this study. Sensitivity analysis of multi-criteria decision models is performed in this study to determine the optimal site in the site selection stage. Logical Decisions software is employed as a tool to perform this analysis. Logical Decisions software helps to formulate the preferences and then rank the alternatives. It provides clarification of the rankings and hence aids the decision makers in evaluating the alternatives, and finally in drawing a conclusion on the selection of the optimal site

  6. Optimal Procedure for siting of Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Aziuddin, Khairiah Binti; Park, Seo Yeon; Roh, Myung Sub [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    This study discusses a simulation approach for sensitivity analysis of the weights of multi-criteria decision models. The simulation procedures can also be used to aid the actual decision process, particularly when the task is to select a subset of superior alternatives. The aim of this study is to identify the criteria or parameters that are sensitive to the weighting factors and can affect the results of the decision-making process used to determine the optimal site for a nuclear power plant (NPP). To perform this study, we adhere to IAEA NS-R-3 and DS 433. The siting process for a nuclear installation consists of site survey and site selection stages. The siting process generally consists of an investigation of a large region to select one or more candidate sites by surveying the sites. After comparison within the ROI, the two candidate sites, Wolsong and Kori, are compared for the final determination. Some assumptions are made owing to limitations and constraints encountered throughout this study. Sensitivity analysis of multi-criteria decision models is performed in this study to determine the optimal site in the site selection stage. Logical Decisions software is employed as a tool to perform this analysis. Logical Decisions software helps to formulate the preferences and then rank the alternatives. It provides clarification of the rankings and hence aids the decision makers in evaluating the alternatives, and finally in drawing a conclusion on the selection of the optimal site.

  7. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    Science.gov (United States)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  8. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  9. Non-additive Effects in Genomic Selection

    Directory of Open Access Journals (Sweden)

    Luis Varona

    2018-03-01

    Full Text Available In the last decade, genomic selection has become a standard in the genetic evaluation of livestock populations. However, most procedures for the implementation of genomic selection only consider the additive effects associated with SNP (Single Nucleotide Polymorphism) markers used to calculate the prediction of the breeding values of candidates for selection. Nevertheless, the availability of estimates of non-additive effects is of interest because: (i) they contribute to an increase in the accuracy of the prediction of breeding values and the genetic response; (ii) they allow the definition of mate allocation procedures between candidates for selection; and (iii) they can be used to enhance non-additive genetic variation through the definition of appropriate crossbreeding or purebred breeding schemes. This study presents a review of methods for the incorporation of non-additive genetic effects into genomic selection procedures and their potential applications in the prediction of future performance, mate allocation, crossbreeding, and purebred selection. The work concludes with a brief outline of some ideas for future lines of research that may help the standard inclusion of non-additive effects in genomic selection.

  10. Procedures for selecting and buying district heating equipment. Sofia district heating. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    The aim of this Final Report, prepared for the project `Procedures for Selecting and Buying District Heating Equipment - Sofia District Heating Company`, is to establish an overview of the activities accomplished, the outputs delivered and the general experience gained as a result of the project. The main objective of the project is to enable Sofia District Heating Company to prepare specifications and tender documents, identify possible suppliers, evaluate offers, etc. in connection with the purchase of district heating equipment. This objective has been reached by using rehabilitation of sub-stations as an example requested by Sofia DH. The project was originally planned to be finalized by the end of 1995, but due to the extensions of the scope of work, the project was prolonged until the end of 1997. The following main activities were accomplished: Preparation of a detailed work plan; Collection of background information; Discussion and advice about technical specifications and tender documents for sub-station rehabilitation; Input to terms of reference for a master plan study; Input to technical specification for heat meters; Collection of ideas for topics and examples related to dissemination of information to consumers about matters related to district heating consumption. (EG)

  11. Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model.

    Science.gov (United States)

    Wichary, Szymon; Smolen, Tomasz

    2016-01-01

    In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals.
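
    The two strategies the model unifies can be contrasted in a few lines: Weighted Additive integrates all cues by their weights, while Take The Best decides on the single most valid discriminating cue. The cue validities and patterns below are made up for illustration.

        import numpy as np

        # Sketch of the two strategies BUMSS unifies: WADD sums all weighted
        # cues; TTB uses only the first (most valid) cue that discriminates.
        weights = np.array([0.9, 0.7, 0.6, 0.55])    # cue validities, descending
        a = np.array([1, 0, 1, 0])                   # cue pattern, option A
        b = np.array([0, 1, 1, 1])                   # cue pattern, option B

        wadd = "A" if weights @ a > weights @ b else "B"
        diff = np.nonzero(a != b)[0]
        ttb = "A" if a[diff[0]] else "B"             # first discriminating cue wins
        print(wadd, ttb)                             # -> B A: the strategies disagree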

  12. Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model

    Science.gov (United States)

    Wichary, Szymon; Smolen, Tomasz

    2016-01-01

    In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals. PMID:27877103

  13. Neural underpinnings of decision strategy selection: a review and a theoretical model

    Directory of Open Access Journals (Sweden)

    Szymon Wichary

    2016-11-01

    Full Text Available In multi-attribute choice, decision makers use various decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a unifying neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models explaining this process. We also present the neurocognitive Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational, normative Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neurophysiological indices.

  14. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  15. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Science.gov (United States)

    Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi

    2016-01-01

    Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of genomic
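
    The island mechanism itself is simple to sketch: sub-populations undergo genomic truncation selection independently and exchange a few individuals per cycle. The toy simulation below uses random marker effects as stand-in GEBVs and a ring migration topology; it is an illustration of the concept, not the authors' simulator.

        import numpy as np

        # Toy island-model genomic selection: independent selection per island
        # plus ring migration, which preserves genetic variation across islands.
        rng = np.random.default_rng(4)
        n_isl, pop, loci = 4, 30, 100
        effects = rng.normal(size=loci)                   # assumed marker effects
        islands = [rng.integers(0, 2, size=(pop, loci)) for _ in range(n_isl)]

        def gebv(g):                                      # genomic estimated breeding values
            return g @ effects

        for cycle in range(10):
            for i, g in enumerate(islands):
                keep = g[np.argsort(gebv(g))[-10:]]       # truncation selection
                pairs = rng.integers(0, 10, size=(pop, 2))
                mask = rng.integers(0, 2, size=(pop, loci), dtype=bool)
                islands[i] = np.where(mask, keep[pairs[:, 0]], keep[pairs[:, 1]])
            mig = [isl[rng.integers(0, pop)] for isl in islands]   # one migrant each
            for i in range(n_isl):
                islands[i][0] = mig[(i + 1) % n_isl]               # ring migration
        print(max(gebv(isl).max() for isl in islands))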

  16. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Directory of Open Access Journals (Sweden)

    Shiori Yabe

    Full Text Available Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the

  17. Rank-based model selection for multiple ions quantum tomography

    International Nuclear Information System (INIS)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), to models consisting of states of fixed rank and datasets such as are currently produced in multiple ions experiments. We test the performance of AIC and BIC on randomly chosen low rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one ion states. We then apply the methods to real data from a four ions experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)
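
    The AIC/BIC trade-off the authors exploit can be shown generically: penalize the maximized Gaussian log-likelihood by model complexity and keep the minimizer. The example below selects a polynomial order rather than a density-matrix rank, purely for illustration; all data are synthetic.

        import numpy as np

        # Generic AIC/BIC model-order selection on a polynomial regression.
        rng = np.random.default_rng(3)
        x = np.linspace(-1, 1, 100)
        y = 1.0 + 0.5 * x - 2.0 * x**2 + 0.2 * rng.normal(size=x.size)

        n = x.size
        for k in range(1, 7):                       # candidate polynomial order
            resid = y - np.polyval(np.polyfit(x, y, k), x)
            sigma2 = resid @ resid / n
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
            p = (k + 1) + 1                         # coefficients plus noise variance
            aic = 2 * p - 2 * loglik
            bic = p * np.log(n) - 2 * loglik
            print(k, round(aic, 1), round(bic, 1))  # k = 2 should minimise both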

  18. Reliability of application of inspection procedures

    Energy Technology Data Exchange (ETDEWEB)

    Murgatroyd, R A

    1988-12-31

    This document deals with the reliability of the application of inspection procedures. A method of ensuring that fracture-mechanics-based inspection of defects is reliable is described. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears essential that inspection procedures should be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors should be optimised. (TEC). 3 refs.

  19. Reliability of application of inspection procedures

    International Nuclear Information System (INIS)

    Murgatroyd, R.A.

    1988-01-01

    This document deals with the reliability of application of inspection procedures. A method is described to ensure that fracture-mechanics-based defect inspection is reliable. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the likelihood of error. It appears essential that inspection procedures be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors be optimised. (TEC)

  20. Peer-assisted learning model enhances clinical clerk's procedural skills.

    Science.gov (United States)

    Huang, Chia-Chang; Hsu, Hui-Chi; Yang, Ling-Yu; Chen, Chen-Huan; Yang, Ying-Ying; Chang, Ching-Chih; Chuang, Chiao-Lin; Lee, Wei-Shin; Lee, Fa-Yauh; Hwang, Shinn-Jang

    2018-05-17

    Failure to transfer procedural skills learned in a laboratory to the bedside is commonly due to a lack of peer support/stimulation. A digital platform (Facebook) allows new clinical clerks to share experiences and tips that help augment their procedural skills in a peer-assisted learning/teaching method. This study aims to investigate the effectiveness of the innovation of using the digital platform to support the transfer of laboratory-trained procedural skills in the clinical units. Volunteer clinical clerks (n = 44) were enrolled into the peer-assisted learning (PAL) group, which was characterized by the peer-assisted learning of procedural skills during their final 3-month clinical clerkship block. Other clerks (n = 51) did not join the procedural skills-specific Facebook group and served as the self-directed learning regular group. The participants in both the PAL and regular groups completed pre- and post-intervention self-assessments for general self-assessed efficiency ratings (GSER) and skills specific self-assessed efficiency ratings (SSSER) for performing vein puncture, intravenous (IV) catheter and nasogastric (NG) tube insertion. Finally, all clerks received the post-intervention 3-station Objective Structured Clinical Skills Examination (OSCE) to test their proficiency for the abovementioned three procedural skills. Higher cumulative numbers of vein punctures, IV catheter insertions and NG tube insertions at the bedside were carried out by the PAL group than the regular group. A greater improvement in GSERs and SSSERs for medical procedures was found in the PAL group than in the regular group. The PAL group obtained higher procedural skills scores in the post-intervention OSCEs than the regular group. Our study suggested that the implementation of a procedural skill-specific digital platform effectively helps clerks to transfer laboratory-trained procedural skills into the clinical units. In comparison with the regular self-directed learning

  1. Factors influencing creep model equation selection

    International Nuclear Information System (INIS)

    Holdsworth, S.R.; Askins, M.; Baker, A.; Gariboldi, E.; Holmstroem, S.; Klenk, A.; Ringel, M.; Merckling, G.; Sandstrom, R.; Schwienheer, M.; Spigarelli, S.

    2008-01-01

    During the course of the EU-funded Advanced-Creep Thematic Network, ECCC-WG1 reviewed the applicability and effectiveness of a range of model equations to represent the accumulation of creep strain in various engineering alloys. In addition to considering the experience of network members, the ability of several models to describe the deformation characteristics of large single and multi-cast collations of ε(t,T,σ) creep curves has been evaluated in an intensive assessment inter-comparison activity involving three steels, 2¼CrMo (P22), 9CrMoVNb (Steel-91) and 18Cr13NiMo (Type-316). The choice of the most appropriate creep model equation for a given application depends not only on the high-temperature deformation characteristics of the material under consideration, but also on the characteristics of the dataset, the number of casts for which creep curves are available and on the strain regime for which an analytical representation is required. The paper focuses on the factors which can influence creep model selection and model-fitting approach for multi-source, multi-cast datasets

  2. Uncertainty associated with selected environmental transport models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-11-01

    A description is given of the capabilities of several models to predict accurately either pollutant concentrations in environmental media or radiological dose to human organs. The models are discussed in three sections: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations. This procedure is infeasible for food chain models and, therefore, the uncertainty embodied in the models' input parameters, rather than the model output, is estimated. Aquatic transport models are divided into one-dimensional, longitudinal-vertical, and longitudinal-horizontal models. Several conclusions were made about the ability of the Gaussian plume atmospheric dispersion model to predict accurately downwind air concentrations from releases under several sets of conditions. It is concluded that no validation study has been conducted to test the predictions of either aquatic or terrestrial food chain models. Using the aquatic pathway from water to fish to an adult for ¹³⁷Cs as an example, a 95% one-tailed confidence limit interval for the predicted exposure is calculated by examining the distributions of the input parameters. Such an interval is found to be 16 times the value of the median exposure. A similar one-tailed limit for the air-grass-cow-milk-thyroid pathway for ¹³¹I and infants was 5.6 times the median dose. Of the three model types discussed in this report, the aquatic transport models appear to do the best job of predicting observed concentrations. However, this conclusion is based on many fewer aquatic validation data than were available for atmospheric model validation.

  3. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling. Hardware can be repaired by spare modules, which is not the case for software, and preventive maintenance is very important

  4. Using Video Modeling with Voiceover Instruction Plus Feedback to Train Staff to Implement Direct Teaching Procedures.

    Science.gov (United States)

    Giannakakos, Antonia R; Vladescu, Jason C; Kisamore, April N; Reeve, Sharon A

    2016-06-01

    Direct teaching procedures are often an important part of early intensive behavioral intervention for consumers with autism spectrum disorder. In the present study, a video model with voiceover (VMVO) instruction plus feedback was evaluated to train three staff trainees to implement a most-to-least direct (MTL) teaching procedure. Probes for generalization were conducted with untrained direct teaching procedures (i.e., least-to-most, prompt delay) and with an actual consumer. The results indicated that VMVO plus feedback was effective in training the staff trainees to implement the MTL procedure. Although additional feedback was required for the staff trainees to show mastery of the untrained direct teaching procedures (i.e., least-to-most and prompt delay) and with an actual consumer, moderate to high levels of generalization were observed.

  5. Actor-Network Procedures

    NARCIS (Netherlands)

    Pavlovic, Dusko; Meadows, Catherine; Ramanujam, R.; Ramaswamy, Srini

    2012-01-01

    In this paper we propose actor-networks as a formal model of computation in heterogenous networks of computers, humans and their devices, where these new procedures run; and we introduce Procedure Derivation Logic (PDL) as a framework for reasoning about security in actor-networks, as an extension

  6. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-09-10

    Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the M&O CRWMS has the following responsibilities: To provide overall management and integration of modeling activities. To provide a framework for focusing modeling and model development. To identify areas that require increased or decreased emphasis. To ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process. It consists of a thorough review of existing models, testing of models which best fit the established requirements, and making recommendations for future development that should be conducted. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically in the future.

  7. Performance audit procedures for opacity monitors

    International Nuclear Information System (INIS)

    Plaisance, S.J.; Peeler, J.W.

    1987-04-01

    This manual contains monitor-specific performance audit procedures and data forms for use in conducting audits of installed opacity continuous emission monitoring systems (CEMS). General auditing procedures and acceptance limits for various audit criteria are discussed. Practical considerations and common problems encountered in conducting audits are delineated, and recommendations are included to optimize the successful completion of performance audits. Performance audit procedures and field-data forms were developed for six common opacity CEMS: (1) Lear Siegler, Inc. Model RM-41; (2) Lear Siegler, Inc. Model RM-4; (3) Dynatron Model 1100; (4) Thermo Electron, Inc. Model 400; (5) Thermo Electron, Inc. Model 1000A; and (6) Enviroplan Model D-R280 AV. Generic audit procedures are included for use in evaluating opacity CEMS with multiple transmissometers and combiner devices. In addition, several approaches for evaluating the zero-alignment or clear-path zero response are described. The zero-alignment procedures are included because this factor is fundamental to the accuracy of opacity monitoring data, even though zero-alignment checks cannot usually be conducted during a performance audit

  8. Study on the selection of steel or prestressed concrete cable stayed bridge by using disaggregate behavioral model; Hishukei rojitto model wo mochiita koshachokyo to PC shachokyo no kyoshiki sentaku ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Araki, Y.; Furuta, H.; Maeda, E.; Furukawa, K. [Yamaguchi University, Yamaguchi (Japan). Faculty of Engineering

    1994-09-15

    A discussion was given to clarify the factors that govern the choice between bridge types (steel cable-stayed bridge and prestressed concrete cable-stayed bridge), with a view to the future development of both types. Taking as an example a cable-stayed bridge with a span length of 250 m, for which either type could be selected, an evaluation was made using calculations based on disaggregate behavioral model theory applied to questionnaire responses from engineers. The analysis uses the following procedure: utility function values of the binary choice behavioral model (having two alternatives) are specified and the characteristic variables are selected; the data are prepared according to these specifications and used to estimate parameters by a maximum likelihood estimation method; and the estimates are assessed using the covariance matrix and verified with a t-test. The conclusions: what most strongly affects the selection is engineering capability and social factors, and the result reflects vocational consciousness; economy is dominated by the construction cost of the superstructure and the cost of major repairs; materials strongly affect reliability, as does the technological level the constructibility; comprehensive technical capability, freedom in design and the experience accumulated in Japan have great effects in terms of technological capability. 10 refs., 10 figs., 11 tabs.

  9. Simplified proceeding as a civil procedure model

    Directory of Open Access Journals (Sweden)

    Олексій Юрійович Зуб

    2016-01-01

    Full Text Available Currently, such directions for the development of modern civil procedural law as optimization, facilitation and expedition of proceedings, which promote the efficiency of civil procedure, are of particular importance. Their result is the emergence and functioning of a system of simplified proceedings designed to facilitate significantly the hearing of certain categories of cases, to promote their consideration within a reasonable time, and to reduce legal expenses as far as possible. The category “simplified proceedings” is underexamined in the native science of procedural law: many procedural scholars have limited themselves to studying summary consideration (in the context of optimization as a way to improve the civil procedural form), summary proceedings and the procedures functioning within them, and consideration of cases in absentia, as well as their modifications. Among the Ukrainian scientists who studied some aspects of simplified proceedings are: E. A. Belyanevych, V. I. Bobrik, S. V. Vasilyev, M. V. Verbitska, S. I. Zapara, A. A. Zgama, V. V. Komarov, D. D. Luspenuk, U. V. Navrotska, V. V. Protsenko, T. V. Stepanova, E. A. Talukin, S. Y. Fursa, M. Y. Shtefan and others. The problems of simplified proceedings were also studied by foreign scientists, such as: N. Andrews, Y. Y. Grubanon, N. A. Gromoshina, E. P. Kochanenko, J. Kohler, D. I. Krumskiy, E. M. Muradjan, I. V. Reshetnikova, U. Seidel, N. V. Sivak, M. Z. Shvarts, V. V. Yarkov and others. The paper objective is to develop a theoretically supported, practically reasonable notion of simplified proceedings in civil procedure and, basing on that notion, on the international experience of legislative regulation of simplified proceedings, and on native and foreign doctrine, to distinguish the essential features of simplified proceedings in civil procedure and to describe them. In the paper we generated the notion of simplified proceedings that

  10. Evaluating the influence of motor control on selective attention through a stochastic model: the paradigm of motor control dysfunction in cerebellar patient.

    Science.gov (United States)

    Veneri, Giacomo; Federico, Antonio; Rufa, Alessandra

    2014-01-01

    Attention allows us to selectively process the vast amount of information with which we are confronted, prioritizing some aspects of information and ignoring others by focusing on a certain location or aspect of the visual scene. Selective attention is guided by two cognitive mechanisms: saliency of the image (bottom up) and endogenous mechanisms (top down). These two mechanisms interact to direct attention and plan eye movements; then, the movement profile is sent to the motor system, which must constantly update the command needed to produce the desired eye movement. A new approach is described here to study how eye motor control can influence this selection mechanism in clinical behavior: two groups of patients (SCA2 and late-onset cerebellar ataxia, LOCA) with well-known motor control problems performed a cognitively demanding task; the results were compared to a stochastic model based on Monte Carlo simulations and to a group of healthy subjects. The analytical procedure evaluated several energy functions to characterize the process. The implemented model suggested that patients performed an optimal visual search, reducing intrinsic noise sources. Our findings suggest a close relationship between the "optimal motor system" and the "optimal stimulus encoders."

  11. An innovative 3-D numerical modelling procedure for simulating repository-scale excavations in rock - SAFETI

    Energy Technology Data Exchange (ETDEWEB)

    Young, R. P.; Collins, D.; Hazzard, J.; Heath, A. [Department of Earth Sciences, Liverpool University, 4 Brownlow street, UK-0 L69 3GP Liverpool (United Kingdom); Pettitt, W.; Baker, C. [Applied Seismology Consultants LTD, 10 Belmont, Shropshire, UK-S41 ITE Shrewsbury (United Kingdom); Billaux, D.; Cundall, P.; Potyondy, D.; Dedecker, F. [Itasca Consultants S.A., Centre Scientifique A. Moiroux, 64, chemin des Mouilles, F69130 Ecully (France); Svemar, C. [Svensk Karnbranslemantering AB, SKB, Aspo Hard Rock Laboratory, PL 300, S-57295 Figeholm (Sweden); Lebon, P. [ANDRA, Parc de la Croix Blanche, 7, rue Jean Monnet, F-92298 Chatenay-Malabry (France)

    2004-07-01

    This paper presents current results from work performed within the European Commission project SAFETI. The main objective of SAFETI is to develop and test an innovative 3D numerical modelling procedure that will enable the 3-D simulation of nuclear waste repositories in rock. The modelling code is called AC/DC (Adaptive Continuum/ Dis-Continuum) and is partially based on Itasca Consulting Group's Particle Flow Code (PFC). Results are presented from the laboratory validation study where algorithms and procedures have been developed and tested to allow accurate 'Models for Rock' to be produced. Preliminary results are also presented on the use of AC/DC with parallel processors and adaptive logic. During the final year of the project a detailed model of the Prototype Repository Experiment at SKB's Hard Rock Laboratory will be produced using up to 128 processors on the parallel super computing facility at Liverpool University. (authors)

  12. A Preference Model for Supplier Selection Based on Hesitant Fuzzy Sets

    Directory of Open Access Journals (Sweden)

    Zhexuan Zhou

    2018-03-01

    Full Text Available The supplier selection problem is a widespread concern in the modern commercial economy. Ranking suppliers involves many factors and poses significant difficulties for decision makers. Supplier selection is a multi-criteria and multi-objective problem, which leads to decision makers forming their own preferences. In addition, there are both quantifiable and non-quantifiable attributes related to their preferences. To solve this problem, this paper presents a preference model based on hesitant fuzzy sets (HFS to select suppliers. The cost and service quality of suppliers are the main considerations in the proposed model. HFS with interactive and multi-criteria decision making are used to evaluate the non-quantifiable attributes of service quality, which include competitive display, qualification ability, suitability and competitiveness of solutions, and relational fitness and dynamics. Finally, a numerical example of supplier selection for a high-end equipment manufacturer is provided to illustrate the applicability of the proposed model. The preferences of a decision maker are then analyzed by altering preference parameters.
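
    A common way to operationalize such hesitant fuzzy evaluations is the mean-membership score function; the sketch below (hypothetical criteria, weights and numbers, not the paper's exact preference model) ranks suppliers by a weighted sum of HFS scores.

```python
import numpy as np

# Hesitant fuzzy evaluations: each criterion value is a *set* of possible
# membership degrees given by the decision maker (illustrative numbers).
suppliers = {
    "A": {"display": [0.6, 0.7], "qualification": [0.8],      "fit": [0.5, 0.6, 0.7]},
    "B": {"display": [0.7],      "qualification": [0.6, 0.7], "fit": [0.8]},
    "C": {"display": [0.4, 0.5], "qualification": [0.9],      "fit": [0.6]},
}
weights = {"display": 0.3, "qualification": 0.4, "fit": 0.3}

def hfs_score(hfe):
    """Common score function of a hesitant fuzzy element: mean membership."""
    return float(np.mean(hfe))

ranking = sorted(
    ((sum(w * hfs_score(s[c]) for c, w in weights.items()), name)
     for name, s in suppliers.items()),
    reverse=True,
)
for score, name in ranking:
    print(f"supplier {name}: weighted score {score:.3f}")
```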

  13. Multi-scale habitat selection modeling: A review and outlook

    Science.gov (United States)

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  14. Response to selection in finite locus models with nonadditive effects

    NARCIS (Netherlands)

    Esfandyari, Hadi; Henryon, Mark; Berg, Peer; Thomasen, Jørn Rind; Bijma, Piter; Sørensen, Anders Christian

    2017-01-01

    Under the finite-locus model in the absence of mutation, the additive genetic variation is expected to decrease when directional selection is acting on a population, according to quantitative-genetic theory. However, some theoretical studies of selection suggest that the level of additive

  15. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...

  16. Pricing of common cosmetic surgery procedures: local economic factors trump supply and demand.

    Science.gov (United States)

    Richardson, Clare; Mattison, Gennaya; Workman, Adrienne; Gupta, Subhas

    2015-02-01

    The pricing of cosmetic surgery procedures has long been thought to coincide with laws of basic economics, including the model of supply and demand. However, the highly variable prices of these procedures indicate that additional economic contributors are probable. The authors sought to reassess the fit of cosmetic surgery costs to the model of supply and demand and to determine the driving forces behind the pricing of cosmetic surgery procedures. Ten plastic surgery practices were randomly selected from each of 15 US cities of various population sizes. Average prices of breast augmentation, mastopexy, abdominoplasty, blepharoplasty, and rhytidectomy in each city were compared with economic and demographic statistics. The average price of cosmetic surgery procedures correlated substantially with population size (r = 0.767), cost-of-living index (r = 0.784), cost to own real estate (r = 0.714), and cost to rent real estate (r = 0.695) across the 15 US cities. Cosmetic surgery pricing also was found to correlate (albeit weakly) with household income (r = 0.436) and per capita income (r = 0.576). Virtually no correlations existed between pricing and the density of plastic surgeons (r = 0.185) or the average age of residents (r = 0.076). Results of this study demonstrate a correlation between costs of cosmetic surgery procedures and local economic factors. Cosmetic surgery pricing cannot be completely explained by the supply-and-demand model because no association was found between procedure cost and the density of plastic surgeons. © 2015 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.
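
    The reported r values are ordinary Pearson correlations between city-level averages. A sketch of the computation with made-up numbers (the study's data are not reproduced here):

```python
import numpy as np

# Hypothetical city-level data: average procedure price vs. local factors.
price           = np.array([5200, 6100, 4800, 7300, 6900, 5600])
population      = np.array([0.8, 2.3, 0.5, 8.4, 3.9, 1.2])       # millions
col_index       = np.array([92, 104, 88, 128, 118, 97])          # cost-of-living index
surgeon_density = np.array([1.1, 1.4, 0.9, 1.6, 1.3, 1.0])       # per 10,000 residents

for name, factor in [("population", population),
                     ("cost of living", col_index),
                     ("surgeon density", surgeon_density)]:
    r = np.corrcoef(price, factor)[0, 1]
    print(f"r(price, {name}) = {r:+.3f}")
```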

  17. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in evolutionary theory of populations arising due to the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions and new forms of duality (for state-dependent mutation and multitype selection) which are used to prove ergodic theorems in this context and are applicable for many other questions and renormalization analysis for a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection and make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  18. Specified assurance level sampling procedure

    International Nuclear Information System (INIS)

    Willner, O.

    1980-11-01

    In the nuclear industry design specifications for certain quality characteristics require that the final product be inspected by a sampling plan which can demonstrate product conformance to stated assurance levels. The Specified Assurance Level (SAL) Sampling Procedure has been developed to permit the direct selection of attribute sampling plans which can meet commonly used assurance levels. The SAL procedure contains sampling plans which yield the minimum sample size at stated assurance levels. The SAL procedure also provides sampling plans with acceptance numbers ranging from 0 to 10, thus, making available to the user a wide choice of plans all designed to comply with a stated assurance level
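
    The computation underlying such plans can be sketched as follows (a textbook reconstruction under a binomial lot model, not the SAL procedure's published tables): for a given acceptance number c, find the smallest sample size n whose probability of accepting a lot at the rejectable quality level is at most 1 − assurance.

```python
from math import comb

def accept_prob(n, c, p):
    """Probability of acceptance: at most c defectives in a sample of n,
    when the lot fraction defective is p (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def min_sample_size(c, p_reject, assurance):
    """Smallest n such that a lot at quality p_reject is accepted with
    probability at most (1 - assurance)."""
    n = c + 1
    while accept_prob(n, c, p_reject) > 1 - assurance:
        n += 1
    return n

for c in range(0, 4):
    n = min_sample_size(c, p_reject=0.10, assurance=0.95)
    print(f"acceptance number {c}: minimum sample size {n}")
# c = 0 gives the familiar n = 29 for a 95% assurance at 10% defective.
```

    As the abstract notes, larger acceptance numbers require larger samples for the same assurance level, which is exactly what the loop above reproduces.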

  19. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    With more and more importance of correctly selecting partners in supply chain of agricultural enterprises, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of agricultural supply chain. Secondly, a heuristic met...

  20. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the questions of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating the performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
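
    A minimal sketch of two of the reporting steps highlighted above (odds ratios as effect measures, ROC-based model strength) on simulated data with hypothetical financial ratios; it assumes the statsmodels and scikit-learn libraries and is not the paper's analysis.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 500
liquidity = rng.normal(0, 1, n)
leverage  = rng.normal(0, 1, n)
# Hypothetical binary growth indicator driven by both ratios.
logit = -0.5 + 1.2 * liquidity - 0.8 * leverage
growth = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([liquidity, leverage]))
res = sm.Logit(growth, X).fit(disp=0)

print(np.exp(res.params))        # odds ratios as effect measures
print(np.exp(res.conf_int()))    # 95% CIs for the odds ratios
print("AUC:", roc_auc_score(growth, res.predict(X)))  # discrimination (ROC)
```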

  1. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML-model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating models the software can automatically fit all these models to the data and provides a ranking for model selection, in case data is available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software.

  2. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...... in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
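
    The gain from two-dimensional distributions over the independence assumption can be illustrated with a toy conjunctive predicate (a simplification of the paper's graphical-model factorization to a single two-attribute factor):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
a = rng.integers(0, 10, n)
b = (a + rng.integers(0, 3, n)) % 10        # attribute correlated with a

# Conjunctive predicate: a = 4 AND b = 5
true_sel = np.mean((a == 4) & (b == 5))

# Independence assumption: multiply one-dimensional selectivities.
indep_est = np.mean(a == 4) * np.mean(b == 5)

# Two-dimensional distribution (joint histogram), as one graphical-model factor.
joint = np.zeros((10, 10))
np.add.at(joint, (a, b), 1)
joint /= n
two_d_est = joint[4, 5]

print(f"true {true_sel:.4f}  independence {indep_est:.4f}  2-D {two_d_est:.4f}")
```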

  3. Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes

    OpenAIRE

    Corradi, Valentina; Swanson, Norman R.

    2005-01-01

    Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting amongst multiple alternative forecasting models, all of which are possibl...
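
    The recursive-scheme validity corrections are the paper's contribution and are not reproduced here, but the underlying block-resampling idea looks like this (a minimal moving-block bootstrap on an AR(1) series; the block length is an assumption):

```python
import numpy as np

def moving_block_bootstrap(series, block_len, rng):
    """Resample a time series by concatenating randomly chosen
    overlapping blocks, preserving short-range dependence."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, n_blocks)
    return np.concatenate([series[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(4)
# AR(1) series: i.i.d. resampling would destroy the serial dependence.
e = rng.normal(0, 1, 500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + e[t]

boot_means = [moving_block_bootstrap(y, 25, rng).mean() for _ in range(2000)]
print("block-bootstrap std. error of the mean:", np.std(boot_means))
```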

  4. A model for the sustainable selection of building envelope assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Huedo, Patricia, E-mail: huedo@uji.es [Universitat Jaume I (Spain); Mulet, Elena, E-mail: emulet@uji.es [Universitat Jaume I (Spain); López-Mesa, Belinda, E-mail: belinda@unizar.es [Universidad de Zaragoza (Spain)

    2016-02-15

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  5. A model for the sustainable selection of building envelope assemblies

    International Nuclear Information System (INIS)

    Huedo, Patricia; Mulet, Elena; López-Mesa, Belinda

    2016-01-01

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  6. BSL-3 laboratory practices in the United States: comparison of select agent and non-select agent facilities.

    Science.gov (United States)

    Richards, Stephanie L; Pompei, Victoria C; Anderson, Alice

    2014-01-01

    New construction of biosafety level 3 (BSL-3) laboratories in the United States has increased in the past decade to facilitate research on potential bioterrorism agents. The Centers for Disease Control and Prevention inspect BSL-3 facilities and review commissioning documentation, but no single agency has oversight over all BSL-3 facilities. This article explores the extent to which standard operating procedures in US BSL-3 facilities vary between laboratories with select agent or non-select agent status. Comparisons are made for the following variables: personnel training, decontamination, personal protective equipment (PPE), medical surveillance, security access, laboratory structure and maintenance, funding, and pest management. Facilities working with select agents had more complex training programs and decontamination procedures than non-select agent facilities. Personnel working in select agent laboratories were likely to use powered air purifying respirators, while non-select agent laboratories primarily used N95 respirators. More rigorous medical surveillance was carried out in select agent workers (although not required by the select agent program) and a higher level of restrictive access to laboratories was found. Most select agent and non-select agent laboratories reported adequate structural integrity in facilities; however, differences were observed in personnel perception of funding for repairs. Pest management was carried out by select agent personnel more frequently than non-select agent personnel. Our findings support the need to promote high quality biosafety training and standard operating procedures in both select agent and non-select agent laboratories to improve occupational health and safety.

  7. Fuzzy decision-making: a new method in model selection via various validity criteria

    International Nuclear Information System (INIS)

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered the first step in scientific investigations. Several alternative models may be candidates to express a phenomenon, and scientists use various criteria to select one among the competing models. Based on the solution of a Fuzzy Decision-Making problem, this paper proposes a new method of model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper Possibility Distribution Function for each criterion. Finally, minimization of a utility function composed of the Possibility Distribution Functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran
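
    A schematic of the mechanics described above, with assumed criteria, anchors and membership shapes (the paper's actual Possibility Distribution Functions are not reproduced): each criterion maps a model's score to a possibility value, and the model minimizing a utility built from those values wins.

```python
import numpy as np

# Candidate models scored on three validity criteria (illustrative values):
# RMSE (lower better), complexity = number of parameters (lower better),
# residual autocorrelation |r1| (lower better).
models = {"M1": (0.42, 3, 0.30), "M2": (0.35, 6, 0.10), "M3": (0.34, 12, 0.08)}

def possibility(value, good, bad):
    """Linear possibility distribution: 1 at 'good' or better, 0 at 'bad'."""
    return float(np.clip((bad - value) / (bad - good), 0.0, 1.0))

criteria = [        # (good, bad) anchors per criterion: assumed, not from the paper
    (0.30, 0.60),   # RMSE
    (3, 15),        # number of parameters
    (0.05, 0.40),   # |r1|
]

for name, scores in models.items():
    mus = [possibility(v, g, b) for v, (g, b) in zip(scores, criteria)]
    # Utility composed from the possibility values; here minimize 1 - min(mu).
    utility = 1.0 - min(mus)
    print(name, [round(m, 2) for m in mus], "utility to minimize:", round(utility, 2))
```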

  8. Sex Role Learning: A Test of the Selective Attention Hypothesis.

    Science.gov (United States)

    Bryan, Janice Westlund; Luria, Zella

    This paper reports three studies designed to determine whether children show selective attention and/or differential memory to slide pictures of same-sex vs. opposite-sex models and activities. Attention was measured using a feedback EEG procedure, which measured the presence or absence of alpha rhythms in the subjects' brains during presentation…

  9. Target Selection Models with Preference Variation Between Offenders

    NARCIS (Netherlands)

    Townsley, Michael; Birks, Daniel; Ruiter, Stijn; Bernasco, Wim; White, Gentry

    2016-01-01

    Objectives: This study explores preference variation in location choice strategies of residential burglars. Applying a model of offender target selection that is grounded in assertions of the routine activity approach, rational choice perspective, crime pattern and social disorganization theories,

  10. Within-host selection of drug resistance in a mouse model reveals dose-dependent selection of atovaquone resistance mutations

    NARCIS (Netherlands)

    Nuralitha, Suci; Murdiyarso, Lydia S.; Siregar, Josephine E.; Syafruddin, Din; Roelands, Jessica; Verhoef, Jan; Hoepelman, Andy I.M.; Marzuki, Sangkot

    2017-01-01

    The evolutionary selection of malaria parasites within an individual host plays a critical role in the emergence of drug resistance. We have compared the selection of atovaquone resistance mutants in mouse models reflecting two different causes of failure of malaria treatment, an inadequate

  11. Multi-scale textural feature extraction and particle swarm optimization based model selection for false positive reduction in mammography.

    Science.gov (United States)

    Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin

    2015-12-01

    The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is not only a requirement for mass but also for calcification CAD systems which are currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and masses, respectively. Firstly, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on wavelet and gray level co-occurrence matrix. The second problem addressed in this paper is the parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from database of Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO based model selection for optimizing both classifier hyper-parameters and parameters, respectively. Furthermore, the obtained results indicate the promising performance of the proposed textural features and more specifically, those based on co-occurrence matrix of wavelet image representation technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
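
    As a sketch of the PSO-based model selection stage (only the SVM hyperparameter tuning; the paper's joint selection of textural features is omitted, and the search ranges are assumptions), a plain global-best PSO minimizing cross-validated error might look like:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           random_state=0)

def cv_error(log_c, log_gamma):
    """Objective: 5-fold cross-validated error of an RBF-SVM."""
    clf = SVC(C=10.0**log_c, gamma=10.0**log_gamma)
    return 1.0 - cross_val_score(clf, X, y, cv=5).mean()

# Global-best PSO over (log10 C, log10 gamma).
n_particles, n_iter = 12, 15
pos = rng.uniform([-2, -5], [3, 0], (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([cv_error(*p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-2, -5], [3, 0])
    f = np.array([cv_error(*p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(f"best C=10^{gbest[0]:.2f}, gamma=10^{gbest[1]:.2f}, "
      f"CV error={pbest_f.min():.3f}")
```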

  12. TMACS test procedure TP003: Graphics. Revision 5

    International Nuclear Information System (INIS)

    Scanlan, P.K.

    1994-01-01

    The TMACS Software Project Test Procedures translate the project's acceptance criteria into test steps. Software releases are certified when the affected Test Procedures are successfully performed and the customers authorize installation of these changes. This Test Procedure addresses the graphics requirements of the TMACS. The features to be tested are the data display graphics and the graphic elements that provide for operator control and selection of displays

  13. TMACS test procedure TP003: Graphics. Revision 6

    International Nuclear Information System (INIS)

    Scanlan, P.K.; Washburn, S.

    1994-01-01

    The TMACS Software Project Test Procedures translate the project's acceptance criteria into test steps. Software releases are certified when the affected Test Procedures are successfully performed and the customers authorize installation of these changes. This Test Procedure addresses the graphics requirements of the TMACS. The features to be tested are the data display graphics and the graphic elements that provide for operator control and selection of displays

  14. Interval-valued intuitionistic fuzzy multi-criteria model for design concept selection

    Directory of Open Access Journals (Sweden)

    Daniel Osezua Aikhuele

    2017-09-01

    Full Text Available This paper presents a new approach for design concept selection by using an integrated Fuzzy Analytical Hierarchy Process (FAHP) and an Interval-valued intuitionistic fuzzy modified TOPSIS (IVIF-modified TOPSIS) model. The integrated model, which uses the improved score function and a weighted normalized Euclidean distance method for the calculation of the separation measures of alternatives from the positive and negative intuitionistic ideal solutions, provides a new approach for the computation of intuitionistic fuzzy ideal solutions. The results of the two approaches are integrated using a reflection defuzzification integration formula. To ensure the feasibility and the rationality of the integrated model, the method is successfully applied for evaluating and selecting some design related problems including a real-life case study for the selection of the best concept design for a new printed-circuit-board (PCB) and for a hypothetical example. The model, which provides a novel alternative, has been compared with similar computational methods in the literature.

  15. Experimental Testing Procedures and Dynamic Model Validation for Vanadium Redox Flow Battery Storage System

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Nørgård, Per Bromand

    2013-01-01

    The paper aims at characterizing the electrochemical and thermal parameters of a 15 kW/320 kWh vanadium redox flow battery (VRB) installed in the SYSLAB test facility of the DTU Risø Campus and experimentally validating the proposed dynamic model realized in Matlab-Simulink. The adopted testing...... efficiency of the battery system. The test procedure has general validity and could also be used for other storage technologies. The storage model proposed and described is suitable for electrical studies and can represent a general model in terms of validity. Finally, the model simulation outputs...

  16. 78 FR 47047 - Proposed Policy for Discontinuance of Certain Instrument Approach Procedures

    Science.gov (United States)

    2013-08-02

    ... the cancellation of certain Non-directional Beacon (NDB) and Very High Frequency (VHF) Omnidirectional... approach procedures. The FAA proposes specific criteria to guide the identification and selection of... selection of potential NDB and VOR procedures for cancellation. Once the criteria are established and the...

  17. An Uncertain Wage Contract Model with Adverse Selection and Moral Hazard

    Directory of Open Access Journals (Sweden)

    Xiulan Wang

    2014-01-01

    it can be characterized as an uncertain variable. Moreover, the employee's effort is unobservable to the employer, and the employee can select her effort level to maximize her utility. Thus, an uncertain wage contract model with adverse selection and moral hazard is established to maximize the employer's expected profit. And the model analysis mainly focuses on the equivalent form of the proposed wage contract model and the optimal solution to this form. The optimal solution indicates that both the employee's effort level and the wage increase with the employee's ability. Lastly, a numerical example is given to illustrate the effectiveness of the proposed model.

  18. On market timing and portfolio selectivity: modifying the Henriksson-Merton model

    OpenAIRE

    Goś, Krzysztof

    2011-01-01

    This paper evaluates selected functionalities of the parametrical Henriksson-Merton test, a tool designed for measuring the market timing and portfolio selectivity capabilities. It also provides a solution to two significant disadvantages of the model: relatively indirect interpretation and vulnerability to parameter insignificance. The model has been put to test on a group of Polish mutual funds in a period of 63 months (January 2004 – March 2009), providing unsatisfa...

  19. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. The article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interaction should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
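
    The article demonstrates this in R; an equivalent sketch of one purposeful-selection step in Python (simulated data, hypothetical covariates) tests a candidate deletion with the likelihood ratio test and then checks whether the deletion shifts the remaining coefficients:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(6)
n = 400
age = rng.normal(60, 10, n)
sex = rng.binomial(1, 0.5, n)
biomarker = rng.normal(0, 1, n)
lp = -2 + 0.04 * age + 0.9 * biomarker          # sex truly has no effect
y = rng.binomial(1, 1 / (1 + np.exp(-lp)))

def fit(cols):
    X = sm.add_constant(np.column_stack(cols))
    return sm.Logit(y, X).fit(disp=0)

full = fit([age, sex, biomarker])
reduced = fit([age, biomarker])                  # candidate deletion: sex

# Likelihood ratio test: does deleting 'sex' significantly worsen the fit?
lr = 2 * (full.llf - reduced.llf)
p = chi2.sf(lr, df=1)
print(f"LR statistic {lr:.3f}, p = {p:.3f} -> {'keep' if p < 0.05 else 'drop'} sex")

# Confounding check: does deletion shift the remaining coefficients by >20%?
delta = np.abs((reduced.params[1:] - full.params[[1, 3]]) / full.params[[1, 3]])
print("relative change in remaining coefficients:", np.round(delta, 3))
```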

  20. Mathematical Model of the Emissions of a selected vehicle

    Directory of Open Access Journals (Sweden)

    Matušů Radim

    2014-10-01

    Full Text Available The article addresses the quantification of exhaust emissions from gasoline engines during transient operation. The main targeted emissions are carbon monoxide and carbon dioxide. The result is a mathematical model describing the production of individual emissions components in all modes (static and dynamic. It also describes the procedure for the determination of emissions from the engine’s operating parameters. The result is compared with other possible methods of measuring emissions. The methodology is validated using the data from an on-road measurement. The mathematical model was created on the first route and validated on the second route.

  1. Pain Management for Gynecologic Procedures in the Office.

    Science.gov (United States)

    Ireland, Luu Doan; Allen, Rebecca H

    2016-02-01

    Satisfactory pain control for women undergoing office gynecologic procedures is critical for both patient comfort and procedure success. Therefore, it is important for clinicians to be aware of the safety and efficacy of different pain control regimens. This article aimed to review the literature regarding pain control regimens for procedures such as endometrial biopsy, intrauterine device insertion, colposcopy and loop electrosurgical excisional procedure, uterine aspiration, and hysteroscopy. A search of published literature using PubMed was conducted using the following keywords: "pain" or "anesthesia." These terms were paired with the following keywords: "intrauterine device" or "IUD," "endometrial biopsy," "uterine aspiration" or "abortion," "colposcopy" or "loop electrosurgical excisional procedure" or "LEEP," "hysteroscopy" or "hysteroscopic sterilization." The search was conducted through July 2015. Articles were hand reviewed and selected by the authors for study quality. Meta-analyses and randomized controlled trials were prioritized. Although local anesthesia is commonly used for gynecologic procedures, a multimodal approach may be more effective including oral medication, a dedicated emotional support person, and visual or auditory distraction. Women who are nulliparous, are postmenopausal, have a history of dysmenorrhea, or suffer from anxiety are more likely to experience greater pain with gynecologic procedures. Evidence for some interventions exists; however, the interpretation of intervention comparisons is limited by the use of different regimens, pain measurement scales, patient populations, and procedure techniques. There are many options for pain management for office gynecologic procedures, and depending on the procedure, different modalities may work best. The importance of patient counseling and selection cannot be overstated.

  2. A single-photon ecat reconstruction procedure based on a PSF model

    International Nuclear Information System (INIS)

    Ying-Lie, O.

    1984-01-01

    Emission Computed Axial Tomography (ECAT) has been applied in nuclear medicine for the past few years. Owing to attenuation and scatter along the ray path, adequate correction methods are required. In this thesis, a correction method for attenuation, detector response and Compton scatter has been proposed. The method developed is based on a PSF model. The parameters of the models were derived by fitting experimental and simulation data. Because of its flexibility, a Monte Carlo simulation method has been employed. Using the PSF models, it was found that the ECAT problem can be described by the added modified equation. Application of the reconstruction procedure to simulation data yields satisfactory results. The algorithm tends to amplify noise and distortion in the data, however. Therefore, the applicability of the method to patient studies remains to be seen. (Auth.)

  3. Procedures and Methods for Cross-community Online Deliberation

    Directory of Open Access Journals (Sweden)

    Cyril Velikanov

    2010-09-01

    Full Text Available In this paper we introduce our model of self-regulated mass online deliberation, and apply it to a context of cross-border deliberation involving translation of contributions between participating languages, and then to a context of cross-community online deliberation for dispute resolution, e.g. between opposing ethnic or religious communities. In such a cross-border or cross-community context, online deliberation should preferably progress as a sequence of segmented phases, each followed by a combining phase. In a segmented phase, each community deliberates separately and selects its best contributions for presentation to all other communities. Selection is made by using our proposed mechanism of mutual moderation and appraisal of contributions by participants themselves. In the subsequent combining phase, the selected contributions are translated (by volunteering or randomly selected participants among those who have specified appropriate language skills) and presented to target segments for further appraisal and commenting. Our arguments in support of the proposed mutual moderation and appraisal procedures remain mostly speculative, as the whole subject of mass online self-regulatory deliberation still remains largely unexplored and there exists no practical realisation of it.

  4. Predictive models reduce talent development costs in female gymnastics.

    Science.gov (United States)

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based on the different statistical analyses results in a 33.3% cost reduction, because the pool of selected athletes can be reduced to 92 gymnasts instead of 138 (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.

  5. Validation of elk resource selection models with spatially independent data

    Science.gov (United States)

    Priscilla K. Coe; Bruce K. Johnson; Michael J. Wisdom; John G. Cook; Marty Vavra; Ryan M. Nielson

    2011-01-01

    Knowledge of how landscape features affect wildlife resource use is essential for informed management. Resource selection functions often are used to make and validate predictions about landscape use; however, resource selection functions are rarely validated with data from landscapes independent of those from which the models were built. This problem has severely...

  6. Towards a pro-health food-selection model for gatekeepers in ...

    African Journals Online (AJOL)

    The purpose of this study was to develop a pro-health food selection model for gatekeepers of Bulawayo high-density suburbs in Zimbabwe. Gatekeepers in five suburbs constituted the study population from which a sample of 250 subjects was randomly selected. Of the total respondents (N= 182), 167 had their own ...

  7. 48 CFR 570.105-2 - Two-phase design-build selection procedures.

    Science.gov (United States)

    2010-10-01

    ... lease construction projects with options to purchase the real property leased. Use the procedures in.... (iii) The capability and experience of potential contractors. (iv) The suitability of the project for...

  8. Selection of procedures for inservice inspections; Auswahl der Verfahren fuer wiederkehrende Pruefungen

    Energy Technology Data Exchange (ETDEWEB)

    Brast, G [Preussische Elektrizitaets-AG (Preussenelektra), Hannover (Germany); Britz, A [Bayernwerk AG, Muenchen (Germany); Maier, H J [Stuttgart Univ. (Germany). Staatliche Materialpruefungsanstalt; Seidenkranz, T [TUEV Energie- und Systemtechnik GmbH, Mannheim (Germany)

    1998-11-01

    At present, selection of procedures for inservice inspection has to take into account the legal basis, i.e. the existing regulatory codes, and the practical aspects, i.e. experience and information obtained from the initial baseline inspection or from the latest recurrent inspection. However, regulatory codes are being reviewed to a certain extent in order to permit integration of technological progress. Depending on the degree to which inspection-task-specific, sensitive and qualified NDE techniques become available in future for inservice inspections (`risk based ISI`), the framework of defined inspection intervals, sites, and detection limits will be broken up and altered in response to progress made. This opens up new opportunities for an optimization of inservice inspections for proof of component integrity. (orig./CB) [German original, translated: At present, the choice of inspection techniques must be guided by the applicable codes and, since recurrent inspections are concerned, by the baseline inspection or the latest recurrent inspection. At the same time, the codes are currently being opened up in a way that also accommodates the further development of inspection techniques. To the extent that qualified inspection techniques, optimally matched to the inspection task and its required findings and offering high detection sensitivity at the component, become available in future for targeted recurrent inspections (`risk based ISI`), the framework of fixed inspection intervals, inspection locations and fixed recording thresholds will be broken up and can be made flexible. This creates new possibilities for optimizing recurrent inspections as proof of component integrity.] (orig./MM)

  9. Using variable combination population analysis for variable selection in multivariate calibration.

    Science.gov (United States)

    Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song

    2015-03-03

    Variable (wavelength or feature) selection techniques have become a critical step for the analysis of datasets with a large number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, an exponentially decreasing function (EDF), inspired by the simple and effective 'survival of the fittest' principle of Darwin's theory of natural selection, is employed to determine the number of variables to keep and to continuously shrink the variable space. Second, in each EDF run, a binary matrix sampling (BMS) strategy, which gives each variable the same chance of being selected and generates different variable combinations, is used to produce a population of subsets from which a population of sub-models is built. Model population analysis (MPA) is then employed to find the variable subsets with the lowest root mean square error of cross-validation (RMSECV): the frequency with which each variable appears in the best 10% of sub-models is computed, and the higher the frequency, the more important the variable. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high-performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research on the website: http://www.mathworks.com/matlabcentral/fileexchange/authors/498750. Copyright © 2015 Elsevier B.V. All rights reserved.
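    The VCPA loop lends itself to a compact sketch. The version below is a simplified approximation under stated assumptions: the EDF shrink rate, the number of runs and subsets, and the PLS settings are illustrative, and the exhaustive refinement step of the published method is omitted (the authors' MATLAB code at the URL above is the reference implementation).

    ```python
    # Simplified VCPA sketch: EDF shrinks the variable pool, BMS samples random
    # variable subsets, and the variables most frequent in the best 10% of
    # sub-models (by cross-validated RMSE of a PLS model) survive to the next run.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def vcpa(X, y, n_runs=10, n_subsets=100, seed=0):
        rng = np.random.default_rng(seed)
        pool = np.arange(X.shape[1])                       # surviving variable indices
        for _ in range(n_runs):
            # BMS: each pooled variable enters each subset with probability 0.5.
            masks = rng.random((n_subsets, pool.size)) < 0.5
            rmsecv = np.full(n_subsets, np.inf)
            for j, m in enumerate(masks):
                if m.sum() < 2:
                    continue
                pls = PLSRegression(n_components=min(5, int(m.sum())))
                mse = -cross_val_score(pls, X[:, pool[m]], y, cv=5,
                                       scoring="neg_mean_squared_error").mean()
                rmsecv[j] = np.sqrt(mse)
            # MPA: frequency of each variable in the best 10% of sub-models.
            best = masks[np.argsort(rmsecv)[: max(1, n_subsets // 10)]]
            freq = best.mean(axis=0)
            # EDF: exponentially shrink the pool, keeping the most frequent variables.
            keep = max(2, int(pool.size * np.exp(-0.3)))
            pool = pool[np.argsort(freq)[::-1][:keep]]
        return pool                                        # indices of selected variables
    ```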

  10. A procedure for selection marking in hardwoods

    Science.gov (United States)

    George R. Trimble, Jr.; Joseph J. Mendel; Richard A. Kennell

    1974-01-01

    This method of applying individual-tree selection silviculture to hardwood stands combines silvicultural considerations with financial-maturity guidelines into a tree-marking system. To develop this system, it was necessary to determine rates of return, based on 4/4 lumber, for many of the important Appalachian species. Trees were viewed as capital investments that...
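    The financial-maturity idea behind such a marking system can be illustrated with a toy calculation: treat each tree as a capital investment and mark it for harvest once its projected rate of value growth falls below an alternative rate of return. All species, values, and rates below are invented for illustration.

    ```python
    # Toy financial-maturity rule: a tree is "financially mature" (mark for
    # harvest) once its projected rate of value growth falls below the rate
    # available from alternative investments. All figures are invented.

    def value_growth_rate(value_now: float, value_later: float, years: int) -> float:
        """Compound annual rate at which the tree's stumpage value is growing."""
        return (value_later / value_now) ** (1.0 / years) - 1.0

    alternative_rate = 0.05                       # return from other investments
    trees = {                                     # (value now $, value in 10 yr $, years)
        "red oak, 18 in": (120.0, 210.0, 10),
        "yellow-poplar, 22 in": (140.0, 155.0, 10),
    }

    for tree, (now, later, yrs) in trees.items():
        r = value_growth_rate(now, later, yrs)
        action = "retain" if r > alternative_rate else "mark for harvest"
        print(f"{tree}: {r:.1%}/yr -> {action}")
    ```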

  11. Variable selection and estimation for longitudinal survey data

    KAUST Repository

    Wang, Li

    2014-09-01

    There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys are used in many areas today, for example in the health and social sciences, to explore relationships or to identify significant variables in regression settings. This paper develops a general strategy for the model selection problem in longitudinal sample surveys. A survey-weighted penalized estimating equation approach is proposed to select significant variables and estimate the coefficients simultaneously. The proposed estimators are design-consistent and perform as well as the oracle procedure that knows the correct submodel. The estimating function bootstrap is applied to obtain the standard errors of the estimated parameters with good accuracy. A fast and efficient variable selection algorithm is developed to identify significant variables in complex longitudinal survey data. Simulated examples illustrate the usefulness of the proposed methodology under various model settings and sampling designs. © 2014 Elsevier Inc.
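    As a rough stand-in for the survey-weighted penalized estimating equations, the sketch below runs an l1-penalized (lasso) regression with survey weights supplied as sample weights. It ignores the within-subject correlation that the paper's estimating equations handle, and the data, weights, and penalty level are invented.

    ```python
    # Survey-weighted lasso as a crude stand-in for survey-weighted penalized
    # estimating equations: subjects are weighted by inverse inclusion probability.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n, p = 300, 20
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[[0, 3, 7]] = [2.0, -1.5, 1.0]              # only three truly active variables
    y = X @ beta + rng.normal(size=n)

    inclusion_prob = rng.uniform(0.2, 0.9, size=n)  # unequal-probability design
    w = 1.0 / inclusion_prob                        # survey weights

    model = Lasso(alpha=0.1).fit(X, y, sample_weight=w)
    print("selected variables:", np.flatnonzero(model.coef_))   # ideally [0 3 7]
    ```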

  12. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology, and it helps to clarify the variety and complexity of organisms. However, students at almost every stage of education find it difficult to understand the mechanism of natural selection, and they can develop misconceptions about it. This article provides an active model of natural…

  13. The Psychology Department Model Advisement Procedure: A Comprehensive, Systematic Approach to Career Development Advisement

    Science.gov (United States)

    Howell-Carter, Marya; Nieman-Gonder, Jennifer; Pellegrino, Jennifer; Catapano, Brittani; Hutzel, Kimberly

    2016-01-01

    The MAP (Model Advisement Procedure) is a comprehensive, systematic approach to developmental student advisement. The MAP was implemented to improve advisement consistency, improve student preparation for internships/senior projects, increase career exploration, reduce career uncertainty, and, ultimately, improve student satisfaction with the…

  14. 42 CFR 431.814 - Sampling plan and procedures.

    Science.gov (United States)

    2010-10-01

    ... reliability of the reduced sample. (4) The sample selection procedure. Systematic random sampling is ... sampling, and yield estimates with the same or better precision than achieved in systematic random sampling ...

  15. Statistical selection: a way of thinking!

    NARCIS (Netherlands)

    Laan, van der P.; Aarts, E.H.L.; Eikelder, ten H.M.M.; Hemerik, C.; Rem, M.

    1995-01-01

    Statistical selection of the best population is discussed in general terms and the principles of statistical selection procedures are presented. Advantages and disadvantages of Subset Selection, one of the main approaches, are indicated. The selection of an almost best population is considered and...
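    One of the main approaches mentioned, subset selection in the style of Gupta, is easy to sketch: retain every population whose sample mean lies within a constant d of the largest sample mean, so that the subset contains the best population with a prescribed probability P*. The constant below is illustrative; in practice h depends on the number of populations, the sample size, and P*.

    ```python
    # Gupta-style subset selection: keep every population whose sample mean is
    # within d of the best; h is illustrative (it depends on k, n and P*).
    import numpy as np

    def subset_selection(samples, h=2.9):
        """samples: one 1-D array per population, common (known) unit variance."""
        means = np.array([s.mean() for s in samples])
        n = min(len(s) for s in samples)
        d = h * np.sqrt(2.0 / n)                  # selection constant
        return np.flatnonzero(means >= means.max() - d)

    rng = np.random.default_rng(2)
    pops = [rng.normal(mu, 1.0, size=30) for mu in (0.0, 0.2, 0.8)]
    print("selected populations:", subset_selection(pops))
    ```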

  16. Statistical selection : a way of thinking!

    NARCIS (Netherlands)

    Laan, van der P.

    1995-01-01

    Statistical selection of the best population is discussed in general terms and the principles of statistical selection procedures are presented. Advantages and disadvantages of Subset Selection, one of the main approaches, are indicated. The selection of an almost best population is considered and...

  17. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large datasets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock dataset and a large 34-year stock dataset. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
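    A hedged sketch of the central trick: estimate the latent Gaussian correlation from Kendall's tau (the elliptical-copula transform), sparsify it with the graphical lasso, and prefer the stocks with the fewest conditional dependencies. This is only an outline of the approach, not the authors' regularized rank-based estimator; the returns and penalty level are invented.

    ```python
    # Rank-based latent correlation + graphical lasso; stocks with the fewest
    # edges in the resulting graph are treated as the most independent.
    import numpy as np
    from scipy.stats import kendalltau
    from sklearn.covariance import graphical_lasso

    rng = np.random.default_rng(3)
    returns = rng.normal(size=(500, 8))           # 500 days x 8 stocks (toy data)
    returns[:, 1] += 0.8 * returns[:, 0]          # make stocks 0 and 1 dependent

    p = returns.shape[1]
    tau = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            tau[i, j] = tau[j, i] = kendalltau(returns[:, i], returns[:, j])[0]

    corr = np.sin(np.pi / 2 * tau)                # latent Gaussian correlation estimate
    _, precision = graphical_lasso(corr, alpha=0.2)

    degree = (np.abs(precision) > 1e-6).sum(axis=1) - 1   # conditional dependencies
    print("edges per stock:", degree)
    print("most independent stocks:", np.argsort(degree)[:4])
    ```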

  18. An optimized procedure for preconcentration, determination and on-line recovery of palladium using highly selective diphenyldiketone-monothiosemicarbazone modified silica gel

    International Nuclear Information System (INIS)

    Sharma, R.K.; Pandey, Amit; Gulati, Shikha; Adholeya, Alok

    2012-01-01

    Highlights: ► Diphenyldiketone-monothiosemicarbazone modified silica gel. ► Highly selective, efficient and reusable chelating resin. ► Solid phase extraction system for on-line separation and preconcentration of Pd(II) ions. ► Application to catalytic converter and spiked tap water samples for on-line recovery of Pd(II) ions. - Abstract: A novel, highly selective, efficient and reusable chelating resin, diphenyldiketone-monothiosemicarbazone modified silica gel, was prepared and applied for the on-line separation and preconcentration of Pd(II) ions in catalytic converter and spiked tap water samples. Several parameters, such as pH, sample volume, flow rate, type of eluent, and the influence of various ionic interferences, were evaluated for effective adsorption of palladium at trace levels. The resin was found to be highly selective for Pd(II) ions in the pH range 4–5, with a very high sorption capacity of 0.73 mmol/g and a preconcentration factor of 335. The present environmentally friendly procedure has also been applied to large-scale extraction by employing a newly designed reactor in which on-line separation and preconcentration of Pd can be carried out easily and efficiently in a short time.

  19. A survey of variable selection methods in two Chinese epidemiology journals

    Directory of Open Access Journals (Sweden)

    Henry S. Lynn

    2010-09-01

    Background: Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods: Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified, whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria, such as prior knowledge or personal judgment. Results: Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions: The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.

  20. Predictive Active Set Selection Methods for Gaussian Processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2012-01-01

    We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two-step alternating procedure of active set update rules and hyperparameter optimization based upon marginal likelihood ... high impact on the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria: the first prefers a model with interpretable active set parameters, whereas the second puts computational complexity first, thus a model ... with active set parameters that directly control its complexity. We also provide both theoretical and empirical support for our active set selection strategy being a good approximation of a full Gaussian process classifier. Our extensive experiments show that our approach can compete with state...
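    The alternating flavour of such a scheme can be illustrated with a simplified sketch: refit a GP classifier on a small active set, add the points whose predictions are most uncertain, and drop the ones handled most confidently. The update rules below are illustrative stand-ins, not the paper's criteria, which are based on the GP predictive distributions.

    ```python
    # Alternating active-set scheme (illustrative): refit a GP classifier on the
    # active set, add the most uncertain points, drop the most confident ones.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.gaussian_process import GaussianProcessClassifier

    X, y = make_moons(n_samples=600, noise=0.25, random_state=0)
    rng = np.random.default_rng(0)
    active = rng.choice(len(X), size=50, replace=False)

    for _ in range(5):
        # Refit (hyperparameters are re-optimized by marginal likelihood each time).
        gpc = GaussianProcessClassifier().fit(X[active], y[active])
        conf = np.abs(gpc.predict_proba(X)[:, 1] - 0.5)       # small = near the boundary
        candidates = np.setdiff1d(np.arange(len(X)), active)
        add = candidates[np.argsort(conf[candidates])[:20]]   # most uncertain
        drop = active[np.argsort(-conf[active])[:20]]         # most confident
        active = np.union1d(np.setdiff1d(active, drop), add)

    gpc = GaussianProcessClassifier().fit(X[active], y[active])
    print(f"active set size: {len(active)}, full-data accuracy: {gpc.score(X, y):.3f}")
    ```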