WorldWideScience

Sample records for model selection criterion

  1. Model selection criterion in survival analysis

    Science.gov (United States)

    Karabey, Uǧur; Tutkun, Nihal Ata

    2017-07-01

    Survival analysis deals with the time until the occurrence of an event of interest, such as death, recurrence of an illness, equipment failure, or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural, or social sciences. Deciding on the most appropriate model for the data is an important part of the analysis. In the literature, the Akaike information criterion or the Bayesian information criterion is used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
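
    As a minimal illustration of the selection rule discussed above (not taken from the paper), the sketch below computes AIC and BIC from the maximized log-likelihood of two nested parametric survival models; the log-likelihood values and sample size are hypothetical.

```python
import numpy as np

def aic(log_lik: float, k: int) -> float:
    """Akaike information criterion: -2*logL plus a fixed penalty per parameter."""
    return -2.0 * log_lik + 2.0 * k

def bic(log_lik: float, k: int, n: int) -> float:
    """Bayesian information criterion: the penalty grows with the sample size n."""
    return -2.0 * log_lik + k * np.log(n)

# Hypothetical maximized log-likelihoods for two nested survival models
candidates = {"exponential": (-412.3, 1), "weibull": (-405.9, 2)}
n = 150  # hypothetical number of subjects
for name, (ll, k) in candidates.items():
    print(f"{name}: AIC = {aic(ll, k):.1f}, BIC = {bic(ll, k, n):.1f}")
# The model with the smaller criterion value is preferred.
```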

  2. Akaike information criterion to select well-fit resist models

    Science.gov (United States)

    Burbine, Andrew; Fryer, David; Sturtevant, John

    2015-03-01

    In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train it. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting by calibrating the model on k folds of the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time consuming. The Akaike information criterion (AIC) is an information-theoretic approach to model selection, based on the maximized log-likelihood for a given model, that needs only a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist model forms that have various numbers and types of terms to describe photoresist behavior. It is shown that there is good correspondence between AIC and k-fold cross validation in selecting the best model form, and it is further shown that over-fitting is, in most cases, not indicated. In model forms with more than 40 fitting parameters, the size of the calibration data set is sufficient to support the additional parameters, statistically validating the model complexity.
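
    To make the comparison concrete, here is a small synthetic sketch (not from the study, which used compact resist models): it ranks polynomial fits of increasing degree both by a Gaussian AIC computed from a single fit and by 5-fold cross-validation error, which requires one refit per fold. The data and model family are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)  # synthetic stand-in data

def gaussian_aic(y, yhat, k):
    """AIC for a Gaussian error model, up to an additive constant."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def kfold_mse(x, y, degree, folds=5):
    """Mean held-out squared error over the folds (needs `folds` refits)."""
    idx = rng.permutation(x.size)
    errs = []
    for test in np.array_split(idx, folds):
        train = np.setdiff1d(idx, test)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[test]) - y[test]) ** 2))
    return float(np.mean(errs))

for d in range(1, 8):  # candidate model forms of increasing complexity
    coef = np.polyfit(x, y, d)
    print(d, round(gaussian_aic(y, np.polyval(coef, x), d + 1), 1),
          round(kfold_mse(x, y, d), 4))
# The two rankings typically agree on the best model form.
```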

  3. Modelling of Lime Kiln Using Subspace Method with New Order Selection Criterion

    Directory of Open Access Journals (Sweden)

    Li Zhang

    2014-01-01

    Motivated by the practical control demands of a rotary kiln, this paper builds a state-space model of the calcining belt using the PO-MOESP subspace method. A novel order-delay double-parameter error criterion (ODC) is presented to reduce the model order. The proposed subspace order-identification method takes into account the influence of both order and delay on the model error criterion simultaneously. Because delay factors are introduced, the order is reduced dramatically in the system modeling. Also, in the data-processing stage, a sliding-window method is adopted to strip the delay factor from the historical data, so the parameters can be changed flexibly. Some practical problems in industrial kiln process modeling are also solved. Finally, the method is applied to an industrial kiln case.

  4. Financial performance as a decision criterion of credit scoring models selection [doi: 10.21529/RECADM.2017004]

    Directory of Open Access Journals (Sweden)

    Rodrigo Alves Silva

    2017-09-01

    This paper aims to show the importance of using financial metrics in the decision-making process of credit scoring model selection. To this end, we consider an automatic-approval system approach and carry out a performance analysis of financial metrics on the theoretical portfolios generated by seven credit scoring models based on the main statistical learning techniques. The models were estimated on the German Credit dataset and the results were analyzed based on four metrics: total accuracy, error cost, risk-adjusted return on capital, and the Sharpe index. The results show that total accuracy, widely used as a criterion for selecting credit scoring models, is unable to select the most profitable model for the company, indicating the need to incorporate financial metrics into the credit scoring model selection process. Keywords: credit risk; model selection; statistical learning.
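
    A toy sketch of the paper's point (the portfolio, cutoff, and per-loan economics below are all made up, not the study's): total accuracy and portfolio profit computed for the same automatic-approval decision can disagree about how good a scoring model is.

```python
import numpy as np

# Hypothetical scored portfolio: model's default probability and observed outcome
p_default = np.array([0.05, 0.20, 0.65, 0.10, 0.80, 0.30])
defaulted = np.array([0, 0, 1, 1, 1, 0], dtype=bool)

approve = p_default < 0.5                   # assumed automatic-approval cutoff
accuracy = np.mean(approve != defaulted)    # approved & repaid, or rejected & defaulted

# Assumed per-loan economics: margin on good loans, loss-given-default on bad ones
loan, margin, lgd = 1000.0, 0.15, 0.60
profit = np.where(defaulted, -loan * lgd, loan * margin)
portfolio_profit = profit[approve].sum()    # only approved loans enter the portfolio

# High accuracy (5/6) but a negative portfolio profit: the metrics disagree.
print(f"total accuracy = {accuracy:.2f}, portfolio profit = {portfolio_profit:.0f}")
```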

  5. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk.

    Science.gov (United States)

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2016-07-23

    This study focuses on multiple linear regression models relating six climate indices (temperature-humidity index THI, environmental stress index ESI, equivalent temperature index ETI, heat load index HLI, modified HLI (HLInew), and respiratory rate predictor RRP) to three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage selection operator (LASSO) and the Akaike information criterion (AIC) are applied to select the best model for the milk predictands with the smallest number of climate predictors. Uncertainty is estimated by bootstrapping through resampling, and cross validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months from April to September, 2002 to 2010, are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R² of 0.50 and 0.49, respectively. In summer, milk yield with the independent variables THI, ETI, and ESI shows the strongest relation (p-value < 0.001) with R² of 0.69. For fat and protein the results are only marginal. This method is suggested for studies of the impact of climate variability/change on agriculture and food science when short time series or data with large uncertainty are available.
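
    For readers who want to reproduce the flavor of this selection step, scikit-learn's LassoLarsIC chooses the LASSO penalty by AIC in a single pass along the LARS path. The sketch below runs it on simulated data; the predictor matrix merely stands in for the six climate indices and is not the study's data.

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC

rng = np.random.default_rng(1)
n = 120
X = rng.normal(size=(n, 6))   # columns stand in for THI, ESI, ETI, HLI, HLI_new, RRP
milk_yield = 25 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0.0, 1.0, n)

# LassoLarsIC computes the LARS path once and scores each model on it by AIC
model = LassoLarsIC(criterion="aic").fit(X, milk_yield)
print("selected predictor columns:", np.flatnonzero(model.coef_))
print("chosen penalty alpha:", model.alpha_)
```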

  6. New dental implant selection criterion based on implant design

    OpenAIRE

    El-Anwar, Mohamed I.; El-Zawahry, Mohamed M.; Ibraheem, Eman M.; Nassani, Mohammad Zakaria; ElGabry, Hisham

    2017-01-01

    Objective: A comparative study between threaded and plain dental implant designs was performed to find a new criterion for dental implant selection. Materials and Methods: Several dental implant designs with a systematic increase in diameter and length were positioned in a cylindrical-shaped bone section and analyzed using the finite element method. Four loading types were tested on the different dental implant designs: tension of 50 N, compression of 100 N, bending of 20 N, and torque of 2 Nm, t...

  7. Neutron shielding calculations in a proton therapy facility based on Monte Carlo simulations and analytical models: Criterion for selecting the method of choice

    International Nuclear Information System (INIS)

    Titt, U.; Newhauser, W. D.

    2005-01-01

    Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers, and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results for which solutions have not converged, e.g. owing to too few particle histories, can be excluded, and deterministic models are used at these locations instead. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of the Monte Carlo simulation to be taken into account and reject all solutions with uncertainties larger than the design safety margins. In this study, an optimum rejection criterion of 10% was found. The mean ratio was 26; 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)
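
    The acceptance logic described above fits in a few lines; the sketch below (with made-up tally values, not the study's) keeps a Monte Carlo estimate only when its relative uncertainty is within the 10% rejection criterion and otherwise falls back to the analytical model.

```python
# Hypothetical dose-equivalent estimates: Monte Carlo (value, relative error)
mc_tallies = {
    "maze_entry":     (1.2e-3, 0.04),
    "control_room":   (3.1e-5, 0.22),   # poorly converged: too few histories
    "treatment_door": (8.7e-4, 0.08),
}
analytical = {"maze_entry": 1.5e-3, "control_room": 2.0e-5, "treatment_door": 9.5e-4}

REL_ERR_CUTOFF = 0.10   # the optimum rejection criterion reported in the study

for location, (value, rel_err) in mc_tallies.items():
    if rel_err <= REL_ERR_CUTOFF:
        estimate, method = value, "Monte Carlo"
    else:
        estimate, method = analytical[location], "analytical"
    print(f"{location}: {estimate:.2e} ({method})")
```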

  8. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the computational complexity of the estimation without significantly improving its performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The criterion is based on determining the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement, and it is independent of the nature of the measured variable. The criterion is used in conjunction with three Gaussian-based algorithms: the extended information filter (EIF), the extended Kalman filter (EKF), and the unscented Kalman filter (UKF). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
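
    As a minimal sketch of a covariance-based selection rule of this kind (a simplification, not the authors' exact criterion, which also weighs estimation convergence), the snippet picks the candidate measurement whose Kalman update yields the smallest posterior covariance trace. The state covariance and measurement models are hypothetical.

```python
import numpy as np

def posterior_cov(P, H, R):
    """State covariance after a linear Kalman/EKF measurement update."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    return (np.eye(P.shape[0]) - K @ H) @ P

P = np.diag([4.0, 1.0])                      # current state covariance (assumed)
candidates = {                               # hypothetical measurement models (H, R)
    "range":   (np.array([[1.0, 0.0]]), np.array([[0.5]])),
    "bearing": (np.array([[0.0, 1.0]]), np.array([[0.1]])),
}
best = min(candidates,
           key=lambda m: np.trace(posterior_cov(P, *candidates[m])))
print("most informative measurement:", best)   # "range": it cuts the larger variance
```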

  9. New dental implant selection criterion based on implant design.

    Science.gov (United States)

    El-Anwar, Mohamed I; El-Zawahry, Mohamed M; Ibraheem, Eman M; Nassani, Mohammad Zakaria; ElGabry, Hisham

    2017-01-01

    A comparative study between threaded and plain dental implant designs was performed to find a new criterion for dental implant selection. Several dental implant designs with a systematic increase in diameter and length were positioned in a cylindrical-shaped bone section and analyzed using the finite element method. Four loading types were tested on the different dental implant designs: tension of 50 N, compression of 100 N, bending of 20 N, and torque of 2 Nm, to derive design curves. Better stress distribution in both spongy and cortical bone was noted with an increase in dental implant diameter and length. With the increase in dental implant side area, a stress reduction in the surrounding bone was observed, and threaded dental implants showed better behavior than plain ones. Increasing the ratio between the dental implant side area and its cross-sectional area reduces the stresses transferred to cortical and spongy bone. The use of implants with a higher ratio of side area to cross-sectional area, especially with weak jaw bone, is recommended.

  10. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features, which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated by experimental design, we use the trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results over various real-life data sets have demonstrated the superiority of the proposed algorithms.

  11. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs, which ensures a high mean performance of the progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept, as a measure of the gain that can be obtained from a specific cross, accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion rather than on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, such as selection based on mean genomic estimated breeding values and on optimal haploid values. In all cases, higher genetic gain was obtained than with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviations compared to selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
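
    The usefulness criterion itself is compact; below is a minimal sketch under the common formulation U = mu + i * sigma, with i the standardized selection intensity (the means, standard deviations, and selected fraction are hypothetical, not the paper's). It shows why a high-variance cross wins under strong selection even when mean breeding values are equal.

```python
from scipy.stats import norm

def usefulness(mean_gebv: float, progeny_sd: float, selected_fraction: float) -> float:
    """U = mu + i * sigma, with i the selection intensity implied by the
    selected fraction under a normal progeny distribution."""
    x = norm.ppf(1.0 - selected_fraction)   # truncation point
    i = norm.pdf(x) / selected_fraction     # selection intensity
    return mean_gebv + i * progeny_sd

# Two hypothetical crosses: identical mean GEBV, different predicted progeny SD
print(usefulness(10.0, 1.0, 0.01))   # ~12.7
print(usefulness(10.0, 2.0, 0.01))   # ~15.3; the high-variance cross wins at 1%
```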

  12. Optimization of the criterion for selecting photodetectors for photon counting and its application

    International Nuclear Information System (INIS)

    Gulakov, I.R.; Pertsev, A.N.; Kholondyrev, S.V.

    1985-01-01

    An optimized version of a photodetector selection criterion for photon counting is suggested. Use of the criterion for selecting the boundary light wavelength in the transition from one photodetector to another, and for choosing dissectors (coordinate-sensitive detectors), is considered. The suggested criterion is conveniently simple for routine experimental application and turns out to be most effective when the signal pulse counting rate exceeds the dark pulse counting rate. The criterion is easily generalized to the region of high counting rates, where, owing to the dead-time effect, the Poisson nature of the flow of recorded events is distorted.

  13. Exclusion as a Criterion for Selecting Socially Vulnerable Population Groups

    Directory of Open Access Journals (Sweden)

    Aleksandra Anatol’evna Shabunova

    2016-05-01

    The article considers theoretical aspects of the research project "The Mechanisms for Overcoming Mental Barriers of Inclusion of Socially Vulnerable Categories of the Population for the Purpose of Intensifying Modernization in the Regional Community" (RSF grant No. 16-18-00078). The authors analyze the essence of the category of "socially vulnerable groups" from the legal, economic, and sociological perspectives. The paper shows that the economic approach, which uses the criterion of "the level of income and accumulated assets" when defining vulnerable population groups, prevails in public administration practice. The legal field of the category based on the economic approach is defined by the concept of "the poor and socially unprotected categories of citizens". Through an analysis of the theoretical and methodological aspects of this issue, the authors show that these criteria are a necessary but not sufficient condition for classifying the population as socially vulnerable. The foreign literature associates the phenomenon of vulnerability with the concept of risks, with the possibility of households responding to them, and with the likelihood of losing well-being (poverty theory, research areas related to the means of subsistence, etc.). The asset-based approaches relate vulnerability to the poverty that arises due to lack of access to tangible and intangible assets. Sociological theories, represented by the concept of social exclusion, pay much attention to the breakdown of social ties as a source of vulnerability. The essence of social exclusion is the inability of people to participate in important aspects of social life (in politics, labor markets, education and healthcare, cultural life, etc.) though they have every right to do so. The difference between the concepts of exclusion and poverty is manifested in the shift of emphasis from income inequality to limited access to rights. Social exclusion is

  14. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  15. Sperm head's birefringence: a new criterion for sperm selection.

    Science.gov (United States)

    Gianaroli, Luca; Magli, M Cristina; Collodel, Giulia; Moretti, Elena; Ferraretti, Anna P; Baccetti, Baccio

    2008-07-01

    To investigate the characteristics of birefringence in human sperm heads and apply polarization microscopy for sperm selection at intracytoplasmic sperm injection (ICSI). Prospective randomized study. Reproductive Medicine Unit, Società Italiana Studi Medicina della Riproduzione, Bologna, Italy. A total of 112 male patients had birefringent sperm selected for ICSI (study group). The clinical outcome was compared with that obtained in 119 couples who underwent a conventional ICSI cycle (control group). The proportion of birefringent spermatozoa was evaluated before and after treatment in relation to the sperm sample quality. Embryo development and clinical outcome in the study group were compared with those in the controls. Proportion of birefringent sperm heads, rates of fertilization, cleavage, pregnancy, implantation, and ongoing implantation. The proportion of birefringent spermatozoa was significantly higher in normospermic samples when compared with oligoasthenoteratospermic samples with no progressive motility and testicular sperm extraction samples. Although fertilization and cleavage rates did not differ between the study and control groups, in the most severe male factor condition (oligoasthenoteratospermic with no progressive motility and testicular sperm extraction), the rates of clinical pregnancy, ongoing pregnancy, and implantation were significantly higher in the study group versus the controls. The analysis of birefringence in the sperm head could represent both a diagnostic tool and a novel method for sperm selection.

  16. 76 FR 33740 - Final Priorities and Selection Criterion; National Institute on Disability and Rehabilitation...

    Science.gov (United States)

    2011-06-09

    ... selection criterion. Public Comment: In response to our invitation in the NPP, nine parties submitted... treating SCI and innovative approaches to assessing outcomes. This commenter stated that it would be more reasonable to require projects to test either innovative approaches to treating SCI or innovative approaches...

  17. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  18. New bandwidth selection criterion for Kernel PCA: approach to dimensionality reduction and classification problems.

    Science.gov (United States)

    Thomas, Minta; De Brabanter, Kris; De Moor, Bart

    2014-05-10

    DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to these class predictors is high-dimensional data with many variables and few observations. Dimensionality reduction of this feature set significantly speeds up the prediction task. Feature selection and feature transformation methods are well-known preprocessing steps in the field of bioinformatics, and several prediction tools based on these techniques are available. Studies show that a well-tuned kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model combining a well-tuned KPCA with a least squares support vector machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set area under the ROC curve (AUC) and computational time) with other well-known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, prediction analysis of microarrays (PAM), and least absolute shrinkage and selection operator (Lasso). Finally, we assess the performance of the proposed strategy against an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider its application in medical diagnostics. Both feature selection and feature

  19. Wavelength selection in injection-driven Hele-Shaw flows: A maximum amplitude criterion

    Science.gov (United States)

    Dias, Eduardo; Miranda, Jose

    2013-11-01

    As in most interfacial flow problems, the standard theoretical procedure to establish wavelength selection in the viscous fingering instability is to maximize the linear growth rate. However, there are important discrepancies between previous theoretical predictions and existing experimental data. In this work we perform a linear stability analysis of the radial Hele-Shaw flow system that takes into account the combined action of viscous normal stresses and wetting effects. Most importantly, we introduce an alternative selection criterion for which the selected wavelength is determined by the maximum of the interfacial perturbation amplitude. The effectiveness of such a criterion is substantiated by the significantly improved agreement between theory and experiments. We thank CNPq (Brazilian Sponsor) for financial support.

  20. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    Science.gov (United States)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.

  1. SNP sets selection under mutual information criterion, application to F7/FVII dataset.

    Science.gov (United States)

    Brunel, H; Perera, A; Buil, A; Sabater-Lleal, M; Souto, J C; Fontcuberta, J; Vallverdu, M; Soria, J M; Caminal, P

    2008-01-01

    One of the main goals of human genetics is to find genetic markers related to complex diseases. In the blood coagulation process, it is known that genetic variability in the F7 gene is most responsible for the observed variation in FVII levels in blood. In this work, we propose a method for selecting sets of single nucleotide polymorphisms (SNPs) significantly correlated with a phenotype (FVII levels). The method employs a feature selection algorithm (a variant of Sequential Forward Selection, SFS) based on a criterion of statistical significance of a mutual information functional. The algorithm is applied to a sample of independent individuals from the GAIT project. The main SNPs found by the algorithm correspond with previously published results obtained using family-based techniques.

  2. Failure criterion effect on solid production prediction and selection of completion solution

    Directory of Open Access Journals (Sweden)

    Dariush Javani

    2017-12-01

    Production of fines together with reservoir fluid is called solid production. It varies from a few grams or less per ton of reservoir fluid, posing only minor problems, to catastrophic amounts, possibly leading to erosion and complete filling of the borehole. This paper assesses the solid production potential of a carbonate gas reservoir located in the south of Iran. Petrophysical logs obtained from the vertical well were employed to construct a mechanical earth model. Then, two failure criteria, i.e. Mohr–Coulomb and Mogi–Coulomb, were used to investigate the solid production potential of the well in the initial and depleted conditions of the reservoir. Using these two criteria, we estimated the critical collapse pressure and compared it to the reservoir pressure. Solid production occurs if the collapse pressure is greater than the pore pressure. Results indicate that the two failure criteria give different estimates of the solid production potential of the studied reservoir. The Mohr–Coulomb failure criterion predicted solid production in both initial and depleted conditions, whereas the Mogi–Coulomb criterion predicted no solid production in the initial condition of the reservoir. Based on the Mogi–Coulomb criterion, the well may not require completion solutions such as a perforated liner until at least 60% of the reservoir pressure is depleted, which decreases operation cost and time.

  3. Robust and stable gene selection via Maximum-Minimum Correntropy Criterion.

    Science.gov (United States)

    Mohammadi, Majid; Sharifi Noghabi, Hossein; Abed Hodtani, Ghosheh; Rajabi Mashhadi, Habib

    2016-03-01

    One of the central challenges in cancer research is identifying significant genes among thousands of others on a microarray. Since preventing the outbreak and progression of cancer is the ultimate goal in bioinformatics and computational biology, detection of the genes that are most involved is vital. In this article, we propose a Maximum-Minimum Correntropy Criterion (MMCC) approach for the selection of informative genes from microarray data sets that is stable, fast, robust against diverse noise and outliers, and competitively accurate in comparison with other algorithms. Moreover, the optimal number of features for each data set is determined via an evolutionary optimization process. Through broad experimental evaluation, MMCC is shown to be significantly better than other well-known gene selection algorithms on 25 commonly used microarray data sets. Surprisingly, high classification accuracy with a support vector machine (SVM) is achieved with fewer than 10 genes selected by MMCC in all of the cases. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. DESTRUCTION CRITERION IN MODEL OF NON-LINEAR ELASTIC PLASTIC MEDIUM

    Directory of Open Access Journals (Sweden)

    O. L. Shved

    2014-01-01

    The paper considers a destruction criterion in a specific phenomenological model of an elastic-plastic medium which differs significantly from the known criteria. Under a vector interpretation of rank-2 symmetric tensors, the yield surface in the Cauchy stress space is formed by closed piecewise-concave surfaces of its deviator sections, with due account of experimental data. The section surface is determined by a normal vector, which is selected from two eigenvectors of the criterial "deviator" operator. Such a selection is not always possible as anisotropy grows. Destruction is expected to start only when the process point in the stress space lies on the current deviator section of the yield surface. This occurs when a critical point appears in the section and an eigenvalue of the operator becomes N-fold at the point that determines the eigenvector corresponding to the normal vector. A unique and reasonable selection of the normal vector becomes impossible at the critical point, and the yield criterion loses its meaning there. When the initiation of destruction is determined, a special case is possible owing to the proposed conic form of the yield surface: the deviator section degenerates into a point at the apex of the yield surface. The criterion formulation at the apex rests on the fact that there is no physically correct solution when the state equation is used with respect to elastic distortion measures at a fixed elastic rotation tensor. Such use of the equation is always possible for the remaining points of the yield surface and is considered an obligatory condition for determining the deviator section. For an isotropic material, a critical point is generally absent from any deviator section of the yield surface. A limiting value of the mean stress has been calculated at uniform tension.

  5. Characteristics of Criterion-Referenced Instruments: Implications for Materials Selection for the Learning Disabled.

    Science.gov (United States)

    Blasi, Joyce F.

    Discussed are characteristics of criterion-referenced reading tests for use with learning disabled (LD) children; analyzed are the Basic Educational Skills Inventory (BESI), the Prescriptive Reading Inventory (PRI), and the Cooper-McGuire Diagnostic Work-Analysis Test (CooperMcGuire). Criterion-referenced tests are defined; and problems in…

  6. Fulfillment of the kinetic Bohm criterion in a quasineutral particle-in-cell model

    International Nuclear Information System (INIS)

    Ahedo, Eduardo; Santos, Robert; Parra, Felix I.

    2010-01-01

    Quasineutral particle-in-cell models of ions must fulfill the kinetic Bohm criterion, in its inequality form, at the domain boundary in order to match correctly with solutions of the Debye sheaths tied to the walls. The simple, fluid form of the Bohm criterion is shown to be a bad approximation of the exact, kinetic form when the ion velocity distribution function has a significant dispersion and involves different charge numbers. The fulfillment of the Bohm criterion is measured by a weighting algorithm at the boundary, but linear weighting algorithms have difficulty reproducing the nonlinear behavior around the sheath edge. A surface weighting algorithm with an extended temporal weighting is proposed and shown to behave better than the standard volumetric weighting. Still, this must be supplemented by an algorithm forcing the kinetic Bohm criterion, which postulates a small potential fall in a supplementary, thin transition layer. The electron-wall interaction is shown to be of little relevance to the fulfillment of the Bohm criterion.

  7. The Anti-Resonance Criterion in Selecting Pick Systems for Fully Operational Cutting Machinery Used in Mining

    Science.gov (United States)

    Cheluszka, Piotr

    2017-12-01

    This article discusses the selection of a pick system for cutting mining machinery, with a view to reducing vibrations in the cutting system, particularly in the load-carrying structure, during operation. Numerical analysis was performed on a telescopic roadheader boom equipped with transverse heads. The frequency range of the boom's free vibrations, for a given structure and dynamic properties, was determined based on a dynamic model. The main components of the excitation of boom vibrations generated by the rock-cutting process were identified; these were closely associated with the stereometry of the cutting heads. The impact of the pick system (the number of picks and their arrangement along the side of the cutting head) on the intensity of the external boom load components, especially in resonance zones, was determined. In terms of the anti-resonance criterion, an advantageous arrangement of cutting head picks was determined as a result of the analysis undertaken. The correctness of the pick system selection was ascertained by a computer simulation of the dynamic loads and vibrations of the roadheader telescopic boom.

  8. Ginsburg criterion for an equilibrium superradiant model in the dynamic approach

    International Nuclear Information System (INIS)

    Trache, M.

    1991-10-01

    Some critical properties of an equilibrium superradiant model are discussed, taking into account the quantum fluctuations of the field variables. The critical region is calculated using the Ginsburg criterion, underlining the role of the atomic concentration as a control parameter of the phase transition. (author). 16 refs, 1 fig

  9. 34 CFR 389.30 - What additional selection criterion is used under this program?

    Science.gov (United States)

    2010-07-01

    ... improve the competence of professional and other personnel in the rehabilitation agencies serving... criterion is used under this program? In addition to the criteria in 34 CFR 385.31(c), the Secretary uses...-Federal rehabilitation service program. (1) The Secretary reviews each application for information that...

  10. Selection criterion of stable dendritic growth at arbitrary Péclet numbers with convection.

    Science.gov (United States)

    Alexandrov, Dmitri V; Galenko, Peter K

    2013-06-01

    Free dendritic growth under forced fluid flow is analyzed for the solidification of a nonisothermal binary system. Using the approach to dendrite growth developed by Bouissou and Pelcé [Phys. Rev. A 40, 6673 (1989)], the analysis is presented for a parabolic dendrite interface with small anisotropy of surface energy, growing at arbitrary Péclet numbers. The stable growth mode is obtained from the solvability condition, giving the stability criterion for the dendrite tip velocity V and dendrite tip radius ρ as a function of the growth Péclet number, flow Péclet number, and Reynolds number. In limiting cases, the obtained stability criterion reduces to known criteria for small and high growth Péclet numbers of the solidifying system with and without convective fluid flow.

  11. MATHEMATICAL MODEL OF INTEGRAL CRITERION OF COMPETITION POTENTIAL OF MARITIME-RIVER HIGHER EDUCATIONAL ESTABLISHMENT.

    OpenAIRE

    Y.G. Yakusevich; L.D. Gerganov

    2012-01-01

    The competitive potential (CP) of a maritime-river higher educational establishment in the conditions of the modern market of educational services is analyzed. The model of strategic resources (SR) is formalized. A mathematical model of an integral criterion of the competitive potential of the higher educational establishment is built on the basis of Guermeyer's method. It is proved that the discreteness of competitive edges is a reason for the formation of fuzzy resources and requires the cons...

  12. An improved procedure for gene selection from microarray experiments using false discovery rate criterion

    Directory of Open Access Journals (Sweden)

    Yang Mark CK

    2006-01-01

    Background: A large number of genes usually show differential expression in a microarray experiment with two types of tissues, and the p-values of a proper statistical test are often used to quantify the significance of these differences. The genes with small p-values are then picked as the genes responsible for the differences in tissue RNA expression. One key question is what the threshold should be for considering p-values small. There is always a trade-off between this threshold and the rate of false claims. The recent statistical literature shows that the false discovery rate (FDR) criterion is a powerful and reasonable criterion for picking genes with differential expression. Moreover, the power of detection can be increased by knowing the number of non-differentially expressed genes. While this number is unknown in practice, there are methods to estimate it from data. The purpose of this paper is to present a new method of estimating this number and to use it in constructing the FDR procedure. Results: A combination of test functions is used to estimate the number of differentially expressed genes. A simulation study shows that the proposed method has higher power to detect these genes than other existing methods, while still keeping the FDR under control. The improvement can be substantial if the proportion of truly differentially expressed genes is large. The procedure has also been tested with good results on a real dataset. Conclusion: For a given expected FDR, the method proposed in this paper has better power to pick genes that show differential expression than two other well-known methods.
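
    The FDR control step that the paper builds on is the Benjamini-Hochberg procedure; the sketch below implements the adaptive variant in which an estimate m0 of the number of non-differential genes (however obtained) replaces the total gene count and thereby raises the power, as the abstract describes. The p-values and the m0 value are made up.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05, m0=None):
    """Step-up FDR procedure; pass an estimate m0 of the number of
    non-differential genes to obtain the more powerful adaptive variant."""
    p = np.asarray(pvals, dtype=float)
    m = p.size if m0 is None else m0
    order = np.argsort(p)
    thresholds = q * np.arange(1, p.size + 1) / m
    passing = np.flatnonzero(p[order] <= thresholds)
    k = passing.max() + 1 if passing.size else 0
    return np.sort(order[:k])          # indices declared differentially expressed

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.300, 0.900]
print(benjamini_hochberg(pvals, q=0.05))          # standard BH: 2 discoveries
print(benjamini_hochberg(pvals, q=0.05, m0=5))    # adaptive: 6 discoveries
```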

  13. Continuous-Time Portfolio Selection and Option Pricing under Risk-Minimization Criterion in an Incomplete Market

    Directory of Open Access Journals (Sweden)

    Xinfeng Ruan

    2013-01-01

    We study option pricing under the risk-minimization criterion in an incomplete market where the dynamics of the risky underlying asset are governed by a jump diffusion equation. We obtain the Radon-Nikodym derivative of the minimal martingale measure and a partial integro-differential equation (PIDE) for the European call option. In a special case, we get the exact solution for the European call option by Fourier transform methods. Finally, we employ the pricing kernel to calculate the optimal portfolio selection by martingale methods.

  14. MATHEMATICAL MODEL OF INTEGRAL CRITERION OF COMPETITION POTENTIAL OF MARITIME-RIVER HIGHER EDUCATIONAL ESTABLISHMENT.

    Directory of Open Access Journals (Sweden)

    Y.G. Yakusevich

    2012-07-01

    The competitive potential (CP) of a maritime-river higher educational establishment in the conditions of the modern market of educational services is analyzed. The model of strategic resources (SR) is formalized. A mathematical model of an integral criterion of the competitive potential of the higher educational establishment is built on the basis of Guermeyer's method. It is proved that the discreteness of competitive edges is a reason for the formation of fuzzy resources and requires the construction of membership functions for the competitive potential of the higher educational establishment.

  15. The effect of using a robust optimality criterion in model based adaptive optimization.

    Science.gov (United States)

    Strömberg, Eric A; Hooker, Andrew C

    2017-08-01

    Optimizing designs using robust (global) optimality criteria has been shown to be a more flexible approach than using local optimality criteria. Additionally, model based adaptive optimal design (MBAOD) may be less sensitive to misspecification in the prior information available at the design stage. In this work, we investigate the influence of using a local (lnD) or a robust (ELD) optimality criterion for the MBAOD of a simulated dose optimization study, for rich and sparse sampling schedules. A stopping criterion for accurate effect prediction is constructed to determine the endpoint of the MBAOD by minimizing the expected uncertainty in the effect response of the typical individual. 50 iterations of the MBAODs were run using the MBAOD R package, with the concentration from a one-compartment first-order absorption pharmacokinetic model driving the population effect response in a sigmoidal EMAX pharmacodynamic model. The initial cohort consisted of eight individuals in two groups, and each additional cohort added two individuals receiving a dose optimized as a discrete covariate. The MBAOD designs using lnD and ELD optimality with misspecified initial model parameters were compared by evaluating the efficiency relative to an lnD-optimal design based on the true parameter values. For the explored example model, the MBAOD using ELD-optimal designs converged more quickly to the theoretically optimal lnD-optimal design based on the true parameters, for both sampling schedules. Thus, using a robust optimality criterion in MBAODs could reduce the number of adaptations required and improve the practicality of adaptive trials using optimal design.

  16. Experiments and modeling of ballistic penetration using an energy failure criterion

    Directory of Open Access Journals (Sweden)

    Dolinski M.

    2015-01-01

    One of the most intricate problems in terminal ballistics is the physics underlying penetration and perforation. Several penetration modes are well identified, such as petalling, plugging, spall failure, and fragmentation (Sedgwick, 1968). In most cases, the final target failure combines those modes. Some of the failure modes can be due to brittle material behavior, but penetration of ductile targets by blunt projectiles, involving plugging in particular, is caused by excessive localized plasticity, with emphasis on adiabatic shear banding (ASB). Among the theories regarding the onset of ASB, new evidence was recently brought by Rittel et al. (2006), according to whom shear bands initiate as a result of dynamic recrystallization (DRX), a local softening mechanism driven by the stored energy of cold work. As such, ASB formation results from microstructural transformations, rather than from thermal softening. In our previous work (Dolinski et al., 2010), a failure criterion based on plastic strain energy density was presented and applied to model four different classical examples of dynamic failure involving ASB formation. According to this criterion, a material point starts to fail when the total plastic strain energy density reaches a critical value. Thereafter, the strength of the element decreases gradually to zero to mimic the actual material mechanical behavior. The goal of this paper is to present a new combined experimental-numerical study of ballistic penetration and perforation using the above-mentioned failure criterion. Careful experiments are carried out using a single combination of AISI 4340 FSP projectiles and 25 [mm] thick RHA steel plates, while the impact velocity, and hence the imparted damage, are systematically varied. We show that our failure model, which includes only one adjustable parameter in the present work, can faithfully reproduce each of the experiments without any further adjustment. Moreover, it is shown that the

  17. [Employees in high-reliability organizations: systematic selection of personnel as a final criterion].

    Science.gov (United States)

    Oubaid, V; Anheuser, P

    2014-05-01

    Employees represent an important safety factor in high-reliability organizations. The combination of clear organizational structures, a nonpunitive safety culture, and psychological personnel selection guarantee a high level of safety. The cockpit personnel selection process of a major German airline is presented in order to demonstrate a possible transferability into medicine and urology.

  18. Building a maintenance policy through a multi-criterion decision-making model

    Science.gov (United States)

    Faghihinia, Elahe; Mollaverdi, Naser

    2012-08-01

    A major competitive advantage of production and service systems is establishing a proper maintenance policy; therefore, maintenance managers should make maintenance decisions that best fit their systems. Multi-criterion decision-making methods can take into account a number of aspects associated with the competitiveness factors of a system. This paper presents a multi-criterion decision-aided maintenance model with the three criteria that have the most influence on decision making: reliability, maintenance cost, and maintenance downtime. The Bayesian approach has been applied to confront the shortage of maintenance failure data. The model seeks the best compromise between these three criteria and establishes replacement intervals using the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE II), integrating the Bayesian approach with regard to the preferences of the decision maker. Finally, the model is illustrated with a numerical application, and PROMETHEE GAIA (the visual interactive module) is used for visualization and an illustrative sensitivity analysis. PROMETHEE II and PROMETHEE GAIA were run with the Decision Lab software. A sensitivity analysis was performed to verify the robustness of certain parameters of the model.

  19. The Success of Linear Bootstrapping Models: Decision Domain-, Expertise-, and Criterion-Specific Meta-Analysis

    Science.gov (United States)

    Kaufmann, Esther; Wittmann, Werner W.

    2016-01-01

    The success of bootstrapping or replacing a human judge with a model (e.g., an equation) has been demonstrated in Paul Meehl’s (1954) seminal work and bolstered by the results of several meta-analyses. To date, however, analyses considering different types of meta-analyses as well as the potential dependence of bootstrapping success on the decision domain, the level of expertise of the human judge, and the criterion for what constitutes an accurate decision have been missing from the literature. In this study, we addressed these research gaps by conducting a meta-analysis of lens model studies. We compared the results of a traditional (bare-bones) meta-analysis with findings of a meta-analysis of the success of bootstrap models corrected for various methodological artifacts. In line with previous studies, we found that bootstrapping was more successful than human judgment. Furthermore, bootstrapping was more successful in studies with an objective decision criterion than in studies with subjective or test score criteria. We did not find clear evidence that the success of bootstrapping depended on the decision domain (e.g., education or medicine) or on the judge’s level of expertise (novice or expert). Correction of methodological artifacts increased the estimated success of bootstrapping, suggesting that previous analyses without artifact correction (i.e., traditional meta-analyses) may have underestimated the value of bootstrapping models. PMID:27327085

  20. Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2017-11-01

    As one of the most critical issues in target tracking, the α-jerk model is an effective maneuvering-target tracking model. Non-Gaussian noise always exists in the tracking process and usually leads to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. The weighted least squares solution is presented and the standard Kalman filter is deduced first. A novel Kalman filter with weighted least squares based on the maximum correntropy criterion is then deduced. The robustness of the maximum correntropy criterion is analyzed with the influence function and compared with the Huber-based filter; moreover, the kernel size of the Gaussian kernel plays an important role in the filter algorithm. A new adaptive kernel method is proposed in this paper to adjust the parameter in real time. Finally, simulation results indicate the validity and efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model.
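
    To give a feel for the mechanism, here is a minimal sketch (not the paper's algorithm) of a correntropy-weighted measurement update for a scalar measurement: a Gaussian kernel down-weights outlying residuals by inflating the effective measurement noise, and the kernel size sigma is exactly the parameter the paper proposes to adapt on-line. All numbers are illustrative.

```python
import numpy as np

def mcc_update(x_prior, P, H, R, z, sigma=2.0, iters=10):
    """Fixed-point maximum-correntropy measurement update (scalar measurement).
    x_prior: prior state (n,), P: prior covariance (n, n),
    H: measurement row (1, n), R: noise variance (scalar), z: scalar."""
    x = x_prior.copy()
    for _ in range(iters):
        r = float(z - H @ x)                         # current residual
        w = np.exp(-r**2 / (2.0 * sigma**2 * R))     # correntropy weight in (0, 1]
        S = float(H @ P @ H.T) + R / max(w, 1e-9)    # inflated innovation variance
        K = (P @ H.T / S).ravel()                    # gain shrinks for outliers
        x = x_prior + K * float(z - H @ x_prior)
    return x

x = mcc_update(np.zeros(2), np.eye(2), np.array([[1.0, 0.0]]), 1.0, z=50.0)
print(x)   # the impulsive measurement z = 50 is heavily down-weighted
```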

  1. Model selection and comparison for independent sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state-of-the-art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.
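
    A rough illustration of the baseline being improved upon (the asymptotic MAP/BIC rule, not the proposed lp-BIC): greedily add the strongest periodogram peaks and keep the order whose BIC is smallest. The data are synthetic, with two on-grid sinusoids for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128
t = np.arange(n)
f1, f2 = 14 / n, 29 / n                         # on-grid frequencies (assumed)
x = (np.cos(2 * np.pi * f1 * t) + 0.7 * np.cos(2 * np.pi * f2 * t)
     + rng.normal(0.0, 0.5, n))                 # two sinusoids buried in noise

def rss_with_sinusoids(x, t, freqs):
    """Residual sum of squares after least-squares fitting cos/sin pairs."""
    cols = [np.ones(t.size)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    resid = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    return np.sum(resid**2)

freqs = np.fft.rfftfreq(n)[1:]
power = np.abs(np.fft.rfft(x))[1:]
ranked = freqs[np.argsort(power)[::-1]]         # strongest candidates first
for k in range(5):                               # 3 parameters per sinusoid
    rss = rss_with_sinusoids(x, t, ranked[:k])
    bic = n * np.log(rss / n) + (1 + 3 * k) * np.log(n)
    print(k, round(bic, 1))                      # minimum expected at k = 2
```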

  2. Long-term selection using a single trait criterion, non-destructive deformation, in White Leghorns: Effect over time on genetic parameters for traits related to egg production.

    Science.gov (United States)

    Gervais, Olivier; Nirasawa, Keijiro; Vincenot, Christian E; Nagamine, Yoshitaka; Moriya, Kazuyuki

    2017-02-01

    Although non-destructive deformation is relevant for assessing eggshell strength, few long-term selection experiments are documented which use non-destructive deformation as a selection criterion. This study used restricted maximum likelihood-based methods with a four-trait animal model to analyze the effect of non-destructive deformation on egg production, egg weight and sexual maturity in a two-way selection experiment involving 17 generations of White Leghorns. In the strong shell line, corresponding to the line selected for low non-destructive deformation values, the heritability estimates were 0.496 for non-destructive deformation, 0.253 for egg production, 0.660 for egg weight and 0.446 for sexual maturity. In the weak shell line, corresponding to the line selected for high non-destructive deformation values, the heritabilities were 0.372, 0.162, 0.703 and 0.404, respectively. An asymmetric response to selection was observed for non-destructive deformation, egg production and sexual maturity, whereas egg weight decreased for both lines. Using non-destructive deformation to select for stronger eggshell had a small negative effect on egg production and sexual maturity, suggesting the need for breeding programs to balance selection between eggshell traits and egg production traits. However, the analysis of the genetic correlation between non-destructive deformation and egg weight revealed that large eggs are not associated with poor eggshell quality. © 2016 Japanese Society of Animal Science.

  3. 34 CFR 388.20 - What additional selection criterion is used under this program?

    Science.gov (United States)

    2010-07-01

    .... (1) The Secretary reviews each application for information that shows that the need for the in... and can be expected to improve the competence of all State vocational rehabilitation personnel in... REHABILITATION UNIT IN-SERVICE TRAINING How Does the Secretary Make an Award? § 388.20 What additional selection...

  4. Key Determinant Derivations for Information Technology Disaster Recovery Site Selection by the Multi-Criterion Decision Making Method

    Directory of Open Access Journals (Sweden)

    Chia-Lee Yang

    2015-05-01

    Disaster recovery sites are an important mechanism for continuous IT system operations. Such mechanisms can sustain IT availability and reduce business losses during natural or human-made disasters. Concerning cost and risk aspects, IT disaster-recovery site selection problems are multi-criterion decision making (MCDM) problems in nature. For such problems, the decision aspects include the availability of the service, recovery time requirements, service performance, and more. The importance and complexity of IT disaster recovery sites increase with advances in IT and with the categories of possible disasters. The modern IT disaster recovery site selection process therefore requires further investigation. However, to the best of the authors' knowledge, very few researchers have studied related issues in past years. Thus, this paper aims to derive the aspects and criteria for evaluating and selecting a modern IT disaster recovery site. A hybrid MCDM framework consisting of the Decision Making Trial and Evaluation Laboratory (DEMATEL) and the Analytic Network Process (ANP) is proposed to construct the complex influence relations between aspects as well as criteria and, further, to derive the weight associated with each aspect and criterion. The criteria with higher weights can be used for evaluating and selecting the most suitable IT disaster recovery sites. In the future, the proposed analytic framework can be used for evaluating and selecting a disaster recovery site for data centers by public institutes or private firms.

  5. Using Akaike's information theoretic criterion in mixed-effects modeling of pharmacokinetic data: a simulation study [version 3; referees: 2 approved, 1 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Erik Olofsen

    2015-07-01

    Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect, thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performance of the models, quantified using simulated validation sets, was compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, with respect to both lowest AICc and best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
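
    For reference, the two criteria compared in the study take one line each; the sketch below applies them to hypothetical fits (the log-likelihoods and parameter counts are made up, with n = 55 mirroring the 11 x 5 design).

```python
def aic(log_lik: float, k: int) -> float:
    """Akaike's criterion: -2*logL plus 2 per estimated parameter."""
    return -2.0 * log_lik + 2.0 * k

def aicc(log_lik: float, k: int, n: int) -> float:
    """AIC with the small-sample correction; converges to AIC as n grows."""
    return aic(log_lik, k) + 2.0 * k * (k + 1) / (n - k - 1)

n = 55  # 11 concentrations x 5 individuals, as in the simulated design
fits = {"2-exponential": (-61.2, 5), "3-exponential": (-59.8, 7)}  # hypothetical
for name, (ll, k) in fits.items():
    print(f"{name}: AIC = {aic(ll, k):.1f}, AICc = {aicc(ll, k, n):.1f}")
```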

  6. Heuristics and Criterion Setting during Selective Encoding in Visual Decision-Making: Evidence from Eye Movements.

    Science.gov (United States)

    Schotter, Elizabeth R; Gerety, Cainen; Rayner, Keith

    2012-01-01

    When making a decision, people spend longer looking at the option they ultimately choose compared with the other options (termed the gaze bias effect), even during their first encounter with the options (Glaholt & Reingold, 2009a, 2009b; Schotter, Berry, McKenzie & Rayner, 2010). Schotter et al. (2010) suggested that this is because people selectively encode decision-relevant information about the options on-line, during the first encounter with them. To extend their findings and test this claim, we recorded subjects' eye movements as they made judgments about pairs of images (i.e., which one was taken more recently or which one was taken longer ago). We manipulated whether both images were presented with the same color content (e.g., both in color or both in black-and-white) or differed in color content, and the extent to which color content was a reliable cue to the relative recentness of the images. We found that the magnitude of the gaze bias effect decreased when the color content cue was not reliable during the first encounter with the images, but that the gaze bias effect was not modulated in the remaining time on the trial. These data suggest that people selectively encode decision-relevant information on-line.

  7. The latent structure of personality functioning: Investigating criterion a from the alternative model for personality disorders in DSM-5.

    Science.gov (United States)

    Zimmermann, Johannes; Böhnke, Jan R; Eschstruth, Rhea; Mathews, Alessa; Wenzel, Kristin; Leising, Daniel

    2015-08-01

    The alternative model for the classification of personality disorders (PD) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) Section III comprises 2 major components: impairments in personality functioning (Criterion A) and maladaptive personality traits (Criterion B). In this study, we investigated the latent structure of Criterion A (a) within subdomains, (b) across subdomains, and (c) in conjunction with the Criterion B trait facets. Data were gathered as part of an online study that collected other-ratings by 515 laypersons and 145 therapists. Laypersons were asked to assess 1 of their personal acquaintances, whereas therapists were asked to assess 1 of their patients, using 135 items that captured features of Criteria A and B. We were able to show that (a) the structure within the Criterion A subdomains can be appropriately modeled using generalized graded unfolding models, with results suggesting that the items are indeed related to common underlying constructs but often deviate from their theoretically expected severity level; (b) the structure across subdomains is broadly in line with a model comprising 2 strongly correlated factors of self- and interpersonal functioning, with some notable deviations from the theoretical model; and (c) the joint structure of the Criterion A subdomains and the Criterion B facets broadly resembles the expected model of 2 plus 5 factors, albeit the loading pattern suggests that the distinction between Criteria A and B is somewhat blurry. Our findings provide support for several major assumptions of the alternative DSM-5 model for PD but also highlight aspects of the model that need to be further refined. (c) 2015 APA, all rights reserved.

  8. Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2017-11-01

    Full Text Available Dynamic recrystallization (DRX processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistencies that models for DRX may exhibit. For a consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transition in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
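
    A common way to operationalize the second derivative criterion in practice is to fit the strain-hardening rate as a cubic polynomial in stress and locate the inflection point analytically. The sketch below uses invented θ(σ) data and is only a schematic of the Poliak-Jonas procedure, not the consistent model proposed in this record.

```python
import numpy as np

# Hypothetical (stress, strain-hardening rate) data up to the peak stress;
# in practice theta = d(sigma)/d(eps) is computed from a smoothed flow curve.
sigma = np.linspace(50, 150, 40)
theta = 2500 - 30 * sigma + 0.25 * sigma**2 - 0.0008 * sigma**3  # invented curve

# Poliak-Jonas practice: fit theta(sigma) with a cubic and locate the
# inflection point, where the second derivative vanishes.
a3, a2, a1, a0 = np.polyfit(sigma, theta, 3)
sigma_c = -a2 / (3 * a3)   # root of 6*a3*sigma + 2*a2 = 0
print("critical stress for DRX onset:", round(sigma_c, 1))
```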

  9. Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion.

    Science.gov (United States)

    Imran, Muhammad; Kühbach, Markus; Roters, Franz; Bambach, Markus

    2017-11-02

    Dynamic recrystallization (DRX) processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC) by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistencies that models for DRX may exhibit. For a consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transition in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.

  10. An integrative analysis of reprogramming in human isogenic system identified a clone selection criterion.

    Science.gov (United States)

    Shutova, Maria V; Surdina, Anastasia V; Ischenko, Dmitry S; Naumov, Vladimir A; Bogomazova, Alexandra N; Vassina, Ekaterina M; Alekseev, Dmitry G; Lagarkova, Maria A; Kiselev, Sergey L

    2016-01-01

    The pluripotency of newly developed human induced pluripotent stem cells (iPSCs) is usually characterized by physiological parameters, i.e., by their ability to maintain the undifferentiated state and to differentiate into derivatives of the 3 germ layers. Nevertheless, a molecular comparison of physiologically normal iPSCs to the "gold standard" of pluripotency, embryonic stem cells (ESCs), often reveals a set of genes with different expression and/or methylation patterns in iPSCs and ESCs. To evaluate the contribution of the reprogramming process, the parental cell type, and chance to the signature of human iPSCs, we developed a completely isogenic reprogramming system. We performed a genome-wide comparison of the transcriptome and the methylome of human isogenic ESCs, 3 types of ESC-derived somatic cells (fibroblasts, retinal pigment epithelium and neural cells), and 3 pairs of iPSC lines derived from these somatic cells. Our analysis revealed a large stochastic component in the iPSC signature, which retains no specific traces of the parental cell type or the reprogramming process. We showed that 5 iPSC clones are sufficient to find, with 95% confidence, at least one iPSC clone indistinguishable from its hypothetical isogenic ESC line. Additionally, on the basis of a small set of genes that are characteristic of all iPSC lines and isogenic ESCs, we formulated an approach for selecting "the best iPSC line" and confirmed it on an independent dataset.

  11. Bacterial selection for biological control of plant disease: criterion determination and validation

    Directory of Open Access Journals (Sweden)

    Monalize Salete Mota

    Full Text Available This study aimed to evaluate the biocontrol potential of bacteria isolated from different plant species and soils. The production of compounds related to phytopathogen biocontrol and/or the promotion of plant growth was evaluated in the bacterial isolates by measuring the production of antimicrobial compounds (ammonia and antibiosis), hydrolytic enzymes (amylases, lipases, proteases, and chitinases), and phosphate solubilization. Of the 1219 bacterial isolates, 92% produced one or more of the eight compounds evaluated, but only 1% of the isolates produced all of them. Proteolytic activity was most frequently observed among the bacterial isolates. Among the compounds which often determine the success of biocontrol, 43% of the isolates produced compounds which inhibit the mycelial growth of Monilinia fructicola, but only 11% hydrolyzed chitin. Bacteria from different plant species (rhizosphere or phylloplane) exhibited differences in their ability to produce the compounds evaluated. Most bacterial isolates with biocontrol potential were isolated from rhizospheric soil. The most efficient bacteria (producing at least five compounds related to phytopathogen biocontrol and/or plant growth), 86 in total, were evaluated for their biocontrol potential by observing their ability to kill juvenile Mesocriconema xenoplax. Thus, we clearly observed that bacteria that produced more compounds related to phytopathogen biocontrol and/or plant growth had a higher efficacy for nematode biocontrol, which validated the selection strategy used.

  12. Criterion for the simultaneous selection of a working correlation structure and either generalized estimating equations or the quadratic inference function approach.

    Science.gov (United States)

    Westgate, Philip M

    2014-05-01

    Generalized estimating equations (GEE) are commonly used for the marginal analysis of correlated data, although the quadratic inference function (QIF) approach is an alternative that is increasing in popularity. This method optimally combines distinct sets of unbiased estimating equations that are based upon a working correlation structure, therefore asymptotically increasing or maintaining estimation efficiency relative to GEE. However, in finite samples, additional estimation variability arises when combining these sets of estimating equations, and therefore the QIF approach is not guaranteed to work as well as GEE. Furthermore, estimation efficiency can be improved for both analysis methods by accurate modeling of the correlation structure. Our goal is to improve parameter estimation, relative to existing methods, by simultaneously selecting a working correlation structure and choosing between GEE and two versions of the QIF approach. To do this, we propose the use of a criterion based upon the trace of the empirical covariance matrix (TECM). To make GEE and both QIF versions directly comparable for any given working correlation structure, the proposed TECM utilizes a penalty to account for the finite-sample variance inflation that can occur with either version of the QIF approach. Via a simulation study and in application to a longitudinal study, we show that penalizing the variance inflation that occurs with the QIF approach is necessary and that the proposed criterion works very well. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
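
    A simplified sketch of the underlying idea, choosing the working correlation structure that minimizes the trace of the empirical covariance matrix of the regression estimates, can be written with statsmodels' GEE implementation. The data below are simulated, only GEE (not the QIF variants) is fitted, and the paper's finite-sample penalty for QIF variance inflation is omitted.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Independence, Exchangeable

# Simulated longitudinal data: 50 subjects x 4 visits with a subject effect.
rng = np.random.default_rng(0)
n, t = 50, 4
groups = np.repeat(np.arange(n), t)
x = rng.normal(size=n * t)
b = np.repeat(rng.normal(scale=0.8, size=n), t)        # subject-level noise
y = 1.0 + 0.5 * x + b + rng.normal(scale=1.0, size=n * t)
X = sm.add_constant(x)

# Fit GEE under each candidate working structure and compare the trace of the
# empirical (robust) covariance of the regression estimates; smaller is better.
for cov in (Independence(), Exchangeable()):
    res = sm.GEE(y, X, groups=groups, cov_struct=cov).fit()
    print(type(cov).__name__, np.trace(res.cov_params()))
```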

  13. A model expansion criterion for treating surface topography in ray path calculations using the eikonal equation

    International Nuclear Information System (INIS)

    Ma, Ting; Zhang, Zhongjie

    2014-01-01

    Irregular surface topography has revolutionized how seismic traveltime is calculated and the data are processed. There are two main schemes for dealing with an irregular surface in the seismic first-arrival traveltime calculation: (1) expanding the model and (2) flattening the surface irregularities. In the first scheme, a notional infill medium is added above the surface to expand the physical space into a regular space, as required by the eikonal equation solver. Here, we evaluate the chosen propagation velocity in the infill medium through ray path tracking with the eikonal equation-solved traveltime field, and observe that the ray paths will be physically unrealistic for some values of this propagation velocity. The choice of a suitable propagation velocity in the infill medium is crucial for seismic processing of irregular topography. Our model expansion criterion for dealing with surface topography in the calculation of traveltime and ray paths using the eikonal equation highlights the importance of both the propagation velocity of the infill physical medium and the topography gradient. (paper)

  14. APPLYING THE EFQM EXCELLENCE MODEL AT THE GERMAN STUDY LINE WITH FOCUS ON THE CRITERION

    Directory of Open Access Journals (Sweden)

    ILIES LIVIU

    2013-07-01

    Full Text Available This article presents a stage of the implementation process of the EFQM Model in a higher education institution, namely the German study line within the Faculty of Economics and Business Administration, "Babeș-Bolyai" University, Cluj-Napoca. Designing this model for the higher education sector means laying the basis for the implementation of a Total Quality Management model, seen as a holistic dimension of the perception of quality in an organization. By means of the EFQM method, the authors try to identify the performance degree of the criterion "Customer Results", related to the students' satisfaction level. The students are seen as primary customers of the higher education sector and have an essential role in defining the quality dimensions. On the one hand, the customers of the higher education sector can surface the status quo of quality in the institution; on the other hand, they can improve it. The continuous improvement of quality is highly linked to performance. From this point of view, the European Foundation for Quality Management model is a practical tool to support the analysis of opportunities within higher education institutions. This model offers a customer-focused approach, because many higher education institutions consider the students to be the heart of teaching and research. Further, the fundamental concepts are defined and the focus is directed toward the customer approach, which highlights the idea that excellence means creating added value for customers. Anticipating and identifying the current and future needs of the students by developing a balanced range of relevant dimensions and indicators means taking appropriate action based on a holistic view of quality in an organization. Focusing on and understanding students' and other customers' requirements, needs, and expectations follows the idea that performance can be continuously improved.

  15. Criterion for the selection of a residue treatment system and its application to the vinasse of the alcohol industry

    International Nuclear Information System (INIS)

    Caicedo M, Luis Alfonso; Fonseca, Jose Joaquin; Rodriguez, Gerardo

    1996-01-01

    The selection of a residue treatment system should follow the criterion of the process known as BATEA (best available and technically and economically feasible process). Because its application is difficult in the absence of objective evaluation parameters, a method is presented that classifies the evaluation criteria as general and specific. For the quantification of these aspects, factors such as FQO, FCI, FTR, FD, and the treatment applicability factor (FAT) are used. Applied to the vinasse of the alcohol industry, the method leads to the conclusion that evaporation is the best treatment system for this process, while other systems are either not developed or increase the compensation rate.

  16. A focused information criterion for graphical models in fMRI connectivity with high-dimensional data

    NARCIS (Netherlands)

    Pircalabelu, E.; Claeskens, G.; Jahfari, S.; Waldorp, L.J.

    2015-01-01

    Connectivity in the brain is the most promising approach to explaining human behavior. Here we develop a focused information criterion for graphical models to determine brain connectivity tailored to specific research questions. All efforts are concentrated on high-dimensional settings where the number of variables is large relative to the sample size.

  17. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the Box-Cox Power Exponential model, which belongs to the generalized additive models for location, scale, and shape (GAMLSS). Applying the Box-Cox Power Exponential model to test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (the Akaike information criterion, the Bayesian information criterion, the generalized Akaike information criterion with penalty 3 (GAIC(3)), and cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with GAIC(3) was the most efficient model selection procedure (i.e., it required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.
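
    The GAIC family referenced above is a one-line formula once a model's log-likelihood and effective degrees of freedom are known. The sketch below uses invented candidate fits to show how the chosen penalty k changes which model wins.

```python
import numpy as np

def gaic(loglik: float, edf: float, k: float) -> float:
    """Generalized Akaike information criterion: -2 ln L + k * edf.
    k = 2 gives AIC, k = log(n) gives BIC; the norming study above
    uses penalties such as k = 3 (GAIC(3))."""
    return -2.0 * loglik + k * edf

# Hypothetical fitted candidates: (log-likelihood, effective df).
candidates = {"simple": (-420.0, 6.0),
              "medium": (-408.0, 12.0),
              "flexible": (-405.0, 20.0)}
for k in (2.0, 3.0, np.log(500)):
    best = min(candidates, key=lambda m: gaic(*candidates[m], k))
    print(f"penalty k={k:.2f} selects: {best}")
```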

  18. Electronic Devices, Methods, and Computer Program Products for Selecting an Antenna Element Based on a Wireless Communication Performance Criterion

    DEFF Research Database (Denmark)

    2014-01-01

    A method of operating an electronic device includes providing a plurality of antenna elements, evaluating a wireless communication performance criterion to obtain a performance evaluation, and assigning a first one of the plurality of antenna elements to a main wireless signal reception and transmission...

  19. Crack initiation at a V-notch—comparison between a brittle fracture criterion and the Dugdale cohesive model

    Science.gov (United States)

    Henninger, Carole; Leguillon, Dominique; Martin, Eric

    2007-07-01

    The cohesive zone models are an alternative to fracture criteria for predicting crack initiation at stress concentration points in brittle materials. We propose here a comparison between the so-called mixed criterion, involving a twofold (energy and stress) condition, and the Dugdale cohesive model. The predictions of the critical load leading to failure are in perfect agreement, and both models conclude that the initiation process is unstable except in the case of a pre-existing crack. To cite this article: C. Henninger et al., C. R. Mecanique 335 (2007).

  20. Model selection for the extraction of movement primitives

    Directory of Open Access Journals (Sweden)

    Dominik M Endres

    2013-12-01

    Full Text Available A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model. However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion that allows selection of the model type, the number of primitives, and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria (the Bayesian information criterion, BIC, and the Akaike information criterion, AIC). Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.
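
    As a rough stand-in for the Bayesian criterion described above (which is based on a Laplace approximation), the sketch below selects the number of primitives by held-out log-likelihood under probabilistic PCA, using scikit-learn's PCA.score on simulated surrogate data. This is a simpler criterion than the paper's and is shown only for orientation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

# Toy surrogate for EMG data: 300 trials x 20 channels generated from 3 sources.
rng = np.random.default_rng(1)
S = rng.normal(size=(300, 3))
W = rng.normal(size=(3, 20))
X = S @ W + 0.3 * rng.normal(size=(300, 20))

Xtr, Xte = train_test_split(X, test_size=0.3, random_state=0)

# Held-out mean log-likelihood under probabilistic PCA; the number of
# components with the highest score is a simple model-selection choice.
for q in range(1, 7):
    score = PCA(n_components=q).fit(Xtr).score(Xte)
    print(q, round(score, 2))
```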

  1. Comparison of Some Estimators under the Pitman’s Closeness Criterion in Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jibo Wu

    2014-01-01

    Full Text Available Batah et al. (2009) combined the unbiased ridge estimator and the principal components regression estimator and introduced the modified r-k class estimator. They also showed that the modified r-k class estimator is superior to the ordinary least squares estimator and the principal components regression estimator in the mean squared error matrix sense. In this paper, we first give a new method to obtain the modified r-k class estimator; second, we discuss its properties in some detail, comparing the modified r-k class estimator to the ordinary least squares estimator and the principal components regression estimator under the Pitman closeness criterion. A numerical example and a simulation study are given to illustrate our findings.
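
    The Pitman closeness criterion itself is easy to approximate by Monte Carlo: estimator A is Pitman-closer than B if it lands nearer the true parameter in more than half of the repetitions. The sketch below compares ordinary ridge regression with OLS (not the modified r-k class estimator of the paper) on simulated data.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, k = 50, 3, 2.0                      # k: ridge penalty (invented)
beta = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, p))
wins, reps = 0, 5000
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    b_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
    # Pitman closeness: count how often ridge is closer to the truth than OLS.
    wins += np.linalg.norm(b_ridge - beta) < np.linalg.norm(b_ols - beta)
print("P(ridge closer than OLS) ~", wins / reps)
```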

  2. Variation in lipid extractability by solvent in microalgae. Additional criterion for selecting species and strains for biofuel production from microalgae.

    Science.gov (United States)

    Mendoza, Héctor; Carmona, Laura; Assunção, Patricia; Freijanes, Karen; de la Jara, Adelina; Portillo, Eduardo; Torres, Alicia

    2015-12-01

    The lipid extractability of 14 microalgae species and strains was assessed using organic solvents (methanol and chloroform). The high variability detected indicated the potential for applying this parameter as an additional criterion for microalgae screening in industrial processes such as biofuel production from microalgae. Species without cell walls presented higher extractability than species with cell walls. Analysis of cell integrity by flow cytometry and staining with propidium iodide showed a significant correlation between higher resistance to the physical treatments of cell rupture by sonication and the lipid extractability of the microalgae. The results highlight the cell wall as a determining factor in the inter- and intraspecific variability in lipid extraction treatments. Copyright © 2015. Published by Elsevier Ltd.

  3. [X-ray evaluation of renal function in children with hydronephrosis as a criterion in the selection of therapeutic tactics].

    Science.gov (United States)

    Bosin, V Iu; Murvanidze, D D; Sturua, D G; Nabokov, A K; Soloshenko, V N

    1989-01-01

    The anatomic parameters of the kidneys and the rate of glomerular filtration were measured in 77 children with unilateral hydronephrosis and in 27 children with nonobstructive diseases of the urinary tract, according to the clearance of a contrast medium during excretory urography. Alterations in the anatomic parameters of the kidneys in obstructive disease did not reflect the severity of the functional disorders. It was established that a separate assessment of the filtration function of the hydronephrotic and contralateral kidneys is possible. A new diagnostic criterion is offered, namely an index of relative clearance, which enables one to measure the degree of compensation in the preserved glomeruli and the extent of the sclerotic process. It is demonstrated that an accurate measurement of the functional parameters of the affected kidney should underlie the choice of treatment in children with unilateral hydronephrosis.

  4. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
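
    A minimal simulation makes the paper's point tangible: selecting between two nested regressions by AIC and then estimating a coefficient yields a post-model-selection estimator whose risk can differ from that of always fitting the full model. All values below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta1, beta2 = 40, 1.0, 0.3            # small beta2 makes selection unstable
risk_full, risk_pmse = [], []
for _ in range(4000):
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    Xf, Xr = np.column_stack([x1, x2]), x1[:, None]
    bf = np.linalg.lstsq(Xf, y, rcond=None)[0]        # full model
    br = np.linalg.lstsq(Xr, y, rcond=None)[0]        # reduced model
    # Gaussian AIC up to constants: n log(RSS/n) + 2 * (number of parameters).
    aic_f = n * np.log(np.sum((y - Xf @ bf) ** 2) / n) + 2 * 2
    aic_r = n * np.log(np.sum((y - Xr @ br) ** 2) / n) + 2 * 1
    risk_full.append((bf[0] - beta1) ** 2)
    risk_pmse.append(((bf if aic_f < aic_r else br)[0] - beta1) ** 2)
print("risk always-full:", np.mean(risk_full), "risk PMSE:", np.mean(risk_pmse))
```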

  5. An Analysis of Depot Repair Capacity as a Criterion in Transportation Mode Selection in the Retrograde Movement of Reparable Assets

    National Research Council Canada - National Science Library

    Kahler, Harold

    2004-01-01

    .... Mode selection is based on the asset. Focusing on the asset and moving it quickly is an efficient and effective method of getting assets to where they are needed in a timely manner in the forward portion of the supply pipeline...

  6. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  7. Assessment of Surface Air Temperature over China Using Multi-criterion Model Ensemble Framework

    Science.gov (United States)

    Li, J.; Zhu, Q.; Su, L.; He, X.; Zhang, X.

    2017-12-01

    The General Circulation Models (GCMs) are designed to simulate the present climate and project future trends. It has been noticed that the performances of GCMs are not always in agreement with each other over different regions. Model ensemble techniques have been developed to post-process the GCMs' outputs and improve their prediction reliability. To evaluate the performance of GCMs, the root-mean-square error, the correlation coefficient, and uncertainty are commonly used statistical measures. However, the simultaneous achievement of satisfactory values for all these statistics cannot be guaranteed when using many model ensemble techniques. Meanwhile, uncertainties and future scenarios are critical for water-energy management and operation. In this study, a new multi-model ensemble framework was proposed. It uses a state-of-the-art evolutionary multi-objective optimization algorithm, termed Multi-Objective Complex Evolution Global Optimization with Principle Component Analysis and Crowding Distance (MOSPD), to derive optimal GCM ensembles and demonstrate the trade-offs among various solutions. Such trade-off information was further analyzed with a robust Pareto front with respect to different statistical measures. A case study was conducted to optimize the surface air temperature (SAT) ensemble solutions over seven geographical regions of China for the historical period (1900-2005) and future projections (2006-2100). The results showed that the ensemble solutions derived with the MOSPD algorithm are superior to the simple model average and to any single model output during the historical simulation period. For the future prediction, the proposed ensemble framework identified that the largest SAT change would occur in South Central China under the RCP 2.6 scenario, North Eastern China under the RCP 4.5 scenario, and North Western China under the RCP 8.5 scenario, while the smallest SAT change would occur in Inner Mongolia under the RCP 2.6 scenario, South Central China under the RCP 4.5 scenario, and

  8. Criterion of applicable models for planar type Cherenkov laser based on quantum mechanical treatments

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, Minoru [Faculty of Electrical and Computer Engineering, Institute of Science and Engineering Kanazawa University, Kakuma-machi, Kanazawa 920-1192 (Japan); Fares, Hesham, E-mail: fares_fares4@yahoo.com [Faculty of Electrical and Computer Engineering, Institute of Science and Engineering Kanazawa University, Kakuma-machi, Kanazawa 920-1192 (Japan); Department of Physics, Faculty of Science, Assiut University, Assiut 71516 (Egypt)

    2013-05-01

    A generalized theoretical analysis of the amplification mechanism in the planar-type Cherenkov laser is given. An electron is represented as a material wave having temporally and spatially varying phases with a finite spreading length. The interaction between the electrons and the electromagnetic (EM) wave is analyzed by taking quantum statistical properties into account. The interaction mechanism is classified into the Velocity and Density Modulation (VDM) model and the Energy Level Transition (ELT) model, based on the relation between the wavelength of the EM wave and the electron spreading length. The VDM model is applicable when the wavelength of the EM wave is longer than the electron spreading length, as in the microwave region. The dynamic equation of the electron, which is popularly used in classical Newtonian mechanics, has been derived from the quantum mechanical Schrödinger equation. The amplification of the EM wave can be explained by the bunching effect of the electron density in the electron beam. The amplification gain and its dispersion relation with respect to the electron velocity are given in this paper. On the other hand, the ELT model is applicable when the wavelength of the EM wave is shorter than the electron spreading length, as in the optical region. The dynamics of the electron are explained by electron transitions between different energy levels. The amplification gain and its dispersion relation with respect to the electron acceleration voltage are derived on the basis of the quantum mechanical density matrix.

  9. Multi-criterion model ensemble of CMIP5 surface air temperature over China

    Science.gov (United States)

    Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming

    2017-05-01

    The global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and, therefore, supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions. The reason is that GCMs' configurations, module characteristics, and dynamic forcings vary from one to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R) and uncertainty are commonly used statistics for evaluating the performance of GCMs. However, the simultaneous achievement of all satisfactory statistics cannot be guaranteed by many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to the different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained by simple model averaging, as well as to any single GCM output. The improvements in the statistics vary across the different climatic regions of China. Future projection (2006-2100) with the proposed ensemble method identifies that the largest (smallest) temperature changes will happen in the

  10. A potential new selection criterion for breeding winter barley with optimal protein and amino acid profiles for liquid pig feed

    DEFF Research Database (Denmark)

    Christensen, Jesper Bjerg; Blaabjerg, Karoline; Poulsen, Hanne Damgaard

    The hypothesis is that cereal proteases in liquid feed degrade and convert water-insoluble storage protein into water-soluble protein, which may improve the digestibility of protein in pigs compared with dry feeding. Protein utilization is increased by matching the amino acid (AA) content of the diet as closely as possible to the pigs' requirement. By improving the availability of isoleucine, leucine, histidine and phenylalanine, which are limiting and commercially unavailable, the amount of crude protein in the pig feed can be reduced, resulting in a decreased excretion of nitrogen. Measurements of glutamic acid revealed differences between the cultivars and in the solubilised protein at all three soaking times. These preliminary results may indicate that improvement of the nitrogen utilization in pigs fed soaked winter barley depends on the choice of cultivar and soaking time, and may serve as a new selection criterion for breeding winter barley.

  11. Seasonality of mean and heavy precipitation in the area of the Vosges Mountains: dependence on the selection criterion

    Czech Academy of Sciences Publication Activity Database

    Minářová, J.; Müller, Miloslav; Clappier, A.

    2017-01-01

    Roč. 37, č. 5 (2017), s. 2654-2666 ISSN 0899-8418 Institutional support: RVO:68378289 Keywords : climate extremes * model * variability * events * statistics * weather * cops * Vosges Mountains * seasonality * annual course * extreme * heavy rainfall * precipitation * POT * GEV Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 3.760, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/joc.4871/abstract

  12. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Full Text Available Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping, and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
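
    Of the five comparison methods listed, cross-validation is the most mechanical to implement. A minimal K-fold sketch for choosing a polynomial degree on invented data:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 80)
y = 1.5 * x - 0.8 * x ** 2 + rng.normal(scale=0.5, size=x.size)

def cv_mse(degree, folds=5):
    """Mean held-out squared error of a polynomial fit of the given degree."""
    idx = rng.permutation(x.size)
    errs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[test]) - y[test]) ** 2))
    return np.mean(errs)

for d in range(1, 7):
    print(d, round(cv_mse(d), 3))   # degree with the lowest CV error wins
```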

  13. Do candidate reactions relate to job performance or affect criterion-related validity? A multistudy investigation of relations among reactions, selection test scores, and job performance.

    Science.gov (United States)

    McCarthy, Julie M; Van Iddekinge, Chad H; Lievens, Filip; Kung, Mei-Chuan; Sinar, Evan F; Campion, Michael A

    2013-09-01

    Considerable evidence suggests that how candidates react to selection procedures can affect their test performance and their attitudes toward the hiring organization (e.g., recommending the firm to others). However, very few studies of candidate reactions have examined one of the outcomes organizations care most about: job performance. We attempt to address this gap by developing and testing a conceptual framework that delineates whether and how candidate reactions might influence job performance. We accomplish this objective using data from 4 studies (total N = 6,480), 6 selection procedures (personality tests, job knowledge tests, cognitive ability tests, work samples, situational judgment tests, and a selection inventory), 5 key candidate reactions (anxiety, motivation, belief in tests, self-efficacy, and procedural justice), 2 contexts (industry and education), 3 continents (North America, South America, and Europe), 2 study designs (predictive and concurrent), and 4 occupational areas (medical, sales, customer service, and technological). Consistent with previous research, candidate reactions were related to test scores, and test scores were related to job performance. Further, there was some evidence that reactions affected performance indirectly through their influence on test scores. Finally, in no cases did candidate reactions affect the prediction of job performance by increasing or decreasing the criterion-related validity of test scores. Implications of these findings and avenues for future research are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved

  14. Quality Practices, Corporate Social Responsibility and the “Society Results” Criterion of the EFQM Model

    Directory of Open Access Journals (Sweden)

    María de la Cruz del Río-Rama

    2017-04-01

    Full Text Available Purpose – The purpose of this research is to analyze whether the quality management practices implemented and carried out by the rural accommodation establishments under study influence the society results obtained by the organizations, understood as participation in and development of the local community. Design/methodology/approach – The working methodology consists of carrying out an exploratory and confirmatory factor analysis in order to test the psychometric properties of the measurement scales, and the hypothesized relationships between critical factors and society results are examined using structural equation modeling. Findings – The study provides evidence of a weak relationship between the critical factors of quality and society results in rural accommodation establishments. The results suggest that process management is the only quality practice that has a direct effect on society results, and that the remaining critical factors are antecedents of it. Originality/value – The contribution of this study, which explores the impact of the critical factors of quality on society results, is to confirm that the critical factors of quality affect society results (social and environmental responsibilities) through the direct relationship of process management. Very few studies examine this relationship.

  15. Model selection and fitting

    International Nuclear Information System (INIS)

    Martin Llorente, F.

    1990-01-01

    Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, removal, and chemical reactions of atmospheric contaminants. These models operate with contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination.

  16. Modeling Natural Selection

    Science.gov (United States)

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  17. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them. These are IEEE 802.11 (as example for wireless local area networks), IEEE 802.16 (as example for wireless metropolitan networks) and IEEE 802.15 (as example for body area networks). Each section on these three systems discusses also at the end a set of model implementations that are available today.

  18. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects and to reduce performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our approach not only selects the best project, but can also be used to analyze the gaps between existing performance values and aspiration levels, improving the gaps in each dimension and criterion based on the influential network relation map.
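
    The VIKOR step of such a hybrid model reduces to a few vectorized formulas once a decision matrix and criterion weights are available. The sketch below uses invented project scores, treats all criteria as benefit criteria, and omits the DEMATEL/ANP stage that the paper uses to obtain the weights.

```python
import numpy as np

F = np.array([[7., 8., 5.],      # project A scored on 3 benefit criteria
              [6., 9., 7.],      # project B
              [8., 6., 6.]])     # project C
w = np.array([0.5, 0.3, 0.2])    # criterion weights (invented)
v = 0.5                          # weight of the "group utility" strategy

f_best, f_worst = F.max(axis=0), F.min(axis=0)
S = ((f_best - F) / (f_best - f_worst) * w).sum(axis=1)   # group utility
R = ((f_best - F) / (f_best - f_worst) * w).max(axis=1)   # individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) \
    + (1 - v) * (R - R.min()) / (R.max() - R.min())
print("VIKOR Q (lower is better):", Q.round(3))
```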

  19. Influence of the value of the blunting criterion on the selection of the hard-alloy grade and the optimum cutting speed when turning austenitic steel

    Directory of Open Access Journals (Sweden)

    Lipatov Andrew A.

    2017-01-01

    Full Text Available The wear mechanisms of carbide tools when turning austenitic 18-8 steel with cutters made of hard alloys of various groups (WC-Co, TiC-WC-Co, TiC-TaC-WC-Co) were studied. Wear-resistance tests established that the prevalence of one of the two wear mechanisms (adhesion-fatigue or diffusion) depends on the grade of the hard alloy. It is shown that, when machining with a titanium-containing carbide tool, the rate of growth of the wear platform on the back surface of the tool changes as wear proceeds. This is due to a smooth transition from the predominance of adhesion-fatigue wear to the predominance of diffusion wear. The wear intensity should therefore be considered as a current quantity that depends on the size of the wear platform, and the value of the blunting criterion influences the selection of the hard-alloy grade and the optimum cutting speed.

  20. Selectivity criterion for pyrazolo[3,4-b]pyrid[az]ine derivatives as GSK-3 inhibitors: CoMFA and molecular docking studies.

    Science.gov (United States)

    Patel, Dhilon S; Bharatam, Prasad V

    2008-05-01

    In the development of drugs targeting GSK-3, its selective inhibition is an important requirement for the treatment of diabetes mellitus, owing to the possibility of side effects arising from other kinases. A three-dimensional quantitative structure-activity relationship (3D-QSAR) study has been carried out on a set of pyrazolo[3,4-b]pyrid[az]ine derivatives, which includes non-selective and selective GSK-3 inhibitors. The CoMFA models were derived from a training set of 59 molecules. A test set containing 14 molecules (not used in model generation) was used to validate the CoMFA models. The best CoMFA model, generated by applying a leave-one-out (LOO) cross-validation study, gave cross-validated (r2cv) and conventional (r2conv) values of 0.60 and 0.97, respectively, and an r2pred value of 0.55, which demonstrates the predictive ability of the model. The developed models explain well (i) the observed variance in the activity and (ii) the structural differences between the selective and non-selective GSK-3 inhibitors. Validation based on molecular docking has also been carried out to explain the structural differences between the selective and non-selective molecules in the given series.

  1. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. This allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
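
    For contrast with the full-maximum-likelihood SLN/SLt models discussed in this record, Heckman's classic two-step correction (a probit selection equation, then OLS augmented with the inverse Mills ratio) can be sketched as follows on simulated data. This is the two-step estimator, not the paper's selection-t model.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                       # selection covariate
x = rng.normal(size=n)                       # outcome covariate
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
s = (0.5 + 1.0 * z + u[:, 0] > 0)            # selection indicator
y = 1.0 + 2.0 * x + u[:, 1]                  # outcome, observed only if s

Z = sm.add_constant(z)
probit = sm.Probit(s.astype(float), Z).fit(disp=0)   # step 1: selection probit
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)                    # inverse Mills ratio

Xo = sm.add_constant(np.column_stack([x[s], imr[s]]))  # step 2: augmented OLS
ols = sm.OLS(y[s], Xo).fit()
print(ols.params)  # const, slope of x, coefficient on IMR (selection effect)
```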

  2. Onset of slugging criterion based on singular points and stability analyses of transient one-dimensional two-phase flow equations of two-fluid model

    International Nuclear Information System (INIS)

    Sung, Chang Kyung; Chun, Moon Hyun

    1996-01-01

    A two-step approach has been used to obtain a new criterion for the onset of slug formation: (1) in the first step, a more general expression than the existing models for the onset-of-slug-flow criterion was derived from an analysis of the singular points and neutral stability conditions of the transient one-dimensional two-phase flow equations of the two-fluid model; (2) in the second step, by introducing simplifications and incorporating a parameter into the general expression obtained in the first step so as to satisfy a number of physical conditions specified a priori, a new simple criterion for the onset of slug flow was derived. Comparisons of the present model with existing models and experimental data show that the present model agrees very closely with Taitel and Dukler's model and with experimental data in horizontal pipes. In an inclined pipe (θ = 50 deg), however, the difference between the predictions of the present model and those of existing models is appreciably large, and the present model gives the best agreement with Ohnuki et al.'s data. 17 refs., 5 figs., 1 tab. (author)

  3. A New Infrared Color Criterion for the Selection of 0 < z < 7 AGNs: Application to Deep Fields and Implications for JWST Surveys

    Science.gov (United States)

    Messias, H.; Afonso, J.; Salvato, M.; Mobasher, B.; Hopkins, A. M.

    2012-08-01

    It is widely accepted that observations at mid-infrared (mid-IR) wavelengths enable the selection of galaxies with nuclear activity, which may not be revealed even in the deepest X-ray surveys. Many mid-IR color-color criteria have been explored to accomplish this goal and tested thoroughly in the literature. Besides missing many low-luminosity active galactic nuclei (AGNs), one of the main conclusions is that, with increasing redshift, the contamination by non-active galaxies becomes significant (especially at z ≳ 2.5). This is problematic for the study of the AGN phenomenon in the early universe, the main goal of many of the current and future deep extragalactic surveys. In this work new near- and mid-IR color diagnostics are explored, aiming for improved efficiency—better completeness and less contamination—in selecting AGNs out to very high redshifts. We restrict our study to the James Webb Space Telescope wavelength range (0.6-27 μm). The criteria are created based on the predictions of state-of-the-art galaxy and AGN templates covering a wide variety of galaxy properties, and tested against control samples with deep multi-wavelength coverage (ranging from the X-rays to radio frequencies). We show that the colors Ks - [4.5], [4.5] - [8.0], and [8.0] - [24] are ideal as AGN/non-AGN diagnostics over successive redshift ranges out to z ~ 2.5-3. However, when the source redshift is unknown, these colors should be combined. We thus develop an improved IR criterion (using Ks and IRAC bands, KI) as a new alternative at z ≲ 2.5 (with a 50%-90% level of successful AGN selection). We also propose KIM (using Ks, IRAC, and MIPS 24 μm bands), which aims to select AGN hosts from local distances to as far back as the end of reionization (0 ≲ z ≲ 7), performing especially well at z ≳ 2.5. Overall, KIM shows a ~30%-40% completeness and a >70%-90% level of successful AGN selection. KI and KIM are built to be reliable against a ~10%-20% error in flux, are based on existing filters, and are suitable for immediate use.

  4. Modeling the Impact of Test Anxiety and Test Familiarity on the Criterion-Related Validity of Cognitive Ability Tests

    Science.gov (United States)

    Reeve, Charlie L.; Heggestad, Eric D.; Lievens, Filip

    2009-01-01

    The assessment of cognitive abilities, whether it is for purposes of basic research or applied decision making, is potentially susceptible to both facilitating and debilitating influences. However, relatively little research has examined the degree to which these factors might moderate the criterion-related validity of cognitive ability tests. To…

  5. Proposing a model for safety risk assessment in the construction industry using gray multi-criterion decision-making

    Directory of Open Access Journals (Sweden)

    S. M. Abootorabi

    2014-09-01

    Full Text Available Introduction: Statistical reports of the Social Security Organization indicate that, among the various industries, the construction industry has the highest number of work-related accidents, and that these accidents are of high severity as well as high frequency. Moreover, a large workforce is employed in this industry, which shows the necessity of paying special attention to these workers. Therefore, risk assessment of safety in the construction industry is an effective step in this regard. In this study, a method for ranking safety risks under conditions of small sample size and uncertainty is presented, using gray multi-criterion decision-making. Material and Method: We first identified the factors affecting the occurrence of hazards in the construction industry. Then, criteria appropriate for ranking the risks were determined and the problem was defined as a multi-criterion decision-making problem. In order to weight the criteria and to evaluate the alternatives with respect to each criterion, gray numbers were used. In the last stage, the problem was solved using the gray possibility degree. Results: The results show that gray multi-criterion decision-making is an effective method for ranking risks in small-sample situations compared with other MCDM methods. Conclusion: The proposed method is preferred to fuzzy and statistical methods under uncertainty and small sample sizes, owing to its simple calculations and the absence of any need to define a membership function.
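
    The gray possibility degree used in the final stage can be sketched for interval grey numbers as follows. The risk intervals are invented, the formula is the standard interval possibility degree from the grey MCDM literature, and the paper's criterion-weighting stage is omitted.

```python
import numpy as np

# Interval grey numbers for three risks' aggregated scores (invented values).
risks = {"falls": (6.0, 8.0), "electrocution": (5.0, 7.5), "struck-by": (4.0, 6.0)}

def possibility(a, b):
    """Possibility degree that interval a is >= interval b."""
    la, lb = a[1] - a[0], b[1] - b[0]
    return max(0.0, la + lb - max(0.0, b[1] - a[0])) / (la + lb)

names = list(risks)
for i in names:
    score = np.mean([possibility(risks[i], risks[j]) for j in names if j != i])
    print(i, round(score, 2))   # higher mean possibility -> higher-ranked risk
```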

  6. A decision model for energy resource selection in China

    International Nuclear Information System (INIS)

    Wang Bing; Kocaoglu, Dundar F.; Daim, Tugrul U.; Yang Jiting

    2010-01-01

    This paper evaluates coal, petroleum, natural gas, nuclear energy and renewable energy resources as energy alternatives for China through use of a hierarchical decision model. The results indicate that although coal is still the major preferred energy alternative, it is followed closely by renewable energy. The sensitivity analysis indicates that the most critical criterion for energy selection is the current energy infrastructure. A hierarchical decision model is used, and expert judgments are quantified, to evaluate the alternatives. Criteria used for the evaluations are availability, current energy infrastructure, price, safety, environmental impacts and social impacts.

  7. On the Jeans criterion

    International Nuclear Information System (INIS)

    Whitworth, A.P.

    1980-01-01

    The Jeans criterion is first stated and distinguished from the Virial Theorem. Then it is discussed how the Jeans criterion can be derived from the Virial Theorem and the inherent shortcomings in this derivation. Finally, it is indicated how these shortcomings might be overcome. The Jeans criterion is a fragmentation - or condensation - criterion. An expression is given connecting the fragmentation of an unstable extended medium into masses M_J. Rather than picturing the background medium fragmenting, it is probably more appropriate to envisage these masses M_J 'condensing' out of the background medium. On the condensation picture some fraction of the background material separates out into coherent bound nodules under the pull of its self-gravity. For this reason the Jeans criterion is discussed as a condensation condition, reserving the term fragmentation for a different process. The Virial Theorem provides a contraction criterion. This is described with reference to a spherical cloud and is developed to derive the Jeans criterion. (U.K.)
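
    For orientation, a standard textbook statement of the criterion discussed in this record (the numerical prefactor varies with convention): density perturbations larger than the Jeans length, or more massive than the Jeans mass, condense under their own gravity.

```latex
\lambda_J = \sqrt{\frac{\pi c_s^2}{G \rho}}, \qquad
M_J \sim \frac{4\pi}{3}\,\rho \left(\frac{\lambda_J}{2}\right)^{3}
```

    Here c_s is the sound speed and ρ the density of the medium; perturbations with wavelength λ > λ_J are gravitationally unstable.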

  8. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a Bayesian random-effects model of this kind provides simultaneous individual- and population-level inference about resource selection.

  9. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, either distance based or model based. However, model selection in attributed graph clustering has not been well addressed: most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  10. Critical Length Criterion and the Arc Chain Model for Calculating the Arcing Time of the Secondary Arc Related to AC Transmission Lines

    International Nuclear Information System (INIS)

    Cong Haoxi; Li Qingmin; Xing Jinyuan; Li Jinsong; Chen Qiang

    2015-01-01

    The prompt extinction of the secondary arc is critical to the single-phase reclosing of AC transmission lines, including half-wavelength power transmission lines. In this paper, a low-voltage physical experimental platform was established and the motion process of the secondary arc was recorded by a high-speed camera. It was found that the arcing time of the secondary arc shows a close relationship with its arc length. Through an analysis of the input and output power energy of the secondary arc, a new critical length criterion for the arcing time was proposed. The arc chain model was then adopted to calculate the arcing time with both the traditional and the proposed critical length criteria, and the simulation results were compared with the experimental data. The study showed that the arcing time calculated from the new critical length criterion gave more accurate results, which can provide a reliable criterion, in terms of arcing time, for modeling and simulation of the secondary arc related to power transmission lines. (paper)

  11. PAIRWISE ASSOCIATION AS A CRITERION FOR THE SELECTION OF COLLECTION SITES OF NATURAL ENEMIES OF THE CASSAVA GREEN MITE, Mononychellus tanajoa (BONDAR)

    Directory of Open Access Journals (Sweden)

    G.S. RODRIGUES

    1996-05-01

    Climatic similarity has been the primary parameter considered in the selection of sites for the collection and release of natural enemies in classical biological control programs. However, acknowledging the relevance of the composition of biological communities can be essential for improving the record of successful biocontrol projects, in relation to the proper selection of collection sites. We present in this paper an analysis of the plant and mite assemblages in cassava fields of northeastern Brazil. Such analysis is suggested as an additional criterion for the selection of collection sites of mite predators of the cassava green mite, Mononychellus tanajoa (Bondar), in an international biological control program. Contingency tables were built using Dice's index as an indicator of significant associations between pairs of species. This analysis enabled the identification of plant and mite species typically found together, indicating interspecific interactions or similar ecological requirements. Finally, a cluster analysis was used to group sites containing similar assemblages. These sites exhibit comparable chances of harboring a given species. Applied at the species-group level, the analysis may assist in better defining sites for the collection of natural enemies to be released in a given region, improving the chances of establishment.

  12. Numerical assessment of a criterion for the optimal choice of the operative conditions in magnetic nanoparticle hyperthermia on a realistic model of the human head.

    Science.gov (United States)

    Bellizzi, Gennaro; Bucci, Ovidio M; Chirico, Gaetano

    2016-09-01

    This paper presents a numerical study aiming at assessing the effectiveness of a recently proposed optimisation criterion for determining the optimal operative conditions in magnetic nanoparticle hyperthermia, applied to the clinically relevant case of brain tumours. The study is carried out using the Zubal numerical phantom and performing electromagnetic-thermal co-simulations. The Pennes model is used for thermal balance; the dissipation models for the magnetic nanoparticles are those available in the literature. The results concerning the optimal therapeutic concentration of nanoparticles, obtained through the analysis, are validated using experimental data on the specific absorption rate of iron oxide nanoparticles, available in the literature. The numerical estimates obtained by applying the criterion to the treatment of brain tumours show that the acceptable values for the product of the magnetic field amplitude and frequency may be two to four times larger than the safety threshold of 4.85 × 10^8 A/(m·s) usually considered. This would allow the reduction of the dosage of nanoparticles required for an effective treatment. In particular, depending on the tumour depth, concentrations of nanoparticles smaller than 10 mg/mL of tumour may be sufficient for heating tumours smaller than 10 mm above 42 °C. Moreover, the study of the clinical scalability shows that, whatever the tumour position, lesions larger than 15 mm may be successfully treated with concentrations lower than 10 mg/mL. The criterion also allows the prediction of the temperature rise in healthy tissue, thus assuring safe treatment. The criterion can represent a helpful tool for planning and optimising an effective hyperthermia treatment.

  13. Numerical modelling of suffusion by discrete element method: a new internal stability criterion based on mechanical behaviour of eroded soil

    Directory of Open Access Journals (Sweden)

    Abdoulaye Hama Nadjibou

    2017-01-01

    Non-cohesive soils subjected to a flow may exhibit a behavior in which fine particles migrate through the interstices of the solid skeleton formed by the large particles. This phenomenon is termed internal instability, internal erosion or suffusion, and can occur both in natural soil deposits and in geotechnical structures such as dams, dikes or barrages. Internal instability of a granular material is its inability to prevent the loss of its fine particles under the effect of flow. It is geometrically possible if the fine particles can migrate through the pores of the coarse soil matrix, and it results in a change in the material's mechanical properties. In this work, we use the three-dimensional Particle Flow Code (PFC3D/DEM) to study the stability/instability of granular materials and their mechanical behavior. The Kenney and Lau criterion sets a safe boundary for engineering design; however, it tends to identify stable soils as unstable ones. The effects of instability and erosion, simulated by clipping fine particles from the grading distribution, on the mechanical behaviour of glass ball samples were analysed. The mechanical properties of the eroded samples, in which erosion was simulated, suggest a new approach to internal stability. A new internal stability criterion is proposed, deduced from the analysis of the relations between mechanical behaviour and internal stability, including material contractance.

  14. Numerical modelling of suffusion by discrete element method: a new internal stability criterion based on mechanical behaviour of eroded soil

    Science.gov (United States)

    Abdoulaye Hama, Nadjibou; Ouahbi, Tariq; Taibi, Said; Souli, Hanène; Fleureau, Jean-Marie; Pantet, Anne

    2017-06-01

    Non-cohesive soils subjected to a flow may exhibit a behavior in which fine particles migrate through the interstices of the solid skeleton formed by the large particles. This phenomenon is termed internal instability, internal erosion or suffusion, and can occur both in natural soil deposits and in geotechnical structures such as dams, dikes or barrages. Internal instability of a granular material is its inability to prevent the loss of its fine particles under the effect of flow. It is geometrically possible if the fine particles can migrate through the pores of the coarse soil matrix, and it results in a change in the material's mechanical properties. In this work, we use the three-dimensional Particle Flow Code (PFC3D/DEM) to study the stability/instability of granular materials and their mechanical behavior. The Kenney and Lau criterion sets a safe boundary for engineering design; however, it tends to identify stable soils as unstable ones. The effects of instability and erosion, simulated by clipping fine particles from the grading distribution, on the mechanical behaviour of glass ball samples were analysed. The mechanical properties of the eroded samples, in which erosion was simulated, suggest a new approach to internal stability. A new internal stability criterion is proposed, deduced from the analysis of the relations between mechanical behaviour and internal stability, including material contractance.

  15. Evaluation and criterion determination of the low-k thin film adhesion by the surface acoustic waves with cohesive zone model

    Science.gov (United States)

    Xiao, Xia; Qi, Haiyang; Sui, Xiaole; Kikkawa, Takamaro

    2017-03-01

    The cohesive zone model (CZM) is introduced in the surface acoustic wave (SAW) technique to characterize the interfacial adhesion property of a low-k thin film deposited on a silicon substrate. The ratio of the two parameters in the CZM, the maximum normal traction and the normal interface characteristic length, is derived to evaluate the interfacial adhesion properties quantitatively. In this study, an adhesion criterion for judging the adhesion property is newly proposed based on the CZM-SAW technique. The criterion determination processes for two kinds of film, dense and porous Black Diamond with different film thicknesses, are presented in this paper. The interfacial adhesion properties of the dense and porous Black Diamond films with different thicknesses are evaluated by the CZM-SAW technique quantitatively and nondestructively. The quantitative adhesion properties are obtained by fitting the experimental dispersion curves (with maximum frequency up to 220 MHz) to the theoretical ones. Results of the nondestructive CZM-SAW technique and the destructive nanoscratch test exhibit the same trend in adhesion properties, which means that the CZM-SAW technique is a promising method for determining interfacial adhesion. Meanwhile, the adhesion properties of the tested samples are judged by the determined criterion. The test results show that different test film materials with thicknesses ranging from 300 nm to 1000 nm are in different adhesion conditions. This paper exhibits the advantage of the CZM-SAW technique, which can be a universal method to characterize film adhesion.

  16. Voter models with heterozygosity selection

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    2008-01-01

    Roč. 18, č. 1 (2008), s. 59-99 ISSN 1050-5164 R&D Projects: GA ČR GA201/06/1323; GA ČR GA201/07/0237 Institutional research plan: CEZ:AV0Z10750506 Keywords : Heterozygosity selection * rebellious voter model * branching * annihilation * survival * coexistence Subject RIV: BA - General Mathematics Impact factor: 1.285, year: 2008

  17. Optimal experiment design for model selection in biochemical networks.

    Science.gov (United States)

    Vanlier, Joep; Tiemann, Christian A; Hilbers, Peter A J; van Riel, Natal A W

    2014-02-20

    Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen Shannon divergence between the multivariate predictive densities of competing models. We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors.
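
    A minimal sketch of the core computation, assuming SciPy, and using a Gaussian KDE as a stand-in for the paper's k-Nearest Neighbor density estimate; the predictive samples are invented:

```python
import numpy as np
from scipy.stats import gaussian_kde

def js_divergence(samples_a, samples_b):
    """Monte Carlo estimate of the Jensen-Shannon divergence between two
    predictive densities, each represented by posterior-predictive samples.
    Gaussian KDE stands in for the paper's k-NN density estimate."""
    p = gaussian_kde(samples_a.T)  # density of model A's predictions
    q = gaussian_kde(samples_b.T)  # density of model B's predictions

    def kl_to_mixture(samples, own):
        other = q if own is p else p
        own_d, other_d = own(samples.T), other(samples.T)
        m = 0.5 * (own_d + other_d)        # mixture density
        return np.mean(np.log(own_d / m))  # E_own[log(own / m)]

    return 0.5 * kl_to_mixture(samples_a, p) + 0.5 * kl_to_mixture(samples_b, q)

# Toy usage: two competing models predicting a scalar observable
rng = np.random.default_rng(0)
pred_model_1 = rng.normal(0.0, 1.0, size=(2000, 1))  # hypothetical predictive draws
pred_model_2 = rng.normal(0.8, 1.2, size=(2000, 1))
print(js_divergence(pred_model_1, pred_model_2))  # larger => experiment more discriminative
```

    In the paper's design loop, this divergence would be evaluated for each candidate experiment, and the experiment maximizing it would be performed next.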

  18. The maximum penalty criterion for ridge regression: application to the calibration of the force constant in elastic network models.

    Science.gov (United States)

    Dehouck, Yves; Bastolla, Ugo

    2017-07-17

    Tikhonov regularization, or ridge regression, is a popular technique to deal with collinearity in multivariate regression. We unveil a formal analogy between ridge regression and statistical mechanics, where the objective function is comparable to a free energy, and the ridge parameter plays the role of temperature. This analogy suggests two novel criteria for selecting a suitable ridge parameter: specific-heat (Cv) and maximum penalty (MP). We apply these fits to evaluate the relative contributions of rigid-body and internal fluctuations, which are typically highly collinear, to crystallographic B-factors. This issue is particularly important for computational models of protein dynamics, such as the elastic network model (ENM), since the amplitude of the predicted internal motion is commonly calibrated using B-factor data. After validation on simulated datasets, our results indicate that rigid-body motions account on average for more than 80% of the amplitude of B-factors. Furthermore, we evaluate the ability of different fits to reproduce the amplitudes of internal fluctuations in X-ray ensembles from the B-factors in the corresponding single X-ray structures. The new ridge criteria are shown to be markedly superior to the commonly used two-parameter fit that neglects rigid-body rotations and to the full fits regularized under generalized cross-validation. In conclusion, the proposed fits ensure a more robust calibration of the ENM force constant and should prove valuable in other applications.
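
    The generalized cross-validation baseline mentioned at the end can be sketched as follows (NumPy assumed; the paper's Cv and MP criteria are not reproduced here, and the data are synthetic):

```python
import numpy as np

def gcv_score(X, y, lam):
    """Generalized cross-validation score for ridge regression with
    penalty lam (smaller is better). X: (n, p) design, y: (n,) response."""
    n, p = X.shape
    # Hat matrix H = X (X'X + lam I)^{-1} X'
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    edf = np.trace(H)  # effective degrees of freedom
    return n * np.sum(resid**2) / (n - edf) ** 2

# Scan a grid of ridge parameters and keep the GCV minimizer
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=50)  # deliberately collinear columns
y = X @ rng.normal(size=8) + rng.normal(size=50)
lams = np.logspace(-4, 2, 60)
best = min(lams, key=lambda lam: gcv_score(X, y, lam))
print(f"GCV-selected ridge parameter: {best:.4g}")
```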

  19. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  20. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though ...

  1. Proposition of a multicriteria model to select logistics services providers

    Directory of Open Access Journals (Sweden)

    Miriam Catarina Soares Aharonovitz

    2014-06-01

    This study aims to propose a multicriteria model to select logistics service providers through the development of a decision tree. The methodology consists of a survey, which resulted in a sample of 181 responses. The sample was analyzed using statistical methods, among them descriptive statistics, multivariate analysis, variance analysis, and parametric tests for comparing means. Based on these results, it was possible to obtain the decision tree and information to support the multicriteria analysis. The AHP (Analytic Hierarchy Process) was applied to determine the influence of the data and thus ensure better consistency in the analysis. The decision tree categorizes the criteria according to the decision levels (strategic, tactical and operational). Furthermore, it allows a generic evaluation of the importance of each criterion in the supplier selection process from the point of view of logistics services contractors.
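
    As an illustration of the AHP weighting step mentioned above, the sketch below derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and checks consistency. The matrix values and the three-criterion setup are hypothetical, not taken from the study:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via its
    principal eigenvector, plus the consistency ratio (CR < 0.1 is the
    usual acceptability rule of thumb)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                   # normalized priority weights
    ci = (eigvals[k].real - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # tabled random index values
    return w, ci / ri

# Hypothetical comparison of three provider-selection criteria
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(A)
print(weights, cr)
```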

  2. Criterion learning in rule-based categorization: simulation of neural mechanism and new data.

    Science.gov (United States)

    Helie, Sebastien; Ell, Shawn W; Filoteo, J Vincent; Maddox, W Todd

    2015-04-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define 'long' and 'short'). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL's implications for future research on rule learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Model selection for univariable fractional polynomials.

    Science.gov (United States)

    Royston, Patrick

    2017-07-01

    Since Royston and Altman's 1994 publication ( Journal of the Royal Statistical Society, Series C 43: 429-467), fractional polynomials have steadily gained popularity as a tool for flexible parametric modeling of regression relationships. In this article, I present fp_select, a postestimation tool for fp that allows the user to select a parsimonious fractional polynomial model according to a closed test procedure called the fractional polynomial selection procedure or function selection procedure. I also give a brief introduction to fractional polynomial models and provide examples of using fp and fp_select to select such models with real data.

  4. Evaluation of probabilistic flow predictions in sewer systems using grey box models and a skill score criterion

    DEFF Research Database (Denmark)

    Thordarson, Fannar Ørn; Breinholt, Anders; Møller, Jan Kloppenborg

    2012-01-01

    In this paper we show how the grey box methodology can be applied to find models that can describe the flow prediction uncertainty in a sewer system where rain data are used as input, and flow measurements are used for calibration and updating model states. Grey box models are composed of a drift ... and sharpness. In this paper, we illustrate the power of the introduced grey box methodology and the probabilistic performance measures in an urban drainage context.

  5. Kinematic Models of Southern California Deformation calibrated to GPS Velocities and a Strain Energy Minimization Criterion: How do they Differ?

    Science.gov (United States)

    Hearn, E. H.

    2015-12-01

    Fault slip rates inferred from GPS-calibrated kinematic models may be influenced by seismic-cycle and other transient effects, whereas models that minimize strain energy ("TSEM models") represent average deformation rates over geological timescales. To explore differences in southern California fault slip rates inferred from these two approaches, I have developed kinematic, finite-element models incorporating the UCERF3 block model-bounding fault geometry and slip rates from the UCERF3 report (Field et al., 2014). A fault segment (the "Ventura-Oak Ridge segment") was added to represent shortening accommodated collectively by the San Cayetano, Ventura, Oak Ridge, Red Mountain and other faults in the Transverse Ranges. Fault slip rates are randomly sampled from ranges given in the UCERF3 report, assuming a "boxcar" distribution, and models are scored by their misfit to GPS site velocities or by their total strain energy, for cases with locked and unlocked faults. Both Monte Carlo and Independence Sampler MCMC methods are used to identify the best models of each category. All four suites of models prefer low slip rates (i.e. less than about 5 mm/yr) on the Ventura-Oak Ridge fault system. For TSEM models, low rates are preferred. The GPS-constrained, locked model prefers a high slip rate for the Imperial Fault (over 30 mm/yr), though the TSEM models prefer slip rates lower than 30 mm/yr. When slip rates for the Ventura-Oak Ridge fault system are restricted to less than 5 mm/yr, GPS-constrained models show a preference for high slip rates on the southern San Jacinto and Palos Verde Faults (> 13 and > 3 mm/yr, respectively), and a somewhat low rate for the Mojave segment of the SAF (25-34 mm/yr). Because blind thrust faults of the Los Angeles Basin are not represented in the model, the inferred Ventura-Oak Ridge slip rate should be high, but the opposite is observed. GPS-calibrated models decisively prefer a lower slip rate along the Mojave segment of the SAF than TSEM models, consistent ...

  6. On the Modified Barkhausen Criterion

    DEFF Research Database (Denmark)

    Lindberg, Erik; Murali, K.

    2016-01-01

    Oscillators are normally designed according to the Modified Barkhausen Criterion, i.e. the complex pole pair is moved out into the RHP so that the linear circuit becomes unstable. By means of the Mancini Phaseshift Oscillator it is demonstrated that the distortion of the oscillator may be minimized by introducing a nonlinear "Hewlett Resistor" so that the complex pole pair is in the RHP for small signals and in the LHP for large signals, i.e. the complex pole pair of the instantaneously linearized small-signal model moves around the imaginary axis in the complex frequency plane.

  7. General Criterion for Harmonicity

    Science.gov (United States)

    Proesmans, Karel; Vandebroek, Hans; Van den Broeck, Christian

    2017-10-01

    Inspired by Kubo-Anderson Markov processes, we introduce a new class of transfer matrices whose largest eigenvalue is determined by a simple explicit algebraic equation. Applications include the free energy calculation for various equilibrium systems and a general criterion for perfect harmonicity, i.e., a free energy that is exactly quadratic in the external field. As an illustration, we construct a "perfect spring," namely, a polymer with non-Gaussian, exponentially distributed subunits which, nevertheless, remains harmonic until it is fully stretched. This surprising discovery is confirmed by Monte Carlo and Langevin simulations.

  8. Seismogenic Potential of a Gouge-filled Fault and the Criterion for Its Slip Stability: Constraints From a Microphysical Model

    Science.gov (United States)

    Chen, Jianye; Niemeijer, A. R.

    2017-12-01

    Physical constraints for the parameters of the rate-and-state friction (RSF) laws have been mostly lacking. We presented such constraints based on a microphysical model and demonstrated the general applicability to granular fault gouges deforming under hydrothermal conditions in a companion paper. In this paper, we examine the transition velocities for contrasting frictional behavior (i.e., strengthening to weakening and vice versa) and the slip stability of the model. The model predicts a steady state friction coefficient that increases with slip rate at very low and high slip rates and decreases in between. This allows the transition velocities to be obtained theoretically and the unstable slip regime (Vs→w < V < Vw→s) to be defined, as well as the static stress drop (Δμs) associated with self-sustained oscillations or stick slips. Numerical implementation of the model predicts frictional behavior that exhibits consecutive transitions from stable sliding, via periodic oscillations, to unstable stick slips with decreasing elastic stiffness or loading rate, and gives Kc, Wc, Δμs, Vs→w, and Vw→s values that are consistent with the analytical predictions. General scaling relations of these parameters given by the model are consistent with previous interpretations in the context of RSF laws and agree well with previous experiments, testifying to high validity. From these physics-based expressions, which allow a more reliable extrapolation to natural conditions, we discuss the seismological implications for natural faults and present topics for future work.

  9. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
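
    A minimal sketch of the same idea in Python, assuming scikit-learn: GaussianMixture fits the mixture by EM internally, and its aic() method gives the criterion used to compare the non-nested candidates. The data and component counts are illustrative, not Middleton's Class A model:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical data: draws from a 2-component mixture
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(4, 0.5, 300)]).reshape(-1, 1)

# Fit the two candidate models; GaussianMixture runs EM internally,
# and .aic() returns 2k - 2 log L for the fitted model
single = GaussianMixture(n_components=1).fit(x)
mixture = GaussianMixture(n_components=2).fit(x)
print("AIC single Gaussian :", single.aic(x))
print("AIC 2-comp mixture  :", mixture.aic(x))  # smaller AIC wins
```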

  10. Maximum principal strain as a criterion for prediction of orthodontic mini-implants failure in subject-specific finite element models.

    Science.gov (United States)

    Albogha, Mhd Hassan; Kitahara, Toru; Todo, Mitsugu; Hyakutake, Hiroto; Takahashi, Ichiro

    2016-01-01

    To investigate the most reliable stress or strain parameters in subject-specific finite element (FE) models to predict success or failure of orthodontic mini-implants (OMIs). Subject-specific FE analysis was applied to 28 OMIs used for anchorage. Each model was developed using two computed tomography data sets, the first taken before OMI placement and the second taken immediately after placement. Of the 28 OMIs, 6 failed during the first 5 months, and 22 were successful. The bone compartment was divided into four zones in the FE models, and peak stress and strain parameters were calculated for each. Logistic regression of the failure (vs success) of OMIs on the stress and strain parameters in the models was conducted to verify the ability of these parameters to predict OMI failure. Failure was significantly dependent on principal strain parameters rather than stress parameters. Peak maximum principal strain in the bone 0.5 to 1 mm from the OMI surface was the best predictor of failure (R² = 0.8151). We propose the use of the maximum principal strain as a criterion for predicting OMI failure in FE models.
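
    The regression step reads, in outline, like the sketch below (Python with scikit-learn assumed; the strain values and split are invented for illustration and are not the study's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical table: one row per mini-implant; the feature is the peak
# maximum principal strain (in 10^3 microstrain) in the 0.5-1 mm bone zone,
# outcome 1 = failure, 0 = success (22 successes, 6 failures as in the study)
rng = np.random.default_rng(3)
peak_strain = np.concatenate([rng.normal(4.0, 0.8, 22), rng.normal(7.0, 0.9, 6)])
outcome = np.array([0] * 22 + [1] * 6)

clf = LogisticRegression(max_iter=1000).fit(peak_strain.reshape(-1, 1), outcome)
print("failure probability at 6.5e3 microstrain:",
      clf.predict_proba([[6.5]])[0, 1])
```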

  11. A Yield Strength Model and Thoughts on an Ignition Criterion for a Reactive PTFE-Aluminum Composite

    Science.gov (United States)

    2008-08-01

    original cylinder. The conical surface shown in Figure 2 displays evidence of ductile flow, and its blackened regions suggest ignition. On the basis of ... this specimen, we developed a shear localization hypothesis and adopted a metals-like approach to modeling the mechanical properties of PTFE-Al, i.e., an ... to the phenomenon of shear localization such as occurs in certain metals. The material is locally unable to equilibrate its load. Additional plastic ...

  12. A mixed integer linear programming model to reconstruct phylogenies from single nucleotide polymorphism haplotypes under the maximum parsimony criterion.

    Science.gov (United States)

    Catanzaro, Daniele; Ravi, Ramamoorthi; Schwartz, Russell

    2013-01-23

    Phylogeny estimation from aligned haplotype sequences has attracted more and more attention in recent years due to its importance in the analysis of many fine-scale genetic data. Its application fields range from medical research, to drug discovery, to epidemiology, to population dynamics. The literature on molecular phylogenetics proposes a number of criteria for selecting a phylogeny from among plausible alternatives. Usually, such criteria can be expressed by means of objective functions, and the phylogenies that optimize them are referred to as optimal. One of the most important estimation criteria is parsimony, which states that the optimal phylogeny T∗ for a set H of n haplotype sequences over a common set of variable loci is the one that satisfies the following requirements: (i) it has the shortest length and (ii) it is such that, for each pair of distinct haplotypes hi, hj ∈ H, the sum of the edge weights belonging to the path from hi to hj in T∗ is not smaller than the observed number of changes between hi and hj. Finding the most parsimonious phylogeny for H involves solving an optimization problem, called the Most Parsimonious Phylogeny Estimation Problem (MPPEP), which is NP-hard in many of its versions. In this article we investigate a recent version of the MPPEP that arises when input data consist of single nucleotide polymorphism haplotypes extracted from a population of individuals on a common genomic region. Specifically, we explore the prospects for improving on the implicit enumeration strategy used in previous work using a novel problem formulation and a series of strengthening valid inequalities and preliminary symmetry breaking constraints to more precisely bound the solution space and accelerate implicit enumeration of possible optimal phylogenies. We present the basic formulation and then introduce a series of provable valid constraints to reduce the solution space. We then prove that these ...

  13. Selected sports talent development models

    OpenAIRE

    Michal Vičar

    2017-01-01

    Background: Sports talent in the Czech Republic is generally viewed as a static, stable phenomenon. This stands in contrast with the widespread practice in Anglo-Saxon countries, which emphasises its fluctuating nature, as reflected in the current models describing its development. Objectives: The aim is to introduce current models of talent development in sport. Methods: Comparison and analysis of the following models: Balyi - Long term athlete development model, Côté - Developmen...

  14. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    International Nuclear Information System (INIS)

    Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.

    2012-01-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.

  15. Stochastic Learning and the Intuitive Criterion in Simple Signaling Games

    DEFF Research Database (Denmark)

    Sloth, Birgitte; Whitta-Jacobsen, Hans Jørgen

    A stochastic learning process for signaling games with two types, two signals, and two responses gives rise to equilibrium selection which is in remarkable accordance with the selection obtained by the intuitive criterion.

  16. VEMAP 1: Selected Model Results

    Data.gov (United States)

    National Aeronautics and Space Administration — The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) was a multi-institutional, international effort addressing the response of biogeography and...

  17. The linear utility model for optimal selection

    NARCIS (Netherlands)

    Mellenbergh, Gideon J.; van der Linden, Willem J.

    A linear utility model is introduced for optimal selection when several subpopulations of applicants are to be distinguished. Using this model, procedures are described for obtaining optimal cutting scores in subpopulations in quota-free as well as quota-restricted selection situations. The cutting

  18. VEMAP 1: Selected Model Results

    Data.gov (United States)

    National Aeronautics and Space Administration — The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) was a multi-institutional, international effort addressing the response of biogeography and...

  19. Exploring Several Methods of Groundwater Model Selection

    Science.gov (United States)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) rank the models by their root mean square error (RMSE) obtained after UCODE-based model calibration; (2) calculate model probability using the GLUE method; (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC); and (4) evaluate model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow models. These methods selected as the best model one with average complexity (10 parameters) and the best parameter estimation (model 3).
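
    For the criterion-based approach (3), the sketch below shows how AIC, AICc and BIC follow from each model's calibration RMSE under an i.i.d. Gaussian-residual assumption; KIC additionally needs the parameter covariance and is omitted. The RMSE values, observation count and parameter counts are hypothetical:

```python
import numpy as np

def info_criteria(rmse, n_obs, n_params):
    """AIC, AICc and BIC for a least-squares calibration, using the fact
    that n*log(SSE/n) equals -2 log L up to an additive constant when the
    residuals are i.i.d. Gaussian."""
    sse = n_obs * rmse**2
    aic = n_obs * np.log(sse / n_obs) + 2 * n_params
    aicc = aic + 2 * n_params * (n_params + 1) / (n_obs - n_params - 1)
    bic = n_obs * np.log(sse / n_obs) + n_params * np.log(n_obs)
    return aic, aicc, bic

# Hypothetical calibration results for three alternative flow models
for name, rmse, k in [("model A", 0.42, 6), ("model B", 0.35, 10), ("model C", 0.34, 15)]:
    print(name, ["%.1f" % v for v in info_criteria(rmse, n_obs=120, n_params=k)])
```

    Note how model C's extra parameters buy almost no RMSE improvement over model B, so the criteria penalize it, which is exactly the over-parameterization guard described above.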

  20. Bovine Host Genetic Variation Influences Rumen Microbial Methane Production with Best Selection Criterion for Low Methane Emitting and Efficiently Feed Converting Hosts Based on Metagenomic Gene Abundance.

    Directory of Open Access Journals (Sweden)

    Rainer Roehe

    2016-02-01

    Methane produced by methanogenic archaea in ruminants contributes significantly to anthropogenic greenhouse gas emissions. The host genetic link controlling microbial methane production is unknown and appropriate genetic selection strategies are not developed. We used sire progeny group differences to estimate the host genetic influence on rumen microbial methane production in a factorial experiment consisting of crossbred breed types and diets. Rumen metagenomic profiling was undertaken to investigate links between microbial genes and methane emissions or feed conversion efficiency. Sire progeny groups differed significantly in their methane emissions measured in respiration chambers. Ranking of the sire progeny groups based on methane emissions or relative archaeal abundance was consistent overall and within diet, suggesting that archaeal abundance in ruminal digesta is under host genetic control and can be used to genetically select animals without measuring methane directly. In the metagenomic analysis of rumen contents, we identified 3970 microbial genes, of which 20 and 49 genes were significantly associated with methane emissions and feed conversion efficiency respectively. These explained 81% and 86% of the respective variation and were clustered in distinct functional gene networks. Methanogenesis genes (e.g. mcrA and fmdB) were associated with methane emissions, whilst host-microbiome cross-talk genes (e.g. TSTA3 and FucI) were associated with feed conversion efficiency. These results strengthen the idea that the host animal controls its own microbiota to a significant extent and open up the implementation of effective breeding strategies using rumen microbial gene abundance as a predictor for difficult-to-measure traits on a large number of hosts. Generally, the results provide a proof of principle to use the relative abundance of microbial genes in the gastrointestinal tract of different species to predict their influence on traits ...

  1. Fuzzy decision-making: a new method in model selection via various validity criteria

    International Nuclear Information System (INIS)

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered the first step in scientific investigations. Several alternative models may be candidates to express a phenomenon, and scientists use various criteria to select one model from among the competing models. Based on the solution of a fuzzy decision-making problem, this paper proposes a new method for model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper possibility distribution function for each criterion. Finally, minimization of a utility function composed of the possibility distribution functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran.
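
    A minimal sketch of the idea, with invented criteria and membership shapes; the paper's actual possibility distribution functions and utility are not reproduced, and the Bellman-Zadeh min intersection is used here in place of the paper's composite utility:

```python
import numpy as np

# Hypothetical validity scores for three candidate models
models = {
    "ARX":   {"rmse": 0.30, "aic": 210.0, "whiteness_p": 0.40},
    "ARMAX": {"rmse": 0.27, "aic": 205.0, "whiteness_p": 0.08},
    "NN":    {"rmse": 0.22, "aic": 230.0, "whiteness_p": 0.55},
}

def dec(x, good, bad):   # smaller is better: 1 below `good`, 0 above `bad`
    return float(np.clip((bad - x) / (bad - good), 0.0, 1.0))

def inc(x, bad, good):   # larger is better
    return float(np.clip((x - bad) / (good - bad), 0.0, 1.0))

def overall_possibility(s):
    # One possibility distribution per validity criterion
    mu = [dec(s["rmse"], 0.20, 0.40),
          dec(s["aic"], 200.0, 240.0),
          inc(s["whiteness_p"], 0.05, 0.50)]
    return min(mu)  # fuzzy intersection of all criteria

best = max(models, key=lambda m: overall_possibility(models[m]))
print({m: round(overall_possibility(s), 3) for m, s in models.items()}, "->", best)
```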

  2. Value of prostate specific antigen and prostatic volume ratio (PSA/V) as the selection criterion for US-guided prostatic biopsy

    International Nuclear Information System (INIS)

    Veneziano, S.; Paulica, P.; Querze', R.; Viglietta, G.; Trenta, A.

    1991-01-01

    US-guided biopsy was performed in 94 patients with lesions suspected at transrectal US. Histology demonstrated carcinoma in 43 cases, benign hyperplasia in 44, and prostatitis in 7. In all cases the prostate specific antigen (PSA) was measured and, by means of US, the prostatic volume (V) was calculated. PSA was related to the corresponding gland volume, which yielded the PSA/V ratio. Our study showed the PSA/V ratio to have higher sensitivity and specificity than the absolute PSA value in the diagnosis of prostatic carcinoma. The authors believe prostate US-guided biopsy to be: a) necessary when the suspected area has PSA/V ratio >0.15, and especially when PSA/V >0.30; b) not indicated when echo-structural alterations are associated with PSA/V <0.15, because these are most frequently due to benign lesions. The combined use of the PSA/V ratio and US is therefore suggested to select the patients in whom biopsy is to be performed.

  3. Selection of classification models from repository of model for water ...

    African Journals Online (AJOL)

    This paper proposes a new technique, the Model Selection Technique (MST), for the selection and ranking of models from a repository of models by combining three performance measures (Acc, TPR and TNR). This technique assigns a weight to each performance measure to find the most suitable model from the repository of ...

  4. A scale invariance criterion for LES parametrizations

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs

    2015-01-01

    Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check whether the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant, and it proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by hydrostatic balance, the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.

  5. A Dynamic Model for Limb Selection

    NARCIS (Netherlands)

    Cox, R.F.A; Smitsman, A.W.

    2008-01-01

    Two experiments and a model on limb selection are reported. In Experiment 1 left-handed and right-handed participants (N = 36) repeatedly used one hand for grasping a small cube. After a clear switch in the cube’s location, perseverative limb selection was revealed in both handedness groups. In

  6. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  7. Review and selection of unsaturated flow models

    International Nuclear Information System (INIS)

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-01-01

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  8. Graphical tools for model selection in generalized linear models.

    Science.gov (United States)

    Murray, K; Heritier, S; Müller, S

    2013-11-10

    Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are sparse. This article describes graphical methods that assist in the selection of models and the comparison of many different selection criteria. Specifically, we describe, for logistic regression, how to visualize measures of description loss and of model complexity to facilitate the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable for the model building process. We show with two case studies how these proposed tools are useful to learn more about important variables in the data and how they can assist the understanding of the model building process. Copyright © 2013 John Wiley & Sons, Ltd.
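
    One hedged way to realize bootstrap-based variable inclusion frequencies in code (scikit-learn assumed; an L1-penalized fit stands in for the article's criterion-based selection, and the data are synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

def inclusion_frequencies(X, y, n_boot=200, seed=0):
    """Bootstrap variable-inclusion frequencies for logistic regression.
    A variable 'enters' a resampled fit when its L1-penalized coefficient
    is nonzero; plotting these frequencies gives a variable inclusion plot."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # bootstrap resample with replacement
        clf = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10)
        clf.fit(X[idx], y[idx])
        counts += (np.abs(clf.coef_[0]) > 1e-8)
    return counts / n_boot           # one frequency per variable

rng = np.random.default_rng(4)
X = rng.normal(size=(150, 6))
y = (X[:, 0] - 0.8 * X[:, 2] + rng.normal(size=150) > 0).astype(int)
print(inclusion_frequencies(X, y, n_boot=50))
```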

  9. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method ...

  10. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    Science.gov (United States)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution to the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as the AIC.

  11. Model Estimation Using Ridge Regression with the Variance Normalization Criterion. Interim Report No. 2. The Education and Inequality in Canada Project.

    Science.gov (United States)

    Lee, Wan-Fung; Bulcock, Jeffrey Wilson

    The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…

  12. The time-profile of cell growth in fission yeast: model selection criteria favoring bilinear models over exponential ones

    Directory of Open Access Journals (Sweden)

    Sveiczer Akos

    2006-03-01

    Background: There is considerable controversy concerning the exact growth profile of size parameters during the cell cycle. Linear, exponential and bilinear models are commonly considered, and the same model may not apply for all species. Selection of the most adequate model to describe a given data-set requires the use of quantitative model selection criteria, such as the partial (sequential) F-test, the Akaike information criterion and the Schwarz Bayesian information criterion, which are suitable for comparing differently parameterized models in terms of the quality and robustness of the fit but have not yet been used in cell growth-profile studies. Results: Length increase data from representative individual fission yeast (Schizosaccharomyces pombe) cells measured on time-lapse films have been reanalyzed using these model selection criteria. To fit the data, an extended version of a recently introduced linearized biexponential (LinBiExp) model was developed, which makes possible a smooth, continuously differentiable transition between two linear segments and, hence, allows fully parametrized bilinear fittings. Despite relatively small differences, essentially all the quantitative selection criteria considered here indicated that the bilinear model was somewhat more adequate than the exponential model for fitting these fission yeast data. Conclusion: A general quantitative framework was introduced to judge the adequacy of bilinear versus exponential models in the description of growth time-profiles. For single cell growth, because of the relatively limited data-range, the statistical evidence is not strong enough to favor one model clearly over the other and to settle the bilinear versus exponential dispute. Nevertheless, for the present individual cell growth data for fission yeast, the bilinear model seems more adequate according to all metrics, especially in the case of wee1Δ cells.
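
    The comparison can be sketched as follows (Python with NumPy/SciPy assumed). The smooth bilinear form below is a generic softplus construction in the spirit of LinBiExp, not the paper's exact parameterization, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, L0, k):
    return L0 * np.exp(k * t)

def smooth_bilinear(t, y0, s1, s2, t0, eta):
    """Two linear segments with slopes s1, s2 joined smoothly around t0;
    eta controls the width of the transition."""
    return y0 + s1 * (t - t0) + (s2 - s1) * eta * np.log1p(np.exp((t - t0) / eta))

def aicc(y, yhat, k):
    n = len(y)
    sse = np.sum((y - yhat) ** 2)
    aic = n * np.log(sse / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical length-vs-time data for a single cell (microns, minutes)
t = np.linspace(0, 100, 40)
rng = np.random.default_rng(5)
y = smooth_bilinear(t, 8.0, 0.03, 0.07, 55.0, 4.0) + rng.normal(0, 0.08, t.size)

p_exp, _ = curve_fit(exponential, t, y, p0=[8.0, 0.005])
p_bil, _ = curve_fit(smooth_bilinear, t, y, p0=[8.0, 0.05, 0.05, 50.0, 5.0])
print("AICc exponential:", aicc(y, exponential(t, *p_exp), k=2))
print("AICc bilinear   :", aicc(y, smooth_bilinear(t, *p_bil), k=5))
```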

  13. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence.

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
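
    The brute-force reference method mentioned above reduces to averaging the likelihood over prior draws; a minimal sketch with a toy Gaussian model (all values invented, not the study's hydrological models):

```python
import numpy as np
from scipy.stats import norm

def bme_monte_carlo(log_likelihood, prior_sampler, n_draws=20_000, seed=0):
    """Brute-force Monte Carlo estimate of Bayesian model evidence:
    BME = integral of likelihood over the prior, approximated by the
    mean likelihood over prior draws (log-sum-exp for stability)."""
    rng = np.random.default_rng(seed)
    theta = prior_sampler(rng, n_draws)
    ll = np.array([log_likelihood(th) for th in theta])
    return np.logaddexp.reduce(ll) - np.log(n_draws)  # log BME

# Toy example: data y ~ N(theta, 1), prior theta ~ N(0, 2)
y = np.array([0.3, 0.8, 0.1, 0.5])
loglike = lambda th: norm.logpdf(y, loc=th, scale=1.0).sum()
prior = lambda rng, n: rng.normal(0.0, 2.0, size=n)
print("log BME:", bme_monte_carlo(loglike, prior))
```

    Comparing this estimate across competing models gives the Bayes-factor-style ranking against which the study benchmarks the information criteria.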

  14. Selecting model complexity in learning problems

    Energy Technology Data Exchange (ETDEWEB)

    Buescher, K.L. [Los Alamos National Lab., NM (United States); Kumar, P.R. [Illinois Univ., Urbana, IL (United States). Coordinated Science Lab.

    1993-10-01

    To learn (or generalize) from noisy data, one must resist the temptation to pick a model for the underlying process that overfits the data. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each model's complexity. We present a method that does not. Instead, it uses a natural, relative measure of each model's complexity. This method first creates a pool of "simple" candidate models using part of the data and then selects from among these by using the rest of the data.

  15. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR).
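
    A rough sketch of how such a permutation-based order selection can be implemented for Gaussian kernel PCA, assuming scikit-learn's KernelPCA: retain the components whose eigenvalues exceed the 95th percentile of eigenvalues obtained from column-permuted (decorrelated) data. The percentile, kernel scale heuristic and data are assumptions, not the authors' settings.

    ```python
    # Sketch: Parallel Analysis adapted to Gaussian kernel PCA.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 1, (100, 5)) for m in (0, 4)])  # two clusters

    def kpca_eigvals(X, gamma, n=10):
        kp = KernelPCA(n_components=n, kernel="rbf", gamma=gamma).fit(X)
        return kp.eigenvalues_      # scikit-learn >= 1.0; older: kp.lambdas_

    gamma = 1.0 / X.shape[1]        # a common heuristic kernel scale
    real = kpca_eigvals(X, gamma)

    # break feature covariance by permuting each column independently
    perm = np.array([
        kpca_eigvals(np.column_stack([rng.permutation(c) for c in X.T]), gamma)
        for _ in range(20)
    ])
    threshold = np.percentile(perm, 95, axis=0)
    order = int(np.sum(real > threshold))
    print("selected model order:", order)
    ```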

  16. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using a background model and posterior probability criteria to make the modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
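
    A toy sketch of the core idea: score each candidate track's pitch-interval sequence under a smoothed bigram model trained on melody data and select the highest-scoring track. The corpus, smoothing and per-symbol normalization are illustrative assumptions, and the paper's discriminative background model is not reproduced.

    ```python
    # Sketch: bigram language-model scoring of candidate MIDI tracks.
    from collections import Counter
    import math

    def intervals(pitches):
        # represent a track by its pitch-interval sequence
        return [b - a for a, b in zip(pitches, pitches[1:])]

    def train_bigram(seqs, alpha=1.0):
        uni, bi = Counter(), Counter()
        for s in seqs:
            uni.update(s)
            bi.update(zip(s, s[1:]))
        vocab = len(uni) or 1
        def logp(seq):
            # additively smoothed bigram log-likelihood, per symbol
            score = sum(math.log((bi[(a, b)] + alpha) / (uni[a] + alpha * vocab))
                        for a, b in zip(seq, seq[1:]))
            return score / max(len(seq) - 1, 1)
        return logp

    melody_corpus = [[2, 2, -4, 5, -1], [0, 2, 2, -2, -2]]   # toy interval data
    logp = train_bigram(melody_corpus)

    tracks = {"track0": [60, 62, 64, 60, 65], "track1": [40, 40, 47, 33, 40]}
    best = max(tracks, key=lambda k: logp(intervals(tracks[k])))
    print("melody track:", best)
    ```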

  17. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  18. Systematic narrative review of decision frameworks to select the appropriate modelling approaches for health economic evaluations.

    Science.gov (United States)

    Tsoi, B; O'Reilly, D; Jegathisawaran, J; Tarride, J-E; Blackhouse, G; Goeree, R

    2015-06-17

    In constructing or appraising a health economic model, an early consideration is whether the modelling approach selected is appropriate for the given decision problem. Frameworks and taxonomies that distinguish between modelling approaches can help make this decision more systematic, and this study aims to identify and compare the decision frameworks proposed to date on this topic area. A systematic review was conducted to identify frameworks from peer-reviewed and grey literature sources. The following databases were searched: OVID Medline and EMBASE; Wiley's Cochrane Library and Health Economic Evaluation Database; PubMed; and ProQuest. Eight decision frameworks were identified, each focused on a different set of modelling approaches and employing a different collection of selection criteria. The selection criteria can be categorized as either: (i) structural features (i.e. technical elements that are factual in nature) or (ii) practical considerations (i.e. context-dependent attributes). The most commonly mentioned structural features were population resolution (i.e. aggregate vs. individual) and interactivity (i.e. static vs. dynamic). Furthermore, understanding the needs of the end-users and stakeholders was frequently incorporated as a criterion within these frameworks. There is presently no universally-accepted framework for selecting an economic modelling approach. Rather, each highlights different criteria that may be of importance when determining whether a modelling approach is appropriate. Further discussion is thus necessary, as the modelling approach selected will impact the validity of the underlying economic model and have downstream implications on its efficiency, transparency and relevance to decision-makers.

  19. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
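
    As a rough illustration of selecting an embedding dimension by predictive likelihood, the sketch below scores each candidate dimension by the mean Gaussian negative log-likelihood of held-out one-step predictions from a k-nearest-neighbor predictor. The predictor, series and split are stand-in assumptions for the paper's nonparametric estimator.

    ```python
    # Sketch: pick the delay-embedding dimension p minimizing a negative
    # log-predictive likelihood (NLPL) on held-out one-step predictions.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    x = np.zeros(1200)
    x[0], x[1] = rng.normal(size=2)
    for t in range(2, x.size):          # noisy AR(2) series: true order is 2
        x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2] + rng.normal(0, 0.1)

    def delay_matrix(x, p):
        # rows are (x[t], ..., x[t+p-1]); targets are the next value x[t+p]
        X = np.column_stack([x[i:x.size - p + i] for i in range(p)])
        return X, x[p:]

    for p in range(1, 6):
        X, y = delay_matrix(x, p)
        n_train = 800
        knn = KNeighborsRegressor(n_neighbors=10).fit(X[:n_train], y[:n_train])
        resid = y[n_train:] - knn.predict(X[n_train:])
        sigma2 = np.mean(resid ** 2)
        nlpl = 0.5 * np.log(2 * np.pi * sigma2) + 0.5   # mean Gaussian NLL
        print(f"p={p}  NLPL={nlpl:.3f}")                # expect a minimum near p=2
    ```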

  20. Is body weight the most appropriate criterion to select patients eligible for low-dose pulmonary CT angiography? Analysis of objective and subjective image quality at 80 kVp in 100 patients

    Energy Technology Data Exchange (ETDEWEB)

    Szucs-Farkas, Zsolt; Strautz, Tamara; Patak, Michael A.; Kurmann, Luzia; Vock, Peter; Schindera, Sebastian T. [University Hospital and University of Berne, Department of Diagnostic, Interventional and Paediatric Radiology, Berne (Switzerland)

    2009-08-15

    The objective of this retrospective study was to assess image quality with pulmonary CT angiography (CTA) using 80 kVp and to find anthropomorphic parameters other than body weight (BW) to serve as selection criteria for low-dose CTA. Attenuation in the pulmonary arteries, anteroposterior and lateral diameters, cross-sectional area and soft-tissue thickness of the chest were measured in 100 consecutive patients weighing less than 100 kg with 80 kVp pulmonary CTA. Body surface area (BSA) and contrast-to-noise ratios (CNR) were calculated. Three radiologists analyzed arterial enhancement, noise, and image quality. Image parameters were compared between patients grouped by BW (group 1: 0-50 kg; groups 2-6: 51-100 kg in 10-kg increments). CNR was higher in patients weighing less than 60 kg than in the BW groups 71-99 kg (P between 0.025 and <0.001). Subjective ranking of enhancement (P=0.165-0.605), noise (P=0.063), and image quality (P=0.079) did not differ significantly across all patient groups. CNR correlated moderately strongly with weight (R=-0.585), BSA (R=-0.582), cross-sectional area (R=-0.544), and anteroposterior diameter of the chest (R=-0.457; P<0.001 for all parameters). We conclude that 80 kVp pulmonary CTA permits diagnostic image quality in patients weighing up to 100 kg. Body weight is a suitable criterion to select patients for low-dose pulmonary CTA. (orig.)

  1. Suboptimal Criterion Learning in Static and Dynamic Environments.

    Directory of Open Access Journals (Sweden)

    Elyse H Norton

    2017-01-01

    Full Text Available Humans often make decisions based on uncertain sensory information. Signal detection theory (SDT) describes detection and discrimination decisions as a comparison of stimulus "strength" to a fixed decision criterion. However, recent research suggests that responses depend on the recent history of stimuli and previous responses, implying that the decision criterion is updated trial-by-trial. The mechanisms underpinning criterion setting remain unknown. Here, we examine how observers learn to set a decision criterion in an orientation-discrimination task under both static and dynamic conditions. To investigate mechanisms underlying trial-by-trial criterion placement, we introduce a novel task in which participants explicitly set the criterion, and compare it to a more traditional discrimination task, allowing us to model this explicit indication of criterion dynamics. In each task, stimuli were ellipses with principal orientations drawn from two categories: Gaussian distributions with different means and equal variance. In the covert-criterion task, observers categorized a displayed ellipse. In the overt-criterion task, observers adjusted the orientation of a line that served as the discrimination criterion for a subsequently presented ellipse. We compared performance to the ideal Bayesian learner and several suboptimal models that varied in both computational and memory demands. Under static and dynamic conditions, we found that, in both tasks, observers used suboptimal learning rules. In most conditions, a model in which the recent history of past samples determines a belief about category means fit the data best for most observers and on average. Our results reveal dynamic adjustment of discrimination criterion, even after prolonged training, and indicate how decision criteria are updated over time.
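
    A minimal simulation of the kind of update rule described, in which the criterion shifts by a fixed amount after errors and decays toward a resting value. The shift direction, decay constant and stimulus distributions are illustrative assumptions rather than the fitted model.

    ```python
    # Sketch: trial-by-trial criterion updating with shift-and-decay dynamics.
    import numpy as np

    rng = np.random.default_rng(0)
    mu = {"A": -1.0, "B": 1.0}            # category means, unit-variance noise
    shift, decay, c_rest = 0.3, 0.8, 0.0

    c, trace = 0.0, []
    for trial in range(500):
        cat = rng.choice(["A", "B"])
        obs = rng.normal(mu[cat], 1.0)    # noisy internal measurement
        resp = "B" if obs > c else "A"
        if resp != cat:                   # shift only after errors (assumption)
            c += shift if resp == "B" else -shift
        c = c_rest + decay * (c - c_rest)  # strong decay toward resting value
        trace.append(c)

    print("criterion mean:", np.mean(trace), " sd:", np.std(trace))
    ```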

  2. On spatial mutation-selection models

    Energy Technology Data Exchange (ETDEWEB)

    Kondratiev, Yuri, E-mail: kondrat@math.uni-bielefeld.de [Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld (Germany); Kutoviy, Oleksandr, E-mail: kutoviy@math.uni-bielefeld.de, E-mail: kutovyi@mit.edu [Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld (Germany); Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Minlos, Robert, E-mail: minl@iitp.ru; Pirogov, Sergey, E-mail: pirogov@proc.ru [IITP, RAS, Bolshoi Karetnyi 19, Moscow (Russian Federation)

    2013-11-15

    We discuss the selection procedure in the framework of mutation models. We study the regulation for stochastically developing systems based on a transformation of the initial Markov process which includes a cost functional. The transformation of initial Markov process by cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system including the limiting behavior is studied for two types of mutation-selection models.

  3. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    Directory of Open Access Journals (Sweden)

    Tiago P. Peixoto

    2015-03-01

    Full Text Available The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.

  4. Comparison of Desertification Intensity in the Purified Wastewater Irrigated Lands with Normal Lands in Yazd Using of Soil Criterion of the IMDPA Model

    Directory of Open Access Journals (Sweden)

    M. Yektafar

    2016-09-01

    Full Text Available Introduction: Desertification is a complex phenomenon with environmental, socio-economic, and cultural impacts on natural resources. According to the definition of the United Nations Convention to Combat Desertification, desertification is land degradation in arid, semi-arid, and dry sub-humid regions resulting from climate change and human activities. Because access to good-quality water resources is limited in arid lands, it is necessary to use all forms of acceptable water resources, including wastewater. Since irrigation with sewage has the greatest effect on soil, this research analyzed the desertification intensity of lands irrigated with sewage and of the natural lands of the area, located near Yazd city, considering the soil criterion of the Iranian Model for Desertification Potential Assessment (IMDPA). Several studies have been carried out in Iran and worldwide to provide national, regional, or global desertification assessment models. A significant feature of the IMDPA is that its criteria and indicators are easily defined and measured, and the model can use geometric means of the criteria and indicators. Materials and Methods: In the first step, soil samples were taken randomly in each of the defined land units, taking the size of the area into account. Next, all indices related to the soil criterion, such as soil texture, soil deep gravel percentage, soil depth, and soil electrical conductivity, were evaluated for each land use (both irrigated and natural lands) and weighted according to the present conditions of the lands. Each index was scored according to the standard soil table that categorizes desertification. Then, the geometric mean of all indices was calculated and a map of the desertification intensity of the study area was prepared. Thus, four maps were prepared, one for each index. These maps were used to study both the quality and the effect of each index on desertification. Finally, these maps were …

  5. A Difference Criterion for Dimensionality Reduction

    Science.gov (United States)

    Aved, A. J.; Blasch, E.; Peng, J.

    2015-12-01

    A dynamic data-driven geoscience application includes hyperspectral scene classification, which has shown promising potential in many remote-sensing applications. A hyperspectral image of a scene's spectral radiance is typically measured by hundreds of contiguous spectral bands or features, ranging from visible/near-infrared (VNIR) to shortwave infrared (SWIR). Spectral-reflectance measurements provide rich information for object detection and classification. On the other hand, they generate a large number of features, resulting in a high-dimensional measurement space. However, a large number of features often poses challenges and can result in poor classification performance. This is due to the curse of dimensionality, which requires model reduction, uncertainty quantification and optimization for real-world applications. In such situations, feature extraction or selection methods play an important role by significantly reducing the number of features for building classifiers. In this work, we focus on efficient feature extraction using the dynamic data-driven applications systems (DDDAS) paradigm. Many dimension reduction techniques have been proposed in the literature. A well-known technique is Fisher's linear discriminant analysis (LDA). LDA finds the projection matrix that simultaneously maximizes a between-class scatter matrix and minimizes a within-class scatter matrix. However, LDA requires a matrix inverse, which can be a major issue when the within-class matrix is singular. We propose a difference criterion for dimension reduction that does not require a matrix inverse for software implementation. We show how to solve the optimization problem with semi-definite programming. In addition, we establish an error bound for the proposed algorithm. We demonstrate the connection between Relief feature selection and a two-class formulation of multi-class problems, thereby providing a sound basis for observed benefits associated with this formulation. Finally, we provide …
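
    One common inverse-free construction consistent with this description replaces LDA's ratio-type objective with the difference tr(W'(Sb - lam*Sw)W), which is maximized over orthonormal W by the top eigenvectors of the symmetric matrix Sb - lam*Sw. The sketch below uses that construction; the weight lam and the data are assumptions, and this is not necessarily the authors' exact criterion.

    ```python
    # Sketch: a "difference" criterion for dimension reduction, no matrix inverse.
    import numpy as np

    def scatter_matrices(X, y):
        # within-class (Sw) and between-class (Sb) scatter matrices
        mean = X.mean(axis=0)
        Sw = np.zeros((X.shape[1],) * 2)
        Sb = np.zeros_like(Sw)
        for c in np.unique(y):
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            d = (mc - mean)[:, None]
            Sb += Xc.shape[0] * (d @ d.T)
        return Sb, Sw

    def difference_projection(X, y, dim, lam=1.0):
        Sb, Sw = scatter_matrices(X, y)
        evals, evecs = np.linalg.eigh(Sb - lam * Sw)   # symmetric; no inverse
        return evecs[:, np.argsort(evals)[::-1][:dim]]

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(m, 1, (50, 10)) for m in (0, 3)])
    y = np.repeat([0, 1], 50)
    W = difference_projection(X, y, dim=1)
    print("projected class means:", (X @ W)[y == 0].mean(), (X @ W)[y == 1].mean())
    ```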

  6. Comparative Study on the Selection Criteria for Fitting Flood Frequency Distribution Models with Emphasis on Upper-Tail Behavior

    Directory of Open Access Journals (Sweden)

    Xiaohong Chen

    2017-05-01

    Full Text Available The upper tail of a flood frequency distribution is always of particular concern in flood control. However, different model selection criteria often give different optimal distributions when the focus is on the upper tail of the distribution. With emphasis on upper-tail behavior, five distribution selection criteria, including two hypothesis tests and three information-based criteria, are evaluated in selecting the best fitted distribution from eight widely used distributions, using datasets from the Thames River, Wabash River, Beijiang River and Huai River. The performance of the five selection criteria is verified by using a composite criterion with focus on upper-tail events. This paper demonstrates an approach for optimally selecting suitable flood frequency distributions. Results illustrate that (1) hypothesis tests and information-based criteria select different frequency distributions in the four rivers; hypothesis tests are more likely to choose complex, parametric models, information-based criteria prefer to choose simple, effective models, and different selection criteria have no particular tendency toward the tail of the distribution; (2) the information-based criteria perform better than hypothesis tests in most cases when the focus is on the goodness of predictions of the extreme upper-tail events, and the distributions selected by information-based criteria are more likely to be close to true values than the distributions selected by hypothesis-test methods in the upper tail of the frequency curve; (3) the proposed composite criterion not only can select the optimal distribution, but also can evaluate the error of the estimated value, which often plays an important role in risk assessment and engineering design. In order to decide on a particular distribution to fit the high flow, it would be better to use the composite criterion.
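
    As a small illustration of the information-criteria versus hypothesis-test contrast discussed above, the sketch below fits several common flood-frequency candidates by maximum likelihood and reports AIC next to a Kolmogorov-Smirnov p-value. The candidate set and synthetic annual maxima are assumptions, and the composite upper-tail criterion itself is not implemented.

    ```python
    # Sketch: compare candidate flood-frequency distributions by AIC and KS test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    peaks = stats.genextreme.rvs(c=-0.1, loc=1000, scale=300, size=80,
                                 random_state=rng)   # synthetic annual maxima

    candidates = {
        "GEV": stats.genextreme,
        "Gumbel": stats.gumbel_r,
        "Lognormal": stats.lognorm,
        "Gamma": stats.gamma,
    }
    for name, dist in candidates.items():
        params = dist.fit(peaks)                     # maximum-likelihood fit
        ll = np.sum(dist.logpdf(peaks, *params))
        aic = 2 * len(params) - 2 * ll
        ks = stats.kstest(peaks, dist.cdf, args=params)
        print(f"{name:10s} AIC={aic:8.1f}  KS p={ks.pvalue:.3f}")
    ```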

  7. Sparse model selection via integral terms

    Science.gov (United States)

    Schaeffer, Hayden; McCalla, Scott G.

    2017-08-01

    Model selection and parameter estimation are important for the effective integration of experimental data, scientific theory, and precise simulations. In this work, we develop a learning approach for the selection and identification of a dynamical system directly from noisy data. The learning is performed by extracting a small subset of important features from an overdetermined set of possible features using a nonconvex sparse regression model. The sparse regression model is constructed to fit the noisy data to the trajectory of the dynamical system while using the smallest number of active terms. Computational experiments detail the model's stability, robustness to noise, and recovery accuracy. Examples include nonlinear equations, population dynamics, chaotic systems, and fast-slow systems.
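
    A compact sketch in the same spirit, using sequentially thresholded least squares over a monomial library to recover logistic dynamics from noisy derivative data; the paper's integral-term formulation and nonconvex regularizer are replaced here by this simpler variant for brevity.

    ```python
    # Sketch: sparse model selection over a library of candidate terms.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 12, 1500)
    x = 1.0 / (1.0 + 9.0 * np.exp(-t))             # logistic: dx/dt = x - x^2
    dxdt = np.gradient(x, t) + rng.normal(0, 1e-4, t.size)

    library = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])
    names = ["1", "x", "x^2", "x^3"]

    def stlsq(Theta, dx, lam=0.05, iters=10):
        # sequentially thresholded least squares: fit, zero small terms, refit
        xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
        for _ in range(iters):
            small = np.abs(xi) < lam
            xi[small] = 0.0
            if (~small).any():
                xi[~small] = np.linalg.lstsq(Theta[:, ~small], dx, rcond=None)[0]
        return xi

    coef = stlsq(library, dxdt)
    print({n: round(c, 3) for n, c in zip(names, coef)})   # expect x: 1, x^2: -1
    ```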

  8. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System software: one without taxes set on tobacco consumption and another one with taxes set on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  9. Modeling and Selection of Software Service Variants

    OpenAIRE

    Wittern, John Erik

    2015-01-01

    Providers and consumers have to deal with variants, meaning alternative instances of a service's design, implementation, deployment, or operation, when developing or delivering software services. This work presents service feature modeling to deal with associated challenges, comprising a language to represent software service variants and a set of methods for modeling and subsequent variant selection. This work's evaluation includes a POC implementation and two real-life use cases.

  10. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform …

  11. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    … in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate …

  12. A Failure Criterion for Concrete

    DEFF Research Database (Denmark)

    Ottosen, N. S.

    1977-01-01

    A four-parameter failure criterion containing all the three stress invariants explicitly is proposed for short-time loading of concrete. It corresponds to a smooth convex failure surface with curved meridians, which open in the negative direction of the hydrostatic axis, and the trace in the deviatoric plane …

  13. On Using Selection Procedures with Binomial Models.

    Science.gov (United States)

    1983-10-01

    … (eds.), Shinko Tsusho Co. Ltd., Tokyo, Japan, pp. 501-533. Gupta, S. S. and Sobel, M. (1960). Selecting a subset containing the best of several …

  14. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is the fact that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.

  15. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-09-10

    Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the M&O CRWMS has the following responsibilities: To provide overall management and integration of modeling activities. To provide a framework for focusing modeling and model development. To identify areas that require increased or decreased emphasis. To ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process. It consists of a thorough review of existing models, testing of models which best fit the established requirements, and making recommendations for future development that should be conducted. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically in the future.

  16. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  17. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  18. Chemical identification using Bayesian model selection

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom; Fry, H. A. (Herbert A.); McVey, B. D. (Brian D.); Sander, E. (Eric)

    2002-01-01

    Remote detection and identification of chemicals in a scene is a challenging problem. We introduce an approach that uses some of the image's pixels to establish the background characteristics while other pixels represent the target for which we seek to identify all chemical species present. This leads to a generalized least squares problem in which we focus on 'subset selection' to identify the chemicals thought to be present. Bayesian model selection allows us to approximate the posterior probability that each chemical in the library is present by adding the posterior probabilities of all the subsets which include the chemical. We present results using realistic simulated data for the case with 1 to 5 chemicals present in each target, and compare performance to a hybrid forward and backward stepwise selection procedure using the F statistic.
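
    A toy sketch of the subset-posterior idea: weight every small subset of library spectra by a BIC approximation to its posterior probability, then sum the weights of subsets containing each chemical. The linear mixing model, library and BIC weighting are illustrative assumptions standing in for the paper's generalized least squares treatment.

    ```python
    # Sketch: BIC-weighted subset posteriors for the presence of each chemical.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n_bands, n_lib = 50, 6
    S = rng.random((n_bands, n_lib))        # library spectra (columns)
    y = S[:, [1, 4]] @ np.array([1.0, 0.7]) + rng.normal(0, 0.05, n_bands)

    def bic(subset):
        A = S[:, list(subset)]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ coef) ** 2)
        return n_bands * np.log(rss / n_bands) + len(subset) * np.log(n_bands)

    subsets = [s for k in range(1, 4) for s in combinations(range(n_lib), k)]
    b = np.array([bic(s) for s in subsets])
    w = np.exp(-0.5 * (b - b.min()))        # ~ posterior model weights
    w /= w.sum()

    # P(chemical c present) ~ total weight of subsets that include c
    post = {c: w[[c in s for s in subsets]].sum() for c in range(n_lib)}
    print({c: round(float(p), 3) for c, p in post.items()})  # expect 1, 4 high
    ```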

  19. Expatriates Selection: An Essay of Model Analysis

    Directory of Open Access Journals (Sweden)

    Rui Bártolo-Ribeiro

    2015-03-01

    Full Text Available The business expansion to other geographical areas with cultures different from those in which organizations were created and developed leads to the expatriation of employees to these destinations. Recruitment and selection procedures for expatriates do not always have the intended success, leading to an early return of these professionals with consequent organizational disorders. In this study, several articles published in the last five years were analyzed in order to identify the dimensions most frequently mentioned in the selection of expatriates in terms of success and failure. The characteristics in the selection process that may improve prediction of expatriates' adaptation to new cultural contexts of the same organization were studied according to the KSAOs model. Few references to the Knowledge, Skills and Abilities dimensions were found in the analyzed papers. There was a strong predominance of the evaluation of Other Characteristics, and more importance was given to dispositional factors than to situational factors in promoting the integration of the expatriates.

  20. Distance criterion for hydrogen bond

    Indian Academy of Sciences (India)

    Distance criterion for hydrogen bond. In a D-H...A contact, the D...A distance must be less than the sum of the van der Waals radii of the D and A atoms for it to be a hydrogen bond.

  1. A Failure Criterion for Concrete

    DEFF Research Database (Denmark)

    Ottosen, N. S.

    1977-01-01

    A four-parameter failure criterion containing all the three stress invariants explicitly is proposed for short-time loading of concrete. It corresponds to a smooth convex failure surface with curved meridians, which open in the negative direction of the hydrostatic axis, and the trace … are given for three typical ratios. A review of some earlier proposed failure criteria is included.

  2. Centrality as a Prior Criterion.

    Science.gov (United States)

    Lincoln, Yvonna S.; Tuttle, Jane

    Program discontinuance at colleges and universities is often linked to issues of program demand and quality. However, neither low demand nor low quality is sufficient for program discontinuance without a judgment on the criterion of the centrality of the program to the institution's core mission. Colleges' retrenchment and survival mechanisms…

  3. Reserve selection using nonlinear species distribution models.

    Science.gov (United States)

    Moilanen, Atte

    2005-06-01

    Reserve design is concerned with optimal selection of sites for new conservation areas. Spatial reserve design explicitly considers the spatial pattern of the proposed reserve network and the effects of that pattern on reserve cost and/or ability to maintain species there. The vast majority of reserve selection formulations have assumed a linear problem structure, which effectively means that the biological value of a potential reserve site does not depend on the pattern of selected cells. However, spatial population dynamics and autocorrelation cause the biological values of neighboring sites to be interdependent. Habitat degradation may have indirect negative effects on biodiversity in areas neighboring the degraded site as a result of, for example, negative edge effects or lower permeability for animal movement. In this study, I present a formulation and a spatial optimization algorithm for nonlinear reserve selection problems in grid-based landscapes that accounts for interdependent site values. The method is demonstrated using habitat maps and nonlinear habitat models for threatened birds in the Netherlands, and it is shown that near-optimal solutions are found for regions consisting of up to hundreds of thousands of grid cells, a landscape size much larger than those commonly attempted even with linear reserve selection formulations.

  4. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, behavioral construct of suitability is used to develop a multicriteria decision making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. Analytical hierarchy process technique is used to model the suitability considerations with a view to obtaining the suitability performance score in respect of each asset. A fuzzy multiple criteria decision making method is used to obtain the financial quality score of each asset based upon investor's rating on the financial criteria. Two optimization models are developed for optimal asset allocation considering simultaneously financial and suitability criteria. An empirical study is conducted on randomly selected assets from National Stock Exchange, Mumbai, India to demonstrate the effectiveness of the proposed methodology.

  5. Multi-dimensional model order selection

    Directory of Open Access Journals (Sweden)

    Roemer Florian

    2011-01-01

    Full Text Available Abstract Multi-dimensional model order selection (MOS) techniques achieve an improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.

  6. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. By contrast, naive procedures that do not account for such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  7. A simple parametric model selection test

    OpenAIRE

    Susanne M. Schennach; Daniel Wilhelm

    2014-01-01

    We propose a simple model selection test for choosing among two parametric likelihoods which can be applied in the most general setting without any assumptions on the relation between the candidate models and the true distribution. That is, both, one, or neither is allowed to be correctly specified or misspecified; they may be nested, non-nested, strictly non-nested or overlapping. Unlike in previous testing approaches, no pre-testing is needed, since in each case, the same test statistic to…

  8. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  9. Novel metrics for growth model selection.

    Science.gov (United States)

    Grigsby, Matthew R; Di, Junrui; Leroux, Andrew; Zipunnikov, Vadim; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William

    2018-01-01

    Literature surrounding the statistical modeling of childhood growth data involves a diverse set of potential models from which investigators can choose. However, the lack of a comprehensive framework for comparing non-nested models leads to difficulty in assessing model performance. This paper proposes a framework for comparing non-nested growth models using novel metrics of predictive accuracy based on modifications of the mean squared error criteria. Three metrics were created: normalized, age-adjusted, and weighted mean squared error (MSE). Predictive performance metrics were used to compare linear mixed effects models and functional regression models. Prediction accuracy was assessed by partitioning the observed data into training and test datasets. This partitioning was constructed to assess prediction accuracy backward (i.e., early growth), forward (i.e., late growth), in-range, and on new individuals. Analyses were done with height measurements from 215 Peruvian children with data spanning from near birth to 2 years of age. Functional models outperformed linear mixed effects models in all scenarios tested. In particular, prediction errors for functional concurrent regression (FCR) and functional principal component analysis models were approximately 6% lower when compared to linear mixed effects models. When we weighted subject-specific MSEs according to subject-specific growth rates during infancy, we found that FCR was the best performer in all scenarios. With this novel approach, we can quantitatively compare non-nested models and weight subgroups of interest to select the best performing growth model for a particular application or problem at hand.

  10. An Innovative Structural Mode Selection Methodology: Application for the X-33 Launch Vehicle Finite Element Model

    Science.gov (United States)

    Hidalgo, Homero, Jr.

    2000-01-01

    An innovative methodology for determining structural target mode selection and mode selection based on a specific criterion is presented. An effective approach to single out modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). We present a Root-Sum-Square (RSS) displacement method that computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes which most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valves and engine points, for use in flight control stability analysis and flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
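
    A minimal sketch of the RSS screen described above: take the root-sum-square of each mode shape over the degrees of freedom of interest and rank modes by the result. The mode-shape matrix and DOF indices below are random placeholders for real FEM output.

    ```python
    # Sketch: rank modes by RSS modal displacement at selected DOFs.
    import numpy as np

    rng = np.random.default_rng(0)
    n_dof, n_modes = 300, 40
    phi = rng.normal(size=(n_dof, n_modes))          # mode shapes (columns)
    freqs = np.sort(rng.uniform(5, 120, n_modes))    # Hz, illustrative

    selected_dofs = [12, 13, 14, 200, 201]           # e.g. actuator/avionics DOFs
    rss = np.sqrt(np.sum(phi[selected_dofs, :] ** 2, axis=0))

    ranking = np.argsort(rss)[::-1]                  # most influential first
    for m in ranking[:5]:
        print(f"mode {m:2d}  f={freqs[m]:6.1f} Hz  RSS={rss[m]:.3f}")
    ```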

  11. A systems evaluation model for selecting spent nuclear fuel storage concepts

    International Nuclear Information System (INIS)

    Postula, F.D.; Finch, W.C.; Morissette, R.P.

    1982-01-01

    This paper describes a system evaluation approach used to identify and evaluate monitored, retrievable fuel storage concepts that fulfill ten key criteria for meeting the functional requirements and system objectives of the National Nuclear Waste Management Program. The selection criteria include health and safety, schedules, costs, socio-economic factors and environmental factors. The methodology used to establish the selection criteria, develop a weight of importance for each criterion and assess the relative merit of each storage system is discussed. The impact of cost relative to technical criteria is examined along with experience in obtaining relative merit data and its application in the model. Topics considered include spent fuel storage requirements, functional requirements, preliminary screening, and Monitored Retrievable Storage (MRS) system evaluation. It is concluded that the proposed system evaluation model is universally applicable when many concepts in various stages of design and cost development need to be evaluated

  12. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach that is applied in major fields of biomedical, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks to find the best subset of features describing data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods which are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information-based criteria. Significant improvement in performance, computational time, and selection of minimally sized feature subsets is achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution of desirable characteristics.
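
    As a small illustration of the wrapper idea, the sketch below runs greedy forward selection with cross-validated accuracy of the wrapped classifier as the objective. The thesis's parallel genetic search and multi-objective design are replaced by this simpler loop, and the dataset is synthetic.

    ```python
    # Sketch: greedy forward wrapper feature selection with CV accuracy.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=300, n_features=15, n_informative=4,
                               random_state=0)
    clf = LogisticRegression(max_iter=1000)      # the wrapped classifier

    selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
    while remaining:
        scores = {f: cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean()
                  for f in remaining}
        f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
        if s_best <= best_score:                 # stop when no feature helps
            break
        selected.append(f_best)
        remaining.remove(f_best)
        best_score = s_best

    print("selected features:", selected, " CV accuracy:", round(best_score, 3))
    ```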

  13. Optimization of multi-environment trials for genomic selection based on crop models.

    Science.gov (United States)

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling thanks to crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim, and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed estimating the genetic parameters with lower error, leading to higher QTL detection power and higher prediction accuracies. METs defined with OptiMET were on average more efficient than random METs composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit METs and the phenotyping tools that are currently developed.

  14. Failure Criterion for Brick Masonry: A Micro-Mechanics Approach

    Science.gov (United States)

    Kawa, Marek

    2015-02-01

    The paper deals with the formulation of failure criterion for an in-plane loaded masonry. Using micro-mechanics approach the strength estimation for masonry microstructure with constituents obeying the Drucker-Prager criterion is determined numerically. The procedure invokes lower bound analysis: for assumed stress fields constructed within masonry periodic cell critical load is obtained as a solution of constrained optimization problem. The analysis is carried out for many different loading conditions at different orientations of bed joints. The performance of the approach is verified against solutions obtained for corresponding layered and block microstructures, which provides the upper and lower strength bounds for masonry microstructure, respectively. Subsequently, a phenomenological anisotropic strength criterion for masonry microstructure is proposed. The criterion has a form of conjunction of Jaeger critical plane condition and Tsai-Wu criterion. The model proposed is identified based on the fitting of numerical results obtained from the microstructural analysis. Identified criterion is then verified against results obtained for different loading orientations. It appears that strength of masonry microstructure can be satisfactorily described by the criterion proposed.

  15. Failure Criterion for Brick Masonry: A Micro-Mechanics Approach

    Directory of Open Access Journals (Sweden)

    Kawa Marek

    2015-02-01

    Full Text Available The paper deals with the formulation of failure criterion for an in-plane loaded masonry. Using micro-mechanics approach the strength estimation for masonry microstructure with constituents obeying the Drucker-Prager criterion is determined numerically. The procedure invokes lower bound analysis: for assumed stress fields constructed within masonry periodic cell critical load is obtained as a solution of constrained optimization problem. The analysis is carried out for many different loading conditions at different orientations of bed joints. The performance of the approach is verified against solutions obtained for corresponding layered and block microstructures, which provides the upper and lower strength bounds for masonry microstructure, respectively. Subsequently, a phenomenological anisotropic strength criterion for masonry microstructure is proposed. The criterion has a form of conjunction of Jaeger critical plane condition and Tsai-Wu criterion. The model proposed is identified based on the fitting of numerical results obtained from the microstructural analysis. Identified criterion is then verified against results obtained for different loading orientations. It appears that strength of masonry microstructure can be satisfactorily described by the criterion proposed.

  16. Bayesian Model Selection in Geophysics: The evidence

    Science.gov (United States)

    Vrugt, J. A.

    2016-12-01

    Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes' theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data D, is equivalent to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g. porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet is of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site, in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute force Monte Carlo method, and the Laplace-Metropolis method.

  17. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.

  18. Decision criterion dynamics in animals performing an auditory detection task.

    Directory of Open Access Journals (Sweden)

    Robert W Mill

    Full Text Available Classical signal detection theory attributes bias in perceptual decisions to a threshold criterion, against which sensory excitation is compared. The optimal criterion setting depends on the signal level, which may vary over time, and about which the subject is naïve. Consequently, the subject must optimise its threshold by responding appropriately to feedback. Here a series of experiments was conducted, and a computational model applied, to determine how the decision bias of the ferret in an auditory signal detection task tracks changes in the stimulus level. The time scales of criterion dynamics were investigated by means of a yes-no signal-in-noise detection task, in which trials were grouped into blocks that alternately contained easy- and hard-to-detect signals. The responses of the ferrets implied both long- and short-term criterion dynamics. The animals exhibited a bias in favour of responding "yes" during blocks of harder trials, and vice versa. Moreover, the outcome of each single trial had a strong influence on the decision at the next trial. We demonstrate that the single-trial and block-level changes in bias are a manifestation of the same criterion update policy by fitting a model, in which the criterion is shifted by fixed amounts according to the outcome of the previous trial and decays strongly towards a resting value. The apparent block-level stabilisation of bias arises as the probabilities of outcomes and shifts on single trials mutually interact to establish equilibrium. To gain an intuition into how stable criterion distributions arise from specific parameter sets we develop a Markov model which accounts for the dynamic effects of criterion shifts. Our approach provides a framework for investigating the dynamics of decisions at different timescales in other species (e.g., humans) and in other psychological domains (e.g., vision, memory).

  19. Selecting a model of supersymmetry breaking mediation

    International Nuclear Information System (INIS)

    AbdusSalam, S. S.; Allanach, B. C.; Dolan, M. J.; Feroz, F.; Hobson, M. P.

    2009-01-01

    We study the problem of selecting between different mechanisms of supersymmetry breaking in the minimal supersymmetric standard model using current data. We evaluate the Bayesian evidence of four supersymmetry breaking scenarios: mSUGRA, mGMSB, mAMSB, and moduli mediation. The results show a strong dependence on the dark matter assumption. Using the inferred cosmological relic density as an upper bound, minimal anomaly mediation is at least moderately favored over the CMSSM. Our fits also indicate that evidence for a positive sign of the μ parameter is moderate at best. We present constraints on the anomaly and gauge mediated parameter spaces and some previously unexplored aspects of the dark matter phenomenology of the moduli mediation scenario. We use sparticle searches, indirect observables and dark matter observables in the global fit and quantify robustness with respect to prior choice. We quantify how much information is contained within each constraint.

  20. A multipole acceptability criterion for electronic structure theory

    International Nuclear Information System (INIS)

    Schwegler, E.; Challacombe, M.; Head-Gordon, M.

    1998-01-01

    Accurate and computationally inexpensive estimates of multipole expansion errors are crucial to the success of several fast electronic structure methods. In this paper, a new nonempirical multipole acceptability criterion is described that is directly applicable to expansions of high order moments. Several model calculations typical of electronic structure theory are presented to demonstrate its performance. For cases involving small translation distances, accuracies are increased by up to five orders of magnitude over an empirical criterion. The new multipole acceptance criterion is on average within an order of magnitude of the exact expansion error. Use of the multipole acceptance criterion in hierarchical multipole based methods as well as in traditional electronic structure methods is discussed.

  1. Multi-Criterion Two-Sided Matching of Public–Private Partnership Infrastructure Projects: Criteria and Methods

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-04-01

    Full Text Available Two kinds of evaluative criteria are associated with Public–Private Partnership (PPP) infrastructure projects, i.e., private evaluative criteria and public evaluative criteria. These evaluative criteria are inversely related: the higher the public benefits, the lower the private surplus. To balance evaluative criteria in the Two-Sided Matching (TSM) decision, this paper develops a quantitative matching decision model to select an optimal matching scheme for PPP infrastructure projects based on the Hesitant Fuzzy Set (HFS) under unknown evaluative criterion weights. In the model, HFS is introduced to describe values of the evaluative criteria, and the multi-criterion information given by groups is fully considered. The optimization model is built and solved by maximizing the whole deviation of each criterion, so that the evaluative criterion weights are determined objectively. Then, the match-degree of the two sides is calculated and a multi-objective optimization model is introduced to select an optimal matching scheme via a min-max approach. The results provide new insights into, and implications of, the influence of evaluative criteria in the TSM decision.
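
    A compact numpy sketch of the maximizing-deviation weighting step (our illustration with crisp, made-up scores; the paper's actual formulation operates on hesitant fuzzy elements):

        import numpy as np

        # rows = matching alternatives, columns = evaluative criteria (illustrative scores)
        S = np.array([[0.7, 0.4, 0.9],
                      [0.6, 0.8, 0.3],
                      [0.9, 0.5, 0.6]])

        # total absolute deviation of each criterion over all pairs of alternatives
        dev = np.abs(S[:, None, :] - S[None, :, :]).sum(axis=(0, 1))

        w = dev / dev.sum()  # criteria that discriminate more strongly get larger weights
        print(w)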

  2. Extensions and applications of the Bohm criterion

    Science.gov (United States)

    Baalrud, Scott D.; Scheiner, Brett; Yee, Benjamin; Hopkins, Matthew; Barnat, Edward

    2015-04-01

    The generalized Bohm criterion is revisited in the context of incorporating kinetic effects of the electron and ion distribution functions into the theory. The underlying assumptions and results of two different approaches are compared: the conventional ‘kinetic Bohm criterion’ and a fluid-moment hierarchy approach. The former is based on the asymptotic limit of an infinitely thin sheath (λD/l = 0), whereas the latter is based on a perturbative expansion of a sheath that is thin compared to the plasma (λD/l ≪ 1). Here λD is the Debye length, which characterizes the sheath length scale, and l is a measure of the plasma or presheath length scale. The consequences of these assumptions are discussed in terms of how they restrict the class of distribution functions to which the resulting criteria can be applied. Two examples are considered to provide concrete comparisons between the two approaches. The first is a Tonks-Langmuir model including a warm ion source (Robertson 2009 Phys. Plasmas 16 103503). This highlights a substantial difference between the conventional kinetic theory, which predicts slow ions dominate at the sheath edge, and the fluid moment approach, which predicts slow ions have little influence. The second example considers planar electrostatic probes biased near the plasma potential using model equations and particle-in-cell simulations. This demonstrates a situation where electron kinetic effects alter the Bohm criterion, leading to a subsonic ion flow at the sheath edge.

  3. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    Full Text Available The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use HMM for stock selection. We first use HMM to make monthly regime predictions for four macroeconomic variables: inflation (consumer price index, CPI), the industrial production index (INDPRO), a stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had regimes similar to the forecasted ones. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics were well rewarded, and assign scores and corresponding weights for each of the stock characteristics. A composite score for each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top-ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
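
    A minimal sketch of the monthly regime-prediction step using the third-party hmmlearn package (our illustration; the two-regime setup and the synthetic series are assumptions, not the authors' calibration):

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(1)
        # stand-in for one monthly macro series, e.g. CPI changes (illustrative data)
        x = np.concatenate([rng.normal(0.0, 1.0, 120),
                            rng.normal(2.0, 0.5, 60)]).reshape(-1, 1)

        model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
        model.fit(x)                # calibrate the HMM parameters (Baum-Welch)
        regimes = model.predict(x)  # most likely regime sequence (Viterbi)
        print("predicted current regime:", regimes[-1])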

  4. Variable Selection in Multivariable Regression Using SAS/IML

    Directory of Open Access Journals (Sweden)

    Ali A. Al-Subaihi

    2002-11-01

    Full Text Available This paper introduces a SAS/IML program to select among multivariate model candidates based on a few well-known multivariate model selection criteria. Stepwise regression and all-possible-regression are considered. The program is user friendly and requires the user to paste or read the data at the beginning of the module, include the names of the dependent and independent variables (the y's and the x's), and then run the module. The program produces the multivariate candidate models based on the following criteria: Forward Selection, Forward Stepwise Regression, Backward Elimination, Mean Square Error, Coefficient of Multiple Determination, Adjusted Coefficient of Multiple Determination, Akaike's Information Criterion, the Corrected Form of Akaike's Information Criterion, the Hannan and Quinn Information Criterion, the Corrected Form of the Hannan and Quinn (HQc) Information Criterion, Schwarz's Criterion, and Mallows' Cp. The output also provides detailed as well as summarized results.
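
    For readers without SAS/IML, the same criteria are straightforward to reproduce; a small Python sketch (ours, on illustrative data) scores every predictor subset by AIC, corrected AIC, Hannan-Quinn and Schwarz's criterion:

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(2)
        n = 100
        X = rng.normal(size=(n, 4))
        y = 2 * X[:, 0] - X[:, 2] + rng.normal(size=n)  # only x1 and x3 matter

        def criteria(idx):
            Xs = np.column_stack([np.ones(n), X[:, idx]])
            k = Xs.shape[1]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            aic = n * np.log(rss / n) + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)
            hq = n * np.log(rss / n) + 2 * k * np.log(np.log(n))
            sbc = n * np.log(rss / n) + k * np.log(n)
            return aic, aicc, hq, sbc

        for r in range(1, 5):
            for idx in combinations(range(4), r):
                print(idx, ["%.1f" % c for c in criteria(list(idx))])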

  5. Psyche Mission: Scientific Models and Instrument Selection

    Science.gov (United States)

    Polanskey, C. A.; Elkins-Tanton, L. T.; Bell, J. F., III; Lawrence, D. J.; Marchi, S.; Park, R. S.; Russell, C. T.; Weiss, B. P.

    2017-12-01

    NASA has chosen to explore (16) Psyche with their 14th Discovery-class mission. Psyche is a 226-km diameter metallic asteroid hypothesized to be the exposed core of a planetesimal that was stripped of its rocky mantle by multiple hit and run collisions in the early solar system. The spacecraft launch is planned for 2022 with arrival at the asteroid in 2026 for 21 months of operations. The Psyche investigation has five primary scientific objectives: A. Determine whether Psyche is a core, or if it is unmelted material. B. Determine the relative ages of regions of Psyche's surface. C. Determine whether small metal bodies incorporate the same light elements as are expected in the Earth's high-pressure core. D. Determine whether Psyche was formed under conditions more oxidizing or more reducing than Earth's core. E. Characterize Psyche's topography. The mission's task was to select the appropriate instruments to meet these objectives. However, exploring a metal world, rather than one made of ice, rock, or gas, requires development of new scientific models for Psyche to support the selection of the appropriate instruments for the payload. If Psyche is indeed a planetary core, we expect that it should have a detectable magnetic field. However, the strength of the magnetic field can vary by orders of magnitude depending on the formational history of Psyche. The implications of both the extreme low-end and the high-end predictions impact the magnetometer and mission design. For the imaging experiment, what can the team expect for the morphology of a heavily impacted metal body? Efforts are underway to further investigate the differences in crater morphology between high velocity impacts into metal and rock to be prepared to interpret the images of Psyche when they are returned. Finally, elemental composition measurements at Psyche using nuclear spectroscopy encompass a new and unexplored phase space of gamma-ray and neutron measurements. We will present some end

  6. Overcoming the Criterion Problem in the Evaluation of Library Performance.

    Science.gov (United States)

    Knightly, John J.

    1979-01-01

    Library performance criteria used by managers in 62 academic, special, and public libraries are analyzed to measure the extent to which criteria proposed in the literature are actually used, to identify types of criteria in use, and to develop guidelines for future criterion selection. (Author)

  7. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.

  8. Formulation of cross-anisotropic failure criterion for soils

    Directory of Open Access Journals (Sweden)

    Yi-fei Sun

    2013-10-01

    Full Text Available Inherently anisotropic soil fabric has a considerable influence on soil strength. To model this kind of inherent anisotropy, a three-dimensional anisotropic failure criterion was proposed, employing a scalar-valued anisotropic variable and a modified general three-dimensional isotropic failure criterion. The scalar-valued anisotropic variable in all sectors of the deviatoric plane was defined by correlating a normalized stress tensor with a normalized fabric tensor. A detailed comparison between the available experimental data and the corresponding model predictions in the deviatoric plane was conducted. The proposed failure criterion was shown to predict the failure behavior well in all sectors, especially in sector II, with the Lode angle ranging between 60° and 120°, where the prediction was almost in accordance with the test data. However, it was also observed that the proposed criterion overestimated the strength of dense Santa Monica Beach sand in sector III, where the intermediate principal stress ratio b varied from approximately 0.2 to 0.8, and slightly underestimated the strength when b was between approximately 0.8 and 1. The difference between the model predictions and experimental data was attributed to the occurrence of shear banding, which might reduce the measured strength. Therefore, the proposed anisotropic failure criterion has a strong ability to characterize the failure behavior of various soils and potentially allows a better description of the influence of the loading direction with respect to the soil fabric.

  9. A new Russell model for selecting suppliers

    NARCIS (Netherlands)

    Azadi, Majid; Shabani, Amir; Farzipoor Saen, Reza

    2014-01-01

    Recently, supply chain management (SCM) has been considered by many researchers. Supplier evaluation and selection plays a significant role in establishing an effective SCM. One of the techniques that can be used for selecting suppliers is data envelopment analysis (DEA). In some situations, to

  10. Selective experimental review of the Standard Model

    International Nuclear Information System (INIS)

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f × SU(2)_L × U(1) with 18 parameters. The parameters are α_s, α_QED, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses M_e, M_μ, M_τ; the quark masses M_d, M_s, M_b, and M_u, M_c, M_t; and finally, the quark mixing angles θ_1, θ_2, θ_3, and the CP-violating phase δ. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author will discuss a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of qq̄ states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures

  11. ADDED VALUE AS EFFICIENCY CRITERION FOR INDUSTRIAL PRODUCTION PROCESS

    Directory of Open Access Journals (Sweden)

    L. M. Korotkevich

    2016-01-01

    Full Text Available A review of the literature has shown that the majority of researchers use classical efficiency criteria for constructing an optimization model of a production process: profit maximization; cost minimization; maximization of commercial product output; minimization of backlog in product demand; minimization of total time consumption due to production changes. The paper proposes to use an index of added value as an efficiency criterion because it combines the economic and social interests of all the main interested parties in the business activity: national government, property owners, employees, and investors. The following types of added value have been considered in the paper: joint-stock, market, monetary, economic, and notional (gross, net, real). The paper suggests using an index of real value added as an efficiency criterion. Such an approach permits notional added value to be brought into a comparable form, because added value can be increased not only through efficiency improvements in enterprise activity but also through environmental factors, such as an excess of the rate of export price increase over the rate of import price growth. An analysis of methods for calculating real value added has been made on a country-by-country basis (extrapolation, simple and double deflation). A method of double deflation has been selected on the basis of this analysis, computed according to the Laspeyres, Paasche, or Fisher indices. It is concluded that the expressions used do not fully take into account the economic peculiarities of the Republic of Belarus: they are inappropriate when product cost is differentiated according to marketing outlets, and they do not take into account differences in exchange rates across several currencies, which are reflected in the export price of a released product and the import prices of raw materials, supplies and component parts. Taking this into consideration, expressions for the calculation of real value added have been specified.

  12. Common Criterion For Failure Of Different Materials

    Science.gov (United States)

    Beyer, Rodney B.

    1992-01-01

    Common scaling criterion found to relate some physical quantities characterizing tensile failures of three different solid propellant materials. Tensile failures of different rubbery propellants characterized by similar plots.

  13. Model selection in Bayesian segmentation of multiple DNA alignments.

    Science.gov (United States)

    Oldmeadow, Christopher; Keith, Jonathan M

    2011-03-01

    The analysis of multiple sequence alignments is allowing researchers to glean valuable insights into evolution, as well as identify genomic regions that may be functional, or discover novel classes of functional elements. Understanding the distribution of conservation levels that constitutes the evolutionary landscape is crucial to distinguishing functional regions from non-functional. Recent evidence suggests that a binary classification of evolutionary rates is inappropriate for this purpose and finds only highly conserved functional elements. Given that the distribution of evolutionary rates is multi-modal, determining the number of modes is of paramount concern. Through simulation, we evaluate the performance of a number of information criterion approaches derived from MCMC simulations in determining the dimension of a model. We utilize a deviance information criterion (DIC) approximation that is more robust than the approximations from other information criteria, and show our information criteria approximations do not produce superfluous modes when estimating conservation distributions under a variety of circumstances. We analyse the distribution of conservation for a multiple alignment comprising four primate species and mouse, and repeat this on two additional multiple alignments of similar species. We find evidence of six distinct classes of evolutionary rates that appear to be robust to the species used. Source code and data are available at http://dl.dropbox.com/u/477240/changept.zip.
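
    The DIC computation itself is simple once posterior draws are available; a generic sketch (ours, for a toy normal-mean model rather than the changept segmentation):

        import numpy as np

        rng = np.random.default_rng(3)
        y = rng.normal(1.0, 1.0, 50)  # toy data

        # stand-in for posterior draws of the mean from an MCMC run
        mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 5000)

        def deviance(mu):
            # -2 * log-likelihood of a Normal(mu, 1) model
            return np.sum((y - mu) ** 2) + len(y) * np.log(2 * np.pi)

        d_bar = np.mean([deviance(m) for m in mu_draws])  # posterior mean deviance
        d_hat = deviance(mu_draws.mean())                 # deviance at the posterior mean
        p_d = d_bar - d_hat                               # effective number of parameters
        print("pD =", p_d, "DIC =", d_bar + p_d)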

  14. Extended equal areas criterion: foundations and applications

    Energy Technology Data Exchange (ETDEWEB)

    Yusheng, Xue [Nanjing Automation Research Institute, Nanjing (China)]

    1994-12-31

    The extended equal area criterion (EEAC) provides analytical expressions for ultra-fast transient stability assessment, flexible sensitivity analysis, and means for preventive and emergency controls. Its outstanding performance has been demonstrated by thousands upon thousands of simulations on more than 50 real power systems and by on-line operation records in an EMS environment of the Northeast China Power System since September 1992. However, this work had mainly been based on heuristics and simulations. This paper lays a theoretical foundation for EEAC and brings to light the mechanism of transient stability. It proves that the dynamic EEAC furnishes a necessary and sufficient condition for stability of multi-machine systems with arbitrarily detailed models, in the sense of the integration accuracy. This establishes a new platform for further advancing EEAC and for better understanding of the problems involved. An overview of EEAC applications in China is also given in this paper. (author) 30 refs.

  15. A New Elasto-Viscoplastic Damage Model Combined with the Generalized Hoek-Brown Failure Criterion for Bedded Rock Salt and its Application

    Science.gov (United States)

    Ma, Lin-jian; Liu, Xin-yu; Fang, Qin; Xu, Hong-fa; Xia, Hui-min; Li, Er-bing; Yang, Shi-gang; Li, Wen-pei

    2013-01-01

    According to the requirement of the West-East Gas Transmission Project in China, the solution-mined cavities located in the Jintan bedded salt formation of Jiangsu province will be utilized for natural gas storage. This task is more challenging than conventional salt dome cavern construction and operation due to the heterogeneous bedding layers of the bedded salt formation. A three-dimensional creep damage constitutive model combined with the generalized Hoek-Brown model is exclusively formulated and validated with a series of strength and creep tests for the bedded rock salt. The viscoplastic model, which takes the coupled creep damage and the failure behavior under various stress states into account, enables both the three creep phases and the deformation induced by viscous damage and plastic flow to be calculated. A further geomechanical analysis of the rapid gas withdrawal for the thin-bedded salt cavern was performed by implementing the proposed model in the finite difference software FLAC3D. The volume convergence, the damage and failure propagation of the cavern, as well as the strain rate of the salt around the cavern, were evaluated and discussed in detail. Finally, based on the simulation study, a 7-MPa minimum internal pressure is suggested to ensure the structural stability of the Jintan bedded salt cavern. The results obtained from these investigations provide the necessary input for the design and construction of the cavern project.

  16. Evaluation of Lunar Prodigy dual-energy X-ray absorptiometry for assessing body composition in healthy persons and patients by comparison with the criterion 4-component model.

    Science.gov (United States)

    Williams, Jane E; Wells, Jonathan C K; Wilson, Catherine M; Haroun, Dalia; Lucas, Alan; Fewtrell, Mary S

    2006-05-01

    Dual-energy X-ray absorptiometry (DXA) is widely used to assess body composition in research and clinical practice. Several studies have evaluated its accuracy in healthy persons; however, little attention has been directed to the same issue in patients. The objective was to compare the accuracy of the Lunar Prodigy DXA for body-composition analysis with that of the reference 4-component (4C) model in healthy subjects and in patients with 1 of 3 disease states. A total of 215 subjects aged 5.0-21.3 y (n = 122 healthy nonobese subjects, n = 55 obese patients, n = 26 cystic fibrosis patients, and n = 12 patients with glycogen storage disease). Fat mass (FM), fat-free mass (FFM), and weight were measured by DXA and the 4C model. The accuracy of DXA-measured body-composition outcomes differed significantly between groups. Factors independently predicting bias in weight, FM, FFM, and percentage body fat in multivariate models included age, sex, size, and disease state. Biases in FFM were not mirrored by equivalent opposite biases in FM because of confounding biases in weight. The bias of DXA varies according to the sex, size, fatness, and disease state of the subjects, which indicates that DXA is unreliable for patient case-control studies and for longitudinal studies of persons who undergo significant changes in nutritional status between measurements. A single correction factor cannot adjust for inconsistent biases.

  17. Risk acceptance criterion for tanker oil spill risk reduction measures.

    Science.gov (United States)

    Psarros, George; Skjong, Rolf; Vanem, Erik

    2011-01-01

    This paper is aimed at investigating whether there is ample support for the view that the acceptance criterion for evaluating measures for the prevention of oil spills from tankers should be based on cost-effectiveness considerations. One such criterion is the Cost of Averting a Tonne of oil Spilt (CATS), whose target value is updated here by elaborating the inherent uncertainties of oil spill costs and establishing a value for the criterion's assurance factor. To this end, a value of $80,000/t is proposed as a sensible CATS criterion, and the proposed value for the assurance factor F=1.5 is supported by the retrieved Protection and Indemnity (P&I) Clubs' Annual Reports. It is envisaged that this criterion would allow the conversion of direct and indirect costs into a non-market value for the optimal allocation of resources between the various parties investing in shipping. A review of previous cost estimation models for oil spills is presented and a probability distribution (log-normal) is fitted to the available oil spill cost data; the mean value of the distribution is used for deriving the updated CATS criterion value, although the difference between the initial and the updated CATS criterion in the percentiles of the distribution is small. The current analysis finds costs partly lower than the values predicted by the published estimation models. The costs are also found to depend on the type of accident, which is in agreement with the results of previous studies. Other proposals for acceptance criteria are reviewed and it is argued that the CATS criterion can be considered the best candidate. Evidence is provided that the CATS approach is practical and meaningful by including examples of successful applications in actual risk assessments. Finally, it is suggested that the criterion may be refined subject to more readily available cost data and experience gained from future
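
    The resulting decision rule is easy to state in code; a sketch (our illustration, with hypothetical cost and spill-reduction figures):

        CATS_TARGET = 80_000  # USD per tonne of oil spill averted, as proposed above

        def cats_check(measure_cost_usd, tonnes_averted):
            """A measure is cost-effective if its cost per averted tonne is below the target."""
            cats = measure_cost_usd / tonnes_averted
            return cats, cats <= CATS_TARGET

        # hypothetical risk-control option: $12M lifetime cost, 200 t of spill averted
        print(cats_check(12_000_000, 200))  # (60000.0, True) -> justified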

  18. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    Science.gov (United States)

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  20. Uncertainty associated with selected environmental transport models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-11-01

    A description is given of the capabilities of several models to predict accurately either pollutant concentrations in environmental media or radiological dose to human organs. The models are discussed in three sections: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations. This procedure is infeasible for food chain models and, therefore, the uncertainty embodied in the models' input parameters, rather than the model output, is estimated. Aquatic transport models are divided into one-dimensional, longitudinal-vertical, and longitudinal-horizontal models. Several conclusions were made about the ability of the Gaussian plume atmospheric dispersion model to predict accurately downwind air concentrations from releases under several sets of conditions. It is concluded that no validation study has been conducted to test the predictions of either aquatic or terrestrial food chain models. Using the aquatic pathway from water to fish to an adult for ¹³⁷Cs as an example, a 95% one-tailed confidence limit interval for the predicted exposure is calculated by examining the distributions of the input parameters. Such an interval is found to be 16 times the value of the median exposure. A similar one-tailed limit for the air-grass-cow-milk-thyroid pathway for ¹³¹I and infants was 5.6 times the median dose. Of the three model types discussed in this report, the aquatic transport models appear to do the best job of predicting observed concentrations. However, this conclusion is based on many fewer aquatic validation data than were available for atmospheric model validation.

  1. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.

  2. An Operational Definition of the Emergence Criterion

    Science.gov (United States)

    Pallotti, Gabriele

    2007-01-01

    Although acquisition criteria are a fundamental issue for SLA research, they have not always been adequately defined or elaborated in the literature. This article critically scrutinizes one such criterion, the emergence criterion, proposing an explicit, operational definition. After discussing emergence as a theoretical construct, the article…

  3. Development of failure criterion for Kevlar-epoxy fabric laminates

    Science.gov (United States)

    Tennyson, R. C.; Elliott, W. G.

    1984-01-01

    The development of the tensor polynomial failure criterion for composite laminate analysis is discussed. In particular, emphasis is given to the fabrication and testing of Kevlar-49 fabric (Style 285)/Narmco 5208 epoxy. The quadratic failure criterion with F12 = 0 provides accurate estimates of failure stresses for the Kevlar/epoxy investigated. The cubic failure criterion was recast into an operationally easier form, providing the engineer with design curves that can be applied to laminates fabricated from unidirectional prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all unidirectional prepregs, and its use must be restricted to the generic materials investigated to date.
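
    For reference, the quadratic tensor polynomial (Tsai-Wu) criterion under plane stress with F12 = 0 can be evaluated as below (our sketch; the strength values are placeholders, not the measured Kevlar/epoxy data):

        def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
            """Failure index f; failure is predicted when f >= 1 (plane stress, F12 = 0)."""
            F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
            F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S ** 2
            return F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2 + F66 * t12 ** 2

        # placeholder strengths (MPa) and an applied biaxial stress state
        print(tsai_wu_index(s1=300, s2=20, t12=30, Xt=500, Xc=150, Yt=30, Yc=90, S=45))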

  4. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.
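
    The central quantity in Bayesian model selection is the evidence (marginal likelihood) of each candidate; a minimal one-parameter sketch (ours, not the ALEGRA/Dakota workflow) compares two models by their Bayes factor:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)
        y = rng.normal(0.8, 1.0, 30)  # observed data (illustrative)

        def evidence(lo, hi, n_grid=2001):
            """p(y | M): likelihood integrated against a uniform prior on the mean."""
            mu = np.linspace(lo, hi, n_grid)
            like = np.array([norm.pdf(y, m, 1.0).prod() for m in mu])
            return np.trapz(like / (hi - lo), mu)

        z1 = norm.pdf(y, 0.0, 1.0).prod()  # M1: mean fixed at zero, no free parameter
        z2 = evidence(-3.0, 3.0)           # M2: mean uniform on [-3, 3]
        print("Bayes factor B21 =", z2 / z1)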

  5. Comparison of Two Gas Selection Methodologies: An Application of Bayesian Model Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Renholds, Andrea S.; Thompson, Sandra E.; Anderson, Kevin K.; Chilton, Lawrence K.

    2006-03-31

    One goal of hyperspectral imagery analysis is the detection and characterization of plumes. Characterization includes identifying the gases in the plumes, which is a model selection problem. Two gas selection methods compared in this report are Bayesian model averaging (BMA) and minimum Akaike information criterion (AIC) stepwise regression (SR). Simulated spectral data from a three-layer radiance transfer model were used to compare the two methods. Test gases were chosen to span the types of spectra observed, which exhibit peaks ranging from broad to sharp. The size and complexity of the search libraries were varied. Background materials were chosen to either replicate a remote area of eastern Washington or feature many common background materials. For many cases, BMA and SR performed the detection task comparably in terms of the receiver operating characteristic curves. For some gases, BMA performed better than SR when the size and complexity of the search library increased. This is encouraging because we expect improved BMA performance upon incorporation of prior information on background materials and gases.
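
    The two approaches can be related through Akaike weights, which turn AIC differences into approximate model probabilities; a short sketch (ours, with illustrative AIC values):

        import numpy as np

        aic = np.array([102.3, 100.1, 105.8])  # AICs of candidate gas sets (illustrative)
        delta = aic - aic.min()
        w = np.exp(-0.5 * delta)
        w /= w.sum()   # Akaike weights, interpretable as model probabilities
        print(w)       # the second candidate carries most of the weight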

  6. Lost-sales inventory systems with a service level criterion

    NARCIS (Netherlands)

    Bijvank, Marco; Vis, Iris F. A.

    2012-01-01

    Competitive retail environments are characterized by service levels and lost sales in case of excess demand. We contribute to research on lost-sales models with a service level criterion in multiple ways. First, we study the optimal replenishment policy for this type of inventory system as well as

  7. Evaluation of minimum residual pressure as design criterion for ...

    African Journals Online (AJOL)

    ... as the design criterion for minimum residual pressure in water distribution systems. However, the theoretical peak demand in many systems has increased beyond the point where minimum residual pressure exceeds 24 m – at least according to hydraulic models. Additions of customers to existing supply systems have led ...

  8. Numerical and Experimental Validation of a New Damage Initiation Criterion

    NARCIS (Netherlands)

    Sadhinoch, M.; Atzema, E.H.; Perdahcioglu, E.S.; Van Den Boogaard, A.H.

    2017-01-01

    Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent and this

  9. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    OpenAIRE

    Wu, Chung-Min; Hsieh, Ching-Lin; Chang, Kuei-Lun

    2013-01-01

    The sustainable supplier selection would be the vital part in the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify criteria. Considering the interdependence among the selection criteria, analytic network process (ANP) is then used to obtain their weights. To avoid calculation and additional pairwise compa...

  10. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  11. Modeling and Analysis of Supplier Selection Method Using ...

    African Journals Online (AJOL)

    However, in these parts of the world the application of tools and models for supplier selection problem is yet to surface and the banking and finance industry here in Ethiopia is no exception. Thus, the purpose of this research was to address supplier selection problem through modeling and application of analytical hierarchy ...

  12. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational tr...

  13. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL]; Djouadi, Seddik M [ORNL]; Olama, Mohammed M [ORNL]

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  14. Python Program to Select HII Region Models

    Science.gov (United States)

    Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.

    2016-01-01

    HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly of oxygen, nitrogen, and neon, can give important information about HII regions, including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin (1984) and those produced by Cloudy (Ferland et al., 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
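
    The matching step of such a program reduces to a chi-square comparison over the model grid; a schematic version (ours; the ratios and model values are placeholders, not Rubin's tables):

        import numpy as np

        # each row: one model's predictions for the observed line ratios (placeholder grid)
        models = np.array([[1.9, 0.42, 3.1],
                           [2.4, 0.38, 2.7],
                           [2.1, 0.45, 2.9]])
        obs = np.array([2.2, 0.40, 2.8])    # observed line ratios
        sigma = np.array([0.2, 0.05, 0.3])  # observational uncertainties

        chi2 = np.sum(((models - obs) / sigma) ** 2, axis=1)
        best = int(np.argmin(chi2))
        print("best-fitting model:", best, "chi^2 =", chi2[best])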

  15. Ground-water transport model selection and evaluation guidelines

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1983-01-01

    Guidelines are being developed to assist potential users with selecting appropriate computer codes for ground-water contaminant transport modeling. The guidelines are meant to assist managers with selecting appropriate predictive models for evaluating either arid or humid low-level radioactive waste burial sites. Evaluation test cases in the form of analytical solutions to fundamental equations and experimental data sets have been identified and recommended to ensure adequate code selection, based on accurate simulation of relevant physical processes. The recommended evaluation procedures will consider certain technical issues related to the present limitations in transport modeling capabilities. A code-selection plan will depend on identifying problem objectives, determining the extent of collectible site-specific data, and developing a site-specific conceptual model for the involved hydrology. Code selection will be predicated on steps for developing an appropriate systems model. This paper will review the progress in developing those guidelines. 12 references

  16. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Full Text Available Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedure is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.

  17. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  18. The qualitative criterion of transient angle stability

    DEFF Research Database (Denmark)

    Lyu, R.; Xue, Y.; Xue, F.

    2015-01-01

    In almost all the literature, the qualitative assessment of transient angle stability extracts the angle information of generators from the swing curve. As the angle (or angle difference) of concern and the threshold value rely strongly on engineering experience, the validity and robustness of these criteria are weak. Based on the stability mechanism from the extended equal area criterion (EEAC) theory, combined with abundant simulations of real systems, this paper analyzes the criteria in most of the literature and finds that the results could be too conservative or too optimistic. It is concluded

  19. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest

  20. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  1. Modeling shape selection of buckled dielectric elastomers

    Science.gov (United States)

    Langham, Jacob; Bense, Hadrien; Barkley, Dwight

    2018-02-01

    A dielectric elastomer whose edges are held fixed will buckle, given a sufficiently large applied voltage, resulting in a nontrivial out-of-plane deformation. We study this situation numerically using a nonlinear elastic model which decouples two of the principal electrostatic stresses acting on an elastomer: normal pressure due to the mutual attraction of oppositely charged electrodes and tangential shear ("fringing") due to repulsion of like charges at the electrode edges. These enter via physically simplified boundary conditions that are applied in a fixed reference domain using a nondimensional approach. The method is valid for small to moderate strains and is straightforward to implement in a generic nonlinear elasticity code. We validate the model by directly comparing the simulated equilibrium shapes with experiment. For circular electrodes, which buckle axisymmetrically, the shape of the deflection profile is captured. Annular electrodes of different widths produce azimuthal ripples with wavelengths that match our simulations. In this case, it is essential to compute multiple equilibria because the first model solution obtained by the nonlinear solver (Newton's method) is often not the energetically favored state. We address this using a numerical technique known as "deflation." Finally, we observe the large number of different solutions that may be obtained for the case of a long rectangular strip.

  2. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  3. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (lasso)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  4. An analysis of the stratified and nonstratified flow transition criterion in horizontal/inclined pipes

    International Nuclear Information System (INIS)

    Sung, Chang Kyung

    1996-02-01

    The present studies are developed to present the two-step approach which is used in extending the phase-plane method and the hyperbolicity breaking concept for instability analyses of one-dimensional two-phase flow equations to the derivations of the new criterion for the stratified and non-stratified flow transition (e.g., onset of slugging), the critical flow condition criterion, and the flooding criterion. In the first step, more general forms for the onset of slugging criterion, the critical flow condition criterion, and the flooding criterion are derived based on nonlinear analysis: more specifically, analyses of the phase-plane method and the hyperbolicity breaking concept of the 'inflected nodes' (neutral stability conditions) and 'parallel lines' in the topological patterns of the linear system obtained from the transient one-dimensional two-phase flow equations of a two-fluid model. In the second step, through the introduction of simplifications and the incorporation of a parameter (in the onset of slugging criterion) or an empirical constant (in the flooding criterion) into the general expression derived in the first step to satisfy a number of physical conditions specified a priori, new criteria for the onset of slugging and the onset of flooding have been derived. Validation of the present stratified and non-stratified flow transition criterion is achieved by comparison with existing analytical criteria (the Taitel and Dukler model and a one-dimensional wave model) and experimental data from large test facilities (IFP, SINTEF, Harwell, Creare/PRC, ROSA and UPTF). The result of the validation shows good agreement at large pipe diameters and at high gas velocities. The present studies have revealed that the two-phase density ratio and the pipe inclination angle are very important parameters that should be handled properly to determine the flow regime boundary correctly. Comparison of the present critical flow condition criterion with Gardner's critical flow

  5. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    With more and more importance of correctly selecting partners in supply chain of agricultural enterprises, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of agricultural supply chain. Secondly, a heuristic met...

  6. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  7. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
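
    As a rough illustration of the recipe this abstract describes, the following minimal Python sketch runs stability selection with an ordinary linear lasso standing in for the Cox-model base learner; the grid lambdas plays the role of the region Λ, and all data and settings are illustrative assumptions:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p = 200, 50
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[:5] = 2.0                             # five truly relevant variables
        y = X @ beta + rng.standard_normal(n)

        lambdas = np.logspace(-2, 0, 20)           # regularization region (assumed grid)
        B = 100                                    # number of subsamples
        freq = np.zeros((len(lambdas), p))         # selection frequency per (lambda, variable)

        for b in range(B):
            idx = rng.choice(n, size=n // 2, replace=False)   # subsample of size n/2
            for i, lam in enumerate(lambdas):
                fit = Lasso(alpha=lam, max_iter=5000).fit(X[idx], y[idx])
                freq[i] += (fit.coef_ != 0)

        freq /= B
        stable = np.where(freq.max(axis=0) >= 0.8)[0]  # keep variables crossing the threshold
        print("stable variables:", stable)

    Variables whose selection frequency exceeds the threshold for some λ are retained; 0.8 is one conventional choice of threshold.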

  8. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  9. Validation of elk resource selection models with spatially independent data

    Science.gov (United States)

    Priscilla K. Coe; Bruce K. Johnson; Michael J. Wisdom; John G. Cook; Marty Vavra; Ryan M. Nielson

    2011-01-01

    Knowledge of how landscape features affect wildlife resource use is essential for informed management. Resource selection functions often are used to make and validate predictions about landscape use; however, resource selection functions are rarely validated with data from landscapes independent of those from which the models were built. This problem has severely...

  10. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  11. Augmented Self-Modeling as an Intervention for Selective Mutism

    Science.gov (United States)

    Kehle, Thomas J.; Bray, Melissa A.; Byer-Alcorace, Gabriel F.; Theodore, Lea A.; Kovac, Lisa M.

    2012-01-01

    Selective mutism is a rare disorder that is difficult to treat. It is often associated with oppositional defiant behavior, particularly in the home setting, social phobia, and, at times, autism spectrum disorder characteristics. The augmented self-modeling treatment has been relatively successful in promoting rapid diminishment of selective mutism…

  12. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  13. Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method

    Science.gov (United States)

    Jiang, Yuan; He, Yunxiao

    2015-01-01

    LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, a great deal of biological and biomedical data has been collected, and it may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding to the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study. PMID:27217599
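
    Schematically, the criterion this abstract describes (the penalized GLM likelihood plus a prior-discrepancy term) can be written as below; the exact discrepancy D and its weighting are specified in the paper, so the symbols here are assumptions:

        \hat{\beta}_{pLASSO} = \arg\min_{\beta} \left\{ -\ell(\beta) + \lambda \lVert \beta \rVert_1 + \eta \, D(\beta; \beta^{prior}) \right\}

    where ℓ is the GLM log-likelihood, λ controls the usual LASSO penalty, and η balances the data against the prior information.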

  14. Target Selection Models with Preference Variation Between Offenders

    NARCIS (Netherlands)

    Townsley, Michael; Birks, Daniel; Ruiter, Stijn; Bernasco, Wim; White, Gentry

    2016-01-01

    Objectives: This study explores preference variation in location choice strategies of residential burglars. Applying a model of offender target selection that is grounded in assertions of the routine activity approach, rational choice perspective, crime pattern and social disorganization theories,

  15. Sensor Calibration Design Based on D-Optimality Criterion

    Directory of Open Access Journals (Sweden)

    Hajiyev Chingiz

    2016-09-01

    In this study, a procedure for optimal selection of measurement points using the D-optimality criterion to find the best calibration curves of measurement sensors is proposed. The coefficients of the calibration curve are evaluated by applying the classical Least Squares Method (LSM). As an example, the problem of optimal selection of standard pressure setters when calibrating a differential pressure sensor is solved. The values obtained from the D-optimum measurement points for calibration of the differential pressure sensor are compared with those from actual experiments. A comparison of the calibration errors corresponding to the D-optimal, A-optimal and equidistant calibration curves is also presented.
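
    A minimal sketch of the D-optimal selection step, assuming a quadratic calibration curve fitted by ordinary least squares and a hypothetical grid of candidate setter pressures (the exhaustive search is feasible only for small candidate sets):

        import numpy as np
        from itertools import combinations

        candidates = np.linspace(0.0, 10.0, 21)    # candidate setter pressures (assumed)
        k = 6                                      # number of calibration points to pick

        def design_matrix(points):
            # quadratic calibration curve: y = a0 + a1*x + a2*x^2
            return np.vander(points, 3, increasing=True)

        best_det, best_set = -np.inf, None
        for subset in combinations(candidates, k):
            X = design_matrix(np.array(subset))
            d = np.linalg.det(X.T @ X)             # D-optimality: maximize det(X'X)
            if d > best_det:
                best_det, best_set = d, subset

        print("D-optimal points:", best_set)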

  16. A risk assessment model for selecting cloud service providers

    OpenAIRE

    Cayirci, Erdal; Garaga, Alexandr; Santana de Oliveira, Anderson; Roudier, Yves

    2016-01-01

    The Cloud Adoption Risk Assessment Model is designed to help cloud customers in assessing the risks that they face by selecting a specific cloud service provider. It evaluates background information obtained from cloud customers and cloud service providers to analyze various risk scenarios. This facilitates decision making when selecting the cloud service provider with the most preferable risk profile based on aggregated risks to security, privacy, and service delivery. Based on this model we ...

  17. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    In this paper, the authors describe moment selection methods and the application of the generalized method of moments (GMM) to heteroskedastic models. The utility of GMM estimators is found in the study of financial market models. The moment selection criteria are applied for the efficient estimation of GMM for univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.

  18. A guide to Bayesian model selection for ecologists

    Science.gov (United States)

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  19. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we have focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods in the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts in a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder sets a burden on the calculations carried out by the genetic algorithm.

  20. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the additional calculations and pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
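
    A minimal numpy sketch of the final TOPSIS ranking step, with criterion weights assumed to come from the upstream fuzzy Delphi and ANP stages (the decision matrix and weights are illustrative, not the paper's data):

        import numpy as np

        D = np.array([[7., 9., 9., 8.],            # supplier A scored on 4 criteria
                      [8., 7., 8., 7.],            # supplier B
                      [9., 6., 8., 9.]])           # supplier C
        w = np.array([0.30, 0.20, 0.25, 0.25])     # ANP-derived weights (assumed)
        benefit = np.array([True, True, True, True])  # all criteria treated as benefits

        R = D / np.linalg.norm(D, axis=0)          # vector-normalize each criterion
        V = R * w                                  # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)   # rank suppliers by closeness to ideal
        print("closeness:", closeness.round(3))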

  1. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and thus improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is widely used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
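
    The paper itself provides R code; the following is a minimal Python analogue of GA-based variable selection for a logistic model, with binary chromosomes over the candidate variables and AIC as the fitness (all data and settings are illustrative assumptions):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n, p = 300, 12
        X = rng.standard_normal((n, p))
        logit = X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        def fitness(mask):
            if not mask.any():
                return np.inf
            Xs = sm.add_constant(X[:, mask])
            return sm.Logit(y, Xs).fit(disp=0).aic      # lower AIC is better

        pop = rng.integers(0, 2, size=(30, p)).astype(bool)
        for gen in range(40):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[:10]]      # truncation selection
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(10, size=2)]
                cut = rng.integers(1, p)
                child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
                children.append(child ^ (rng.random(p) < 0.05))  # bit-flip mutation
            pop = np.array(children)

        best = pop[np.argmin([fitness(m) for m in pop])]
        print("selected variables:", np.where(best)[0])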

  2. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem); using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem); and using a viable approach that resolves the computational problem of immense numbers of possible models.

  3. Multicriteria framework for selecting a process modelling language

    Science.gov (United States)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines on evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  4. FFTBM and primary pressure acceptance criterion

    International Nuclear Information System (INIS)

    Prosek, A.

    2004-01-01

    When thermalhydraulic computer codes are used for simulation in the area of nuclear engineering, the question is how to conduct an objective comparison between the code calculation and measured data. To answer this, the fast Fourier transform based method (FFTBM) was developed. When the FFTBM method was developed, acceptance criteria for primary pressure and total accuracy were set. In a recent study the FFTBM method was used for accuracy quantification of RD-14M large LOCA test B9401 calculations. The blind accuracy analysis indicated good total accuracy, while the primary pressure criterion was not fulfilled. The objective of the study was therefore to investigate the reasons for not fulfilling the primary pressure acceptance criterion and the applicability of the criterion to experimental facilities simulating a heavy water reactor. The results of the open quantitative analysis showed that sensitivity analysis for influence parameters provides sufficient information to judge in which calculation the accuracy of primary pressure is acceptable. (author)
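
    For orientation, FFTBM accuracy and its acceptance criteria are usually stated in terms of the average amplitude AA, the ratio of the summed FFT magnitudes of the calculation error to those of the measured signal. A minimal sketch with synthetic signals follows; the primary-pressure limit AA <= 0.1 in the comment is the commonly cited value and is an assumption here:

        import numpy as np

        t = np.linspace(0.0, 100.0, 1024)                         # common time base
        exp = 15.0 * np.exp(-t / 40.0) + 0.1 * np.sin(0.5 * t)    # "measured" pressure
        calc = 15.0 * np.exp(-t / 38.0)                           # "calculated" pressure

        F_err = np.fft.rfft(calc - exp)
        F_exp = np.fft.rfft(exp)
        AA = np.abs(F_err).sum() / np.abs(F_exp).sum()            # average amplitude
        print(f"AA = {AA:.3f} (primary pressure often required to satisfy AA <= 0.1)")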

  5. Quantile hydrologic model selection and model structure deficiency assessment : 1. Theory

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies

  6. Fuzzy Investment Portfolio Selection Models Based on Interval Analysis Approach

    Directory of Open Access Journals (Sweden)

    Haifeng Guo

    2012-01-01

    This paper employs fuzzy set theory to resolve the unintuitive aspects of the Markowitz mean-variance (MV) portfolio model and extends it to a fuzzy investment portfolio selection model. Our model establishes intervals for expected returns and risk preference, which can take into account investors' different investment appetites and thus can find the optimal resolution for each interval. In the empirical part, we test this model on Chinese stock investments and find that it can fulfill the objectives of different kinds of investors. Finally, investment risk can be decreased when we add an investment limit to each stock in the portfolio, which indicates our model is useful in practice.

  7. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling ... Hardware can be repaired by spare modules, which is not the case for software ... Preventive maintenance is very important

  8. Testing exclusion restrictions and additive separability in sample selection models

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    2014-01-01

    Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011) (for testing instrument validity under treatment endogeneity) to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non

  9. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    Most studies using Mare’s (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the “waning coefficients” in the Mare model are driven by selection on unobserved ... the United States, United Kingdom, Denmark, and the Netherlands shows that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which

  10. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we present a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach on the basis of the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  11. General stability criterion for inviscid parallel flow

    International Nuclear Information System (INIS)

    Sun Liang

    2007-01-01

    Arnol'd's second stability theorem is approached from an elementary point of view. First, a sufficient criterion for stability is found analytically as either -μ1 < U''/(U - U_s) < 0 or U''/(U - U_s) > 0 in the flow, where U_s is the velocity at the inflection point and μ1 is the eigenvalue of Poincare's problem. Second, this criterion is generalized to barotropic geophysical flows in the β plane. The connections between the present criteria and Arnol'd's nonlinear criteria are also discussed. The proofs are completely elementary and so could be used to teach undergraduate students.

  12. Sampling Criterion for EMC Near Field Measurements

    DEFF Research Database (Denmark)

    Franek, Ondrej; Sørensen, Morten; Ebert, Hans

    2012-01-01

    An alternative, quasi-empirical sampling criterion for EMC near field measurements intended for close coupling investigations is proposed. The criterion is based on maximum error caused by sub-optimal sampling of near fields in the vicinity of an elementary dipole, which is suggested as a worst-case representative of a signal trace on a typical printed circuit board. It has been found that the sampling density derived in this way is in fact very similar to that given by the antenna near field sampling theorem, if an error less than 1 dB is required. The principal advantage of the proposed formulation is its

  13. Selection of climate change scenario data for impact modelling

    DEFF Research Database (Denmark)

    Sloth Madsen, M; Fox Maule, C; MacKellar, N

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study...... illustrates how the projected climate change signal of important variables as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make...... the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented...

  14. Adverse Selection Models with Three States of Nature

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-02-01

    In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.

  15. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    This paper presents a multi-criteria decision-making model for supplier selection for software development outsourcing on e-marketplaces. The model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years as the Internet has come to play an important role in business management. Companies have to concentrate their efforts on their core activities, and other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision-making methods rather than considering a single factor such as cost.

  16. Modeling quality attributes and metrics for web service selection

    Science.gov (United States)

    Oskooei, Meysam Ahmadi; Daud, Salwani binti Mohd; Chua, Fang-Fang

    2014-06-01

    Since the service-oriented architecture (SOA) has been designed to develop systems as distributed applications, service selection has become a vital aspect of service-oriented computing (SOC). Selecting the appropriate web service with respect to quality of service (QoS), typically by formulating the choice as a mathematical optimization problem, has turned service selection into a common concern for service users. Nowadays, the number of web services that provide the same functionality has increased, and selecting a service from a set of alternatives which differ in quality parameters can be difficult for service consumers. In this paper, a new model for QoS attributes and metrics is proposed to provide a suitable solution for optimizing web service selection and composition with low complexity.

  17. [On selection criteria in spatially distributed models of competition].

    Science.gov (United States)

    Il'ichev, V G; Il'icheva, O A

    2014-01-01

    Discrete models of competitors (an initial population and mutants) are considered, in which reproduction is given by an increasing, concave function and migration over a space consisting of a set of areas is described by a Markov matrix. This allows the theory of monotone operators to be used to study problems of selection, coexistence and stability. It is shown that the higher the number of areas, the more severe the constraints required for a selective advantage of the initial population.

  18. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with problems of the takeover of employees in outsourcing. Its main purpose is to compare the staffing models of outsourcing in selected companies. For the comparison across the selected companies I chose multi-criteria analysis. The thesis is divided into six chapters. The first chapter is devoted to the theoretical part and describes basic concepts such as outsourcing, personal aspects, phases of outsourcing projects, communications and culture. The rest of the thesis is devote...

  19. Economic assessment model architecture for AGC/AVLIS selection

    International Nuclear Information System (INIS)

    Hoglund, R.L.

    1984-01-01

    The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint

  20. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    ... propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all

  1. Genetic signatures of natural selection in a model invasive ascidian

    Science.gov (United States)

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-03-01

    Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta.

  2. Less is more: an adaptive branch-site random effects model for efficient detection of episodic diversifying selection.

    Science.gov (United States)

    Smith, Martin D; Wertheim, Joel O; Weaver, Steven; Murrell, Ben; Scheffler, Konrad; Kosakovsky Pond, Sergei L

    2015-05-01

    Over the past two decades, comparative sequence analysis using codon-substitution models has been honed into a powerful and popular approach for detecting signatures of natural selection from molecular data. A substantial body of work has focused on developing a class of "branch-site" models which permit selective pressures on sequences, quantified by the ω ratio, to vary among both codon sites and individual branches in the phylogeny. We develop and present a method in this class, adaptive branch-site random effects likelihood (aBSREL), whose key innovation is variable parametric complexity chosen with an information theoretic criterion. By applying models of different complexity to different branches in the phylogeny, aBSREL delivers statistical performance matching or exceeding best-in-class existing approaches, while running an order of magnitude faster. Based on simulated data analysis, we offer guidelines for what extent and strength of diversifying positive selection can be detected reliably and suggest that there is a natural limit on the optimal parametric complexity for "branch-site" models. An aBSREL analysis of 8,893 Euteleostomes gene alignments demonstrates that over 80% of branches in typical gene phylogenies can be adequately modeled with a single ω ratio model, that is, current models are unnecessarily complicated. However, there are a relatively small number of key branches, whose identities are derived from the data using a model selection procedure, for which it is essential to accurately model evolutionary complexity. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Mercier criterion for high-β tokamaks

    International Nuclear Information System (INIS)

    Galvao, R.M.O.

    1984-01-01

    An expression for the application of the Mercier criterion to numerical studies of diffuse high-β tokamaks (β ≈ ε, q ≈ 1), which contains only leading-order contributions in the high-β tokamak approximation, is derived. (L.C.) [pt]

  4. The Leadership Criterion in Technological Institute

    International Nuclear Information System (INIS)

    Carvalho, Marcelo Souza de; Cussa, Adriana Lourenco d'Avila; Suita, Julio Cezar

    2005-01-01

    This paper introduces the Direction's 'Decision Making Practice', which has recently been reviewed to incorporate the foundations of the Leadership Criterion (CE-PNQ). These changes improved the control of institutional plans of action, which result from the critical analysis of global performance and other information associated with the Decision Making Practice. (author)

  5. A pellet-clad interaction failure criterion

    International Nuclear Information System (INIS)

    Howl, D.A.; Coucill, D.N.; Marechal, A.J.C.

    1983-01-01

    A Pellet-Clad Interaction (PCI) failure criterion, enabling the number of fuel rod failures in a reactor core to be determined for a variety of normal and fault conditions, is required for safety analysis. The criterion currently being used for the safety analysis of the Pressurized Water Reactor planned for Sizewell in the UK is defined and justified in this paper. The criterion is based upon a threshold clad stress which diminishes with increasing fast neutron dose. This concept is consistent with the mechanism of clad failure being stress corrosion cracking (SCC); providing excess corrodant is always present, the dominant parameter determining the propagation of SCC defects is stress. In applying the criterion, the SLEUTH-SEER 77 fuel performance computer code is used to calculate the peak clad stress, allowing for concentrations due to pellet hourglassing and the effect of radial cracks in the fuel. The method has been validated by analysis of PCI failures in various in-reactor experiments, particularly in the well-characterised power ramp tests in the Steam Generating Heavy Water Reactor (SGHWR) at Winfrith. It is also in accord with out-of-reactor tests with iodine and irradiated Zircaloy clad, such as those carried out at Kjeller in Norway. (author)

  6. An aerodynamic load criterion for airships

    Science.gov (United States)

    Woodward, D. E.

    1975-01-01

    A simple aerodynamic bending moment envelope is derived for conventionally shaped airships. This criterion is intended to be used, much like the Naval Architect's standard wave, for preliminary estimates of longitudinal strength requirements. It should be useful in tradeoff studies between speed, fineness ratio, block coefficient, structure weight, and other such general parameters of airship design.

  7. Information criterion for the categorization quality evaluation

    Directory of Open Access Journals (Sweden)

    Michail V. Svirkin

    2011-05-01

    The paper considers the possibility of using the variation of information function as a quality criterion for categorizing a collection of documents. The performance of the variation of information function is examined with respect to the number of categories and the sample volume of the test document collection.

  8. Ecohydrological model parameter selection for stream health evaluation.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein

    2015-04-01

    Variable selection is a critical step in development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic integrity (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), the k-means clustering technique (two clusters: C1 and C2), and all streams (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of the best variable sets, models were developed to predict the measures of biological integrity using adaptive neuro-fuzzy inference systems (ANFIS), a technique well suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. Final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with EPT taxa validation R2 ranging from 0.67 to 0.92, FIBI ranging from 0.49 to 0.85, HBI from 0.56 to 0.75, and fish IBI at 0.99 for all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our

  9. Financial applications of a Tabu search variable selection model

    Directory of Open Access Journals (Sweden)

    Zvi Drezner

    2001-01-01

    We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
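
    As a rough illustration of this kind of procedure (not the authors' exact implementation), the following sketch runs Tabu search over variable subsets with single-variable flips as moves, a fixed tabu tenure, and adjusted R^2 as the objective; data and settings are synthetic assumptions:

        import numpy as np

        rng = np.random.default_rng(2)
        n, p = 150, 15
        X = rng.standard_normal((n, p))
        y = X[:, 0] + 0.5 * X[:, 3] + rng.standard_normal(n)

        def adj_r2(mask):
            k = mask.sum()
            if k == 0:
                return -np.inf
            Xs = np.column_stack([np.ones(n), X[:, mask]])
            resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
            r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
            return 1 - (1 - r2) * (n - 1) / (n - k - 1)

        current = rng.integers(0, 2, p).astype(bool)
        best, best_val = current.copy(), adj_r2(current)
        tabu = np.zeros(p, dtype=int)                 # remaining tabu tenure per move

        for it in range(100):
            vals = np.full(p, -np.inf)
            for j in range(p):                        # evaluate all non-tabu single flips
                if tabu[j] == 0:
                    cand = current.copy()
                    cand[j] ^= True
                    vals[j] = adj_r2(cand)
            j = int(np.argmax(vals))                  # best admissible neighbour
            current[j] ^= True
            tabu = np.maximum(tabu - 1, 0)
            tabu[j] = 5                               # tabu tenure of 5 iterations
            if vals[j] > best_val:
                best, best_val = current.copy(), vals[j]

        print("selected:", np.where(best)[0], "adjusted R^2:", round(best_val, 3))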

  10. Selecting an appropriate genetic evaluation model for selection in a developing dairy sector

    NARCIS (Netherlands)

    McGill, D.M.; Mulder, H.A.; Thomson, P.C.; Lievaart, J.J.

    2014-01-01

    This study aimed to identify genetic evaluation models (GEM) to accurately select cattle for milk production when only limited data are available. It is based on a data set from the Pakistani Sahiwal progeny testing programme which includes records from five government herds, each consisting of 100

  11. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant....

  12. An expert-based model for selecting the most suitable substrate material type for antenna circuits

    Science.gov (United States)

    AL-Oqla, Faris M.; Omar, Amjad A.

    2015-06-01

    Quality and properties of microwave circuits depend on all the circuit components. One of these components is the substrate. The process of substrate material selection is a decision-making problem that involves multicriteria with objectives that are diverse and conflicting. The aim of this work was to select the most suitable substrate material type to be used in antennas in the microwave frequency range that gives best performance and reliability of the substrate. For this purpose, a model was built to ease the decision-making that includes hierarchical alternatives and criteria. The substrate material type options considered were limited to fiberglass-reinforced epoxy laminates (FR4 εr = 4.8), aluminium (III) oxide (alumina εr = 9.6), gallium arsenide III-V compound (GaAs εr = 12.8) and PTFE composites reinforced with glass microfibers (Duroid εr = 2.2-2.3). To assist in building the model and making decisions, the analytical hierarchy process (AHP) was used. The decision-making process revealed that alumina substrate material type was the most suitable choice for the antennas in the microwave frequency range that yields best performance and reliability. In addition, both the size of the circuit and the loss tangent of the substrates were found to be the most contributing subfactors in the antenna circuit specifications criterion. Experimental assessments were conducted utilising The Expert Choice™ software. The judgments were tested and found to be precise, consistent and justifiable, and the marginal inconsistency values were found to be very narrow. A sensitivity analysis was also presented to demonstrate the confidence in the drawn conclusions.
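
    A minimal sketch of the AHP priority step underlying such a model: priorities are taken as the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the judgments (the 3x3 matrix is illustrative, not the paper's data):

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],        # pairwise comparisons of three criteria
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                          # priority vector

        n = A.shape[0]
        CI = (eigvals[k].real - n) / (n - 1)  # consistency index
        RI = 0.58                             # Saaty's random index for n = 3
        print("priorities:", w.round(3), "CR:", round(CI / RI, 3))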

  13. Psychometric aspects of item mapping for criterion-referenced interpretation and bookmark standard setting.

    Science.gov (United States)

    Huynh, Huynh

    2010-01-01

    Locating an item on an achievement continuum (item mapping) is well-established in technical work for educational/psychological assessment. Applications of item mapping may be found in criterion-referenced (CR) testing (or scale anchoring, Beaton and Allen, 1992; Huynh, 1994, 1998a, 2000a, 2000b, 2006), computer-assisted testing, test form assembly, and in standard setting methods based on ordered test booklets. These methods include the bookmark standard setting originally used for the CTB/TerraNova tests (Lewis, Mitzel, Green, and Patz, 1999), the item descriptor process (Ferrara, Perie, and Johnson, 2002) and a similar process described by Wang (2003) for multiple-choice licensure and certification examinations. While item response theory (IRT) models such as the Rasch and two-parameter logistic (2PL) models traditionally place a binary item at its location, Huynh has argued in the cited papers that such mapping may not be appropriate in selecting items for CR interpretation and scale anchoring.

  14. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    Full Text Available This paper presents an integrated supplier selection and inventory management using grey relationship model (GRM as well as multi-objective decision making process. The proposed model of this paper first ranks different suppliers based on GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use some benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002. A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3, 544-558.]. The preliminary results indicate that the proposed model of this paper is capable of handling different criteria for supplier selection.

  15. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of an SVM depends on the model having optimal hyperparameters. The computational cost of SVM model selection makes application to face recognition difficult. In order to overcome this shortcoming, we utilize the advantages of uniform design (space-filling designs and uniform scattering theory) to seek optimal SVM hyperparameters. We then propose a face recognition scheme based on SVM with the optimal model obtained by replacing the grid and gradient-based method with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
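
    As a rough illustration of the idea (not the authors' exact design tables), the following sketch tunes an SVM with hyperparameter points drawn from a space-filling Latin hypercube, standing in for a formal uniform design; the data are synthetic rather than face images:

        import numpy as np
        from scipy.stats import qmc
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=20, random_state=0)

        pts = qmc.LatinHypercube(d=2, seed=0).random(n=16)   # 16 points in [0,1]^2
        log_C = -2 + 6 * pts[:, 0]                           # C in [1e-2, 1e4]
        log_g = -5 + 4 * pts[:, 1]                           # gamma in [1e-5, 1e-1]

        def cv_score(c, g):
            # 5-fold cross-validated accuracy for one design point
            return cross_val_score(SVC(C=c, gamma=g), X, y, cv=5).mean()

        best = max(((10.0**c, 10.0**g) for c, g in zip(log_C, log_g)),
                   key=lambda cg: cv_score(*cg))
        print("best (C, gamma):", best)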

  16. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset the approach taken is to use theory on the value of travel time as guidance...... many issues that deserve attention. This thesis investigates how sample selection can affect estimation of discrete choice models and how taste correlation should be incorporated into applied mixed logit estimation. Sampling in transport modelling is often based on an observed trip. This may cause...... a sample to be choice-based or governed by a self-selection mechanism. In both cases, there is a possibility that sampling affects the estimation of a population model. It was established in the seventies how choice-based sampling affects the estimation of multinomial logit models. The thesis examines...

  17. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in the evolutionary theory of populations arising due to the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions, and new forms of duality (for state-dependent mutation and multitype selection), which are used to prove ergodic theorems in this context and are applicable to many other questions, as well as renormalization analysis of a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection, and which make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  18. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of alternatives, which each have varying activations (or signal supports) that largely result from an initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to or combined with conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
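
    A minimal simulation sketch of lexical selection as an evidence accumulation race: each candidate word accumulates noisy evidence toward a common threshold and the first to reach it is selected (drift rates stand in for lexical activations; all numbers are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(3)
        drifts = np.array([1.0, 0.7, 0.4])    # target word and two competitors
        threshold, dt, noise = 1.5, 0.005, 1.0

        def one_trial():
            x = np.zeros(3)                   # accumulated evidence per candidate
            t = 0.0
            while x.max() < threshold:
                x += drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(3)
                t += dt
            return int(x.argmax()), t         # (selected word, response time)

        trials = [one_trial() for _ in range(500)]
        choices = np.array([c for c, _ in trials])
        rts = np.array([t for _, t in trials])
        print("P(target selected):", (choices == 0).mean())
        print("mean RT:", rts.mean().round(3), "s")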

  19. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the Brazilian economy. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can have partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  20. Relative criterion for validity of a semiclassical approach to the dynamics near quantum critical points.

    Science.gov (United States)

    Wang, Qian; Qin, Pinquan; Wang, Wen-ge

    2015-10-01

    Based on an analysis of Feynman's path integral formulation of the propagator, a relative criterion is proposed for validity of a semiclassical approach to the dynamics near critical points in a class of systems undergoing quantum phase transitions. It is given by an effective Planck constant, in the relative sense that a smaller effective Planck constant implies better performance of the semiclassical approach. Numerical tests of this relative criterion are given in the XY model and in the Dicke model.

  1. The Selection of ARIMA Models with or without Regressors

    DEFF Research Database (Denmark)

    Johansen, Søren; Riani, Marco; Atkinson, Anthony C.

    We develop a $C_{p}$ statistic for the selection of regression models with stationary and nonstationary ARIMA error terms. We derive the asymptotic theory of the maximum likelihood estimators and show they are consistent and asymptotically Gaussian. We also prove that the distribution of the sum o
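
    For reference, in the ordinary regression case such a statistic reduces to the familiar Mallows form (a standard textbook expression, not the paper's generalized version):

        C_p = \frac{RSS_p}{\hat{\sigma}^2} - n + 2p

    where RSS_p is the residual sum of squares of the candidate model with p parameters and \hat{\sigma}^2 is an error-variance estimate from the full model; models with C_p close to p are preferred.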

  2. Selecting candidate predictor variables for the modelling of post ...

    African Journals Online (AJOL)

    Selecting candidate predictor variables for the modelling of post-discharge mortality from sepsis: a protocol development project. Afri. Health Sci. ... Initial list of candidate predictor variables (N = 17), by category:
      Clinical: vital signs (HR, RR, BP, T); oxygen saturation
      Laboratory: hemoglobin; blood culture
      Social/Demographic: age; sex

  3. Computationally efficient thermal-mechanical modelling of selective laser melting

    NARCIS (Netherlands)

    Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam

    2017-01-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method used to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is

  4. Multivariate time series modeling of selected childhood diseases in ...

    African Journals Online (AJOL)

    This paper focuses on modeling the five most prevalent childhood diseases in Akwa Ibom State using a multivariate approach to time series. An aggregate of 78,839 reported cases of malaria, upper respiratory tract infection (URTI), pneumonia, anaemia and tetanus was extracted from five randomly selected hospitals in ...

  5. Electron accelerators for radiation processing: Criterions of selection and exploitation

    International Nuclear Information System (INIS)

    Zimek, Zbigniew

    2001-01-01

    The progress in accelerator technology is tightly linked to the continuously advancing development in many branches of technical activity. Although the present level of accelerator development can satisfy most commercial requirements, the field continues to expand and to improve quality by offering efficient, cheap, reliable, high-average-beam-power commercial units. Accelerator construction must be a compromise between size, efficiency and cost with respect to the field of application. High-power accelerators have been developed to meet the specific demands of flue gas treatment and other high-throughput applications, to increase process capacity and reduce the unit cost of operation. Automatic control, reliability and reduced maintenance, adequate adaptation to process conditions, and suitable electron energy and beam power are the basic features of modern accelerator construction. Accelerators have the potential to serve as industrial radiation sources and may eventually replace isotope sources. Electron beam plants can transfer much higher amounts of energy into the irradiated objects than other types of facilities, including gamma plants. This provides the opportunity to construct technological lines with high capacity that are more technically and economically suitable, with high throughputs, short residence times and great versatility.

  6. Selection criterion for improved grain yields in Ethiopian durum ...

    African Journals Online (AJOL)

    The experimental material consisted of 44 indigenous durum wheat genotypes, which are randomly taken from the indigenous germplasm collections. Mean sum of squares for all the characters considered showed highly significant differences (P<0.01) indicating the presence of adequate variability. Grain yield had strong ...

  7. Bayesian Model Comparison With the g-Prior

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Cemgil, Ali Taylan

    2014-01-01

    Model comparison and selection is an important problem in many model-based signal processing applications. Often, very simple information criteria such as the Akaike information criterion or the Bayesian information criterion are used despite their shortcomings. Compared to these methods, Djuric...... demonstrate that our proposed model comparison and selection rules outperform the traditional information criteria both in terms of detecting the true model and in terms of predicting unobserved data. The simulation code is available online....

  8. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  9. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. The article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF), in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
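
    The likelihood ratio step that drives purposeful selection is easy to reproduce. The article works in R, so the following Python sketch with statsmodels is only an illustration, and the simulated data and variable names (age, sofa) are our own assumptions:

      import numpy as np
      import statsmodels.api as sm
      from scipy import stats

      rng = np.random.default_rng(0)
      n = 500
      age = rng.normal(50, 10, n)          # candidate covariate
      sofa = rng.normal(5, 2, n)           # candidate covariate
      logit = -8 + 0.12 * age + 0.3 * sofa
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X_full = sm.add_constant(np.column_stack([age, sofa]))
      X_red = sm.add_constant(age)         # model with 'sofa' deleted

      full = sm.Logit(y, X_full).fit(disp=0)
      reduced = sm.Logit(y, X_red).fit(disp=0)

      # Likelihood ratio test: does deleting 'sofa' significantly worsen fit?
      lr_stat = 2 * (full.llf - reduced.llf)
      p_value = stats.chi2.sf(lr_stat, df=1)   # 1 parameter difference
      print(f"LR = {lr_stat:.2f}, p = {p_value:.4f}")

    A large p-value would justify deleting the variable, after checking that its removal does not materially change the remaining coefficients.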

  10. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  11. Bayesian Variable Selection on Model Spaces Constrained by Heredity Conditions.

    Science.gov (United States)

    Taylor-Rodriguez, Daniel; Womack, Andrew; Bliznyuk, Nikolay

    2016-01-01

    This paper investigates Bayesian variable selection when there is a hierarchical dependence structure on the inclusion of predictors in the model. In particular, we study the type of dependence found in polynomial response surfaces of orders two and higher, whose model spaces are required to satisfy weak or strong heredity conditions. These conditions restrict the inclusion of higher-order terms depending upon the inclusion of lower-order parent terms. We develop classes of priors on the model space, investigate their theoretical and finite sample properties, and provide a Metropolis-Hastings algorithm for searching the space of models. The tools proposed allow fast and thorough exploration of model spaces that account for hierarchical polynomial structure in the predictors and provide control of the inclusion of false positives in high posterior probability models.
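
    To make the heredity constraints concrete, here is a small enumeration for a hypothetical two-predictor surface with one interaction; this toy check is ours, not the authors' prior construction or Metropolis-Hastings search. Under strong heredity, x1:x2 may enter only when both parents are included; weak heredity requires at least one parent.

      from itertools import product

      terms = ["x1", "x2", "x1:x2"]

      def admissible(model, rule):
          """Check the heredity condition for the interaction term x1:x2."""
          if "x1:x2" not in model:
              return True
          parents = [t in model for t in ("x1", "x2")]
          return all(parents) if rule == "strong" else any(parents)

      all_models = [
          {t for t, keep in zip(terms, bits) if keep}
          for bits in product([False, True], repeat=len(terms))
      ]

      for rule in ("strong", "weak"):
          ok = [m for m in all_models if admissible(m, rule)]
          print(rule, len(ok), sorted(map(sorted, ok)))
      # strong heredity keeps 5 of the 8 models, weak heredity keeps 7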

  12. Generalized Selectivity Description for Polymeric Ion-Selective Electrodes Based on the Phase Boundary Potential Model.

    Science.gov (United States)

    Bakker, Eric

    2010-02-15

    A generalized description of the response behavior of potentiometric polymer membrane ion-selective electrodes is presented on the basis of ion-exchange equilibrium considerations at the sample-membrane interface. This paper includes and extends previously reported theoretical advances in a more compact yet more comprehensive form. Specifically, the phase boundary potential model is used to derive the origin of the Nernstian response behavior in a single expression, which is valid for a membrane containing any charge type and complex stoichiometry of ionophore and ion-exchanger. This forms the basis for a generalized expression of the selectivity coefficient, which may be used for the selectivity optimization of ion-selective membranes containing electrically charged and neutral ionophores of any desired stoichiometry. It is shown to reduce to expressions published previously for specialized cases, and may be effectively applied to problems relevant to modern potentiometry. The treatment is extended to mixed ion solutions, offering a comprehensive yet formally compact derivation of the response behavior of ion-selective electrodes to a mixture of ions of any desired charge. It is compared to predictions by the less accurate Nicolsky-Eisenman equation. The influence of ion fluxes or any form of electrochemical excitation is not considered here, but may be readily incorporated if an ion-exchange equilibrium at the interface can be assumed in these cases.
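
    For reference, the Nicolsky-Eisenman description that the generalized treatment is compared against has the standard textbook form

      $$ E = E^{0} + \frac{RT}{z_i F}\ln\Big(a_i + \sum_{j \neq i} K_{ij}^{\mathrm{pot}}\, a_j^{\,z_i/z_j}\Big) $$

    where $a_i$ is the activity of the primary ion with charge $z_i$, $a_j$ are the activities of interfering ions, and $K_{ij}^{\mathrm{pot}}$ are the potentiometric selectivity coefficients; the phase boundary model above replaces this approximation with expressions derived from the ion-exchange equilibrium itself.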

  13. A model for the sustainable selection of building envelope assemblies

    International Nuclear Information System (INIS)

    Huedo, Patricia; Mulet, Elena; López-Mesa, Belinda

    2016-01-01

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  14. On the hodological criterion for homology

    Directory of Open Access Journals (Sweden)

    Macarena eFaunes

    2015-06-01

    Full Text Available Owen’s pre-evolutionary definition of a homologue as the same organ in different animals under every variety of form and function and its redefinition after Darwin as the same trait in different lineages due to common ancestry entail the same heuristic problem: how to establish sameness. Although different criteria for homology often conflict, there is currently a generalized acceptance of gene expression as the best criterion. This gene-centered view of homology results from a reductionist and preformationist concept of living beings. Here, we adopt an alternative organismic-epigenetic viewpoint, and conceive living beings as systems whose identity is given by the dynamic interactions between their components at their multiple levels of composition. We posit that there cannot be an absolute homology criterion, and instead, homology should be inferred from comparisons at the levels and developmental stages where the delimitation of the compared trait lies. In this line, we argue that neural connectivity, i.e., the hodological criterion, should prevail in the determination of homologies between brain supra-cellular structures, such as the vertebrate pallium.

  15. PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION

    Directory of Open Access Journals (Sweden)

    Paulo Ávila

    2015-03-01

    Full Text Available The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, through a literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Five sub-criteria were also included within these criteria. Thereafter, a survey was elaborated and companies were contacted in order to determine which factors have more relevance in their decisions when choosing suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor, as sketched below. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that serves as decision-making support for the supplier/partner selection process.
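
    A minimal sketch of the linear weighting step; the criterion weights and supplier scores below are invented placeholders, not the survey results reported by the authors:

      # Hypothetical criterion weights from the survey (sum to 1)
      weights = {"Quality": 0.30, "Financial": 0.15, "Synergies": 0.10,
                 "Cost": 0.25, "Production System": 0.20}

      # Supplier scores per criterion on a 0-10 scale (illustrative)
      suppliers = {
          "A": {"Quality": 8, "Financial": 6, "Synergies": 5,
                "Cost": 7, "Production System": 6},
          "B": {"Quality": 6, "Financial": 8, "Synergies": 7,
                "Cost": 9, "Production System": 5},
      }

      def weighted_score(scores, weights):
          """Linear weighting: the sum of weight times score over criteria."""
          return sum(weights[c] * scores[c] for c in weights)

      ranking = sorted(suppliers,
                       key=lambda s: weighted_score(suppliers[s], weights),
                       reverse=True)
      for s in ranking:
          print(s, round(weighted_score(suppliers[s], weights), 2))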

  16. Locally linear representation Fisher criterion based tumor gene expressive data classification.

    Science.gov (United States)

    Li, Bo; Tian, Bei-Bei; Zhang, Xiao-Long; Zhang, Xiao-Ping

    2014-10-01

    Tumor gene expression data are characterized by a large number of genes with only a small number of observations, i.e., high dimensionality, so it is necessary to reduce the dimensionality before classification. In this paper, a discriminant manifold learning method, named locally linear representation Fisher criterion (LLRFC), is applied to extract features from tumor gene expression data. In LLRFC, an inter-class graph and an intra-class graph are constructed based on class information: in the inter-class graph each sample selects its k nearest neighbors among samples with different class labels, while in the intra-class graph the k nearest neighbors must be sampled from the same class. Locally least linear reconstruction is then introduced to optimize the corresponding weights in both graphs. Moreover, a Fisher criterion is modeled to explore a low-dimensional subspace where the reconstruction errors in the inter-class graph are maximized and the reconstruction errors in the intra-class graph are minimized, simultaneously. Experiments on benchmark tumor gene expression data against related algorithms validate the efficiency of the proposed LLRFC. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of the Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data-generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data-generating process than for a standard asymmetric data-generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.
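
    The criteria compared in the study are simple penalties on the maximized log-likelihood, so the comparison logic fits in a few lines (standard definitions; DIC needs a posterior sample and is omitted, and the two fits below are hypothetical numbers):

      import numpy as np

      def information_criteria(loglik: float, k: int, n: int) -> dict:
          """AIC, BIC and CAIC from a maximized log-likelihood,
          k estimated parameters and n observations."""
          return {
              "AIC":  -2 * loglik + 2 * k,
              "BIC":  -2 * loglik + k * np.log(n),
              "CAIC": -2 * loglik + k * (np.log(n) + 1),  # BIC with a harsher penalty
          }

      # Hypothetical fits: a standard vs. a more complex asymmetric ECM
      print(information_criteria(loglik=-412.3, k=5, n=200))
      print(information_criteria(loglik=-406.0, k=9, n=200))
      # The richer model wins on AIC but loses on BIC/CAIC, which is
      # exactly the kind of disagreement the Monte Carlo results turn on.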

  18. Broken selection rule in the quantum Rabi model.

    Science.gov (United States)

    Forn-Díaz, P; Romero, G; Harmans, C J P M; Solano, E; Mooij, J E

    2016-06-07

    Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling strength. We report the spectroscopic observation of a resonant transition that breaks a selection rule in the quantum Rabi model, implemented using an LC resonator and an artificial atom, a superconducting qubit. The eigenstates of the system consist of a superposition of bare qubit-resonator states with a relative sign. When the qubit-resonator coupling strength is negligible compared to their own frequencies, the matrix element between excited eigenstates of different sign is very small in the presence of a resonator drive, establishing a sign-preserving selection rule. Here, our qubit-resonator system operates in the ultrastrong coupling regime, where the coupling strength is 10% of the resonator frequency, allowing sign-changing transitions to be activated and, therefore, detected. This work shows that sign-changing transitions are an unambiguous, distinctive signature of systems operating in the ultrastrong coupling regime of the quantum Rabi model. These results pave the way to further studies of sign-preserving selection rules in multiqubit and multiphoton models.
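
    For context, the quantum Rabi Hamiltonian referred to here is conventionally written (standard form, with ħ = 1; not a detail taken from this particular experiment) as

      $$ H = \omega_r\, a^{\dagger}a + \frac{\omega_q}{2}\,\sigma_z + g\,\sigma_x\,(a + a^{\dagger}) $$

    with resonator frequency $\omega_r$, qubit frequency $\omega_q$ and coupling $g$; the ultrastrong regime reported above corresponds to $g/\omega_r \approx 0.1$, where the counter-rotating part of $\sigma_x (a + a^{\dagger})$ can no longer be neglected.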

  19. Models of cultural niche construction with selection and assortative mating.

    Science.gov (United States)

    Creanza, Nicole; Fogarty, Laurel; Feldman, Marcus W

    2012-01-01

    Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  20. Models of cultural niche construction with selection and assortative mating.

    Directory of Open Access Journals (Sweden)

    Nicole Creanza

    Full Text Available Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  1. Selection of Models for Ingestion Pathway and Relocation Radii Determination

    International Nuclear Information System (INIS)

    Blanchard, A.

    1998-01-01

    The distance at which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models were considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities

  2. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
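
    A bare-bones sketch of the Monte Carlo uncertainty-propagation step described above; the melt-pool relation is a toy surrogate, not the paper's calibrated finite-volume model, and all numbers are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(42)
      N = 10_000

      # Uncertain process inputs (illustrative means and spreads)
      power = rng.normal(200.0, 5.0, N)     # laser power [W]
      speed = rng.normal(800.0, 20.0, N)    # scan speed [mm/s]

      # Toy surrogate for melt-pool depth: depth ~ c * P / sqrt(v)
      c = 0.05
      depth = c * power / np.sqrt(speed)    # [mm], placeholder physics

      lo, hi = np.percentile(depth, [2.5, 97.5])
      spec_min = 0.30                       # hypothetical required depth [mm]
      reliability = (depth >= spec_min).mean()
      print(f"95% interval: [{lo:.3f}, {hi:.3f}] mm, "
            f"P(depth >= spec) = {reliability:.3f}")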

  3. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Full Text Available Sustainable development of science and technology is also conditioned by the continuous development of the means of production, which play a key role in the structure of each production system. The mechanical nature of the means of production is complemented by controlling and electronic devices in the context of intelligent industry. The selection of production machines for a technological process or project has so far been resolved in practice often only intuitively. With increasing intelligence, the number of variable parameters that have to be considered when choosing a production device is also increasing. It is therefore necessary to use computational techniques and decision-making methods, ranging from heuristic methods to more precise methodological procedures, during the selection. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  4. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantia...

  5. Implementation of natural frequency analysis and optimality criterion design. [computer technique for structural analysis

    Science.gov (United States)

    Levy, R.; Chai, K.

    1978-01-01

    A description is presented of an effective optimality criterion computer design approach for member size selection to improve frequency characteristics for moderately large structure models. It is shown that the implementation of the simultaneous iteration method within a natural frequency structural design optimization provides a method which is more efficient in isolating the lowest natural frequency modes than the frequently applied Stodola method. Additional computational advantages are derived by using previously converged eigenvectors at the start of the iterations during the second and the following design cycles. Vectors with random components can be used at the first design cycle, which, in relation to the entire computer time for the design program, results in only a moderate computational penalty.
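
    A compact sketch of simultaneous (subspace) inverse iteration for the lowest natural-frequency modes of K x = λ M x; this is a generic numerical illustration, and the matrices, subspace size and iteration count are our assumptions, not the paper's structural models:

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve, eigh

      def lowest_modes(K, M, m=3, iters=50, seed=0):
          """Simultaneous inverse iteration on K x = lam M x.
          Returns the m smallest eigenvalues and M-orthonormal vectors."""
          n = K.shape[0]
          rng = np.random.default_rng(seed)
          X = rng.standard_normal((n, m))   # random start vectors
          cho = cho_factor(K)               # factor K once, reuse every sweep
          for _ in range(iters):
              X = cho_solve(cho, M @ X)     # inverse iteration step
              # Rayleigh-Ritz on the current subspace
              Kr, Mr = X.T @ K @ X, X.T @ M @ X
              lam, Q = eigh(Kr, Mr)
              X = X @ Q
              X /= np.sqrt(np.diag(X.T @ M @ X))  # M-normalize columns
          return lam, X

      # Tiny test problem: a 1-D spring chain
      n = 50
      K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      M = np.eye(n)
      lam, X = lowest_modes(K, M, m=3)
      print(np.sqrt(lam))   # lowest natural frequencies

    Reusing the converged vectors of one design cycle as the start vectors of the next, as the abstract describes, amounts to replacing the random X above with the previous cycle's X.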

  6. DATA ANALYSIS BY FORMAL METHODS OF ESTIMATION OF INDEXES OF RATING CRITERION IN PROCESS OF ACCUMULATION OF DATA ABOUT WORKING OF THE TEACHING STAFF

    Directory of Open Access Journals (Sweden)

    Alexey E. Fedoseev

    2014-01-01

    Full Text Available The article considers the development of formal methods for assessing rating criterion indicators. It deals with a mathematical model that connects quantitative rating criterion characteristics, measured on various scales, with an intuitive idea of them. A solution to the problem of rating criterion estimation is proposed.

  7. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.

    2008-01-01

    Lysosomes are acidic organelles and are involved in various diseases, the most prominent is malaria. Accumulation of molecules in the cell by diffusion from the external solution into cytosol, lysosome and mitochondrium was calculated with the Fick–Nernst–Planck equation. The cell model considers...... the diffusion of neutral and ionic molecules across biomembranes, protonation to mono- or bivalent ions, adsorption to lipids, and electrical attraction or repulsion. Based on simulation results, high and selective accumulation in lysosomes was found for weak mono- and bivalent bases with intermediate to high...... predicted by the model and three were close. Five of the antimalarial drugs were lipophilic weak dibasic compounds. The predicted optimum properties for a selective accumulation of weak bivalent bases in lysosomes are consistent with experimental values and are more accurate than any prior calculation...
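
    The Fick–Nernst–Planck flux mentioned above combines diffusion with electromigration; its standard one-dimensional form (textbook expression, shown for context) is

      $$ J = -D\left(\frac{\partial c}{\partial x} + \frac{zF}{RT}\, c\, \frac{\partial \phi}{\partial x}\right) $$

    where $D$ is the diffusion coefficient, $c$ the concentration, $z$ the ion charge and $\phi$ the electric potential. The neutral form of a weak base crosses the membrane via the Fickian first term, while the protonated form also feels the potential term, which is what traps weak bases in the acidic lysosome.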

  8. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.

    Science.gov (United States)

    Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L

    2012-03-01

    Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.

  10. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
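
    A minimal sketch of the core idea (augmenting the state with the unknown parameter and running an extended Kalman filter) on a toy logistic-growth system; the dynamics, noise levels and true values are illustrative assumptions, not the paper's heat shock or gene regulation models:

      import numpy as np

      dt, steps = 0.1, 200
      r_true, K = 0.5, 10.0
      rng = np.random.default_rng(1)

      # Simulate noisy measurements of x (logistic growth)
      x = 0.5
      ys = []
      for _ in range(steps):
          x += dt * r_true * x * (1 - x / K)
          ys.append(x + rng.normal(0, 0.2))

      # EKF on the augmented state z = [x, r]; the parameter r has no dynamics
      z = np.array([0.5, 0.1])            # initial guess, r deliberately off
      P = np.diag([0.1, 1.0])
      Q = np.diag([1e-4, 1e-6])           # small process noise keeps r adaptable
      R = 0.2 ** 2
      H = np.array([[1.0, 0.0]])          # we observe x only

      for y in ys:
          xk, rk = z
          # Predict: Euler step of the dynamics and its Jacobian F = df/dz
          z = np.array([xk + dt * rk * xk * (1 - xk / K), rk])
          F = np.array([[1 + dt * rk * (1 - 2 * xk / K), dt * xk * (1 - xk / K)],
                        [0.0, 1.0]])
          P = F @ P @ F.T + Q
          # Update with the measurement
          S = H @ P @ H.T + R
          Kg = P @ H.T / S
          z = z + (Kg * (y - z[0])).ravel()
          P = (np.eye(2) - Kg @ H) @ P

      print(f"estimated r = {z[1]:.3f} (true {r_true})")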

  11. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that trade off more than one predefined objective simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.

  12. Models of speciation by sexual selection on polygenic traits

    OpenAIRE

    Lande, Russell

    1981-01-01

    The joint evolution of female mating preferences and secondary sexual characters of males is modeled for polygamous species in which males provide only genetic material to the next generation and females have many potential mates to choose among. Despite stabilizing natural selection on males, various types of mating preferences may create a runaway process in which the outcome of phenotypic evolution depends critically on the genetic variation parameters and initial conditions of a populatio...

  13. A Model of Social Selection and Successful Altruism

    Science.gov (United States)

    1989-10-07

    D., The evolution of social behavior. Annual Reviews of Ecological Systems, 5:325-383 (1974). 2. Dawkins, R., The Selfish Gene. Oxford: Oxford... alive and well. It will be important to re-examine this striking historical experience, not in terms of oversimplified models of the "selfish gene," but... Darwinian Analysis: The acceptance by many modern geneticists of the axiom that the basic unit of selection is the "selfish gene" quickly led to the

  14. A Bayesian Technique for Selecting a Linear Forecasting Model

    OpenAIRE

    Ramona L. Trader

    1983-01-01

    The specification of a forecasting model is considered in the context of linear multiple regression. Several potential predictor variables are available, but some of them convey little information about the dependent variable which is to be predicted. A technique for selecting the "best" set of predictors which takes into account the inherent uncertainty in prediction is detailed. In addition to current data, there is often substantial expert opinion available which is relevant to the forecas...

  15. A simplified wave enhancement criterion for moving extreme events

    Science.gov (United States)

    Kudryavtsev, Vladimir; Golubkin, Pavel; Chapron, Bertrand

    2015-11-01

    An analytical model is derived to efficiently describe the wave energy distribution along the main transects of a moving extreme weather event. The model essentially builds on a generalization of the self-similar wave growth model and the assumption of a strongly dominant single spectral mode in a given quadrant of the storm. The criterion to anticipate wave enhancement with the generation of trapped abnormal waves is $gr/u_r^2 \approx c_T (u_r/V)^{1/q}$, with $r$, $u_r$, and $V$ the radial distance, average sustained wind speed, and translation velocity, respectively. The constants $q$ and $c_T$ follow the fetch-law definitions. If forced during a sufficient time-scale interval, also defined from this generalized self-similar wave growth model, waves can be trapped and a large amplification of the wave energy will occur in the front-right storm quadrant. Remarkably, the group velocity and corresponding wavelength of outrunning wave systems become wind-speed independent and solely related to the translation velocity. The resulting significant wave height also only weakly depends on wind speed, and more strongly on the translation velocity. Compared to altimeter satellite measurements, the proposed analytical solutions for the wave energy distribution demonstrate convincing agreement. As analytically developed, the wave enhancement criterion can provide a rapid evaluation of the general characteristics of each storm, especially the expected wavefield asymmetry.

  16. Physical and Constructive (Limiting) Criterions of Gear Wheels Wear

    Science.gov (United States)

    Fedorov, S. V.

    2018-01-01

    We suggest using a generalized model of friction - the model of elastic-plastic deformation of a body element located on the surface of the friction pair. The model is based on our new engineering approach to the problem of friction, triboergodynamics. Friction is examined as a transformative and dissipative process, and a structural-energetic interpretation of friction as a process of elasto-plastic deformation and fracture of contact volumes is proposed. The model of the evolution of a Hertzian (heavily loaded) friction contact is considered, and the least-wear-particle principle is formulated: the mechanical (nano) quantum. The mechanical quantum represents the smallest structural form of a solid body under friction conditions; it is the dynamic oscillator of the dissipative friction structure and can be examined as the elementary nanostructure of a metallic solid. At friction, in the state of most complete evolution of the elementary tribosystem (tribocontact), all mechanical quanta (subtribosystems) except one elastically and reversibly transform the energy of the outer impact (mechanical movement). In these terms, only one mechanical quantum is lost – the standard of wear. From this position we consider the physical criterion of wear and the constructive (limiting) criterion of gear-tooth efficiency, among other practical examples of tribosystems, with the new tribology notion – the mechanical (nano) quantum.

  17. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

    As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is basic information for carrying out global warming research and for the sustainable use of land resources. Digital terrain attributes are often use...... was selected; in total, 2,514,820 data-mining models were constructed from 71 different grids, from 12 m to 2304 m, and 22 attributes: 21 attributes derived from the DTM plus the original elevation. The relative importance and usage of each attribute in every model were calculated, as were the comprehensive impact rates of each attribute

  18. Selecting, weeding, and weighting biased climate model ensembles

    Science.gov (United States)

    Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.

    2012-12-01

    In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues for formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble CAM3.1/slab ocean climate models with different parameter settings to think through the issues that are involved with predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover what fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of your log-likelihood.
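
    A bare-bones sketch of the weighting step, turning a Gaussian log-likelihood into normalized ensemble weights; the costs and sensitivities below are random stand-ins, not the CAM3.1 ensemble:

      import numpy as np

      rng = np.random.default_rng(7)
      n_models = 165

      # Hypothetical skill scores: squared model-observation mismatch,
      # already projected onto the EOFs that matter for sensitivity
      cost = rng.gamma(shape=3.0, scale=2.0, size=n_models)
      sensitivity = rng.normal(3.0, 0.8, n_models)  # stand-in for each model's response

      # Bayesian weights: w_i proportional to exp(log-likelihood)
      loglik = -0.5 * cost
      w = np.exp(loglik - loglik.max())             # subtract max for stability
      w /= w.sum()

      print("plain mean:   ", sensitivity.mean())
      print("weighted mean:", w @ sensitivity)
      print("effective number of members:", 1.0 / np.sum(w ** 2))

    The effective-member diagnostic on the last line makes visible how sharply such weighting can concentrate on a few models, which is one reason the abstract finds weighting a relatively poor bias correction.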

  19. The alternative DSM-5 personality disorder traits criterion

    DEFF Research Database (Denmark)

    Bach, Bo; Maples-Keller, Jessica L; Bo, Sune

    2016-01-01

    The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013a) offers an alternative model for Personality Disorders (PDs) in Section III, which consists in part of a pathological personality traits criterion measured...... with the Personality Inventory for DSM-5 (PID-5). The PID-5 self-report instrument currently exists in the original 220-item form, a short 100-item form, and a brief 25-item form. For clinicians and researchers, the choice of a particular PID-5 form depends on feasibility, but also on reliability and validity. The goal

  20. Optimal foraging in marine ecosystem models: selectivity, profitability and switching

    DEFF Research Database (Denmark)

    Visser, Andre W.; Fiksen, Ø.

    2013-01-01

    ecological mechanics and evolutionary logic as a solution to diet selection in ecosystem models. When a predator can consume a range of prey items it has to choose which foraging mode to use, which prey to ignore and which ones to pursue, and animals are known to be particularly skilled in adapting...... to the preference functions commonly used in models today. Indeed, depending on prey class resolution, optimal foraging can yield feeding rates that are considerably different from the ‘switching functions’ often applied in marine ecosystem models. Dietary inclusion is dictated by two optimality choices: 1...... by letting predators maximize energy intake or more properly, some measure of fitness where predation risk and cost are also included. An optimal foraging or fitness maximizing approach will give marine ecosystem models a sound principle to determine trophic interactions...
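
    A compact sketch of the classical prey-choice rule that underlies this logic (textbook optimal foraging with the Holling disk equation; this is our illustration, not necessarily the paper's formulation):

      def optimal_diet(prey):
          """Rank prey by profitability e/h and add types to the diet
          while doing so raises the long-term intake rate.
          prey: list of (encounter_rate, energy, handling_time)."""
          ranked = sorted(prey, key=lambda p: p[1] / p[2], reverse=True)
          diet, lam_e, lam_h, rate = [], 0.0, 0.0, 0.0
          for lam, e, h in ranked:
              new_rate = (lam_e + lam * e) / (1 + lam_h + lam * h)
              if new_rate < rate:      # adding this type lowers intake: stop
                  break
              lam_e, lam_h, rate = lam_e + lam * e, lam_h + lam * h, new_rate
              diet.append((lam, e, h))
          return diet, rate

      # Three hypothetical prey types: (encounter rate, energy, handling time)
      diet, rate = optimal_diet([(0.2, 10.0, 2.0), (0.5, 4.0, 1.5), (1.0, 1.0, 1.0)])
      print(len(diet), "types included, intake rate =", round(rate, 3))

    Note how inclusion depends on the abundance of better prey, not on the abundance of the prey type itself, which is what makes the resulting feeding rates differ from fixed switching functions.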

  1. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages to the similar methodology for the proportional hazards model. One complication compared...... and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number...... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare...

  2. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. Preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and the model can easily be applied in both manufacturing and service industries.
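
    A toy version of the selection problem as a mixed integer program, using the open-source PuLP modeler; the technique names, gains and budget are invented for illustration, whereas the paper's model covers fifty-four techniques and a four-stage productivity structure:

      import pulp

      techniques = {"5S": (2.0, 10), "TPM": (5.5, 40), "Kanban": (3.0, 15),
                    "SMED": (4.0, 25), "Training": (2.5, 12)}  # (gain %, cost)
      budget = 60

      prob = pulp.LpProblem("technique_selection", pulp.LpMaximize)
      x = {t: pulp.LpVariable(t, cat="Binary") for t in techniques}

      # Productivity assumed linear in the selected techniques, as in the model
      prob += pulp.lpSum(techniques[t][0] * x[t] for t in techniques)
      prob += pulp.lpSum(techniques[t][1] * x[t] for t in techniques) <= budget

      prob.solve(pulp.PULP_CBC_CMD(msg=0))
      chosen = [t for t in techniques if x[t].value() == 1]
      print(chosen, "gain =", pulp.value(prob.objective))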

  3. A criterion for nuclear safety assessment

    International Nuclear Information System (INIS)

    Gonzalez, A.J.

    1975-01-01

    The criterion presented has been developed by extrapolating the ICRP basic concepts and philosophies on radiological protection to the nuclear safety field. The criterion postulates the use of a hyperbolic control curve basically similar to the one proposed by F.R. Farmer for probabilistic evaluations. The postulated control curve differs from Farmer's curve in both application range and the characteristics of its mathematical function. A range of application is proposed from a minimum severity level (the ¹³¹I authorized discharge limit) to a severity level as large as the reactor's ¹³¹I inventory. The exponent of the proposed hyperbolic function varies with both the siting and the ¹³¹I inventory of the reactor, and it also changes with the extrapolated radioprotection concept considered. The methodology to evaluate the control curve exponent is also presented. Three evaluation methods are described, with the following objectives: (a) to limit the expectation of individual dose in order to limit individual risk; (b) to limit the expectation of collective dose in order to limit population detriment to justifiable levels; and (c) to optimize the installation in order to obtain a dose expectation as low as readily achievable. Three figures present the control curve exponent plotted versus the dosimetric factor for ¹³¹I inventories of 10⁶, 10⁷ and 10⁸ Ci. The following conclusions can be deduced: (a) the expectation of individual dose changes very little with different inventories; (b) in sites where the collective dose per unit of activity released is high, only large reactors are justifiable; and (c) the optimization analysis is generally less restrictive than the justification one. Finally, a criterion is suggested for the limitation of collective dose commitment, to limit the future dose rate arising from accidents that could occur at present.

  4. Two novel synchronization criterions for a unified chaotic system

    International Nuclear Information System (INIS)

    Tao Chaohai; Xiong Hongxia; Hu Feng

    2006-01-01

    Two novel synchronization criteria are proposed in this paper, including drive-response synchronization and adaptive synchronization schemes. Moreover, these criteria can be applied to a large class of chaotic systems and are very useful for secure communication.

  5. Novel global robust stability criterion for neural networks with delay

    International Nuclear Information System (INIS)

    Singh, Vimal

    2009-01-01

    A novel criterion for the global robust stability of Hopfield-type interval neural networks with delay is presented. An example illustrating the improvement of the present criterion over several recently reported criteria is given.

  6. Systems interaction and single failure criterion

    International Nuclear Information System (INIS)

    1981-01-01

    This report documents the results of a six-month study to evaluate the ongoing research programs of the U.S. Nuclear Regulatory Commission (NRC) and U.S. commercial nuclear station owners which address the safety significance of systems interaction and the regulatory adequacy of the single failure criterion. The evaluation of system interactions provided is the initial phase of a more detailed study leading to the development and application of methodology for quantifying the relative safety of operating nuclear plants. (Auth.)

  7. Early Stop Criterion from the Bootstrap Ensemble

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan; Fog, Torben L.

    1997-01-01

    This paper addresses the problem of generalization error estimation in neural networks. A new early stop criterion based on a Bootstrap estimate of the generalization error is suggested. The estimate does not require the network to be trained to the minimum of the cost function, as required...... by other methods based on asymptotic theory. Moreover, in contrast to methods based on cross-validation which require data left out for testing, and thus biasing the estimate, the Bootstrap technique does not have this disadvantage. The potential of the suggested technique is demonstrated on various time...

  8. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. First, a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  9. Selecting global climate models for regional climate change studies.

    Science.gov (United States)

    Pierce, David W; Barnett, Tim P; Santer, Benjamin D; Gleckler, Peter J

    2009-05-26

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Niño/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and the results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any one individual model, already found in global studies examining the mean climate, holds in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.

  10. Selecting an Appropriate Upscaled Reservoir Model Based on Connectivity Analysis

    Directory of Open Access Journals (Sweden)

    Preux Christophe

    2016-09-01

    Full Text Available Reservoir engineers aim to build reservoir models to investigate fluid flows within hydrocarbon reservoirs. These models consist of three-dimensional grids populated by petrophysical properties. In this paper, we focus on permeability, which is known to significantly influence fluid flow. Reservoir models usually encompass a very large number of fine grid blocks to better represent heterogeneities. However, performing fluid flow simulations for such fine models is extensively CPU-time consuming. A common practice consists in converting the fine models into coarse models with fewer grid blocks: this is the upscaling process. Many upscaling methods have been proposed in the literature, all of which lead to distinct coarse models. The problem is how to choose the appropriate upscaling method. Various criteria have been established to evaluate the information loss due to upscaling, but none of them investigate connectivity. In this paper, we propose to first perform a connectivity analysis for the fine and candidate coarse models. This makes it possible to identify the shortest paths connecting wells. Then, we introduce two indicators to quantify the length and trajectory mismatch between the paths for the fine and the coarse models. The upscaling technique to be recommended is the one that provides the coarse model for which the shortest paths are the closest to the shortest paths determined for the fine model, both in terms of length and trajectory. Last, the potential of this methodology is investigated on two test cases. We show that the two indicators help select suitable upscaling techniques as long as gravity is not a prominent factor driving fluid flows.
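
    A schematic of the path-length indicator using a graph library; the grid graphs and uniform edge weights are our own toy stand-ins for the fine and coarse reservoir models, and in practice the weights would encode inverse permeability:

      import networkx as nx

      def well_path(grid_shape, injector, producer, weight=1.0):
          """Shortest injector-producer path on a grid graph whose edge
          weights stand in for resistance to flow."""
          G = nx.grid_2d_graph(*grid_shape)
          nx.set_edge_attributes(G, weight, "w")
          return nx.shortest_path(G, injector, producer, weight="w")

      fine = well_path((100, 100), (0, 0), (99, 99))
      coarse = well_path((10, 10), (0, 0), (9, 9))

      # Length indicator: compare path lengths after rescaling the coarse grid
      ratio = (len(coarse) - 1) * 10 / (len(fine) - 1)
      print(f"coarse/fine path-length ratio: {ratio:.2f}")  # 1.0 = length preserved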

  11. Bioeconomic model and selection indices in Aberdeen Angus cattle.

    Science.gov (United States)

    Campos, G S; Braccini Neto, J; Oaigen, R P; Cardoso, F F; Cobuci, J A; Kern, E L; Campos, L T; Bertoli, C D; McManus, C M

    2014-08-01

    A bioeconomic model was developed to calculate economic values for biological traits in full-cycle production systems and to propose selection indices based on the selection criteria used in the Brazilian Aberdeen Angus genetic breeding programme (PROMEBO). To assess the impact of changes in the performance of the traits on the profit of the production system, the initial values of the traits were increased by 1%. The economic values for number of calves weaned (NCW) and slaughter weight (SW) were, respectively, R$ 6.65 and R$ 1.43/cow/year. The selection index at weaning showed a 44.77% emphasis on body weight, 14.24% for conformation, 30.36% for early maturing and 10.63% for muscle development. The eighteen-month index showed an emphasis of 77.61% for body weight, 4.99% for conformation, 11.09% for early maturing, 6.10% for muscle development and 0.22% for scrotal circumference. NCW showed the highest economic impact, and SW had an important positive effect on the economics of the production system. The selection index proposed can be used by breeders and should contribute to greater profitability. © 2014 Blackwell Verlag GmbH.

  12. A proposed risk acceptance criterion for nuclear fuel waste disposal

    International Nuclear Information System (INIS)

    Mehta, K.

    1985-06-01

    The need to establish a radiological protection criterion that applies specifically to disposal of high level nuclear fuel wastes arises from the difficulty of applying the present ICRP recommendations. These recommendations apply to situations in which radiological detriment can be actively controlled, while a permanent waste disposal facility is meant to operate without the need for corrective actions. Also, the risks associated with waste disposal depend on events and processes that have various probabilities of occurrence. In these circumstances, it is not suitable to apply standards that are based on a single dose limit as in the present ICRP recommendations, because it will generally be possible to envisage events, perhaps rare, that would lead to doses above any selected limit. To overcome these difficulties, it is proposed to base a criterion for acceptability on a set of dose values and corresponding limiting values of probabilities; this set of values constitutes a risk-limit line. A risk-limit line suitable for waste disposal is proposed that has characteristics consistent with the basic philosophy of the ICRP and UNSCEAR recommendations, and is based on levels of natural background radiation.

  13. Maximum Correntropy Criterion for Robust Face Recognition.

    Science.gov (United States)

    He, Ran; Zheng, Wei-Shi; Hu, Bao-Gang

    2011-08-01

    In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art ℓ1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much less sensitive to outliers. In order to develop a more tractable and practical approach, we in particular impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way, so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with the occlusion and corruption problems in face recognition as compared to the related state-of-the-art methods. In particular, it shows that the proposed method can improve both recognition accuracy and receiver operating characteristic (ROC) curves, while the computational cost is much lower than that of the SRC algorithms.
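
    A minimal sketch of the half-quadratic iteration described above, alternating Gaussian-kernel weights with a weighted nonnegative least-squares step (here via scipy's nnls). The adaptive kernel width (a median-of-residuals heuristic) and the iteration count are our illustrative choices, not values from the paper.

        import numpy as np
        from scipy.optimize import nnls

        def correntropy_nnls(A, y, n_iter=20):
            """Maximize correntropy of residuals subject to x >= 0 via
            half-quadratic reweighting: outliers receive small weights."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                resid = y - A @ x
                sigma = np.median(np.abs(resid)) + 1e-12   # heuristic kernel width
                w = np.exp(-resid**2 / (2 * sigma**2))     # Gaussian-kernel weights
                sw = np.sqrt(w)
                x, _ = nnls(sw[:, None] * A, sw * y)       # weighted NNLS step
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(60, 10))
        x_true = np.abs(rng.normal(size=10))
        y = A @ x_true
        y[:5] += 20.0                                      # gross outliers
        print(np.linalg.norm(correntropy_nnls(A, y) - x_true))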

  14. Use of the Niyama criterion to predict porosity of the mushy zone with deformation

    Directory of Open Access Journals (Sweden)

    S. Polyakov

    2011-10-01

    Full Text Available The article presents new results on the use of the Niyama criterion to estimate porosity appearance in castings under hindered shrinkage. The effect of deformation of the mushy zone on filtration is shown. A new form of the Niyama criterion accounting for the hindered shrinkage and the range of deformation localization has been obtained. The results of this study are illustrated by the example of the Niyama criterion calculated for Al-Cu alloys under different diffusion conditions of solidification and rates of deformation in the mushy zone. The derived equations can be used in a mathematical model of casting solidification as well as for interpretation of the simulation results of casting solidification under hindered shrinkage. The presented study resulted in a new procedure for using the Niyama criterion under mushy zone deformation.
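
    The modified, deformation-aware form derived in the paper is not reproduced here. As a baseline, the sketch below evaluates the classical criterion Ny = G / sqrt(dT/dt) (thermal gradient over the square root of the cooling rate, taken near the end of solidification) on a 1D temperature history; low values flag porosity risk.

        import numpy as np

        def niyama_classical(T, dx, dt):
            """T: temperature snapshots, shape (n_times, n_cells);
            dx: cell size (m); dt: time step (s). Returns Ny per cell."""
            G = np.abs(np.gradient(T[-1], dx))               # thermal gradient, K/m
            cooling = np.abs((T[-1] - T[-2]) / dt)           # cooling rate, K/s
            return G / np.sqrt(np.maximum(cooling, 1e-12))   # low Ny -> porosity risk

        # Synthetic cooling field: 10 snapshots of 50 cells
        T = 900.0 - 5.0 * np.arange(10)[:, None] - 2.0 * np.arange(50)[None, :]
        print(niyama_classical(T, dx=1e-3, dt=0.1).min())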

  15. Auditory-model based robust feature selection for speech recognition.

    Science.gov (United States)

    Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan

    2010-02-01

    It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.

  16. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

    Full Text Available The article highlights an important project management task: the reprofiling (conversion) of buildings. In construction project management, it is expedient to pay attention to selecting effective engineering solutions that reduce project duration and cost. This article presents a methodology for the selection of efficient organizational and technical solutions for building reprofiling projects. The method is based on compiling project variants in the program Microsoft Project and on experimental-statistical analysis using the program COMPEX. Introducing this technique into building reprofiling allows efficient project models to be chosen, depending on the given constraints. The technique can also be used for various other construction projects.

  17. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. To obtain optimum reliability in power generation, it is desirable to select a wind turbine generator which is best suited for a site. The mathematical relationship developed in this paper can be used for reliability-based site-matching turbine selection.
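
    A minimal sketch of the computation chain described: the cubic mean cube root of Weibull-distributed wind speeds fed into a standard piecewise turbine power curve. The Weibull parameters and the cut-in/rated/cut-out speeds below are illustrative, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def cubic_mean_cube_root(v):
            return np.mean(v**3) ** (1.0 / 3.0)

        def turbine_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0):
            """Piecewise power curve (MW), cubic between cut-in and rated speed."""
            if v < v_in or v >= v_out:
                return 0.0
            if v < v_rated:
                return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)
            return p_rated

        # A month of hourly wind speeds from a Weibull model (shape 2, scale 8 m/s)
        v = 8.0 * rng.weibull(2.0, size=720)
        print(turbine_power(cubic_mean_cube_root(v)))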

  18. Improving data analysis in herpetology: Using Akaike's information criterion (AIC) to assess the strength of biological hypotheses

    Science.gov (United States)

    Mazerolle, M.J.

    2006-01-01

    In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools have surfaced since the mid 1970's, they are still underutilized in certain fields, particularly in herpetology. This is the case of the Akaike information criterion (AIC), which is remarkably superior for model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
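
    The sketch below illustrates the multimodel-inference workflow the paper advocates: AIC differences, Akaike weights, and a model-averaged estimate of a coefficient shared by the candidate models. The AIC values and coefficient estimates are made-up numbers for illustration only.

        import numpy as np

        def akaike_weights(aic):
            """Akaike weights: relative strength of evidence for each model."""
            delta = np.asarray(aic) - np.min(aic)
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # Illustrative values: AIC and a coefficient estimate from three
        # candidate models that all contain the variable of interest
        aic = [210.4, 212.1, 215.8]
        beta = [0.82, 0.75, 0.91]

        w = akaike_weights(aic)
        print("weights:", w.round(3))
        print("model-averaged estimate:", np.dot(w, beta).round(3))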

  19. State selection in Markov models for panel data with application to psoriatic arthritis.

    Science.gov (United States)

    Thom, Howard H Z; Jackson, Christopher H; Commenges, Daniel; Sharples, Linda D

    2015-07-20

    Markov multistate models in continuous-time are commonly used to understand the progression over time of disease or the effect of treatments and covariates on patient outcomes. The states in multistate models are related to categorisations of the disease status, but there is often uncertainty about the number of categories to use and how to define them. Many categorisations, and therefore multistate models with different states, may be possible. Different multistate models can show differences in the effects of covariates or in the time to events, such as death, hospitalisation, or disease progression. Furthermore, different categorisations contain different quantities of information, so that the corresponding likelihoods are on different scales, and standard, likelihood-based model comparison is not applicable. We adapt a recently developed modification of Akaike's criterion, and a cross-validatory criterion, to compare the predictive ability of multistate models on the information which they share. All the models we consider are fitted to data consisting of observations of the process at arbitrary times, often called 'panel' data. We develop an implementation of these criteria through Hidden Markov models and apply them to the comparison of multistate models for the Health Assessment Questionnaire score in psoriatic arthritis. This procedure is straightforward to implement in the R package 'msm'. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Development of Solar Drying Model for Selected Cambodian Fish Species

    Directory of Open Access Journals (Sweden)

    Anna Hubackova

    2014-01-01

    Full Text Available Solar drying was investigated as one of the promising techniques for fish processing in Cambodia. Solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. The mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.
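
    As an illustration of the model-comparison step, the sketch below fits the Logarithmic thin-layer model MR(t) = a*exp(-k*t) + c to a moisture-ratio series with scipy's curve_fit and scores it with R2 and RMSE. The data points are placeholders, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def logarithmic(t, a, k, c):
            return a * np.exp(-k * t) + c

        t = np.array([0, 1, 2, 3, 4, 5, 6, 7], dtype=float)   # drying time, h
        mr = np.array([1.0, 0.74, 0.55, 0.42, 0.33, 0.27, 0.23, 0.21])

        p, _ = curve_fit(logarithmic, t, mr, p0=(1.0, 0.5, 0.1))
        pred = logarithmic(t, *p)
        ss_res = np.sum((mr - pred) ** 2)
        r2 = 1 - ss_res / np.sum((mr - mr.mean()) ** 2)
        rmse = np.sqrt(ss_res / len(mr))
        print(f"a={p[0]:.3f} k={p[1]:.3f} c={p[2]:.3f}  R2={r2:.4f} RMSE={rmse:.4f}")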

  1. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we have considered the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and have discussed two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective Nonlinear Programming Problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model, in which we define the membership functions of each objective function, transform the membership functions into equivalent linear membership functions by first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  2. Selection Strategies for Social Influence in the Threshold Model

    Science.gov (United States)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
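
    A minimal sketch of the Threshold Model cascade with one of the structure-based strategies mentioned (degree rank): a node adopts once the active fraction of its neighbours reaches its threshold. The network, the uniform threshold and the initiator fraction below are illustrative choices.

        import networkx as nx

        def cascade_size(G, initiators, threshold=0.5):
            """Run the Threshold Model to saturation; return number of adopters."""
            active = set(initiators)
            changed = True
            while changed:
                changed = False
                for v in G:
                    if v in active:
                        continue
                    nbrs = list(G[v])
                    if nbrs and sum(n in active for n in nbrs) / len(nbrs) >= threshold:
                        active.add(v)
                        changed = True
            return len(active)

        G = nx.erdos_renyi_graph(500, 0.02, seed=7)
        top = sorted(G, key=G.degree, reverse=True)[:50]   # degree-rank strategy
        print(cascade_size(G, top) / G.number_of_nodes())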

  3. Continuum model for chiral induced spin selectivity in helical molecules

    Energy Technology Data Exchange (ETDEWEB)

    Medina, Ernesto [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France); Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); González-Arraga, Luis A. [IMDEA Nanoscience, Cantoblanco, 28049 Madrid (Spain); Finkelstein-Shapiro, Daniel; Mujica, Vladimiro [Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); Berche, Bertrand [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France)

    2015-05-21

    A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well oriented p_z type orbitals on base molecules forming a staircase of definite chirality. In a tight binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z − π_z coupling via interbase p_{x,y} − p_z hopping, introducing spin coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.

  4. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  5. Variable Selection in Model-based Clustering: A General Variable Role Modeling

    OpenAIRE

    Maugis, Cathy; Celeux, Gilles; Martin-Magniette, Marie-Laure

    2008-01-01

    The currently available variable selection procedures in model-based clustering assume that the irrelevant clustering variables are all independent or are all linked with the relevant clustering variables. We propose a more versatile variable selection model which describes three possible roles for each variable: the relevant clustering variables, the irrelevant clustering variables dependent on a part of the relevant clustering variables, and the irrelevant clustering variables totally independent of the relevant clustering variables.

  6. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  7. Direction selectivity in a model of the starburst amacrine cell.

    Science.gov (United States)

    Tukker, John J; Taylor, W Rowland; Smith, Robert G

    2004-01-01

    The starburst amacrine cell (SBAC), found in all mammalian retinas, is thought to provide the directional inhibitory input recorded in On-Off direction-selective ganglion cells (DSGCs). While voltage recordings from the somas of SBACs have not shown robust direction selectivity (DS), the dendritic tips of these cells display direction-selective calcium signals, even when gamma-aminobutyric acid (GABAa,c) channels are blocked, implying that inhibition is not necessary to generate DS. This suggested that the distinctive morphology of the SBAC could generate a DS signal at the dendritic tips, where most of its synaptic output is located. To explore this possibility, we constructed a compartmental model incorporating realistic morphological structure, passive membrane properties, and excitatory inputs. We found robust DS at the dendritic tips but not at the soma. Two-spot apparent motion and annulus radial motion produced weak DS, but thin bars produced robust DS. For these stimuli, DS was caused by the interaction of a local synaptic input signal with a temporally delayed "global" signal, that is, an excitatory postsynaptic potential (EPSP) that spread from the activated inputs into the soma and throughout the dendritic tree. In the preferred direction the signals in the dendritic tips coincided, allowing summation, whereas in the null direction the local signal preceded the global signal, preventing summation. Sine-wave grating stimuli produced the greatest amount of DS, especially at high velocities and low spatial frequencies. The sine-wave DS responses could be accounted for by a simple mathematical model, which summed phase-shifted signals from soma and dendritic tip. By testing different artificial morphologies, we discovered DS was relatively independent of the morphological details, but depended on having a sufficient number of inputs at the distal tips and a limited electrotonic isolation. Adding voltage-gated calcium channels to the model showed that their

  8. Parametric pattern selection in a reaction-diffusion model.

    Directory of Open Access Journals (Sweden)

    Michael Stich

    Full Text Available We compare spot patterns generated by Turing mechanisms with those generated by replication cascades, in a model one-dimensional reaction-diffusion system. We determine the stability region of spot solutions in parameter space as a function of a natural control parameter (feed rate), where degenerate patterns with different numbers of spots coexist for a fixed feed rate. While it is possible to generate identical patterns via both mechanisms, we show that replication cascades lead to a wider choice of pattern profiles that can be selected through a tuning of the feed rate, exploiting hysteresis and directionality effects of the different pattern pathways.

  9. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  10. Modeling Knowledge Resource Selection in Expert Librarian Search

    Science.gov (United States)

    KAUFMAN, David R.; MEHRYAR, Maryam; CHASE, Herbert; HUNG, Peter; CHILOV, Marina; JOHNSON, Stephen B.; MENDONCA, Eneida

    2011-01-01

    Providing knowledge at the point of care offers the possibility of reducing error and improving patient outcomes. However, the vast majority of physicians' information needs are not met in a timely fashion. The research presented in this paper models an expert librarian's search strategies as they pertain to the selection and use of various electronic information resources. The 10 searches conducted by the librarian to address physicians' information needs varied in terms of complexity and question type. The librarian employed a total of 10 resources and used as many as 7 in a single search. The longer term objective is to model the sequential process in sufficient detail as to be able to contribute to the development of intelligent automated search agents. PMID:19380912

  11. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of the actual scene image, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in correctly perceiving the CP direction of a one-pixel minority value among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded image CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as measuring the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion, in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real signature test target, and conventional methods for the more linear part (displaying). The application to

  12. Nash equilibrium and multi criterion aerodynamic optimization

    Science.gov (United States)

    Tang, Zhili; Zhang, Lianhe

    2016-06-01

    Game theory, and in particular its Nash Equilibrium (NE), has been gaining importance in solving Multi Criterion Optimization (MCO) engineering problems over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys/proposes four efficient algorithms for calculating a NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on the fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical functions, to illustrate the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.

  13. Application of the single failure criterion

    International Nuclear Information System (INIS)

    1990-01-01

    In order to present further details on the application and interpretation and on the limitations of individual concepts in the NUSS Codes and Safety Guides, a series of Safety Practice publications have been initiated. It is hoped that many Member States will be able to benefit from the experience presented in these books. The present publication will be useful not only to regulators but also to designers and could be particularly helpful in the interpretation of cases which fall on the borderline between the two areas. It should assist in clarifying, by way of examples, many of the concepts and implementation methods. It also describes some of the limitations involved. The book addresses a specialized topic and it is recommended that it be used together with the other books in the Safety Series. During the development of this publication the actual practices of all countries with major reactor programmes have been taken into account. An interpretation of the relevant text of the Design Code is given in the light of these national practices. The criterion is put into perspective with the general reliability requirements in which it is also embedded in the Design Code. Its relation to common cause and other multiple failure cases and also to the temporary disengagement of components in systems important to safety is clarified. Its use and its limitations are thus explained in the context of reliability targets for systems performance. The guidance provided applies to all reactor systems and would be applicable even to systems not in nuclear power plants. But since this publication was developed to give an interpretation of a specific requirement of the Design Code, the broader applicability is not explicitly claimed. The Design Code lists three cases for which compliance with the criterion may not be justified. The present publication assists in the more precise and practical identification of those cases. 9 figs, 1 tab

  14. Zero mass field quantization and Kibble's long-range force criterion for the Goldstone theorem

    International Nuclear Information System (INIS)

    Wright, S.H.

    1981-01-01

    The central theme of the dissertation is an investigation of the long-range force criterion used by Kibble in his discussion of the Goldstone Theorem. This investigation is broken up into the following sections: I. Introduction. Spontaneous symmetry breaking, the Goldstone Theorem and the conditions under which it holds are discussed. II. Massless Wave Expansions. In order to make explicit calculations of the operator commutators used in applying Kibble's criterion, it is necessary to work out the operator expansions for a massless field. Unusual results are obtained which include operators corresponding to classical macroscopic field modes. III. The Kibble Criterion for Simple Models Exhibiting Spontaneously Broken Symmetries. The results of the previous section are applied to simple models with spontaneously broken symmetries, namely, the real scalar massless field and the Goldstone model without gauge coupling. IV. The Higgs Mechanism in Classical Field Theory. It is shown that the Higgs Mechanism has a simple interpretation in terms of classical field theory, namely, that it arises from a derivative coupling term between the Goldstone fields and the gauge fields. V. The Higgs Mechanism and Kibble's Criterion. This section draws together the material discussed in sections II to IV. Explicit calculations are made to evaluate Kibble's criterion on a Goldstone-Higgs type of model in the Coulomb gauge. It is found, as expected, that the criterion is not met, but not for reasons relating to the range of the mediating force. By referring to the findings of sections III and IV, it is concluded that the common denominator underlying both the Higgs Mechanism and the failure of Kibble's criterion is a structural aspect of the field equations: derivative coupling between fields.

  15. Cliff-edge model of obstetric selection in humans.

    Science.gov (United States)

    Mitteroecker, Philipp; Huttegger, Simon M; Fischer, Barbara; Pavlicev, Mihaela

    2016-12-20

    The strikingly high incidence of obstructed labor due to the disproportion of fetal size and the mother's pelvic dimensions has puzzled evolutionary scientists for decades. Here we propose that these high rates are a direct consequence of the distinct characteristics of human obstetric selection. Neonatal size relative to the birth-relevant maternal dimensions is highly variable and positively associated with reproductive success until it reaches a critical value, beyond which natural delivery becomes impossible. As a consequence, the symmetric phenotype distribution cannot match the highly asymmetric, cliff-edged fitness distribution well: The optimal phenotype distribution that maximizes population mean fitness entails a fraction of individuals falling beyond the "fitness edge" (i.e., those with fetopelvic disproportion). Using a simple mathematical model, we show that weak directional selection for a large neonate, a narrow pelvic canal, or both is sufficient to account for the considerable incidence of fetopelvic disproportion. Based on this model, we predict that the regular use of Caesarean sections throughout the last decades has led to an evolutionary increase of fetopelvic disproportion rates by 10 to 20%.
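
    A small numerical sketch of the cliff-edge argument, under assumed functional forms: fitness rises linearly with relative neonatal size up to a cliff at D and drops to zero beyond it (obstructed labor); the population mean that maximizes expected fitness then leaves a nonzero fraction of individuals beyond the cliff. All numbers are illustrative, not the paper's parameters.

        import numpy as np
        from scipy.stats import norm

        D, sigma = 1.0, 0.2            # cliff position and phenotype SD

        def mean_fitness(mu):
            """Expected fitness of a normal phenotype distribution with mean mu."""
            xs = np.linspace(mu - 6 * sigma, mu + 6 * sigma, 4001)
            f = np.where(xs <= D, xs, 0.0)          # cliff-edged fitness
            return np.trapz(f * norm.pdf(xs, mu, sigma), xs)

        mus = np.linspace(0.4, 1.2, 161)
        best = mus[np.argmax([mean_fitness(m) for m in mus])]
        print("optimal mean:", round(best, 3),
              "fraction beyond cliff:", round(1 - norm.cdf(D, best, sigma), 3))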

  16. Developing a conceptual model for selecting and evaluating online markets

    Directory of Open Access Journals (Sweden)

    Sadegh Feizollahi

    2013-04-01

    Full Text Available There is much evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy customers' explicit and implicit requirements. Internet shopping is a concept that developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically in the realm of the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivating and informing. Moreover, with the development of new technologies, credit and financial exchange facilities were built into internet websites to facilitate e-commerce. The proposed study sent a total of 200 questionnaires to the target group (teachers, students, professionals and managers of commercial web sites) and managed to collect 130 questionnaires for final evaluation. Cronbach's alpha test is used for measuring reliability; to evaluate the validity of the measurement instruments (questionnaires) and to assure construct validity, confirmatory factor analysis is employed. In addition, in order to analyze the research questions based on the path analysis method and to determine market selection models, a regular technique is implemented. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online markets in Iran. These findings provide a consistent, targeted and holistic framework for the development of the internet market in the country.

  17. Ensemble Prediction Model with Expert Selection for Electricity Price Forecasting

    Directory of Open Access Journals (Sweden)

    Bijay Neupane

    2017-01-01

    Full Text Available Forecasting of electricity prices is important in deregulated electricity markets for all of the stakeholders: energy wholesalers, traders, retailers and consumers. Electricity price forecasting is an inherently difficult problem due to its special characteristics of dynamicity and non-stationarity. In this paper, we present a robust price forecasting mechanism that shows resilience towards the aggregate demand response effect and provides highly accurate forecasted electricity prices to the stakeholders in a dynamic environment. We employ an ensemble prediction model in which a group of different algorithms participates in forecasting the price 1 h ahead for each hour of a day. We propose two different strategies, namely, the Fixed Weight Method (FWM) and the Varying Weight Method (VWM), for selecting each hour's expert algorithm from the set of participating algorithms. In addition, we utilize a carefully engineered set of features selected from a pool of features extracted from the past electricity price data, weather data and calendar data. The proposed ensemble model offers better results than the Autoregressive Integrated Moving Average (ARIMA) method, the Pattern Sequence-based Forecasting (PSF) method and our previous work using Artificial Neural Networks (ANN) alone on the datasets for the New York, Australian and Spanish electricity markets.
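
    A minimal sketch of the Fixed Weight Method idea as described: for each hour of the day, pick the forecaster with the lowest historical validation error and use it for that hour of the next day. The forecasters, error values and price forecasts below are placeholders.

        import numpy as np

        def pick_hourly_experts(val_errors):
            """val_errors: array (n_models, 24) of mean absolute validation
            errors. Returns, for each hour 0..23, the best model's index."""
            return np.argmin(val_errors, axis=0)

        def ensemble_forecast(model_preds, experts):
            """model_preds: array (n_models, 24) of next-day hourly forecasts."""
            return model_preds[experts, np.arange(24)]

        errors = np.random.default_rng(0).random((3, 24))      # placeholder errors
        preds = np.random.default_rng(1).random((3, 24)) * 50  # placeholder forecasts
        print(ensemble_forecast(preds, pick_hourly_experts(errors)))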

  18. A Network Analysis Model for Selecting Sustainable Technology

    Directory of Open Access Journals (Sweden)

    Sangsung Park

    2015-09-01

    Full Text Available Most companies develop technologies to improve their competitiveness in the marketplace. Typically, they then patent these technologies around the world in order to protect their intellectual property. Other companies may use patented technologies to develop new products, but must pay royalties to the patent holders or owners. Should they fail to do so, this can result in legal disputes in the form of patent infringement actions between companies. To avoid such situations, companies attempt to research and develop necessary technologies before their competitors do so. An important part of this process is analyzing existing patent documents in order to identify emerging technologies. In such analyses, extracting sustainable technology from patent data is important, because sustainable technology drives technological competition among companies and, thus, the development of new technologies. In addition, selecting sustainable technologies makes it possible to plan their R&D (research and development) efficiently. In this study, we propose a network model that can be used to select sustainable technology from patent documents, based on the centrality and degree measures of social network analysis. To verify the performance of the proposed model, we carry out a case study using actual patent data from patent databases.

  19. Multiaxial fatigue criterion based on parameters from torsion and axial S-N curve

    Directory of Open Access Journals (Sweden)

    M. Margetin

    2016-07-01

    Full Text Available Multiaxial high cycle fatigue is a topic that concerns nearly all industrial domains. In recent years, a great many recommendations on how to address problems with multiaxial fatigue lifetime estimation have been made, and huge progress in the field has been achieved. Until now, however, no universal criterion for multiaxial fatigue has been proposed. Addressing this situation, this paper offers the design of a new multiaxial criterion for high cycle fatigue. This criterion is based on a critical plane search. The damage parameter consists of a combination of the normal and shear stresses on the critical plane (which is the plane with the maximal shear stress amplitude). The material parameters used in the proposed criterion are obtained from the torsion and axial S-N curves. The proposed criterion correctly calculates the lifetime for the boundary loading conditions (pure torsion and pure axial loading). Application of the proposed model is demonstrated on biaxial loading, and the results are verified with a testing program using specimens made from S355 steel. Fatigue material parameters for the proposed criterion and multiple sets of data for different combinations of axial and torsional loading were obtained during the experiment.
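
    The sketch below shows a critical-plane evaluation of the kind described, under plane-stress assumptions: scan in-plane orientations, keep the plane with the maximal shear stress amplitude, and form a damage parameter tau_a + k*sigma_n,max. The constant k and the load history are illustrative; the paper's exact combination of normal and shear terms is not reproduced here.

        import numpy as np

        def critical_plane(sx, sy, txy, k=0.3):
            """sx, sy, txy: arrays of the plane-stress history components (MPa).
            Returns the critical plane angle and the damage parameter."""
            best = None
            for th in np.linspace(0.0, np.pi, 180, endpoint=False):
                c, s = np.cos(th), np.sin(th)
                sn = sx * c**2 + sy * s**2 + 2 * txy * s * c      # normal stress
                tau = -(sx - sy) * s * c + txy * (c**2 - s**2)    # shear stress
                tau_a = 0.5 * (tau.max() - tau.min())             # shear amplitude
                if best is None or tau_a > best[0]:
                    best = (tau_a, sn.max(), th)
            tau_a, sn_max, th = best
            return th, tau_a + k * sn_max                         # damage parameter

        t = np.linspace(0, 2 * np.pi, 200)
        print(critical_plane(100 * np.sin(t), np.zeros_like(t), 60 * np.sin(t + 1.0)))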

  20. Mutation-selection models of codon substitution and their use to estimate selective strengths on codon usage

    DEFF Research Database (Denmark)

    Yang, Ziheng; Nielsen, Rasmus

    2008-01-01

    Current models of codon substitution are formulated at the level of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we implement a few population genetics models of codon substitution that explicitly consider mutation bias and natural selection at the DNA level. Selection on codon usage is modeled by introducing codon-fitness parameters, which, together with mutation-bias parameters, predict optimal codon frequencies. These models are used to examine the null hypothesis that codon usage is due to mutation bias alone, not influenced by natural selection. Application of the test to the mammalian data led to rejection of the null hypothesis in most genes, suggesting that natural selection may be a driving force in the evolution of synonymous codon usage.

  1. Building a response criterion for pediatric multidisciplinary obesity intervention success based on combined benefits.

    Science.gov (United States)

    Nardo Junior, Nelson; Bianchini, Josiane Aparecida Alves; da Silva, Danilo Fernandes; Ferraro, Zachary M; Lopera, Carlos Andres; Antonini, Vanessa Drieli Seron

    2018-03-19

    To propose a response criterion for analyzing intervention success, by verifying patient outcomes after a multidisciplinary obesity treatment program in Brazilian children and adolescents. Obese children and adolescents (n = 103) completed a 16-week multidisciplinary intervention (IG) and were compared to a control group (CG) (n = 66). A cluster of parameters (e.g. total domain of HRQoL; BMI z-score; cardiorespiratory fitness; body mass; waist circumference; fat mass; lean mass) was measured pre and post-intervention, and the sum of the median percentage variation and the 25th and 75th percentiles from the IG and CG were used to determine responsiveness to the program. We propose four ranges in which children and adolescents may be classified after the intervention: (1) values at or below the CG 50th percentile are considered non-responsive to the intervention, (2) values greater than the CG 50th percentile but lower than the IG 50th percentile are considered slightly responsive, (3) values greater than the IG 50th percentile but lower than the IG 75th percentile are considered moderately responsive, and (4) values greater than the IG 75th percentile are considered very responsive. This criterion may serve as a complementary tool that can be employed to monitor the response to this model of multidisciplinary intervention. What is Known: • The effectiveness of multidisciplinary obesity interventions is usually determined by comparing changes in selected outcomes in the intervention versus the control group. • There is no consensus about what should be assessed before and after an intervention program, which makes it difficult to compare different programs and to determine their rate of responsiveness. What is New: • This study proposes a response criterion for pediatric obesity interventions that follow a model similar to ours, based on key variables, with low cost and high applicability in different settings.

  2. Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study.

    Science.gov (United States)

    Austin, Peter C

    2008-10-01

    Researchers have proposed using bootstrap resampling in conjunction with automated variable selection methods to identify predictors of an outcome and to develop parsimonious regression models. Using this method, multiple bootstrap samples are drawn from the original data set. Traditional backward variable elimination is used in each bootstrap sample, and the proportion of bootstrap samples in which each candidate variable is identified as an independent predictor of the outcome is determined. The performance of this method for identifying predictor variables has not been examined. Monte Carlo simulation methods were used to determine the ability of bootstrap model selection methods to correctly identify predictors of an outcome when those variables that are selected for inclusion in at least 50% of the bootstrap samples are included in the final regression model. We compared the performance of the bootstrap model selection method to that of conventional backward variable elimination. Bootstrap model selection tended to result in an approximately equal proportion of selected models matching the true regression model, compared with the use of conventional backward variable elimination. Bootstrap model selection performed comparably with backward variable elimination for identifying the true predictors of a binary outcome.
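
    A minimal sketch of the procedure studied: backward elimination (here a p-value-based OLS version, via statsmodels) repeated over bootstrap samples, keeping variables selected in at least 50% of them. The data are synthetic, with x0 and x1 as the true predictors.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n, p = 200, 6
        X = rng.normal(size=(n, p))
        y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)

        def backward_eliminate(X, y, cols, alpha=0.05):
            """Drop the least significant variable until all pass alpha."""
            cols = list(cols)
            while cols:
                fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
                worst = np.argmax(fit.pvalues[1:])       # skip the intercept
                if fit.pvalues[1 + worst] <= alpha:
                    break
                cols.pop(worst)
            return cols

        counts = np.zeros(p)
        B = 200
        for _ in range(B):
            idx = rng.integers(0, n, n)                  # bootstrap sample
            counts[backward_eliminate(X[idx], y[idx], range(p))] += 1

        print("selected:", np.where(counts / B >= 0.5)[0])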

  3. A CONCEPTUAL MODEL FOR IMPROVED PROJECT SELECTION AND PRIORITISATION

    Directory of Open Access Journals (Sweden)

    P. J. Viljoen

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Project portfolio management processes are often designed and operated as a series of stages (or project phases and gates. However, the flow of such a process is often slow, characterised by queues waiting for a gate decision and by repeated work from previous stages waiting for additional information or for re-processing. In this paper the authors propose a conceptual model that applies supply chain and constraint management principles to the project portfolio management process. An advantage of the proposed model is that it provides the ability to select and prioritise projects without undue changes to project schedules. This should result in faster flow through the system.

    AFRIKAANSE OPSOMMING: Processes for managing portfolios of projects are normally designed and operated as a series of phases and gates. The flow through such a process is often slow and is characterised by queues waiting for decisions at the gates, as well as by rework from previous phases waiting for further information or for reprocessing. In this article a conceptual model is proposed. The model rests on the principles of supply chains as well as constraint management, and offers the advantage that projects can be selected and prioritised without unnecessary changes to project schedules. This should lead to accelerated flow through the system.

  4. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors with possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle everything from imperfect statistical assumptions to even no a priori statistical assumptions. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential. (paper)

  5. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

    Full Text Available Abstract Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme, were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have
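
    A sketch of the first stage described (univariate ranking by the Akaike Information Criterion), using statsmodels logistic regressions on synthetic placeholder data; the correlation screening, bootstrap step-wise and Bayesian geo-statistical stages are not shown.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n, p = 300, 8
        X = rng.normal(size=(n, p))
        # Synthetic prevalence outcome driven by predictors x0 and x3
        prob = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 3])))
        y = (rng.random(n) < prob).astype(int)

        aics = []
        for j in range(p):
            fit = sm.Logit(y, sm.add_constant(X[:, [j]])).fit(disp=0)
            aics.append((fit.aic, j))

        for aic, j in sorted(aics):      # best (lowest AIC) first
            print(f"x{j}: AIC={aic:.1f}")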

  6. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high density metal parts with complex topology. However, part distortions and accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution of a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers across. In turn, the semi-analytical thermal model allows for a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced on the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
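
    As a building block for the superposition described, the sketch below evaluates the textbook conduction solution for an instantaneous line heat source, doubled by an image source under an assumed adiabatic free surface; the paper's complementary field correcting the true boundary conditions is not included, and the material values are roughly steel-like placeholders.

        import numpy as np

        k, rho, cp = 20.0, 7800.0, 500.0     # W/m/K, kg/m3, J/kg/K
        alpha = k / (rho * cp)               # thermal diffusivity, m2/s

        def line_source_dT(r, t, Q_line):
            """Temperature rise at in-plane distance r (m) and time t (s) after
            an instantaneous line source releasing Q_line J per metre, placed on
            an (assumed) adiabatic surface: 2x the infinite-medium solution."""
            return 2.0 * Q_line / (4.0 * np.pi * k * t) * np.exp(-r**2 / (4 * alpha * t))

        # Superpose several parallel scan vectors fired at different times
        scan_x = np.array([0.0, 1e-4, 2e-4])   # hatch spacing 100 um
        fire_t = np.array([0.0, 1e-3, 2e-3])   # 1 ms between vectors
        x, t_now = 1e-4, 2.5e-3
        print(sum(line_source_dT(abs(x - xs), t_now - ts, Q_line=200.0)
                  for xs, ts in zip(scan_x, fire_t)))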

  8. Modeling heat stress effect on Holstein cows under hot and dry conditions: selection tools.

    Science.gov (United States)

    Carabaño, M J; Bachagha, K; Ramón, M; Díaz, C

    2014-12-01

    component, a constant term that is not affected by temperature, representing from 64% of the variation for SCS to 91% of the variation for milk. The second component, showing a flat pattern at intermediate temperatures and increasing or decreasing slopes for the extremes, gathered 15, 11, and 24% of the variation for fat and protein yield and SCS, respectively. This component could be further evaluated as a selection criterion for heat tolerance independently of the production level. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  9. Discussion on verification criterion and method of human factors engineering for nuclear power plant controller

    International Nuclear Information System (INIS)

    Yang Hualong; Liu Yanzi; Jia Ming; Huang Weijun

    2014-01-01

    In order to prevent or reduce human error and ensure the safe operation of nuclear power plants, control devices should be verified from the perspective of human factors engineering (HFE). In this paper, the domestic and international human factors engineering guidelines on nuclear power plant controllers were considered, the verification criterion and method of human factors engineering for nuclear power plant controllers were discussed, and application examples were provided for reference. The results show that an appropriate verification criterion and method should be selected to ensure the objectivity and accuracy of the conclusion. (authors)

  10. Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos

    DEFF Research Database (Denmark)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    2010-01-01

    We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image, where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation...

  11. Multicriteria decision group model for the selection of suppliers

    Directory of Open Access Journals (Sweden)

    Luciana Hazin Alencar

    2008-08-01

    Full Text Available Several authors have been studying group decision making over the years, which indicates how relevant it is. This paper presents a multicriteria group decision model based on the ELECTRE IV and VIP Analysis methods, for those cases where there is great divergence among the decision makers. The model includes two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using this criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of the selection of suppliers in project management is used. The suppliers that form part of the project team have a crucial role in project management. They are involved in a network of connected activities that can jeopardize the success of the project if they are not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project on behalf of an enterprise in a way that meets the multiple objectives of the decision-makers.

  12. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves the understanding of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used identified the variables that appeared statistically less important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. CFS, in turn, evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
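    A minimal, hedged illustration of the filter-versus-embedded contrast on synthetic data follows (scikit-learn's mutual information score stands in for Information Gain, alongside Random Forest importances); the permafrost dataset itself is of course not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for the permafrost dataset: 25 predictors, binary
# presence/absence target; only a handful of features are informative.
X, y = make_classification(n_samples=1000, n_features=25, n_informative=5,
                           n_redundant=5, random_state=0)

# Filter ranking: mutual information between each predictor and the target
# (a continuous analogue of the Information Gain score used in the study).
mi = mutual_info_classif(X, y, random_state=0)

# Embedded ranking: Random Forest impurity-based feature importances.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

for name, scores in [("MI", mi), ("RF", rf.feature_importances_)]:
    top = np.argsort(scores)[::-1][:5]
    print(name, "top-5 features:", top)
```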

  13. Multiphysics modeling of selective laser sintering/melting

    Science.gov (United States)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net-shape parts with complicated geometries. In SLS/SLM, parts are built up layer by layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus, a computational model of this process would be highly valuable. In this work a three-dimensional, reduced-order, coupled discrete element - finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon
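    For intuition only, a one-dimensional explicit finite-difference sketch of laser heating through the thickness of a powder layer follows; the dissertation's model is three-dimensional and coupled to a discrete element method, and every parameter below is an assumed placeholder.

```python
import numpy as np

# Minimal 1D explicit finite-difference sketch of laser heating through the
# thickness of a powder layer on a substrate; all values are assumed.
L, n    = 200e-6, 101          # domain depth (m), grid points
dz      = L / (n - 1)
alpha   = 1e-6                 # effective thermal diffusivity, m^2/s
k       = 1.0                  # effective conductivity, W/(m K)
q_laser = 5e6                  # absorbed surface flux, W/m^2
dt      = 0.4 * dz**2 / alpha  # stable explicit step (< 0.5*dz^2/alpha)

T = np.full(n, 300.0)          # initial temperature, K
for _ in range(2000):
    Tn = T.copy()
    # interior nodes: explicit update of dT/dt = alpha * d2T/dz2
    Tn[1:-1] = T[1:-1] + alpha * dt / dz**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    # top surface: absorbed laser flux enters via a ghost-node condition
    Tn[0] = T[0] + alpha * dt / dz**2 * (2*T[1] - 2*T[0] + 2*dz*q_laser/k)
    # bottom: substrate held at ambient temperature
    Tn[-1] = 300.0
    T = Tn
print("peak surface temperature: %.0f K" % T[0])
```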

  14. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
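    The core fmin/space/algo pattern described in the paper looks like this in practice (a toy one-dimensional objective stands in for an expensive training run):

```python
from hyperopt import fmin, tpe, hp, Trials

# Toy usage of Hyperopt's core API: minimize a function of one hyperparameter
# with the Tree-of-Parzen-Estimators (TPE) algorithm described in the paper.
def objective(x):
    return (x - 1.0) ** 2

trials = Trials()
best = fmin(fn=objective,
            space=hp.uniform('x', -5.0, 5.0),  # search space definition
            algo=tpe.suggest,                  # sequential model-based search
            max_evals=100,
            trials=trials)                     # keeps per-evaluation results
print(best)   # e.g. {'x': 1.002...}
```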

  15. Estimating a dynamic model of sex selection in China.

    Science.gov (United States)

    Ebenstein, Avraham

    2011-05-01

    High ratios of males to females in China, which have historically concerned researchers (Sen 1990), have increased in the wake of China's one-child policy, which began in 1979. Chinese policymakers are currently attempting to correct the imbalance in the sex ratio through initiatives that provide financial compensation to parents with daughters. Other scholars have advocated a relaxation of the one-child policy to allow more parents to have a son without engaging in sex selection. In this article, I present a model of fertility choice when parents have access to a sex-selection technology and face a mandated fertility limit. By exploiting variation in fines levied in China for unsanctioned births, I estimate the relative price of a son and daughter for mothers observed in China's census data (1982-2000). I find that a couple's first son is worth 1.42 years of income more than a first daughter, and the premium is highest among less-educated mothers and families engaged in agriculture. Simulations indicate that a subsidy of 1 year of income to families without a son would reduce the number of "missing girls" by 67% but impose an annual cost of 1.8% of Chinese gross domestic product (GDP). Alternatively, a three-child policy would reduce the number of "missing girls" by 56% but increase the fertility rate by 35%.

  16. A new multiobjective performance criterion used in PID tuning optimization algorithms.

    Science.gov (United States)

    Sahib, Mouayad A; Ahmed, Bestoun S

    2016-01-01

    In PID controller design, an optimization algorithm is commonly employed to search for the optimal controller parameters. The optimization algorithm is based on a specific performance criterion which is defined by an objective or cost function. To this end, different objective functions have been proposed in the literature to optimize the response of the controlled system. These functions include numerous weighted time and frequency domain variables. However, for an optimum desired response it is difficult to select the appropriate objective function or identify the best weight values required to optimize the PID controller design. This paper presents a new time domain performance criterion based on the multiobjective Pareto front solutions. The proposed objective function is tested in the PID controller design for an automatic voltage regulator system (AVR) application using particle swarm optimization algorithm. Simulation results show that the proposed performance criterion can highly improve the PID tuning optimization in comparison with traditional objective functions.
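    As a hedged sketch of what a weighted time-domain criterion computes (a plain weighted sum here, not the paper's Pareto-front construction), consider scoring a closed-loop step response by overshoot, settling time and steady-state error:

```python
import numpy as np

def step_response_cost(t, y, w=(1.0, 1.0, 1.0)):
    """Generic weighted time-domain cost of a unit-step response y(t):
    overshoot, 2%-settling time, and steady-state error. A plain weighted
    sum stand-in, not the paper's Pareto-front-based criterion."""
    overshoot = max(0.0, y.max() - 1.0)
    ess = abs(1.0 - y[-1])
    outside = np.abs(y - 1.0) > 0.02
    t_settle = t[outside][-1] if outside.any() else t[0]
    return w[0]*overshoot + w[1]*t_settle + w[2]*ess

# Example: response of an (assumed) underdamped second-order closed loop.
t = np.linspace(0.0, 10.0, 2001)
zeta, wn = 0.4, 2.0
wd = wn * np.sqrt(1 - zeta**2)
y = 1 - np.exp(-zeta*wn*t) * (np.cos(wd*t)
                              + zeta/np.sqrt(1 - zeta**2) * np.sin(wd*t))
print(step_response_cost(t, y))
```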

  17. Selection Ideal Coal Suppliers of Thermal Power Plants Using the Matter-Element Extension Model with Integrated Empowerment Method for Sustainability

    Directory of Open Access Journals (Sweden)

    Zhongfu Tan

    2014-01-01

    Full Text Available In order to reduce thermal power generation cost and improve its market competitiveness, and considering fuel quality, cost, creditworthiness, and sustainable development capacity factors, this paper establishes an evaluation system for coal supplier selection for thermal power plants and puts forward coal supplier selection strategies based on integrated empowering and ideal matter-element extension models. On the one hand, the integrated empowering model can overcome the limitations of purely subjective or objective weighting methods and better balance subjective and objective information. On the other hand, since the evaluation results of the traditional matter-element extension model may fall into the same class and yield only a partial ordering, an idealistic matter-element extension model is constructed to overcome this shortcoming. It selects the ideal positive and negative matter-element classical fields, uses the closeness degree in place of the traditional maximum degree of membership criterion, and calculates the positive or negative distance between the matter-element to be evaluated and the ideal matter-element; it can then obtain a full ordering of the evaluation schemes. Simulated and compared with the TOPSIS method, Romania selection method, and PROMETHEE method, the numerical example results show that the method put forward in this paper is effective and reliable.

  18. Model catalysis by size-selected cluster deposition

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Scott [Univ. of Utah, Salt Lake City, UT (United States)

    2015-11-20

    This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.

  19. Analytical Modelling Of Milling For Tool Design And Selection

    Science.gov (United States)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-05-01

    This paper presents an efficient analytical model which allows the simulation of a wide range of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach to oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  20. Analytical Modelling Of Milling For Tool Design And Selection

    International Nuclear Information System (INIS)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-01-01

    This paper presents an efficient analytical model which allows the simulation of a wide range of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach to oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  1. A discussion on the time criterion of the 50m radio-telescope

    Science.gov (United States)

    Ni, G. R.; Xu, L. P.; He, K. Y.

    2006-07-01

    The influence of the time-frequency properties of atomic clocks on time-keeping error is analyzed quantitatively, and the error in related physical measurements caused by the time resolution of the atomic time-frequency criterion is estimated. A comparison table of the performance indices of modern practical atomic clocks is given, and the indices for transferring and comparing standard time-frequency signals by modern radio means are also listed. The important role of high-precision time-frequency criteria in the exploration of the earth and space, in VLBI, and in the timing of millisecond pulsars is also discussed. The high-precision requirements and selection principles that the Chang'E-I lunar exploration mission imposes on the time criterion, and the scientific aims of the 50 m radio telescope, are set forth. In order to achieve these scientific aims and tasks, a time-frequency criterion corresponding to the research goals and of a suitably high standard must be set up as soon as possible, so that high-quality data and high-efficiency research achievements can be attained. The question of which time criterion should be set up is discussed, and preliminary suggestions are put forward.

  2. Selection of hydrologic modeling approaches for climate change assessment: A comparison of model scale and structures

    Science.gov (United States)

    Surfleet, Christopher G.; Tullos, Desirèe; Chang, Heejun; Jung, Il-Won

    2012-09-01

    SummaryA wide variety of approaches to hydrologic (rainfall-runoff) modeling of river basins confounds our ability to select, develop, and interpret models, particularly in the evaluation of prediction uncertainty associated with climate change assessment. To inform the model selection process, we characterized and compared three structurally-distinct approaches and spatial scales of parameterization to modeling catchment hydrology: a large-scale approach (using the VIC model; 671,000 km2 area), a basin-scale approach (using the PRMS model; 29,700 km2 area), and a site-specific approach (the GSFLOW model; 4700 km2 area) forced by the same future climate estimates. For each approach, we present measures of fit to historic observations and predictions of future response, as well as estimates of model parameter uncertainty, when available. While the site-specific approach generally had the best fit to historic measurements, the performance of the model approaches varied. The site-specific approach generated the best fit at unregulated sites, the large scale approach performed best just downstream of flood control projects, and model performance varied at the farthest downstream sites where streamflow regulation is mitigated to some extent by unregulated tributaries and water diversions. These results illustrate how selection of a modeling approach and interpretation of climate change projections require (a) appropriate parameterization of the models for climate and hydrologic processes governing runoff generation in the area under study, (b) understanding and justifying the assumptions and limitations of the model, and (c) estimates of uncertainty associated with the modeling approach.

  3. Evaluating experimental design for soil-plant model selection with Bayesian model averaging

    Science.gov (United States)

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang; Gayler, Sebastian

    2013-04-01

    The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), the model weights in BMA are perceived as uncertain quantities with assigned probability distributions that narrow down as more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. The models were then conditioned on field measurements of soil moisture, leaf-area index (LAI), and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at the Nellingen site in Southwestern Germany. Following our new method, we derived the BMA model weights (and their distributions) when using all data or different subsets thereof. We discuss to which degree the posterior BMA mean outperformed the prior BMA
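    A common shortcut for the BMA weights themselves (the study goes further and treats the weights as uncertain quantities with full distributions) approximates each model's marginal likelihood via BIC; the BIC values below are hypothetical:

```python
import numpy as np

def bma_weights_from_bic(bic):
    """Approximate Bayesian Model Averaging weights from BIC values,
    w_k proportional to exp(-0.5 * deltaBIC_k), assuming equal prior
    model probabilities. (An illustrative shortcut only.)"""
    bic = np.asarray(bic, dtype=float)
    rel = np.exp(-0.5 * (bic - bic.min()))
    return rel / rel.sum()

# Hypothetical BIC values for four crop models (e.g. CERES, SUCROS,
# GECROS, SPASS) conditioned on the same field data:
print(bma_weights_from_bic([412.3, 409.8, 415.0, 420.6]))
```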

  4. Developing a Green Supplier Selection Model by Using the DANP with VIKOR

    Directory of Open Access Journals (Sweden)

    Tsai Chi Kuo

    2015-02-01

    Full Text Available This study proposes a novel hybrid multiple-criteria decision-making (MCDM) method to evaluate green suppliers in an electronics company. Seventeen criteria in two dimensions concerning environmental and management systems were identified under the Code of Conduct of the Electronic Industry Citizenship Coalition (EICC). Following this, the Decision-Making Trial and Evaluation Laboratory (DEMATEL) combined with the Analytic Network Process (ANP), known as DANP, was used to determine both the importance of the evaluation criteria in selecting suppliers and the causal relationships between them. Finally, the VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method was used to evaluate the environmental performances of suppliers and to obtain a solution under each evaluation criterion. An illustrative example of an electronics company is presented to demonstrate how to select green suppliers.
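    For reference, the final VIKOR ranking step can be sketched in a few lines; the supplier scores and weights below are hypothetical, and all criteria are treated as benefit criteria:

```python
import numpy as np

def vikor(F, w, v=0.5):
    """Plain VIKOR ranking sketch (benefit criteria only, assumed weights).
    F: m x n decision matrix (rows = suppliers, cols = criteria); w: weights;
    v trades off group utility against individual regret."""
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    d = w * (f_best - F) / (f_best - f_worst)   # weighted normalized distances
    S, R = d.sum(axis=1), d.max(axis=1)         # group utility / max regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return np.argsort(Q)                        # lower Q = better compromise

# Hypothetical scores of four suppliers on three benefit criteria:
F = np.array([[7., 8., 6.], [9., 6., 7.], [6., 9., 8.], [8., 7., 9.]])
print(vikor(F, w=np.array([0.5, 0.3, 0.2])))
```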

  5. Hierarchical models in ecology: confidence intervals, hypothesis testing, and model selection using data cloning.

    Science.gov (United States)

    Ponciano, José Miguel; Taper, Mark L; Dennis, Brian; Lele, Subhash R

    2009-02-01

    Hierarchical statistical models are increasingly being used to describe complex ecological processes. The data cloning (DC) method is a new general technique that uses Markov chain Monte Carlo (MCMC) algorithms to compute maximum likelihood (ML) estimates along with their asymptotic variance estimates for hierarchical models. Despite its generality, the method has two inferential limitations. First, it only provides Wald-type confidence intervals, known to be inaccurate in small samples. Second, it only yields ML parameter estimates, but not the maximized likelihood values used for profile likelihood intervals, likelihood ratio hypothesis tests, and information-theoretic model selection. Here we describe how to overcome these inferential limitations with a computationally efficient method for calculating likelihood ratios via data cloning. The ability to calculate likelihood ratios allows one to do hypothesis tests, construct accurate confidence intervals and undertake information-based model selection with hierarchical models in a frequentist context. To demonstrate the use of these tools with complex ecological models, we reanalyze part of Gause's classic Paramecium data with state-space population models containing both environmental noise and sampling error. The analysis results include improved confidence intervals for parameters, a hypothesis test of laboratory replication, and a comparison of the Beverton-Holt and the Ricker growth forms based on a model selection index.

  6. A criterion for separating process calculi

    Directory of Open Access Journals (Sweden)

    Federico Banti

    2010-11-01

    Full Text Available We introduce a new criterion, replacement freeness, to discern the relative expressiveness of process calculi. Intuitively, a calculus is strongly replacement free if replacing, within an enclosing context, a process that cannot perform any visible action by an arbitrary process never inhibits the capability of the resulting process to perform a visible action. We prove that there exists no compositional and interaction sensitive encoding of a not strongly replacement free calculus into any strongly replacement free one. We then define a weaker version of replacement freeness, by only considering replacement of closed processes, and prove that, if we additionally require the encoding to preserve name independence, it is not even possible to encode a non replacement free calculus into a weakly replacement free one. As a consequence of our encodability results, we get that many calculi equipped with priority are not replacement free and hence are not encodable into mainstream calculi like CCS and pi-calculus, that instead are strongly replacement free. We also prove that variants of pi-calculus with match among names, pattern matching or polyadic synchronization are only weakly replacement free, hence they are separated both from process calculi with priority and from mainstream calculi.

  7. Model Selection in the Analysis of Photoproduction Data

    Science.gov (United States)

    Landay, Justin

    2017-01-01

    Scattering experiments provide one of the most powerful and useful tools for probing matter to better understand its fundamental properties governed by the strong interaction. As the spectroscopy of the excited states of nucleons enters a new era of precision ushered in by improved experiments at Jefferson Lab and other facilities around the world, traditional partial-wave analysis methods must be adjusted accordingly. In this poster, we present a rigorous set of statistical tools and techniques that we implemented; most notably, the LASSO method, which serves to select the simplest model and allows us to avoid overfitting. In the case of establishing the spectrum of excited baryons, it avoids overpopulation of the spectrum and thus the occurrence of false positives. This is a prerequisite for reliably comparing theories like lattice QCD or quark models to experiments. Here, we demonstrate the principle by simultaneously fitting three observables in neutral pion photo-production, such as the differential cross section, beam asymmetry and target polarization, across thousands of data points. Other authors include Michael Doring, Bin Hu, and Raquel Molina.
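    The model-selection role of the LASSO can be illustrated on synthetic data: the L1 penalty drives superfluous couplings to exactly zero, retaining the simplest model compatible with the data (a generic sketch, not the poster's photoproduction fit):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic regression with 20 candidate terms, of which only 4 are real;
# the L1 penalty should zero out the rest.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
beta = np.zeros(20)
beta[:4] = [1.5, -2.0, 0.8, 1.1]               # the true, sparse model
y = X @ beta + 0.1 * rng.normal(size=500)

fit = LassoCV(cv=5).fit(X, y)                  # penalty chosen by CV
print("selected terms:", np.flatnonzero(fit.coef_ != 0.0))
```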

  8. Mesoscale model to select the ideal location for new vineyard plantations in the Rioja qualified denomination of origin.

    Science.gov (United States)

    Martínez-Cámara, E; Blanco, J; Jiménez, E; Saenz-Díez, J C; Rioja, J

    2014-01-01

    La Rioja is the region where the top-rated wines from Spain come from and the origin of one of the most prestigious wines in the world. It is recognized worldwide not only for the quality of the vine, but also for the many factors involved in the process that are controllable by the farmer, such as fertilizers, irrigation, etc. Likewise, there are other key factors which cannot be controlled but nevertheless play a crucial role in the quality of the wine, such as temperature, radiation, humidity, and rainfall. This research focuses on two of these factors: temperature and irradiation. The objective of this paper is to characterize these factors so as to provide a sound decision criterion when selecting the best location for new vineyard plantations. To achieve this objective, the MM5 mesoscale model is used, and its performance is assessed and compared using different parameters, from the grid resolution to the physical parameterization of the model. Finally, the study evaluates the impact of the different parameterizations and options on the simulation of meteorological variables particularly relevant when choosing new vineyard sites (rainfall frequency, temperature, and sun exposure).

  9. Mesoscale Model to Select the Ideal Location for New Vineyard Plantations in the Rioja Qualified Denomination of Origin

    Directory of Open Access Journals (Sweden)

    E. Martínez-Cámara

    2014-01-01

    Full Text Available La Rioja is the region where the top-rated wines from Spain come from and the origin of one of the most prestigious wines in the world. It is recognized worldwide not only for the quality of the vine, but also for the many factors involved in the process that are controllable by the farmer, such as fertilizers, irrigation, etc. Likewise, there are other key factors which cannot be controlled but nevertheless play a crucial role in the quality of the wine, such as temperature, radiation, humidity, and rainfall. This research focuses on two of these factors: temperature and irradiation. The objective of this paper is to characterize these factors so as to provide a sound decision criterion when selecting the best location for new vineyard plantations. To achieve this objective, the MM5 mesoscale model is used, and its performance is assessed and compared using different parameters, from the grid resolution to the physical parameterization of the model. Finally, the study evaluates the impact of the different parameterizations and options on the simulation of meteorological variables particularly relevant when choosing new vineyard sites (rainfall frequency, temperature, and sun exposure).

  10. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate those uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification

  11. QV modal distance displacement - a criterion for contingency ranking

    Energy Technology Data Exchange (ETDEWEB)

    Rios, M.A.; Sanchez, J.L.; Zapata, C.J. [Universidad de Los Andes (Colombia). Dept. of Electrical Engineering], Emails: mrios@uniandes.edu.co, josesan@uniandes.edu.co, cjzapata@utp.edu.co

    2009-07-01

    This paper proposes a new methodology based on concepts from fast decoupled load flow, modal analysis and contingency ranking, in which the impact of each contingency is measured hourly, taking into account its influence on the mathematical model of the system, i.e. the Jacobian matrix. The method computes the displacement of the eigenvalues of the reduced Jacobian matrix used in voltage stability analysis as a contingency-ranking criterion, considering the fact that the lowest eigenvalue in the normal operating condition is not the same lowest eigenvalue in the N-1 contingency condition. This is done using all branches in the system and specific branches selected according to the IBPF index. The test system used is the IEEE 118-node system. (author)
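    The ranking quantity itself reduces to an eigenvalue computation; the sketch below compares the smallest eigenvalue of a (hypothetical) reduced Jacobian before and after a contingency, with the load-flow step that produces these matrices omitted:

```python
import numpy as np

def modal_displacement(J_base, J_cont):
    """Displacement of the smallest eigenvalue of the reduced Jacobian
    between the base case and an N-1 contingency. The reduced Jacobians
    would come from a fast decoupled load flow, not reproduced here."""
    lam0 = np.sort(np.linalg.eigvals(J_base).real)[0]
    lam1 = np.sort(np.linalg.eigvals(J_cont).real)[0]
    return lam0 - lam1   # larger displacement = more severe contingency

# Hypothetical 3x3 reduced Jacobians for illustration:
J_base = np.array([[4.0, -1.0, 0.0], [-1.0, 3.5, -0.5], [0.0, -0.5, 2.8]])
J_cont = J_base.copy()
J_cont[2, 2] = 1.9       # a weakened branch lowers the diagonal entry
print(modal_displacement(J_base, J_cont))
```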

  12. A criterion and mechanism for power ramp defects

    International Nuclear Information System (INIS)

    Garlick, A.; Gravenor, J.G.

    1978-02-01

    The problem of power ramp defects in water reactor fuel pins is discussed in relation to results recently obtained from ramp experiments in the Steam Generating Heavy Water Reactor. Cladding cracks in the defected fuel pins were similar, both macro- and microstructurally, to those in unirradiated Zircaloy exposed to iodine stress-corrosion cracking (scc) conditions. Furthermore, when the measured stress levels for scc in short-term tests were taken as a criterion for ramp defects, UK fuel modelling codes were found to give a useful indication of defect probability under reactor service conditions. The likelihood of sticking between fuel and cladding is discussed, and evidence is presented which suggests that even at power a degree of adhesion may be expected in some fuel pins. The ramp defect mechanism is discussed in terms of fission product scc, initiation being by intergranular penetration and propagation by cleavage when suitably orientated grains are exposed to large dilatational stresses ahead of the main crack. (author)

  13. A new cavability assessment criterion for longwall top coal caving

    Energy Technology Data Exchange (ETDEWEB)

    Vakili, A.; Hebblewhite, B.K. [University of New South Wales, Sydney, NSW (Australia)

    2010-12-15

    This paper describes the main results of a project aimed at developing a new cavability assessment criterion for top coal and at improving the overall understanding of the caving mechanism in Longwall Top Coal Caving (LTCC) technology. The research methodology for this study incorporated a combination of analytical, observational and empirical engineering methods. The two major outcomes of the study were an improved understanding of the caving mechanics and a new cavability assessment system, the Top-Coal Cavability Rating (TCCR). New conceptual models were introduced for a better understanding of the top coal caving mechanism. The results of the conceptual investigations suggest that six major parameters can influence the cavability of a typical coal seam: (1) deformation modulus; (2) vertical pre-mining stress; (3) sub-horizontal pre-mining stress; (4) seam thickness; (5) spacing of sub-horizontal joints; and (6) spacing of sub-vertical joints. The applicability of the TCCR system was investigated by back-analysing the cavability in earlier LTCC practice.

  14. Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.

    Science.gov (United States)

    Velichkin, Vladimir A.; Zavyalov, Vladimir A.

    2018-03-01

    This article presents the results of an analysis of the control of thermal objects (heat exchanger, dryer, heat treatment chamber, etc.). The results were used to determine a mathematical model of the generalized thermal control object. An appropriate optimality criterion was chosen to make the control more energy-efficient. The mathematical programming task was formulated based on the chosen optimality criterion, the control object's mathematical model and the technological constraints. The “maximum energy efficiency” criterion made it possible to avoid solving a system of nonlinear differential equations and to solve the formulated mathematical programming problem analytically. It should be noted that in the case under review the search for the optimal control and the optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.

  15. A simple model of group selection that cannot be analyzed with inclusive fitness

    NARCIS (Netherlands)

    van Veelen, M.; Luo, S.; Simon, B.

    2014-01-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models,

  16. Jet pairing algorithm for the 6-jet Higgs channel via energy chi-square criterion

    International Nuclear Information System (INIS)

    Magallanes, J.B.; Arogancia, D.C.; Gooc, H.C.; Vicente, I.C.M.; Bacala, A.M.; Miyamoto, A.; Fujii, K.

    2002-01-01

    The study and discovery of Higgs bosons at the JLC (Joint Linear Collider) is one of the tasks of the ACFA (Asian Committee for Future Accelerators)-JLC Group. The mode of Higgs production at the JLC is e + e - → Z 0 H 0 . In this paper, studies are concentrated on the Higgsstrahlung process and the selection of its signals by finding the right jet-pairing algorithm for the 6-jet final state at 300 GeV, assuming that the Higgs boson mass is 120 GeV and the luminosity is 500 fb -1 . The total decay width Γ (H 0 → all) and the efficiency of the signals at the JLC are studied utilizing the 6-jet channel. Out of the 91,500 Higgsstrahlung events, 4,174 6-jet events are selected. The PYTHIA Monte Carlo generator generates the 6-jet Higgsstrahlung channel according to the Standard Model. The generated events are then simulated by the Quick Simulator using the JLC parameters. After tagging all 6 quarks which correspond to the 6-jet final state of the Higgsstrahlung, the mean energies of the Z, H, and W's are obtained. Having calculated this information, the event energy chi-square is defined, and it is found that the correct combination generally has a smaller value. This criterion can be used to find the correct jet pairing and as one of the cuts against background signals later on. Other chi-square definitions are also proposed. (S. Funahashi)
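    A schematic version of the pairing search is shown below: two jets are assigned to the Z, the remaining four are split into two W candidates, and the grouping with the smallest energy chi-square wins. The expected energies and resolutions are placeholders, not the values derived from the simulation.

```python
import itertools
import numpy as np

# Assumed mean energies and resolutions (GeV) for the Z and the two W
# candidates from H -> WW*; placeholders, not the simulated values.
E_EXP = {'Z': 110.0, 'W1': 105.0, 'W2': 85.0}
SIGMA = {'Z': 10.0, 'W1': 12.0, 'W2': 12.0}

def best_pairing(jet_energies):
    """Enumerate all groupings of 6 jets into Z + W1 + W2 pairs and return
    the grouping minimizing the energy chi-square."""
    jets = list(range(6))
    best = (np.inf, None)
    for z in itertools.combinations(jets, 2):
        rest = [j for j in jets if j not in z]
        for w1 in itertools.combinations(rest, 2):
            w2 = tuple(j for j in rest if j not in w1)
            e = {'Z': sum(jet_energies[j] for j in z),
                 'W1': sum(jet_energies[j] for j in w1),
                 'W2': sum(jet_energies[j] for j in w2)}
            chi2 = sum(((e[k] - E_EXP[k]) / SIGMA[k]) ** 2 for k in E_EXP)
            best = min(best, (chi2, (z, w1, w2)))
    return best

print(best_pairing([60., 52., 58., 49., 47., 38.]))   # toy jet energies
```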

  17. Heat transfer modelling and stability analysis of selective laser melting

    International Nuclear Information System (INIS)

    Gusarov, A.V.; Yadroitsev, I.; Bertrand, Ph.; Smurov, I.

    2007-01-01

    The process of direct manufacturing by selective laser melting basically consists of laser beam scanning over a thin powder layer deposited on a dense substrate. Complete remelting of the powder in the scanned zone and its good adhesion to the substrate ensure obtaining functional parts with improved mechanical properties. Experiments with single-line scanning indicate that an interval of scanning velocities exists where the remelted tracks are uniform. The tracks become broken if the scanning velocity is outside this interval. This is extremely undesirable and is referred to as the 'balling' effect. A numerical model of coupled radiation and heat transfer is proposed to analyse the observed instability. The 'balling' effect at high scanning velocities (above ∼20 cm/s for the present conditions) can be explained by the Plateau-Rayleigh capillary instability of the melt pool. Two factors stabilize the process with decreasing scanning velocity: reducing the length-to-width ratio of the melt pool and increasing the width of its contact with the substrate.

  18. Condorcet versus participation criterion in social welfare rules

    NARCIS (Netherlands)

    Can, Burak; Ergin, Emre; Pourpouneh, Mohsen

    2017-01-01

    Moulin (1988) shows that there exists no social choice rule that satisfies the following two criteria at the same time: the Condorcet criterion and the participation criterion, a.k.a. the No Show Paradox. We extend these criteria to social welfare rules, i.e., rules that choose rankings for each

  19. Some necessary and sufficient conditions for Hypercyclicity Criterion

    Indian Academy of Sciences (India)


    shifts in terms of their weights. But then Montes and León showed that these hypercyclic operators do satisfy the criterion as well (§2 of [17] and Proposition 4.3 of [18]). Bès and Peris proved that a continuous linear operator T on a Fréchet space satisfies the Hypercyclicity Criterion if and only if it is hereditarily hypercyclic.

  20. Some necessary and sufficient conditions for Hypercyclicity Criterion

    Indian Academy of Sciences (India)


    We give necessary and sufficient conditions for an operator on a separable Hilbert space to satisfy the hypercyclicity criterion. Keywords. Strong operator topology; Hilbert–Schmidt operators; Hypercyclicity Criterion. 1. Introduction. Suppose that X is a separable topological vector space and T is a continuous linear mapping.

  1. A Random Strategy Criterion for Validity of Simulation Game Participation.

    Science.gov (United States)

    Dickinson, John R.; Faria, A. J.

    1997-01-01

    Proposes a new approach (the random strategy criterion) for measuring the internal validity of simulation game participation that offers a more logical conceptual foundation than past research approaches. Results of classroom testing with 660 undergraduate marketing students support the use of the random strategy criterion for measuring internal…

  2. Frequency-domain criterion for the chaos synchronization of time ...

    Indian Academy of Sciences (India)

    This paper studies the global synchronization of non-autonomous, time-delay, chaotic power systems via linear state-error feedback control. The frequency domain criterion and the LMI criterion are proposed and applied to design the coupling matrix. Some algebraic criteria via a single-variable linear coupling are derived ...

  3. Modified Schur-Cohn Criterion for Stability of Delayed Systems

    Directory of Open Access Journals (Sweden)

    Juan Ignacio Mulero-Martínez

    2015-01-01

    Full Text Available A modified Schur-Cohn criterion for time-delay linear time-invariant systems is derived. The classical Schur-Cohn criterion has two main drawbacks: namely, (i) the dimension of the Schur-Cohn matrix generates round-off errors eventually resulting in a polynomial of s with erroneous coefficients, and (ii) imaginary roots are very hard to detect when numerical errors creep in. In contrast to the classical Schur-Cohn criterion, an alternative approach is proposed in this paper which is based on the application of triangular matrices over a polynomial ring, in a similar way as in the Jury test of stability for discrete systems. The advantages of the proposed approach are that it halves the dimension of the polynomial and it only requires seeking real roots, making this modified criterion comparable to the Rekasius substitution criterion.

  4. On PID Controller Design by Combining Pole Placement Technique with Symmetrical Optimum Criterion

    Directory of Open Access Journals (Sweden)

    Viorel Nicolau

    2013-01-01

    Full Text Available In this paper, aspects of the analytical design of PID controllers are studied by combining the pole placement technique with the symmetrical optimum criterion. The proposed method is based on a low-order plant model with a pure integrator, and it can be used for both fast and slow processes. Starting from the desired closed-loop transfer function, which contains a second-order oscillating system and a lead-lag compensator, it is shown that the zero value depends on the real-pole value of the closed-loop transfer function. In addition, there is only one pole value which satisfies the assumptions of the symmetrical optimum criterion imposed on the open-loop transfer function. Under these conditions, by combining the pole placement technique with the symmetrical optimum criterion, the analytical expressions for the controller parameters can be simplified. For simulations, PID autopilot design for the heading control problem of a conventional ship is considered.
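    For the plant structure assumed in the paper, K/(s(1+sT)), the classical symmetrical optimum rule (with the customary design factor a = 2) gives a compact PI tuning; this is the textbook rule only, not the paper's combined pole-placement procedure:

```python
# Classical symmetrical optimum tuning for a PI controller and the
# low-order plant G(s) = K / (s * (1 + s*T)); with a = 2 this reduces
# to Ti = 4*T and Kp = 1 / (2*K*T). Textbook rule, offered as a sketch.
def symmetrical_optimum_pi(K, T, a=2.0):
    Ti = a**2 * T            # integral time places the zero below crossover
    Kp = 1.0 / (a * K * T)   # gain sets the crossover near 1/(a*T)
    return Kp, Ti

print(symmetrical_optimum_pi(K=0.8, T=0.05))   # hypothetical plant values
```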

  5. A criterion for the recalculation of shape factors in reactor kinetics problems

    International Nuclear Information System (INIS)

    Kamelander, G.

    1983-01-01

    One of the best known methods of solving the neutron kinetics equations is the factorization method, which consists of splitting the neutron flux into an amplitude factor P(t) and a shape factor PSI(r,t). This shape factor is approximated by the solution of the stationary form of the neutron diffusion equation. If the flux shape changes significantly during the course of the transient, PSI(r,t) must be recalculated. The present paper gives a qualitative criterion for the recalculation of the shape factor. The physical model of the code TRANS-II, which uses this criterion, is presented. The results of two transient calculations are given. It is shown that this criterion provides a reliable tool for optimizing the factorization method. (orig.)

  6. A risk-based microbiological criterion that uses the relative risk as the critical limit

    DEFF Research Database (Denmark)

    Andersen, Jens Kirk; Nørrung, Birgit; da Costa Alves Machado, Simone

    2015-01-01

    A risk-based microbiological criterion is described that is based on the relative risk associated with the analytical results of a number of samples taken from a food lot. The acceptable limit is a specific level of risk and not a specific number of microorganisms, as in other microbiological criteria. The approach requires the availability of a quantitative microbiological risk assessment model to obtain risk estimates for food products from sampled food lots. By relating these food-lot risk estimates to the mean risk estimate associated with a representative baseline data set, a relative risk estimate can be obtained. This relative risk estimate can then be compared with a critical value defined by the criterion. This microbiological criterion based on a relative risk limit is particularly useful when quantitative enumeration data are available and when the prevalence of the microorganism

  7. Controllability, not chaos, key criterion for ocean state estimation

    Science.gov (United States)

    Gebbie, Geoffrey; Hsieh, Tsung-Lin

    2017-07-01

    The Lagrange multiplier method for combining observations and models (i.e., the adjoint method or 4D-VAR) has been avoided or approximated when the numerical model is highly nonlinear or chaotic. This approach has been adopted primarily due to difficulties in the initialization of low-dimensional chaotic models, where the search for optimal initial conditions by gradient-descent algorithms is hampered by multiple local minima. Although initialization is an important task for numerical weather prediction, ocean state estimation usually demands an additional task - a solution of the time-dependent surface boundary conditions that result from atmosphere-ocean interaction. Here, we apply the Lagrange multiplier method to an analogous boundary control problem, tracking the trajectory of the forced chaotic pendulum. Contrary to previous assertions, it is demonstrated that the Lagrange multiplier method can track multiple chaotic transitions through time, so long as the boundary conditions render the system controllable. Thus, the nonlinear timescale poses no limit to the time interval for successful Lagrange multiplier-based estimation. That the key criterion is controllability, not a pure measure of dynamical stability or chaos, illustrates the similarities between the Lagrange multiplier method and other state estimation methods. The results with the chaotic pendulum suggest that nonlinearity should not be a fundamental obstacle to ocean state estimation with eddy-resolving models, especially when using an improved first-guess trajectory.

  8. Modelling uncertainty due to imperfect forward model and aerosol microphysical model selection in the satellite aerosol retrieval

    Science.gov (United States)

    Määttä, Anu; Laine, Marko; Tamminen, Johanna

    2015-04-01

    This study aims to characterize the uncertainty related to the selection of the aerosol microphysical model and the modelling error due to approximations in the forward modelling. Many satellite aerosol retrieval algorithms rely on pre-calculated look-up tables of model parameters representing various atmospheric conditions. In the retrieval we need to choose the most appropriate aerosol microphysical models from the pre-defined set of models by fitting them to the observations. The aerosol properties, e.g. AOD, are then determined from the best models. This choice of an appropriate aerosol model constitutes a notable part of the AOD retrieval uncertainty. The motivation of our study was to account for these two sources in the total uncertainty budget: the uncertainty in selecting the most appropriate model, and the uncertainty resulting from the approximations in the pre-calculated aerosol microphysical model. The systematic model error was analysed by studying the behaviour of the model residuals, i.e. the differences between modelled and observed reflectances, by statistical methods. We utilised Gaussian processes to characterize the uncertainty related to approximations in aerosol microphysics modelling due to the use of look-up tables and other non-modelled systematic features in the Level 1 data. The modelling error is described by a non-diagonal covariance matrix parameterised by a correlation length, which is estimated from the residuals using computational tools from spatial statistics. In addition, we utilised Bayesian model selection and model averaging methods to account for the uncertainty due to aerosol model selection. By acknowledging the modelling error as a source of uncertainty in the retrieval of AOD from observed spectral reflectance, we allow the observed values to deviate from the modelled values within limits determined by both the measurement and modelling errors. This results in a more realistic uncertainty level of the retrieved AOD. The method is illustrated by both
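    The modelling-error covariance described above can be sketched directly: an exponential correlation structure with a given correlation length is added to the diagonal measurement-noise term (the wavelength grid, variances and correlation length are all assumed values):

```python
import numpy as np

# Sketch of the modelling-error description: residual correlations get an
# exponential covariance parameterised by a correlation length, added to
# the diagonal measurement-noise covariance. All values are illustrative.
wavelengths = np.linspace(340.0, 500.0, 30)            # nm, assumed grid
d = np.abs(wavelengths[:, None] - wavelengths[None, :])

sigma_meas, sigma_model, corr_len = 0.01, 0.02, 40.0   # assumed parameters
C = (sigma_meas**2 * np.eye(len(wavelengths))
     + sigma_model**2 * np.exp(-d / corr_len))

# C can then replace the diagonal noise covariance in the retrieval's
# likelihood, letting modelled reflectances deviate within both error terms.
print(np.linalg.cond(C))
```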

  9. Selecting Suitable Sites for Mine Waste Dumps Using GIS ...

    African Journals Online (AJOL)

    This research used the ModelBuilder tool and several GIS spatial analyst tools to select suitable sites for mine waste dump. The weighted overlay technique was adopted by first determining the necessary criteria and constraints and subsequently developing attributes for each criterion. The criteria used were grouped into a ...

  10. Evaluation of Inequality Constrained Hypotheses Using an Akaike-Type Information Criterion

    NARCIS (Netherlands)

    Altinisik, Y.

    2018-01-01

    The Akaike information criterion (AIC) is one of the best known information criteria that can be used to evaluate hypotheses containing only equality restrictions on model parameters. The GORIC is a generalization of the AIC that can be utilized to evaluate hypotheses containing equality and/or

  11. Structure and selection in an autocatalytic binary polymer model

    DEFF Research Database (Denmark)

    Tanaka, Shinpei; Fellermann, Harold; Rasmussen, Steen

    2014-01-01

    a pool of monomers, highly ordered populations with particular sequence patterns are dynamically selected out of a vast number of possible states. The interplay between the selected microscopic sequence patterns and the macroscopic cooperative structures is examined both analytically and in simulation...

  12. Performance Measurement Model for the Supplier Selection Based on AHP

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2015-10-01

    Full Text Available The performance of suppliers is a crucial factor for the success or failure of any company. Rational and effective decision making in terms of the supplier selection process can help the organization to optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP). Experience and studies have shown that there is no single best way of evaluating and selecting suppliers; rather, the approach varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied to the supplier selection process.
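    One widely used step in such an approach is deriving AHP weights from a pairwise comparison matrix via its principal eigenvector; a minimal sketch with hypothetical comparisons and Saaty's consistency check follows (the paper's own criteria and judgements are not reproduced):

```python
import numpy as np

# Minimal AHP sketch: criterion weights from a pairwise comparison matrix
# via the principal eigenvector, plus the consistency ratio (CR < 0.1).
# The comparison values below are hypothetical.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])   # e.g. cost vs quality vs delivery

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                        # normalized priority vector

n = A.shape[0]
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n] # Saaty's random consistency index
CR = (eigvals[i].real - n) / ((n - 1) * RI)
print("weights:", w.round(3), "consistency ratio:", round(CR, 3))
```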

  13. Genome-wide selection by mixed model ridge regression and extensions based on geostatistical models.

    Science.gov (United States)

    Schulz-Streeck, Torben; Piepho, Hans-Peter

    2010-03-31

    The success of genome-wide selection (GS) approaches will depend crucially on the availability of efficient and easy-to-use computational tools. Therefore, approaches that can be implemented using mixed models hold particular promise and deserve detailed study. A particular class of mixed models suitable for GS is given by geostatistical mixed models, when genetic distance is treated analogously to spatial distance in geostatistics. We consider various spatial mixed models for use in GS. The analyses presented for the QTL-MAS 2009 dataset pay particular attention to the modelling of residual errors as well as of polygenetic effects. It is shown that geostatistical models are viable alternatives to ridge regression, one of the common approaches to GS. Correlations between genome-wide estimated breeding values and true breeding values were between 0.879 and 0.889. In the example considered, we did not find a large effect of the residual error variance modelling, largely because error variances were very small. A variance components model reflecting the pedigree of the crosses did not provide an improved fit. We conclude that geostatistical models deserve further study as a tool to GS that is easily implemented in a mixed model package.
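    The ridge-regression baseline against which the geostatistical models are compared amounts to a single penalized linear solve; the sketch below uses synthetic genotypes and an assumed penalty (in practice lambda comes from variance components, lambda = sigma_e^2/sigma_m^2, or cross-validation):

```python
import numpy as np

# Bare-bones ridge-regression GS sketch: shrink marker effects with an L2
# penalty, then predict genomic breeding values for new genotypes.
rng = np.random.default_rng(1)
n, p = 200, 1000                                     # individuals, markers
Z = rng.integers(0, 3, size=(n, p)).astype(float)    # 0/1/2 genotype codes
u = rng.normal(0.0, 0.05, size=p)                    # true marker effects
y = Z @ u + rng.normal(0.0, 1.0, size=n)             # phenotypes

lam = 100.0                                          # assumed penalty
u_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

Z_new = rng.integers(0, 3, size=(50, p)).astype(float)
gebv = Z_new @ u_hat          # genome-wide estimated breeding values
print(gebv[:5].round(3))
```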

  14. Modelling the growth of tambaqui, Colossoma macropomum (Cuvier, 1816) in floodplain lakes: model selection and multimodel inference.

    Science.gov (United States)

    Costa, L R F; Barthem, R B; Albernaz, A L; Bittencourt, M M; Villacorta-Corrêa, M A

    2013-05-01

    The tambaqui, Colossoma macropomum, is one of the most commercially valuable Amazonian fish species, and in the floodplains of the region they are caught in both rivers and lakes. Most growth studies on this species to date have fitted only one growth model, the von Bertalanffy, without considering its possible uncertainties. In this study, four different models (von Bertalanffy, Logistic, Gompertz and the general model of Schnüte-Richards) were fitted to a data set of fish caught within lakes of the middle Solimões River. These models were fitted by non-linear regression, using the sample size of each age class as its weight. The fit of each model was evaluated using the Akaike Information Criterion (AIC), the AIC differences between models (Δi) and the evidence weights (wi). Both the Logistic (Δi = 0.0) and Gompertz (Δi = 1.12) models were supported by the data, but neither of them was clearly superior (wi, respectively 52.44 and 29.95%). Thus, we propose the use of a model-averaged estimate of the asymptotic length (L∞). The averaged model, based on the Logistic and Gompertz models, resulted in an estimate of L∞=90.36, indicating that the tambaqui would take approximately 25 years to reach average size.
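    The evidence-weight and model-averaging arithmetic is easy to make concrete; the AIC values and per-model L∞ estimates below are illustrative stand-ins, not the paper's fitted values:

```python
import numpy as np

def akaike_weights(aic):
    """Evidence weights w_i = exp(-Delta_i/2) / sum_j exp(-Delta_j/2)."""
    aic = np.asarray(aic, dtype=float)
    rel = np.exp(-0.5 * (aic - aic.min()))
    return rel / rel.sum()

# Hypothetical AIC values for the four candidate growth curves, with an
# illustrative L-infinity estimate from each fit:
models = ['von Bertalanffy', 'Logistic', 'Gompertz', 'Schnute-Richards']
aic    = [1530.4, 1524.1, 1525.2, 1529.0]
Linf   = [101.2, 88.7, 93.1, 95.4]

w = akaike_weights(aic)
print(dict(zip(models, w.round(3))))
print("model-averaged L_inf:", np.dot(w, Linf).round(2))
```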

  15. Nonparametric adaptive age replacement with a one-cycle criterion

    International Nuclear Information System (INIS)

    Coolen-Schrijner, P.; Coolen, F.P.A.

    2007-01-01

    Age replacement of technical units has received much attention in the reliability literature over the last four decades. Mostly, the failure time distribution for the units is assumed to be known, and the minimal cost per unit of time is used as the optimality criterion, where renewal reward theory simplifies the mathematics involved but requires the assumption that the same process and replacement strategy continue over a very large ('infinite') period of time. Recently, there has been increasing attention to adaptive strategies for age replacement which take into account the information from the process. Although renewal reward theory can still be used to provide an intuitively and mathematically attractive optimality criterion, it is more logical to use the minimal cost per unit of time over a single cycle as the optimality criterion for adaptive age replacement. In this paper, we first show that in the classical age replacement setting, with a known failure time distribution with increasing hazard rate, the one-cycle criterion leads to earlier replacement than the renewal reward criterion. Thereafter, we present adaptive age replacement with a one-cycle criterion within the nonparametric predictive inferential framework. We study the performance of this approach via simulations, which are also used for comparisons with the use of the renewal reward criterion within the same statistical framework
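    For contrast with the one-cycle criterion, the classical renewal-reward cost rate that the paper starts from can be minimized numerically; the Weibull lifetime parameters and replacement costs below are assumed:

```python
import numpy as np

# Classical (renewal-reward) age-replacement cost rate for a known lifetime
# distribution: g(T) = [c_p*S(T) + c_f*F(T)] / integral_0^T S(t) dt,
# minimized numerically over T. Weibull and cost values are assumed.
shape, scale = 2.5, 10.0     # increasing hazard rate (shape > 1)
c_p, c_f = 1.0, 10.0         # preventive vs failure replacement cost

t = np.linspace(1e-3, 30.0, 3000)
S = np.exp(-(t / scale) ** shape)                 # survival function
dt = t[1] - t[0]
ecl = np.cumsum((S[:-1] + S[1:]) * 0.5 * dt)      # expected cycle length
g = (c_p * S[1:] + c_f * (1.0 - S[1:])) / ecl     # cost per unit time

T_opt = t[1:][np.argmin(g)]
print("optimal preventive replacement age: %.2f" % T_opt)
```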

  16. Employment Standards for Australian Urban Firefighters: Part 2: The Physiological Demands and the Criterion Tasks.

    Science.gov (United States)

    Taylor, Nigel A S; Fullagar, Hugh H K; Sampson, John A; Notley, Sean R; Burley, Simon D; Lee, Daniel S; Groeller, Herbert

    2015-10-01

    The physiological demands of 15 essential, physically demanding fire-fighting tasks were investigated to identify criterion tasks for bona fide recruit selection. A total of 51 operational firefighters participated in discrete, field-based occupational simulations, with physiological responses measured throughout. The most stressful tasks were identified and classified according to dominant fitness attributes and movement patterns. Three movement classes (single-sided load carriage [5 tasks], dragging loads [4 tasks], and overhead pushing and holding objects [2 tasks]) and one mandatory strength task emerged. Seven criterion tasks were identified. Load holding and carriage dominated these movement patterns, yet no task accentuated whole-body endurance. Material handling movements from each classification must appear within a physical aptitude (selection) test for it to adequately represent the breadth of tasks performed by Australian urban firefighters.

  17. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise on the number of climate models that can be included in a climate change impact study.

  19. Modeling and Solving the Liner Shipping Service Selection Problem

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Balakrishnan, Anant

    We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes...... requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB....

  19. An Integrated DEMATEL-QFD Model for Medical Supplier Selection

    OpenAIRE

    Mehtap Dursun; Zeynep Şener

    2014-01-01

    Supplier selection is considered as one of the most critical issues encountered by operations and purchasing managers to sharpen the company’s competitive advantage. In this paper, a novel fuzzy multi-criteria group decision making approach integrating quality function deployment (QFD) and decision making trial and evaluation laboratory (DEMATEL) method is proposed for supplier selection. The proposed methodology enables to consider the impacts of inner dependence among supplier assessment cr...

  1. Evaluation of uncertainties in selected environmental dispersion models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-01-01

    Compliance with standards of radiation dose to the general public has necessitated the use of dispersion models to predict radionuclide concentrations in the environment due to releases from nuclear facilities. Because these models are only approximations of reality and because of inherent variations in the input parameters used in these models, their predictions are subject to uncertainty. Quantification of this uncertainty is necessary to assess the adequacy of these models for use in determining compliance with protection standards. This paper characterizes the capabilities of several dispersion models to accurately predict pollutant concentrations in environmental media. Three types of models are discussed: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations.

  2. Model of Selective and Non-Selective Management of Badgers (Meles meles) to Control Bovine Tuberculosis in Badgers and Cattle.

    Science.gov (United States)

    Smith, Graham C; Delahay, Richard J; McDonald, Robbie A; Budgey, Richard

    2016-01-01

    Bovine tuberculosis (bTB) causes substantial economic losses to cattle farmers and taxpayers in the British Isles. Disease management in cattle is complicated by the role of the European badger (Meles meles) as a host of the infection. Proactive, non-selective culling of badgers can reduce the incidence of disease in cattle but may also have negative effects in the area surrounding culls that have been associated with social perturbation of badger populations. The selective removal of infected badgers would, in principle, reduce the number culled, but the effects of selective culling on social perturbation and disease outcomes are unclear. We used an established model to simulate non-selective badger culling, non-selective badger vaccination and a selective trap and vaccinate or remove (TVR) approach to badger management in two distinct areas: South West England and Northern Ireland. TVR was simulated with and without social perturbation in effect. The lower badger density in Northern Ireland caused no qualitative change in the effect of management strategies on badgers, although the absolute number of infected badgers was lower in all cases. However, probably due to differing herd density in Northern Ireland, the simulated badger management strategies caused greater variation in subsequent cattle bTB incidence. Selective culling in the model reduced the number of badgers killed by about 83% but this only led to an overall benefit for cattle TB incidence if there was no social perturbation of badgers. We conclude that the likely benefit of selective culling will be dependent on the social responses of badgers to intervention but that other population factors including badger and cattle density had little effect on the relative benefits of selective culling compared to other methods, and that this may also be the case for disease management in other wild host populations.

  3. Effects of selected operational parameters on efficacy and selectivity of electromembrane extraction. Chlorophenols as model analytes

    Czech Academy of Sciences Publication Activity Database

    Šlampová, Andrea; Kubáň, Pavel; Boček, Petr

    2014-01-01

    Vol. 35, No. 17 (2014), pp. 2429-2437 ISSN 0173-0835 R&D Projects: GA ČR(CZ) GA13-05762S Institutional support: RVO:68081715 Keywords: electromembrane extraction * chlorophenols * extraction selectivity Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 3.028, year: 2014

  4. National HIV prevalence estimates for sub-Saharan Africa: controlling selection bias with Heckman-type selection models

    Science.gov (United States)

    Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till

    2012-01-01

    Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which unlike conventional imputation can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need for increasing participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342
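
    The two-step logic of a Heckman-type correction is easy to demonstrate on simulated data. The sketch below uses the classic linear-outcome variant; the HIV application actually requires a bivariate-probit version for a binary outcome, and all variable names and parameter values here are invented.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=(n, 2))                       # z[:, 1] only drives participation
u, v = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], n).T
participate = (0.5 + z.sum(axis=1) + v) > 0       # selection equation
y = 1.0 + 2.0 * z[:, 0] + u                       # outcome, observed if participate

# Step 1: probit for participation, then the inverse Mills ratio
probit = sm.Probit(participate.astype(float), sm.add_constant(z)).fit(disp=0)
imr = norm.pdf(probit.fittedvalues) / norm.cdf(probit.fittedvalues)

# Step 2: outcome regression on participants, augmented with the Mills ratio
sel = participate
naive = sm.OLS(y[sel], sm.add_constant(z[sel, 0])).fit()
corrected = sm.OLS(y[sel], sm.add_constant(
    np.column_stack([z[sel, 0], imr[sel]]))).fit()
print(naive.params[1], corrected.params[1])       # corrected slope is nearer 2
```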

  5. Criterion of damage beginning: experimental identification for laminate composite

    International Nuclear Information System (INIS)

    Thiebaud, F.; Perreux, D.; Varchon, D.; Lebras, J.

    1996-01-01

    The aim of this study is to propose a criterion for the onset of damage in laminate composites. The material is a glass-epoxy laminate [+55°, -55°]n produced by the filament winding process. First of all, a description of the damage is performed, which allows a damage variable to be defined. Thanks to the free energy potential, an associated variable is defined, and the damage criterion is written in terms of it. The parameter of the criterion is identified using mechanical and acoustical methods; the results are compared and exhibit good agreement. (authors). 13 refs., 5 figs

  6. Robust Hammerstein Adaptive Filtering under Maximum Correntropy Criterion

    Directory of Open Access Journals (Sweden)

    Zongze Wu

    2015-10-01

    Full Text Available The maximum correntropy criterion (MCC) has recently been successfully applied to adaptive filtering. Adaptive algorithms under MCC show strong robustness against large outliers. In this work, we apply the MCC criterion to develop a robust Hammerstein adaptive filter. Compared with traditional Hammerstein adaptive filters, which are usually derived based on the well-known mean square error (MSE) criterion, the proposed algorithm can achieve better convergence performance, especially in the presence of impulsive non-Gaussian (e.g., α-stable) noise. Additionally, some theoretical results concerning the convergence behavior are also obtained. Simulation examples are presented to confirm the superior performance of the new algorithm.
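
    A minimal sketch of an MCC-type update for a plain linear FIR adaptive filter (a Hammerstein version would add a static input nonlinearity); the kernel width sigma and step size mu below are assumptions, not values from the paper.

```python
import numpy as np

def mcc_lms(x, d, L=8, mu=0.05, sigma=1.0):
    """Adaptive FIR filter trained under the maximum correntropy criterion.

    The Gaussian kernel exp(-e^2 / (2 sigma^2)) shrinks updates caused by
    large (possibly impulsive) errors, which is the source of the robustness
    compared with a plain MSE/LMS update."""
    w = np.zeros(L)
    for n in range(L, len(x)):
        u = x[n - L:n][::-1]                       # regressor, newest sample first
        e = d[n] - w @ u                           # a priori error
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u
    return w
```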

  7. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Science.gov (United States)

    2013-04-03

    Federal Register notice concerning the reporting procedure for mathematical models selected to predict heated effluent dispersion in natural water bodies. The associated regulatory guide, "Reporting Procedure for Mathematical Models Selected to Predict Heated Effluent Dispersion in Natural Water Bodies," describes mathematical modeling methods used in predicting the dispersion of heated effluent in natural water bodies.

  8. Natural Selection at Work: An Accelerated Evolutionary Computing Approach to Predictive Model Selection

    Science.gov (United States)

    Akman, Olcay; Hallam, Joshua W.

    2010-01-01

    We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP) as a measure of model fitness instead of the commonly used measure of R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency. PMID:20661297

  9. Natural selection at work: an accelerated evolutionary computing approach to predictive model selection

    Directory of Open Access Journals (Sweden)

    Olcay Akman

    2010-07-01

    Full Text Available We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP) as a measure of model fitness instead of the commonly used measure of R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency.
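
    A toy version of the idea: a genetic algorithm searching over predictor subsets, scored here by AIC as a stand-in for ICOMP (which has no standard Python implementation). Population size, mutation rate and the truncation-selection scheme are arbitrary illustrative choices.

```python
import numpy as np
import statsmodels.api as sm

def aic(mask, X, y):
    """AIC of the OLS submodel using the predictors flagged in mask."""
    if not mask.any():
        return np.inf
    return sm.OLS(y, sm.add_constant(X[:, mask])).fit().aic

def ga_select(X, y, pop_size=30, gens=40, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    pop = rng.random((pop_size, p)) < 0.5                  # random initial subsets
    for _ in range(gens):
        fit = np.array([aic(m, X, y) for m in pop])
        parents = pop[np.argsort(fit)][: pop_size // 2]    # truncation selection
        cuts = rng.integers(1, p, size=pop_size // 2)
        kids = np.array([np.r_[parents[i, :c], parents[(i + 1) % len(parents), c:]]
                         for i, c in enumerate(cuts)])     # one-point crossover
        kids ^= rng.random(kids.shape) < p_mut             # bit-flip mutation
        pop = np.vstack([parents, kids])
    fit = np.array([aic(m, X, y) for m in pop])
    return pop[np.argmin(fit)]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=200)
print(np.flatnonzero(ga_select(X, y)))   # ideally recovers columns 0 and 3
```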

  10. Importance biasing quality criterion based on contribution response theory

    International Nuclear Information System (INIS)

    Borisov, N.M.; Panin, M.P.

    2001-01-01

    The report proposes a visual criterion for the quality of importance biasing in both forward and adjoint simulation. The similarity between contribution Monte Carlo and the importance-biased distribution of random collision events is proved. Conservation of the total number of random trajectory crossings of the surfaces separating the source and the detector is proposed as the quality criterion for importance biasing. The use of this criterion is demonstrated on the example of forward versus adjoint importance biasing in a gamma-ray deep-penetration problem. Because more data have been published on forward field characteristics than on adjoint ones, the importance function for adjoint simulation can be approximated more accurately than that for forward simulation, so adjoint importance simulation is more effective than forward simulation. The proposed criterion indicates this visually, showing the most uniform distribution of random trajectory crossing events for the most effective importance biasing parameters and pointing in the direction for tuning the importance biasing parameters. (orig.)

  11. Angular criterion for distinguishing between Fraunhofer and Fresnel diffraction

    International Nuclear Information System (INIS)

    Medina, Francisco F.; Garcia-Sucerquia, Jorge; Castaneda, Roman; Matteucci, Giorgio

    2003-03-01

    The distinction between Fresnel and Fraunhofer diffraction is a crucial condition for the accurate analysis of diffracting structures. In this paper we propose a criterion based on the angle subtended by the first zero of the diffraction pattern from the center of the diffracting aperture. The determination of the zero of the diffraction pattern is the crucial point for assuring the precision of the criterion. It mainly depends on the dynamic range of the detector. Therefore, the applicability of adequate thresholds for different detector types is discussed. The criterion is also generalized by expressing it in terms of the number of Fresnel zones delimited by the aperture. Simulations are reported to illustrate the feasibility of the criterion. (author)
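
    The customary quantitative proxy for this distinction is the Fresnel number, sketched below (this is not the authors' first-zero angular criterion, and the 0.1 cut-off is an assumed convention).

```python
def fresnel_number(a, wavelength, z):
    """Number of Fresnel zones subtended by an aperture of half-width a
    at distance z for light of the given wavelength."""
    return a**2 / (wavelength * z)

# Hypothetical setup: 0.5 mm slit half-width, 633 nm light, screen at 1 m
N = fresnel_number(0.5e-3, 633e-9, 1.0)
regime = "Fraunhofer (far field)" if N < 0.1 else "Fresnel (near field)"
print(round(N, 3), regime)
```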

  12. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    Science.gov (United States)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and human footprint on nature such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between SSDM and classical SDMs, with contrasted patterns according to species and spatial resolutions. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong misestimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents

  13. Model Selection and Risk Estimation with Applications to Nonlinear Ordinary Differential Equation Systems

    DEFF Research Database (Denmark)

    Mikkelsen, Frederik Vissing

    Broadly speaking, this thesis is devoted to model selection applied to ordinary differential equations and risk estimation under model selection. A model selection framework was developed for modelling time course data by ordinary differential equations. The framework is accompanied by the R software...... effective computational tools for estimating unknown structures in dynamical systems, such as gene regulatory networks, which may be used to predict downstream effects of interventions in the system. A recommended algorithm based on the computational tools is presented and thoroughly tested in various...... simulation studies and applications. The second part of the thesis also concerns model selection, but focuses on risk estimation, i.e., estimating the error of mean estimators involving model selection. An extension of Stein's unbiased risk estimate (SURE), which applies to a class of estimators with model...
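
    The classical special case of SURE, for soft thresholding of a Gaussian mean vector, is compact enough to show here; the thesis extends SURE to estimators involving model selection, which this sketch does not attempt.

```python
import numpy as np

def sure_soft_threshold(y, lam, sigma=1.0):
    """Stein's unbiased risk estimate for soft thresholding of y ~ N(mu, sigma^2 I)."""
    return (len(y) * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(y) <= lam)
            + np.sum(np.minimum(np.abs(y), lam) ** 2))

rng = np.random.default_rng(1)
mu = np.r_[np.zeros(90), np.full(10, 3.0)]       # sparse mean, illustrative
y = mu + rng.normal(size=100)
lams = np.linspace(0.0, 4.0, 81)
best = lams[np.argmin([sure_soft_threshold(y, l) for l in lams])]
print("SURE-optimal threshold:", round(best, 2))
```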

  14. Model selection criteria : how to evaluate order restrictions

    NARCIS (Netherlands)

    Kuiper, R.M.

    2012-01-01

    Researchers often have ideas about the ordering of model parameters. They frequently have one or more theories about the ordering of the group means, in analysis of variance (ANOVA) models, or about the ordering of coefficients corresponding to the predictors, in regression models. A researcher might

  15. The Selection of Turbulence Models for Prediction of Room Airflow

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    This paper discusses the use of different turbulence models and their advantages in given situations. As an example, it is shown that a simple zero-equation model can be used for the prediction of special situations such as flow with a low level of turbulence. A zero-equation model with compensation

  16. Failure Criterion for Brick Masonry: A Micro-Mechanics Approach

    OpenAIRE

    Kawa Marek

    2015-01-01

    The paper deals with the formulation of a failure criterion for in-plane loaded masonry. Using a micro-mechanics approach, the strength of a masonry microstructure with constituents obeying the Drucker-Prager criterion is estimated numerically. The procedure invokes lower bound analysis: for assumed stress fields constructed within the masonry periodic cell, the critical load is obtained as a solution of a constrained optimization problem. The analysis is carried out for many different loading ...

  17. SONOX criterion application for ecological analysis of thermopower plants operation

    International Nuclear Information System (INIS)

    Cardu, Mircea; Baica, Malvina

    2009-01-01

    In this paper the authors introduce a new criterion - SONOX - which can be used to analyze the environmental impact of thermal power plant (TPP) operation through noxious gas emissions (sulphur and nitrogen oxides) into the atmosphere. Based on this criterion, and applying the equivalence and compensation principles developed by the authors in some of their previous papers, we analyze some main Romanian TPPs, and some recommendations are given in order to comply with the European Union norms regarding the respective emission limits.

  18. Evaluation of the tensor polynomial failure criterion for composite materials

    Science.gov (United States)

    Tennyson, R. C.; Macdonald, D.; Nanyaro, A. P.

    1978-01-01

    A comprehensive experimental and analytical evaluation of the tensor polynomial failure criterion was undertaken to determine its capability for predicting the ultimate strength of laminated composite structures subject to a plane stress state. Results are presented demonstrating that a quadratic formulation is too conservative and a cubic representation is required. Strength comparisons with test data derived from glass/epoxy and graphite/epoxy tubular specimens are also provided to validate the cubic strength criterion.
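
    For reference, the quadratic tensor-polynomial (Tsai-Wu) failure index under plane stress can be written as below; the paper's point is precisely that cubic terms are needed, which this quadratic sketch omits. The strengths and the default interaction term F12 are assumptions.

```python
import numpy as np

def tsai_wu_plane_stress(s1, s2, s6, Xt, Xc, Yt, Yc, S):
    """Quadratic tensor-polynomial failure index; >= 1 predicts failure."""
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * np.sqrt(F11 * F22)      # commonly assumed interaction term
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*s6**2 + 2*F12*s1*s2)

# Assumed unidirectional-ply strengths (MPa) and a trial stress state
print(tsai_wu_plane_stress(s1=800, s2=20, s6=30,
                           Xt=1500, Xc=1200, Yt=50, Yc=200, S=70))
```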

  19. Mathematical models of cytotoxic effects in endpoint tumor cell line assays: critical assessment of the application of a single parametric value as a standard criterion to quantify the dose-response effects and new unexplored proposal formats.

    Science.gov (United States)

    Calhelha, Ricardo C; Martínez, Mireia A; Prieto, M A; Ferreira, Isabel C F R

    2017-10-23

    The development of convenient tools for describing and quantifying the effects of standard and novel therapeutic agents is essential for the research community, to perform more precise evaluations. Although mathematical models and quantification criteria have been exchanged in the last decade between different fields of study, there are relevant methodologies that lack proper mathematical descriptions and standard criteria to quantify their responses. Therefore, part of the relevant information that can be drawn from the experimental results obtained, and the quantification of its statistical reliability, is lost. Despite its relevance, there is no standard form for in vitro endpoint tumor cell line assays (TCLA) that enables the evaluation of the cytotoxic dose-response effects of anti-tumor drugs. The analysis of all the specific problems associated with the diverse nature of the available TCLA used is unfeasible. However, since most TCLA share the main objectives and similar operative requirements, we have chosen the sulforhodamine B (SRB) colorimetric assay for cytotoxicity screening of tumor cell lines as an experimental case study. In this work, the common biological and practical non-linear dose-response mathematical models are tested against experimental data and, following several statistical analyses, the model based on the Weibull distribution was confirmed as the convenient approximation to test the cytotoxic effectiveness of anti-tumor compounds. Then, the advantages and disadvantages of all the different parametric criteria derived from the model, which enable the quantification of the dose-response drug effects, are extensively discussed. Therefore, a model and standard criteria for easily performing comparisons between different compounds are established. The advantages include simple application, provision of parametric estimations that characterize the response as standard criteria, economization of experimental effort and enabling
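
    A minimal sketch of the recommended approach: fit a cumulative Weibull dose-response curve and derive a parametric summary (here, the dose giving 50% of the response) from the fitted parameters. The data are invented, and the paper's exact parameterisation may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_response(dose, K, a, b):
    """Cumulative Weibull dose-response: K is the asymptote, a a scale
    (dose near 63% of K) and b a shape parameter."""
    return K * (1.0 - np.exp(-(dose / a) ** b))

# Invented SRB-style data: dose vs. fraction of growth inhibition
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([0.02, 0.05, 0.14, 0.35, 0.66, 0.88, 0.97])

(K, a, b), _ = curve_fit(weibull_response, dose, resp, p0=[1.0, 5.0, 1.0])
d50 = a * (-np.log(1 - 0.5 / K)) ** (1 / b)    # dose for a response of 0.5
print(round(K, 3), round(a, 3), round(b, 3), round(d50, 3))
```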

  20. Variable Selection in ROC Regression

    Directory of Open Access Journals (Sweden)

    Binhuan Wang

    2013-01-01

    Full Text Available Regression models are introduced into receiver operating characteristic (ROC) analysis to accommodate effects of covariates, such as genes. If many covariates are available, the variable selection issue arises. The traditional induced methodology separately models outcomes of diseased and nondiseased groups; thus, separate application of variable selection to the two models creates barriers to interpretation, due to differences in the selected models. Furthermore, in ROC regression, the accuracy of the area under the curve (AUC) should be the focus instead of the consistency of model selection or good prediction performance. In this paper, we obtain one single objective function with the group SCAD to select grouped variables, which adapts to popular criteria of model selection, and propose a two-stage framework to apply the focused information criterion (FIC). Some asymptotic properties of the proposed methods are derived. Simulation studies show that the grouped variable selection is superior to separate model selections. Furthermore, the FIC improves the accuracy of the estimated AUC compared with other criteria.

  1. A Four-Step Model for Teaching Selection Interviewing Skills

    Science.gov (United States)

    Kleiman, Lawrence S.; Benek-Rivera, Joan

    2010-01-01

    The topic of selection interviewing lends itself well to experience-based teaching methods. Instructors often teach this topic by using a two-step process. The first step consists of lecturing students on the basic principles of effective interviewing. During the second step, students apply these principles by role-playing mock interviews with…

  2. Modelling the negative effects of landscape fragmentation on habitat selection

    NARCIS (Netherlands)

    Langevelde, van F.

    2015-01-01

    Landscape fragmentation constrains movement of animals between habitat patches. Fragmentation may, therefore, limit the possibilities to explore and select the best habitat patches, and some animals may have to cope with low-quality patches due to these movement constraints. If so, these individuals

  3. Selecting Human Error Types for Cognitive Modelling and Simulation

    NARCIS (Netherlands)

    Mioch, T.; Osterloh, J.P.; Javaux, D.

    2010-01-01

    This paper presents a method that has enabled us to make a selection of error types and error production mechanisms relevant to the HUMAN European project, and discusses the reasons underlying those choices. We claim that this method has the advantage that it is very exhaustive in determining the

  4. RUC at TREC 2014: Select Resources Using Topic Models

    Science.gov (United States)

    2014-11-01

    We preprocess the data by parsing the pages (html, txt, doc, xls, ppt, pdf and xml files) into tokens and removing the stopwords listed in Indri's stopword list.

  5. The Living Dead: Transformative Experiences in Modelling Natural Selection

    Science.gov (United States)

    Petersen, Morten Rask

    2017-01-01

    This study considers how students change their coherent conceptual understanding of natural selection through a hands-on simulation. The results show that most students change their understanding. In addition, some students also underwent a transformative experience and used their new knowledge in a leisure time activity. These transformative…

  6. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
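
    A minimal 2-D correlated random walk with von Mises turn angles, plus one common motility metric (a meandering index defined as net displacement over path length). Distributions and parameter values are illustrative, not the paper's calibrated ones.

```python
import numpy as np

def crw(n_steps, mean_step=5.0, turn_kappa=2.0, rng=None):
    """Correlated random walk: turn angles concentrated around zero
    (von Mises) give directional persistence."""
    rng = rng or np.random.default_rng()
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += rng.vonmises(0.0, turn_kappa)   # persistent turning
        step = rng.exponential(mean_step)          # translational step length
        pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

track = crw(200, rng=np.random.default_rng(42))
path_len = np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1))
meander = np.linalg.norm(track[-1] - track[0]) / path_len   # in [0, 1]
print(round(meander, 3))
```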

  7. TABU SEARCH WITH ASPIRATION CRITERION FOR THE TIMETABLING PROBLEM

    Directory of Open Access Journals (Sweden)

    Oscar Chávez-Bosquez

    2015-01-01

    Full Text Available The aspiration criterion is an imperative element of Tabu Search, with aspiration-by-default and aspiration-by-objective the criteria mainly used in the literature. In this paper a new aspiration criterion is proposed that implements a probabilistic function when evaluating an element classified as tabu which improves the current solution; the proposal is called Tabu Search with Probabilistic Aspiration Criterion (BT-CAP). The test case used to evaluate the performance of the proposed Probabilistic Aspiration Criterion consists of the 20 instances of the problem described in the First International Timetabling Competition. The results are compared with 2 additional variants of the Tabu Search algorithm: Tabu Search with Default Aspiration Criterion (BT-CAD) and Tabu Search with Objective Aspiration Criterion (BT-CAO). The Wilcoxon test was applied to the generated results, and it was proved with 99% confidence that the BT-CAP algorithm obtains better solutions than the other two variants of the Tabu Search algorithm.
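
    A generic tabu-search skeleton showing where a probabilistic aspiration criterion plugs in: a tabu move that beats the incumbent is admitted with probability p_aspire. This is loosely inspired by the BT-CAP idea; the competition algorithm certainly differs in details such as the neighbourhood and tabu-list structure.

```python
import random

def tabu_search(cost, neighbours, x0, iters=500, tenure=10, p_aspire=0.3):
    """cost(x) -> float; neighbours(x) -> iterable of (move, candidate) pairs."""
    best, cur, best_cost = x0, x0, cost(x0)
    tabu = {}                                   # move -> iteration it stays tabu until
    for it in range(iters):
        candidates = []
        for move, x in neighbours(cur):
            c = cost(x)
            is_tabu = tabu.get(move, -1) > it
            aspired = c < best_cost and random.random() < p_aspire
            if (not is_tabu) or aspired:        # probabilistic aspiration criterion
                candidates.append((c, move, x))
        if not candidates:
            continue
        c, move, cur = min(candidates, key=lambda t: t[0])
        tabu[move] = it + tenure                # forbid reversing the move for a while
        if c < best_cost:
            best, best_cost = cur, c
    return best, best_cost
```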

  8. Model selection for integrated pest management with stochasticity.

    Science.gov (United States)

    Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel

    2018-04-07

    In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Design of Biomass Combined Heat and Power (CHP Systems based on Economic Risk using Minimax Regret Criterion

    Directory of Open Access Journals (Sweden)

    Ling Wen Choong

    2018-01-01

    Full Text Available It is a great challenge to identify optimum technologies for CHP systems that utilise biomass and convert it into heat and power. In this respect, industry decision makers lack confidence to invest in biomass CHP due to the economic risk arising from varying energy demand. This research work presents a systematic linear programming framework to design biomass CHP systems based on the potential loss of profit due to varying energy demand. The Minimax Regret Criterion (MRC) approach was used to assess the maximum regret between selections of a given biomass CHP design based on energy demand. Based on this, the model determined an optimal biomass CHP design with minimum regret in economic opportunity. As Feed-in Tariff (FiT) rates affect the revenue of the CHP plant, a sensitivity analysis of FiT rates on the selection of the biomass CHP design was then performed. Besides, an analysis of the trend of the optimum design selected by the model was conducted. To demonstrate the proposed framework, a case study was solved using the proposed approach. The case study focused on designing a biomass CHP system for a palm oil mill (POM), given the large energy potential of oil palm biomass in Malaysia.
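
    The regret computation itself is simple. With a hypothetical profit matrix of candidate designs by demand scenarios, the minimax regret design is found as follows.

```python
import numpy as np

# Rows: candidate CHP designs; columns: energy-demand scenarios (invented profits)
profit = np.array([[5.0, 3.0, 1.0],
                   [4.0, 4.0, 2.0],
                   [2.0, 3.5, 3.5]])

regret = profit.max(axis=0) - profit   # opportunity loss vs. best design per scenario
max_regret = regret.max(axis=1)        # worst-case regret of each design
print(regret, max_regret, int(np.argmin(max_regret)))   # minimax regret choice
```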

  10. Variable selection in multiple linear regression: The influence of ...

    African Journals Online (AJOL)

    The influence of individual cases in a data set is studied when variable selection is applied in multiple linear regression. Two different influence measures, based on the Cp criterion and Akaike's information criterion, are introduced. The relative change in the selection criterion when an individual case is omitted is ...

  11. Variable selection in multiple linear regression: The influence of ...

    African Journals Online (AJOL)

    The influence of individual cases in a data set is studied when variable selection is applied in multiple linear regression. Two different influence measures, based on the Cp criterion and Akaike's information criterion, are introduced. The relative change in the selection criterion when an individual case is omitted is proposed ...

  12. Computer-Assisted Criterion-Referenced Measurement.

    Science.gov (United States)

    Ferguson, Richard L.

    A model for computer-assisted branched testing was developed, implemented, and evaluated in the context of an elementary school using the system of Individually Prescribed Instruction. A computer was used to generate and present items and then score the student's constructed response. Using Wald's sequential probability ratio test, the computer…
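
    Wald's sequential probability ratio test for a mastery decision fits in a few lines; the hypothesised success probabilities and error rates below are assumptions, not those of the original system.

```python
import numpy as np

def sprt_mastery(responses, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    """SPRT for H0: success probability p0 (non-mastery) vs. H1: p1 (mastery)."""
    A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for i, correct in enumerate(responses, start=1):
        llr += np.log(p1 / p0) if correct else np.log((1 - p1) / (1 - p0))
        if llr >= A:
            return "mastery", i          # stop testing, mastery decided
        if llr <= B:
            return "non-mastery", i      # stop testing, non-mastery decided
    return "continue testing", len(responses)

print(sprt_mastery([1, 1, 0, 1, 1, 1, 1, 1]))
```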

  13. Exploratory regression analysis: a tool for selecting models and determining predictor importance.

    Science.gov (United States)

    Braun, Michael T; Oswald, Frederick L

    2011-06-01

    Linear regression analysis is one of the most important tools in a researcher's toolbox for creating and testing predictive models. Although linear regression analysis indicates how strongly a set of predictor variables, taken together, will predict a relevant criterion (i.e., the multiple R), the analysis cannot indicate which predictors are the most important. Although there is no definitive or unambiguous method for establishing predictor variable importance, there are several accepted methods. This article reviews those methods for establishing predictor importance and provides a program (in Excel) for implementing them (available for direct download at http://dl.dropbox.com/u/2480715/ERA.xlsm?dl=1). The program investigates all 2^p - 1 submodels and produces several indices of predictor importance. This exploratory approach to linear regression, similar to other exploratory data analysis techniques, has the potential to yield both theoretical and practical benefits.
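
    A stripped-down version of the exploratory idea, in Python rather than Excel: enumerate all 2^p - 1 submodels on simulated data and rank them, here by adjusted R^2 (one of several possible indices).

```python
import itertools
import numpy as np
import statsmodels.api as sm

def all_subsets(X, y, names):
    """Fit every non-empty predictor subset and return (adj. R^2, subset) pairs."""
    out = []
    for k in range(1, X.shape[1] + 1):
        for combo in itertools.combinations(range(X.shape[1]), k):
            m = sm.OLS(y, sm.add_constant(X[:, combo])).fit()
            out.append((m.rsquared_adj, [names[j] for j in combo]))
    return sorted(out, reverse=True)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.5, 0.0, 0.7, 0.0]) + rng.normal(size=200)
for r2, combo in all_subsets(X, y, ["x1", "x2", "x3", "x4"])[:3]:
    print(round(r2, 3), combo)
```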

  14. Selected Aspects of Computer Modeling of Reinforced Concrete Structures

    Directory of Open Access Journals (Sweden)

    Szczecina M.

    2016-03-01

    Full Text Available The paper presents some important aspects concerning the material constants of concrete and the stages of modeling of reinforced concrete structures. The problems taken into account are: the choice of a proper material model for concrete, the definition of the compressive and tensile behavior of concrete, and the selection of values for the dilation angle, fracture energy and relaxation time of concrete. Proper values of the material constants are fixed in simple compression and tension tests. The effectiveness and correctness of the applied model are checked on the example of reinforced concrete frame corners under opening bending moment. Calculations are performed in Abaqus software using the Concrete Damaged Plasticity model of concrete.

  15. Bronchodilatory and anti-inflammatory properties of inhaled selective phosphodiesterase inhibitors in a guinea pig model of allergic asthma

    NARCIS (Netherlands)

    Santing, R.E; de Boer, J; Rohof, A.A B; van der Zee, N.M; Zaagsma, Hans

    2001-01-01

    In a guinea pig model of allergic asthma, we investigated the effects of the selective phosphodiesterase inhibitors rolipram (phosphodiesterase 4-selective), Org 9935 (phosphodiesterase 3-selective) and Org 20241 (dual phosphodiesterase 4/phosphodiesterase 3-selective), administered by aerosol

  16. A model selection support system for numerical simulations of nuclear thermal-hydraulics

    International Nuclear Information System (INIS)

    Gofuku, Akio; Shimizu, Kenji; Sugano, Keiji; Yoshikawa, Hidekazu; Wakabayashi, Jiro

    1990-01-01

    In order to efficiently execute a dynamic simulation of a large-scale engineering system such as a nuclear power plant, it is necessary to develop an intelligent simulation support system for all phases of the simulation. This study is concerned with intelligent support for the program development phase and with an adequate model selection support method that applies AI (Artificial Intelligence) techniques to execute a simulation consistent with its purpose and conditions. A prototype expert system to support model selection for numerical simulations of nuclear thermal-hydraulics in the case of a cold leg small break loss-of-coolant accident of a PWR plant is now under development on a personal computer. The steps to support the selection of both the fluid model and the constitutive equations for the drift flux model have been developed. Several cases of model selection were carried out and reasonable model selection results were obtained. (author)

  17. Optimal selection of Orbital Replacement Unit on-orbit spares - A Space Station system availability model

    Science.gov (United States)

    Schwaab, Douglas G.

    1991-01-01

    A mathematical programming model is presented to optimize the selection of Orbital Replacement Unit on-orbit spares for the Space Station. The model maximizes system availability under the constraints of logistics resupply-cargo weight and volume allocations.

  18. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
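
    One widely used baseline that such guidance evaluates is impurity-based importance ranking followed by refitting on the top-ranked variables; a minimal scikit-learn sketch on simulated data (the data, the number of trees and the choice to keep two variables are all illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 10))
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(size=300)   # only the first two matter

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]       # rank variables
keep = order[:2]                                        # keep the top-ranked ones
rf_small = RandomForestRegressor(n_estimators=500, random_state=0).fit(X[:, keep], y)
print(order[:4], round(rf_small.score(X[:, keep], y), 3))   # in-sample R^2
```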

  19. Optimizing warehouse logistics operations through site selection models : Istanbul, Turkey

    OpenAIRE

    Erdemir, Ugur

    2003-01-01

    Approved for public release; distribution is unlimited. This thesis makes a cost benefit analysis of relocating the outdated and earthquake-damaged supply distribution center of the Turkish Navy. Given the dynamic environment surrounding the military operations, logistic sustainability requirements, rapid information technology developments, and the budget-constrained Turkish DoD acquisition environment, the site selection of a supply distribution center is critical to the future operations and...

  20. River water quality model no. 1 (RWQM1): III. Biochemical submodel selection

    DEFF Research Database (Denmark)

    Vanrolleghem, P.; Borchardt, D.; Henze, Mogens

    2001-01-01

    The new River Water Quality Model no.1 introduced in the two accompanying papers by Shanahan et al. and Reichert et al. is comprehensive. Shanahan et al. introduced a six-step decision procedure to select the necessary model features for a certain application. This paper specifically addresses one...... of these steps, i.e. the selection of submodels of the comprehensive biochemical conversion model introduced in Reichert et al. Specific conditions for inclusion of one or the other conversion process or model component are introduced, as are some general rules that can support the selection. Examples...... of simplified models are presented....

  1. Default Bayes Factors for Model Selection in Regression

    Science.gov (United States)

    Rouder, Jeffrey N.; Morey, Richard D.

    2012-01-01

    In this article, we present a Bayes factor solution for inference in multiple regression. Bayes factors are principled measures of the relative evidence from data for various models or positions, including models that embed null hypotheses. In this regard, they may be used to state positive evidence for a lack of an effect, which is not possible…
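
    The default Bayes factors in the article come from specific priors (e.g. JZS); a crude, easily computed stand-in is the BIC approximation to the Bayes factor, sketched below on simulated data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(size=100)
y = 0.3 * x + rng.normal(size=100)

full = sm.OLS(y, sm.add_constant(x)).fit()
null = sm.OLS(y, np.ones_like(y)).fit()

# exp((BIC_null - BIC_full) / 2) approximates the Bayes factor BF_10;
# values > 1 favour the slope model, values < 1 favour the null.
print(np.exp((null.bic - full.bic) / 2))
```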

  2. Predicting ethnic and racial discrimination: a meta-analysis of IAT criterion studies.

    Science.gov (United States)

    Oswald, Frederick L; Mitchell, Gregory; Blanton, Hart; Jaccard, James; Tetlock, Philip E

    2013-08-01

    This article reports a meta-analysis of studies examining the predictive validity of the Implicit Association Test (IAT) and explicit measures of bias for a wide range of criterion measures of discrimination. The meta-analysis estimates the heterogeneity of effects within and across 2 domains of intergroup bias (interracial and interethnic), 6 criterion categories (interpersonal behavior, person perception, policy preference, microbehavior, response time, and brain activity), 2 versions of the IAT (stereotype and attitude IATs), 3 strategies for measuring explicit bias (feeling thermometers, multi-item explicit measures such as the Modern Racism Scale, and ad hoc measures of intergroup attitudes and stereotypes), and 4 criterion-scoring methods (computed majority-minority difference scores, relative majority-minority ratings, minority-only ratings, and majority-only ratings). IATs were poor predictors of every criterion category other than brain activity, and the IATs performed no better than simple explicit measures. These results have important implications for the construct validity of IATs, for competing theories of prejudice and attitude-behavior relations, and for measuring and modeling prejudice and discrimination.

  3. The analysis of the capacity of the selected measures of decision-making models in companies

    OpenAIRE

    Helena Kościelniak; Beata Skowron-Grabowska; Sylwia Łęgowik-Świącik; Małgorzata Łęgowik-Małolepsza

    2015-01-01

    The paper aims at analyzing the information capacity of selected instruments for the assessment of decision-making models in the analyzed companies. The idea and concepts of decision-making models are presented, and selected instruments for the assessment of decision-making models in enterprises are discussed. In the final part of the paper, decision-making models in the investigated cement industry companies are quantified. To mee...

  4. Varying Coefficient Panel Data Model in the Presence of Endogenous Selectivity and Fixed Effects

    OpenAIRE

    Malikov, Emir; Kumbhakar, Subal C.; Sun, Yiguo

    2013-01-01

    This paper considers a flexible panel data sample selection model in which (i) the outcome equation is permitted to take a semiparametric, varying coefficient form to capture potential parameter heterogeneity in the relationship of interest, (ii) both the outcome and (parametric) selection equations contain unobserved fixed effects and (iii) selection is generalized to a polychotomous case. We propose a two-stage estimator. Given consistent parameter estimates from the selection equation obta...

  5. Justification identification criterion cellular structures state functions

    Directory of Open Access Journals (Sweden)

    Владимир Георгиевич Куликов

    2017-02-01

    Full Text Available The paper considers the possibility of representing the states of cellular structures by state functions in the form of regression equations. This allows a replica of an information storage medium describing the system status at a given time to be created. It is proposed to formalize the process of system transition from the initial to the final state as a coherent set of regression equations. The regression equations as state functions allow the verbal representation of the system states to be replaced by a model. This, in turn, allows the development of parametric methods for managing structure formation.

  6. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    Science.gov (United States)

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 - π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP
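
    The linear-algebra backbone of the approach is that ridge-type (SNP-BLUP) marker effects can be written through the SVD of the genotype matrix, after which re-solving for a new shrinkage parameter is cheap. The BayesC-specific posterior-probability weighting described above is omitted here, and all data are simulated.

```python
import numpy as np

def snp_blup_svd(X, y, lam):
    """Ridge / SNP-BLUP marker effects via SVD:
    beta = V diag(s / (s^2 + lam)) U' y."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

rng = np.random.default_rng(11)
X = rng.integers(0, 3, size=(500, 2000)).astype(float)   # toy 0/1/2 genotypes
X -= X.mean(axis=0)                                      # centre marker codes
beta = np.zeros(2000)
beta[rng.choice(2000, 20, replace=False)] = rng.normal(size=20)
y = X @ beta + rng.normal(size=500)

beta_hat = snp_blup_svd(X, y - y.mean(), lam=100.0)
print(np.corrcoef(X @ beta_hat, X @ beta)[0, 1])   # accuracy of fitted values
```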

  7. Identification of landscape features influencing gene flow: How useful are habitat selection models?

    Science.gov (United States)

    Gretchen H. Roffler; Michael K. Schwartz; Kristine Pilgrim; Sandra L. Talbot; George K. Sage; Layne G. Adams; Gordon Luikart

    2016-01-01

    Understanding how dispersal patterns are influenced by landscape heterogeneity is critical for modeling species connectivity. Resource selection function (RSF) models are increasingly used in landscape genetics approaches. However, because the ecological factors that drive habitat selection may be different from those influencing dispersal and gene flow, it is...

  8. Vigas de concreto reforçadas com bambu Dendrocalamus giganteus. II: modelagem e critérios de dimensionamento Concrete beams reinforced with Dendrocalamus giganteus bamboo. II: modeling and design criterions

    Directory of Open Access Journals (Sweden)

    Humberto C. Lima Júnior

    2005-12-01

    Full Text Available This paper corresponds to the second part of a publication concerning the structural behaviour of concrete beams reinforced with bamboo. Modelling of concrete beams reinforced with bamboo splints is presented and discussed. In addition, some design suggestions and hypotheses are presented. To perform the study, a Finite Element program was used and some procedures were programmed and linked to it. The program was calibrated with the experimental data of eight concrete beams reinforced with bamboo splints, whose results presented great accuracy. Finally, some design procedures were suggested and a practical example is given.

  9. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  10. Fuzzy Multicriteria Model for Selection of Vibration Technology

    Directory of Open Access Journals (Sweden)

    María Carmen Carnero

    2016-01-01

    Full Text Available The benefits of applying a vibration analysis program are well known and have been so for decades. A large number of contributions have been produced discussing new diagnostic, signal treatment, technical parameter analysis, and prognosis techniques. However, to obtain the expected benefits from a vibration analysis program, it is necessary to choose the instrumentation which guarantees the best results. Despite its importance, there are no models in the literature to assist in taking this decision. This research describes an objective model using the Fuzzy Analytic Hierarchy Process (FAHP) to choose the most suitable technology among portable vibration analysers. The aim is to create an easy-to-use model for processing, manufacturing, services, and research organizations, to guarantee adequate decision-making in the choice of vibration analysis technology. The model described recognises that judgements are often based on ambiguous, imprecise, or inadequate information that cannot provide precise values. The model incorporates judgements from several decision-makers who are experts in the field of vibration analysis, maintenance, and electronic devices. The model has been applied to a Health Care Organization.

  11. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    Science.gov (United States)

    Harwati

    2017-06-01

    Supplier selection is a decision with many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Having too many criteria in a supplier selection model sometimes makes it difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in the company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four easy and simple criteria can be used to select suppliers: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real case simulation shows that the simple model provides the same decision as a more complex model.
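
    For the weighting step, a crisp AHP computation looks as follows: the principal eigenvector of a hypothetical pairwise comparison matrix over price, shipment, quality and service gives the weights, and the consistency index checks the judgements.

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons: price, shipment, quality, service
A = np.array([[1.0,  2.0, 2.0, 4.0],
              [0.5,  1.0, 1.5, 3.0],
              [0.5,  2/3, 1.0, 2.0],
              [0.25, 1/3, 0.5, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = vecs[:, k].real
w /= w.sum()                          # criteria weights (sum to 1)
CI = (vals.real[k] - 4) / (4 - 1)     # consistency index for n = 4
print(np.round(w, 3), round(CI, 4))
```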

  12. APPLICATION OF THE MODEL CERNE FOR THE ESTABLISHMENT OF CRITERIA INCUBATION SELECTION IN TECHNOLOGY BASED BUSINESSES : A STUDY IN INCUBATORS OF TECHNOLOGICAL BASE OF THE COUNTRY

    Directory of Open Access Journals (Sweden)

    Clobert Jefferson Passoni

    2017-03-01

    Full Text Available Business incubators are a great source of encouragement for innovative projects, enabling the development of new technologies, providing infrastructure, advice and support, which are key elements for the success of new business. The technology-based firm incubators (TBFs, which are 154 in Brazil. Each one of them has its own mechanism for the selection of the incubation companies. Because of the different forms of management of incubators, the business model CERNE - Reference Center for Support for New Projects - was created by Anprotec and Sebrae, in order to standardize procedures and promote the increase of chances for success in the incubations. The objective of this study is to propose selection criteria for the incubation, considering CERNE’s five dimensions and aiming to help on the decision-making in the assessment of candidate companies in a TBF incubator. The research was conducted from the public notices of 20 TBF incubators, where 38 selection criteria were identified and classified. Managers of TBF incubators validated 26 criteria by its importance via online questionnaires. As a result, favorable ratings were obtained to 25 of them. Only one criterion differed from the others, with a unfavorable rating.

  13. Selecting a climate model subset to optimise key ensemble properties

    Directory of Open Access Journals (Sweden)

    N. Herger

    2018-02-01

    Full Text Available End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  14. Selecting a climate model subset to optimise key ensemble properties

    Science.gov (United States)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
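
    The subset-selection idea in the two records above can be made concrete with a toy cost function: exhaustively search subsets of a fixed size for the one whose subset mean minimizes RMSE against observations while keeping the subset spread close to that of the full ensemble. A minimal sketch on synthetic data with an invented cost weighting; the authors' actual tool uses real model output and more sophisticated optimisation.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(0)
      n_models, n_grid = 10, 500
      obs = rng.normal(size=n_grid)                             # toy "observations"
      models = obs + rng.normal(0.5, 1.0, (n_models, n_grid))   # biased toy ensemble

      def cost(subset, alpha=0.5):
          sub = models[list(subset)]
          rmse = np.sqrt(np.mean((sub.mean(axis=0) - obs) ** 2))  # subset-mean bias
          spread_gap = abs(sub.std() - models.std())              # keep ensemble spread
          return rmse + alpha * spread_gap

      k = 4  # desired subset size
      best = min(combinations(range(n_models), k), key=cost)
      print("Selected subset:", best, "cost:", round(cost(best), 3))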

  15. Selected developments and applications of Leontief models in industrial ecology

    International Nuclear Information System (INIS)

    Stroemman, Anders Hammer

    2005-01-01

    Thesis Outline: This thesis investigates issues of environmental repercussions on processes of three spatial scales; a single process plant, a regional value chain and the global economy. The first paper investigates environmental repercussions caused by a single process plant using an open Leontief model with combined physical and monetary units in what is commonly referred to as a hybrid life cycle model. Physical capital requirements are treated as any other good. Resources and environmental stressors, thousands in total, are accounted for and assessed by aggregation using standard life cycle impact assessment methods. The second paper presents a methodology for establishing and combining input-output matrices and life-cycle inventories in a hybrid life cycle inventory. Information contained within different requirements matrices is combined, and the issues of double counting that arise are addressed and methods for eliminating these are developed and presented. The third paper is an extension of the first paper. Here the system analyzed is increased from a single plant and component in the production network to a series of nodes, constituting a value chain. The hybrid framework proposed in paper two is applied to analyze the use of natural gas, methanol and hydrogen as transportation fuels. The fourth paper presents the development of a World Trade Model with Bilateral Trade, an extension of the World Trade Model (Duchin, 2005). The model is based on comparative advantage and is formulated as a linear program. It endogenously determines the regional output of sectors and bilateral trade flows between regions. The model may be considered a Leontief substitution model where substitution of production is allowed between regions. The primal objective of the model requires the minimization of global factor costs. The fifth paper demonstrates how the World Trade Model with Bilateral Trade can be applied to address questions relevant for industrial ecology. The model is
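
    The open Leontief model underlying the first paper has a compact closed form: total output x satisfies x = Ax + d, so x = (I − A)⁻¹d for a technical-coefficients matrix A and final-demand vector d. A minimal numeric sketch with invented coefficients (not the thesis's hybrid data):

      import numpy as np

      # Hypothetical 3-sector technical coefficients: A[i, j] is the input from
      # sector i required per unit of output of sector j.
      A = np.array([
          [0.10, 0.20, 0.05],
          [0.15, 0.05, 0.10],
          [0.05, 0.10, 0.15],
      ])
      d = np.array([100.0, 50.0, 80.0])  # final demand per sector

      x = np.linalg.solve(np.eye(3) - A, d)  # total output: x = (I - A)^{-1} d
      print(np.round(x, 1))
      # Environmental stressors would then follow as S @ x for a
      # stressor-intensity matrix S, as in hybrid life cycle models.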

  16. Selection of References in Wind Turbine Model Predictive Control Design

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Hovgaard, Tobias

    2015-01-01

    …a model predictive controller for a wind turbine. One of the important aspects of a tracking control problem is how to set up the optimal reference tracking problem, as it might be relevant to track, e.g., three concurrent references: optimal pitch angle, optimal rotational speed, and optimal power. … The importance of the individual references differs depending, in particular, on the wind speed. In this paper we investigate the performance of a reference tracking model predictive controller with two different setups of the optimal reference signals used. The controllers are evaluated using an industrial high…
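
    As background to the reference-tracking setup discussed above, the sketch below solves a toy unconstrained finite-horizon tracking problem for a scalar linear system by stacking the dynamics and solving a regularized least-squares problem. The system, weights, and reference are invented for illustration; the paper's turbine controller tracks several concurrent references and is far more elaborate.

      import numpy as np

      # Toy system x[k+1] = a*x[k] + b*u[k]; track reference r over horizon N.
      a, b, N, x0 = 0.9, 0.5, 20, 0.0
      r = np.ones(N)        # hypothetical reference (e.g., normalized rotor speed)
      q, rho = 1.0, 0.1     # tracking weight and input-effort weight

      # Stack the dynamics: x = F*x0 + G*u, with x = (x[1], ..., x[N]).
      F = np.array([a ** (k + 1) for k in range(N)])
      G = np.zeros((N, N))
      for k in range(N):
          for j in range(k + 1):
              G[k, j] = a ** (k - j) * b

      # Minimize q*||G u + F x0 - r||^2 + rho*||u||^2 via least squares.
      A_ls = np.vstack([np.sqrt(q) * G, np.sqrt(rho) * np.eye(N)])
      b_ls = np.concatenate([np.sqrt(q) * (r - F * x0), np.zeros(N)])
      u = np.linalg.lstsq(A_ls, b_ls, rcond=None)[0]
      print("First planned input (applied in receding horizon):", round(u[0], 3))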

  17. Direct numerical simulations of non-premixed ethylene-air flames: Local flame extinction criterion

    KAUST Repository

    Lecoustre, Vivien R.

    2014-11-01

    Direct Numerical Simulations (DNS) of ethylene/air diffusion flame extinctions in decaying two-dimensional turbulence were performed. A Damköhler-number-based flame extinction criterion as provided by classical large activation energy asymptotic (AEA) theory is assessed for its validity in predicting flame extinction and compared to one based on Chemical Explosive Mode Analysis (CEMA) of the detailed chemistry. The DNS code solves compressible flow conservation equations using high order finite difference and explicit time integration schemes. The ethylene/air chemistry is simulated with a reduced mechanism that is generated based on the directed relation graph (DRG) based methods along with stiffness removal. The numerical configuration is an ethylene fuel strip embedded in ambient air and exposed to a prescribed decaying turbulent flow field. The emphasis of this study is on the several flame extinction events observed in contrived parametric simulations. A modified viscosity and changing pressure (MVCP) scheme was adopted in order to artificially manipulate the probability of flame extinction. Using MVCP, pressure was changed from the baseline case of 1 atm to 0.1 and 10 atm. In the high pressure MVCP case, the simulated flame is extinction-free, whereas in the low pressure MVCP case, the simulated flame features frequent extinction events and is close to global extinction. Results show that, despite its relative simplicity and provided that the global flame activation temperature is correctly calibrated, the AEA-based flame extinction criterion can accurately predict the simulated flame extinction events. It is also found that the AEA-based criterion provides predictions of flame extinction that are consistent with those provided by a CEMA-based criterion. This study supports the validity of a simple Damköhler-number-based criterion to predict flame extinction in engineering-level CFD models. © 2014 The Combustion Institute.
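
    The Damköhler-number-based criterion assessed in the paper can be stated compactly: with a flow (mixing) timescale τ_flow and a chemical timescale τ_chem, local extinction is predicted when Da = τ_flow/τ_chem drops below a critical value. A schematic sketch with invented timescales and threshold; the paper instead calibrates a global flame activation temperature within the AEA framework.

      # Schematic Damköhler-number extinction check (illustrative values only).
      def damkohler(tau_flow: float, tau_chem: float) -> float:
          return tau_flow / tau_chem

      DA_CRITICAL = 1.0  # hypothetical extinction threshold

      for tau_flow, tau_chem in [(2e-3, 5e-4), (2e-4, 5e-4)]:
          da = damkohler(tau_flow, tau_chem)
          state = "burning" if da > DA_CRITICAL else "extinction predicted"
          print(f"Da = {da:.2f}: {state}")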

  18. Inferring phylogenetic networks by the maximum parsimony criterion: a case study.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-01

    Horizontal gene transfer (HGT) may result in genes whose evolutionary histories disagree with each other, as well as with the species tree. In this case, reconciling the species and gene trees results in a network of relationships, known as the "phylogenetic network" of the set of species. A phylogenetic network that incorporates HGT consists of an underlying species tree that captures vertical inheritance and a set of edges which model the "horizontal" transfer of genetic material. In a series of papers, Nakhleh and colleagues have recently formulated a maximum parsimony (MP) criterion for phylogenetic networks, provided an array of computationally efficient algorithms and heuristics for computing it, and demonstrated its plausibility on simulated data. In this article, we study the performance and robustness of this criterion on biological data. Our findings indicate that MP is very promising when its application is extended to the domain of phylogenetic network reconstruction and HGT detection. In all cases we investigated, the MP criterion detected the correct number of HGT events required to map the evolutionary history of a gene data set onto the species phylogeny. Furthermore, our results indicate that the criterion is robust with respect to both incomplete taxon sampling and the use of different site substitution matrices. Finally, our results show that the MP criterion is very promising in detecting HGT in chimeric genes, whose evolutionary histories are a mix of vertical and horizontal evolution. Besides the performance analysis of MP, our findings offer new insights into the evolution of 4 biological data sets and new possible explanations of HGT scenarios in their evolutionary history.

  19. Selection of antioxidants against ovarian oxidative stress in mouse model.

    Science.gov (United States)

    Li, Bojiang; Weng, Qiannan; Liu, Zequn; Shen, Ming; Zhang, Jiaqing; Wu, Wangjun; Liu, Honglin

    2017-12-01

    Oxidative stress (OS) plays an important role in the process of ovarian granulosa cell apoptosis and follicular atresia. The aim of this study was to select antioxidants against OS in ovarian tissue. First, we chose six antioxidants and analyzed the reactive oxygen species (ROS) level in ovarian tissue. The results showed that proanthocyanidins, gallic acid, curcumin, and carotene decreased the ROS level compared with the control group. We further demonstrated that both proanthocyanidins and gallic acid increase antioxidant enzyme activity. Moreover, no change in the ROS level was observed in the proanthocyanidin and gallic acid groups in brain, liver, spleen, and kidney tissues. Finally, we found that proanthocyanidins and gallic acid inhibit pro-apoptotic gene expression in granulosa cells. Taken together, proanthocyanidins and gallic acid may be the most acceptable and optimal antioxidants specifically against ovarian OS and may also be involved in the inhibition of granulosa cell apoptosis in the mouse ovary. © 2017 Wiley Periodicals, Inc.

  20. An Optimization Model For Strategy Decision Support to Select Kind of CPO’s Ship

    Science.gov (United States)

    Suaibah Nst, Siti; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    The selection of marine transport for the distribution of crude palm oil (CPO) is one strategy that can be considered for reducing transport costs. The cost of transporting CPO from a production area to a factory located at the port of destination may affect CPO prices and the level of demand. In order to maintain the availability of CPO, a strategy is required that minimizes the cost of transport. In this study, the strategy is to select the kind of chartered ship: a barge or a chemical tanker. This study aims to determine an optimization model for strategy decision support in selecting the kind of CPO ship by minimizing transport costs. Because the ship selection problem involves random elements, a two-stage stochastic programming model was used to select the kind of ship. The model can help decision makers select either a barge or a chemical tanker to distribute CPO.
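
    A minimal sketch of the two-stage idea on invented numbers: the first-stage decision is the ship type, the second stage incurs scenario-dependent recourse costs (here, penalties for unmet demand), and the type minimizing expected total cost is selected. All figures are hypothetical, and the real model is a proper two-stage stochastic program rather than an enumeration.

      # Toy two-stage stochastic choice between ship types.
      scenarios = [   # (probability, CPO demand in tonnes)
          (0.3, 800.0),
          (0.5, 1000.0),
          (0.2, 1300.0),
      ]

      ships = {   # type: (charter cost, capacity in tonnes, penalty per unmet tonne)
          "barge": (50_000.0, 1000.0, 120.0),
          "chemical_tanker": (80_000.0, 1400.0, 120.0),
      }

      def expected_cost(charter, capacity, penalty):
          # Second stage: expected penalty for demand exceeding capacity.
          recourse = sum(p * penalty * max(0.0, demand - capacity)
                         for p, demand in scenarios)
          return charter + recourse

      best = min(ships, key=lambda s: expected_cost(*ships[s]))
      for name, params in ships.items():
          print(name, round(expected_cost(*params), 1))
      print("Selected:", best)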

  1. Broken selection rule in the quantum Rabi model

    NARCIS (Netherlands)

    Forn Diaz, P.; Gonzalez-Romero, E; Harmans, C.J.P.M.; Solano, E; Mooij, J.E.

    2016-01-01

    Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling
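
    For context on the selection rule in question: the standard quantum Rabi Hamiltonian commutes with a parity operator, so transitions between states of equal parity are forbidden; the record above concerns observing such a forbidden transition when the symmetry is broken. A numerical sketch (truncated Fock space, illustrative parameters) verifying the parity symmetry of the standard model:

      import numpy as np

      N = 20                                    # Fock-space truncation (illustrative)
      a = np.diag(np.sqrt(np.arange(1, N)), 1)  # photon annihilation operator
      sz = np.diag([1.0, -1.0])
      sx = np.array([[0.0, 1.0], [1.0, 0.0]])
      I2, IN = np.eye(2), np.eye(N)

      w, delta, g = 1.0, 0.8, 0.6               # illustrative parameters
      H = (w * np.kron(I2, a.T @ a)
           + 0.5 * delta * np.kron(sz, IN)
           + g * np.kron(sx, a + a.T))          # standard quantum Rabi Hamiltonian

      Pi = np.kron(sz, np.diag((-1.0) ** np.arange(N)))  # parity operator
      print(np.allclose(H @ Pi, Pi @ H))  # True: parity conserved -> selection rule
      # A symmetry-breaking term such as eps * np.kron(sx, IN) no longer commutes
      # with Pi, which is the kind of mechanism that lifts the selection rule.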

  2. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...

  3. The Applicability of Selected Evaluation Models to Evolving Investigative Designs.

    Science.gov (United States)

    Smith, Nick L.; Hauer, Diane M.

    1990-01-01

    Ten evaluation models are examined in terms of their applicability to investigative, emergent design programs: Stake's portrayal, Wolf's adversary, Patton's utilization, Guba's investigative journalism, Scriven's goal-free, Scriven's modus operandi, Eisner's connoisseurial, Stufflebeam's CIPP, Tyler's objective based, and Levin's cost…

  4. Modeling Selected Climatic Variables in Ibadan, Oyo State, Nigeria ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-09-01

    The aim of this study was to fit the modified generalized Burr density function to total rainfall and temperature data obtained from the meteorological unit in the Department of Environmental Modelling and Management of the Forestry Research Institute of Nigeria (FRIN) in Ibadan, Oyo State, Nigeria.

  5. Model Selection for Nondestructive Quantification of Fruit Growth in Pepper

    NARCIS (Netherlands)

    Wubs, A.M.; Ma, Y.T.; Heuvelink, E.; Hemerik, L.; Marcelis, L.F.M.

    2012-01-01

    Quantifying fruit growth can be desirable for several purposes (e.g., prediction of fruit yield and size, or for the use in crop simulation models). The goal of this article was to determine the best sigmoid function to describe fruit growth of pepper (Capsicum annuum) from nondestructive fruit
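
    A minimal sketch of fitting one candidate sigmoid (a logistic curve) to growth measurements with scipy; the data below are synthetic, whereas the article compares several sigmoid forms against repeated nondestructive measurements of pepper fruit.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, K, r, t0):
          """Logistic growth: K / (1 + exp(-r * (t - t0)))."""
          return K / (1.0 + np.exp(-r * (t - t0)))

      # Synthetic "fruit size" data (e.g., days after anthesis vs. size).
      t = np.linspace(0, 60, 25)
      rng = np.random.default_rng(1)
      y = logistic(t, K=200.0, r=0.15, t0=30.0) + rng.normal(0, 5, t.size)

      params, _ = curve_fit(logistic, t, y, p0=[150.0, 0.1, 25.0])
      print("K, r, t0 =", np.round(params, 2))
      # Competing forms (e.g., Gompertz) can be fit the same way and compared
      # with an information criterion such as AIC.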

  6. The Optimal Portfolio Selection Model under g-Expectation

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    …complicated and sophisticated, the optimal solution turns out to be surprisingly simple: the payoff of a portfolio of two binary claims. I also give the economic meaning of my model and a comparison with the one in the work of Jin and Zhou (2008).

  7. Selecting Tools to Model Integer and Binomial Multiplication

    Science.gov (United States)

    Pratt, Sarah Smitherman; Eddy, Colleen M.

    2017-01-01

    Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…
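
    As a quick illustration of the area-model idea the article discusses, the partial products of a binomial multiplication can be tabulated directly. The toy example below (my own, not from the article) expands (x + 2)(x + 3) into its four area-model cells and collects like terms:

      from itertools import product

      # Terms of each binomial as (coefficient, power-of-x) pairs.
      left = [(1, 1), (2, 0)]    # x + 2
      right = [(1, 1), (3, 0)]   # x + 3

      cells = [(cl * cr, pl + pr) for (cl, pl), (cr, pr) in product(left, right)]
      print("partial products:", [f"{c}x^{p}" for c, p in cells])

      # Collect like terms: (x + 2)(x + 3) = x^2 + 5x + 6.
      poly = {}
      for c, p in cells:
          poly[p] = poly.get(p, 0) + c
      print("coefficients by power:", dict(sorted(poly.items(), reverse=True)))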

  8. An individual-level selection model for the apparent altruism ...

    Indian Academy of Sciences (India)

    Amotz Zahavi

    2018-02-16

    …remain solitary when the rest have completed aggregation. Their response to starvation (apparently) is not to become part of an aggregate, but instead to take a chance on a fresh source of food appearing quickly. Modelling shows that, given the right environmental conditions, this can work (Tarnita et al.…

  9. Process chain modeling and selection in an additive manufacturing context

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael

    2016-01-01

    This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro-scale features and no internal geometry. It is shown that additive manufacturing… …evolving fields like additive manufacturing…

  10. Selected Constitutive Models for Simulating the Hygromechanical Response of Wood

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund

    The present thesis is a compilation of papers. Three of the papers, I, VI and VII, are published in this thesis only, i.e., an introductory paper and two so-called discussion papers. The papers II, III and V have been published in the international journal Holzforschung. Paper IV is a conference paper presented at the 19th Nordic Seminar on Computational Mechanics, Lund, Sweden, 2006. Paper I: The theories for the phenomena leading to the hygromechanical response of wood relate to the orthotropic cellular structure and the hydrophilic and hydrophobic polymers constituting the cells… …of wood as a state in the sorption hysteresis space, which is independent of the condition of water vapor in the lumens. Two approaches are developed and tested by implementation into commercial software. Paper VI: The temperature dependencies of the hysteretic multi-Fickian moisture transport model… …are discussed. The constitutive moisture transport models are coupled with a heat transport model, yielding terms that describe the so-called Dufour and Soret effects, however with multiple phases and hysteresis included. Paper VII: In this paper the modeling of transverse couplings in creep of wood…

  11. A BAYESIAN NONPARAMETRIC MIXTURE MODEL FOR SELECTING GENES AND GENE SUBNETWORKS.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Yu, Tianwei

    2014-06-01

    It is very challenging to select informative features from tens of thousands of measured features in high-throughput data analysis. Recently, several parametric/regression models have been developed utilizing gene network information to select genes or pathways strongly associated with a clinical/biological outcome. Alternatively, in this paper, we propose a nonparametric Bayesian model for gene selection incorporating network information. In addition to identifying genes that have a strong association with a clinical outcome, our model can select genes with particular expressional behavior, in which case the regression models are not directly applicable. We show that our proposed model is equivalent to an infinite mixture model, for which we develop a posterior computation algorithm based on Markov chain Monte Carlo (MCMC) methods. We also propose two fast computing algorithms that approximate the posterior simulation with good accuracy but relatively low computational cost. We illustrate our methods on simulation studies and the analysis of Spellman yeast cell cycle microarray data.
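
    For intuition about the infinite-mixture equivalence mentioned above, the sketch below draws mixture weights from a truncated stick-breaking construction of a Dirichlet process, the standard device behind such nonparametric mixtures. The truncation level and concentration parameter are arbitrary choices here; this is not the authors' MCMC algorithm.

      import numpy as np

      rng = np.random.default_rng(42)

      def stick_breaking(alpha: float, truncation: int) -> np.ndarray:
          """Truncated stick-breaking weights for a Dirichlet process mixture."""
          betas = rng.beta(1.0, alpha, size=truncation)
          remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
          return betas * remaining

      weights = stick_breaking(alpha=2.0, truncation=25)
      print("first 5 weights:", np.round(weights[:5], 3),
            "| total mass:", round(weights.sum(), 3))
      # In a mixture model, each weight pairs with component parameters drawn
      # from a base measure; genes are then assigned to components.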

  12. Evaluation and comparison of alternative fleet-level selective maintenance models

    International Nuclear Information System (INIS)

    Schneider, Kellie; Richard Cassady, C.

    2015-01-01

    Fleet-level selective maintenance refers to the process of identifying the subset of maintenance actions to perform on a fleet of repairable systems when the maintenance resources allocated to the fleet are insufficient for performing all desirable maintenance actions. The original fleet-level selective maintenance model is designed to maximize the probability that all missions in a future set are completed successfully. We extend this model in several ways. First, we consider a cost-based optimization model and show that a special case of this model maximizes the expected value of the number of successful missions in the future set. We also consider the situation in which one or more of the future missions may be canceled. These models and the original fleet-level selective maintenance optimization models are nonlinear. Therefore, we also consider an alternative model in which the objective function can be linearized. We show that the alternative model is a good approximation to the other models. - Highlights: • Investigate nonlinear fleet-level selective maintenance optimization models. • A cost based model is used to maximize the expected number of successful missions. • Another model is allowed to cancel missions if reliability is sufficiently low. • An alternative model has an objective function that can be linearized. • We show that the alternative model is a good approximation to the other models
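
    A minimal sketch of the flavor of problem described: choose a subset of maintenance actions under a resource budget to maximize the expected number of successful missions, brute-forced on invented data. Treating mission gains as additive is a simplification; the paper's models are richer and nonlinear.

      from itertools import chain, combinations

      # Hypothetical actions: (resource cost, gain in expected successful missions)
      actions = [(3, 0.8), (2, 0.5), (4, 1.1), (1, 0.2), (5, 1.3)]
      budget = 7

      def powerset(indices):
          return chain.from_iterable(
              combinations(indices, r) for r in range(len(indices) + 1))

      best = max(
          (s for s in powerset(range(len(actions)))
           if sum(actions[i][0] for i in s) <= budget),
          key=lambda s: sum(actions[i][1] for i in s),
      )
      print("chosen actions:", best,
            "| expected gain:", sum(actions[i][1] for i in best))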

  13. Aggressive Attitudes in Middle Schools: A Factor Structure and Criterion-Related Validity Study.

    Science.gov (United States)

    Huang, Francis L; Cornell, Dewey G; Konold, Timothy R

    2015-08-01

    Student attitudes toward aggression have been linked to individual aggressive behavior, but the relationship between school-wide normative beliefs about aggression and aggressive behavior poses some important measurement challenges that have not been adequately examined. The current study investigated the factor structure, measurement invariance, and criterion-related validity of a six-item Aggressive Attitudes scale using a large sample of seventh- and eighth-grade students (n = 39,364) from 423 schools. Analytic procedures accounted for the frequently ignored modeling problems of clustered and ordinal data to provide more reliable and accurate model estimates and standard errors. The resulting second-order factor structure of the Aggressive Attitudes scale demonstrated measurement invariance across gender, grade, and race/ethnicity groups. Criterion-related validity was supported with eight student- and school-level indices of aggressive behavior. © The Author(s) 2014.

  14. Sufficient criterion for guaranteeing that a two-qubit state is unsteerable

    Science.gov (United States)

    Bowles, Joseph; Hirsch, Flavien; Quintino, Marco Túlio; Brunner, Nicolas

    2016-02-01

    Quantum steering can be detected via the violation of steering inequalities, which provide sufficient conditions for the steerability of quantum states. Here we discuss the converse problem, namely, ensuring that an entangled state is unsteerable and hence Bell local. We present a simple criterion, applicable to any two-qubit state, that guarantees that the state admits a local hidden state model for arbitrary projective measurements. Specifically, we construct local hidden state models for a large class of entangled states, which thus cannot violate any steering or Bell inequality. In turn, this leads to sufficient conditions for a state to be only one-way steerable and provides the simplest possible example of one-way steering. Finally, by exploiting the connection between steering and measurement incompatibility, we give a sufficient criterion for a continuous set of qubit measurements to be jointly measurable.

  15. Model-supported selection of distribution coefficients for performance assessment

    International Nuclear Information System (INIS)

    Ochs, M.; Lothenbach, B.; Shibata, Hirokazu; Yui, Mikazu

    1999-01-01

    A thermodynamic speciation/sorption model is used to illustrate typical problems encountered in the extrapolation of batch-type Kd values to repository conditions. For different bentonite-groundwater systems, the composition of the corresponding equilibrium solutions and the surface speciation of the bentonite is calculated by treating simultaneously solution equilibria of soluble components of the bentonite as well as ion exchange and acid/base reactions at the bentonite surface. Kd values for Cs, Ra, and Ni are calculated by implementing the appropriate ion exchange and surface complexation equilibria in the bentonite model. Based on this approach, hypothetical batch experiments are contrasted with expected conditions in compacted backfill. For each of these scenarios, the variation of Kd values as a function of groundwater composition is illustrated for Cs, Ra, and Ni. The applicability of measured, batch-type Kd values to repository conditions is discussed. (author)

  16. Selected topics in phenomenology of the standard model

    International Nuclear Information System (INIS)

    Roberts, R.G.

    1992-01-01

    We begin with the structure of the proton, which is revealed through deep inelastic scattering of electrons/muons or neutrinos off nucleons. The quark parton model is described, which leads on to the interaction of quarks and gluons - quantum chromodynamics (QCD). From this, parton distributions can be extracted and then fed into the quark parton description of hadron-hadron collisions. In this way we analyse large-p_T jet production, prompt photon production and dilepton, W and Z production (Drell-Yan mechanism), ending with a study of heavy quark production. W and Z physics is then discussed. The various tree-level definitions of sin²θ_W are listed and then the radiative corrections to these are briefly considered. The data from the Large Electron-Positron storage ring (LEP) then allow limits to be set on the mass of the top quark and the Higgs via these corrections. Standard model predictions for the various Z widths are compared with the latest LEP data. Electroweak effects in e⁺e⁻ scattering are discussed together with the extraction of the various vector and axial-vector couplings involved. We return to QCD when the production of jets in e⁺e⁻ is studied. Both the LEP and lower-energy data are able to give quantitative estimates of the strong coupling α_s, and the consistency of the various estimates with those from other QCD processes is discussed. The value of α_s(M_Z) actually plays an important role in setting the scale of the possible supersymmetry (SUSY) physics beyond the standard model. Finally, the subject of quark mixing is addressed: how the values of the various CKM matrix elements are derived is discussed, together with a very brief look at charge-parity (CP) violation and how the standard model is standing up to the latest measurements of ε'/ε. (Author)

  17. Barbie selected for QM1 as role models change

    OpenAIRE

    Eitelberg, Mark J.

    1991-01-01

    An article discussing the changing role models and attitudes of young women as reflected in the introduction of Army Barbie, Air Force Barbie, and Navy Barbie dolls for children. The author's commentary discusses the differences in each service's approach to the dolls, and their importance as part of the culture, as much an American institution as a toy. The author notes the manufacturer's willingness to accept the attitude that the military is an acceptable career choice for young women...

  18. A New Approach to Model Verification, Falsification and Selection

    Directory of Open Access Journals (Sweden)

    Andrew J. Buck

    2015-06-01

    Full Text Available This paper shows that a qualitative analysis, i.e., an assessment of the consistency of a hypothesized sign pattern for structural arrays with the sign pattern of the estimated reduced form, can always provide decisive insight into a model’s validity both in general and compared to other models. Qualitative analysis can show that it is impossible for some models to have generated the data used to estimate the reduced form, even though standard specification tests might show the model to be adequate. A partially specified structural hypothesis can be falsified by estimating as few as one reduced form equation. Zero restrictions in the structure can themselves be falsified. It is further shown how the information content of the hypothesized structural sign patterns can be measured using a commonly applied concept of statistical entropy. The lower the hypothesized structural sign pattern’s entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. As an hypothesized structural sign pattern has a lower entropy, it is more subject to type 1 error and less subject to type 2 error. Three cases illustrate the approach taken here.

  19. Performance Optimization of Generalized Irreversible Refrigerator Based on a New Ecological Criterion

    OpenAIRE

    Xu, Jie; Pang, Liping; Wang, Jun

    2013-01-01

    On the basis of the exergy analysis, a performance optimization is carried out for a generalized irreversible refrigerator model, which takes into account the heat resistance, heat leakage and internal irreversibility losses. A new ecological criterion, named coefficient of performance of exergy (COPE), defined as the dimensionless ratio of the exergy output rate to the exergy loss rate, is proposed as an objective function in this paper. The optimal performance factors which maximize the eco...

  20. PTSD and Sexual Orientation: An Examination of Criterion A1 and Non-Criterion A1 Events.

    Science.gov (United States)

    Alessi, Edward J; Meyer, Ilan H; Martin, James I

    2013-03-01

    This large-scale cross-sectional study compared posttraumatic stress disorder (PTSD) prevalence among White, Black, and Latino lesbian, gay, and bisexual individuals (LGBs; n = 382) with that among heterosexual individuals (n = 126). Building on previous research, we relaxed the criteria of the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV; American Psychiatric Association, 1994), allowing non-Criterion A1 events such as ending a relationship, unemployment, homelessness, and separation from parents to qualify, and we assessed differences in PTSD prevalence between the standard DSM-IV criteria and the relaxed criteria. Findings revealed that participants reporting a non-Criterion A1 event were more likely than those reporting a Criterion A1 event to have symptoms diagnosable as PTSD. There was no significant difference in either DSM-IV or relaxed Criterion A1 PTSD prevalence between lesbian and gay individuals and heterosexual individuals, or between bisexual and heterosexual individuals. Compared with White LGBs, Black and Latino LGBs had a higher prevalence of PTSD under the relaxed Criterion A1 definition, but this was statistically significant only for Latinos.