WorldWideScience

Sample records for model selection criterion

  1. Financial performance as a decision criterion of credit scoring models selection [doi: 10.21529/RECADM.2017004]

    Directory of Open Access Journals (Sweden)

    Rodrigo Alves Silva

    2017-09-01

    This paper aims to show the importance of using financial metrics when selecting credit scoring models. To this end, we considered an automatic approval system approach and carried out a performance analysis of the financial metrics on the theoretical portfolios generated by seven credit scoring models based on the main statistical learning techniques. The models were estimated on the German Credit dataset and the results were analyzed with four metrics: total accuracy, error cost, risk-adjusted return on capital and the Sharpe index. The results show that total accuracy, widely used as a criterion for selecting credit scoring models, is unable to select the most profitable model for the company, indicating the need to incorporate financial metrics into the credit scoring model selection process. Keywords: credit risk; model selection; statistical learning.
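
    As a toy illustration of why accuracy and profit-oriented metrics can disagree (a minimal sketch on invented data, not the paper's German Credit setup; the cost weights and score distributions are assumptions):

    ```python
    import numpy as np

    # Two hypothetical scoring models ranked by total accuracy and by expected
    # error cost under asymmetric misclassification costs.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 1000)                 # 1 = default, 0 = good payer
    p_model_a = np.clip(y * 0.6 + rng.normal(0.2, 0.25, 1000), 0, 1)
    p_model_b = np.clip(y * 0.4 + rng.normal(0.3, 0.15, 1000), 0, 1)

    def metrics(p, y, cut=0.5, c_fn=5.0, c_fp=1.0):
        # c_fn: cost of approving a defaulter; c_fp: cost of rejecting a good payer
        pred = (p >= cut).astype(int)
        acc = (pred == y).mean()
        cost = c_fn * ((pred == 0) & (y == 1)).mean() + c_fp * ((pred == 1) & (y == 0)).mean()
        return acc, cost

    for name, p in [("A", p_model_a), ("B", p_model_b)]:
        acc, cost = metrics(p, y)
        print(f"model {name}: accuracy={acc:.3f}  expected error cost={cost:.3f}")
    ```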

  2. Selecting Items for Criterion-Referenced Tests.

    Science.gov (United States)

    Mellenbergh, Gideon J.; van der Linden, Wim J.

    1982-01-01

    Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)

  3. A criterion for selecting renewable energy processes

    International Nuclear Information System (INIS)

    Searcy, Erin; Flynn, Peter C.

    2010-01-01

    We propose that minimum incremental cost per unit of greenhouse gas (GHG) reduction, in essence the carbon credit required to economically sustain a renewable energy plant, is the most appropriate social criterion for choosing from a myriad of alternatives. The application of this criterion is illustrated for four processing alternatives for straw/corn stover: production of power by direct combustion and by biomass integrated gasification and combined cycle (BIGCC), and production of transportation fuel via lignocellulosic ethanol and Fischer-Tropsch (FT) syndiesel. Ethanol requires a lower carbon credit than FT, and direct combustion a lower credit than BIGCC. For comparing processes that make a different form of end use energy, in this study ethanol vs. electrical power via direct combustion, the lowest carbon credit depends on the relative values of the two energy forms. When power is valued at $70 MWh⁻¹, ethanol production has a lower required carbon credit at oil prices greater than $600 t⁻¹ ($80 bbl⁻¹). (author)

  4. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk.

    Science.gov (United States)

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2016-07-23

    This study focuses on multiple linear regression models relating six climate indices (temperature-humidity index THI, environmental stress index ESI, equivalent temperature index ETI, heat load index HLI, modified HLI (HLInew), and respiratory rate predictor RRP) to three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage selection operator (LASSO) and the Akaike information criterion (AIC) are applied to select the best model for the milk predictands with the smallest number of climate predictors. Uncertainty is estimated by bootstrapping through resampling, and cross-validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months April to September, 2002 to 2010, are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R² values of 0.50 and 0.49, respectively. In summer, milk yield with the independent variables THI, ETI, and ESI shows the strongest relation (p-value < 0.001), with R² = 0.69. For fat and protein the results are only marginal. This method is suggested for impact studies of climate variability/change in the agriculture and food science fields when short time series or data with large uncertainty are available.
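
    The LASSO-plus-AIC selection step the abstract describes can be sketched with scikit-learn's LassoLarsIC, which tunes the LASSO penalty by AIC; the synthetic predictors below merely stand in for the six climate indices:

    ```python
    import numpy as np
    from sklearn.linear_model import LassoLarsIC

    # Synthetic stand-ins for THI, ESI, ETI, HLI, HLI_new, RRP as predictors
    # of milk yield; only the first two columns truly matter here.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 6))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=120)

    model = LassoLarsIC(criterion="aic").fit(X, y)  # AIC chooses the penalty
    selected = np.flatnonzero(model.coef_)          # surviving predictors
    print("selected predictor columns:", selected, " alpha:", model.alpha_)
    ```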

  5. Integral criterion for selecting nonlinear crystals for frequency conversion

    International Nuclear Information System (INIS)

    Grechin, Sergei G

    2009-01-01

    An integral criterion that takes into account all the parameters determining the conversion efficiency is proposed for selecting nonlinear crystals for frequency conversion. The angular phase-matching width is shown to be related to the beam walk-off angle. (nonlinear optical phenomena)

  6. Neutron shielding calculations in a proton therapy facility based on Monte Carlo simulations and analytical models: Criterion for selecting the method of choice

    International Nuclear Information System (INIS)

    Titt, U.; Newhauser, W. D.

    2005-01-01

    Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results for which solutions have not converged, e.g. owing to too few particle histories, can be excluded and deterministic models are used at those locations. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of the Monte Carlo simulation to be taken into account and reject all solutions with uncertainties larger than the design safety margins. In this study, an optimum rejection criterion of 10% was found. The mean ratio was 26; 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)
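
    The accept-or-reject logic described above reduces to a simple per-location test; a minimal sketch, with hypothetical dose arrays, assuming a 10% relative-uncertainty threshold as in the study:

    ```python
    import numpy as np

    # Keep a Monte Carlo dose estimate only where its relative statistical
    # uncertainty is below the rejection threshold; otherwise fall back on the
    # analytical model's value. All numbers here are invented placeholders.
    mc_dose = np.array([1.2, 0.8, 2.5, 0.1])         # MC neutron dose per location
    mc_rel_err = np.array([0.03, 0.25, 0.08, 0.60])  # relative 1-sigma errors
    analytical_dose = np.array([1.4, 1.0, 2.2, 0.3]) # deterministic estimates

    threshold = 0.10
    dose = np.where(mc_rel_err <= threshold, mc_dose, analytical_dose)
    print(dose)  # converged MC values kept, unconverged ones replaced
    ```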

  7. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang; Wang, Suojin; Huang, Jianhua Z.

    2013-01-01

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  8. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the estimation's computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The criterion is based on determining the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement, and it is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
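
    One common covariance-based selection rule (a sketch of the general idea, not necessarily the authors' exact formulation) is to pick the candidate measurement whose Kalman update most reduces the trace of the state covariance; the observation models and noise levels below are illustrative:

    ```python
    import numpy as np

    P = np.diag([4.0, 1.0, 2.0])                 # prior state covariance
    candidates = [
        (np.array([[1.0, 0.0, 0.0]]), 0.5),      # (observation row H, noise R)
        (np.array([[0.0, 1.0, 0.0]]), 0.1),
        (np.array([[0.0, 0.0, 1.0]]), 2.0),
    ]

    def updated_trace(P, H, R):
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        return np.trace((np.eye(len(P)) - K @ H) @ P)  # posterior trace

    best = min(range(len(candidates)), key=lambda i: updated_trace(P, *candidates[i]))
    print("most informative measurement:", best)
    ```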

  9. Criterion of Semi-Markov Dependent Risk Model

    Institute of Scientific and Technical Information of China (English)

    Xiao Yun MO; Xiang Qun YANG

    2014-01-01

    A rigorous definition of the semi-Markov dependent risk model is given. This model is a generalization of the Markov dependent risk model. A criterion and necessary conditions for the semi-Markov dependent risk model are obtained. The results clarify the relations between elements of the semi-Markov dependent risk model and remain applicable to the Markov dependent risk model.

  10. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated by experimental design, we use the trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results over various real-life data sets have demonstrated the superiority of the proposed algorithms.
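
    A rough sketch of the experimental-design idea on synthetic data: greedily add the feature that keeps the trace of the parameter covariance (A-optimality) smallest. For brevity the Laplacian regularizer of the paper is replaced here by a plain ridge term, so this is an assumption-laden simplification rather than the authors' algorithm:

    ```python
    import numpy as np

    # Greedy A-optimal feature selection: at each step add the column j that
    # minimizes trace((X_S^T X_S + lam*I)^{-1}) for the chosen set S.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 10))
    lam, k, chosen = 0.1, 3, []

    for _ in range(k):
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        def score(j):
            S = chosen + [j]
            Xs = X[:, S]
            return np.trace(np.linalg.inv(Xs.T @ Xs + lam * np.eye(len(S))))
        chosen.append(min(remaining, key=score))

    print("selected features:", chosen)
    ```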

  11. Empirical Comparison of Criterion Referenced Measurement Models

    Science.gov (United States)

    1976-10-01

    …[instr]ument consisting of a large number of items. The models would then be used to estimate the mastery scores using a smaller and more realistic number of items. This approach is empirical and more directly oriented to practical applications where testing time and the …

  12. An extended geometric criterion for chaos in the Dicke model

    International Nuclear Information System (INIS)

    Li Jiangdan; Zhang Suying

    2010-01-01

    We extend HBLSL's (Horwitz, Ben Zion, Lewkowicz, Schiffer and Levitan) new Riemannian geometric criterion for chaotic motion to Hamiltonian systems with weak coupling of potential and momenta by defining a 'mean unstable ratio'. We discuss the Dicke model of an unstable Hamiltonian system in detail and show that our results are in good agreement with the computation of Lyapunov characteristic exponents.

  13. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs, which ensures a high mean performance of the progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept, as a measure of the gain that can be obtained from a specific cross, accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion compared to selection based on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, like selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviation compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
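
    The ranking rule at the heart of the abstract can be condensed to the classical usefulness criterion UC = μ + i·σ, with μ the predicted progeny mean, σ² the predicted progeny variance and i the selection intensity (about 2.665 when the top 1% are selected). A toy sketch with invented numbers:

    ```python
    import numpy as np

    # Rank crosses by usefulness rather than by mean alone; a cross with a
    # lower mean but larger progeny variance can win. Values are illustrative.
    i_sel = 2.665                      # selection intensity for the top 1%
    crosses = {                        # cross -> (progeny mean, progeny std dev)
        "A x B": (10.0, 2.0),
        "C x D": (11.0, 0.5),
        "E x F": (9.5, 3.0),
    }
    uc = {name: mu + i_sel * sd for name, (mu, sd) in crosses.items()}
    print(max(uc, key=uc.get), uc)     # the highest-UC cross need not have the highest mean
    ```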

  14. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    Science.gov (United States)

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
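
    A minimal sketch of a per-observation (local) DIC computed from MCMC output for a simple normal model, using the standard decomposition DIC_i = 2·D̄_i − D_i(θ̄) so that the local values sum to the usual DIC; the data and posterior draws are simulated placeholders, not the Rwanda HIV data:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    y = rng.normal(1.0, 1.0, 50)                     # observations
    mu_draws = rng.normal(y.mean(), 0.15, 2000)      # placeholder posterior draws of the mean

    dev = -2 * norm.logpdf(y[None, :], loc=mu_draws[:, None], scale=1.0)
    dbar_i = dev.mean(axis=0)                        # posterior mean deviance, per observation
    dhat_i = -2 * norm.logpdf(y, loc=mu_draws.mean(), scale=1.0)
    local_dic = 2 * dbar_i - dhat_i                  # local DIC; sums to the usual DIC
    print("DIC =", local_dic.sum(), "; largest local DIC at obs", local_dic.argmax())
    ```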

  15. Interface Pattern Selection Criterion for Cellular Structures in Directional Solidification

    Science.gov (United States)

    Trivedi, R.; Tewari, S. N.; Kurtze, D.

    1999-01-01

    The aim of this investigation is to establish key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. We shall first address scientific concepts that are crucial in the selection of interface patterns. Next, the results of ground-based experimental studies in the Al-4.0 wt % Cu system will be described. Both experimental studies and theoretical calculations will be presented to establish the need for microgravity experiments.

  16. Exclusion as a Criterion for Selecting Socially Vulnerable Population Groups

    Directory of Open Access Journals (Sweden)

    Aleksandra Anatol’evna Shabunova

    2016-05-01

    The article considers theoretical aspects of the research project "The Mechanisms for Overcoming Mental Barriers of Inclusion of Socially Vulnerable Categories of the Population for the Purpose of Intensifying Modernization in the Regional Community" (RSF grant No. 16-18-00078). The authors analyze the essence of the category of "socially vulnerable groups" from the legal, economic and sociological perspectives. The paper shows that the economic approach, which uses the criterion of "the level of income and accumulated assets" when defining vulnerable population groups, prevails in public administration practice. The legal field of the category based on the economic approach is defined by the concept of "the poor and socially unprotected categories of citizens". Through an analysis of the theoretical and methodological aspects of this issue, the authors show that these criteria are a necessary but not sufficient condition for classifying the population as socially vulnerable. Foreign literature associates the phenomenon of vulnerability with the concept of risks, with the possibility of households responding to them, and with the likelihood of losing well-being (poverty theory, research areas related to the means of subsistence, etc.). Asset-based approaches relate vulnerability to the poverty that arises due to lack of access to tangible and intangible assets. Sociological theories, represented by the concept of social exclusion, pay much attention to the breakdown of social ties as a source of vulnerability. The essence of social exclusion consists in the inability of people to participate in important aspects of social life (in politics, labor markets, education and healthcare, cultural life, etc.) though they have all the rights to do so. The difference between the concepts of exclusion and poverty is manifested in the shift of emphasis from income inequality to limited access to rights. Social exclusion is …

  17. An Empirical Model Building Criterion Based on Prediction with Applications in Parametric Cost Estimation.

    Science.gov (United States)

    1980-08-01

    …variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σᵢ₌₁ⁿ (Yᵢ − Ȳ)² (2.6) and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2ᵖ possible models is labeled "best" … discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of …
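
    These quantities translate directly into code; a small helper for a fitted linear model:

    ```python
    import numpy as np

    # Direct transcription of (2.6)-(2.8):
    #   SSTO = sum (Y_i - Ybar)^2,  SSR = SSTO - SSE,  R^2 = 1 - SSE/SSTO.
    def r_squared(y, y_hat):
        ssto = np.sum((y - y.mean()) ** 2)   # total sum of squares (2.6)
        sse = np.sum((y - y_hat) ** 2)       # error sum of squares
        ssr = ssto - sse                     # regression sum of squares (2.7)
        return 1.0 - sse / ssto              # coefficient of determination (2.8)

    y = np.array([1.0, 2.0, 3.0, 4.0])
    y_hat = np.array([1.1, 1.9, 3.2, 3.8])
    print(r_squared(y, y_hat))
    ```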

  18. Sperm head's birefringence: a new criterion for sperm selection.

    Science.gov (United States)

    Gianaroli, Luca; Magli, M Cristina; Collodel, Giulia; Moretti, Elena; Ferraretti, Anna P; Baccetti, Baccio

    2008-07-01

    To investigate the characteristics of birefringence in human sperm heads and apply polarization microscopy for sperm selection at intracytoplasmic sperm injection (ICSI). Prospective randomized study. Reproductive Medicine Unit, Società Italiana Studi Medicina della Riproduzione, Bologna, Italy. A total of 112 male patients had birefringent sperm selected for ICSI (study group). The clinical outcome was compared with that obtained in 119 couples who underwent a conventional ICSI cycle (control group). The proportion of birefringent spermatozoa was evaluated before and after treatment in relation to the sperm sample quality. Embryo development and clinical outcome in the study group were compared with those in the controls. Proportion of birefringent sperm heads, rates of fertilization, cleavage, pregnancy, implantation, and ongoing implantation. The proportion of birefringent spermatozoa was significantly higher in normospermic samples when compared with oligoasthenoteratospermic samples with no progressive motility and testicular sperm extraction samples. Although fertilization and cleavage rates did not differ between the study and control groups, in the most severe male factor condition (oligoasthenoteratospermic with no progressive motility and testicular sperm extraction), the rates of clinical pregnancy, ongoing pregnancy, and implantation were significantly higher in the study group versus the controls. The analysis of birefringence in the sperm head could represent both a diagnostic tool and a novel method for sperm selection.

  19. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  20. Selection of criterions of fuels incineration on heat power plants

    International Nuclear Information System (INIS)

    Bubnov, V.P.; Minchenko, E.M.; Zelenukho, E.V.

    2006-01-01

    The fuel and energy complex occupies first place among the industries of cities and largely determines their environmental situation. The products of fuel combustion make the greatest contribution to environmental contamination, yet this factor is ignored in the calculation of technical and economic indexes, and the ecological impact of heat power plants on the environment is determined separately from the assessment of ecological damage. The determination of optimal operating conditions for fuel incineration at heat power plants with respect to technical, economic and ecological indexes, using a multicriteria mathematical model, is proposed. (authors)

  1. Adjustment Criterion and Algorithm in Adjustment Model with Uncertainty

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law for the residual errors, in which the maximum possible uncertainty is minimized. The paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is thus extended with a new method for processing observational data with uncertainty.

  2. 76 FR 15961 - Funding Priorities and Selection Criterion; Disability and Rehabilitation Research Projects and...

    Science.gov (United States)

    2011-03-22

    ... priorities and a selection criterion for the Disability and Rehabilitation Research Projects and Centers... outcomes for underserved populations; (4) identify research gaps; (5) identify mechanisms of integrating research and practice; and (6) disseminate findings. This notice proposes two priorities and a selection...

  3. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC), which keeps the latent variables in the focus of the analysis, and the marginalized DIC (mDIC), which integrates them out. Despite the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and two medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
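
    The replicate-sampling idea can be sketched for a toy random-intercept model: for each MCMC draw of the hyperparameters, the latent intercepts are integrated out by Monte Carlo averaging over replicate draws, giving the marginalized mean deviance (one ingredient of mDIC; the fixed "posterior draws" below are placeholders):

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    y = rng.normal(2.0, 1.0, 30)                  # one subject's observations
    draws = [(2.0, 0.5, 1.0)] * 200               # placeholder (mu, tau, sigma) MCMC draws
    R = 500                                       # replicate latent samples per draw

    mdev = []
    for mu, tau, sigma in draws:
        b = rng.normal(mu, tau, R)                # replicate random intercepts
        # conditional log-likelihood of the whole series, one value per replicate
        loglik = norm.logpdf(y[None, :], loc=b[:, None], scale=sigma).sum(axis=1)
        m = loglik.max()                          # stabilized log of the MC average
        mdev.append(-2 * (m + np.log(np.exp(loglik - m).mean())))
    print("marginalized mean deviance:", np.mean(mdev))
    ```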

  4. Accuracy of a selection criterion for glass forming ability in the Ni–Nb–Zr system

    International Nuclear Information System (INIS)

    Déo, L.P.; Oliveira, M.F. de

    2014-01-01

    Highlights: • We applied a selection criterion in the Ni–Nb–Zr system to find alloys with high GFA. • We used the thermal parameter γm to evaluate the GFA of alloys. • The correlation between the γm parameter and Rc in the studied system is poor. • The effect of oxygen impurity dramatically reduced the GFA of the alloys. • Unknown intermetallic compounds reduced the accuracy of the criterion. - Abstract: Several theories have been developed and applied to metallic systems in order to find the stoichiometries with the highest glass forming ability; however, there is no universal theory for predicting the glass forming ability of metallic systems. Recently a selection criterion was applied to the Zr–Ni–Cu system and some correlation between experimental and theoretical data was found. This criterion correlates the critical cooling rate for glass formation with the topological instability of stable crystalline structures and with the average work function difference and average electron density difference among the constituent elements of the alloy. In the present work, this criterion was applied to the Ni–Nb–Zr system, and the influence of factors not considered in the calculation on the accuracy of the criterion, such as unknown intermetallic compounds and oxygen contamination, was investigated. Bulk amorphous specimens were produced by injection casting. The amorphous nature was analyzed by X-ray diffraction and differential scanning calorimetry; oxygen contamination was quantified by the inert gas fusion method

  5. A new diagnostic accuracy measure and cut-point selection criterion.

    Science.gov (United States)

    Dong, Tuochuan; Attwood, Kristopher; Hutson, Alan; Liu, Song; Tian, Lili

    2017-12-01

    Most diagnostic accuracy measures and criteria for selecting optimal cut-points are only applicable to diseases with two or three stages. Currently, there exist two diagnostic measures for diseases with a general number of k stages: the hypervolume under the manifold and the generalized Youden index. While the hypervolume under the manifold cannot be used for cut-point selection, the generalized Youden index is only defined upon correct classification rates. This paper proposes a new measure named the maximum absolute determinant for diseases with k stages ([Formula: see text]). This comprehensive new measure utilizes all the available classification information and serves as a cut-point selection criterion as well. Both the geometric and probabilistic interpretations of the new measure are examined. Power and simulation studies are carried out to investigate its performance as a measure of diagnostic accuracy as well as a cut-point selection criterion. A real data set from the Alzheimer's Disease Neuroimaging Initiative is analyzed using the proposed maximum absolute determinant.
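
    An illustrative reading of the proposal for a three-stage disease and a single scalar marker: choose the two cut-points that maximize |det M| of the 3×3 classification-rate matrix M. The paper's exact definition may differ in detail; this sketch only follows the description above:

    ```python
    import numpy as np
    from itertools import combinations

    # Simulated marker values for stages 0, 1, 2 (invented distributions).
    rng = np.random.default_rng(5)
    samples = [rng.normal(m, 1.0, 300) for m in (0.0, 1.0, 2.5)]
    grid = np.linspace(-2, 5, 60)

    def class_matrix(c1, c2):
        # M[r, c] = fraction of true-stage-r subjects classified into stage c
        M = np.empty((3, 3))
        for r, x in enumerate(samples):
            M[r] = [(x < c1).mean(), ((x >= c1) & (x < c2)).mean(), (x >= c2).mean()]
        return M

    best = max(combinations(grid, 2), key=lambda c: abs(np.linalg.det(class_matrix(*c))))
    print("cut-points maximizing |det M|:", best)
    ```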

  6. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques and to ensure the accuracy of the optimization. However, earlier studies have drawbacks: the optimization loop has three phases and relies on empirical parameters. We propose a united sampling criterion that simplifies the algorithm and achieves the global optimum of constrained problems without any empirical parameters. It is able to select points located in a feasible region with high model uncertainty as well as points along the boundary of a constraint at the lowest objective value. The mean squared error determines which criterion is dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because the sample points are not clustered within extremely small regions as in super-EGO. The performance of the proposed method, such as its ability to solve a problem, its convergence properties, and its efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.

  7. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  8. Wavelength selection in injection-driven Hele-Shaw flows: A maximum amplitude criterion

    Science.gov (United States)

    Dias, Eduardo; Miranda, Jose

    2013-11-01

    As in most interfacial flow problems, the standard theoretical procedure to establish wavelength selection in the viscous fingering instability is to maximize the linear growth rate. However, there are important discrepancies between previous theoretical predictions and existing experimental data. In this work we perform a linear stability analysis of the radial Hele-Shaw flow system that takes into account the combined action of viscous normal stresses and wetting effects. Most importantly, we introduce an alternative selection criterion for which the selected wavelength is determined by the maximum of the interfacial perturbation amplitude. The effectiveness of such a criterion is substantiated by the significantly improved agreement between theory and experiments. We thank CNPq (Brazilian Sponsor) for financial support.

  9. A graphical criterion for working fluid selection and thermodynamic system comparison in waste heat recovery

    International Nuclear Information System (INIS)

    Xi, Huan; Li, Ming-Jia; He, Ya-Ling; Tao, Wen-Quan

    2015-01-01

    In the present study, we propose a graphical criterion called the CE diagram, obtained from the Pareto optimal solutions of the annual cash flow and the exergy efficiency. This new graphical criterion enables both working fluid selection and thermodynamic system comparison for waste heat recovery. It improves on the existing criteria based on single-objective optimization because it is graphical and intuitive in the form of a diagram. The features of the CE diagram were illustrated by studying 5 examples with different heat-source temperatures (ranging from 100 °C to 260 °C), 26 chlorine-free working fluids and two typical ORC systems: the basic organic Rankine cycle (BORC) and the recuperative organic Rankine cycle (RORC). It is found that the proposed graphical criterion is feasible and can be applied to any closed-loop waste heat recovery thermodynamic system and working fluid. - Highlights: • A graphical method for ORC system comparison/working fluid selection was proposed. • A multi-objective genetic algorithm (MOGA) was applied to optimize the ORC systems. • Application cases were performed to demonstrate the usage of the proposed method.

  10. Drinking Water Quality Criterion-Based Site Selection of Aquifer Storage and Recovery Scheme in Chou-Shui River Alluvial Fan

    Science.gov (United States)

    Huang, H. E.; Liang, C. P.; Jang, C. S.; Chen, J. S.

    2015-12-01

    Land subsidence due to groundwater exploitation is an urgent environmental problem in the Choushui river alluvial fan in Taiwan. Aquifer storage and recovery (ASR), where excess surface water is injected into subsurface aquifers for later recovery, is one promising strategy for managing surplus water and may overcome water shortages. The performance of an ASR scheme is generally evaluated in terms of recovery efficiency, which is defined as the percentage of water injected into a system at an ASR site that fulfills the targeted water quality criterion. Site selection for an ASR scheme typically faces great challenges due to the spatial variability of groundwater quality and hydrogeological conditions. This study proposes a novel method for ASR site selection based on a drinking water quality criterion. Simplified groundwater flow and contaminant transport models are used to estimate spatial distributions of the recovery efficiency from the groundwater quality, the hydrogeological conditions and the ASR operation. The results of this study may assist government administrators in establishing a reliable ASR scheme.

  11. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    Science.gov (United States)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.

  12. Decision models for use with criterion-referenced tests

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1980-01-01

    The problem of mastery decisions and optimizing cutoff scores on criterion-referenced tests is considered. This problem can be formalized as an (empirical) Bayes problem with decisions rules of a monotone shape. Next, the derivation of optimal cutoff scores for threshold, linear, and normal ogive

  13. An Elasto-Plastic Damage Model for Rocks Based on a New Nonlinear Strength Criterion

    Science.gov (United States)

    Huang, Jingqi; Zhao, Mi; Du, Xiuli; Dai, Feng; Ma, Chao; Liu, Jingbo

    2018-05-01

    The strength and deformation characteristics of rocks are the most important mechanical properties for rock engineering construction. A new nonlinear strength criterion is developed for rocks by combining the Hoek-Brown (HB) criterion and the nonlinear unified strength criterion (NUSC). The proposed criterion takes account of the intermediate principal stress effect, unlike the HB criterion, while being nonlinear in the meridian plane, unlike the NUSC. Only three parameters, including the two HB parameters σc and mi, need to be determined by experiments. The failure surface of the proposed criterion is continuous, smooth and convex. The proposed criterion fits true triaxial test data well and performs better than the other three existing criteria. By introducing the Geological Strength Index, the proposed criterion is extended to rock masses and predicts the test data well. Finally, based on the proposed criterion, a triaxial elasto-plastic damage model for intact rock is developed. The plastic part is based on the effective stress, with a yield function derived from the proposed criterion. For the damage part, the evolution function is assumed to have an exponential form. The performance of the constitutive model shows good agreement with the results of experimental tests.
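
    The generalized Hoek-Brown envelope that the proposed criterion builds on is easy to evaluate; note this is the standard HB form (for intact rock m_b = m_i, s = 1, a = 0.5), not the paper's combined HB-NUSC criterion, and the parameter values are illustrative:

    ```python
    import numpy as np

    # Generalized Hoek-Brown: sigma_1 = sigma_3 + sigma_c*(m*sigma_3/sigma_c + s)**a
    def hoek_brown(sigma3, sigma_c=100.0, m=10.0, s=1.0, a=0.5):
        return sigma3 + sigma_c * (m * sigma3 / sigma_c + s) ** a

    sigma3 = np.linspace(0.0, 30.0, 4)   # confining stress, MPa
    print(hoek_brown(sigma3))            # major principal stress at failure, MPa
    ```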

  14. Failure criterion effect on solid production prediction and selection of completion solution

    Directory of Open Access Journals (Sweden)

    Dariush Javani

    2017-12-01

    Production of fines together with reservoir fluid is called solid production. It varies from a few grams or less per ton of reservoir fluid, posing only minor problems, to catastrophic amounts, possibly leading to erosion and complete filling of the borehole. This paper assesses the solid production potential of a carbonate gas reservoir located in the south of Iran. Petrophysical logs obtained from the vertical well were employed to construct a mechanical earth model. Then, two failure criteria, i.e. Mohr–Coulomb and Mogi–Coulomb, were used to investigate the solid production potential of the well in the initial and depleted conditions of the reservoir. Using these two criteria, we estimated the critical collapse pressure and compared it to the reservoir pressure. Solid production occurs if the collapse pressure is greater than the pore pressure. Results indicate that the two failure criteria give different estimates of the solid production potential of the studied reservoir. The Mohr–Coulomb failure criterion predicted solid production in both initial and depleted conditions, whereas the Mogi–Coulomb criterion predicted no solid production in the initial condition of the reservoir. Based on the Mogi–Coulomb criterion, the well may not require completion solutions such as a perforated liner until at least 60% of the reservoir pressure is depleted, which decreases operation cost and time.

  15. Omega-optimized portfolios: applying stochastic dominance criterion for the selection of the threshold return

    Directory of Open Access Journals (Sweden)

    Renaldas Vilkancas

    2016-05-01

    Purpose of the article: When using asymmetric risk-return measures, an important role is played by the selection of the investor's required, or threshold, rate of return. The scientific literature usually states that every investor should define this rate according to their degree of risk aversion. In this paper, we attempt to look at the problem from a different perspective: the empirical research aims to determine the influence of the threshold rate of return on the portfolio characteristics. Methodology/methods: In order to determine the threshold rate of return, a stochastic dominance criterion was used. The results are verified using the commonly applied method of backtesting. Scientific aim: The aim of this paper is to propose a method that allows the threshold rate of return to be selected reliably and objectively. Findings: Empirical research confirms that stochastic dominance criteria can be successfully applied to determine the rate of return preferred by the investor. Conclusions: A risk-free investment rate, or simply a zero rate of return, commonly used in practice, is often justified by neither theoretical nor empirical studies. This work suggests determining the threshold rate of return by applying the stochastic dominance criterion.
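
    The Omega measure underlying the study divides expected gains above the threshold by expected losses below it; the threshold τ is exactly the quantity the authors propose to choose via stochastic dominance rather than fixing it at zero or the risk-free rate. A minimal sketch on simulated returns:

    ```python
    import numpy as np

    # Omega(tau) = E[(R - tau)+] / E[(tau - R)+]
    def omega(returns, tau):
        gains = np.clip(returns - tau, 0, None).mean()
        losses = np.clip(tau - returns, 0, None).mean()
        return gains / losses

    r = np.random.default_rng(6).normal(0.005, 0.04, 1000)  # simulated monthly returns
    for tau in (0.0, 0.003, 0.01):
        print(f"tau={tau:.3f}  Omega={omega(r, tau):.3f}")
    ```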

  16. Fruit Phenolic Profiling: A New Selection Criterion in Olive Breeding Programs.

    Science.gov (United States)

    Pérez, Ana G; León, Lorenzo; Sanz, Carlos; de la Rosa, Raúl

    2018-01-01

    Olive growing is mainly based on traditional varieties selected by the growers across the centuries. The few attempts so far reported to obtain new varieties by systematic breeding have been mainly focused on improving the olive's adaptation to different growing systems, the productivity and the oil content. However, the improvement of oil quality has rarely been considered as a selection criterion, and only in the latter stages of the breeding programs. Due to their health-promoting and organoleptic properties, phenolic compounds are one of the most important quality markers for virgin olive oil (VOO), although they are not commonly used as quality traits in olive breeding programs. This is mainly due to the difficulties of evaluating oil phenolic composition in a large number of samples and the limited knowledge of the genetic and environmental factors that may influence phenolic composition. In the present work, we propose a high-throughput methodology to include the phenolic composition as a selection criterion in olive breeding programs. For that purpose, the phenolic profile has been determined in fruits and oils of several breeding selections and two varieties ("Picual" and "Arbequina") used as control. The effect of three different environments, typical for olive growing in Andalusia, Southern Spain, was also evaluated. A high genetic effect was observed on both the fruit and the oil phenolic profile. In particular, the breeding selection UCI2-68 showed an optimum phenolic profile, which adds to the good agronomic performance previously reported. A high correlation was found between fruit and oil total phenolic content, as well as between some individual phenols from the two different matrices. The environmental effect on phenolic compounds was also significant in both fruit and oil, although the low genotype × environment interaction allowed similar ranking of genotypes in the different environments. In summary, the high genotypic variance and the simplified procedure of the …

  17. Using the Predictability Criterion for Selecting Extended Verbs for Shona Dictionaries

    Directory of Open Access Journals (Sweden)

    Emmanuel Chabata

    2012-09-01


    The paper examines the "predictability criterion", a classificatory tool which is used in selecting affixed word forms for dictionary entries. It focuses on the criterion as it has been used by the African Languages Lexical (ALLEX) Project for selecting extended verbs to enter as headwords in the Project's first monolingual Shona dictionary, Duramazwi ReChiShona. The article also examines the status of Shona verbal extensions in terms of their semantic input to the verb stems they are attached to. The paper was originally motivated by two observations: (a) that predictability seems to be a matter of degree; and (b) that the predictability criterion tended to be used inconsistently in the selection of extended verbs and senses for Duramazwi ReChiShona. An analysis of 412 productively extended verbs that were entered as headwords in Duramazwi ReChiShona shows that verbal extensions can bring both predictable and unpredictable senses to the verb stems they are attached to. The paper demonstrates that for an effective use of the predictability criterion in selecting extended verbs for Shona dictionaries, the lexicographer needs an in-depth understanding of the kinds of semantic movements that are caused when verb stems are extended. It shows the need to view verbal extensions in Shona as derivational morphemes, not inflectional morphemes as some earlier scholars have concluded.


  18. A Novel Non-Invasive Selection Criterion for the Preservation of Primitive Dutch Konik Horses.

    Science.gov (United States)

    May-Davis, Sharon; Brown, Wendy Y; Shorter, Kathleen; Vermeulen, Zefanja; Butler, Raquel; Koekkoek, Marianne

    2018-02-01

    The Dutch Konik is valued from a genetic conservation perspective and also for its role in preservation of natural landscapes. The primary management objective for the captive breeding of this primitive horse is to maintain its genetic purity, whilst also maintaining the nature reserves on which they graze. Breeding selection has traditionally been based on phenotypic characteristics consistent with the breed description, and the selection of animals for removal from the breeding program is problematic at times due to high uniformity within the breed, particularly in height at the wither, colour (mouse to grey dun) and presence of primitive markings. With the objective of identifying an additional non-invasive selection criterion with potential uniqueness to the Dutch Konik, this study investigates the anatomic parameters of the distal equine limb, with a specific focus on the relative lengths of the individual splint bones. Post-mortem dissections performed on distal limbs of Dutch Konik (n = 47) and modern domesticated horses (n = 120) revealed significant differences in relation to the length and symmetry of the 2nd and 4th metacarpals and metatarsals. Distal limb characteristics with apparent uniqueness to the Dutch Konik are described, which could be an important tool in the selection and preservation of the breed.

  19. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)

    2016-12-09

    In order to shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, and then show by a generalized derivation that if there are multiple stationary restrictions in a model, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion.

  20. Fruit Phenolic Profiling: A New Selection Criterion in Olive Breeding Programs

    Directory of Open Access Journals (Sweden)

    Ana G. Pérez

    2018-02-01

    Olive growing is mainly based on traditional varieties selected by the growers across the centuries. The few attempts so far reported to obtain new varieties by systematic breeding have been mainly focused on improving the olive's adaptation to different growing systems, the productivity and the oil content. However, the improvement of oil quality has rarely been considered as a selection criterion, and only in the latter stages of the breeding programs. Due to their health-promoting and organoleptic properties, phenolic compounds are one of the most important quality markers for virgin olive oil (VOO), although they are not commonly used as quality traits in olive breeding programs. This is mainly due to the difficulties of evaluating oil phenolic composition in a large number of samples and the limited knowledge of the genetic and environmental factors that may influence phenolic composition. In the present work, we propose a high throughput methodology to include the phenolic composition as a selection criterion in olive breeding programs. For that purpose, the phenolic profile has been determined in fruits and oils of several breeding selections and two varieties (“Picual” and “Arbequina”) used as control. The effect of three different environments, typical for olive growing in Andalusia, Southern Spain, was also evaluated. A high genetic effect was observed on both the fruit and the oil phenolic profile. In particular, the breeding selection UCI2-68 showed an optimum phenolic profile, which adds to the good agronomic performance previously reported. A high correlation was found between fruit and oil total phenolic content, as well as between some individual phenols from the two different matrices. The environmental effect on phenolic compounds was also significant in both fruit and oil, although the low genotype × environment interaction allowed similar ranking of genotypes in the different environments. In summary, the high genotypic variance and the …

  1. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    Science.gov (United States)

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process models are often degraded by noise in the corresponding experimental data. In order to avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to modeling the GSH fermentation process. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, and thus offers potential merit for GSH fermentation process modeling.

  2. DESTRUCTION CRITERION IN MODEL OF NON-LINEAR ELASTIC PLASTIC MEDIUM

    Directory of Open Access Journals (Sweden)

    O. L. Shved

    2014-01-01

    The paper considers a destruction criterion in a specific phenomenological model of an elastic plastic medium which significantly differs from the known criteria. In the vector interpretation of rank-2 symmetric tensors, the yield surface in the Cauchy stress space is formed by closed piecewise-concave surfaces of its deviator sections, with due account of experimental data. The section surface is determined by a normal vector which is selected from two eigenvectors of the criterial "deviator" operator. Such a selection is not always possible when anisotropy grows. It is expected that destruction can only start when the process point in the stress space lies on the current deviator section of the yield surface. This occurs when a critical point appears in the section: the eigenvalue of the operator becomes multiple at the point that determines the eigenvector corresponding to the normal vector. A unique and reasonable selection of the normal vector becomes impossible at the critical point, and the yield criterion loses its significance there. When the destruction initiation is determined, a special case is possible due to the proposed conic form of the yield surface: the deviator section degenerates into a point at the apex of the yield surface. The criterion at the apex is formulated by the fact that there is no physically correct solution when the state equation is used with respect to elastic distortion measures at a fixed tensor of elastic turn. Such use of the equation is always possible for the remaining points of the yield surface and is considered an obligatory condition for determining the deviator section. A critical point is generally absent from the deviator sections of the yield surface for an isotropic material. A limiting value of the mean stress has been calculated for uniform tension.

  3. Characteristics of Criterion-Referenced Instruments: Implications for Materials Selection for the Learning Disabled.

    Science.gov (United States)

    Blasi, Joyce F.

    Discussed are characteristics of criterion referenced reading tests for use with learning disabled (LD) children, and analyzed are the Basic Educational Skills Inventory (BESI), the Prescriptive Reading Inventory (PRI), and the Cooper-McGuire Diagnostic Work-Analysis Test (CooperMcGuire). Criterion referenced tests are defined; and problems in…

  4. Latent Trait Model Contributions to Criterion-Referenced Testing Technology.

    Science.gov (United States)

    1982-02-01

    levels of ability (ranging from very low to very high). The steps in the research were as follows: 1. Specify the characteristics of a "typical" pool...conventional testing methodologies displayed good fit to both of the latent trait models. The one-parameter model compared favorably with the three-parameter... Methodological developments: New directions for testing and measurement (No. 4). San Francisco: Jossey-Bass, 1979. Hambleton, R. K. Advances in

  5. [On the problems of the evolutionary optimization of life history. II. To justification of optimization criterion for nonlinear Leslie model].

    Science.gov (United States)

    Pasekov, V P

    2013-03-01

    The paper considers problems in the adaptive evolution of life-history traits for individuals in the nonlinear Leslie model of an age-structured population. The possibility of predicting the results of adaptation as the values of an organism's traits (properties) that provide the maximum of a certain function of the traits (an optimization criterion) is studied. An ideal criterion of this type is Darwinian fitness as a characteristic of the success of an individual's life history. Criticism of the optimization approach is associated with the fact that it does not take into account the changes in the environmental conditions (in a broad sense) caused by evolution, thereby leading to losses in the adequacy of the criterion. In addition, the justification for this criterion under stationary conditions is not usually rigorous. It has been suggested to overcome these objections in terms of adaptive dynamics theory using the concept of invasion fitness. Reasons are given that favor the use of the average number of offspring of an individual, R(L), as an optimization criterion in the nonlinear Leslie model. According to the theory of quantitative genetics, selection for fertility (that is, for a set of correlated quantitative traits determined by both multiple loci and the environment) leads to an increase in R(L). In terms of adaptive dynamics, the maximum of R(L) corresponds to evolutionary stability and, in certain cases, convergent stability of the trait values. The search for evolutionarily stable values against the background of limited resources for reproduction is a problem of linear programming.
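
    One common reading of R(L) is the expected lifetime offspring number implied by the Leslie model, R = Σ_x l_x m_x, with l_x the cumulative survival to age x and m_x the age-specific fertility; the life-table values below are invented for illustration:

    ```python
    import numpy as np

    # Life-table computation of the expected offspring per individual.
    survival = np.array([0.8, 0.7, 0.5])        # p_x, survival from age x to x+1
    fertility = np.array([0.0, 1.2, 1.5, 0.8])  # m_x for ages 0..3

    l = np.concatenate(([1.0], np.cumprod(survival)))  # l_0..l_3
    R = float(np.sum(l * fertility))
    print("expected offspring per individual R =", R)
    ```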

  6. Fulfillment of the kinetic Bohm criterion in a quasineutral particle-in-cell model

    International Nuclear Information System (INIS)

    Ahedo, Eduardo; Santos, Robert; Parra, Felix I.

    2010-01-01

    Quasineutral particle-in-cell models of ions must fulfill the kinetic Bohm criterion, in its inequality form, at the domain boundary in order to match correctly with solutions of the Debye sheaths tied to the walls. The simple fluid form of the Bohm criterion is shown to be a bad approximation of the exact kinetic form when the ion velocity distribution function has significant dispersion and involves different charge numbers. The fulfillment of the Bohm criterion is measured by a weighting algorithm at the boundary, but linear weighting algorithms have difficulty reproducing the nonlinear behavior around the sheath edge. A surface weighting algorithm with an extended temporal weighting is proposed and shown to behave better than the standard volumetric weighting. Still, this must be supplemented by an algorithm forcing the kinetic Bohm criterion, which postulates a small potential fall in a supplementary, thin transition layer. The electron-wall interaction is shown to be of little relevance to the fulfillment of the Bohm criterion.
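
    For orientation, the two forms contrasted in this record can be written as follows (a commonly quoted version for a single, singly charged ion species, with the electron temperature T_e in energy units; the notation is ours, not the paper's):

        u_i \ge c_s = \sqrt{T_e/m_i}   (fluid form)

        \frac{1}{n_i}\int \frac{f_i(v)}{v^2}\,dv \le \frac{m_i}{T_e}   (kinetic form)

    Here v is the ion velocity component normal to the wall; the multi-species case weights each charge state separately. Because of the v^{-2} weighting, slow ions dominate the kinetic integral, which is why the fluid form becomes a poor approximation when the distribution has significant dispersion.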

  7. Ginsburg criterion for an equilibrium superradiant model in the dynamic approach

    International Nuclear Information System (INIS)

    Trache, M.

    1991-10-01

    Some critical properties of an equilibrium superradiant model are discussed, taking into account the quantum fluctuations of the field variables. The critical region is calculated using the Ginsburg criterion, underlining the role of the atomic concentration as a control parameter of the phase transition. (author). 16 refs, 1 fig

  8. Does the Lowest Bid Price Evaluation Criterion Make for a More Efficient Public Procurement Selection Criterion? (Case of the Czech Republic

    Directory of Open Access Journals (Sweden)

    Ochrana František

    2015-06-01

    Full Text Available A considerable volume of financial resources is allocated through the institution of public procurement. It is therefore in the interest of contracting entities to seek ways to achieve an efficient allocation of resources. Some public contract-awarding entities, along with some public-administration authorities in the Czech Republic, believe that the use of a single evaluation criterion (the lowest bid price) results in a more efficient tender for a public contract. It was found that contracting entities in the Czech Republic strongly prefer the lowest bid price criterion: within the examined sample, 86.5% of public procurements were evaluated this way. The analysis of the examined sample of public contracts proved that the choice of an evaluation criterion, even the preference for the lowest bid price criterion, has no obvious impact on the final cost of a public contract. The study concludes that it is inappropriate to prefer the criterion of the lowest bid price in the evaluation of public contracts that are characterised by complexity (including public contracts for construction works and public service contracts). The findings of the Supreme Audit Office related to the inspection of public contracts indicate that when the lowest bid price is used as an evaluation criterion, a public contract may indeed be tendered with the lowest bid price, but not necessarily the best offer in terms of supplied quality. It is therefore not appropriate to use the lowest bid price evaluation criterion to such an extent for the purpose of evaluating works and services. Any improvement to this situation requires a corresponding amendment to the Law on Public Contracts and, mainly, a radical change in the attitude of the Office for the Protection of Competition towards proposed changes, as indicated in the conclusions and recommendations of this study.

  9. A termination criterion for parameter estimation in stochastic models in systems biology.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2015-11-01

    Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model. Copyright © 2015. Published by Elsevier Ireland Ltd.
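
    The central comparison described above is easy to sketch. The following illustrative Python function (the names and the tolerance factor are our assumptions, not the authors' code) stops a population-based optimizer once the spread of objective values across the population is no longer distinguishable from the re-evaluation noise at the best candidate:

        import numpy as np

        def should_terminate(population_objectives, best_replicate_objectives, factor=2.0):
            """Termination test for population-based optimizers on noisy objectives.

            population_objectives: objective values of the current population
            best_replicate_objectives: repeated evaluations of the objective
                at the current best parameter set (itself a random variable)
            factor: illustrative tolerance ratio
            """
            var_population = np.var(population_objectives, ddof=1)
            var_noise = np.var(best_replicate_objectives, ddof=1)
            # Stop when the population spread is comparable to the intrinsic
            # stochastic-simulation noise at the best parameter set.
            return var_population <= factor * var_noise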

  10. Continuous-Time Portfolio Selection and Option Pricing under Risk-Minimization Criterion in an Incomplete Market

    Directory of Open Access Journals (Sweden)

    Xinfeng Ruan

    2013-01-01

    Full Text Available We study option pricing with risk-minimization criterion in an incomplete market where the dynamics of the risky underlying asset are governed by a jump diffusion equation. We obtain the Radon-Nikodym derivative in the minimal martingale measure and a partial integrodifferential equation (PIDE of European call option. In a special case, we get the exact solution for European call option by Fourier transformation methods. Finally, we employ the pricing kernel to calculate the optimal portfolio selection by martingale methods.

  11. Experiments and modeling of ballistic penetration using an energy failure criterion

    Directory of Open Access Journals (Sweden)

    Dolinski M.

    2015-01-01

    Full Text Available One of the most intricate problems in terminal ballistics is the physics underlying penetration and perforation. Several penetration modes are well identified, such as petalling, plugging, spall failure and fragmentation (Sedgwick, 1968). In most cases, the final target failure combines those modes. Some of the failure modes can be due to brittle material behavior, but penetration of ductile targets by blunt projectiles, involving plugging in particular, is caused by excessive localized plasticity, with emphasis on adiabatic shear banding (ASB). Among the theories regarding the onset of ASB, new evidence was recently brought by Rittel et al. (2006), according to whom shear bands initiate as a result of dynamic recrystallization (DRX), a local softening mechanism driven by the stored energy of cold work. As such, ASB formation results from microstructural transformations, rather than from thermal softening. In our previous work (Dolinski et al., 2010), a failure criterion based on plastic strain energy density was presented and applied to model four different classical examples of dynamic failure involving ASB formation. According to this criterion, a material point starts to fail when the total plastic strain energy density reaches a critical value. Thereafter, the strength of the element decreases gradually to zero to mimic the actual material mechanical behavior. The goal of this paper is to present a new combined experimental-numerical study of ballistic penetration and perforation, using the above-mentioned failure criterion. Careful experiments are carried out using a single combination of AISI 4340 FSP projectiles and 25 mm thick RHA steel plates, while the impact velocity, and hence the imparted damage, are systematically varied. We show that our failure model, which includes only one adjustable parameter in this present work, can faithfully reproduce each of the experiments without any further adjustment. Moreover, it is shown that the
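
    The criterion described above lends itself to a compact sketch. The routine below (our notation; the critical and final energy densities are made-up parameters) returns the degraded strength of a material point as its accumulated plastic strain energy density grows:

        def update_strength(sigma_y0, w_plastic, w_crit, w_fail):
            """Energy-based failure: full strength below w_crit, then a
            gradual (here linear) decay to zero between w_crit and w_fail."""
            if w_plastic <= w_crit:
                return sigma_y0          # undamaged
            if w_plastic >= w_fail:
                return 0.0               # fully failed element
            # gradual softening mimics the loss of load-carrying capacity
            return sigma_y0 * (w_fail - w_plastic) / (w_fail - w_crit)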

  12. [Employees in high-reliability organizations: systematic selection of personnel as a final criterion].

    Science.gov (United States)

    Oubaid, V; Anheuser, P

    2014-05-01

    Employees represent an important safety factor in high-reliability organizations. The combination of clear organizational structures, a nonpunitive safety culture, and psychological personnel selection guarantee a high level of safety. The cockpit personnel selection process of a major German airline is presented in order to demonstrate a possible transferability into medicine and urology.

  13. Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2017-11-01

    Full Text Available The α-jerk model is an effective model for maneuvering-target tracking, one of the most critical issues in target tracking. Non-Gaussian noise is always present in the tracking process and usually leads to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. First, the weighted least-squares solution is presented and the standard Kalman filter is deduced from it; a novel Kalman filter is then derived from a weighted least-squares problem based on the maximum correntropy criterion. The robustness of the maximum correntropy criterion is analyzed with the influence function and compared with a Huber-based filter. Moreover, since the size of the Gaussian kernel plays an important role in the filter algorithm, a new adaptive kernel method is proposed to adjust this parameter in real time. Finally, simulation results indicate the validity and efficiency of the proposed filter; the comparison study shows that it can significantly reduce the influence of noise on the α-jerk model.
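
    One common simplified way to build such a filter (not necessarily the exact derivation of this paper) is to weight each innovation with a Gaussian (correntropy) kernel and inflate the effective measurement noise accordingly, so that impulsive outliers are de-weighted. A minimal NumPy sketch with illustrative names:

        import numpy as np

        def gaussian_kernel(e, sigma):
            """Correntropy (Gaussian) kernel of an error sample; sigma is the
            kernel size that the record's adaptive method tunes online."""
            return np.exp(-(e ** 2) / (2.0 * sigma ** 2))

        def mcc_measurement_update(x_pred, P_pred, z, H, R, sigma):
            """One correntropy-weighted Kalman measurement update.
            With kernel weight -> 1 this reduces to the standard update."""
            innov = z - H @ x_pred
            w = gaussian_kernel(np.linalg.norm(innov), sigma)
            R_eff = R / max(w, 1e-12)    # de-weight suspicious measurements
            S = H @ P_pred @ H.T + R_eff
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ innov
            P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
            return x_new, P_new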

  14. Building a maintenance policy through a multi-criterion decision-making model

    Science.gov (United States)

    Faghihinia, Elahe; Mollaverdi, Naser

    2012-08-01

    A major competitive advantage of production and service systems is establishing a proper maintenance policy; maintenance managers should therefore make maintenance decisions that best fit their systems. Multi-criterion decision-making methods can take into account a number of aspects associated with the competitiveness factors of a system. This paper presents a multi-criterion decision-aided maintenance model with the three criteria that most influence decision making: reliability, maintenance cost, and maintenance downtime. A Bayesian approach is applied to cope with the shortage of maintenance failure data. The model seeks the best compromise between these three criteria and establishes replacement intervals using the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE II), integrating the Bayesian approach with regard to the decision maker's preferences. Finally, the model is illustrated with a numerical application, and PROMETHEE GAIA (the visual interactive module) is used for visualization and an illustrative sensitivity analysis. PROMETHEE II and PROMETHEE GAIA were run with the Decision Lab software. A sensitivity analysis verifies the robustness of the model to certain parameters.
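
    For readers unfamiliar with PROMETHEE II, its core is the computation of net outranking flows from pairwise comparisons. The sketch below uses the simplest ("usual") preference function and made-up scores; criteria must be encoded so that higher is better (e.g., negated cost and downtime):

        import numpy as np

        def promethee_ii(scores, weights, pref=lambda d: float(d > 0)):
            """Net outranking flows; scores is (alternatives x criteria)."""
            n, m = scores.shape
            pi = np.zeros((n, n))  # aggregated preference indices
            for a in range(n):
                for b in range(n):
                    if a != b:
                        d = scores[a] - scores[b]
                        pi[a, b] = sum(weights[k] * pref(d[k]) for k in range(m))
            phi_plus = pi.sum(axis=1) / (n - 1)    # leaving flow
            phi_minus = pi.sum(axis=0) / (n - 1)   # entering flow
            return phi_plus - phi_minus            # net flow: higher ranks first

        # Example: 3 maintenance policies scored on reliability, -cost, -downtime
        scores = np.array([[0.9, 0.4, 0.5],
                           [0.7, 0.8, 0.6],
                           [0.5, 0.9, 0.9]])
        weights = np.array([0.5, 0.3, 0.2])
        print(promethee_ii(scores, weights))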

  15. Key Determinant Derivations for Information Technology Disaster Recovery Site Selection by the Multi-Criterion Decision Making Method

    Directory of Open Access Journals (Sweden)

    Chia-Lee Yang

    2015-05-01

    Full Text Available Disaster recovery sites are an important mechanism for continuous IT system operations. Such mechanisms can sustain IT availability and reduce business losses during natural or human-made disasters. Concerning cost and risk aspects, IT disaster-recovery site selection problems are multi-criterion decision-making (MCDM) problems in nature. For such problems, the decision aspects include the availability of the service, recovery time requirements, service performance, and more. The importance and complexity of IT disaster recovery sites increase with advances in IT and with the categories of possible disasters. The modern IT disaster recovery site selection process therefore requires further investigation. However, to the best of the authors' knowledge, very few researchers have studied related issues in past years. Thus, this paper aims to derive the aspects and criteria for evaluating and selecting a modern IT disaster recovery site. A hybrid MCDM framework consisting of the Decision Making Trial and Evaluation Laboratory (DEMATEL) and the Analytic Network Process (ANP) is proposed to construct the complex influence relations between aspects as well as criteria and, further, to derive the weight associated with each aspect and criterion. The criteria with higher weights can be used for evaluating and selecting the most suitable IT disaster recovery sites. In the future, the proposed analytic framework can be used for evaluating and selecting a disaster recovery site for data centers by public institutes or private firms.
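
    The DEMATEL step of such a framework is short enough to sketch. Starting from a direct-influence matrix of expert ratings (the numbers below are invented), the total relation matrix adds up all indirect influence chains:

        import numpy as np

        # A[i, j]: rated strength with which aspect i influences aspect j
        A = np.array([[0, 3, 2],
                      [1, 0, 3],
                      [2, 1, 0]], dtype=float)

        s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
        D = A / s                                   # normalized direct influence
        T = D @ np.linalg.inv(np.eye(len(A)) - D)   # direct + all indirect effects

        prominence = T.sum(axis=1) + T.sum(axis=0)  # overall importance (r + c)
        relation = T.sum(axis=1) - T.sum(axis=0)    # net cause (+) or effect (-)
        print(prominence, relation)

    In the hybrid framework, the influence structure found this way feeds the ANP step, which produces the weights associated with the aspects and criteria.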

  16. Advance Pricing Agreements and the Selectivity Criterion in EU State Aid Rules

    OpenAIRE

    Härö, O

    2017-01-01

    The Commission of the EU has recently decided that Advance Pricing Agreement rulings (the APA rulings) that Ireland, Luxembourg and the Netherlands have granted for Apple, Fiat and Starbucks (respectively) constitute illegal State aid according to Article 107 of the Treaty on the Functioning of the European Union (TFEU). The Commission claims that the APA rulings deviate from the arm's length principle and that they grant an economic benefit to the beneficiary undertakings in a selective manner...

  17. Heuristics and Criterion Setting during Selective Encoding in Visual Decision-Making: Evidence from Eye Movements.

    Science.gov (United States)

    Schotter, Elizabeth R; Gerety, Cainen; Rayner, Keith

    2012-01-01

    When making a decision, people spend longer looking at the option they ultimately choose compared to other options (termed the gaze bias effect), even during their first encounter with the options (Glaholt & Reingold, 2009a, 2009b; Schotter, Berry, McKenzie & Rayner, 2010). Schotter et al. (2010) suggested that this is because people selectively encode decision-relevant information about the options online, during the first encounter with them. To extend their findings and test this claim, we recorded subjects' eye movements as they made judgments about pairs of images (i.e., which one was taken more recently or which one was taken longer ago). We manipulated whether both images were presented with the same color content (e.g., both in color or both in black-and-white) or whether they differed in color content, and the extent to which color content was a reliable cue to the relative recentness of the images. We found that the magnitude of the gaze bias effect decreased when the color content cue was not reliable during the first encounter with the images, but found no modulation of the gaze bias effect in the remaining time on the trial. These data suggest that people selectively encode decision-relevant information online.

  18. Using Akaike's information theoretic criterion in mixed-effects modeling of pharmacokinetic data: a simulation study [version 3; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Erik Olofsen

    2015-07-01

    Full Text Available Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect - thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, but with respect to both lowest AICc and best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
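
    For reference, the two criteria compared in this record have the standard forms

        \mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad
        \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1},

    where \hat{L} is the maximized likelihood, k the number of estimated parameters, and n the number of observations; the small-sample correction term vanishes as n grows.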

  19. Growth kinetic and fuel quality parameters as selective criterion for screening biodiesel producing cyanobacterial strains.

    Science.gov (United States)

    Gayathri, Manickam; Shunmugam, Sumathy; Mugasundari, Arumugam Vanmathi; Rahman, Pattanathu K S M; Muralitharan, Gangatharan

    2018-01-01

    The efficiency of cyanobacterial strains as biodiesel feedstock varies with the dwelling habitat. Fourteen indigenous heterocystous cyanobacterial strains from a rice field ecosystem were screened based on growth kinetic and fuel parameters. The highest biomass productivity was obtained in Nostoc punctiforme MBDU 621 (19.22 mg/L/day) followed by Calothrix sp. MBDU 701 (13.43 mg/L/day), while lipid productivity and lipid content were highest in Nostoc spongiaeforme MBDU 704 (4.45 mg/L/day and 22.5% dwt) followed by Calothrix sp. MBDU 701 (1.54 mg/L/day and 10.75% dwt). Among the tested strains, Nostoc spongiaeforme MBDU 704 and Nostoc punctiforme MBDU 621 were selected as promising strains for good quality biodiesel production by Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) and Graphical Analysis for Interactive Assistance (GAIA) analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Modeling a failure criterion for U-Mo/Al dispersion fuel

    Science.gov (United States)

    Oh, Jae-Yong; Kim, Yeon Soo; Tahk, Young-Wook; Kim, Hyun-Jung; Kong, Eui-Hyun; Yim, Jeong-Sik

    2016-05-01

    The breakaway swelling in U-Mo/Al dispersion fuel is known to be caused by large pore formation enhanced by interaction layer (IL) growth between fuel particles and Al matrix. In this study, a critical IL thickness was defined as a criterion for the formation of a large pore in U-Mo/Al dispersion fuel. Specifically, the critical IL thickness is given when two neighboring fuel particles come into contact with each other in the developed IL. The model was verified using the irradiation data from the RERTR tests and KOMO-4 test. The model application to full-sized sample irradiations such as IRISs, FUTURE, E-FUTURE, and AFIP-1 tests resulted in conservative predictions. The parametric study revealed that the fuel particle size and the homogeneity of the fuel particle distribution are influential for fuel performance.
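
    The geometric idea behind the criterion can be illustrated with a toy calculation (our own simplification, not the paper's model): if two equal neighboring particles grow interaction layers at the same rate, the layers meet when each has advanced half of the initial surface-to-surface gap.

        def critical_il_thickness(d_center, r_particle):
            """Toy estimate of the IL thickness at which the ILs of two
            equal neighboring fuel particles come into contact."""
            gap = d_center - 2.0 * r_particle   # initial surface-to-surface gap
            return max(gap, 0.0) / 2.0

        # Example: 50 um diameter particles whose centers are 120 um apart
        print(critical_il_thickness(120e-6, 25e-6))  # -> 3.5e-05 (35 um)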

  2. An integrative analysis of reprogramming in human isogenic system identified a clone selection criterion.

    Science.gov (United States)

    Shutova, Maria V; Surdina, Anastasia V; Ischenko, Dmitry S; Naumov, Vladimir A; Bogomazova, Alexandra N; Vassina, Ekaterina M; Alekseev, Dmitry G; Lagarkova, Maria A; Kiselev, Sergey L

    2016-01-01

    The pluripotency of newly developed human induced pluripotent stem cells (iPSCs) is usually characterized by physiological parameters; i.e., by their ability to maintain the undifferentiated state and to differentiate into derivatives of the 3 germ layers. Nevertheless, a molecular comparison of physiologically normal iPSCs to the "gold standard" of pluripotency, embryonic stem cells (ESCs), often reveals a set of genes with different expression and/or methylation patterns in iPSCs and ESCs. To evaluate the contribution of the reprogramming process, parental cell type, and fortuity in the signature of human iPSCs, we developed a complete isogenic reprogramming system. We performed a genome-wide comparison of the transcriptome and the methylome of human isogenic ESCs, 3 types of ESC-derived somatic cells (fibroblasts, retinal pigment epithelium and neural cells), and 3 pairs of iPSC lines derived from these somatic cells. Our analysis revealed a high input of stochasticity in the iPSC signature that does not retain specific traces of the parental cell type and reprogramming process. We showed that 5 iPSC clones are sufficient to find with 95% confidence at least one iPSC clone indistinguishable from their hypothetical isogenic ESC line. Additionally, on the basis of a small set of genes that are characteristic of all iPSC lines and isogenic ESCs, we formulated an approach of "the best iPSC line" selection and confirmed it on an independent dataset.
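
    The "5 clones" figure is consistent with a simple binomial reading (our illustration, not necessarily the authors' derivation): if each clone is independently indistinguishable from the isogenic ESC line with probability p, then

        1 - (1 - p)^5 \ge 0.95 \iff p \ge 1 - 0.05^{1/5} \approx 0.45,

    i.e., roughly every second clone must be "ESC-like" for five clones to suffice at the 95% confidence level.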

  3. Bacterial selection for biological control of plant disease: criterion determination and validation

    Directory of Open Access Journals (Sweden)

    Monalize Salete Mota

    Full Text Available This study aimed to evaluate the biocontrol potential of bacteria isolated from different plant species and soils. The production of compounds related to phytopathogen biocontrol and/or promotion of plant growth in bacterial isolates was evaluated by measuring the production of antimicrobial compounds (ammonia and antibiosis) and hydrolytic enzymes (amylases, lipases, proteases, and chitinases) and phosphate solubilization. Of the 1219 bacterial isolates, 92% produced one or more of the eight compounds evaluated, but only 1% of the isolates produced all the compounds. Proteolytic activity was most frequently observed among the bacterial isolates. Among the compounds which often determine the success of biocontrol, 43% produced compounds which inhibit mycelial growth of Monilinia fructicola, but only 11% hydrolyzed chitin. Bacteria from different plant species (rhizosphere or phylloplane) exhibited differences in the ability to produce the compounds evaluated. Most bacterial isolates with biocontrol potential were isolated from rhizospheric soil. The most efficient bacteria (producing at least five compounds related to phytopathogen biocontrol and/or plant growth), 86 in total, were evaluated for their biocontrol potential by observing their ability to kill juvenile Mesocriconema xenoplax. Thus, we clearly observed that bacteria that produced more compounds related to phytopathogen biocontrol and/or plant growth had a higher efficacy for nematode biocontrol, which validated the selection strategy used.

  4. Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2017-11-01

    Full Text Available Dynamic recrystallization (DRX processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistencies that models for DRX may exhibit. For a consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transition in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model is able to follow the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
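
    The second derivative criterion itself is straightforward to apply to a measured flow curve. The sketch below is illustrative: real stress-strain data must be smoothed before differentiating, and the stress should be monotonic over the fitted range.

        import numpy as np

        def drx_onset_stress(strain, stress):
            """Stress at the inflection of theta(sigma), theta = d(sigma)/d(eps),
            i.e. the onset of DRX according to the SDC of Poliak and Jonas."""
            theta = np.gradient(stress, strain)      # strain-hardening rate
            dtheta = np.gradient(theta, stress)
            d2theta = np.gradient(dtheta, stress)    # second derivative wrt stress
            idx = np.where(np.diff(np.sign(d2theta)) != 0)[0]
            return stress[idx[0]] if idx.size else None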

  5. Surrogate screening models for the low physical activity criterion of frailty.

    Science.gov (United States)

    Eckel, Sandrah P; Bandeen-Roche, Karen; Chaves, Paulo H M; Fried, Linda P; Louis, Thomas A

    2011-06-01

    Low physical activity, one of five criteria in a validated clinical phenotype of frailty, is assessed by a standardized, semiquantitative questionnaire on up to 20 leisure time activities. Because of the time demanded to collect the interview data, it has been challenging to translate to studies other than the Cardiovascular Health Study (CHS), for which it was developed. Considering subsets of activities, we identified and evaluated streamlined surrogate assessment methods and compared them to one implemented in the Women's Health and Aging Study (WHAS). Using data on men and women ages 65 and older from the CHS, we applied logistic regression models to rank activities by "relative influence" in predicting low physical activity. We considered subsets of the most influential activities as inputs to potential surrogate models (logistic regressions). We evaluated predictive accuracy and predictive validity using the area under receiver operating characteristic curves and assessed criterion validity using proportional hazards models relating frailty status (defined using the surrogate) to mortality. Walking for exercise and moderately strenuous household chores were highly influential for both genders. Women required fewer activities than men for accurate classification. The WHAS model (8 CHS activities) was an effective surrogate, but a surrogate using 6 activities (walking, chores, gardening, general exercise, mowing and golfing) was also highly predictive. We recommend a 6-activity questionnaire to assess physical activity for men and women. If efficiency is essential and the study involves only women, fewer activities can be included.
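
    The surrogate-evaluation loop described above can be sketched with scikit-learn on synthetic data (everything below - the activity matrix, the criterion definition, and the chosen columns - is invented for illustration):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.gamma(shape=2.0, scale=300.0, size=(500, 6))  # kcal/week, 6 activities
        y = (X[:, :2].sum(axis=1) < 700).astype(int)          # synthetic criterion flag

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Surrogate using only the two most "influential" activities (columns 0-1)
        surrogate = LogisticRegression(max_iter=1000).fit(X_tr[:, :2], y_tr)
        auc = roc_auc_score(y_te, surrogate.predict_proba(X_te[:, :2])[:, 1])
        print(f"surrogate AUC = {auc:.2f}")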

  6. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    Science.gov (United States)

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
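
    A crude numerical version of the MV criterion for a two-point design on a compact interval can be sketched as follows (the logistic-type weight function and the two-point restriction are our assumptions; the authors' applet handles more general cases):

        import numpy as np
        from scipy.optimize import minimize

        lam = lambda x: np.exp(x) / (1 + np.exp(x)) ** 2   # logistic-type weight

        def max_variance(design):
            """Design = (x1, x2, w): points x1, x2 with weights w, 1 - w.
            Returns the larger of the two estimate variances (MV objective)."""
            x1, x2, w = design
            M = np.zeros((2, 2))
            for x, wt in ((x1, w), (x2, 1 - w)):
                f = np.array([1.0, x])
                M += wt * lam(x) * np.outer(f, f)   # weighted Fisher information
            # tiny ridge guards against singular designs during the search
            return np.diag(np.linalg.inv(M + 1e-9 * np.eye(2))).max()

        res = minimize(max_variance, x0=[0.2, 0.8, 0.5],
                       bounds=[(0.0, 1.0), (0.0, 1.0), (1e-3, 1 - 1e-3)])
        print(res.x, res.fun)   # candidate MV-optimal design on [0, 1]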

  7. An advanced constitutive model in the sheet metal forming simulation: the Teodosiu microstructural model and the Cazacu Barlat yield criterion

    International Nuclear Information System (INIS)

    Alves, J.L.; Oliveira, M.C.; Menezes, L.F.

    2004-01-01

    Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming processes are studied: a recently proposed advanced constitutive model based on the Teodosiu microstructural model and the Cazacu Barlat yield criterion is compared with a more classical one, based on the Swift law and the Hill 1948 yield criterion. These constitutive models are implemented into DD3IMP, a finite element home code specifically developed to simulate sheet metal forming processes, which generically is a 3-D elastoplastic finite element code with an updated Lagrangian formulation, following a fully implicit time integration scheme, large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts the final shape (mean cup height and earing profile) of the formed part more accurately, as can be concluded from the comparison with the experimental results
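
    For reference, the classical pair used here as the baseline has the standard forms (one common normalization; notation ours):

        \sigma_Y = K\,(\varepsilon_0 + \bar{\varepsilon}^{\,p})^{n}   (Swift hardening law)

        F(\sigma_{yy}-\sigma_{zz})^2 + G(\sigma_{zz}-\sigma_{xx})^2 + H(\sigma_{xx}-\sigma_{yy})^2
            + 2L\sigma_{yz}^2 + 2M\sigma_{zx}^2 + 2N\sigma_{xy}^2 = \sigma_Y^2   (Hill 1948)

    with the anisotropy coefficients F, ..., N usually identified from r-values or yield stresses measured in different directions.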

  8. A model expansion criterion for treating surface topography in ray path calculations using the eikonal equation

    International Nuclear Information System (INIS)

    Ma, Ting; Zhang, Zhongjie

    2014-01-01

    Irregular surface topography has revolutionized how seismic traveltime is calculated and the data are processed. There are two main schemes for dealing with an irregular surface in the seismic first-arrival traveltime calculation: (1) expanding the model and (2) flattening the surface irregularities. In the first scheme, a notional infill medium is added above the surface to expand the physical space into a regular space, as required by the eikonal equation solver. Here, we evaluate the chosen propagation velocity in the infill medium through ray path tracking with the eikonal equation-solved traveltime field, and observe that the ray paths will be physically unrealistic for some values of this propagation velocity. The choice of a suitable propagation velocity in the infill medium is crucial for seismic processing of irregular topography. Our model expansion criterion for dealing with surface topography in the calculation of traveltime and ray paths using the eikonal equation highlights the importance of both the propagation velocity of the infill physical medium and the topography gradient. (paper)

  9. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    Directory of Open Access Journals (Sweden)

    Qingwen Li

    2015-01-01

    Full Text Available In tunnel and underground space engineering, the blasting wave attenuates in the host rock from a shock wave to a stress wave and then to an elastic seismic wave; correspondingly, the host rock forms a crushed zone, a fractured zone, and an elastic seismic zone under the blasting loading and waves. In this paper, an accurate mathematical dynamic loading model was built, and the crushed and fractured zones were considered as the blasting vibration source, thereby deducting the portion of energy spent on breaking the host rock. This complicated dynamic problem of segmented differential blasting was thus treated as an equivalent elastic boundary problem by taking advantage of Saint-Venant's principle. Finally, a 3D model in the finite element software FLAC3D, given the constitutive parameters, the uniformly distributed time-varying loading, and the cylindrical attenuation law, was used to predict the velocity curves and effective tensile curves for calculating safety criterion formulas for the surrounding rock and tunnel liner, after verifying well against the in situ monitoring data.

  10. Criterion for the selection of a system of treatment of residues and their application to the wine of the alcoholic industry

    International Nuclear Information System (INIS)

    Caicedo M, Luis Alfonso; Fonseca, Jose Joaquin; Rodriguez, Gerardo

    1996-01-01

    The selection of a residue treatment system should follow the criterion of the process known as BATEA (best available and economically feasible technology). Because its application is difficult in the absence of objective evaluation parameters, a method is presented that classifies the evaluation criteria into general and specific ones. For the quantification of these aspects, factors such as FQO, FCI, FTR, FD, and the treatment applicability factor (FAT) are used. Applying the method to vinasse from the alcohol industry leads to the conclusion that evaporation is the best treatment system for this process, while the other systems are either undeveloped or increase the compensation rate

  11. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape. Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with one of the generalized Akaike information criteria was the most efficient model selection procedure (i.e., required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.

  12. Electronic Devices, Methods, and Computer Program Products for Selecting an Antenna Element Based on a Wireless Communication Performance Criterion

    DEFF Research Database (Denmark)

    2014-01-01

    A method of operating an electronic device includes providing a plurality of antenna elements, evaluating a wireless communication performance criterion to obtain a performance evaluation, and assigning a first one of the plurality of antenna elements to a main wireless signal reception and transmission path and a second one of the plurality of antenna elements to a diversity wireless signal reception path, based on the performance evaluation.
  13. An evolutionary algorithm for model selection

    Energy Technology Data Exchange (ETDEWEB)

    Bicker, Karl [CERN, Geneva (Switzerland); Chung, Suh-Urk; Friedrich, Jan; Grube, Boris; Haas, Florian; Ketzer, Bernhard; Neubert, Sebastian; Paul, Stephan; Ryabchikov, Dimitry [Technische Univ. Muenchen (Germany)

    2013-07-01

    When performing partial-wave analyses of multi-body final states, the choice of the fit model, i.e. the set of waves to be used in the fit, can significantly alter the results of the partial wave fit. Traditionally, the models were chosen based on physical arguments and by observing the changes in log-likelihood of the fits. To reduce possible bias in the model selection process, an evolutionary algorithm was developed based on a Bayesian goodness-of-fit criterion which takes into account the model complexity. Starting from systematically constructed pools of waves which contain significantly more waves than the typical fit model, the algorithm yields a model with an optimal log-likelihood and with a number of partial waves which is appropriate for the number of events in the data. Partial waves with small contributions to the total intensity are penalized and likely to be dropped during the selection process, as are models where excessive correlations between single waves occur. Due to the automated nature of the model selection, a much larger part of the model space can be explored than would be possible in a manual selection. In addition, the method allows one to assess the dependence of the fit result on the fit model, which is an important contribution to the systematic uncertainty.

  14. [X-ray evaluation of renal function in children with hydronephrosis as a criterion in the selection of therapeutic tactics].

    Science.gov (United States)

    Bosin, V Iu; Murvanidze, D D; Sturua, D G; Nabokov, A K; Soloshenko, V N

    1989-01-01

    The anatomic parameters of the kidneys and the rate of glomerular filtration were measured in 77 children with unilateral hydronephrosis and in 27 children with nonobstructive diseases of the urinary tract, according to the clearance of an opaque medium during excretory urography. Alterations in the anatomic parameters of the kidneys in obstructive disease did not reflect the severity of the functional disorders. It was established that separate assessment of the filtration function of the hydronephrotic and contralateral kidneys is possible. A new diagnostic criterion is offered, namely the index of relative clearance, which makes it possible to measure the degree of compensation in the preserved glomeruli and the extent of the sclerotic process. It is demonstrated that accurate measurement of the functional parameters of the affected kidney should underlie the choice of treatment in children with unilateral hydronephrosis.

  15. Evaluation of probabilistic flow predictions in sewer systems using grey box models and a skill score criterion

    DEFF Research Database (Denmark)

    Thordarson, Fannar Ørn; Breinholt, Anders; Møller, Jan Kloppenborg

    2012-01-01

    term and a diffusion term, respectively accounting for the deterministic and stochastic part of the models. Furthermore, a distinction is made between the process noise and the observation noise. We compare five different model candidates' predictive performances that solely differ with respect...... to the diffusion term description up to a 4 h prediction horizon by adopting the prediction performance measures: reliability, sharpness and skill score, to pinpoint the preferred model. The prediction performance of a model is reliable if the observed coverage of the prediction intervals corresponds to the nominal...... coverage of the prediction intervals, i.e. the bias between these coverages should ideally be zero. The sharpness is a measure of the distance between the lower and upper prediction limits, and the skill score criterion makes it possible to pinpoint the preferred model by taking into account both reliability

  16. Evaluation criterions and establishment of dry eye model of rats induced by BTX-B

    Directory of Open Access Journals (Sweden)

    Hai-Feng Zhu

    2015-09-01

    Full Text Available AIM: To establish a rat dry eye model induced by botulinum toxin B (BTX-B) and provide a basis for studying the pathogenesis and experimental treatment of dry eye caused by inflammation. METHODS: Thirty-six healthy female SD rats were randomly divided into four groups. The experimental arm comprised three groups injected with 0.1 mL of 1.25, 5, or 10 mU BTX-B solution, respectively, into the right lacrimal gland; the control group was injected with 0.1 mL of normal saline into the right lacrimal gland. Schirmer I tests (SIt) and corneal fluorescein (FL) staining were performed at 3, 7, 14 and 28d. Another 32 rats were randomly divided into two groups; the rats in the experimental group were injected with 0.1 mL of 1.25 mU BTX-B solution into the right lacrimal gland, and five rats were randomly chosen for removal of lacrimal gland tissue at 3, 7, 14, 21 and 42d. Lacritin protein was detected qualitatively and quantitatively by immunofluorescence and Western blot, and histopathology was examined by routine HE staining. RESULTS: In the three experimental groups, tear secretion decreased and the corneal epithelium was damaged at 3d, with no significant difference between the groups in either change (P>0.05). The changes peaked at 7d and improved at 14d. Tear secretion returned to the normal level at 28d, but the corneal epithelial damage persisted. Lacritin protein expression was observed only in acinar cells in both the experimental and control groups, and its content in the experimental group decreased significantly: the decrease appeared at 3d, peaked at 7d, improved at 14d, began to recover at 28d, and returned to the normal level at 42d. CONCLUSION: A dry eye model in SD rats can be successfully established by lacrimal gland injection of 1.25 mU BTX

  17. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  18. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE, are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random-weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  19. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though....../Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from...... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian....

  20. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  1. The Distributed Criterion Design

    Science.gov (United States)

    McDougall, Dennis

    2006-01-01

    This article describes and illustrates a novel form of the changing criterion design called the distributed criterion design, which represents perhaps the first advance in the changing criterion design in four decades. The distributed criterion design incorporates elements of the multiple baseline and A-B-A-B designs and is well suited to applied…

  2. Recruiter Selection Model

    National Research Council Canada - National Science Library

    Halstead, John B

    2006-01-01

    .... The research uses a combination of statistical learning, feature selection methods, and multivariate statistics to determine the better prediction function approximation with features obtained...

  3. Half-trek criterion for generic identifiability of linear structural equation models

    NARCIS (Netherlands)

    Foygel, R.; Draisma, J.; Drton, M.

    2012-01-01

    A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations

  5. A new LP formulation of the admission control problem modelled as an MDP under average reward criterion

    Science.gov (United States)

    Pietrabissa, Antonio

    2011-12-01

    The admission control problem can be modelled as a Markov decision process (MDP) under the average cost criterion and formulated as a linear programming (LP) problem. The LP formulation is attractive in the present and future communication networks, which support an increasing number of classes of service, since it can be used to explicitly control class-level requirements, such as class blocking probabilities. On the other hand, the LP formulation suffers from scalability problems as the number C of classes increases. This article proposes a new LP formulation, which, even if it does not introduce any approximation, is much more scalable: the problem size reduction with respect to the standard LP formulation is O((C + 1)^2/2^C). Theoretical and numerical simulation results prove the effectiveness of the proposed approach.
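
    For context, the classical LP behind such formulations (the unichain average-reward case; the paper's contribution is an equivalent but much smaller program) optimizes the stationary state-action frequencies x(s, a):

        \max_x \; \sum_{s,a} r(s,a)\, x(s,a)
        \quad \text{s.t.} \quad \sum_{a} x(s',a) = \sum_{s,a} P(s' \mid s,a)\, x(s,a) \;\; \forall s',
        \qquad \sum_{s,a} x(s,a) = 1, \qquad x(s,a) \ge 0.

    Class-level requirements such as blocking probabilities are linear functions of x, which is why they can be imposed directly as additional LP constraints.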

  6. A potential new selection criterion for breeding winter barley optimal protein and amino acid profiles for liquid pig feed

    DEFF Research Database (Denmark)

    Christensen, Jesper Bjerg; Blaabjerg, Karoline; Poulsen, Hanne Damgaard

    The hypothesis is that cereal proteases in liquid feed degrade and convert water-insoluble storage protein into water-soluble protein, which may improve the digestibility of protein in pigs compared with dry feeding. Protein utilization is increased by matching the amino acid (AA) content...... of the diet as closely as possible to the pigs' requirement. By improving the availability of isoleucine, leucine, histidine and phenylalanine, which are limiting and commercially unavailable, the amount of crude protein in the pig feed can be reduced, resulting in a decreased excretion of nitrogen. The aim...... of glutamic acid revealed differences between the cultivars and the solubilised protein at all three times. These preliminary results may indicate that improvements of the nitrogen utilization in pigs fed soaked winter barley depend on the choice of cultivar and soaking time, and may serve as a new selection...

  7. Assessment of Surface Air Temperature over China Using Multi-criterion Model Ensemble Framework

    Science.gov (United States)

    Li, J.; Zhu, Q.; Su, L.; He, X.; Zhang, X.

    2017-12-01

    The General Circulation Models (GCMs) are designed to simulate the present climate and project future trends. It has been noticed that the performances of GCMs are not always in agreement with each other over different regions. Model ensemble techniques have been developed to post-process the GCMs' outputs and improve their prediction reliabilities. To evaluate the performances of GCMs, root-mean-square error, correlation coefficient, and uncertainty are commonly used statistical measures. However, the simultaneous achievement of these satisfactory statistics cannot be guaranteed when using many model ensemble techniques. Meanwhile, uncertainties and future scenarios are critical for Water-Energy management and operation. In this study, a new multi-model ensemble framework was proposed. It uses a state-of-the-art evolutionary multi-objective optimization algorithm, termed Multi-Objective Complex Evolution Global Optimization with Principal Component Analysis and Crowding Distance (MOSPD), to derive optimal GCM ensembles and demonstrate the trade-offs among various solutions. Such trade-off information was further analyzed with a robust Pareto front with respect to different statistical measures. A case study was conducted to optimize the surface air temperature (SAT) ensemble solutions over seven geographical regions of China for the historical period (1900-2005) and future projection (2006-2100). The results showed that the ensemble solutions derived with the MOSPD algorithm are superior to the simple model average and to any single model output during the historical simulation period. For the future prediction, the proposed ensemble framework identified that the largest SAT change would occur in South Central China under the RCP 2.6 scenario, North Eastern China under the RCP 4.5 scenario, and North Western China under the RCP 8.5 scenario, while the smallest SAT change would occur in Inner Mongolia under the RCP 2.6 scenario, South Central China under the RCP 4.5 scenario, and

  8. Multi-criterion model ensemble of CMIP5 surface air temperature over China

    Science.gov (United States)

    Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming

    2018-05-01

    The global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and, therefore, supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions. The reason is that GCMs' configurations, module characteristics, and dynamic forcings vary from one to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R) and uncertainty are commonly used statistics for evaluating the performances of GCMs. However, the simultaneous achievement of all satisfactory statistics cannot be guaranteed when using many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three different statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained by a simple model average, as well as to any single GCM output. The improvements in the statistics vary across the climatic regions of China. Future projection (2006-2100) with the proposed ensemble method identifies that the largest (smallest) temperature changes will happen in the
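
    The three objectives traded off in both of these ensemble studies are cheap to evaluate for any candidate weight vector. A minimal sketch (the "uncertainty" proxy below is our own simplistic choice, not the papers' definition):

        import numpy as np

        def ensemble_stats(weights, models, obs):
            """weights: (n_models,), models: (n_models, n_time), obs: (n_time,)."""
            ens = weights @ models                    # weighted multi-model mean
            rmse = np.sqrt(np.mean((ens - obs) ** 2))
            cc = np.corrcoef(ens, obs)[0, 1]
            spread = np.std(models, axis=0).mean()    # crude uncertainty proxy
            return rmse, cc, spread

    A multi-objective optimizer such as the MOSPD algorithm then searches the weight simplex for the Pareto front of these statistics.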

  9. Seasonality of mean and heavy precipitation in the area of the Vosges Mountains: dependence on the selection criterion

    Czech Academy of Sciences Publication Activity Database

    Minářová, J.; Müller, Miloslav; Clappier, A.

    2017-01-01

    Roč. 37, č. 5 (2017), s. 2654-2666 ISSN 0899-8418 Institutional support: RVO:68378289 Keywords : climate extremes * model * variability * events * statistics * weather * cops * Vosges Mountains * seasonality * annual course * extreme * heavy rainfall * precipitation * POT * GEV Subject RIV: DG - Atmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 3.760, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/joc.4871/abstract

  10. Do candidate reactions relate to job performance or affect criterion-related validity? A multistudy investigation of relations among reactions, selection test scores, and job performance.

    Science.gov (United States)

    McCarthy, Julie M; Van Iddekinge, Chad H; Lievens, Filip; Kung, Mei-Chuan; Sinar, Evan F; Campion, Michael A

    2013-09-01

    Considerable evidence suggests that how candidates react to selection procedures can affect their test performance and their attitudes toward the hiring organization (e.g., recommending the firm to others). However, very few studies of candidate reactions have examined one of the outcomes organizations care most about: job performance. We attempt to address this gap by developing and testing a conceptual framework that delineates whether and how candidate reactions might influence job performance. We accomplish this objective using data from 4 studies (total N = 6,480), 6 selection procedures (personality tests, job knowledge tests, cognitive ability tests, work samples, situational judgment tests, and a selection inventory), 5 key candidate reactions (anxiety, motivation, belief in tests, self-efficacy, and procedural justice), 2 contexts (industry and education), 3 continents (North America, South America, and Europe), 2 study designs (predictive and concurrent), and 4 occupational areas (medical, sales, customer service, and technological). Consistent with previous research, candidate reactions were related to test scores, and test scores were related to job performance. Further, there was some evidence that reactions affected performance indirectly through their influence on test scores. Finally, in no cases did candidate reactions affect the prediction of job performance by increasing or decreasing the criterion-related validity of test scores. Implications of these findings and avenues for future research are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved

  11. Model selection in periodic autoregressions

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1994-01-01

    This paper focuses on the issue of periodic autoregressive (PAR) time series model selection in practice. One aspect of model selection is the choice of the appropriate PAR order. This can be of interest for the evaluation of economic models. Further, the appropriate PAR order is important

  12. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping, and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
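
    To make the comparison methods concrete, here is a minimal, hedged sketch (not taken from the article) of three of the five methods named above, computed from maximized log-likelihoods; the numeric values are purely illustrative.

        import numpy as np
        from scipy.stats import chi2

        def aic(loglik, k):
            """Akaike's information criterion: 2k - 2 ln L (lower is better)."""
            return 2 * k - 2 * loglik

        def bic(loglik, k, n):
            """Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
            return k * np.log(n) - 2 * loglik

        def likelihood_ratio_test(loglik_null, loglik_alt, df):
            """p-value of the likelihood ratio statistic for two nested models."""
            return chi2.sf(2 * (loglik_alt - loglik_null), df)

        # Maximized log-likelihoods of two nested models fit to n = 100 points
        # (illustrative numbers only).
        ll0, ll1, n = -152.3, -149.8, 100
        print(aic(ll0, 2), aic(ll1, 3))
        print(bic(ll0, 2, n), bic(ll1, 3, n))
        print(likelihood_ratio_test(ll0, ll1, df=1))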

  13. Modeling Natural Selection

    Science.gov (United States)

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  14. Models selection and fitting

    International Nuclear Information System (INIS)

    Martin Llorente, F.

    1990-01-01

    Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination, and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination

  15. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
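
    The gist of the procedure can be sketched as follows, under stated assumptions (sklearn's LASSO parameterization, centered and standardized X, and an assumed quantile choice); this is a simplification for illustration, not a reimplementation of the published method.

        import numpy as np

        def permutation_penalty(X, y, n_perm=100, quantile=0.75, seed=0):
            """Sketch of permutation selection for the LASSO penalty.

            With centered, standardized X and sklearn's parameterization,
            alpha_max = max_j |x_j' y*| / n is the smallest penalty that sets
            every coefficient to zero for a permuted response y*.  The selected
            penalty is an upper quantile of alpha_max over permutations (the
            quantile choice here is an assumption of this sketch)."""
            rng = np.random.default_rng(seed)
            n = len(y)
            alpha_max = np.empty(n_perm)
            for b in range(n_perm):
                y_star = rng.permutation(y)
                alpha_max[b] = np.max(np.abs(X.T @ (y_star - y_star.mean()))) / n
            return np.quantile(alpha_max, quantile)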

  16. Quality Practices, Corporate Social Responsibility and the “Society Results” Criterion of the EFQM Model

    Directory of Open Access Journals (Sweden)

    María de la Cruz del Río-Rama

    2017-04-01

    Purpose – The purpose of this research is to analyze whether quality management practices implemented and carried out by the rural accommodation establishments under study influence the society results obtained by these organizations, understood as participation in and development of the local community. Design/methodology/approach – The working methodology consists of carrying out an exploratory and confirmatory factor analysis in order to test the psychometric properties of the measurement scales; the hypothesized relationships between critical factors and society results are then examined using structural equation modeling. Findings – The study provides evidence of a weak relationship between the critical factors of quality and society results in rural accommodation establishments. The results suggest that process management is the only quality practice that has a direct effect on society results, and the rest of the critical factors are considered antecedents of it. Originality/value – The contribution of this study, which explores the impact of the critical factors of quality on society results, is to confirm that there is an effect of the critical factors of quality on society results (social and environmental responsibilities) through the direct relationship of process management. Very few studies examine this relationship.

  17. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them. These are IEEE 802.11 (as an example for wireless local area networks), IEEE 802.16 (as an example for wireless metropolitan networks) and IEEE 802.15 (as an example for body area networks). Each section on these three systems also concludes with a set of model implementations that are available today.

  18. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model not only supports selection of the best project, but can also be used to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion, based on the influential network relation map.

  19. Launch vehicle selection model

    Science.gov (United States)

    Montoya, Alex J.

    1990-01-01

    Over the next 50 years, humans will be heading for the Moon and Mars to build scientific bases to gain further knowledge about the universe and to develop rewarding space activities. These large scale projects will last many years and will require large amounts of mass to be delivered to Low Earth Orbit (LEO). It will take a great deal of planning to complete these missions in an efficient manner. The planning of a future Heavy Lift Launch Vehicle (HLLV) will significantly impact the overall multi-year launching cost for the vehicle fleet depending upon when the HLLV will be ready for use. It is desirable to develop a model in which many trade studies can be performed. In one sample multi-year space program analysis, the total launch vehicle cost of implementing the program was reduced from 50 percent to 25 percent. This indicates how critical it is to reduce space logistics costs. A linear programming model has been developed to answer such questions. The model is now in its second phase of development, and this paper will address the capabilities of the model and its intended uses. The main emphasis over the past year was to make the model user friendly and to incorporate additional realistic constraints that are difficult to represent mathematically. We have developed a methodology in which the user has to be knowledgeable about the mission model and the requirements of the payloads. We have found a representation that will cut down the solution space of the problem by inserting some preliminary tests to eliminate some infeasible vehicle solutions. The paper will address the handling of these additional constraints and the methodology for incorporating new costing information utilizing learning curve theory. The paper will review several test cases that will explore the preferred vehicle characteristics and the preferred period of construction, i.e., within the next decade, or in the first decade of the next century. Finally, the paper will explore the interaction

  20. Selectivity criterion for pyrazolo[3,4-b]pyrid[az]ine derivatives as GSK-3 inhibitors: CoMFA and molecular docking studies.

    Science.gov (United States)

    Patel, Dhilon S; Bharatam, Prasad V

    2008-05-01

    In the development of drugs targeting GSK-3 for the treatment of diabetes mellitus, selective inhibition is an important requirement owing to the possibility of side effects arising from other kinases. A three-dimensional quantitative structure-activity relationship (3D-QSAR) study has been carried out on a set of pyrazolo[3,4-b]pyrid[az]ine derivatives, which includes non-selective and selective GSK-3 inhibitors. The CoMFA models were derived from a training set of 59 molecules. A test set containing 14 molecules (not used in model generation) was used to validate the CoMFA models. The best CoMFA model, generated by applying a leave-one-out (LOO) cross-validation study, gave cross-validated r(cv)(2) and conventional r(conv)(2) values of 0.60 and 0.97, respectively, and an r(pred)(2) value of 0.55, which demonstrates the predictive ability of the model. The developed models explain well (i) the observed variance in the activity and (ii) the structural differences between the selective and non-selective GSK-3 inhibitors. Validation based on molecular docking has also been carried out to explain the structural differences between the selective and non-selective molecules in the given series.

  1. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. This allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
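
    For orientation, the structure of the selection model can be written in standard notation (a textbook rendering, not quoted from the paper): the outcome is observed only when a latent selection equation is positive, and the SLt variant replaces the bivariate normal error law with a bivariate Student's t.

        y_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \varepsilon_i \quad \text{(observed iff } s_i^{*} > 0\text{)}, \qquad
        s_i^{*} = \mathbf{w}_i^{\top}\boldsymbol{\gamma} + \eta_i, \qquad
        (\varepsilon_i, \eta_i) \sim
        \begin{cases}
          \mathcal{N}_2(\mathbf{0}, \Sigma) & \text{(SLN)} \\
          t_2(\mathbf{0}, \Sigma, \nu) & \text{(SLt)}
        \end{cases}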

  2. Unified Bohm criterion

    Energy Technology Data Exchange (ETDEWEB)

    Kos, L. [LECAD Laboratory, Faculty of Mechanical Engineering, University of Ljubljana, SI-1000 Ljubljana (Slovenia); Tskhakaya, D. D.; Jelić, N. [Institute for Theoretical Physics, Fusion@ÖAW, University of Innsbruck, A-6020 Innsbruck (Austria)

    2015-09-15

    Recent decades have seen research into the conditions necessary for the formation of the monotonic potential shape in the sheath appearing at plasma boundaries such as walls, in the fluid and kinetic approximations separately. Although either of these approaches yields a formulation commonly known as the much-acclaimed Bohm criterion (BC), the respective results involve essentially different physical quantities that describe the ion gas behavior. In the fluid approach, such a quantity is clearly identified as the ion directional velocity. In the kinetic approach, the ion behavior is formulated via a quantity (the squared inverse velocity averaged over the ion distribution function) without any clear physical significance, which is, moreover, impractical. In the present paper, we try to explain this difference by deriving a condition called here the Unified Bohm Criterion, which combines an advanced fluid model with an upgraded explicit kinetic formula in a new form of the BC. By introducing a generalized polytropic coefficient function, the unified BC can be interpreted in a form that holds irrespective of whether the ions are described kinetically or in the fluid approximation.
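
    For reference, the two formulations being unified can be stated in their textbook forms (standard statements, supplied here for context rather than quoted from the paper), with u_0 the ion directional velocity at the sheath edge and f_i the ion velocity distribution:

        u_0 \;\ge\; c_s = \sqrt{\frac{k_B T_e}{m_i}} \quad \text{(fluid)}, \qquad
        \left\langle \frac{1}{v^2} \right\rangle = \frac{1}{n_i}\int \frac{f_i(v)}{v^2}\,dv \;\le\; \frac{m_i}{k_B T_e} \quad \text{(kinetic)}

    The paper's generalized polytropic coefficient function is what lets a single condition reduce to either form.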

  3. A New Infrared Color Criterion for the Selection of 0 < z < 7 AGNs: Application to Deep Fields and Implications for JWST Surveys

    Science.gov (United States)

    Messias, H.; Afonso, J.; Salvato, M.; Mobasher, B.; Hopkins, A. M.

    2012-08-01

    It is widely accepted that observations at mid-infrared (mid-IR) wavelengths enable the selection of galaxies with nuclear activity, which may not be revealed even in the deepest X-ray surveys. Many mid-IR color-color criteria have been explored to accomplish this goal and tested thoroughly in the literature. Besides missing many low-luminosity active galactic nuclei (AGNs), one of the main conclusions is that, with increasing redshift, the contamination by non-active galaxies becomes significant (especially at z >~ 2.5). This is problematic for the study of the AGN phenomenon in the early universe, the main goal of many of the current and future deep extragalactic surveys. In this work new near- and mid-IR color diagnostics are explored, aiming for improved efficiency—better completeness and less contamination—in selecting AGNs out to very high redshifts. We restrict our study to the James Webb Space Telescope wavelength range (0.6-27 μm). The criteria are created based on the predictions by state-of-the-art galaxy and AGN templates covering a wide variety of galaxy properties, and tested against control samples with deep multi-wavelength coverage (ranging from the X-rays to radio frequencies). We show that the colors Ks - [4.5], [4.5] - [8.0], and [8.0] - [24] are ideal as AGN/non-AGN diagnostics out to, respectively, z ~ 2.5-3. However, when the source redshift is unknown, these colors should be combined. We thus develop an improved IR criterion (using Ks and IRAC bands, KI) as a new alternative at z <~ 2.5 (a 50%-90% level of successful AGN selection). We also propose KIM (using Ks, IRAC, and MIPS 24 μm bands), which aims to select AGN hosts from local distances to as far back as the end of reionization (0 <~ z <~ 7). Overall, KIM shows a ~30%-40% completeness and a >70%-90% level of successful AGN selection. KI and KIM are built to be reliable against a ~10%-20% error in flux, are based on existing filters, and are suitable for immediate use.

  4. Rank-based model selection for multiple ions quantum tomography

    International Nuclear Information System (INIS)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and to datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)

  5. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple

  6. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  7. Proposing a model for safety risk assessment in the construction industry using gray multi-criterion decision-making

    Directory of Open Access Journals (Sweden)

    S. M. Abootorabi

    2014-09-01

    Introduction: Statistical reports of the Social Security Organization indicate that, among the various industries, the construction industry has the highest number of work-related accidents, which are severe as well as frequent. On the other hand, a large number of workers are employed in this industry, which shows the necessity of paying special attention to them. Therefore, safety risk assessment in the construction industry is an effective step in this regard. In this study, a method for ranking safety risks under conditions of small sample size and uncertainty is presented, using gray multi-criterion decision-making. Material and Method: We first identified the factors affecting the occurrence of hazards in the construction industry. Then, appropriate criteria for ranking the risks were determined and the problem was defined as a multi-criterion decision-making problem. In order to weight the criteria and to evaluate the alternatives based on each criterion, gray numbers were used. In the last stage, the problem was solved using the gray possibility degree. Results: The results show that gray multi-criterion decision-making is an effective method for ranking risks in small-sample situations compared with other MCDM methods. Conclusion: The proposed method is preferable to fuzzy and statistical methods under uncertainty and small sample sizes, due to its simple calculations and no need to define a membership function.

  8. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.; Genton, Marc G.

    2012-01-01

    for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical

  9. A decision model for energy resource selection in China

    International Nuclear Information System (INIS)

    Wang Bing; Kocaoglu, Dundar F.; Daim, Tugrul U.; Yang Jiting

    2010-01-01

    This paper evaluates coal, petroleum, natural gas, nuclear energy and renewable energy resources as energy alternatives for China through use of a hierarchical decision model. The results indicate that although coal is still the major preferred energy alternative, it is followed closely by renewable energy. The sensitivity analysis indicates that the most critical criterion for energy selection is the current energy infrastructure. A hierarchical decision model is used, and expert judgments are quantified, to evaluate the alternatives. Criteria used for the evaluations are availability, current energy infrastructure, price, safety, environmental impacts and social impacts.

  10. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    Science.gov (United States)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  11. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. The deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a
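
    Since DIC is the selection criterion here, a generic computation from MCMC output may help (a sketch under the assumption that a model-specific deviance function D(θ) = −2 ln L(θ) has already been evaluated on posterior samples; nothing here is specific to the caribou model).

        import numpy as np

        def dic(deviance_samples, deviance_at_posterior_mean):
            """DIC = D_bar + p_D, with p_D = D_bar - D(theta_bar) the effective
            number of parameters; the model with the smaller DIC is preferred."""
            d_bar = np.mean(deviance_samples)
            p_d = d_bar - deviance_at_posterior_mean
            return d_bar + p_d

        # Illustrative numbers: posterior mean deviance 812.4, deviance at the
        # posterior mean of the parameters 799.1 -> p_D = 13.3, DIC = 825.7.
        print(dic(np.array([810.0, 812.5, 814.7]), 799.1))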

  12. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  13. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  14. On the Jeans criterion

    International Nuclear Information System (INIS)

    Whitworth, A.P.

    1980-01-01

    The Jeans criterion is first stated and distinguished from the Virial Theorem. Then it is discussed how the Jeans criterion can be derived from the Virial Theorem, and the inherent shortcomings of this derivation are noted. Finally, it is indicated how these shortcomings might be overcome. The Jeans criterion is a fragmentation, or condensation, criterion. An expression is given connecting the fragmentation of an unstable extended medium into masses M_J. Rather than picturing the background medium fragmenting, it is probably more appropriate to envisage these masses M_J 'condensing' out of the background medium. In the condensation picture, some fraction of the background material separates out into coherent bound nodules under the pull of its self-gravity. For this reason the Jeans criterion is discussed as a condensation condition, reserving the term fragmentation for a different process. The Virial Theorem provides a contraction criterion. This is described with reference to a spherical cloud and is developed to derive the Jeans criterion. (U.K.)
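
    In its standard textbook form (supplied here for reference, not quoted from the record), the criterion says that a region of density ρ, temperature T, and mean molecular weight μ condenses when its mass exceeds the Jeans mass, or equivalently when its size exceeds the Jeans length:

        M_J \simeq \left(\frac{5 k_B T}{G \mu m_{\mathrm{H}}}\right)^{3/2} \left(\frac{3}{4\pi\rho}\right)^{1/2},
        \qquad
        \lambda_J = c_s \sqrt{\frac{\pi}{G\rho}}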

  15. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
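
    The central quantities of this approach can be stated compactly (standard definitions consistent with the description above): with maximized likelihood \hat{L}, k estimated parameters, and sample size n,

        \mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad
        \mathrm{AIC}_c = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}

    where the second-order form AIC_c applies the small-sample bias correction; the candidate with the smallest value is the estimated Kullback-Leibler-closest approximating model.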

  16. Critical Length Criterion and the Arc Chain Model for Calculating the Arcing Time of the Secondary Arc Related to AC Transmission Lines

    International Nuclear Information System (INIS)

    Cong Haoxi; Li Qingmin; Xing Jinyuan; Li Jinsong; Chen Qiang

    2015-01-01

    The prompt extinction of the secondary arc is critical to the single-phase reclosing of AC transmission lines, including half-wavelength power transmission lines. In this paper, a low-voltage physical experimental platform was established and the motion process of the secondary arc was recorded by a high-speed camera. It was found that the arcing time of the secondary arc showed a close relationship with its arc length. Through an input and output power energy analysis of the secondary arc, a new critical length criterion for the arcing time was proposed. The arc chain model was then adopted to calculate the arcing time with both the traditional and the proposed critical length criteria, and the simulation results were compared with the experimental data. The study showed that the arcing time calculated from the new critical length criterion gave more accurate results, which can provide a reliable criterion in terms of arcing time for modeling and simulation of the secondary arc related to power transmission lines. (paper)

  17. Selected Tether Applications Cost Model

    Science.gov (United States)

    Keeley, Michael G.

    1988-01-01

    Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.

  18. Plasma sheath criterion in thermal electronegative plasmas

    International Nuclear Information System (INIS)

    Ghomi, Hamid; Khoramabadi, Mansour; Ghorannevis, Mahmod; Shukla, Padma Kant

    2010-01-01

    The sheath formation criterion in electronegative plasma is examined. By using a multifluid model, it is shown that in a collisional sheath there will be upper as well as lower limits for the sheath velocity criterion. However, the parameters of the negative ions only affect the lower limit.

  19. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
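
    A common BIC-based approximation to the model-averaging weights (a generic sketch under equal prior model probabilities, not the paper's exact prior specification) weights each candidate model by exp(-BIC/2):

        import numpy as np

        def bma_weights(bics):
            """Approximate posterior model probabilities from BIC values:
            w_k proportional to exp(-(BIC_k - min BIC) / 2), under equal priors.
            The BMA point estimate is the weight-averaged model-specific estimate."""
            delta = np.asarray(bics, dtype=float) - np.min(bics)
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        print(bma_weights([210.4, 212.1, 218.9]))  # illustrative BIC values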

  20. Inviscid criterion for decomposing scales

    Science.gov (United States)

    Zhao, Dongxiao; Aluie, Hussein

    2018-05-01

    The proper scale decomposition in flows with significant density variations is not as straightforward as in incompressible flows, with many possible ways to define a "length scale." A choice can be made according to the so-called inviscid criterion [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009]. It is a kinematic requirement that a scale decomposition yield negligible viscous effects at large enough length scales. It has been proved [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009] recently that a Favre decomposition satisfies the inviscid criterion, which is necessary to unravel inertial-range dynamics and the cascade. Here we present numerical demonstrations of those results. We also show that two other commonly used decompositions can violate the inviscid criterion and, therefore, are not suitable to study inertial-range dynamics in variable-density and compressible turbulence. Our results have practical modeling implication in showing that viscous terms in Large Eddy Simulations do not need to be modeled and can be neglected.
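
    The Favre decomposition singled out above takes the standard density-weighted form (standard notation, with an overbar denoting low-pass filtering at scale ℓ):

        \tilde{u}_{\ell} = \frac{\overline{\rho u}_{\ell}}{\overline{\rho}_{\ell}}

    Weighting the velocity by density is what keeps viscous contributions negligible at large enough scales, which is precisely the inviscid criterion the decomposition is required to satisfy.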

  1. Criterion learning in rule-based categorization: simulation of neural mechanism and new data.

    Science.gov (United States)

    Helie, Sebastien; Ell, Shawn W; Filoteo, J Vincent; Maddox, W Todd

    2015-04-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define 'long' and 'short'). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule-selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that (1) changing the rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) changing the relevant rule dimension and learning a new criterion is more difficult, but is also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL's implications for future research on rule learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Theoretical modeling of CHF for near-saturated pool boiling and flow boiling from short heaters using the interfacial lift-off criterion

    International Nuclear Information System (INIS)

    Mudawar, I.; Galloway, J.E.; Gersey, C.O.

    1995-01-01

    Pool boiling and flow boiling were examined for near-saturated bulk conditions in order to determine the critical heat flux (CHF) trigger mechanism for each. Photographic studies of the wall region revealed features common to both situations. At fluxes below CHF, the vapor coalesces into a wavy layer which permits wetting only in wetting fronts, the portions of the liquid-vapor interface which contact the wall as a result of the interfacial waviness. Close examination of the interfacial features revealed the waves are generated from the lower edge of the heater in pool boiling and the heater's upstream region in flow boiling. Wavelengths follow predictions based upon the Kelvin-Helmholtz instability criterion. Critical heat flux in both cases occurs when the pressure force exerted upon the interface due to interfacial curvature, which tends to preserve interfacial contact with the wall prior to CHF, is overcome by the momentum of vapor at the site of the first wetting front, causing the interface to lift away from the wall. It is shown this interfacial lift-off criterion facilitates accurate theoretical modeling of CHF in pool boiling and in flow boiling in both straight and curved channels
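
    For context, the critical wavelength from the classical Kelvin-Helmholtz analysis of a vapor layer of density ρ_g moving at relative velocity ΔU over liquid of density ρ_f with surface tension σ has the standard form (a textbook result consistent with, but not quoted from, the abstract):

        \lambda_c = \frac{2\pi\sigma\,(\rho_f + \rho_g)}{\rho_f\,\rho_g\,(\Delta U)^2}

    Interfacial disturbances shorter than λ_c are stabilized by surface tension; longer ones grow, producing the wavy vapor layer and wetting fronts described above.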

  3. Theoretical modeling of CHF for near-saturated pool boiling and flow boiling from short heaters using the interfacial lift-off criterion

    Energy Technology Data Exchange (ETDEWEB)

    Mudawar, I.; Galloway, J.E.; Gersey, C.O. [Purdue Univ., West Lafayette, IN (United States)] [and others]

    1995-12-31

    Pool boiling and flow boiling were examined for near-saturated bulk conditions in order to determine the critical heat flux (CHF) trigger mechanism for each. Photographic studies of the wall region revealed features common to both situations. At fluxes below CHF, the vapor coalesces into a wavy layer which permits wetting only in wetting fronts, the portions of the liquid-vapor interface which contact the wall as a result of the interfacial waviness. Close examination of the interfacial features revealed the waves are generated from the lower edge of the heater in pool boiling and the heater's upstream region in flow boiling. Wavelengths follow predictions based upon the Kelvin-Helmholtz instability criterion. Critical heat flux in both cases occurs when the pressure force exerted upon the interface due to interfacial curvature, which tends to preserve interfacial contact with the wall prior to CHF, is overcome by the momentum of vapor at the site of the first wetting front, causing the interface to lift away from the wall. It is shown this interfacial lift-off criterion facilitates accurate theoretical modeling of CHF in pool boiling and in flow boiling in both straight and curved channels.

  4. On the Modified Barkhausen Criterion

    DEFF Research Database (Denmark)

    Lindberg, Erik; Murali, K.

    2016-01-01

    Oscillators are normally designed according to the Modified Barkhausen Criterion, i.e., the complex pole pair is moved out into the RHP so that the linear circuit becomes unstable. By means of the Mancini Phaseshift Oscillator it is demonstrated that the distortion of the oscillator may be minimized by introducing a nonlinear "Hewlett Resistor" so that the complex pole pair is in the RHP for small signals and in the LHP for large signals, i.e., the complex pole pair of the instantaneously linearized small-signal model is moving around the imaginary axis in the complex frequency plane.

  5. General Criterion for Harmonicity

    Science.gov (United States)

    Proesmans, Karel; Vandebroek, Hans; Van den Broeck, Christian

    2017-10-01

    Inspired by Kubo-Anderson Markov processes, we introduce a new class of transfer matrices whose largest eigenvalue is determined by a simple explicit algebraic equation. Applications include the free energy calculation for various equilibrium systems and a general criterion for perfect harmonicity, i.e., a free energy that is exactly quadratic in the external field. As an illustration, we construct a "perfect spring," namely, a polymer with non-Gaussian, exponentially distributed subunits which, nevertheless, remains harmonic until it is fully stretched. This surprising discovery is confirmed by Monte Carlo and Langevin simulations.

  6. T-S Fuzzy Model-Based Approximation and Filter Design for Stochastic Time-Delay Systems with Hankel Norm Criterion

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2014-01-01

    This paper investigates the Hankel norm filter design problem for stochastic time-delay systems, which are represented by a Takagi-Sugeno (T-S) fuzzy model. Motivated by the parallel distributed compensation (PDC) technique, a novel filtering error system is established. The objective is to design a suitable filter that guarantees the corresponding filtering error system to be mean-square asymptotically stable and to have a specified Hankel norm performance level γ. Based on the Lyapunov stability theory and the Itô differential rule, the Hankel norm criterion is first established by adopting the integral inequality method, which helps to reduce conservatism. The Hankel norm filtering problem is cast into a convex optimization problem with a convex linearization approach, which expresses all the conditions for the existence of an admissible Hankel norm filter as standard linear matrix inequalities (LMIs). The effectiveness of the proposed method is demonstrated via a numerical example.

  7. Effects of Initial Values and Convergence Criterion in the Two-Parameter Logistic Model When Estimating the Latent Distribution in BILOG-MG 3.

    Directory of Open Access Journals (Sweden)

    Ingo W Nader

    Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization algorithm, which improves initial values for all parameters iteratively until convergence is reached. Effects of initial values are rarely discussed in item response theory (IRT), but initial values were recently found to affect item parameters when estimating the latent distribution with full non-parametric maximum likelihood. However, this method is rarely used in practice. Hence, the present study investigated effects of initial values on item parameter bias and on recovery of item characteristic curves in BILOG-MG 3, a widely used IRT software package. Results showed notable effects of initial values on item parameters. For tighter convergence criteria, effects of initial values decreased, but item parameter bias increased, and the recovery of the latent distribution worsened. For practical application, it is advised to use the BILOG default convergence criterion with appropriate initial values when estimating the latent distribution from data.

  8. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
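
    In the same spirit (a modern analogue, not the authors' code), scikit-learn's EM-fitted Gaussian mixtures expose AIC directly, so the Gaussian-versus-mixture comparison can be sketched in a few lines; the data here are synthetic and purely illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # Toy data drawn from a two-component mixture.
        x = np.concatenate([rng.normal(0, 1, 300),
                            rng.normal(4, 1, 100)]).reshape(-1, 1)

        for k in (1, 2):  # single Gaussian vs. two-component mixture
            gm = GaussianMixture(n_components=k, random_state=0).fit(x)  # EM fit
            print(k, "component(s): AIC =", round(gm.aic(x), 1))
        # The model with the lower AIC is the preferred probability model.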

  9. Seismogenic Potential of a Gouge-filled Fault and the Criterion for Its Slip Stability: Constraints From a Microphysical Model

    Science.gov (United States)

    Chen, Jianye; Niemeijer, A. R.

    2017-12-01

    Physical constraints for the parameters of the rate-and-state friction (RSF) laws have been mostly lacking. We presented such constraints based on a microphysical model and demonstrated the general applicability to granular fault gouges deforming under hydrothermal conditions in a companion paper. In this paper, we examine the transition velocities for contrasting frictional behavior (i.e., strengthening to weakening and vice versa) and the slip stability of the model. The model predicts a steady state friction coefficient that increases with slip rate at very low and high slip rates and decreases in between. This allows the transition velocities to be theoretically obtained and the unstable slip regime (Vs→w < V < Vw→s) to be defined, as well as the static stress drop (Δμs) associated with self-sustained oscillations or stick slips. Numerical implementation of the model predicts frictional behavior that exhibits consecutive transitions from stable sliding, via periodic oscillations, to unstable stick slips with decreasing elastic stiffness or loading rate, and gives Kc, Wc, Δμs, Vs→w, and Vw→s values that are consistent with the analytical predictions. General scaling relations of these parameters given by the model are consistent with previous interpretations in the context of RSF laws and agree well with previous experiments, testifying to high validity. From these physics-based expressions that allow a more reliable extrapolation to natural conditions, we discuss the seismological implications for natural faults and present topics for future work.

  10. Selected sports talent development models

    Directory of Open Access Journals (Sweden)

    Michal Vičar

    2017-06-01

    Background: Sports talent in the Czech Republic is generally viewed as a static, stable phenomenon. This stands in contrast with the widespread practice in Anglo-Saxon countries of emphasising its fluctuating nature, which is reflected in the current models describing its development. Objectives: The aim is to introduce current models of talent development in sport. Methods: Comparison and analysis of the following models: Balyi - Long term athlete development model, Côté - Developmental model of sport participation, Csikszentmihalyi - The flow model of optimal expertise, Bailey and Morley - Model of talent development. Conclusion: Current models of sports talent development approach talent as a dynamic phenomenon varying in time. They are based in particular on the work of Simonton and his Emergenic and epigenic model and of Gagné and his Differentiated model of giftedness and talent. Balyi's model is characterised by its applicability and implications for practice. Côté's model highlights the role of family and deliberate play. Both models describe the periodization of talent development. Csikszentmihalyi's flow model explains how the athlete acquires experience and develops during puberty based on the structure of attention and flow experience. Bailey and Morley's model accents the situational approach to talent and the development of skills facilitating its growth.

  11. Selected sports talent development models

    OpenAIRE

    Michal Vičar

    2017-01-01

    Background: Sports talent in the Czech Republic is generally viewed as a static, stable phenomenon. This stands in contrast with the widespread practice in Anglo-Saxon countries of emphasising its fluctuating nature, which is reflected in the current models describing its development. Objectives: The aim is to introduce current models of talent development in sport. Methods: Comparison and analysis of the following models: Balyi - Long term athlete development model, Côté - Developmen...

  12. Activation energies as the validity criterion of a model for complex reactions that can be in oscillatory states

    Directory of Open Access Journals (Sweden)

    Anić S.

    2007-01-01

    Modeling of any complex reaction system is a difficult task. If the system under examination can be in various oscillatory dynamic states, the apparent activation energies corresponding to different pathways may be of crucial importance for this purpose. In that case the activation energies can be determined by means of the main characteristics of an oscillatory process, such as the pre-oscillatory period, the duration of the oscillatory period, the period from the beginning of the process to the end of the last oscillation, the number of oscillations, and others. All of this is illustrated using the Bray-Liebhafsky oscillatory reaction.

  13. A mixed integer linear programming model to reconstruct phylogenies from single nucleotide polymorphism haplotypes under the maximum parsimony criterion

    Science.gov (United States)

    2013-01-01

    Background: Phylogeny estimation from aligned haplotype sequences has attracted more and more attention in recent years due to its importance in the analysis of many fine-scale genetic data. Its application fields range from medical research, to drug discovery, to epidemiology, to population dynamics. The literature on molecular phylogenetics proposes a number of criteria for selecting a phylogeny from among plausible alternatives. Usually, such criteria can be expressed by means of objective functions, and the phylogenies that optimize them are referred to as optimal. One of the most important estimation criteria is parsimony, which states that the optimal phylogeny T∗ for a set H of n haplotype sequences over a common set of variable loci is the one that satisfies the following requirements: (i) it has the shortest length and (ii) it is such that, for each pair of distinct haplotypes hi,hj∈H, the sum of the edge weights belonging to the path from hi to hj in T∗ is not smaller than the observed number of changes between hi and hj. Finding the most parsimonious phylogeny for H involves solving an optimization problem, called the Most Parsimonious Phylogeny Estimation Problem (MPPEP), which is NP-hard in many of its versions. Results: In this article we investigate a recent version of the MPPEP that arises when input data consist of single nucleotide polymorphism haplotypes extracted from a population of individuals on a common genomic region. Specifically, we explore the prospects for improving on the implicit enumeration strategy used in previous work, using a novel problem formulation and a series of strengthening valid inequalities and preliminary symmetry-breaking constraints to more precisely bound the solution space and accelerate implicit enumeration of possible optimal phylogenies. We present the basic formulation and then introduce a series of provable valid constraints to reduce the solution space. We then prove
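
    Schematically, requirements (i) and (ii) above translate into an optimization of the following form (a simplified rendering, not the article's full mixed integer formulation with its valid inequalities):

        \min_{T}\ \sum_{e \in T} w_e
        \quad \text{subject to} \quad
        \sum_{e \in P_T(h_i, h_j)} w_e \;\ge\; d(h_i, h_j) \qquad \forall\, h_i, h_j \in H,\ i \neq j

    where P_T(h_i,h_j) denotes the path between h_i and h_j in the tree T and d(h_i,h_j) the observed number of changes between the two haplotypes.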

  14. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    International Nuclear Information System (INIS)

    Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.

    2012-01-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.

  15. A new objective criterion for IRIS localization

    International Nuclear Information System (INIS)

    Basit, A.

    2010-01-01

    Iris localization is the most important step in iris recognition systems. For commonly used databases, no exact ground-truth data are given that describe the true results of localization. To cope with this problem, a new objective criterion for iris localization, based on the human visual system, is proposed in this paper. A specific number of points are selected on the pupil boundary, iris boundary, upper eyelid and lower eyelid using the original image, and the distance from these points to the result of the complete iris localization is then calculated. If the determined distance is below a certain threshold, the iris localization is considered correct. Experimental results show that the proposed criterion is very effective. (author)
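
    A minimal sketch of such a distance-based acceptance test (the function names, toy data, and threshold value are our illustrative assumptions, not the paper's):

```python
# Accept a localization result if the mean distance from manually marked
# boundary/eyelid points to the detected boundary stays below a threshold.
import numpy as np

def localization_error(marked_points, boundary_points):
    """Mean distance from each marked point to the nearest point of the
    detected localization result."""
    d = np.linalg.norm(
        marked_points[:, None, :] - boundary_points[None, :, :], axis=2
    )
    return d.min(axis=1).mean()

def is_localization_correct(marked_points, boundary_points, threshold=3.0):
    return localization_error(marked_points, boundary_points) < threshold

# toy usage: detected circular boundary vs. marked points slightly off it
theta = np.linspace(0, 2 * np.pi, 100)
detected = np.stack([50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)], axis=1)
marked = detected[::10] + np.random.default_rng(1).normal(0, 1.0, (10, 2))
print(is_localization_correct(marked, detected))
```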

  16. Stochastic Learning and the Intuitive Criterion in Simple Signaling Games

    DEFF Research Database (Denmark)

    Sloth, Birgitte; Whitta-Jacobsen, Hans Jørgen

    A stochastic learning process for signaling games with two types, two signals, and two responses gives rise to equilibrium selection which is in remarkable accordance with the selection obtained by the intuitive criterion.

  17. A Computational Model of Selection by Consequences

    Science.gov (United States)

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  18. Bovine Host Genetic Variation Influences Rumen Microbial Methane Production with Best Selection Criterion for Low Methane Emitting and Efficiently Feed Converting Hosts Based on Metagenomic Gene Abundance.

    Directory of Open Access Journals (Sweden)

    Rainer Roehe

    2016-02-01

    Full Text Available Methane produced by methanogenic archaea in ruminants contributes significantly to anthropogenic greenhouse gas emissions. The host genetic link controlling microbial methane production is unknown and appropriate genetic selection strategies are not developed. We used sire progeny group differences to estimate the host genetic influence on rumen microbial methane production in a factorial experiment consisting of crossbred breed types and diets. Rumen metagenomic profiling was undertaken to investigate links between microbial genes and methane emissions or feed conversion efficiency. Sire progeny groups differed significantly in their methane emissions measured in respiration chambers. Ranking of the sire progeny groups based on methane emissions or relative archaeal abundance was consistent overall and within diet, suggesting that archaeal abundance in ruminal digesta is under host genetic control and can be used to genetically select animals without measuring methane directly. In the metagenomic analysis of rumen contents, we identified 3970 microbial genes of which 20 and 49 genes were significantly associated with methane emissions and feed conversion efficiency, respectively. These explained 81% and 86% of the respective variation and were clustered in distinct functional gene networks. Methanogenesis genes (e.g. mcrA and fmdB) were associated with methane emissions, whilst host-microbiome cross-talk genes (e.g. TSTA3 and FucI) were associated with feed conversion efficiency. These results strengthen the idea that the host animal controls its own microbiota to a significant extent and open up the implementation of effective breeding strategies using rumen microbial gene abundance as a predictor for difficult-to-measure traits on a large number of hosts. Generally, the results provide a proof of principle to use the relative abundance of microbial genes in the gastrointestinal tract of different species to predict their influence on traits e

  19. Fuzzy decision-making: a new method in model selection via various validity criteria

    International Nuclear Information System (INIS)

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered the first step in scientific investigations. Several alternative models may be candidates to express a phenomenon. Scientists use various criteria to select one model from among the competing models. Based on the solution of a fuzzy decision-making problem, this paper proposes a new method of model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper Possibility Distribution Function for each criterion. Finally, minimization of a utility function composed of the Possibility Distribution Functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran
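
    The following sketch illustrates the general flavor of the method under our own simplifying assumptions (made-up criterion values, linear possibility functions, and a min-based utility); the paper's actual Possibility Distribution Functions may differ:

```python
# Fuzzy model selection sketch: map each validity criterion to a [0, 1]
# possibility, then pick the model minimizing a utility built from them.
import numpy as np

# raw validity-criterion values for three candidate models:
# columns: RMSE (lower better), complexity (lower better), R^2 (higher better)
scores = np.array([
    [0.12, 3, 0.91],
    [0.10, 7, 0.93],
    [0.20, 2, 0.85],
])

def possibility(values, better="low"):
    """Map raw criterion values to [0, 1] possibilities (1 = fully acceptable)."""
    v = np.asarray(values, dtype=float)
    lo, hi = v.min(), v.max()
    return (hi - v) / (hi - lo) if better == "low" else (v - lo) / (hi - lo)

P = np.column_stack([
    possibility(scores[:, 0], "low"),
    possibility(scores[:, 1], "low"),
    possibility(scores[:, 2], "high"),
])

# utility: one minus the weakest criterion possibility (fuzzy 'min' decision)
utility = 1.0 - P.min(axis=1)
print("best model:", int(np.argmin(utility)), "utilities:", utility)
```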

  20. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    Directory of Open Access Journals (Sweden)

    Ryan P Franckowiak

    Full Text Available In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
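
    For reference, the three criteria are typically computed from the residual sum of squares of a Gaussian regression fit, as in this sketch (the numbers are invented; with MRM, n would be the inflated number of pairwise distances, which is part of the failure mode described above):

```python
# AIC, AICc and BIC for a least-squares fit, from the residual sum of squares.
import numpy as np

def information_criteria(rss, n, k):
    """AIC, AICc and BIC for a Gaussian regression model with k parameters
    (including the intercept and the error variance)."""
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = -2 * loglik + np.log(n) * k
    return aic, aicc, bic

# e.g. 50 individuals -> n = 50*49/2 = 1225 pairwise distances
print(information_criteria(rss=310.5, n=1225, k=4))
```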

  1. Value of prostate specific antigen and prostatic volume ratio (PSA/V) as the selection criterion for US-guided prostatic biopsy

    International Nuclear Information System (INIS)

    Veneziano, S.; Paulica, P.; Querze', R.; Viglietta, G.; Trenta, A.

    1991-01-01

    US-guided biopsy was performed in 94 patients with suspected lesions at transrectal US. Histology demonstrated carcinoma in 43 cases, benign hyperplasia in 44, and prostatitis in 7. In all cases the prostate-specific antigen (PSA) was measured and, by means of US, the prostatic volume (V) was calculated. PSA was related to the corresponding gland volume, which resulted in the PSA/V ratio. Our study showed the PSA/V ratio to have higher sensitivity and specificity than the absolute PSA value in the diagnosis of prostatic carcinoma. The authors believe prostate US-guided biopsy to be: a) necessary when the suspected area has PSA/V ratio >0.15, and especially when PSA/V >0.30; b) not indicated when echo-structural alterations are associated with PSA/V <0.15, because they are most frequently due to benign lesions. The combined use of the PSA/V ratio and US is therefore suggested to select the patients in whom biopsy is to be performed

  2. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  3. Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.

    Science.gov (United States)

    Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H

    2018-01-01

    Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
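
    A minimal sketch of the modified criterion as described in the abstract (the functional form of the penalty, the weighting value, and the candidate models are our illustrative assumptions):

```python
# BIC with an additional, tunable complexity penalty so that feedforward
# terms must "earn" their place; alpha would be tuned via simulations with
# a hypothesized human-controller model before the experiment.
import numpy as np

def modified_bic(rss, n, k, alpha=1.0):
    """Classical BIC when alpha = 1; alpha > 1 adds the extra penalty."""
    return n * np.log(rss / n) + alpha * k * np.log(n)

# choose between a pure-feedback ARX model and one with a feedforward path
candidates = {
    "feedback only (k=4)": (12.30, 4),
    "feedback + feedforward (k=7)": (12.25, 7),
}
n, alpha = 2000, 1.5
best = min(candidates,
           key=lambda m: modified_bic(candidates[m][0], n, candidates[m][1], alpha))
print("selected:", best)  # marginal RSS gain does not justify extra parameters
```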

  4. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and embodies a trade-off between the bias of a model and its complexity. However, in practice, the runtime of models is another factor relevant to the model weights, and we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that, under time constraints, more expensive models can be sampled far less than faster models (in inverse proportion to their runtime). The evidence computed in favor of a more expensive model is therefore statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
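
    The following toy sketch illustrates the core idea under our own assumptions (invented one-parameter likelihoods and sample budgets): the BME of each model is estimated by prior sampling, and a cheap bootstrap quantifies how much less trustworthy the estimate for the slower, less-sampled model is:

```python
# Monte Carlo BME with a bootstrap standard error; a slow model affords
# fewer samples under the same time budget, so its evidence carries a
# larger error bar.
import numpy as np

rng = np.random.default_rng(42)

def bme_with_error(log_like, n_samples, n_boot=200):
    """Monte Carlo BME (mean likelihood over prior draws) plus a bootstrap
    standard error of that estimate."""
    L = np.exp(log_like(rng.uniform(0, 1, n_samples)))
    boot = np.array([L[rng.integers(0, n_samples, n_samples)].mean()
                     for _ in range(n_boot)])
    return L.mean(), boot.std()

fast_loglike = lambda th: -0.5 * ((th - 0.4) / 0.1) ** 2  # cheap model
slow_loglike = lambda th: -0.5 * ((th - 0.5) / 0.1) ** 2  # expensive model

bme_fast, err_fast = bme_with_error(fast_loglike, n_samples=10000)
bme_slow, err_slow = bme_with_error(slow_loglike, n_samples=100)  # time-limited
print(f"fast: {bme_fast:.4f} ± {err_fast:.4f}   slow: {bme_slow:.4f} ± {err_slow:.4f}")
```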

  5. A Dynamic Model for Limb Selection

    NARCIS (Netherlands)

    Cox, R.F.A; Smitsman, A.W.

    2008-01-01

    Two experiments and a model on limb selection are reported. In Experiment 1 left-handed and right-handed participants (N = 36) repeatedly used one hand for grasping a small cube. After a clear switch in the cube’s location, perseverative limb selection was revealed in both handedness groups. In

  6. Unitary Evolution as a Uniqueness Criterion

    Science.gov (United States)

    Cortez, J.; Mena Marugán, G. A.; Olmedo, J.; Velhinho, J. M.

    2015-01-01

    It is well known that the process of quantizing field theories is plagued with ambiguities. First, there is ambiguity in the choice of basic variables describing the system. Second, once a choice of field variables has been made, there is ambiguity concerning the selection of a quantum representation of the corresponding canonical commutation relations. The natural strategy to remove these ambiguities is to demand positivity of energy and to invoke symmetries, namely by requiring that classical symmetries become unitarily implemented in the quantum realm. The success of this strategy depends, however, on the existence of a sufficiently large group of symmetries, usually including time-translation invariance. These criteria are therefore generally insufficient in non-stationary situations, as is typical for free fields in curved spacetimes. Recently, the criterion of unitary implementation of the dynamics has been proposed in order to select a unique quantization in the context of manifestly non-stationary systems. Specifically, the unitarity criterion, together with the requirement of invariance under spatial symmetries, has been successfully employed to remove the ambiguities in the quantization of linearly polarized Gowdy models, as well as in the quantization of a scalar field with time-varying mass propagating in a static background whose spatial topology is either that of a d-sphere (with d = 1, 2, 3) or a three-torus. Following Ref. 3, we will see here that the symmetry and unitarity criteria allow for a complete removal of the ambiguities in the quantization of scalar fields propagating in static spacetimes with compact spatial sections, obeying field equations with an explicitly time-dependent mass, of the form φ̈ − Δφ + s(t)φ = 0. These results apply in particular to free fields in spacetimes which, like e.g. the closed FRW models, are conformal to a static spacetime by means of an exclusively time-dependent conformal factor. In fact, in such

  7. A Gambler's Model of Natural Selection.

    Science.gov (United States)

    Nolan, Michael J.; Ostrovsky, David S.

    1996-01-01

    Presents an activity that highlights the mechanism and power of natural selection. Allows students to think in terms of modeling a biological process and instills an appreciation for a mathematical approach to biological problems. (JRH)

  8. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  9. Review and selection of unsaturated flow models

    International Nuclear Information System (INIS)

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-01-01

    Since the 1960's, ground-water flow models have been used for analysis of water resources problems. In the 1970's, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970's and well into the 1980's focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing

  10. A scale invariance criterion for LES parametrizations

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs

    2015-01-01

    Full Text Available Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant. The criterion also proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.

  11. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  12. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built....... The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...

  13. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    Science.gov (United States)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as AIC.

  14. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
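
    A compressed sketch of the discriminative scoring idea (the toy note sequences, add-one smoothing, and bigram order are our simplifying assumptions, not the paper's exact setup):

```python
# Score each MIDI track by the log-likelihood ratio between a melody
# bigram model and a background model; the highest-scoring track wins.
import math
from collections import Counter

def train(corpus):
    """Collect bigram counts, unigram totals and the vocabulary."""
    counts, total, vocab = Counter(), Counter(), set()
    for line in corpus:
        toks = line.split()
        vocab.update(toks)
        for a, b in zip(toks, toks[1:]):
            counts[(a, b)] += 1
            total[a] += 1
    return counts, total, vocab

def bigram_logprob(toks, counts, total, vocab_size):
    """Add-one-smoothed bigram log-probability of a note sequence."""
    return sum(
        math.log((counts[(a, b)] + 1) / (total[a] + vocab_size))
        for a, b in zip(toks, toks[1:])
    )

melody_train = ["C4 D4 E4 C4", "E4 F4 G4 E4"]        # toy melody tracks
background_train = ["C2 C2 G2 C2", "E2 E2 B1 E2"]    # toy accompaniment

mc, mt, mv = train(melody_train)
bc, bt, bv = train(background_train)
V = len(mv | bv)

def melodic_degree(track):
    toks = track.split()
    # discriminative score: melody model minus background model
    return bigram_logprob(toks, mc, mt, V) - bigram_logprob(toks, bc, bt, V)

tracks = ["C4 D4 E4 F4", "C2 C2 G2 G2"]
print("selected melody track:", max(tracks, key=melodic_degree))
```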

  15. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
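
    The sketch below gives our simplified reading of the procedure (the RBF kernel, 95th-percentile cut-off, and column-wise permutations are assumptions for illustration): eigenvalues of the centred kernel matrix are retained only while they exceed those obtained from permuted data, which destroys structure but preserves the marginals:

```python
# Parallel Analysis adapted to Gaussian kernel PCA: keep components whose
# eigenvalue beats the permutation-null eigenvalue at the same rank.
import numpy as np

def centred_rbf_eigvals(X, scale):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * scale**2))
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    return np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1]

def kpa_model_order(X, scale, n_perm=20, rng=np.random.default_rng(0)):
    ev = centred_rbf_eigvals(X, scale)
    perm_ev = []
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        perm_ev.append(centred_rbf_eigvals(Xp, scale))
    threshold = np.percentile(np.array(perm_ev), 95, axis=0)
    return int(np.sum(ev > threshold))       # retained component count

X = np.vstack([np.random.default_rng(1).normal(s, 0.3, (30, 5)) for s in (0, 2)])
print(kpa_model_order(X, scale=1.0))
```

    Repeating this over a grid of kernel scales and keeping the scale that retains the most (or the most stable) components is one simple way to tune both quantities jointly.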

  16. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. To propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that have had smaller root mean squared error than did those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.

  17. Expert System Model for Educational Personnel Selection

    Directory of Open Access Journals (Sweden)

    Héctor A. Tabares-Ospina

    2013-06-01

    Full Text Available Staff selection is a difficult task due to the subjectivity involved in the evaluation. This process can be complemented using a decision support system. This paper presents the implementation of an expert system to systematize the selection process for professors. The management of the software development is divided into 4 parts: requirements, design, implementation and commissioning. The proposed system models specific knowledge through relationships between evidence variables and the objective.

  18. Automated sample plan selection for OPC modeling

    Science.gov (United States)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving the quality of the data collected with regard to how well that data represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Formulating the pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.

  19. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
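
    As a hedged illustration of the selection loop (the Nadaraya-Watson style kernel predictive density and fixed bandwidths below are our simplifications, not the paper's exact estimator), the embedding dimension is chosen to minimize an out-of-sample negative log-predictive likelihood:

```python
# Choose the delay-embedding dimension p minimizing the NLPL of a simple
# nonparametric predictive density on held-out data.
import numpy as np

def embed(x, p):
    """Delay vectors (x_{t-p},...,x_{t-1}) and targets x_t."""
    n = len(x)
    V = np.column_stack([x[i:n - p + i] for i in range(p)])
    return V, x[p:]

def nlpl(x_train, x_test, p, hx=0.3, hy=0.3):
    Vtr, ytr = embed(x_train, p)
    Vte, yte = embed(x_test, p)
    total = 0.0
    for v, y in zip(Vte, yte):
        w = np.exp(-((Vtr - v) ** 2).sum(1) / (2 * hx**2))
        w /= w.sum() + 1e-300
        dens = (w * np.exp(-(ytr - y) ** 2 / (2 * hy**2))).sum() / (hy * np.sqrt(2 * np.pi))
        total -= np.log(dens + 1e-300)
    return total / len(yte)

rng = np.random.default_rng(0)
x = np.zeros(600)
for t in range(2, 600):                    # an AR(2)-like stochastic system
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal(0, 0.3)
train, test = x[:400], x[400:]
print({p: round(nlpl(train, test, p), 3) for p in (1, 2, 3, 4)})
```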

  20. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  1. Is body weight the most appropriate criterion to select patients eligible for low-dose pulmonary CT angiography? Analysis of objective and subjective image quality at 80 kVp in 100 patients

    Energy Technology Data Exchange (ETDEWEB)

    Szucs-Farkas, Zsolt; Strautz, Tamara; Patak, Michael A.; Kurmann, Luzia; Vock, Peter; Schindera, Sebastian T. [University Hospital and University of Berne, Department of Diagnostic, Interventional and Paediatric Radiology, Berne (Switzerland)

    2009-08-15

    The objective of this retrospective study was to assess image quality with pulmonary CT angiography (CTA) using 80 kVp and to find anthropomorphic parameters other than body weight (BW) to serve as selection criteria for low-dose CTA. Attenuation in the pulmonary arteries, anteroposterior and lateral diameters, cross-sectional area and soft-tissue thickness of the chest were measured in 100 consecutive patients weighing less than 100 kg with 80 kVp pulmonary CTA. Body surface area (BSA) and contrast-to-noise ratios (CNR) were calculated. Three radiologists analyzed arterial enhancement, noise, and image quality. Image parameters between patients grouped by BW (group 1: 0-50 kg; groups 2-6: 51-100 kg, decadelly increasing) were compared. CNR was higher in patients weighing less than 60 kg than in the BW groups 71-99 kg (P between 0.025 and <0.001). Subjective ranking of enhancement (P=0.165-0.605), noise (P=0.063), and image quality (P=0.079) did not differ significantly across all patient groups. CNR correlated moderately strongly with weight (R=-0.585), BSA (R=-0.582), cross-sectional area (R=-0.544), and anteroposterior diameter of the chest (R=-0.457; P<0.001 all parameters). We conclude that 80 kVp pulmonary CTA permits diagnostic image quality in patients weighing up to 100 kg. Body weight is a suitable criterion to select patients for low-dose pulmonary CTA. (orig.)

  2. Suboptimal Criterion Learning in Static and Dynamic Environments.

    Directory of Open Access Journals (Sweden)

    Elyse H Norton

    2017-01-01

    Full Text Available Humans often make decisions based on uncertain sensory information. Signal detection theory (SDT) describes detection and discrimination decisions as a comparison of stimulus "strength" to a fixed decision criterion. However, recent research suggests that current responses depend on the recent history of stimuli and previous responses, suggesting that the decision criterion is updated trial-by-trial. The mechanisms underpinning criterion setting remain unknown. Here, we examine how observers learn to set a decision criterion in an orientation-discrimination task under both static and dynamic conditions. To investigate mechanisms underlying trial-by-trial criterion placement, we introduce a novel task in which participants explicitly set the criterion, and compare it to a more traditional discrimination task, allowing us to model this explicit indication of criterion dynamics. In each task, stimuli were ellipses with principal orientations drawn from two categories: Gaussian distributions with different means and equal variance. In the covert-criterion task, observers categorized a displayed ellipse. In the overt-criterion task, observers adjusted the orientation of a line that served as the discrimination criterion for a subsequently presented ellipse. We compared performance to the ideal Bayesian learner and several suboptimal models that varied in both computational and memory demands. Under static and dynamic conditions, we found that, in both tasks, observers used suboptimal learning rules. In most conditions, a model in which the recent history of past samples determines a belief about category means fit the data best for most observers and on average. Our results reveal dynamic adjustment of discrimination criterion, even after prolonged training, and indicate how decision criteria are updated over time.
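
    A toy sketch of the kind of suboptimal rule described (the delta-rule update and learning rate are our illustrative choices, not the paper's fitted model): beliefs about the two category means are updated from recent samples, and the criterion sits at their midpoint:

```python
# Trial-by-trial criterion placement from running beliefs about the
# two category means; all numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_trials, lr = 500, 0.1
mu = np.array([-1.0, 1.0])          # true category means (e.g. orientations)
belief = np.array([0.0, 0.0])       # running beliefs about the two means
criterion = np.zeros(n_trials)

for t in range(n_trials):
    c = rng.integers(0, 2)                  # which category this trial
    s = rng.normal(mu[c], 1.0)              # noisy stimulus sample
    belief[c] += lr * (s - belief[c])       # delta-rule mean update
    criterion[t] = belief.mean()            # midpoint decision criterion

print("final criterion:", criterion[-1])    # hovers near the optimal value 0
```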

  3. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we will state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we will look at previous competitions and their outcomes, use qualitative interviews with top performers from Kaggle and use previous personal experiences from competing in Kaggle competitions. The stated hypotheses about feature engineering, ensembling, overfitting, model complexity and evaluation metrics give indications and guidelines on how to select a proper model for performing well...

  4. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and the two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System (GAMS) software: one without taxes on tobacco consumption and another with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  5. Comparison of Desertification Intensity in the Purified Wastewater Irrigated Lands with Normal Lands in Yazd Using of Soil Criterion of the IMDPA Model

    Directory of Open Access Journals (Sweden)

    M. Yektafar

    2016-09-01

    Full Text Available Introduction: Desertification is a complex phenomenon which has environmental, socio-economic, and cultural impacts on natural resources. According to the United Nations Convention to Combat Desertification definition, desertification is land degradation in arid, semi-arid, and dry sub-humid regions, resulting from climate change and human activities. Because access to qualified water resources is limited in arid lands, it is necessary to use all forms of acceptable water resources, such as wastewater. Since irrigation with sewage has the greatest effects on soil, in this research the desertification intensity of lands irrigated with sewage and of the natural lands of the area located near Yazd city has been analyzed considering the soil criterion of the Iranian Model for Desertification Potential Assessment (IMDPA). Several studies have been done in Iran and in the world in order to provide national, regional or global desertification assessment models. A significant feature of the IMDPA is its easily defined and measured criteria and indicators, and the ability of the model to use geometric means of the criteria and indicators. Materials and Methods: In the first step, soil samples were taken randomly in each of the defined land units, considering the size of the area. Next, all indices related to the soil criterion, such as the soil texture index, deep soil gravel percentage, soil depth, and soil electrical conductivity, were evaluated in each land use (both irrigated lands and natural lands) and weighted considering the present conditions of the lands. Each index was scored according to the standard soil table that categorizes desertification. Then, the geometric mean of all indices was calculated and a map of the desertification intensity of the study area was prepared. Thus, four maps were prepared according to each index. These maps were used to study both the quality and the effect of each index on desertification. Finally, these maps were

  6. Model selection and comparison for independent sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this me...

  7. A Failure Criterion for Concrete

    DEFF Research Database (Denmark)

    Ottosen, N. S.

    1977-01-01

    A four-parameter failure criterion containing all three stress invariants explicitly is proposed for short-time loading of concrete. It corresponds to a smooth convex failure surface with curved meridians, which open in the negative direction of the hydrostatic axis, and the trace in the deviatoric plane...

  8. Selection of an appropriately simple storm runoff model

    Directory of Open Access Journals (Sweden)

    A. I. J. M. van Dijk

    2010-03-01

    Full Text Available An appropriately simple event runoff model for catchment hydrological studies was derived. The model was selected from several variants as having the optimum balance between simplicity and the ability to explain daily observations of streamflow from 260 Australian catchments (23–1902 km2). Event rainfall and runoff were estimated from the observations through a combination of baseflow separation and storm flow recession analysis, producing a storm flow recession coefficient (kQF). Various model structures with up to six free parameters were investigated, covering most of the equations applied in existing lumped catchment models. The performance of alternative structures and free parameters was expressed in Akaike's Final Prediction Error Criterion (FPEC) and corresponding Nash-Sutcliffe model efficiencies (NSME) for event runoff totals. For each model variant, the number of free parameters was reduced in steps based on calculated parameter sensitivity. The resulting optimal model structure had two or three free parameters; the first describing the non-linear relationship between event rainfall and runoff (Smax), the second relating runoff to antecedent groundwater storage (CSg), and a third that described initial rainfall losses (Li), but which could be set at 8 mm without affecting model performance too much. The best three-parameter model produced a median NSME of 0.64 and outperformed, for example, the Soil Conservation Service Curve Number technique (median NSME 0.30–0.41). Parameter estimation in ungauged catchments is likely to be challenging: 64% of the variance in kQF among stations could be explained by catchment climate indicators and spatial correlation, but corresponding numbers were a modest 45% for CSg, 21% for Smax and none for Li, respectively. In gauged catchments, better
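
    For concreteness, the two scores can be computed as in this sketch (the observed and modelled event-runoff values are invented; the FPE form shown is the standard Akaike expression):

```python
# Nash-Sutcliffe model efficiency and Akaike's Final Prediction Error
# for comparing event-runoff model variants.
import numpy as np

def nsme(obs, sim):
    """Nash-Sutcliffe model efficiency (1 = perfect, 0 = mean model)."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def fpe(obs, sim, k):
    """Akaike's Final Prediction Error for n observations, k parameters."""
    n = len(obs)
    return np.mean((obs - sim) ** 2) * (n + k) / (n - k)

obs = np.array([5.2, 11.0, 3.1, 20.4, 8.8])
sim = np.array([4.9, 12.1, 2.6, 18.9, 9.4])
print("NSME:", round(nsme(obs, sim), 3), " FPE (k=3):", round(fpe(obs, sim, 3), 3))
```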

  9. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling

    DEFF Research Database (Denmark)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief

    2018-01-01

    by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded...

  10. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-09-10

    Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the M&O CRWMS has the following responsibilities: To provide overall management and integration of modeling activities. To provide a framework for focusing modeling and model development. To identify areas that require increased or decreased emphasis. To ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process. It consists of a thorough review of existing models, testing of models which best fit the established requirements, and making recommendations for future development that should be conducted. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically in the future.

  11. Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness

    Science.gov (United States)

    Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan

    To address the trustworthiness problem of industry software, an approach that constructs an industry software trustworthiness criterion around the business is proposed. Based on the triangle model of "trustworthy grade definition - trustworthy evidence model - trustworthy evaluation", the idea of business trustworthiness is embodied in the different aspects of the trustworthy triangle model for a specific piece of industry software, the power producing management system (PPMS). Business trustworthiness is the center of the constructed industry trustworthy software criterion. Fusing international standards and industry rules, the constructed trustworthy criterion strengthens operability and reliability. A quantitative evaluating method makes the evaluation results intuitive and comparable.

  12. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in the prediction of time series is proposed. Apart from the conventional criterion of minimizing the RMS error, the method also minimizes the error on the distribution of singularities, evaluated through local Hoelder estimates and their probability density spectrum. Predictions of two simulated and one real time series have been made using kernel principal component regression (KPCR), and the model parameters of KPCR have been selected employing the proposed as well as the conventional method. The results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

  13. Expatriates Selection: An Essay of Model Analysis

    Directory of Open Access Journals (Sweden)

    Rui Bártolo-Ribeiro

    2015-03-01

    Full Text Available The expansion of business to other geographical areas, with cultures different from those in which organizations were created and developed, leads to the expatriation of employees to these destinations. Recruitment and selection procedures for expatriates do not always have the intended success, leading to an early return of these professionals with consequent organizational disorders. In this study, several articles published in the last five years were analyzed in order to identify the dimensions most frequently mentioned in the selection of expatriates in terms of success and failure. The characteristics in the selection process that may improve prediction of the adaptation of expatriates to the new cultural contexts of the same organization were studied according to the KSAOs model. Few references were found concerning the Knowledge, Skills and Abilities dimensions in the analyzed papers. There was a strong predominance of the evaluation of Other Characteristics, and more importance was given to dispositional factors than to situational factors in promoting the integration of expatriates.

  14. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  15. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon; Maadooliat, Mehdi; Arellano-Valle, Reinaldo B.; Genton, Marc G.

    2015-01-01

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  16. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: ’Are we actually dealing with a convolutive mixture?’. We try to answer this question for EEG data.

  17. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India, to demonstrate the effectiveness of the proposed methodology.

  18. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP
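
    Two ingredients of the pipeline, variance-inflation-factor screening and BIC scoring of a fitted logistic model, can be sketched as follows (a binary rather than ordinal logistic model, a near-unpenalized scikit-learn fit, and synthetic data are our simplifications; the bootstrap and genetic-algorithm layers are omitted):

```python
# VIF-based multicollinearity reduction followed by BIC scoring of a
# logistic model on the surviving predictors.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def vif_filter(X, limit=5.0):
    """Iteratively drop the column with the largest VIF until all pass."""
    cols = list(range(X.shape[1]))
    while len(cols) > 1:
        vifs = []
        for j in cols:
            others = [c for c in cols if c != j]
            r2 = LinearRegression().fit(X[:, others], X[:, j]).score(X[:, others], X[:, j])
            vifs.append(1.0 / max(1.0 - r2, 1e-12))
        worst = int(np.argmax(vifs))
        if vifs[worst] <= limit:
            break
        cols.pop(worst)
    return cols

def logistic_bic(X, y):
    m = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)  # large C ~ unpenalized
    p = m.predict_proba(X)[:, 1]
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    k = X.shape[1] + 1                       # coefficients + intercept
    return -2 * loglik + k * np.log(len(y))

rng = np.random.default_rng(0)
X = rng.normal(size=(345, 6))
X[:, 5] = X[:, 0] + 0.01 * rng.normal(size=345)   # a collinear column
y = (X[:, 0] + rng.normal(size=345) > 0).astype(int)
kept = vif_filter(X)
print("kept columns:", kept, " BIC:", round(logistic_bic(X[:, kept], y), 1))
```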

  19. Distance criterion for hydrogen bond

    Indian Academy of Sciences (India)

    Distance criterion for hydrogen bond. In a D-H...A contact, the D...A distance must be less than the sum of the van der Waals radii of the D and A atoms, for it to be a hydrogen bond.

  20. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  1. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  3. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    … cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate…

  4. Time to Criterion: An Experimental Study.

    Science.gov (United States)

    Anderson, Lorin W.

    The purpose of the study was to investigate the magnitude of individual differences in time-to-criterion and the stability of these differences. Time-to-criterion was defined in two ways: the amount of elapsed time required to attain the criterion level and the amount of on-task time required to attain the criterion level. Ninety students were…

  5. An Innovative Structural Mode Selection Methodology: Application for the X-33 Launch Vehicle Finite Element Model

    Science.gov (United States)

    Hidalgo, Homero, Jr.

    2000-01-01

    An innovative methodology for determining structural target mode selection and mode selection based on a specific criterion is presented. An effective approach to single out modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). The Root-Sum-Square (RSS) displacement method presented here computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes that most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valves and engine points, for use in flight control stability analysis and flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
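
    A sketch of the RSS ranking step, under the assumption that the mode shapes are available as a matrix phi of size n_dof × n_modes; the function name and arguments are illustrative.

    ```python
    # RSS ranking of modes at selected degrees of freedom.
    # phi: mode shape matrix (n_dof x n_modes); dof_indices: DOFs of interest.
    import numpy as np

    def rank_modes_by_rss(phi, dof_indices, top_k=10):
        rss = np.sqrt((phi[dof_indices, :] ** 2).sum(axis=0))  # one RSS value per mode
        order = np.argsort(rss)[::-1]                          # highest first
        return order[:top_k], rss[order[:top_k]]
    ```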

  6. A comparative study on the forming limit diagram prediction between Marciniak-Kuczynski model and modified maximum force criterion by using the evolving non-associated Hill48 plasticity model

    Science.gov (United States)

    Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian

    2018-05-01

    Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In the previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy in terms of flow rule and evolving features on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.

  7. The applicability of fair selection models in the South African context

    Directory of Open Access Journals (Sweden)

    G. K. Huysamen

    1995-06-01

    This article reviews several models that are aimed at achieving fair selection in situations in which underrepresented groups tend to obtain lower scores on selection tests. Whereas predictive bias is a statistical concept that refers to systematic errors in the prediction of individuals' criterion scores, selection fairness pertains to the extent to which selection results meet certain socio-political demands. The regression and equal-risk models adjust for differences in the criterion-on-test regression lines of different groups. The constant ratio, conditional probability and equal probability models manipulate the test cutoff scores of different groups so that certain ratios formed between different selection outcomes (correct acceptances, correct rejections, incorrect acceptances, incorrect rejections) are the same for such groups. The decision-theoretic approach requires that utilities be attached to these different outcomes for different groups. These procedures are not only eminently suited to accommodate calls for affirmative action, but they also serve the cause of transparency.

  8. A systems evaluation model for selecting spent nuclear fuel storage concepts

    International Nuclear Information System (INIS)

    Postula, F.D.; Finch, W.C.; Morissette, R.P.

    1982-01-01

    This paper describes a system evaluation approach used to identify and evaluate monitored, retrievable fuel storage concepts that fulfill ten key criteria for meeting the functional requirements and system objectives of the National Nuclear Waste Management Program. The selection criteria include health and safety, schedules, costs, socio-economic factors and environmental factors. The methodology used to establish the selection criteria, develop a weight of importance for each criterion and assess the relative merit of each storage system is discussed. The impact of cost relative to the technical criteria is examined, along with experience in obtaining relative merit data and its application in the model. Topics considered include spent fuel storage requirements, functional requirements, preliminary screening, and Monitored Retrievable Storage (MRS) system evaluation. It is concluded that the proposed system evaluation model is universally applicable when many concepts in various stages of design and cost development need to be evaluated.
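
    The weighted relative-merit idea reduces to a simple decision matrix; the sketch below uses invented criteria weights and merit scores purely for illustration, not the study's actual data.

    ```python
    # Weighted relative-merit scoring over the selection criteria; the weights
    # and merit scores below are invented for illustration only.
    weights = {"health_safety": 0.30, "cost": 0.25, "schedule": 0.15,
               "socio_economic": 0.15, "environment": 0.15}
    concepts = {
        "concept_A": {"health_safety": 9, "cost": 7, "schedule": 8,
                      "socio_economic": 6, "environment": 8},
        "concept_B": {"health_safety": 8, "cost": 5, "schedule": 6,
                      "socio_economic": 6, "environment": 7},
    }
    scores = {name: sum(weights[c] * merit[c] for c in weights)
              for name, merit in concepts.items()}
    print(max(scores, key=scores.get))  # concept with the highest weighted merit
    ```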

  9. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach applied in the major fields of biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks to find the best subset of features describing the data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods which are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information based criteria. Significant improvement in performance, computational time, and selection of minimally sized feature subsets is achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution of desirable characteristics.
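
    A minimal wrapper in this spirit: greedy forward selection scored by the cross-validated accuracy of the wrapped classifier. The parallel genetic-algorithm search of the paper is replaced here by a plain greedy loop for brevity.

    ```python
    # A wrapper-style selector: greedy forward selection scored by the
    # cross-validated accuracy of the wrapped classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def wrapper_forward_select(X, y, clf=None, max_features=10, cv=5):
        clf = clf or LogisticRegression(max_iter=1000)
        selected, best_score = [], -np.inf
        candidates = list(range(X.shape[1]))
        while candidates and len(selected) < max_features:
            trials = {f: cross_val_score(clf, X[:, selected + [f]], y, cv=cv).mean()
                      for f in candidates}
            f_best = max(trials, key=trials.get)
            if trials[f_best] <= best_score:
                break                         # no remaining feature improves the score
            best_score = trials[f_best]
            selected.append(f_best)
            candidates.remove(f_best)
        return selected, best_score
    ```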

  10. Item selection via Bayesian IRT models.

    Science.gov (United States)

    Arima, Serena

    2015-02-10

    With reference to a questionnaire that aimed to assess the quality of life for dysarthric speakers, we investigate the usefulness of a model-based procedure for reducing the number of items. We propose a mixed cumulative logit model, which is known in the psychometrics literature as the graded response model: responses to different items are modelled as a function of individual latent traits and as a function of item characteristics, such as their difficulty and their discrimination power. We jointly model the discrimination and the difficulty parameters by using a k-component mixture of normal distributions. Mixture components correspond to disjoint groups of items. Items that belong to the same groups can be considered equivalent in terms of both difficulty and discrimination power. According to decision criteria, we select a subset of items such that the reduced questionnaire is able to provide the same information that the complete questionnaire provides. The model is estimated by using a Bayesian approach, and the choice of the number of mixture components is justified according to information criteria. We illustrate the proposed approach on the basis of data that are collected for 104 dysarthric patients by local health authorities in Lecce and in Milan.

  11. Factors influencing creep model equation selection

    International Nuclear Information System (INIS)

    Holdsworth, S.R.; Askins, M.; Baker, A.; Gariboldi, E.; Holmstroem, S.; Klenk, A.; Ringel, M.; Merckling, G.; Sandstrom, R.; Schwienheer, M.; Spigarelli, S.

    2008-01-01

    During the course of the EU-funded Advanced-Creep Thematic Network, ECCC-WG1 reviewed the applicability and effectiveness of a range of model equations to represent the accumulation of creep strain in various engineering alloys. In addition to considering the experience of network members, the ability of several models to describe the deformation characteristics of large single- and multi-cast collations of ε(t,T,σ) creep curves has been evaluated in an intensive assessment inter-comparison activity involving three steels: 2¼CrMo (P22), 9CrMoVNb (Steel-91) and 18Cr13NiMo (Type-316). The choice of the most appropriate creep model equation for a given application depends not only on the high-temperature deformation characteristics of the material under consideration, but also on the characteristics of the dataset, the number of casts for which creep curves are available and the strain regime for which an analytical representation is required. The paper focuses on the factors which can influence creep model selection and the model-fitting approach for multi-source, multi-cast datasets.

  12. Optimization of multi-environment trials for genomic selection based on crop models.

    Science.gov (United States)

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling thanks to crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. The MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit MET and the phenotyping tools that are currently developed.

  13. Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion

    Science.gov (United States)

    Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.

    2017-09-01

    Assessment of discriminant validity is a must in any research that involves latent variables, for the prevention of multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for establishing discriminant validity: the heterotrait-monotrait (HTMT) ratio of correlations. This article therefore presents the results of discriminant validity assessment using both methods. Data from a previous study, involving 429 respondents, were used for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent and discriminant validity were established and admissible under the Fornell and Larcker criterion. However, discriminant validity was an issue when the HTMT criterion was employed. This shows that the latent variables under study faced the issue of multicollinearity and should be examined in further detail. It also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discrimination among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
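
    For concreteness, the HTMT ratio for a pair of constructs can be computed directly from the item correlation matrix; `data`, `items_a` and `items_b` below are assumed inputs (a DataFrame of item responses and the item columns of each construct).

    ```python
    # HTMT for two constructs from an item-level correlation matrix.
    import numpy as np

    def htmt(data, items_a, items_b):
        corr = data.corr().abs()
        hetero = corr.loc[items_a, items_b].values.mean()   # heterotrait-heteromethod
        def mono(items):                                    # mean within-construct correlation
            block = corr.loc[items, items].values
            iu = np.triu_indices_from(block, k=1)
            return block[iu].mean()
        return hetero / np.sqrt(mono(items_a) * mono(items_b))

    # Discriminant validity is commonly questioned when HTMT exceeds ~0.85-0.90.
    ```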

  14. Failure Criterion for Brick Masonry: A Micro-Mechanics Approach

    Directory of Open Access Journals (Sweden)

    Kawa Marek

    2015-02-01

    The paper deals with the formulation of a failure criterion for in-plane loaded masonry. Using a micro-mechanics approach, the strength estimation for a masonry microstructure with constituents obeying the Drucker-Prager criterion is determined numerically. The procedure invokes lower bound analysis: for assumed stress fields constructed within the masonry periodic cell, the critical load is obtained as the solution of a constrained optimization problem. The analysis is carried out for many different loading conditions at different orientations of the bed joints. The performance of the approach is verified against solutions obtained for the corresponding layered and block microstructures, which provide the upper and lower strength bounds for the masonry microstructure, respectively. Subsequently, a phenomenological anisotropic strength criterion for the masonry microstructure is proposed. The criterion has the form of a conjunction of the Jaeger critical plane condition and the Tsai-Wu criterion. The model proposed is identified by fitting numerical results obtained from the microstructural analysis. The identified criterion is then verified against results obtained for different loading orientations. It appears that the strength of the masonry microstructure can be satisfactorily described by the criterion proposed.

  15. Numerical and Experimental Validation of a New Damage Initiation Criterion

    Science.gov (United States)

    Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.

    2017-09-01

    Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent, and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion. Seven out of nine fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion are discussed briefly.

  19. Criterion for selection of quasi-equilibrium and quasi-frozen flows of the chemically reacting mixture N2O4 ⇌ 2NO2 ⇌ 2NO + O2 with isobaric heat supply (removal)

    International Nuclear Information System (INIS)

    Kostik, G.Eh.; Shiryaeva, N.M.

    1979-01-01

    A criterion is suggested for distinguishing quasi-equilibrium and quasi-frozen flows with isobaric heat supply (removal), incorporating the main external factors that affect the kinetics of the chemical process. This criterion is the complex [g/Fq], where g is the coolant flow rate, F is the channel cross-section and q is the heat flow. Estimation formulae for quasi-equilibrium [g/Fq]_e and quasi-frozen [g/Fq]_f flows are obtained. States deviating from equilibrium and frozen conditions in the linear region are considered, and graphical dependences of lg[g/Fq]_e, lg[g/Fq]_el, lg[g/Fq]_f and lg[g/Fq]_fl are presented as functions of the equilibrium temperature T_e, the pressure and the frozen reaction coordinate ε_2f. These graphs make it possible to estimate quickly and clearly the flow character of a chemically reacting coolant.

  17. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
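
    A toy illustration of sparse recovery in the p >> n regime with the LASSO, using synthetic data; the specific sizes and coefficients are arbitrary.

    ```python
    # LASSO in the p >> n regime on synthetic data: 1000 variables, 50 samples,
    # 5 truly active coefficients.
    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n, p = 50, 1000
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = 3.0                          # sparse ground truth
    y = X @ beta + rng.standard_normal(n)

    model = LassoCV(cv=5).fit(X, y)
    print(np.flatnonzero(model.coef_))      # indices of the selected variables
    ```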

  18. Halo models of HI selected galaxies

    Science.gov (United States)

    Paul, Niladri; Choudhury, Tirthankar Roy; Paranjape, Aseem

    2018-06-01

    Modelling the distribution of neutral hydrogen (HI) in dark matter halos is important for studying galaxy evolution in the cosmological context. We use a novel approach to infer the HI-dark matter connection at the massive end (m_HI > 10^9.8 M_⊙) from radio HI emission surveys, using optical properties of low-redshift galaxies as an intermediary. In particular, we use a previously calibrated optical HOD describing the luminosity- and colour-dependent clustering of SDSS galaxies and describe the HI content using a statistical scaling relation between the optical properties and HI mass. This allows us to compute the abundance and clustering properties of HI-selected galaxies and compare with data from the ALFALFA survey. We apply an MCMC-based statistical analysis to constrain the free parameters related to the scaling relation. The resulting best-fit scaling relation identifies massive HI galaxies primarily with optically faint blue centrals, consistent with expectations from galaxy formation models. We compare the HI-stellar mass relation predicted by our model with independent observations from matched HI-optical galaxy samples, finding reasonable agreement. As a further application, we make some preliminary forecasts for future observations of HI and optical galaxies in the expected overlap volume of SKA and Euclid/LSST.

  19. Selecting a model of supersymmetry breaking mediation

    International Nuclear Information System (INIS)

    AbdusSalam, S. S.; Allanach, B. C.; Dolan, M. J.; Feroz, F.; Hobson, M. P.

    2009-01-01

    We study the problem of selecting between different mechanisms of supersymmetry breaking in the minimal supersymmetric standard model using current data. We evaluate the Bayesian evidence of four supersymmetry breaking scenarios: mSUGRA, mGMSB, mAMSB, and moduli mediation. The results show a strong dependence on the dark matter assumption. Using the inferred cosmological relic density as an upper bound, minimal anomaly mediation is at least moderately favored over the CMSSM. Our fits also indicate that evidence for a positive sign of the μ parameter is moderate at best. We present constraints on the anomaly and gauge mediated parameter spaces and some previously unexplored aspects of the dark matter phenomenology of the moduli mediation scenario. We use sparticle searches, indirect observables and dark matter observables in the global fit and quantify robustness with respect to prior choice. We quantify how much information is contained within each constraint.

  20. Selective Oxidation of Lignin Model Compounds.

    Science.gov (United States)

    Gao, Ruili; Li, Yanding; Kim, Hoon; Mobley, Justin K; Ralph, John

    2018-05-02

    Lignin, the planet's most abundant renewable source of aromatic compounds, is difficult to degrade efficiently to well-defined aromatics. We developed a microwave-assisted catalytic Swern oxidation system using an easily prepared catalyst, MoO2Cl2(DMSO)2, and DMSO as the solvent and oxidant. It demonstrated high efficiency in transforming lignin model compounds containing the units and functional groups found in native lignins. The aromatic ring substituents strongly influenced the selectivity of β-ether phenolic dimer cleavage to generate sinapaldehyde and coniferaldehyde, monomers not usually produced by oxidative methods. Time-course studies on two key intermediates provided insight into the reaction pathway. Owing to the broad scope of this oxidation system and the insight gleaned with regard to its mechanism, this strategy could be adapted and applied in a general sense to the production of useful aromatic chemicals from phenolics and lignin.

  1. Decision criterion dynamics in animals performing an auditory detection task.

    Directory of Open Access Journals (Sweden)

    Robert W Mill

    Classical signal detection theory attributes bias in perceptual decisions to a threshold criterion, against which sensory excitation is compared. The optimal criterion setting depends on the signal level, which may vary over time, and about which the subject is naïve. Consequently, the subject must optimise its threshold by responding appropriately to feedback. Here a series of experiments was conducted, and a computational model applied, to determine how the decision bias of the ferret in an auditory signal detection task tracks changes in the stimulus level. The time scales of criterion dynamics were investigated by means of a yes-no signal-in-noise detection task, in which trials were grouped into blocks that alternately contained easy- and hard-to-detect signals. The responses of the ferrets implied both long- and short-term criterion dynamics. The animals exhibited a bias in favour of responding "yes" during blocks of harder trials, and vice versa. Moreover, the outcome of each single trial had a strong influence on the decision at the next trial. We demonstrate that the single-trial and block-level changes in bias are a manifestation of the same criterion update policy by fitting a model, in which the criterion is shifted by fixed amounts according to the outcome of the previous trial and decays strongly towards a resting value. The apparent block-level stabilisation of bias arises as the probabilities of outcomes and shifts on single trials mutually interact to establish equilibrium. To gain an intuition into how stable criterion distributions arise from specific parameter sets, we develop a Markov model which accounts for the dynamic effects of criterion shifts. Our approach provides a framework for investigating the dynamics of decisions at different timescales in other species (e.g., humans) and in other psychological domains (e.g., vision, memory).
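
    A minimal simulation of such a criterion-update policy, with fixed shifts after misses and false alarms and decay toward a resting value; all parameter values are illustrative, not the fitted ones from the study.

    ```python
    # Simulation of a shift-and-decay criterion update policy in a yes/no task.
    # All parameter values are illustrative.
    import numpy as np

    def simulate_criterion(signal_level=1.0, n_trials=1000, shift=0.2,
                           decay=0.9, resting=0.0, seed=1):
        rng = np.random.default_rng(seed)
        c, history = resting, []
        for _ in range(n_trials):
            is_signal = rng.random() < 0.5
            evidence = rng.normal(signal_level if is_signal else 0.0, 1.0)
            said_yes = evidence > c
            c = resting + decay * (c - resting)   # decay towards the resting value
            if is_signal and not said_yes:
                c -= shift                        # miss: lower criterion (more "yes")
            elif said_yes and not is_signal:
                c += shift                        # false alarm: raise criterion
            history.append(c)
        return np.array(history)
    ```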

  2. Towards chaos criterion in quantum field theory

    OpenAIRE

    Kuvshinov, V. I.; Kuzmin, A. V.

    2002-01-01

    Chaos criterion for quantum field theory is proposed. Its correspondence with classical chaos criterion in semi-classical regime is shown. It is demonstrated for real scalar field that proposed chaos criterion can be used to investigate stability of classical solutions of field equations.

  3. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein on estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
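
    The James-Stein shrinkage estimator the proposal builds on, in its simplest form (p ≥ 3 independent normal means with known unit variance), shown here as the positive-part variant.

    ```python
    # Positive-part James-Stein estimator for p >= 3 normal means with unit variance.
    import numpy as np

    def james_stein(x, sigma2=1.0):
        p = x.size
        assert p >= 3, "shrinkage dominates the MLE only for p >= 3"
        shrink = max(0.0, 1.0 - (p - 2) * sigma2 / np.sum(x ** 2))
        return shrink * x
    ```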

  4. A multipole acceptability criterion for electronic structure theory

    International Nuclear Information System (INIS)

    Schwegler, E.; Challacombe, M.; Head-Gordon, M.

    1998-01-01

    Accurate and computationally inexpensive estimates of multipole expansion errors are crucial to the success of several fast electronic structure methods. In this paper, a new nonempirical multipole acceptability criterion is described that is directly applicable to expansions of high order moments. Several model calculations typical of electronic structure theory are presented to demonstrate its performance. For cases involving small translation distances, accuracies are increased by up to five orders of magnitude over an empirical criterion. The new multipole acceptance criterion is on average within an order of magnitude of the exact expansion error. Use of the multipole acceptance criterion in hierarchical multipole based methods as well as in traditional electronic structure methods is discussed.

  5. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use HMM for stock selection. We first use HMM to make monthly regime predictions for four macroeconomic variables: inflation (consumer price index, CPI), industrial production index (INDPRO), stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had regimes similar to the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics were well rewarded during those time periods, and assign scores and corresponding weights to each of the stock characteristics. A composite score for each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top-ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, S&P 500. With an initial investment of $100 in December 1999, over 15 years, by December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
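
    A sketch of the monthly calibrate-and-predict step; the paper does not specify an implementation, so hmmlearn and the two-regime setup below are assumptions.

    ```python
    # Monthly regime step with a two-state Gaussian HMM (hmmlearn assumed).
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    X = np.random.default_rng(0).standard_normal((120, 1))  # stand-in monthly series

    hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=200)
    hmm.fit(X)                                    # calibrate parameters on history
    regimes = hmm.predict(X)                      # in-sample regime path
    next_regime = hmm.transmat_[regimes[-1]].argmax()  # most likely regime next month
    ```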

  6. Psyche Mission: Scientific Models and Instrument Selection

    Science.gov (United States)

    Polanskey, C. A.; Elkins-Tanton, L. T.; Bell, J. F., III; Lawrence, D. J.; Marchi, S.; Park, R. S.; Russell, C. T.; Weiss, B. P.

    2017-12-01

    NASA has chosen to explore (16) Psyche with their 14th Discovery-class mission. Psyche is a 226-km diameter metallic asteroid hypothesized to be the exposed core of a planetesimal that was stripped of its rocky mantle by multiple hit and run collisions in the early solar system. The spacecraft launch is planned for 2022 with arrival at the asteroid in 2026 for 21 months of operations. The Psyche investigation has five primary scientific objectives: A. Determine whether Psyche is a core, or if it is unmelted material. B. Determine the relative ages of regions of Psyche's surface. C. Determine whether small metal bodies incorporate the same light elements as are expected in the Earth's high-pressure core. D. Determine whether Psyche was formed under conditions more oxidizing or more reducing than Earth's core. E. Characterize Psyche's topography. The mission's task was to select the appropriate instruments to meet these objectives. However, exploring a metal world, rather than one made of ice, rock, or gas, requires development of new scientific models for Psyche to support the selection of the appropriate instruments for the payload. If Psyche is indeed a planetary core, we expect that it should have a detectable magnetic field. However, the strength of the magnetic field can vary by orders of magnitude depending on the formational history of Psyche. The implications of both the extreme low-end and the high-end predictions impact the magnetometer and mission design. For the imaging experiment, what can the team expect for the morphology of a heavily impacted metal body? Efforts are underway to further investigate the differences in crater morphology between high velocity impacts into metal and rock to be prepared to interpret the images of Psyche when they are returned. Finally, elemental composition measurements at Psyche using nuclear spectroscopy encompass a new and unexplored phase space of gamma-ray and neutron measurements. We will present some end…

  7. Multi-Criterion Two-Sided Matching of Public–Private Partnership Infrastructure Projects: Criteria and Methods

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-04-01

    Two kinds of evaluative criteria are associated with Public-Private Partnership (PPP) infrastructure projects, i.e., private evaluative criteria and public evaluative criteria. These evaluative criteria are inversely related; that is, the higher the public benefits, the lower the private surplus. To balance evaluative criteria in the Two-Sided Matching (TSM) decision, this paper develops a quantitative matching decision model to select an optimal matching scheme for PPP infrastructure projects based on the Hesitant Fuzzy Set (HFS) under unknown evaluative criterion weights. In the model, HFS is introduced to describe the values of the evaluative criteria, and the multi-criterion information given by groups is fully considered. The optimization model is built and solved by maximizing the whole deviation of each criterion, so that the evaluative criterion weights are determined objectively. Then, the match-degree of the two sides is calculated and a multi-objective optimization model is introduced to select an optimal matching scheme via a min-max approach. The results provide new insights and implications regarding the influence of evaluative criteria on the TSM decision.

  8. Extensions and applications of the Bohm criterion

    Science.gov (United States)

    Baalrud, Scott D.; Scheiner, Brett; Yee, Benjamin; Hopkins, Matthew; Barnat, Edward

    2015-04-01

    The generalized Bohm criterion is revisited in the context of incorporating kinetic effects of the electron and ion distribution functions into the theory. The underlying assumptions and results of two different approaches are compared: the conventional ‘kinetic Bohm criterion’ and a fluid-moment hierarchy approach. The former is based on the asymptotic limit of an infinitely thin sheath (λD/l = 0), whereas the latter is based on a perturbative expansion of a sheath that is thin compared to the plasma (λD/l ≪ 1). Here λD is the Debye length, which characterizes the sheath length scale, and l is a measure of the plasma or presheath length scale. The consequences of these assumptions are discussed in terms of how they restrict the class of distribution functions to which the resulting criteria can be applied. Two examples are considered to provide concrete comparisons between the two approaches. The first is a Tonks-Langmuir model including a warm ion source (Robertson 2009 Phys. Plasmas 16 103503). This highlights a substantial difference between the conventional kinetic theory, which predicts slow ions dominate at the sheath edge, and the fluid moment approach, which predicts slow ions have little influence. The second example considers planar electrostatic probes biased near the plasma potential using model equations and particle-in-cell simulations. This demonstrates a situation where electron kinetic effects alter the Bohm criterion, leading to a subsonic ion flow at the sheath edge.

  9. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly.

  10. A new Russell model for selecting suppliers

    NARCIS (Netherlands)

    Azadi, Majid; Shabani, Amir; Farzipoor Saen, Reza

    2014-01-01

    Recently, supply chain management (SCM) has been considered by many researchers. Supplier evaluation and selection plays a significant role in establishing an effective SCM. One of the techniques that can be used for selecting suppliers is data envelopment analysis (DEA). In some situations, to…

  11. An integrated model for supplier selection process

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In today's highly competitive manufacturing environment, the supplier selection process is one of the crucial activities in supply chain management. In order to select the best supplier(s), it is necessary not only to continuously track and benchmark the performance of suppliers but also to make a trade-off between tangible and intangible factors, some of which may conflict. In this paper an integration of case-based reasoning (CBR), analytical network process (ANP) and linear programming (LP) is proposed to solve the supplier selection problem.

  12. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing the unobserved variables which affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.

  13. Model selection in Bayesian segmentation of multiple DNA alignments.

    Science.gov (United States)

    Oldmeadow, Christopher; Keith, Jonathan M

    2011-03-01

    The analysis of multiple sequence alignments is allowing researchers to glean valuable insights into evolution, as well as identify genomic regions that may be functional, or discover novel classes of functional elements. Understanding the distribution of conservation levels that constitutes the evolutionary landscape is crucial to distinguishing functional regions from non-functional. Recent evidence suggests that a binary classification of evolutionary rates is inappropriate for this purpose and finds only highly conserved functional elements. Given that the distribution of evolutionary rates is multi-modal, determining the number of modes is of paramount concern. Through simulation, we evaluate the performance of a number of information criterion approaches derived from MCMC simulations in determining the dimension of a model. We utilize a deviance information criterion (DIC) approximation that is more robust than the approximations from other information criteria, and show our information criteria approximations do not produce superfluous modes when estimating conservation distributions under a variety of circumstances. We analyse the distribution of conservation for a multiple alignment comprising four primate species and mouse, and repeat this on two additional multiple alignments of similar species. We find evidence of six distinct classes of evolutionary rates that appear to be robust to the species used. Source code and data are available at http://dl.dropbox.com/u/477240/changept.zip.
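
    A generic DIC computation from MCMC output, of the kind the information-criterion comparison above relies on; `samples` (draws × parameters) and the `loglik` callback are assumed inputs.

    ```python
    # DIC from MCMC output: DIC = D(theta_bar) + 2 * pD, with pD = Dbar - D(theta_bar).
    import numpy as np

    def dic(samples, loglik):
        deviances = np.array([-2.0 * loglik(theta) for theta in samples])
        d_bar = deviances.mean()                     # posterior mean deviance
        d_hat = -2.0 * loglik(samples.mean(axis=0))  # deviance at the posterior mean
        p_d = d_bar - d_hat                          # effective number of parameters
        return d_hat + 2.0 * p_d                     # equivalently d_bar + p_d
    ```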

  14. ADDED VALUE AS EFFICIENCY CRITERION FOR INDUSTRIAL PRODUCTION PROCESS

    Directory of Open Access Journals (Sweden)

    L. M. Korotkevich

    2016-01-01

    A literature analysis has shown that the majority of researchers use classical efficiency criteria for constructing an optimization model of a production process: profit maximization; cost minimization; maximization of commercial product output; minimization of backlog in product demand; minimization of total time consumption due to production changes. The paper proposes to use an index of added value as the efficiency criterion, because it combines the economic and social interests of all the main stakeholders in the business activity: national government, property owners, employees and investors. The following types of added value are considered in the paper: joint-stock, market, monetary, economic and notional (gross, net, real). The paper suggests using an index of real value added as the efficiency criterion. Such an approach permits notional added value to be brought into a comparable form, because added value can be increased not only through efficiency improvements in enterprise activity but also through environmental factors, such as an excess of the rate of export price increases over the rate of import growth. An analysis of methods for calculating real value added has been made on a country-by-country basis (extrapolation, simple and double deflation). The method of double deflation was selected on the basis of this analysis; it is computed according to the Laspeyres, Paasche or Fisher indices. It is concluded that the expressions in use do not fully take into account the economic peculiarities of the Republic of Belarus: they are inappropriate when product cost is differentiated according to marketing outlets, and they do not take account of differences in the rates of several currencies, which are reflected in the export price of a released product and the import prices of raw materials, supplies and component parts. Taking this into consideration, expressions for the calculation of real value added have been specified.
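
    Double deflation in miniature: real value added is deflated gross output minus deflated intermediate inputs. The figures below are invented for illustration only.

    ```python
    # Real value added by double deflation (illustrative figures).
    gross_output, output_deflator = 1200.0, 1.25   # current prices, output price index
    intermediates, input_deflator = 700.0, 1.40    # current prices, input price index

    real_value_added = gross_output / output_deflator - intermediates / input_deflator
    print(real_value_added)   # 960.0 - 500.0 = 460.0 in base-year prices
    ```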

  15. The Bohm criterion for rf discharges

    International Nuclear Information System (INIS)

    Meijer, P.M.; Goedheer, W.J.

    1991-01-01

    The well-known dc Bohm criterion is extended to rf discharges. Both low-frequency (ω_rf ≪ ω_pi) and high-frequency (ω_pi ≪ ω_rf) regimes are considered. For low frequencies, the dc Bohm criterion holds. This criterion states that the initial energy of the ions entering the sheath must exceed a limit in order to obtain a stable sheath. For high frequencies, a modified limit is derived, which is somewhat lower than that of the dc Bohm criterion. The resulting ion current density in a high-frequency sheath is only a few percent lower than that for the dc case.

  16. Extended equal areas criterion: foundations and applications

    Energy Technology Data Exchange (ETDEWEB)

    Yusheng, Xue [Nanjing Automation Research Institute, Nanjing (China)]

    1994-12-31

    The extended equal area criterion (EEAC) provides analytical expressions for ultra-fast transient stability assessment, flexible sensitivity analysis, and means for preventive and emergency controls. Its outstanding performance has been demonstrated by thousands upon thousands of simulations on more than 50 real power systems and by on-line operation records in an EMS environment of the Northeast China Power System since September 1992. However, the research has mainly been based on heuristics and simulations. This paper lays a theoretical foundation for EEAC and brings to light the mechanism of transient stability. It proves that the dynamic EEAC furnishes a necessary and sufficient condition for stability of multi-machine systems with any detailed models, in the sense of the integration accuracy. This establishes a new platform for further advancing EEAC and a better understanding of the problems involved. An overview of EEAC applications in China is also given in this paper. (author) 30 refs.

  17. How to define the tool kit for the corrective maintenance service? : a tool kit definition model under the service performance criterion

    NARCIS (Netherlands)

    Chen, Denise

    2009-01-01

    Currently, the rules for defining tool kits vary and are oriented mainly toward the engineer's perspective. However, the definition of the tool kit is a trade-off problem between cost and service performance. This project is designed to develop a model that can integrate the engineer's preferences…

  18. Uncertainty associated with selected environmental transport models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-11-01

    A description is given of the capabilities of several models to predict accurately either pollutant concentrations in environmental media or radiological dose to human organs. The models are discussed in three sections: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations. This procedure is infeasible for food chain models and, therefore, the uncertainty embodied in the models' input parameters, rather than the model output, is estimated. Aquatic transport models are divided into one-dimensional, longitudinal-vertical, and longitudinal-horizontal models. Several conclusions were made about the ability of the Gaussian plume atmospheric dispersion model to predict accurately downwind air concentrations from releases under several sets of conditions. It is concluded that no validation study has been conducted to test the predictions of either aquatic or terrestrial food chain models. Using the aquatic pathway from water to fish to an adult for 137Cs as an example, a 95% one-tailed confidence limit interval for the predicted exposure is calculated by examining the distributions of the input parameters. Such an interval is found to be 16 times the value of the median exposure. A similar one-tailed limit for the air-grass-cow-milk-thyroid pathway for 131I and infants was 5.6 times the median dose. Of the three model types discussed in this report, the aquatic transport models appear to do the best job of predicting observed concentrations. However, this conclusion is based on many fewer aquatic validation data than were available for atmospheric model validation.

  19. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.

  1. Prediction of Hot Tearing Using a Dimensionless Niyama Criterion

    Science.gov (United States)

    Monroe, Charles; Beckermann, Christoph

    2014-08-01

    The dimensionless form of the well-known Niyama criterion is extended to include the effect of applied strain. Under applied tensile strain, the pressure drop in the mushy zone is enhanced and pores grow beyond typical shrinkage porosity without deformation. This porosity growth can be expected to align perpendicular to the applied strain and to contribute to hot tearing. A model to capture this coupled effect of solidification shrinkage and applied strain on the mushy zone is derived. The dimensionless Niyama criterion can be used to determine the critical liquid fraction value below which porosity forms. This critical value is a function of alloy properties, solidification conditions, and strain rate. Once a dimensionless Niyama criterion value is obtained from thermal and mechanical simulation results, the corresponding shrinkage and deformation pore volume fractions can be calculated. The novelty of the proposed method lies in using the critical liquid fraction at the critical pressure drop within the mushy zone to determine the onset of hot tearing. The magnitude of pore growth due to shrinkage and deformation is plotted as a function of the dimensionless Niyama criterion for an Al-Cu alloy as an example. Furthermore, a typical hot tear "lambda"-shaped curve showing deformation pore volume as a function of alloy content is produced for two Niyama criterion values.
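
    For orientation, the classical Niyama value that the dimensionless criterion builds on is the thermal gradient over the square root of the cooling rate; the units and input numbers below are illustrative.

    ```python
    # Classical Niyama value: thermal gradient over the square root of the
    # cooling rate, evaluated near the end of solidification.
    import math

    def niyama(thermal_gradient, cooling_rate):
        """thermal_gradient in K/mm, cooling_rate in K/s."""
        return thermal_gradient / math.sqrt(cooling_rate)

    print(niyama(2.0, 1.0))   # 2.0 (K s)**0.5 / mm
    ```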

  2. P2-17: Individual Differences in Dynamic Criterion Shifts during Perceptual Decision Making

    Directory of Open Access Journals (Sweden)

    Issac Rhim

    2012-10-01

    Perceptual decision-making involves placing an optimal criterion on the axis of encoded sensory evidence to maximize outcomes for choices. Optimal criterion setting becomes critical particularly when neural representations of sensory inputs are noisy and feedbacks for perceptual choices vary over time in an unpredictable manner. Here we monitored the time courses of the decision criteria adopted by human subjects while abruptly shifting the criterion of stochastic feedback to perceptual choices by certain amounts, in an unpredictable direction and at an unpredictable point in time. Subjects viewed a brief (0.3 s), thin (0.07 deg) annulus around the fixation point and were forced to judge whether the annulus was smaller or larger than an unknown boundary. We estimated moment-to-moment criteria by fitting a cumulative Gaussian function to the data within a sliding window of trials locked to a shift in feedback criterion. Unpredictable shifts in feedback criterion successfully induced shifts in the actual decision criterion towards an optimal criterion for many of the subjects, but with the time delay and amount of shift varying across individual subjects. There were disproportionately more overshooters (reaching and then surpassing the optimal criterion required) than undershooters (subpar reach), with a significant anti-correlation with sensory sensitivity. To find a mechanism that generates these individual differences, we developed a dynamic criterion learning model by modifying a reinforcement learning model, which assumes that the criterion is adjusted on every trial by a weighted discrepancy between actual and expected rewards.

  3. Selection of Hydrological Model for Waterborne Release

    International Nuclear Information System (INIS)

    Blanchard, A.

    1999-01-01

    This evaluation will aid in determining the potential impacts of liquid releases to downstream populations on the Savannah River. The purpose of this report is to evaluate the two available models and determine the appropriate model for use in following waterborne release analyses. Additionally, this report will document the Design Basis and Beyond Design Basis accidents to be used in the future study

  4. Development of failure criterion for Kevlar-epoxy fabric laminates

    Science.gov (United States)

    Tennyson, R. C.; Elliott, W. G.

    1984-01-01

    The development of the tensor polynomial failure criterion for composite laminate analysis is discussed. In particular, emphasis is given to the fabrication and testing of Kevlar-49 fabric (Style 285)/Narmco 5208 epoxy. The quadratic failure criterion with F12 = 0 provides accurate estimates of failure stresses for the Kevlar/epoxy investigated. The cubic failure criterion was re-cast into an operationally easier form, providing the engineer with design curves that can be applied to laminates fabricated from unidirectional prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all unidirectional prepregs, and its use must be restricted to the generic materials investigated to date.
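
    A direct evaluation of the quadratic tensor polynomial (Tsai-Wu) criterion with F12 = 0, as used above; the strength arguments are placeholders rather than the measured Kevlar-49/5208 values.

    ```python
    # Quadratic tensor polynomial (Tsai-Wu) failure index for plane stress,
    # with the interaction term F12 set to zero; failure when the index >= 1.
    def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S, F12=0.0):
        F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
        F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
        return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
                + F66*t12**2 + 2*F12*s1*s2)
    ```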

  5. SOCIO-PSYCHOLOGICAL CRITERIONS OF FAMILY LIFESTYLE TYPOLOGY

    Directory of Open Access Journals (Sweden)

    Yekaterina Anatolievna Yumkina

    2015-02-01

    The purpose of this article is to present socio-psychological criteria for a typology of family lifestyle, which were found during theoretical modelling and empirical research work. This is important in both fundamental and practical respects. St. Petersburg students (n = 116, from 19 to 21 years old) were examined by a special questionnaire, «Family relationship and home» (Kunitsina V.N., Yumkina Ye.A., 2012), which measures different aspects of family lifestyle. We also used a complex of methods that gave us information about personal values, self-rating and parent-child relationships. Data were divided into six groups according to three main criteria of family lifestyle typology: the social environment of family life, family activity, and family interpersonal relationships. Statistically significant differences were found between pairs of groups for each criterion. The results can be useful in spheres dealing with family crisis, family development, family traditions, etc.

  6. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.

  7. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  8. A model for selecting leadership styles.

    Science.gov (United States)

    Perkins, V J

    1992-01-01

    Occupational therapists lead a variety of groups during their professional activities. Such groups include therapy groups, treatment teams and management meetings. Therefore it is important for each therapist to understand theories of leadership and be able to select the most effective style for him or herself in specific situations. This paper presents a review of leadership theory and research as well as therapeutic groups. It then integrates these areas to assist students and new therapists in identifying a style that is effective for a particular group.

  9. Numerical and Experimental Validation of a New Damage Initiation Criterion

    NARCIS (Netherlands)

    Sadhinoch, M.; Atzema, E.H.; Perdahcioglu, E.S.; Van Den Boogaard, A.H.

    2017-01-01

    Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent and this

  10. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  11. An ethical criterion for geoscientists

    Science.gov (United States)

    Peppoloni, Silvia

    2013-04-01

    Anthropological research has demonstrated that at some point in human history, humankind made an evolutionary leap in the cultural sense: at first, an individual was able to perceive himself only as part of a community; later, he became able to perceive himself as an individual. The analysis of the linguistic roots of the word "Ethics" discloses the traces of this evolutionary transition and an original double meaning: on the one hand, "Ethics" contains a sense of belonging to the social fabric; on the other hand, it is related to the individual sphere. These two existential conditions (social and individual) unexpectedly co-exist in the word "Ethics". So, "Geo-Ethics" can be defined as the investigation and reflection on those values upon which to base appropriate behaviours and practices regarding the Geosphere (the social dimension), but also as the analysis of the relationships between the geoscientist who acts and his own actions (the individual dimension). Therefore, the meaning of the word "Geo-Ethics" calls upon geoscientists to face the responsibility of ethical behaviour. What does this responsibility consist of, and what motivations are necessary to push geoscientists to practice the Earth sciences in an ethical way? An ethical commitment exists if there is a search for truth. In their activities, geoscientists should be searchers for and defenders of truth. If geoscientists give up this role, they completely empty their work of meaning. Ethical obligations arise from the possession of specific knowledge that has practical consequences. Geoscientists, as an active and responsible part of society, have to serve society and the common good. The ethical criterion for a geoscientist should be rooted in his individual sphere, which is the source of any action even in the social sphere, and should have intellectual honesty as its main requirement. It includes: • respect for the truth that they seek and for others' ideas; • recognition of the value of others as valuable for themselves;

  12. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  13. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
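
    A brief sketch of the kind of procedure this record describes, using scikit-learn's KernelRidge with a Gaussian (RBF) kernel and a small cross-validated grid over the tuning parameters; the synthetic data and the grid values are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(1)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(200)

    # Small grids over the ridge penalty (alpha) and the Gaussian kernel
    # width (gamma), selected by cross-validation as the record suggests.
    search = GridSearchCV(
        KernelRidge(kernel="rbf"),
        param_grid={"alpha": [1e-3, 1e-2, 1e-1, 1.0],
                    "gamma": [0.1, 0.5, 1.0, 2.0]},
        cv=5,
    )
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))
    ```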

  14. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  15. Selection of Hydrological Model for Waterborne Release

    International Nuclear Information System (INIS)

    Blanchard, A.

    1999-01-01

    Following a request from the States of South Carolina and Georgia, downstream radiological consequences from postulated accidental aqueous releases at the three Savannah River Site nonreactor nuclear facilities will be examined. This evaluation will aid in determining the potential impacts of liquid releases to downstream populations on the Savannah River. The purpose of this report is to evaluate the two available models and determine the appropriate model for use in following waterborne release analyses. Additionally, this report will document the accidents to be used in the future study

  16. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  17. Adapting AIC to conditional model selection

    NARCIS (Netherlands)

    T. van Ommen (Thijs)

    2012-01-01

    In statistical settings such as regression and time series, we can condition on observed information when predicting the data of interest. For example, a regression model explains the dependent variables $y_1, \ldots, y_n$ in terms of the independent variables $x_1, \ldots, x_n$.

  18. The qualitative criterion of transient angle stability

    DEFF Research Database (Denmark)

    Lyu, R.; Xue, Y.; Xue, F.

    2015-01-01

    In almost all the literature, the qualitative assessment of transient angle stability extracts the angle information of generators based on the swing curve. As the angle (or angle difference) of concern and the threshold value rely strongly on engineering experience, the validity and robustness of these criteria are weak. Based on the stability mechanism from the extended equal area criterion (EEAC) theory, and combining it with abundant simulations of real systems, this paper analyzes the criteria in most of the literature and finds that the results could be too conservative or too optimistic. It is concluded...

  19. A work criterion for plastic collapse

    International Nuclear Information System (INIS)

    Muscat, Martin; Mackenzie, Donald; Hamilton, Robert

    2003-01-01

    A new criterion for evaluating limit and plastic loads in pressure vessel design by analysis is presented. The proposed criterion is based on the plastic work dissipated in the structure as loading progresses and may be used for structures subject to a single load or a combination of multiple loads. Example analyses show that limit and plastic loads given by the plastic work criterion are robust and consistent. The limit and plastic loads are determined purely by the inelastic response of the structure and are not influenced by the initial elastic response: a problem with some established plastic criteria

  20. A convenient accuracy criterion for time domain FE-calculations

    DEFF Research Database (Denmark)

    Jensen, Morten Skaarup

    1997-01-01

    An accuracy criterion that is well suited to time domain finite element (FE) calculations is presented. It is then used to develop a method for selecting time steps and element meshes that produce accurate results without significantly overburdening the computer. Use of this method is illustrated with a simple example, where comparison with an analytical solution shows that results are sufficiently accurate, which is not always the case with more primitive methods for determining the discretisation....

  1. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  2. Modeling shape selection of buckled dielectric elastomers

    Science.gov (United States)

    Langham, Jacob; Bense, Hadrien; Barkley, Dwight

    2018-02-01

    A dielectric elastomer whose edges are held fixed will buckle, given a sufficiently large applied voltage, resulting in a nontrivial out-of-plane deformation. We study this situation numerically using a nonlinear elastic model which decouples two of the principal electrostatic stresses acting on an elastomer: normal pressure due to the mutual attraction of oppositely charged electrodes and tangential shear ("fringing") due to repulsion of like charges at the electrode edges. These enter via physically simplified boundary conditions that are applied in a fixed reference domain using a nondimensional approach. The method is valid for small to moderate strains and is straightforward to implement in a generic nonlinear elasticity code. We validate the model by directly comparing the simulated equilibrium shapes with experiment. For circular electrodes which buckle axisymmetrically, the shape of the deflection profile is captured. Annular electrodes of different widths produce azimuthal ripples with wavelengths that match our simulations. In this case, it is essential to compute multiple equilibria because the first model solution obtained by the nonlinear solver (Newton's method) is often not the energetically favored state. We address this using a numerical technique known as "deflation." Finally, we observe the large number of different solutions that may be obtained for the case of a long rectangular strip.

  3. Modeling HIV-1 drug resistance as episodic directional selection.

    Science.gov (United States)

    Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L; Scheffler, Konrad

    2012-01-01

    The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  4. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  5. Orientation selection of equiaxed dendritic growth by three-dimensional cellular automaton model

    Energy Technology Data Exchange (ETDEWEB)

    Wei Lei [State Key Laboratory of Solidification Processing, Northwestern Polytechnical University, Xi'an 710072 (China); Lin Xin, E-mail: xlin@nwpu.edu.cn [State Key Laboratory of Solidification Processing, Northwestern Polytechnical University, Xi'an 710072 (China); Wang Meng; Huang Weidong [State Key Laboratory of Solidification Processing, Northwestern Polytechnical University, Xi'an 710072 (China)

    2012-07-01

    A three-dimensional (3-D) adaptive mesh refinement (AMR) cellular automaton (CA) model is developed to simulate the equiaxed dendritic growth of a pure substance. In order to reduce the mesh-induced anisotropy of the CA capture rules, a limited neighbor solid fraction (LNSF) method is presented. It is shown that the LNSF method reduces the mesh-induced anisotropy, based on the simulated morphologies for isotropic interface free energy. An expansion description using two interface free energy anisotropy parameters (ε1, ε2) is used in the present 3-D CA model. It is illustrated by the present 3-D CA model that positive ε1 favors dendritic growth with the ⟨100⟩ preferred directions, and negative ε2 favors dendritic growth with the ⟨110⟩ preferred directions, which is in good agreement with the prediction of the spherical plot of the inverse of the interfacial stiffness. Dendritic growth with orientation selection between ⟨100⟩ and ⟨110⟩ is also discussed using different ε1 with ε2 = -0.02. It is found that the morphologies simulated by the present CA model are as expected from the minimum stiffness criterion.

  6. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  7. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regression, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model covers the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite-sample performance of the proposed approach.
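
    A rough Python sketch of the two-step idea using scikit-learn's QuantileRegressor (L1-penalised pinball loss). The adaptive second step is implemented by rescaling the retained columns; the synthetic data and penalty levels are illustrative assumptions, not the authors' settings:

    ```python
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(2)
    n, p = 100, 300                                   # p >> n, as in the record
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:3] = [2.0, -1.5, 1.0]                       # sparse true model
    y = X @ beta + rng.standard_normal(n)

    # Step 1: L1-penalised median regression screens the ultra-high
    # dimensional model down to a manageable candidate set.
    step1 = QuantileRegressor(quantile=0.5, alpha=0.1).fit(X, y)
    keep = np.flatnonzero(np.abs(step1.coef_) > 1e-8)

    # Step 2: adaptive weights (heavier penalty on small step-1
    # coefficients), implemented by rescaling the kept columns before a
    # second L1 fit; the nonzero pattern matches the adaptive LASSO's.
    w = 1.0 / np.abs(step1.coef_[keep])
    step2 = QuantileRegressor(quantile=0.5, alpha=0.05).fit(X[:, keep] / w, y)
    selected = keep[np.abs(step2.coef_) > 1e-8]
    print("selected covariates:", selected)
    ```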

  8. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    With the growing importance of correctly selecting partners in the supply chains of agricultural enterprises, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of the agricultural supply chain. Secondly, a heuristic met...

  9. The stressor criterion for posttraumatic stress disorder: Does it matter?

    Science.gov (United States)

    Roberts, Andrea L.; Dohrenwend, Bruce P.; Aiello, Allison; Wright, Rosalind J.; Maercker, Andreas; Galea, Sandro; Koenen, Karestan C.

    2013-01-01

    Objective: The definition of the stressor criterion for posttraumatic stress disorder ("Criterion A1") is hotly debated, with major revisions being considered for DSM-V. We examine whether symptoms, course, and consequences of PTSD vary predictably with the type of stressful event that precipitates symptoms. Method: We used data from the 2009 PTSD diagnostic subsample (N=3,013) of the Nurses' Health Study II. We asked respondents about exposure to stressful events qualifying under 1) DSM-III, 2) DSM-IV, or 3) not qualifying under DSM Criterion A1. Respondents selected the event they considered worst and reported subsequent PTSD symptoms. Among participants who met all other DSM-IV PTSD criteria, we compared distress, symptom severity, duration, impairment, receipt of professional help, and nine physical, behavioral, and psychiatric sequelae (e.g., physical functioning, unemployment, depression) by precipitating event group. Various assessment tools were used to determine fulfillment of PTSD Criteria B through F and to assess these 14 outcomes. Results: Participants with PTSD from DSM-III events reported on average 1 more symptom (DSM-III mean=11.8 symptoms, DSM-IV=10.7, non-DSM=10.9) and more often reported that symptoms lasted one year or longer compared to participants with PTSD from other groups. However, sequelae of PTSD did not vary systematically with precipitating event type. Conclusions: Results indicate the stressor criterion as defined by the DSM may not be informative in characterizing PTSD symptoms and sequelae. In the context of the ongoing DSM-V revision, these results suggest that Criterion A1 could be expanded in DSM-V without much consequence for our understanding of PTSD phenomenology. Events not considered qualifying stressors under the DSM produced PTSD as consequential as PTSD following DSM-III events, suggesting PTSD may be an aberrantly severe but nonspecific stress response syndrome. PMID:22401487

  10. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  11. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    Full Text Available As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
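
    The following Python sketch illustrates the stability-selection mechanics described above (subsampling, a lasso base learner, and a selection-frequency threshold). For simplicity it uses an ordinary linear lasso in place of the Cox lasso discussed in the record, and the small alpha grid standing in for the regularization region Λ is an arbitrary choice:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    n, p = 200, 50
    X = rng.standard_normal((n, p))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(n)

    def stability_selection(X, y, alphas, n_subsamples=100, threshold=0.6):
        """Selection frequencies over random half-subsamples; a variable
        is kept if it is selected often enough for at least one alpha in
        the regularization region."""
        n, p = X.shape
        freq = np.zeros((len(alphas), p))
        for _ in range(n_subsamples):
            idx = rng.choice(n, size=n // 2, replace=False)
            for i, a in enumerate(alphas):
                coef = Lasso(alpha=a).fit(X[idx], y[idx]).coef_
                freq[i] += np.abs(coef) > 1e-8
        freq /= n_subsamples
        return np.flatnonzero(freq.max(axis=0) >= threshold)

    print("stable variables:", stability_selection(X, y, alphas=[0.05, 0.1, 0.2]))
    ```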

  12. Validation of elk resource selection models with spatially independent data

    Science.gov (United States)

    Priscilla K. Coe; Bruce K. Johnson; Michael J. Wisdom; John G. Cook; Marty Vavra; Ryan M. Nielson

    2011-01-01

    Knowledge of how landscape features affect wildlife resource use is essential for informed management. Resource selection functions often are used to make and validate predictions about landscape use; however, resource selection functions are rarely validated with data from landscapes independent of those from which the models were built. This problem has severely...

  13. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  14. Augmented Self-Modeling as an Intervention for Selective Mutism

    Science.gov (United States)

    Kehle, Thomas J.; Bray, Melissa A.; Byer-Alcorace, Gabriel F.; Theodore, Lea A.; Kovac, Lisa M.

    2012-01-01

    Selective mutism is a rare disorder that is difficult to treat. It is often associated with oppositional defiant behavior, particularly in the home setting, social phobia, and, at times, autism spectrum disorder characteristics. The augmented self-modeling treatment has been relatively successful in promoting rapid diminishment of selective mutism…

  15. Response to selection in finite locus models with nonadditive effects

    NARCIS (Netherlands)

    Esfandyari, Hadi; Henryon, Mark; Berg, Peer; Thomasen, Jørn Rind; Bijma, Piter; Sørensen, Anders Christian

    2017-01-01

    Under the finite-locus model in the absence of mutation, the additive genetic variation is expected to decrease when directional selection is acting on a population, according to quantitative-genetic theory. However, some theoretical studies of selection suggest that the level of additive

  16. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  17. A complete generalized adjustment criterion

    NARCIS (Netherlands)

    Perković, Emilija; Textor, Johannes; Kalisch, Markus; Maathuis, Marloes H.

    2015-01-01

    Covariate adjustment is a widely used approach to estimate total causal effects from observational data. Several graphical criteria have been developed in recent years to identify valid covariates for adjustment from graphical causal models. These criteria can handle multiple causes, latent

  18. Target Selection Models with Preference Variation Between Offenders

    NARCIS (Netherlands)

    Townsley, Michael; Birks, Daniel; Ruiter, Stijn; Bernasco, Wim; White, Gentry

    2016-01-01

    Objectives: This study explores preference variation in location choice strategies of residential burglars. Applying a model of offender target selection that is grounded in assertions of the routine activity approach, rational choice perspective, crime pattern and social disorganization theories,

  19. COPS model estimates of LLEA availability near selected reactor sites

    International Nuclear Information System (INIS)

    Berkbigler, K.P.

    1979-11-01

    The COPS computer model has been used to estimate local law enforcement agency (LLEA) officer availability in the neighborhood of selected nuclear reactor sites. The results of these analyses are presented both in graphic and tabular form in this report

  20. Molecular modelling of a chemodosimeter for the selective detection ...

    Indian Academy of Sciences (India)

    Wintec

    Molecular modelling of a chemodosimeter for the selective detection of As(III) ion in water. ... High levels of arsenic cause severe skin diseases including skin cancer ... Special Attention to Groundwater in SE Asia (eds) D. Chakraborti, A ...

  1. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
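
    A toy illustration of the AIC-based choice between a single- and a double-point-source fit, assuming Gaussian residuals; the sample count, residual sums of squares and parameter counts below are hypothetical, not values from the W-phase inversions:

    ```python
    import numpy as np

    def aic_gaussian(rss, n, k):
        """AIC for a least-squares fit with k free parameters, up to an
        additive constant that is common to both candidate models."""
        return n * np.log(rss / n) + 2 * k

    # Hypothetical residual sums of squares from single- and
    # double-point-source inversions of the same waveform data:
    n = 5000                         # number of waveform samples
    rss_single, k_single = 12.4, 10  # single source: fewer parameters
    rss_double, k_double = 9.8, 20   # double source: better fit, more parameters

    aic1 = aic_gaussian(rss_single, n, k_single)
    aic2 = aic_gaussian(rss_double, n, k_double)
    print("prefer double source" if aic2 < aic1 else "prefer single source")
    ```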

  2. Sensor Calibration Design Based on D-Optimality Criterion

    Directory of Open Access Journals (Sweden)

    Hajiyev Chingiz

    2016-09-01

    Full Text Available In this study, a procedure for the optimal selection of measurement points using the D-optimality criterion to find the best calibration curves of measurement sensors is proposed. The coefficients of the calibration curve are evaluated by applying the classical Least Squares Method (LSM). As an example, the problem of optimal selection of standard pressure setters when calibrating a differential pressure sensor is solved. The values obtained from the D-optimum measurement points for calibration of the differential pressure sensor are compared with those from actual experiments. A comparison of the calibration errors corresponding to the D-optimal, A-optimal and equidistant calibration curves is also made.
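
    A small Python sketch of D-optimal selection of calibration points: among candidate set-points, choose the subset that maximises det(XᵀX) for a polynomial calibration curve fitted by least squares. The candidate grid and the curve degree are illustrative assumptions:

    ```python
    import numpy as np
    from itertools import combinations

    def d_optimal_points(candidates, n_points, degree=2):
        """Exhaustively pick the subset of candidate set-points that
        maximises det(X'X) for a polynomial calibration curve."""
        best, best_det = None, -np.inf
        for subset in combinations(candidates, n_points):
            X = np.vander(np.array(subset), degree + 1)  # design matrix
            det = np.linalg.det(X.T @ X)
            if det > best_det:
                best, best_det = subset, det
        return best, best_det

    # Hypothetical pressure set-points available on the standard setter:
    grid = np.linspace(0.0, 10.0, 21)
    points, det = d_optimal_points(grid, n_points=5)
    print("D-optimal set-points:", points, "det:", det)
    ```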

  3. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we have focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods in the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts in a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder sets a burden on the calculations carried out by the genetic algorithm.

  4. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculation and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
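
    A compact Python sketch of the TOPSIS ranking step of such a hybrid model; the supplier scores and the weights (which in the paper would come from the ANP stage) are hypothetical:

    ```python
    import numpy as np

    def topsis(scores, weights, benefit):
        """Rank alternatives by relative closeness to the ideal solution.
        scores: (alternatives x criteria) matrix; benefit: True where a
        criterion is to be maximised, False where it is to be minimised."""
        norm = scores / np.linalg.norm(scores, axis=0)  # vector normalisation
        v = norm * weights
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_plus = np.linalg.norm(v - ideal, axis=1)
        d_minus = np.linalg.norm(v - anti, axis=1)
        return d_minus / (d_plus + d_minus)             # higher is better

    # Three hypothetical suppliers scored on cost, quality and delivery
    # reliability, with weights as they might come out of an ANP step:
    scores = np.array([[350.0, 8.0, 0.95],
                       [420.0, 9.5, 0.90],
                       [300.0, 7.0, 0.85]])
    weights = np.array([0.5, 0.3, 0.2])
    print(topsis(scores, weights, benefit=np.array([False, True, True])))
    ```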

  5. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is widely used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case with today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
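
    A minimal GA for variable selection in a logistic model, in the spirit of the tutorial. The paper provides R code; this Python sketch on synthetic data is an independent illustration: binary masks encode variable subsets, fitness is cross-validated accuracy, and truncation selection, one-point crossover and bit-flip mutation evolve the population:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n, p = 300, 20
    X = rng.standard_normal((n, p))
    logit = X[:, 0] - 2.0 * X[:, 3] + 1.5 * X[:, 7]   # 3 informative variables
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    def fitness(mask):
        """Cross-validated accuracy of a logistic model on the masked
        variable subset; empty subsets get zero fitness."""
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(LogisticRegression(max_iter=1000),
                               X[:, mask], y, cv=5).mean()

    pop = rng.random((30, p)) < 0.3                    # initial random masks
    for _ in range(25):                                # generations
        fit = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(fit)[-10:]]           # truncation selection
        children = []
        for _ in range(len(pop)):
            a, b = parents[rng.choice(10, 2, replace=False)]
            cut = rng.integers(1, p)                   # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(p) < 0.05                # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        pop = np.array(children)

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected variables:", np.flatnonzero(best))
    ```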

  6. Applying Four Different Risk Models in Local Ore Selection

    International Nuclear Information System (INIS)

    Richmond, Andrew

    2002-01-01

    Given the uncertainty in grade at a mine location, a financially risk-averse decision-maker may prefer to incorporate this uncertainty into the ore selection process. A FORTRAN program, risksel, is presented to calculate local risk-adjusted optimal ore selections using a negative exponential utility function and three dominance models: mean-variance, mean-downside risk, and stochastic dominance. All four methods are demonstrated in a grade control environment. In the case study, the optimal selections vary with the magnitude of financial risk that a decision-maker is prepared to accept. Except for the stochastic dominance method, the risk models reassign material from higher-cost to lower-cost processing options as the aversion to financial risk increases. The stochastic dominance model was usually unable to determine the optimal local selection.

  7. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem), while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.

  8. Selection, calibration, and validation of models of tumor growth.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Then representative classes are identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory

  9. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets, although model performance was high for both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  10. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong

  11. Postglacial recolonization at a snail's pace (Trochulus villosus): confronting competing refugia hypotheses using model selection.

    Science.gov (United States)

    Dépraz, A; Cordellier, M; Hausser, J; Pfenninger, M

    2008-05-01

    The localization of Last Glacial Maximum (LGM) refugia is crucial information to understand a species' history and predict its reaction to future climate changes. However, many phylogeographical studies often lack sampling designs intensive enough to precisely localize these refugia. The hairy land snail Trochulus villosus has a small range centred on Switzerland, which could be intensively covered by sampling 455 individuals from 52 populations. Based on mitochondrial DNA sequences (COI and 16S), we identified two divergent lineages with distinct geographical distributions. Bayesian skyline plots suggested that both lineages expanded at the end of the LGM. To find where the origin populations were located, we applied the principles of ancestral character reconstruction and identified a candidate refugium for each mtDNA lineage: the French Jura and Central Switzerland, both ice-free during the LGM. Additional refugia, however, could not be excluded, as suggested by the microsatellite analysis of a population subset. Modelling the LGM niche of T. villosus, we showed that suitable climatic conditions were expected in the inferred refugia, but potentially also in the nunataks of the alpine ice shield. In a model selection approach, we compared several alternative recolonization scenarios by estimating the Akaike information criterion for their respective maximum-likelihood migration rates. The 'two refugia' scenario received by far the best support given the distribution of genetic diversity in T. villosus populations. Provided that fine-scale sampling designs and various analytical approaches are combined, it is possible to refine our necessary understanding of species responses to environmental changes.

  12. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling. ... Hardware can be repaired by spare modules, which is not the case for software. ... Preventive maintenance is very important

  13. Fuzzy Investment Portfolio Selection Models Based on Interval Analysis Approach

    Directory of Open Access Journals (Sweden)

    Haifeng Guo

    2012-01-01

    Full Text Available This paper employs fuzzy set theory to address the unintuitive aspects of the Markowitz mean-variance (MV) portfolio model and extends it to a fuzzy investment portfolio selection model. Our model establishes intervals for expected returns and risk preference, which can take into account investors' different investment appetites and thus can find the optimal resolution for each interval. In the empirical part, we test this model on Chinese stock investments and find that it can fulfill the objectives of different kinds of investors. Finally, investment risk can be decreased when we add an investment limit for each stock in the portfolio, which indicates that our model is useful in practice.

  14. Diversified models for portfolio selection based on uncertain semivariance

    Science.gov (United States)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since financial markets are complex, the future security returns are sometimes represented mainly on the basis of experts' estimations, due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given subject to experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of the semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
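
    A simple Python illustration of the mean-semivariance idea on simulated return scenarios. A naive random search over the weight simplex stands in for the paper's hybrid 99-method/genetic algorithm, and all of the numbers are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # Hypothetical return scenarios for 4 securities (rows = scenarios),
    # standing in for the experts' uncertain-variable estimations:
    R = rng.normal(loc=[0.06, 0.08, 0.10, 0.04],
                   scale=[0.05, 0.10, 0.15, 0.02], size=(1000, 4))

    def semivariance(returns):
        """Downside semivariance: mean squared shortfall below the mean."""
        shortfall = np.minimum(returns - returns.mean(), 0.0)
        return np.mean(shortfall**2)

    def random_search(R, target=0.07, trials=20000):
        """Sample random weight vectors on the simplex and keep the one
        with least semivariance subject to a minimum expected return."""
        best_w, best_sv = None, np.inf
        for _ in range(trials):
            w = rng.dirichlet(np.ones(R.shape[1]))
            port = R @ w
            sv = semivariance(port)
            if port.mean() >= target and sv < best_sv:
                best_w, best_sv = w, sv
        return best_w, best_sv

    w, sv = random_search(R)
    print("weights:", np.round(w, 3), "semivariance:", sv)
    ```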

  15. The EMU debt criterion: an interpretation

    Directory of Open Access Journals (Sweden)

    R. BERNDSEN

    1997-12-01

    Full Text Available The convergence criteria specified in the Maastricht Treaty on government deficit and debt, inflation, the exchange rate and the long-term interest rate will play an important, if not decisive, role in determining which countries move on to the third stage of the Economic and Monetary Union (EMU). The aim of this work is to provide a possible interpretation of the EMU debt criterion. The author investigates the government debt criterion which, as Article 104c(2)(b) of the Treaty shows, has considerable scope for interpretation. Although this subject has been discussed extensively, relatively little work has been done to develop a clear interpretation of the EMU debt criterion. A flexible approach is adopted in which parts of the relevant Treaty text are characterised using two parameters.

  16. FFTBM and primary pressure acceptance criterion

    International Nuclear Information System (INIS)

    Prosek, A.

    2004-01-01

    When thermalhydraulic computer codes are used for simulation in the area of nuclear engineering, the question arises of how to conduct an objective comparison between the code calculation and measured data. To answer this, the fast Fourier transform based method (FFTBM) was developed. When the FFTBM was developed, acceptance criteria for primary pressure and total accuracy were set. In a recent study, the FFTBM was used for accuracy quantification of RD-14M large LOCA test B9401 calculations. The blind accuracy analysis indicated good total accuracy, while the primary pressure criterion was not fulfilled. The objective of the study was therefore to investigate the reasons for not fulfilling the primary pressure acceptance criterion and the applicability of the criterion to experimental facilities simulating heavy water reactors. The results of the open quantitative analysis showed that sensitivity analysis of influence parameters provides sufficient information to judge in which calculations the accuracy of the primary pressure is acceptable. (author)
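
    A minimal sketch of the FFTBM average-amplitude figure of merit, assuming the usual definition (spectrum of the calculation error normalised by the spectrum of the experimental signal); the pressure traces below are synthetic, and the 0.1 acceptance limit for primary pressure is quoted as commonly reported in the FFTBM literature, not taken from this record:

    ```python
    import numpy as np

    def fftbm_average_amplitude(measured, calculated):
        """Average amplitude AA of the FFT-based method: the spectrum of
        the calculation error normalised by the spectrum of the measured
        signal. Smaller AA means a more accurate calculation; for primary
        pressure an acceptance limit of AA <= 0.1 is commonly quoted."""
        err = np.abs(np.fft.rfft(calculated - measured))
        exp = np.abs(np.fft.rfft(measured))
        return err.sum() / exp.sum()

    # Hypothetical primary pressure traces (MPa) over a transient:
    t = np.linspace(0.0, 100.0, 1024)
    measured = 15.0 * np.exp(-t / 40.0)
    calculated = 15.2 * np.exp(-t / 42.0)
    print(f"AA = {fftbm_average_amplitude(measured, calculated):.3f}")
    ```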

  17. Selection of climate change scenario data for impact modelling

    DEFF Research Database (Denmark)

    Sloth Madsen, M; Fox Maule, C; MacKellar, N

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study...... illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make...... the climate projections suitable for impact analysis at the local scale, a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented...

  18. Statistical criterion for Bubbly-slug flow transition

    Energy Technology Data Exchange (ETDEWEB)

    Zigler, J; Elias, E [Technion-Israel Inst. of Tech., Haifa (Israel). Dept. of Mechanical Engineering

    1996-12-01

    The investigation of flow pattern transitions is still an interesting problem in multiphase flow research. It has been studied theoretically, and experimental confirmation of the models has been found by many investigators. The present paper deals with a statistical approach to bubbly-slug transitions in a vertical upward two-phase flow, and a new transition criterion is deduced from experimental data (authors).

  19. General stability criterion for inviscid parallel flow

    International Nuclear Information System (INIS)

    Sun Liang

    2007-01-01

    Arnol'd's second stability theorem is approached from an elementary point of view. First, a sufficient criterion for stability is found analytically as either -μ1 < U''/(U - Us) < 0 or U''/(U - Us) > 0 everywhere in the flow, where Us is the velocity at the inflection point and μ1 is the eigenvalue of Poincaré's problem. Second, this criterion is generalized to barotropic geophysical flows in the β plane. The connections between the present criteria and Arnol'd's nonlinear criteria are also discussed. The proofs are completely elementary and so could be used to teach undergraduate students.

  20. Sampling Criterion for EMC Near Field Measurements

    DEFF Research Database (Denmark)

    Franek, Ondrej; Sørensen, Morten; Ebert, Hans

    2012-01-01

    An alternative, quasi-empirical sampling criterion for EMC near field measurements intended for close coupling investigations is proposed. The criterion is based on maximum error caused by sub-optimal sampling of near fields in the vicinity of an elementary dipole, which is suggested as a worst-case representative of a signal trace on a typical printed circuit board. It has been found that the sampling density derived in this way is in fact very similar to that given by the antenna near field sampling theorem, if an error less than 1 dB is required. The principal advantage of the proposed formulation is its...

  1. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces; the model can also be used in auctions. The supplier selection process has become complex and difficult over the last twenty years, since the Internet plays an important role in business management. Companies have to concentrate their efforts on their core activities, and the other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches to the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than considering cost as a single factor.

  2. Adverse Selection Models with Three States of Nature

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-02-01

    In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.

  3. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with problems of the takeover of employees in outsourcing. Its principal purpose is to compare the staffing models of outsourcing in selected companies, for which multi-criteria analysis was chosen. The thesis is divided into six chapters. The first chapter is devoted to the theoretical part and describes basic concepts such as outsourcing, personal aspects, phases of outsourcing projects, communications and culture. The rest of the thesis is devote...

  4. ERP Software Selection Model using Analytic Network Process

    OpenAIRE

    Lesmana , Andre Surya; Astanti, Ririn Diar; Ai, The Jin

    2014-01-01

    During the implementation of Enterprise Resource Planning (ERP) in any company, one of the most important issues is the selection of ERP software that can satisfy the needs and objectives of the company. This issue is crucial since it may affect the duration of ERP implementation and the costs incurred for it. This research tries to construct a model of the selection of ERP software that is beneficial to the company in order to carry out the selection of the right ERP sof...

  5. Economic assessment model architecture for AGC/AVLIS selection

    International Nuclear Information System (INIS)

    Hoglund, R.L.

    1984-01-01

    The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint

  6. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied using a real life case study to assess its effectiveness. In addition, a what-if analysis technique will be used for model validation purposes.
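    AHP, one half of the hybrid model above, reduces to computing the principal eigenvector of a pairwise comparison matrix. The sketch below shows that step only (not the SEM part); the comparison matrix is invented for illustration, and Saaty's random consistency indices are hard-coded.

    ```python
    import numpy as np

    def ahp_priorities(pairwise):
        """Priority weights from a pairwise comparison matrix via the
        principal eigenvector, plus Saaty's consistency ratio (CR)."""
        A = np.asarray(pairwise, dtype=float)
        n = A.shape[0]
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        # Saaty's random consistency indices for n = 1..9.
        ri = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]
        ci = (eigvals[k].real - n) / (n - 1)
        cr = ci / ri[n - 1] if ri[n - 1] > 0 else 0.0
        return w, cr

    # Example: three hypothetical vendors compared on a single criterion.
    A = [[1, 3, 5],
         [1/3, 1, 2],
         [1/5, 1/2, 1]]
    weights, cr = ahp_priorities(A)
    print(weights, f"CR = {cr:.3f}")  # CR < 0.1 is conventionally acceptable
    ```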

  7. Model Selection in Historical Research Using Approximate Bayesian Computation

    Science.gov (United States)

    Rubio-Campillo, Xavier

    2016-01-01

    Formal Models and History Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study This work examines an alternate approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester’s laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
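    The core of Approximate Bayesian Computation for model selection can be conveyed in a few lines: sample (model, parameters) from the prior, simulate, and keep draws whose summary statistic lands near the observed one. The sketch below uses toy simulators in place of the Lanchester variants; everything here is illustrative, not the paper's actual pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(model, theta, n=50):
        """Toy simulators standing in for the competing model variants."""
        if model == "linear":
            return rng.normal(theta, 1.0, n)
        else:  # "squared"
            return rng.normal(theta**2, 1.0, n)

    def abc_model_selection(observed, n_draws=100_000, eps=0.1):
        """Rejection ABC: draw (model, theta) from the prior, keep draws
        whose summary statistic is within eps of the observed one."""
        s_obs = observed.mean()
        kept = {"linear": 0, "squared": 0}
        for _ in range(n_draws):
            m = rng.choice(["linear", "squared"])   # uniform model prior
            theta = rng.uniform(0.0, 3.0)           # parameter prior
            s_sim = simulate(m, theta).mean()
            if abs(s_sim - s_obs) < eps:
                kept[m] += 1
        return kept

    obs = simulate("squared", 1.5)
    counts = abc_model_selection(obs)
    # Under equal model priors, the ratio of accepted counts approximates
    # the Bayes factor between the two models.
    print(counts, "BF(squared/linear) ~", counts["squared"] / max(counts["linear"], 1))
    ```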

  8. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    explain counterintuitive results in value of travel time estimation. However, the results also point at the difficulty of finding suitable instruments for the selection mechanism. Taste heterogeneity is another important aspect of discrete choice modelling. Mixed logit models are designed to capture...... the question for a broader class of models. It is shown that the original result may be somewhat generalised. Another question investigated is whether mode choice operates as a self-selection mechanism in the estimation of the value of travel time. The results show that self-selection can at least partly...... of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset the approach taken is to use theory on the value of travel time as guidance...

  9. Short-Run Asset Selection using a Logistic Model

    Directory of Open Access Journals (Sweden)

    Walter Gonçalves Junior

    2011-06-01

    Investors constantly look for significant predictors and accurate models to forecast future results, whose occasional efficacy ends up being neutralized by market efficiency. Regardless, such predictors are widely used for seeking better (and more unique) perceptions. This paper aims to investigate to what extent some of the most notorious indicators have discriminatory power to select stocks, and whether it is feasible with such variables to build models that could anticipate those with good performance. To that end, logistic regressions were conducted with stocks traded at Bovespa, using the selected indicators as explanatory variables. Among the indicators investigated, the Bovespa Index return, liquidity, the Sharpe ratio, ROE, MB, size and age were evidenced to be significant predictors. Half-year logistic models were also adjusted in order to check their potential discriminatory power for asset selection.

  10. Uncertain programming models for portfolio selection with uncertain returns

    Science.gov (United States)

    Zhang, Bo; Peng, Jin; Li, Shengguo

    2015-10-01

    In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment, in which security returns cannot be well reflected by historical data but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model by using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed in the paper to provide a general method for solving the new models in general cases. At last, two numerical examples are provided to show the performance and applications of the models and algorithm.

  11. A simple stability criterion for CANDU figure-of-eight flow oscillations

    International Nuclear Information System (INIS)

    Gulshani, P.; Spinks, N.J.

    1983-01-01

    Potential flow oscillations in CANDU reactor primary heat transport system are analyzed in terms of a simple, linearized model. A simple, algebraic stability criterion is obtained. The model predictions are found to be in good agreement with those of thermohydraulic codes for high pressure natural circulation conditions. For normal operating conditions the criterion predicts the correct trend but overlooks important stabilizing effects. The model clarifies the instability mechanism; namely the response of enthalpy and, hence, pressure in the boiling region to flow change

  12. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant....

  13. A pellet-clad interaction failure criterion

    International Nuclear Information System (INIS)

    Howl, D.A.; Coucill, D.N.; Marechal, A.J.C.

    1983-01-01

    A Pellet-Clad Interaction (PCI) failure criterion, enabling the number of fuel rod failures in a reactor core to be determined for a variety of normal and fault conditions, is required for safety analysis. The criterion currently being used for the safety analysis of the Pressurized Water Reactor planned for Sizewell in the UK is defined and justified in this paper. The criterion is based upon a threshold clad stress which diminishes with increasing fast neutron dose. This concept is consistent with the mechanism of clad failure being stress corrosion cracking (SCC); providing excess corrodant is always present, the dominant parameter determining the propagation of SCC defects is stress. In applying the criterion, the SLEUTH-SEER 77 fuel performance computer code is used to calculate the peak clad stress, allowing for concentrations due to pellet hourglassing and the effect of radial cracks in the fuel. The method has been validated by analysis of PCI failures in various in-reactor experiments, particularly in the well-characterised power ramp tests in the Steam Generating Heavy Water Reactor (SGHWR) at Winfrith. It is also in accord with out-of-reactor tests with iodine and irradiated Zircaloy clad, such as those carried out at Kjeller in Norway. (author)

  14. The Leadership Criterion in Technological Institute

    International Nuclear Information System (INIS)

    Carvalho, Marcelo Souza de; Cussa, Adriana Lourenco d'Avila; Suita, Julio Cezar

    2005-01-01

    This paper introduces the Direction's 'Decision Making Practice', which has recently been reviewed with the merging of the foundations of the Leadership Criterion (CE-PNQ). These changes improved the control of institutional plans of action, which are the result of the global performance critical analysis and other information associated with the Decision Making Practice. (author)

  15. Mercier criterion for high-β tokamaks

    International Nuclear Information System (INIS)

    Galvao, R.M.O.

    1984-01-01

    An expression for the application of the Mercier criterion to numerical studies of diffuse high-β tokamaks (β ≈ ε, q ≈ 1) is derived which contains only leading-order contributions in the high-β tokamak approximation. (L.C.)

  16. Information criterion for the categorization quality evaluation

    Directory of Open Access Journals (Sweden)

    Michail V. Svirkin

    2011-05-01

    The paper considers the possibility of using the variation of information function as a quality criterion for categorizing a collection of documents. The performance of the variation of information function is examined as a function of the number of categories and the sample volume of the test document collection.

  17. On the upper bound in the Bohm sheath criterion

    Energy Technology Data Exchange (ETDEWEB)

    Kotelnikov, I. A., E-mail: I.A.Kotelnikov@inp.nsk.su; Skovorodin, D. I., E-mail: D.I.Skovorodin@inp.nsk.su [Russian Academy of Sciences, Budker Institute of Nuclear Physics, Siberian Branch (Russian Federation)

    2016-02-15

    The existence of an upper bound in the Bohm sheath criterion is discussed; according to this criterion, the Debye sheath at the interface between a plasma and a negatively charged electrode is stable only if the ion flow velocity in the plasma exceeds the ion sound velocity. It is stated that, with the exception of some artificial ionization models, the Bohm sheath criterion is satisfied as an equality at the lower bound, and the ion flow velocity is equal to the speed of sound. In the one-dimensional theory, a supersonic flow appears in an unrealistic model of a localized ion source whose size is less than the Debye length; however, supersonic flows seem to be possible in the two- and three-dimensional cases. The available numerical codes used to simulate charged particle sources with a plasma emitter do not assume the presence of an upper bound in the Bohm sheath criterion; nevertheless, correspondence with experimental data is usually achieved when the ion flow velocity in the plasma is close to the ion sound velocity.

  18. Fixation probability in a two-locus intersexual selection model.

    Science.gov (United States)

    Durand, Guillermo; Lessard, Sabin

    2016-06-01

    We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in the evolutionary theory of populations arising due to the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions and new forms of duality (for state-dependent mutation and multitype selection), which are used to prove ergodic theorems in this context and are applicable to many other questions. These methods also support a renormalization analysis of a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection, and which make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  20. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machines (SVM) have been proved to be a powerful tool for face recognition. The generalization capacity of an SVM depends on a model with optimal hyperparameters. The computational cost of SVM model selection makes application to face recognition difficult. In order to overcome this shortcoming, we utilize the advantage of uniform design, a space-filling design based on uniform scattering theory, to search for optimal SVM hyperparameters. We then propose a face recognition scheme based on the SVM with the optimal model, obtained by replacing the grid and gradient-based methods with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
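    A minimal sketch of the idea, assuming scikit-learn: evaluate the SVM by cross-validation only at a handful of points scattered over the (C, gamma) square, rather than on a full grid. The design points below are invented for illustration and are not an actual uniform design table.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    # A small uniform-design style table over (log2 C, log2 gamma): a few
    # points spread over the search square instead of an exhaustive grid.
    design = [(0, -7), (2, -5), (4, -3), (6, -9), (8, -11), (10, -6)]

    best = max(
        design,
        key=lambda p: cross_val_score(
            SVC(C=2.0 ** p[0], gamma=2.0 ** p[1]), X, y, cv=3
        ).mean(),
    )
    print("selected (log2 C, log2 gamma):", best)
    ```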

  1. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    This paper presents an integrated supplier selection and inventory management approach using the grey relationship model (GRM) as well as a multi-objective decision making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
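    The grey relational ranking step can be sketched compactly: normalise the decision matrix, measure each alternative's distance to an ideal reference series, and average the grey relational coefficients. The supplier scores below are invented; the distinguishing coefficient rho = 0.5 is the conventional default.

    ```python
    import numpy as np

    def grey_relational_grades(decision_matrix, rho=0.5):
        """Grey relational grades of alternatives (rows) against the ideal
        reference series, for benefit-type criteria (columns)."""
        X = np.asarray(decision_matrix, dtype=float)
        # Normalise each criterion to [0, 1] (larger-is-better).
        X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
        ref = X.max(axis=0)                      # ideal reference series
        delta = np.abs(X - ref)
        dmin, dmax = delta.min(), delta.max()
        coeff = (dmin + rho * dmax) / (delta + rho * dmax)
        return coeff.mean(axis=1)                # grade = mean coefficient

    # Example: four hypothetical suppliers scored on three benefit criteria.
    scores = [[0.7, 80, 9], [0.9, 60, 7], [0.6, 95, 8], [0.8, 70, 6]]
    grades = grey_relational_grades(scores)
    print("ranking (best first):", np.argsort(grades)[::-1])
    ```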

  2. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has varying activation (or signal support) largely resulting from initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to or combined with conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the Brazilian economy. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can have partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by the decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  5. Attention-based Memory Selection Recurrent Network for Language Modeling

    OpenAIRE

    Liu, Da-Rong; Chuang, Shun-Po; Lee, Hung-yi

    2016-01-01

    Recurrent neural networks (RNNs) have achieved great success in language modeling. However, since an RNN has a fixed-size memory, it cannot store all the information about the words it has seen before in the sentence, and thus useful long-term information may be ignored when predicting the next words. In this paper, we propose the Attention-based Memory Selection Recurrent Network (AMSRN), in which the model can review the information stored in the memory at each previous time ...

  6. Psychometric aspects of item mapping for criterion-referenced interpretation and bookmark standard setting.

    Science.gov (United States)

    Huynh, Huynh

    2010-01-01

    Locating an item on an achievement continuum (item mapping) is well-established in technical work for educational/psychological assessment. Applications of item mapping may be found in criterion-referenced (CR) testing (or scale anchoring, Beaton and Allen, 1992; Huynh, 1994, 1998a, 2000a, 2000b, 2006), computer-assisted testing, test form assembly, and in standard setting methods based on ordered test booklets. These methods include the bookmark standard setting originally used for the CTB/TerraNova tests (Lewis, Mitzel, Green, and Patz, 1999), the item descriptor process (Ferrara, Perie, and Johnson, 2002) and a similar process described by Wang (2003) for multiple-choice licensure and certification examinations. While item response theory (IRT) models such as the Rasch and two-parameter logistic (2PL) models traditionally place a binary item at its location, Huynh has argued in the cited papers that such mapping may not be appropriate in selecting items for CR interpretation and scale anchoring.

  7. The Selection of ARIMA Models with or without Regressors

    DEFF Research Database (Denmark)

    Johansen, Søren; Riani, Marco; Atkinson, Anthony C.

    We develop a $C_{p}$ statistic for the selection of regression models with stationary and nonstationary ARIMA error term. We derive the asymptotic theory of the maximum likelihood estimators and show they are consistent and asymptotically Gaussian. We also prove that the distribution of the sum...

  8. Model selection for contingency tables with algebraic statistics

    NARCIS (Netherlands)

    Krampe, A.; Kuhnt, S.; Gibilisco, P.; Riccimagno, E.; Rogantin, M.P.; Wynn, H.P.

    2009-01-01

    Goodness-of-fit tests based on chi-square approximations are commonly used in the analysis of contingency tables. Results from algebraic statistics combined with MCMC methods provide alternatives to the chi-square approximation. However, within a model selection procedure usually a large number of

  9. Computationally efficient thermal-mechanical modelling of selective laser melting

    NARCIS (Netherlands)

    Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam

    2017-01-01

    Selective laser melting (SLM) is a powder based additive manufacturing (AM) method to produce high density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is

  10. Multivariate time series modeling of selected childhood diseases in ...

    African Journals Online (AJOL)

    This paper is focused on modeling the five most prevalent childhood diseases in Akwa Ibom State using a multivariate approach to time series. An aggregate of 78,839 reported cases of malaria, upper respiratory tract infection (URTI), Pneumonia, anaemia and tetanus were extracted from five randomly selected hospitals in ...

  11. Electron accelerators for radiation processing: Criterions of selection and exploitation

    International Nuclear Information System (INIS)

    Zimek, Zbigniew

    2001-01-01

    The progress in accelerator technology is tightly attached to the continuously advancing development in many branches of technical activity. Although the present level of accelerator development can satisfy most commercial requirements, the field continues to expand and improve quality by offering efficient, cheap, reliable, high average beam power commercial units. Accelerator construction must be a compromise between size, efficiency and cost with respect to the field of application. High power accelerators have been developed to meet the specific demands of flue gas treatment and other high throughput applications, to increase the capacity of the process and reduce the unit cost of operation. Automatic control, reliability and reduced maintenance, adequate adaptation to process conditions, suitable electron energy and beam power are the basic features of modern accelerator construction. Accelerators have the potential to serve as industrial radiation sources and may eventually replace isotope sources. Electron beam plants can transfer much higher amounts of energy into the irradiated objects than other types of facilities, including gamma plants. This provides the opportunity to construct technological lines with high capacity that are more technically and economically suitable, with high throughputs, short residence times and great versatility.

  12. Measures and limits of models of fixation selection.

    Directory of Open Access Journals (Sweden)

    Niklas Wilming

    Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure between probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound for these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allows a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
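    Both recommended measures are straightforward to compute for a saliency map evaluated against a binary fixation map. A minimal sketch, assuming scikit-learn and SciPy, with synthetic data in place of real eye-tracking recordings and without the small-sample entropy correction the paper derives:

    ```python
    import numpy as np
    from scipy.stats import entropy
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)

    # A hypothetical saliency map (model prediction) over a 32x32 image.
    saliency = rng.random((32, 32))
    fixated = np.zeros((32, 32), dtype=int)       # 1 where subjects fixated
    fixated[rng.integers(0, 32, 40), rng.integers(0, 32, 40)] = 1

    # AUC: treat saliency as a classifier score for fixated vs non-fixated pixels.
    auc = roc_auc_score(fixated.ravel(), saliency.ravel())

    # KL-divergence between the empirical fixation distribution and the
    # model's normalised saliency distribution (with a small regulariser).
    p = (fixated.ravel() + 1e-9) / (fixated.sum() + 1e-9 * fixated.size)
    q = saliency.ravel() / saliency.sum()
    kl = entropy(p, q)
    print(f"AUC = {auc:.3f}, KL = {kl:.3f} nats")
    ```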

  13. multivariate time series modeling of selected childhood diseases

    African Journals Online (AJOL)

    2016-06-17

    KEYWORDS: Multivariate Approach, Pre-whitening, Vector Time Series. ... The AIC criterion asymptotically overestimates the order with positive probability, whereas the BIC and HQC criteria ... has the same asymptotic distribution as Q.

  14. Generalized Selectivity Description for Polymeric Ion-Selective Electrodes Based on the Phase Boundary Potential Model.

    Science.gov (United States)

    Bakker, Eric

    2010-02-15

    A generalized description of the response behavior of potentiometric polymer membrane ion-selective electrodes is presented on the basis of ion-exchange equilibrium considerations at the sample-membrane interface. This paper includes and extends previously reported theoretical advances in a more compact yet more comprehensive form. Specifically, the phase boundary potential model is used to derive the origin of the Nernstian response behavior in a single expression, which is valid for a membrane containing any charge type and complex stoichiometry of ionophore and ion-exchanger. This forms the basis for a generalized expression of the selectivity coefficient, which may be used for the selectivity optimization of ion-selective membranes containing electrically charged and neutral ionophores of any desired stoichiometry. It is shown to reduce to expressions published previously for specialized cases, and may be effectively applied to problems relevant in modern potentiometry. The treatment is extended to mixed ion solutions, offering a comprehensive yet formally compact derivation of the response behavior of ion-selective electrodes to a mixture of ions of any desired charge. It is compared to predictions by the less accurate Nicolsky-Eisenman equation. The influence of ion fluxes or any form of electrochemical excitation is not considered here, but may be readily incorporated if an ion-exchange equilibrium at the interface may be assumed in these cases.
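    For reference, the Nicolsky-Eisenman equation against which the generalized description is compared is conventionally written as

    $$E = E^{0} + \frac{RT}{z_I F}\,\ln\!\left(a_I + \sum_{J \neq I} K_{IJ}^{\mathrm{pot}}\, a_J^{\,z_I / z_J}\right)$$

    where $a_I$ is the primary ion activity, $z_I$ its charge, and $K_{IJ}^{\mathrm{pot}}$ the potentiometric selectivity coefficient toward interfering ion $J$.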

  15. Fisher-Wright model with deterministic seed bank and selection.

    Science.gov (United States)

    Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel

    2017-04-01

    Seed banks are common characteristics of many plant species, allowing the storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection, coupled with a deterministic seed bank, assuming that the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path of approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached. Copyright © 2016 Elsevier Inc. All rights reserved.
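    The baseline dynamics are easy to simulate. The sketch below implements a plain haploid Fisher-Wright model with selection but omits the seed bank (the paper's point being that dormancy rescales these fixation times); all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def time_to_fixation(n=1000, p0=0.01, s=0.05, max_gen=100_000):
        """Haploid Fisher-Wright model with selection (no seed bank):
        allele A has fitness 1+s; returns (fixed?, generations)."""
        p = p0
        for gen in range(1, max_gen + 1):
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection step
            p = rng.binomial(n, p_sel) / n                  # drift step
            if p in (0.0, 1.0):
                return p == 1.0, gen
        return None, max_gen

    runs = [time_to_fixation() for _ in range(200)]
    fix_times = [g for fixed, g in runs if fixed]
    print("fixation rate:", len(fix_times) / len(runs),
          "mean time to fixation:", np.mean(fix_times))
    ```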

  16. Relative criterion for validity of a semiclassical approach to the dynamics near quantum critical points.

    Science.gov (United States)

    Wang, Qian; Qin, Pinquan; Wang, Wen-ge

    2015-10-01

    Based on an analysis of Feynman's path integral formulation of the propagator, a relative criterion is proposed for validity of a semiclassical approach to the dynamics near critical points in a class of systems undergoing quantum phase transitions. It is given by an effective Planck constant, in the relative sense that a smaller effective Planck constant implies better performance of the semiclassical approach. Numerical tests of this relative criterion are given in the XY model and in the Dicke model.

  17. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially...

  18. A model for the sustainable selection of building envelope assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Huedo, Patricia, E-mail: huedo@uji.es [Universitat Jaume I (Spain); Mulet, Elena, E-mail: emulet@uji.es [Universitat Jaume I (Spain); López-Mesa, Belinda, E-mail: belinda@unizar.es [Universidad de Zaragoza (Spain)

    2016-02-15

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  19. A model for the sustainable selection of building envelope assemblies

    International Nuclear Information System (INIS)

    Huedo, Patricia; Mulet, Elena; López-Mesa, Belinda

    2016-01-01

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  20. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  1. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. The article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable will have a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
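    The article works in R; a Python analogue of the likelihood-ratio-test step at the heart of purposeful selection, with invented covariate names and assuming statsmodels, might look like this:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 500
    age = rng.normal(50, 10, n)          # hypothetical covariates
    smoker = rng.integers(0, 2, n)
    logit_p = -5 + 0.08 * age + 0.9 * smoker
    y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    full = sm.Logit(y, sm.add_constant(np.column_stack([age, smoker]))).fit(disp=0)
    reduced = sm.Logit(y, sm.add_constant(age)).fit(disp=0)

    # Likelihood ratio test: does deleting `smoker` significantly worsen fit?
    lr = 2 * (full.llf - reduced.llf)
    p_value = stats.chi2.sf(lr, df=1)
    print(f"LR = {lr:.2f}, p = {p_value:.4f}")  # small p => keep the variable
    ```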

  2. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  3. Modeling and Solving the Liner Shipping Service Selection Problem

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Balakrishnan, Anant

    We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes...... to accurately model transshipment costs and incorporate routing policies such as maximum transit time, maritime cabotage rules, and operational alliances. Our hop-indexed arc flow model is smaller and easier to solve than path flow models. We outline a preprocessing procedure that exploits both the routing...... requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB....

  4. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

  5. Geometric steering criterion for two-qubit states

    Science.gov (United States)

    Yu, Bai-Chu; Jia, Zhih-Ahn; Wu, Yu-Chun; Guo, Guang-Can

    2018-01-01

    According to the geometric characterization of measurement assemblages and local hidden state (LHS) models, we propose a steering criterion which is both necessary and sufficient for two-qubit states under arbitrary measurement sets. A quantity is introduced to describe the required local resources to reconstruct a measurement assemblage for two-qubit states. We show that the quantity can be regarded as a quantification of steerability and be used to find out optimal LHS models. Finally we propose a method to generate unsteerable states, and construct some two-qubit states which are entangled but unsteerable under all projective measurements.

  6. Stability Criterion for a Finned Spinning Projectile

    OpenAIRE

    S. D. Naik

    2000-01-01

    The state-of-the-art in gun projectile technology has been used for aerodynamic stabilisation. This approach is acceptable for guided and controlled rockets, but free-flight rockets suffer from unacceptable dispersion. Sabot projectiles with both spin and fins developed during the last decade need careful analysis. In this study, the second method of Liapunov has been used to develop a stability criterion for a projectile designed with small fins and made to spin in flight. This...

  7. Blind equalization with criterion with memory nonlinearity

    Science.gov (United States)

    Chen, Yuanjie; Nikias, Chrysostomos L.; Proakis, John G.

    1992-06-01

    Blind equalization methods usually combat the linear distortion caused by a nonideal channel via a transversal filter, without resorting to a priori known training sequences. We introduce a new criterion with memory nonlinearity (CRIMNO) for the blind equalization problem. The basic idea of this criterion is to augment the Godard [or constant modulus algorithm (CMA)] cost function with additional terms that penalize the autocorrelations of the equalizer outputs. Several variations of the CRIMNO algorithm are derived, with the variations dependent on (1) whether empirical averages or single-point estimates are used to approximate the expectations, (2) whether the recent or the delayed equalizer coefficients are used, and (3) whether the weights applied to the autocorrelation terms are fixed or are allowed to adapt. Simulation experiments show that the CRIMNO algorithm, and especially its adaptive weight version, exhibits faster convergence than the Godard (or CMA) algorithm. Extensions of the CRIMNO criterion to accommodate the case of correlated inputs to the channel are also presented.
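    CRIMNO augments the Godard/constant-modulus cost with autocorrelation penalty terms. The sketch below implements only the underlying CMA gradient update that CRIMNO builds on, with a toy channel; the memory-nonlinearity terms are indicated in a comment rather than implemented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def cma_equalize(x, n_taps=11, mu=1e-3, r2=1.0):
        """Constant modulus algorithm: minimise E[(|y|^2 - R2)^2] by
        stochastic gradient on the equalizer taps w. CRIMNO would add
        weighted penalties on E[y(n) y*(n-k)] to this cost."""
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0                       # centre-spike initialisation
        for n in range(n_taps, len(x)):
            u = x[n - n_taps:n][::-1]              # regressor (most recent first)
            y = w.conj() @ u
            e = y * (np.abs(y) ** 2 - r2)          # CMA error term
            w -= mu * e.conj() * u                 # gradient step
        return w

    # QPSK symbols through a simple FIR channel plus noise.
    sym = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
    chan = np.array([1.0, 0.4 + 0.2j, 0.1])
    x = np.convolve(sym, chan)[: len(sym)] + 0.01 * rng.normal(size=len(sym))
    w = cma_equalize(x, r2=1.0)
    ```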

  8. A New Multiaxial High-Cycle Fatigue Criterion Based on the Critical Plane for Ductile and Brittle Materials

    Science.gov (United States)

    Wang, Cong; Shang, De-Guang; Wang, Xiao-Wei

    2015-02-01

    An improved high-cycle multiaxial fatigue criterion based on the critical plane was proposed in this paper. The critical plane was defined as the plane of maximum shear stress (MSS) in the proposed multiaxial fatigue criterion, which is different from the traditional critical plane based on the MSS amplitude. The proposed criterion was extended into a fatigue life prediction model applicable to both ductile and brittle materials. The fatigue life prediction model based on the proposed high-cycle multiaxial fatigue criterion was validated with experimental results obtained from tests of 7075-T651 aluminum alloy and from some references.

  9. Methods for selecting fixed-effect models for heterogeneous codon evolution, with comments on their application to gene and genome data.

    Science.gov (United States)

    Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P

    2007-02-08

    Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori
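    The selection criteria compared in the study are simple functions of the maximized log-likelihood. A minimal sketch, with invented log-likelihood values for two candidate partitionings:

    ```python
    def aic(log_lik, k):
        """Akaike information criterion for a model with k parameters."""
        return -2.0 * log_lik + 2.0 * k

    def aicc(log_lik, k, n):
        """Small-sample corrected AIC (requires n > k + 1)."""
        return aic(log_lik, k) + 2.0 * k * (k + 1) / (n - k - 1)

    # Example: two nested codon models fitted to the same n sites
    # (log-likelihoods and parameter counts are illustrative).
    n = 300
    candidates = {"one-ratio": (-2450.7, 61), "partitioned": (-2441.2, 63)}
    for name, (ll, k) in candidates.items():
        print(f"{name}: AIC={aic(ll, k):.1f}  AICc={aicc(ll, k, n):.1f}")
    # The model with the smallest AIC/AICc is preferred.
    ```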

  10. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....

  11. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Sustainable science and technology development is also conditioned by the continuous development of the means of production, which have a key role in the structure of each production system. The mechanical nature of the means of production is complemented by controlling and electronic devices in the context of intelligent industry. The selection of production machines for a technological process or project has so far been resolved in practice often only intuitively. With increasing intelligence, the number of variable parameters that have to be considered when choosing a production device is also increasing. It is necessary to use computing techniques and decision making methods based on heuristic methods and more precise methodological procedures during the selection. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  12. Selection of Models for Ingestion Pathway and Relocation Radii Determination

    International Nuclear Information System (INIS)

    Blanchard, A.

    1998-01-01

    The distance at which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models were considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities

  13. Selection of Models for Ingestion Pathway and Relocation

    International Nuclear Information System (INIS)

    Blanchard, A.; Thompson, J.M.

    1998-01-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways

  14. Selection of Models for Ingestion Pathway and Relocation

    International Nuclear Information System (INIS)

    Blanchard, A.; Thompson, J.M.

    1999-01-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways

  15. Predicting artificially drained areas by means of selective model ensemble

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Beucher, Amélie; Iversen, Bo Vangsø

    . The approaches employed include decision trees, discriminant analysis, regression models, neural networks and support vector machines amongst others. Several models are trained with each method, using variously the original soil covariates and principal components of the covariates. With a large ensemble...... out since the mid-19th century, and it has been estimated that half of the cultivated area is artificially drained (Olesen, 2009). A number of machine learning approaches can be used to predict artificially drained areas in geographic space. However, instead of choosing the most accurate model....... The study aims firstly to train a large number of models to predict the extent of artificially drained areas using various machine learning approaches. Secondly, the study will develop a method for selecting the models, which give a good prediction of artificially drained areas, when used in conjunction...

  16. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's recovery rate did not improve substantially as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
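
    A minimal sketch of the selection mechanics discussed above, assuming Gaussian least-squares fits so that AIC and BIC can be computed from the residual sum of squares (up to an additive constant); the simulated data and the two candidate models are illustrative, not the paper's error correction models.

        import numpy as np

        def gaussian_ic(rss, n, k):
            """AIC and BIC of a least-squares fit, up to an additive constant."""
            aic = n * np.log(rss / n) + 2 * k
            bic = n * np.log(rss / n) + k * np.log(n)
            return aic, bic

        rng = np.random.default_rng(0)
        x = rng.normal(size=200)
        y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=200)

        X_std = np.column_stack([np.ones_like(x), x])              # standard model
        X_cpx = np.column_stack([np.ones_like(x), x, x**2, x**3])  # complex model
        for name, X in [("standard", X_std), ("complex", X_cpx)]:
            _, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
            aic, bic = gaussian_ic(rss[0], len(y), X.shape[1])
            print(f"{name}: AIC={aic:.1f}  BIC={bic:.1f}")

    With the true process being the simple model, BIC's heavier penalty k·log(n) makes it less likely than AIC to pick the complex alternative, mirroring the recovery-rate pattern reported above.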

  17. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Science.gov (United States)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space from the low-likelihood area to the high-likelihood area gradually, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
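
    The following toy sketch shows the core nested-sampling loop for a cheap two-dimensional Gaussian likelihood under a uniform prior. The constrained draw here is a naive rejection step, where the paper would substitute a DREAMzs-style local sampler and a sparse-grid surrogate for the groundwater model; the final live-point correction to the evidence is omitted for brevity.

        import numpy as np

        rng = np.random.default_rng(1)

        def log_like(theta):                      # toy 2-D Gaussian likelihood
            return -0.5 * np.sum(theta**2)

        N, steps = 400, 2000
        live = rng.uniform(-5, 5, size=(N, 2))    # uniform prior on [-5, 5]^2
        ll = np.array([log_like(t) for t in live])
        log_X, log_Z = 0.0, -np.inf

        for _ in range(steps):
            worst = np.argmin(ll)
            log_w = log_X + np.log(1 - np.exp(-1.0 / N))  # prior-volume shrinkage
            log_Z = np.logaddexp(log_Z, log_w + ll[worst])
            # Replace the worst live point by a prior draw above the bound
            # (a DREAMzs-like local sampler would be used in practice).
            while True:
                cand = rng.uniform(-5, 5, size=2)
                if log_like(cand) > ll[worst]:
                    live[worst], ll[worst] = cand, log_like(cand)
                    break
            log_X -= 1.0 / N

        print("log marginal likelihood ≈", round(float(log_Z), 2))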

  18. Modeling selective pressures on phytoplankton in the global ocean.

    Directory of Open Access Journals (Sweden)

    Jason G Bragg

    Full Text Available Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying

  19. Modeling selective pressures on phytoplankton in the global ocean.

    Science.gov (United States)

    Bragg, Jason G; Dutkiewicz, Stephanie; Jahn, Oliver; Follows, Michael J; Chisholm, Sallie W

    2010-03-10

    Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying ocean processes and

  20. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  1. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. A Model of Social Selection and Successful Altruism

    Science.gov (United States)

    1989-10-07

    D., The evolution of social behavior. Annual Reviews of Ecological Systems, 5:325-383 (1974). 2. Dawkins, R., The selfish gene. Oxford: Oxford... alive and well. It will be important to re-examine this striking historical experience, not in terms of oversimplified models of the "selfish gene," but... Darwinian Analysis: The acceptance by many modern geneticists of the axiom that the basic unit of selection is the "selfish gene" quickly led to the

  3. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that optimize more than one predefined objective simultaneously by compromising among them. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.
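
    For readers unfamiliar with the Pareto-optimality notion used here, the sketch below shows the deterministic dominance test that SPRINT-Race infers statistically from noisy samples; the three models and their two objective scores (both maximized) are invented.

        import numpy as np

        def dominates(a, b):
            """True if a is no worse than b everywhere and better somewhere."""
            a, b = np.asarray(a), np.asarray(b)
            return bool(np.all(a >= b) and np.any(a > b))

        scores = {"m1": (0.92, 0.30), "m2": (0.88, 0.45), "m3": (0.85, 0.25)}
        pareto = [m for m in scores
                  if not any(dominates(scores[o], scores[m])
                             for o in scores if o != m)]
        print("Pareto-optimal models:", pareto)   # -> ['m1', 'm2']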

  4. Establishment of selected acute pulmonary thromboembolism model in experimental sheep

    International Nuclear Information System (INIS)

    Fan Jihai; Gu Xiulian; Chao Shengwu; Zhang Peng; Fan Ruilin; Wang Li'na; Wang Lulu; Wang Ling; Li Bo; Chen Taotao

    2010-01-01

    Objective: To establish a selected acute pulmonary thromboembolism model in experimental sheep suitable for animal experiments. Methods: By using Seldinger's technique the catheter sheath was placed in both the femoral vein and femoral artery in ten sheep. Under C-arm DSA guidance the catheter was inserted through the catheter sheath into the pulmonary artery. Via the catheter an appropriate amount of sheep autologous blood clots was injected into the selected pulmonary arteries. The selected acute pulmonary thromboembolism model was thus established. Pulmonary angiography was performed to check the results. The pulmonary arterial pressure, femoral artery pressure, heart rate and partial pressure of oxygen in arterial blood (PaO2) were determined both before and after the treatment. The parameters obtained after the procedure were compared with those measured before the procedure, and the sheep model quality was evaluated. Results: The baseline pulmonary arterial pressure was (27.30 ± 9.58) mmHg, femoral artery pressure was (126.4 ± 13.72) mmHg, heart rate was (103 ± 15) bpm and PaO2 was (87.7 ± 12.04) mmHg. Sixty minutes after the injection of (30 ± 5) ml thrombotic agglomerates, the pulmonary arterial pressure rose to (52 ± 49) mmHg and the femoral artery pressure dropped to (100 ± 21) mmHg. The heart rate went up to (150 ± 26) bpm. The PaO2 fell to (25.3 ± 11.2) mmHg. After the procedure the above parameters were significantly different from those measured before the procedure in all ten animals (P < 0.01). The pulmonary arteriography clearly demonstrated that the selected pulmonary arteries were successfully embolized. Conclusion: The anatomy of the sheep's femoral veins, vena cava system, pulmonary artery and right heart system is suitable for the establishment of the catheter passage; for this reason, a selected acute pulmonary thromboembolism model can easily be created in experimental sheep. The technique is feasible and the model

  5. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

    As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is basic information for carrying out global warming research and for the sustainable use of land resources. Digital terrain attributes are often use...... was selected, a total of 2,514,820 data mining models were constructed from 71 different grid resolutions, from 12 m to 2304 m, and 22 attributes: 21 attributes derived from the DTM plus the original elevation. The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute...

  6. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages to the similar methodology for the proportional hazards model. One complication compared...... and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number...... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare...

  7. Optimal foraging in marine ecosystem models: selectivity, profitability and switching

    DEFF Research Database (Denmark)

    Visser, Andre W.; Fiksen, Ø.

    2013-01-01

    ecological mechanics and evolutionary logic as a solution to diet selection in ecosystem models. When a predator can consume a range of prey items it has to choose which foraging mode to use, which prey to ignore and which ones to pursue, and animals are known to be particularly skilled in adapting...... to the preference functions commonly used in models today. Indeed, depending on prey class resolution, optimal foraging can yield feeding rates that are considerably different from the ‘switching functions’ often applied in marine ecosystem models. Dietary inclusion is dictated by two optimality choices: 1...... by letting predators maximize energy intake or more properly, some measure of fitness where predation risk and cost are also included. An optimal foraging or fitness maximizing approach will give marine ecosystem models a sound principle to determine trophic interactions...

  8. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers four-stage cycle productivity, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and the model can be easily applied in both manufacturing and service industries.
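
    A minimal sketch of the kind of mixed integer program the abstract describes, assuming the open-source PuLP library; the technique names, gains, costs and budget are invented, and the real model has fifty-four techniques and a four-stage productivity cycle.

        from pulp import LpMaximize, LpProblem, LpVariable, lpSum

        # Invented data: productivity gain and cost per improvement technique.
        gains = {"5S": 4.0, "TPM": 6.5, "SMED": 5.2, "Kaizen": 3.1}
        costs = {"5S": 10, "TPM": 40, "SMED": 25, "Kaizen": 8}
        budget = 60

        x = {t: LpVariable(f"x_{t}", cat="Binary") for t in gains}
        prob = LpProblem("technique_selection", LpMaximize)
        prob += lpSum(gains[t] * x[t] for t in gains)            # total gain
        prob += lpSum(costs[t] * x[t] for t in gains) <= budget  # budget cap
        prob.solve()
        print("selected:", [t for t in gains if x[t].value() > 0.5])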

  9. DATA ANALYSIS BY FORMAL METHODS OF ESTIMATION OF INDEXES OF RATING CRITERION IN PROCESS OF ACCUMULATION OF DATA ABOUT WORKING OF THE TEACHING STAFF

    Directory of Open Access Journals (Sweden)

    Alexey E. Fedoseev

    2014-01-01

    Full Text Available The article considers the development of formal methods for assessing rating criterion indexes. It deals with a mathematical model that connects quantitative rating criterion characteristics, measured on various scales, with an intuitive idea of them. A solution to the problem of rating criterion estimation is proposed.

  10. Physical and Constructive (Limiting) Criterions of Gear Wheels Wear

    Science.gov (United States)

    Fedorov, S. V.

    2018-01-01

    We suggest using a generalized model of friction - the model of elastic-plastic deformation of a body element located on the surface of the friction pair. This model is based on our new engineering approach to the problem of friction - triboergodynamics. Friction is examined as a transformative and dissipative process. A structural-energetic interpretation of friction as a process of elasto-plastic deformation and fracture of contact volumes is proposed. The model of Hertzian (heavy-loaded) friction contact evolution is considered. The least-wear-particle principle is formulated: the least wear particle is the mechanical (nano) quantum. The mechanical quantum represents the smallest structural form of a solid body under friction. It is a dynamic oscillator of the dissipative friction structure and can be examined as the elementary nanostructure of a solid metal body. At friction, in the state of most complete evolution of the elementary tribosystem (tribocontact), all mechanical quanta (subtribosystems), with the exception of one, elastically and reversibly transform the energy of the outer impact (mechanical movement). In these terms only one mechanical quantum is lost - the standard of wear. From this position we consider the physical criterion of wear and the constructive (limiting) criterion of gear teeth, and other practical examples of tribosystem efficiency, with the new tribology notion - the mechanical (nano) quantum.

  11. A Criterion for Stability of Synchronization and Application to Coupled Chua's Systems

    International Nuclear Information System (INIS)

    Wang Haixia; Lu Qishao; Wang Qingyun

    2009-01-01

    We investigate synchronization in an array network of nearest-neighbor coupled chaotic oscillators. By using Lyapunov stability theory and matrix theory, a criterion for the stability of complete synchronization is deduced. Meanwhile, an estimate of the critical coupling strength is obtained to ensure the achievement of chaos synchronization. As an example application, a model of coupled Chua's circuits with linearly bidirectional coupling is studied to verify the validity of the criterion. (general)

  12. Nucleation of recrystallization in fine-grained materials: an extension of the Bailey-Hirsch criterion

    Science.gov (United States)

    Favre, Julien; Fabrègue, Damien; Chiba, Akihiko; Bréchet, Yves

    2013-11-01

    A new criterion for nucleation in the case of dynamic recrystallization is proposed in order to include the contribution of the grain boundary energy stored in the microstructure in the energy balance. Due to the nucleation events, the total surface area of pre-existing grain boundaries decreases, leading to a nucleus size smaller than expected by conventional nucleation criteria. The new model provides a better prediction of the nucleus size during recrystallization of pure copper compared with the conventional nucleation criterion.

  13. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  14. The alternative DSM-5 personality disorder traits criterion

    DEFF Research Database (Denmark)

    Bach, Bo; Maples-Keller, Jessica L; Bo, Sune

    2016-01-01

    The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013a) offers an alternative model for Personality Disorders (PDs) in Section III, which consists in part of a pathological personality traits criterion measured...... with the Personality Inventory for DSM-5 (PID-5). The PID-5 self-report instrument currently exists in the original 220-item form, a short 100-item form, and a brief 25-item form. For clinicians and researchers, the choice of a particular PID-5 form depends on feasibility, but also reliability and validity. The goal...

  15. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    Science.gov (United States)

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  16. Wind scatterometry with improved ambiguity selection and rain modeling

    Science.gov (United States)

    Draper, David Willis

    Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent wind than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous

  17. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that are supposed to be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  18. Novel global robust stability criterion for neural networks with delay

    International Nuclear Information System (INIS)

    Singh, Vimal

    2009-01-01

    A novel criterion for the global robust stability of Hopfield-type interval neural networks with delay is presented. An example illustrating the improvement of the present criterion over several recently reported criteria is given.

  19. Reactor instrumentation. Definition of the single failure criterion

    International Nuclear Information System (INIS)

    1980-12-01

    The standard defines the single failure criterion which is used in other IEC publications on reactor safety systems. The purpose of the single failure criterion is the assurance of minimum redundancy. (orig./HP) [de]

  20. Two novel synchronization criterions for a unified chaotic system

    International Nuclear Information System (INIS)

    Tao Chaohai; Xiong Hongxia; Hu Feng

    2006-01-01

    Two novel synchronization criteria are proposed in this paper, comprising drive-response synchronization and adaptive synchronization schemes. Moreover, these synchronization criteria can be applied to a large class of chaotic systems and are very useful for secure communication

  1. Systems interaction and single failure criterion

    International Nuclear Information System (INIS)

    1981-01-01

    This report documents the results of a six-month study to evaluate the ongoing research programs of the U.S. Nuclear Regulatory Commission (NRC) and U.S. commercial nuclear station owners which address the safety significance of systems interaction and the regulatory adequacy of the single failure criterion. The evaluation of system interactions provided is the initial phase of a more detailed study leading to the development and application of methodology for quantifying the relative safety of operating nuclear plants. (Auth.)

  2. The Goiania accident: release from hospital criterion

    International Nuclear Information System (INIS)

    Falcao, R.C.; Hunt, J.

    1990-01-01

    On the thirteenth of September 1987, a 1357 Ci cesium source was removed from the 'Instituto de Radiologia de Goiania'. Probably two or three days later the source was opened, causing the internal and external contamination of 247 people and of part of the city of Goiania. This paper describes the release-from-hospital criterion for the contaminated patients, based on radiation protection principles developed for this case. The estimate of the biological half-life of cesium is also described. (author) [pt]

  3. Early Stop Criterion from the Bootstrap Ensemble

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan; Fog, Torben L.

    1997-01-01

    This paper addresses the problem of generalization error estimation in neural networks. A new early-stop criterion based on a Bootstrap estimate of the generalization error is suggested. The estimate does not require the network to be trained to the minimum of the cost function, as required...... by other methods based on asymptotic theory. Moreover, in contrast to methods based on cross-validation, which require data to be left out for testing and thus bias the estimate, the Bootstrap technique does not have this disadvantage. The potential of the suggested technique is demonstrated on various time
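
    A rough sketch of the idea, with model complexity standing in for the training-time axis used in the paper: each bootstrap member is scored on the observations left out of its resample, and the complexity minimizing the averaged error marks the stopping point. Data, model family and sizes are invented.

        import numpy as np

        rng = np.random.default_rng(7)
        n, B = 100, 20
        x = rng.uniform(-1, 1, n)
        y = np.sin(3 * x) + rng.normal(scale=0.3, size=n)

        def boot_error(deg):
            """Bootstrap estimate of generalization error at one complexity."""
            errs = []
            for _ in range(B):
                idx = rng.integers(0, n, n)               # resample indices
                oob = np.setdiff1d(np.arange(n), idx)     # left-out points
                coef = np.polyfit(x[idx], y[idx], deg)
                errs.append(np.mean((np.polyval(coef, x[oob]) - y[oob]) ** 2))
            return np.mean(errs)

        best = min(range(1, 10), key=boot_error)
        print("stop at polynomial degree", best)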

  4. Multilevel selection in a resource-based model

    Science.gov (United States)

    Ferreira, Fernando Fagundes; Campos, Paulo R. A.

    2013-07-01

    In the present work we investigate the emergence of cooperation in a multilevel selection model that assumes limiting resources. Following the work by R. J. Requejo and J. Camacho [Phys. Rev. Lett. 108, 038701 (2012)], the interaction among individuals is initially ruled by a prisoner's dilemma (PD) game. The payoff matrix may change, influenced by the resource availability, and hence may also evolve to a non-PD game. Furthermore, one assumes that the population is divided into groups, whose local dynamics is driven by the payoff matrix, whereas an intergroup competition results from the nonuniformity of the growth rate of groups. We study the probability that a single cooperator can invade and establish in a population initially dominated by defectors. Cooperation is strongly favored when group sizes are small. We observe the existence of a critical group size beyond which cooperation becomes counterselected. Although the critical size depends on the parameters of the model, it is seen that a saturation value for the critical group size is achieved. The results conform to the thought that the evolutionary history of life repeatedly involved transitions from smaller selective units to larger selective units.

  5. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

    Full Text Available The article highlights the important task of project management in the reprofiling of buildings. In construction project management it is expedient to pay attention to selecting effective engineering solutions that reduce duration and cost. This article presents a methodology for the selection of efficient organizational and technical solutions for building reprofiling projects. The method is based on compiling project variants in Microsoft Project and on experimental statistical analysis using the program COMPEX. The introduction of this technique in the reprofiling of buildings allows efficient project models to be chosen, depending on the given constraints. This technique can also be used for various construction projects.

  6. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From a reliability point of view, it is desirable to select the wind turbine generator best suited to a site in order to obtain optimum reliability in power generation. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
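
    A sketch of the monthly-power building block: the expected output of a turbine whose power curve is defined by cut-in, rated and cut-out speeds, averaged over a Weibull wind-speed distribution. The Weibull parameters, power-curve shape and ratings are invented, not the paper's data.

        import numpy as np

        k, c = 2.0, 7.5                        # assumed Weibull shape, scale [m/s]
        v_ci, v_r, v_co = 3.0, 12.0, 25.0      # cut-in, rated, cut-out [m/s]
        P_r = 2.0                              # rated power [MW]

        def power_curve(v):
            ramp = P_r * (v**3 - v_ci**3) / (v_r**3 - v_ci**3)
            p = np.where((v >= v_ci) & (v < v_r), ramp, 0.0)
            return np.where((v >= v_r) & (v < v_co), P_r, p)

        v = np.linspace(0.0, 30.0, 3001)
        pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))
        mean_power = np.trapz(power_curve(v) * pdf, v)
        print(f"expected output {mean_power:.2f} MW, "
              f"capacity factor {mean_power / P_r:.1%}")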

  7. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    Science.gov (United States)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
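
    A toy sketch of rule-based endmember screening on surrogate rasters: hot candidates are hot, sparsely vegetated pixels and cold candidates are cool, densely vegetated pixels. The percentile cutoffs and the random scenes are invented; the paper's approach adds machine learning tools and search algorithms on real Landsat bands.

        import numpy as np

        rng = np.random.default_rng(3)
        lst = rng.normal(305.0, 5.0, (100, 100))    # surrogate LST scene [K]
        ndvi = rng.uniform(0.0, 0.9, (100, 100))    # surrogate NDVI scene

        hot = (lst > np.percentile(lst, 98)) & (ndvi < np.percentile(ndvi, 10))
        cold = (lst < np.percentile(lst, 2)) & (ndvi > np.percentile(ndvi, 90))

        print("hot candidates:", np.argwhere(hot)[:3])
        print("cold candidates:", np.argwhere(cold)[:3])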

  8. A proposed risk acceptance criterion for nuclear fuel waste disposal

    International Nuclear Information System (INIS)

    Mehta, K.

    1985-06-01

    The need to establish a radiological protection criterion that applies specifically to disposal of high-level nuclear fuel wastes arises from the difficulty of applying the present ICRP recommendations. These recommendations apply to situations in which radiological detriment can be actively controlled, while a permanent waste disposal facility is meant to operate without the need for corrective actions. Also, the risks associated with waste disposal depend on events and processes that have various probabilities of occurrence. In these circumstances, it is not suitable to apply standards that are based on a single dose limit as in the present ICRP recommendations, because it will generally be possible to envisage events, perhaps rare, that would lead to doses above any selected limit. To overcome these difficulties, it is proposed to base a criterion for acceptability on a set of dose values and corresponding limiting values of probabilities; this set of values constitutes a risk-limit line. A risk-limit line suitable for waste disposal is proposed that has characteristics consistent with the basic philosophy of the ICRP and UNSCEAR recommendations, and is based on levels of natural background radiation
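
    The sketch below shows how a risk-limit line can be applied mechanically: a scenario given by a (dose, probability) pair is acceptable when its probability falls below the line interpolated (log-log) between anchor points. The anchor values are invented for illustration and are not the proposed criterion.

        import numpy as np

        # Invented anchor points of a risk-limit line: allowed annual
        # probability as a function of dose [mSv].
        dose_pts = np.array([1.0, 10.0, 100.0, 1000.0])
        prob_limit = np.array([1e-1, 1e-2, 1e-4, 1e-6])

        def acceptable(dose, prob):
            limit = np.interp(np.log10(dose),
                              np.log10(dose_pts), np.log10(prob_limit))
            return np.log10(prob) <= limit

        print(acceptable(50.0, 1e-5))   # True: the scenario lies below the line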

  9. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we have considered the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and have discussed two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and the time spent on repairing the components are considered as two different objectives. The two models are formulated as multi-objective nonlinear programming problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership functions of each objective function, transform the membership functions into equivalent linear membership functions by a first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  10. Development of Solar Drying Model for Selected Cambodian Fish Species

    Science.gov (United States)

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    A solar drying was investigated as one of the perspective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. An average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferable fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing. PMID:25250381
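
    A sketch of the model-screening step on synthetic moisture-ratio data, assuming SciPy's curve_fit; two common thin-layer forms stand in for the candidate set, and the goodness-of-fit statistics match those named in the abstract.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0.0, 8.0, 17)          # drying time [h]
        rng = np.random.default_rng(5)
        mr_obs = np.exp(-0.45 * t) + rng.normal(0.0, 0.01, t.size)

        def logarithmic(t, a, k, c):           # Logarithmic model
            return a * np.exp(-k * t) + c

        def page(t, k, n):                     # Page-type model
            return np.exp(-k * t**n)

        for name, f, p0 in [("Logarithmic", logarithmic, (1.0, 0.5, 0.0)),
                            ("Page", page, (0.5, 1.0))]:
            popt, _ = curve_fit(f, t, mr_obs, p0=p0)
            resid = mr_obs - f(t, *popt)
            rmse = np.sqrt(np.mean(resid**2))
            r2 = 1.0 - resid @ resid / np.sum((mr_obs - mr_obs.mean())**2)
            print(f"{name}: RMSE={rmse:.4f}  R2={r2:.4f}")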

  11. Development of Solar Drying Model for Selected Cambodian Fish Species

    Directory of Open Access Journals (Sweden)

    Anna Hubackova

    2014-01-01

    Full Text Available A solar drying was investigated as one of the perspective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. An average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferable fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.

  12. Continuum model for chiral induced spin selectivity in helical molecules

    Energy Technology Data Exchange (ETDEWEB)

    Medina, Ernesto [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France); Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); González-Arraga, Luis A. [IMDEA Nanoscience, Cantoblanco, 28049 Madrid (Spain); Finkelstein-Shapiro, Daniel; Mujica, Vladimiro [Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); Berche, Bertrand [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France)

    2015-05-21

    A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well-oriented p_z-type orbitals on base molecules forming a staircase of definite chirality. In a tight-binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z − π_z coupling via interbase p_{x,y} − p_z hopping, introducing spin-coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.

  13. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  14. Selection Strategies for Social Influence in the Threshold Model

    Science.gov (United States)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
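
    A compact sketch of the Threshold model cascade with a degree-rank initiator strategy, assuming the networkx library; the graph, the uniform threshold and the seed budget are invented stand-ins for the paper's networks and threshold distributions.

        import networkx as nx

        def cascade(G, seeds, theta=0.5):
            """Run the Threshold model until no further node adopts."""
            active, changed = set(seeds), True
            while changed:
                changed = False
                for v in G:
                    if v in active or G.degree(v) == 0:
                        continue
                    frac = sum(u in active for u in G[v]) / G.degree(v)
                    if frac >= theta:          # adopt when the active-neighbor
                        active.add(v)          # fraction reaches the threshold
                        changed = True
            return active

        G = nx.barabasi_albert_graph(500, 3, seed=2)
        seeds = sorted(G, key=G.degree, reverse=True)[:50]  # degree-rank picks
        print("final adopters:", len(cascade(G, seeds)))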

  15. Use of the Niyama criterion to predict porosity of the mushy zone with deformation

    Directory of Open Access Journals (Sweden)

    S. Polyakov

    2011-10-01

    Full Text Available The article presents new results on the use of the Niyama criterion to estimate porosity appearance in castings under hindered shrinkage. The effect of deformation of the mushy zone on filtration is shown. A new form of the Niyama criterion, accounting for the hindered shrinkage and the range of deformation localization, has been obtained. The results of this study are illustrated by the example of the Niyama criterion calculated for Al-Cu alloys under different diffusion conditions of solidification and rates of deformation in the mushy zone. The derived equations can be used in a mathematical model of casting solidification as well as for interpretation of the simulation results of casting solidification under hindered shrinkage. The presented study resulted in a new procedure for using the Niyama criterion under mushy zone deformation.
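
    For orientation, the classical (unextended) Niyama criterion is Ny = G / sqrt(dT/dt); the sketch below flags nodes whose value falls under an assumed critical threshold. The nodal values and the threshold are invented.

        import numpy as np

        G = np.array([4.0, 2.5, 1.2, 0.8])         # thermal gradient [K/mm]
        cooling = np.array([1.0, 1.5, 2.0, 3.0])   # cooling rate dT/dt [K/s]

        ny = G / np.sqrt(cooling)                  # classical Niyama value
        threshold = 1.0                            # assumed critical value
        print("porosity-prone nodes:", np.nonzero(ny < threshold)[0])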

  16. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  17. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
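
    The following sketch conveys the flavor of the estimator with off-the-shelf scikit-learn pieces: a B-spline basis for the nonparametric component plus an L1 penalty that zeroes out irrelevant linear covariates. This is a stand-in, not the paper's polynomial-spline quasi-likelihood procedure with the nonconcave (oracle) penalty.

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.preprocessing import SplineTransformer

        rng = np.random.default_rng(4)
        n = 300
        x_np = rng.uniform(0.0, 1.0, (n, 1))  # covariate entering nonparametrically
        X_lin = rng.normal(size=(n, 5))       # candidate linear covariates
        y = (np.sin(2 * np.pi * x_np[:, 0]) + 1.5 * X_lin[:, 0]
             + rng.normal(0.0, 0.3, n))

        S = SplineTransformer(degree=3, n_knots=8).fit_transform(x_np)
        Z = np.hstack([S, X_lin])
        fit = Lasso(alpha=0.02).fit(Z, y)
        kept = np.nonzero(fit.coef_[S.shape[1]:])[0]
        print("selected linear covariates:", kept)   # expect index 0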

  18. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.

    2008-01-01

    log K_ow. These findings were validated with experimental results and by a comparison to the properties of antimalarial drugs in clinical use. For ten active compounds, nine were predicted to accumulate to a greater extent in lysosomes than in other organelles, six of these were in the optimum range...... predicted by the model and three were close. Five of the antimalarial drugs were lipophilic weak dibasic compounds. The predicted optimum properties for a selective accumulation of weak bivalent bases in lysosomes are consistent with experimental values and are more accurate than any prior calculation...

  19. Improving data analysis in herpetology: Using Akaike's information criterion (AIC) to assess the strength of biological hypotheses

    Science.gov (United States)

    Mazerolle, M.J.

    2006-01-01

    In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools have surfaced in the mid 1970's, they are still underutilized in certain fields, particularly in herpetology. This is the case of the Akaike information criterion (AIC), which is remarkably superior for model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework, with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
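
    A short sketch of the multimodel-inference arithmetic the paper advocates: convert AICs to Akaike weights, then model-average a parameter shared across the candidate models. The AIC values and estimates are invented.

        import numpy as np

        aic = np.array([210.3, 211.1, 215.8])    # AIC of each candidate model
        beta = np.array([0.42, 0.47, 0.30])      # estimate of a shared parameter

        delta = aic - aic.min()                  # AIC differences
        w = np.exp(-0.5 * delta)
        w /= w.sum()                             # Akaike weights sum to one
        print("weights:", np.round(w, 3))
        print("model-averaged estimate:", round(float(w @ beta), 3))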

  20. Novel criterion for formation of metastable phase from undercooled melt

    International Nuclear Information System (INIS)

    Kuribayashi, Kazuhiko; Nagashio, Kosuke; Niwata, Kenji; Kumar, M.S. Vijaya; Hibiya, Taketoshi

    2007-01-01

    Undercooling a melt facilitates the preferential nucleation of a metastable phase. In the present study, the formation of metastable phases from undercooled melts was considered from the viewpoint of the competitive nucleation criterion. Classical nucleation theory shows that the most critical factor for forming a critical nucleus is the interface free energy σ. Furthermore, Spaepen's negentropic model of σ introduced the role of the scaling factor α, which depends on the polyhedral order in the liquid and solid phases and is prominent in simple liquids such as melts of monoatomic metals. In ionic materials such as oxides, however, in which oxygen polyhedrons including a cation at their center are the structural units in both the solid and liquid phases, the entropy of fusion, rather than α, can be expected to become dominant in determining σ. In accordance with this idea, using REFeO3 as the model material (where RE denotes rare-earth elements), the entropy-undercooling regime criterion was proposed and verified

  1. A Neuronal Network Model for Pitch Selectivity and Representation.

    Science.gov (United States)

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding its cellular mechanism is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing-fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated ripple noise, and account for their multiple pitch perceptions.

  2. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of times the true ODE model is selected, the efficiency of the parameter estimation compared to simply using the full and true models, and the coverage probabilities of the estimated confidence intervals for ODE parameters, all of which show satisfactory performance. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system from among several well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.
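
    A minimal sketch of the LSA idea under stated assumptions: given the full-model estimate `theta_hat` and its covariance `V`, the quadratic (least squares) approximation of the likelihood is penalized with adaptive-lasso weights, and a column-scaling trick reduces the weighted problem to a standard lasso. Names and numbers are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lsa_adaptive_lasso(theta_hat, V, alpha=0.05):
    """Least squares approximation (LSA) + adaptive lasso for model selection."""
    L = np.linalg.cholesky(np.linalg.inv(V))  # (t-t_hat)' V^-1 (t-t_hat) = ||L.T @ (t-t_hat)||^2
    X, y = L.T, L.T @ theta_hat               # recast the quadratic form as a regression
    w = 1.0 / np.abs(theta_hat)               # adaptive weights from the full model
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(X / w, y)
    return fit.coef_ / w                      # exact zeros drop terms from the ODE model

theta_hat = np.array([1.2, -0.8, 0.02])      # illustrative full-model estimates
V = np.diag([0.01, 0.02, 0.05])              # illustrative covariance matrix
print(lsa_adaptive_lasso(theta_hat, V))      # the tiny third coefficient shrinks to zero
```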

  3. Fuel-pin cladding transient failure strain criterion

    International Nuclear Information System (INIS)

    Bard, F.E.; Duncan, D.R.; Hunter, C.W.

    1983-01-01

    A criterion for cladding failure based on accumulated strain was developed for mixed uranium-plutonium oxide fuel pins and used to interpret the calculated strain results from failed transient fuel pin experiments conducted in the Transient Reactor Test (TREAT) facility. The new STRAIN criterion replaced a stress-based criterion that depends on the DORN parameter and that incorrectly predicted fuel pin failure for transient tested fuel pins. This paper describes the STRAIN criterion and compares its predictions with those of the stress-based criterion.

  4. Acute leukemia classification by ensemble particle swarm model selection.

    Science.gov (United States)

    Escalante, Hugo Jair; Montes-y-Gómez, Manuel; González, Jesús A; Gómez-Gil, Pilar; Altamirano, Leopoldo; Reyes, Carlos A; Reta, Carolina; Rosales, Alejandro

    2012-07-01

    Acute leukemia is a malignant disease that affects a large proportion of the world population. Different types and subtypes of acute leukemia require different treatments. In order to assign the correct treatment, a physician must identify the leukemia type or subtype. Advanced and precise methods are available for identifying leukemia types, but they are very expensive and not available in most hospitals in developing countries. Thus, alternative methods have been proposed. An option explored in this paper is based on the morphological properties of bone marrow images, where features are extracted from medical images and standard machine learning techniques are used to build leukemia type classifiers. This paper studies the use of ensemble particle swarm model selection (EPSMS), an automated tool for the selection of classification models, in the context of acute leukemia classification. EPSMS is the application of particle swarm optimization to the exploration of the search space of ensembles that can be formed from heterogeneous classification models in a machine learning toolbox. EPSMS does not require prior domain knowledge and is able to select highly accurate classification models without user intervention. Furthermore, specific models can be used for different classification tasks. We report experimental results for acute leukemia classification with real data and show that EPSMS outperformed the best results obtained using manually designed classifiers with the same data. The highest performance using EPSMS was 97.68% for two-type classification problems and 94.21% for problems with more than two types. To the best of our knowledge, these are the best results reported for this data set. Compared with previous studies, these improvements were consistent among different type/subtype classification tasks, different features extracted from images, and different feature extraction regions. The performance improvements were statistically significant.

  5. Radial Domany-Kinzel models with mutation and selection

    Science.gov (United States)

    Lavrentovich, Maxim O.; Korolev, Kirill S.; Nelson, David R.

    2013-01-01

    We study the effect of spatial structure, genetic drift, mutation, and selective pressure on the evolutionary dynamics in a simplified model of asexual organisms colonizing a new territory. Under an appropriate coarse-graining, the evolutionary dynamics is related to the directed percolation processes that arise in voter models, the Domany-Kinzel (DK) model, contact process, and so on. We explore the differences between linear (flat front) expansions and the much less familiar radial (curved front) range expansions. For the radial expansion, we develop a generalized, off-lattice DK model that minimizes otherwise persistent lattice artifacts. With both simulations and analytical techniques, we study the survival probability of advantageous mutants, the spatial correlations between domains of neutral strains, and the dynamics of populations with deleterious mutations. “Inflation” at the frontier leads to striking differences between radial and linear expansions. For a colony with initial radius R0 expanding at velocity v, significant genetic demixing, caused by local genetic drift, occurs only up to a finite time t*=R0/v, after which portions of the colony become causally disconnected due to the inflating perimeter of the expanding front. As a result, the effect of a selective advantage is amplified relative to genetic drift, increasing the survival probability of advantageous mutants. Inflation also modifies the underlying directed percolation transition, introducing novel scaling functions and modifications similar to a finite-size effect. Finally, we consider radial range expansions with deflating perimeters, as might arise from colonization initiated along the shores of an island.

  6. Developing a conceptual model for selecting and evaluating online markets

    Directory of Open Access Journals (Sweden)

    Sadegh Feizollahi

    2013-04-01

    Full Text Available There is ample evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy explicit and implicit customer requirements. Internet shopping is a concept that developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivation and information. However, with the development of new technologies, credit and financial exchange facilities were built into websites to facilitate e-commerce. The proposed study sent a total of 200 questionnaires to the target group (teachers, students, professionals and managers of commercial web sites) and collected 130 questionnaires for final evaluation. Cronbach's alpha is used to measure reliability, and to evaluate the validity of the measurement instruments (questionnaires) and assure construct validity, confirmatory factor analysis is employed. In addition, the research questions are analyzed with the path analysis method to determine market selection models. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online marketing in Iran. These findings provide a consistent, targeted and holistic framework for the development of the Internet market in the country.

  7. Ensemble Prediction Model with Expert Selection for Electricity Price Forecasting

    Directory of Open Access Journals (Sweden)

    Bijay Neupane

    2017-01-01

    Full Text Available Forecasting of electricity prices is important in deregulated electricity markets for all of the stakeholders: energy wholesalers, traders, retailers and consumers. Electricity price forecasting is an inherently difficult problem due to its special characteristics of dynamicity and non-stationarity. In this paper, we present a robust price forecasting mechanism that shows resilience towards the aggregate demand response effect and provides highly accurate forecasted electricity prices to the stakeholders in a dynamic environment. We employ an ensemble prediction model in which a group of different algorithms participates in forecasting the price 1 h ahead for each hour of a day. We propose two different strategies, namely, the Fixed Weight Method (FWM) and the Varying Weight Method (VWM), for selecting each hour's expert algorithm from the set of participating algorithms. In addition, we utilize a carefully engineered set of features selected from a pool of features extracted from past electricity price data, weather data and calendar data. The proposed ensemble model offers better results than the Autoregressive Integrated Moving Average (ARIMA) method, the Pattern Sequence-based Forecasting (PSF) method and our previous work using Artificial Neural Networks (ANN) alone on datasets for the New York, Australian and Spanish electricity markets.
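
    A toy sketch of the fixed-weight idea (not the authors' implementation): for each hour of the day, select the algorithm with the lowest historical error on that hour and use it as that hour's expert. All arrays are synthetic and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# errors[a, d, h]: absolute forecast error of algorithm a on day d, hour h
errors = rng.gamma(2.0, 1.0, size=(4, 30, 24))   # 4 algorithms, 30 training days

# FWM-style expert selection: one fixed expert per hour, chosen on training error.
mean_err_per_hour = errors.mean(axis=1)          # shape (4, 24)
expert_for_hour = mean_err_per_hour.argmin(axis=0)

# At prediction time, hour h's forecast comes from its selected expert.
forecasts = rng.normal(50.0, 5.0, size=(4, 24))  # candidate forecasts per algorithm
ensemble = forecasts[expert_for_hour, np.arange(24)]
print(expert_for_hour, ensemble[:4])
```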

  8. A Network Analysis Model for Selecting Sustainable Technology

    Directory of Open Access Journals (Sweden)

    Sangsung Park

    2015-09-01

    Full Text Available Most companies develop technologies to improve their competitiveness in the marketplace. Typically, they then patent these technologies around the world in order to protect their intellectual property. Other companies may use patented technologies to develop new products, but must pay royalties to the patent holders or owners. Should they fail to do so, this can result in legal disputes in the form of patent infringement actions between companies. To avoid such situations, companies attempt to research and develop necessary technologies before their competitors do so. An important part of this process is analyzing existing patent documents in order to identify emerging technologies. In such analyses, extracting sustainable technology from patent data is important, because sustainable technology drives technological competition among companies and, thus, the development of new technologies. In addition, selecting sustainable technologies makes it possible to plan their R&D (research and development) efficiently. In this study, we propose a network model that can be used to select sustainable technologies from patent documents, based on the centrality and degree measures of social network analysis. To verify the performance of the proposed model, we carry out a case study using actual patent data from patent databases.
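
    A minimal sketch of the network idea using networkx: build a keyword co-occurrence graph and rank keywords by degree centrality as a simple proxy for technological centrality. The edge list is invented for illustration, not taken from actual patent data.

```python
import networkx as nx

# Hypothetical keyword co-occurrence edges extracted from patent documents.
edges = [
    ("battery", "electrode"), ("battery", "lithium"), ("electrode", "lithium"),
    ("solar", "cell"), ("solar", "inverter"), ("battery", "cell"),
]
G = nx.Graph(edges)

# Degree centrality: keywords that co-occur with many others are candidates
# for central ("sustainable") technologies driving further development.
ranking = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])
print(ranking)
```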

  9. Cliff-edge model of obstetric selection in humans.

    Science.gov (United States)

    Mitteroecker, Philipp; Huttegger, Simon M; Fischer, Barbara; Pavlicev, Mihaela

    2016-12-20

    The strikingly high incidence of obstructed labor due to the disproportion of fetal size and the mother's pelvic dimensions has puzzled evolutionary scientists for decades. Here we propose that these high rates are a direct consequence of the distinct characteristics of human obstetric selection. Neonatal size relative to the birth-relevant maternal dimensions is highly variable and positively associated with reproductive success until it reaches a critical value, beyond which natural delivery becomes impossible. As a consequence, the symmetric phenotype distribution cannot match the highly asymmetric, cliff-edged fitness distribution well: The optimal phenotype distribution that maximizes population mean fitness entails a fraction of individuals falling beyond the "fitness edge" (i.e., those with fetopelvic disproportion). Using a simple mathematical model, we show that weak directional selection for a large neonate, a narrow pelvic canal, or both is sufficient to account for the considerable incidence of fetopelvic disproportion. Based on this model, we predict that the regular use of Caesarean sections throughout the last decades has led to an evolutionary increase of fetopelvic disproportion rates by 10 to 20%.
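
    The cliff-edge logic can be illustrated numerically. With a Gaussian phenotype distribution and a fitness that increases with relative neonatal size but drops to zero past the disproportion threshold, the mean that maximizes population mean fitness sits close enough to the edge that a nonzero fraction of births falls beyond it. The sketch below uses the closed form E[X·1{X <= a}] = mu*Phi(alpha) - sigma*phi(alpha) with alpha = (a - mu)/sigma; all numbers are illustrative, not the paper's parameterization.

```python
from scipy import stats
from scipy.optimize import minimize_scalar

edge, sigma = 1.0, 0.12   # disproportion threshold and phenotype SD (illustrative)

def mean_fitness(mu):
    """Expected fitness when fitness is proportional to size below the edge, 0 above."""
    a = (edge - mu) / sigma
    return mu * stats.norm.cdf(a) - sigma * stats.norm.pdf(a)

opt = minimize_scalar(lambda mu: -mean_fitness(mu), bounds=(0.3, edge), method="bounded")
mu_star = opt.x
p_disproportion = 1.0 - stats.norm.cdf(edge, mu_star, sigma)
print(mu_star, p_disproportion)   # optimal mean lies below the edge; a few % fall beyond
```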

  10. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize such processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the CP direction of one pixel minority value among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded image CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion, in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with real signature test targets, and conventional methods for the more linear part (displaying). The application to

  11. Application of the single failure criterion

    International Nuclear Information System (INIS)

    1990-01-01

    In order to present further details on the application and interpretation and on the limitations of individual concepts in the NUSS Codes and Safety Guides, a series of Safety Practice publications has been initiated. It is hoped that many Member States will be able to benefit from the experience presented in these books. The present publication will be useful not only to regulators but also to designers and could be particularly helpful in the interpretation of cases which fall on the borderline between the two areas. It should assist in clarifying, by way of examples, many of the concepts and implementation methods. It also describes some of the limitations involved. The book addresses a specialized topic and it is recommended that it be used together with the other books in the Safety Series. During the development of this publication the actual practices of all countries with major reactor programmes have been taken into account. An interpretation of the relevant text of the Design Code is given in the light of these national practices. The criterion is put into perspective with the general reliability requirements in which it is also embedded in the Design Code. Its relation to common cause and other multiple failure cases and also to the temporary disengagement of components in systems important to safety is clarified. Its use and its limitations are thus explained in the context of reliability targets for systems performance. The guidance provided applies to all reactor systems and would be applicable even to systems not in nuclear power plants. But since this publication was developed to give an interpretation of a specific requirement of the Design Code, the broader applicability is not explicitly claimed. The Design Code lists three cases for which compliance with the criterion may not be justified. The present publication assists in the more precise and practical identification of those cases. 9 figs, 1 tab

  12. Addressing selected problems of the modelling of digital control systems

    International Nuclear Information System (INIS)

    Sedlak, J.

    2004-12-01

    The introduction of digital systems to practical activities at nuclear power plants brings about new requirements for their modelling for the purposes of reliability analyses required for plant licensing as well as for inclusion into PSA studies and subsequent use in applications for the assessment of events, limits and conditions, and risk monitoring. It is very important to assess, both qualitatively and quantitatively, the effect of this change on operational safety. The report describes selected specific features of reliability analysis of digital system and recommends methodological procedures. The chapters of the report are as follows: (1) Flexibility and multifunctionality of the system. (2) General framework of reliability analyses (Understanding the system; Qualitative analysis; Quantitative analysis; Assessment of results, comparison against criteria; Documenting system reliability analyses; Asking for comments and their evaluation); and (3) Suitable reliability models (Reliability models of basic events; Monitored components with repair immediately following defect or failure; Periodically tested components; Constant unavailability (probability of failure to demand); Application of reliability models for electronic components; Example of failure rate decomposition; Example modified for diagnosis successfulness; Transfer of reliability analyses to PSA; Common cause failures - CCF; Software backup and CCF type failures, software versus hardware). (P.A.)
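
    As context for the reliability models the report enumerates, a minimal sketch of the textbook mean unavailability of a periodically tested component (failure rate λ, test interval T); this is a generic illustration, not the report's own model.

```python
import numpy as np

def mean_unavailability(lam, T):
    """Mean unavailability over a test interval T for constant failure rate lam.
    Exact: 1 - (1 - exp(-lam*T)) / (lam*T); approx lam*T/2 when lam*T << 1."""
    return 1.0 - (1.0 - np.exp(-lam * T)) / (lam * T)

lam, T = 1.0e-5, 720.0   # per-hour failure rate, monthly test interval (illustrative)
print(mean_unavailability(lam, T), lam * T / 2.0)   # exact vs. small-lam*T approximation
```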

  13. A CONCEPTUAL MODEL FOR IMPROVED PROJECT SELECTION AND PRIORITISATION

    Directory of Open Access Journals (Sweden)

    P. J. Viljoen

    2012-01-01

    Full Text Available

    Project portfolio management processes are often designed and operated as a series of stages (or project phases) and gates. However, the flow of such a process is often slow, characterised by queues waiting for a gate decision and by repeated work from previous stages waiting for additional information or for re-processing. In this paper the authors propose a conceptual model that applies supply chain and constraint management principles to the project portfolio management process. An advantage of the proposed model is that it provides the ability to select and prioritise projects without undue changes to project schedules. This should result in faster flow through the system.

  14. Relationships between Classroom Schedule Types and Performance on the Algebra I Criterion-Referenced Test

    Science.gov (United States)

    Murray, Gregory V.; Moyer-Packenham, Patricia S.

    2014-01-01

    One option for length of individual mathematics class periods is the schedule type selected for Algebra I classes. This study examined the relationship between student achievement, as indicated by Algebra I Criterion-Referenced Test scores, and the schedule type for Algebra I classes. Data obtained from the Utah State Office of Education included…

  15. Zero mass field quantization and Kibble's long-range force criterion for the Goldstone theorem

    International Nuclear Information System (INIS)

    Wright, S.H.

    1981-01-01

    The central theme of the dissertation is an investigation of the long-range force criterion used by Kibble in his discussion of the Goldstone Theorem. This investigation is broken up into the following sections: I. Introduction. Spontaneous symmetry breaking, the Goldstone Theorem and the conditions under which it holds are discussed. II. Massless Wave Expansions. In order to make explicit calculations of the operator commutators used in applying Kibble's criterion, it is necessary to work out the operator expansions for a massless field. Unusual results are obtained which include operators corresponding to classical macroscopic field modes. III. The Kibble Criterion for Simple Models Exhibiting Spontaneously Broken Symmetries. The results of the previous section are applied to simple models with spontaneously broken symmetries, namely, the real scalar massless field and the Goldstone model without gauge coupling. IV. The Higgs Mechanism in Classical Field Theory. It is shown that the Higgs Mechanism has a simple interpretation in terms of classical field theory, namely, that it arises from a derivative coupling term between the Goldstone fields and the gauge fields. V. The Higgs Mechanism and Kibble's Criterion. This section draws together the material discussed in sections II to IV. Explicit calculations are made to evaluate Kibble's criterion on a Goldstone-Higgs type of model in the Coulomb gauge. It is found, as expected, that the criterion is not met, but not for reasons relating to the range of the mediating force. By referring to the findings of sections III and IV, it is concluded that the common denominator underlying both the Higgs Mechanism and the failure of Kibble's criterion is a structural aspect of the field equations: derivative coupling between fields

  16. Impact of selected troposphere models on Precise Point Positioning convergence

    Science.gov (United States)

    Kalita, Jakub; Rzepecka, Zofia

    2016-04-01

    The Precise Point Positioning (PPP) absolute method is currently being intensively investigated in order to reach fast convergence times. Among the various sources that influence the convergence of PPP, the tropospheric delay is one of the most important. Numerous models of tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between nominal and final value to the estimation process. Here we present a comparison of several PPP result sets, each of which is based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to allow comparison of the impact of the applied nominal values. The worst case initiates the zenith wet delay with a zero value (ZERO-WET). The impact of any model of tropospheric nominal values should fall between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region from the year 2014. For the purpose of this study, several days with the most active troposphere were selected for each of the stations. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1 hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%. In most of the cases all solutions converge to the same values during first

  17. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.
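
    A minimal sketch of coupling a mixed-effects fit with an AIC-based selection step using statsmodels; the synthetic data, formulas and column names are hypothetical, and both candidates share the same random-effects structure, so any constant offset cancels in the AIC comparison.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_time = 40, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_time),
    "time": np.tile(np.arange(n_time), n_subj),
    "treat": np.repeat(rng.integers(0, 2, n_subj), n_time),
})
df["y"] = 0.5 * df["treat"] * df["time"] + rng.normal(0.0, 1.0, len(df))

candidates = ["y ~ time + treat", "y ~ time * treat"]

def aic(res):
    # Simple AIC from the ML log-likelihood and the number of estimated parameters.
    return -2.0 * res.llf + 2.0 * res.params.size

fits = {f: smf.mixedlm(f, df, groups=df["subject"]).fit(reml=False) for f in candidates}
best = min(fits, key=lambda f: aic(fits[f]))
print(best)
```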

  18. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method used to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is obtained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution of a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers wide. In turn, the semi-analytical thermal model allows for a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
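
    A minimal sketch of the superposition building block: the 2-D Green's function of an instantaneous line heat source, doubled with an image source to satisfy an adiabatic free surface of the semi-infinite body. The material properties and source strength are illustrative, and this is an assumption-level reading of the approach, not the paper's actual code.

```python
import numpy as np

def line_source_dT(E, k, alpha, r, t):
    """Temperature rise of an instantaneous line source of energy E per unit
    length in an infinite medium, at distance r and time t (2-D heat kernel)."""
    return E / (4.0 * np.pi * k * t) * np.exp(-r**2 / (4.0 * alpha * t))

def semi_infinite_dT(E, k, alpha, y, z, y0, z0, t):
    """Adiabatic surface at z = 0 handled by adding the mirror-image source."""
    r_direct = np.hypot(y - y0, z - z0)
    r_image = np.hypot(y - y0, z + z0)
    return line_source_dT(E, k, alpha, r_direct, t) + line_source_dT(E, k, alpha, r_image, t)

# Illustrative steel-like values: k [W/(m K)], alpha [m^2/s], E [J/m].
print(semi_infinite_dT(E=50.0, k=30.0, alpha=8.0e-6, y=0.0, z=40e-6, y0=0.0, z0=30e-6, t=1e-4))
```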

  19. Multiaxial fatigue criterion based on parameters from torsion and axial S-N curve

    Directory of Open Access Journals (Sweden)

    M. Margetin

    2016-07-01

    Full Text Available Multiaxial high cycle fatigue is a topic that concerns nearly all industrial domains. In recent years, many recommendations on how to address problems with multiaxial fatigue lifetime estimation have been made, and considerable progress in the field has been achieved. Until now, however, no universal criterion for multiaxial fatigue has been proposed. Addressing this situation, this paper proposes a new multiaxial criterion for high cycle fatigue. This criterion is based on a critical plane search. The damage parameter consists of a combination of normal and shear stresses on the critical plane (the plane with maximal shear stress amplitude). The material parameters used in the proposed criterion are obtained from torsion and axial S-N curves. The proposed criterion correctly calculates lifetime for the boundary loading conditions (pure torsion and pure axial loading). Application of the proposed model is demonstrated on biaxial loading, and the results are verified with a testing program using specimens made from S355 steel. Fatigue material parameters for the proposed criterion and multiple sets of data for different combinations of axial and torsional loading were obtained during the experiment.
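
    A minimal sketch of a critical-plane search for a plane-stress history: sweep candidate plane angles, evaluate the normal and shear stress on each plane via the standard transformation equations, and keep the plane with the maximal shear stress amplitude. The damage parameter shown is a generic normal-plus-shear combination with an illustrative constant, not the authors' exact formulation.

```python
import numpy as np

def critical_plane(sx, sy, txy, k=0.3):
    """sx, sy, txy: time histories of in-plane stresses. Returns the critical
    plane angle (degrees) and a generic normal+shear damage parameter."""
    best = None
    for th in np.linspace(0.0, np.pi, 180, endpoint=False):
        c, s = np.cos(th), np.sin(th)
        sn = sx * c**2 + sy * s**2 + 2.0 * txy * s * c   # normal stress on the plane
        tau = -(sx - sy) * s * c + txy * (c**2 - s**2)   # shear stress on the plane
        tau_a = 0.5 * (tau.max() - tau.min())            # shear stress amplitude
        if best is None or tau_a > best[1]:
            best = (th, tau_a, sn.max())
    th, tau_a, sn_max = best
    return np.degrees(th), tau_a + k * sn_max

t = np.linspace(0.0, 2.0 * np.pi, 200)
print(critical_plane(100.0 * np.sin(t), np.zeros_like(t), 50.0 * np.cos(t)))
```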

  20. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

    Full Text Available Abstract Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have
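
    A minimal sketch of the first two stages (univariate logistic regressions ranked by AIC, then bootstrap selection frequency); the predictor names and the synthetic data are hypothetical, and the Bayesian geo-statistical stage is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariate_aic_rank(X, y):
    """Rank candidate predictors by the AIC of univariate logistic regressions."""
    aics = {c: sm.Logit(y, sm.add_constant(X[[c]])).fit(disp=0).aic for c in X.columns}
    return sorted(aics, key=aics.get)   # lowest AIC first

def bootstrap_selection_freq(X, y, n_boot=100, rng=None):
    """How often each variable wins the (here: single-step) AIC selection
    on bootstrap resamples of the data."""
    rng = rng or np.random.default_rng(0)
    counts = dict.fromkeys(X.columns, 0)
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        counts[univariate_aic_rank(X.iloc[idx], y.iloc[idx])[0]] += 1
    return counts

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 4)), columns=["rain", "temp", "alt", "ndvi"])
y = pd.Series(((X["rain"] + 0.5 * X["temp"] + rng.normal(size=300)) > 0).astype(int))
print(univariate_aic_rank(X, y))
print(bootstrap_selection_freq(X, y, n_boot=50))
```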

  1. Modeling heat stress effect on Holstein cows under hot and dry conditions: selection tools.

    Science.gov (United States)

    Carabaño, M J; Bachagha, K; Ramón, M; Díaz, C

    2014-12-01

    component, a constant term that is not affected by temperature, representing from 64% of the variation for SCS to 91% of the variation for milk. The second component, showing a flat pattern at intermediate temperatures and increasing or decreasing slopes for the extremes, gathered 15, 11, and 24% of the variation for fat and protein yield and SCS, respectively. This component could be further evaluated as a selection criterion for heat tolerance independently of the production level. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  2. The limits of the Bohm criterion in collisional plasmas

    International Nuclear Information System (INIS)

    Valentini, H.-B.; Kaiser, D.

    2015-01-01

    The sheath formation within a low-pressure collisional plasma is analysed by means of a two-fluid model. The Bohm criterion takes into account the effects of the electric field and the inertia of the ions. Numerical results show that these effects contribute to the space charge formation only if the collisionality is below a relatively small threshold. It follows that a lower and an upper limit of the drift speed of the ions exist between which the effects treated by Bohm can form a sheath. This interval becomes narrower as the collisionality increases and vanishes at the mentioned threshold. Above the threshold, the sheath is mainly created by collisions and ionisation. Under these conditions, the sheath formation cannot be described by means of Bohm-like criteria. In a few references, a so-called upper limit of the Bohm criterion is stated for collisional plasmas in which only the momentum equation of the ions is taken into account. However, the present paper shows that this limit results in an unrealistically steep increase of the space charge density towards the wall, and therefore it yields no useful limit on the Bohm velocity.

  3. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle everything from imperfect statistical assumptions to cases with no a priori statistical assumptions. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential. (paper)

  4. Multicriteria decision group model for the selection of suppliers

    Directory of Open Access Journals (Sweden)

    Luciana Hazin Alencar

    2008-08-01

    Full Text Available Several authors have been studying group decision making over the years, which indicates how relevant it is. This paper presents a multicriteria group decision model based on the ELECTRE IV and VIP Analysis methods, for those cases where there is great divergence among the decision makers. The model includes two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using the criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of the selection of suppliers in project management is used. The suppliers that form part of the project team have a crucial role in project management. They are involved in a network of connected activities that can jeopardize the success of the project if they are not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project, on behalf of an enterprise, in a way that addresses the multiple objectives of the decision makers.

  5. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. CFS, in contrast, evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
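
    A minimal sketch comparing a filter score (information gain approximated by mutual information) with embedded Random Forest importances on a synthetic presence/absence dataset; the data are a stand-in for the permafrost evidence, not the study's inputs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for a permafrost presence/absence dataset (20-25 features).
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

ig_scores = mutual_info_classif(X, y, random_state=0)        # filter-style ranking
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
rf_scores = rf.feature_importances_                          # embedded ranking

# Features ranked low by both methods are candidates for removal.
print(np.argsort(ig_scores)[::-1][:5])
print(np.argsort(rf_scores)[::-1][:5])
```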

  6. A Model for Selection of Eyespots on Butterfly Wings.

    Science.gov (United States)

    Sekimura, Toshio; Venkataraman, Chandrasekhar; Madzvamuse, Anotida

    2015-01-01

    The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why a certain wing cell develops an eyespot while other wing cells do not. We illustrate that the key to understanding focus point selection may be in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model posed in the interior of each wing cell that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. In order to complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating focus point distributions observed in

  8. Geoscience Education and Public Outreach AND CRITERION 2: MAKING A BROADER IMPACT

    Science.gov (United States)

    Marlino, M.; Scotchmoor, J. G.

    2005-12-01

    The geosciences influence our daily lives and yet often go unnoticed by the general public. From the moment we listen to the weather report and fill up our cars for the daily commute, until we return to our homes constructed from natural resources, we rely on years of scientific research. The challenge facing the geosciences is to make explicit to the public not only the criticality of the research whose benefits they enjoy, but also to actively engage them as partners in the research effort, by providing them with sufficient understanding of the scientific enterprise so that they become thoughtful and proactive when making decisions in the polling booth. Today, there is broad recognition within the science and policy community that communication needs to be more effective and more visible, and that public communication of the scientific enterprise is critical not only to its taxpayer support, but also to the maintenance of a skilled workforce and the standard of living expected by many Americans. In 1997, the National Science Board took the first critical step in creating a cultural change in the scientific community by requiring explicit consideration of the broader impacts of research in every research proposal. The so-called Criterion 2 has catalyzed a dramatic shift in expectations within the geoscience community and created an incentive to encourage the science research community to select education and public outreach as a venue for responding to it. In response, a workshop organized by the University of California Museum of Paleontology and the Digital Library for Earth System Education (DLESE) was held on the Berkeley campus May 11-13, 2005. The Geoscience EPO Workshop purposefully narrowed its focus to education and public outreach. This workshop was based on the premise that there are proven models and best practices for effective outreach strategies that need to be identified and shared with research scientists. Workshop

  9. Multiphysics modeling of selective laser sintering/melting

    Science.gov (United States)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net shape parts with complicated geometries. In SLS/SLM parts are built up layer-by-layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three dimensional, reduced order, coupled discrete element - finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon

  10. Mutation-selection models of codon substitution and their use to estimate selective strengths on codon usage

    DEFF Research Database (Denmark)

    Yang, Ziheng; Nielsen, Rasmus

    2008-01-01

    Current models of codon substitution are formulated at the level of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we implement a few population genetics models of codon substitution that explicitly consider mutation bias and natural selection at the DNA level. Selection on codon usage is modeled by introducing codon-fitness parameters which, together with mutation-bias parameters, predict optimal codon frequencies, and the models are used to estimate selective strengths on codon usage in mammals. Estimates of selection coefficients nevertheless suggest that selection on codon usage is weak and most mutations are nearly neutral. The sensitivity of the analysis to the assumed mutation model is discussed.

  11. Discussion on verification criterion and method of human factors engineering for nuclear power plant controller

    International Nuclear Information System (INIS)

    Yang Hualong; Liu Yanzi; Jia Ming; Huang Weijun

    2014-01-01

    In order to prevent or reduce human error and ensure the safe operation of nuclear power plants, control devices should be verified from the perspective of human factors engineering (HFE). In this paper, domestic and international human factors engineering guidelines for nuclear power plant controllers were considered, the verification criteria and methods of human factors engineering for nuclear power plant controllers were discussed, and application examples were provided for reference. The results show that appropriate verification criteria and methods should be selected to ensure the objectivity and accuracy of the conclusions. (authors)

  12. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
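
    Since Hyperopt is itself a Python library, its basic usage can be shown directly; the following is a minimal sketch with a toy objective, and the search space and settings are illustrative rather than taken from the paper's benchmarks.

```python
from hyperopt import fmin, tpe, hp

# Hyperopt minimizes the value returned by the objective over the search space.
def objective(params):
    x, kind = params["x"], params["kind"]
    penalty = 0.0 if kind == "quadratic" else 1.0
    return (x - 2.0) ** 2 + penalty

space = {
    "x": hp.uniform("x", -10.0, 10.0),              # continuous hyperparameter
    "kind": hp.choice("kind", ["quadratic", "shifted"]),
}

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)
print(best)   # e.g. {'kind': 0, 'x': 2.003} -- hp.choice reports the chosen index
```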

  13. Model catalysis by size-selected cluster deposition

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Scott [Univ. of Utah, Salt Lake City, UT (United States)

    2015-11-20

    This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.

  14. Quantitative genetic models of sexual selection by male choice.

    Science.gov (United States)

    Nakahashi, Wataru

    2008-09-01

    There are many examples of male mate choice for female traits that tend to be associated with high fertility. I develop quantitative genetic models of a female trait and a male preference to show when such a male preference can evolve. I find that a disagreement between the fertility maximum and the viability maximum of the female trait is necessary for directional male preference (preference for extreme female trait values) to evolve. Moreover, when there is a shortage of available male partners or variance in male nongenetic quality, strong male preference can evolve. Furthermore, I also show that males evolve to exhibit a stronger preference for females that are more feminine (less resemblance to males) than the average female when there is a sexual dimorphism caused by fertility selection which acts only on females.

  15. Analytical Modelling Of Milling For Tool Design And Selection

    International Nuclear Information System (INIS)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-01-01

    This paper presents an efficient analytical model which allows the simulation of a wide range of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach to oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  16. Selection Ideal Coal Suppliers of Thermal Power Plants Using the Matter-Element Extension Model with Integrated Empowerment Method for Sustainability

    Directory of Open Access Journals (Sweden)

    Zhongfu Tan

    2014-01-01

    Full Text Available In order to reduce thermal power generation costs and improve market competitiveness, and considering fuel quality, cost, creditworthiness and sustainable development capacity factors, this paper establishes an evaluation system for coal supplier selection for thermal power and puts forward coal supplier selection strategies based on integrated empowerment and ideal matter-element extension models. On the one hand, the integrated empowerment model can overcome the limitations of purely subjective and objective methods of determining weights, better balancing subjective and objective information. On the other hand, since the evaluation results of the traditional matter-element extension model may fall into the same class, yielding only a partial ordering of the alternatives, an idealistic matter-element extension model is constructed to overcome this shortcoming. It selects the ideal positive and negative matter-element classical fields and uses the closeness degree in place of the traditional maximum degree of membership criterion, calculating the positive or negative distance between the matter-element to be evaluated and the ideal matter-element; it can thus produce a full ordering of the evaluation schemes. Simulated and compared with the TOPSIS, Romania selection and PROMETHEE methods, the numerical example results show that the method put forward in this paper is effective and reliable.

  17. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating the models, the software can automatically fit all of them to the data, when data are available, and provides a ranking for model selection. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software.
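
    The fit-and-rank step that ModelMage delegates to COPASI can be illustrated with a minimal, self-contained sketch: several candidate models (as if generated from one master model) are fitted to the same data and ranked by the Akaike information criterion. All data, candidate names and parameter values below are invented for illustration and do not reproduce ModelMage's actual SBML workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "measurements" standing in for experimental data.
rng = np.random.default_rng(0)
x = np.linspace(0.1, 10, 40)
y = 2.0 * x / (1.5 + x) + rng.normal(0, 0.05, x.size)

# Candidate models, as if derived from one master model by dropping terms.
candidates = {
    "constant":         (lambda x, a: np.full_like(x, a), 1),
    "linear":           (lambda x, a, b: a + b * x, 2),
    "michaelis_menten": (lambda x, vmax, km: vmax * x / (km + x), 2),
}

def aic(rss, n, k):
    # Gaussian-error AIC, up to an additive constant shared by all models.
    return n * np.log(rss / n) + 2 * k

ranking = []
for name, (f, k) in candidates.items():
    popt, _ = curve_fit(f, x, y, p0=np.ones(k), maxfev=10000)
    rss = np.sum((y - f(x, *popt)) ** 2)
    ranking.append((aic(rss, x.size, k), name))

for score, name in sorted(ranking):   # lowest AIC first
    print(f"{name:18s} AIC = {score:8.2f}")
```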

  18. Developing a Green Supplier Selection Model by Using the DANP with VIKOR

    Directory of Open Access Journals (Sweden)

    Tsai Chi Kuo

    2015-02-01

    Full Text Available This study proposes a novel hybrid multiple-criteria decision-making (MCDM) method to evaluate green suppliers in an electronics company. Seventeen criteria in two dimensions concerning environmental and management systems were identified under the Code of Conduct of the Electronic Industry Citizenship Coalition (EICC). Following this, the Decision-Making Trial and Evaluation Laboratory (DEMATEL) technique combined with the Analytic Network Process (ANP), known as DANP, was used to determine both the importance of the evaluation criteria for selecting suppliers and the causal relationships between them. Finally, the VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method was used to evaluate the environmental performance of the suppliers and to obtain a solution under each evaluation criterion. An illustrative example of an electronics company was presented to demonstrate how to select green suppliers.
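
    The VIKOR step can be sketched in a few lines: given a decision matrix and criterion weights (which the paper derives with DANP), compute the group utility S, the individual regret R, and the compromise index Q, then rank suppliers by Q. The matrix, the weights, and the assumption that all criteria are benefit-type below are hypothetical.

```python
import numpy as np

# Hypothetical decision matrix: 4 suppliers x 3 benefit criteria (higher = better),
# with criterion weights as DANP would supply them.
X = np.array([[7., 8., 6.],
              [9., 6., 7.],
              [6., 9., 8.],
              [8., 7., 9.]])
w = np.array([0.5, 0.3, 0.2])

f_best, f_worst = X.max(axis=0), X.min(axis=0)
d = w * (f_best - X) / (f_best - f_worst)   # weighted, normalized regret per criterion

S = d.sum(axis=1)    # group utility (sum of regrets)
R = d.max(axis=1)    # individual regret (worst single criterion)

v = 0.5              # weight of the "majority rule" strategy
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

for i in np.argsort(Q):                      # lowest Q ranks first
    print(f"supplier {i + 1}: S={S[i]:.3f} R={R[i]:.3f} Q={Q[i]:.3f}")
```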

  19. Within-host selection of drug resistance in a mouse model reveals dose-dependent selection of atovaquone resistance mutations

    NARCIS (Netherlands)

    Nuralitha, Suci; Murdiyarso, Lydia S.; Siregar, Josephine E.; Syafruddin, Din; Roelands, Jessica; Verhoef, Jan; Hoepelman, Andy I.M.; Marzuki, Sangkot

    2017-01-01

    The evolutionary selection of malaria parasites within an individual host plays a critical role in the emergence of drug resistance. We have compared the selection of atovaquone resistance mutants in mouse models reflecting two different causes of failure of malaria treatment, an inadequate

  20. CHAIN-WISE GENERALIZATION OF ROAD NETWORKS USING MODEL SELECTION

    Directory of Open Access Journals (Sweden)

    D. Bulatov

    2017-05-01

    Full Text Available Streets are essential entities of urban terrain, and their automated extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process is a set of so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using the circlePeucker algorithm, and finally, model selection is used to decide whether two neighboring segments should be fused into a new geometric entity. Thereby, we consider both a variance-covariance analysis of the residuals and the model complexity. The results on a complex dataset with many traffic roundabouts indicate the benefits of the proposed procedure.
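
    The fuse-or-not decision, which weighs residual variance against model complexity, can be sketched with a BIC comparison. The sketch below is restricted to straight-line primitives for brevity (the paper also uses circle and ellipse arcs, and a variance-covariance analysis rather than plain BIC), and all data are invented.

```python
import numpy as np

def bic_line_fit(x, y):
    # Least-squares line fit y = a + b*x; Gaussian BIC with k = 3 parameters
    # (intercept, slope, noise variance).
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ coef) ** 2)
    n = x.size
    return n * np.log(rss / n) + 3 * np.log(n)

def should_fuse(x1, y1, x2, y2):
    # Fuse two neighboring segments if one joint primitive explains both
    # with a lower BIC than two separate primitives do.
    two = bic_line_fit(x1, y1) + bic_line_fit(x2, y2)
    one = bic_line_fit(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
    return one <= two

rng = np.random.default_rng(1)
x1, x2 = np.linspace(0, 1, 30), np.linspace(1, 2, 30)
noise = lambda: rng.normal(0, 0.01, 30)
print(should_fuse(x1, 0.5 * x1 + noise(), x2, 0.5 * x2 + noise()))        # collinear -> True
print(should_fuse(x1, 0.5 * x1 + noise(), x2, 2.0 * x2 - 1.5 + noise()))  # bent -> False
```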

  1. A computational neural model of goal-directed utterance selection.

    Science.gov (United States)

    Klein, Michael; Kamp, Hans; Palm, Guenther; Doya, Kenji

    2010-06-01

    It is generally agreed that much of human communication is motivated by extra-linguistic goals: we often make utterances in order to get others to do something, or to make them support our cause, or adopt our point of view, etc. However, thus far a computational foundation for this view on language use has been lacking. In this paper we propose such a foundation using Markov Decision Processes. We borrow computational components from the field of action selection and motor control, where a neurobiological basis of these components has been established. In particular, we make use of internal models (i.e., next-state transition functions defined on current state action pairs). The internal model is coupled with reinforcement learning of a value function that is used to assess the desirability of any state that utterances (as well as certain non-verbal actions) can bring about. This cognitive architecture is tested in a number of multi-agent game simulations. In these computational experiments an agent learns to predict the context-dependent effects of utterances by interacting with other agents that are already competent speakers. We show that the cognitive architecture can account for acquiring the capability of deciding when to speak in order to achieve a certain goal (instead of performing a non-verbal action or simply doing nothing), whom to address and what to say. Copyright 2010 Elsevier Ltd. All rights reserved.
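
    The coupling of an internal transition model with a learned value function can be sketched with tabular value iteration on a toy Markov Decision Process. This is only a schematic stand-in for the paper's neural architecture; all states, actions and rewards below are hypothetical.

```python
import numpy as np

# Toy internal model: P[s, a, s'] gives next-state probabilities for each
# state-action pair; the actions stand for, e.g., "speak", "act non-verbally",
# and "do nothing".
n_states, n_actions = 4, 3
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))   # hypothetical goal-derived rewards
gamma = 0.9

V = np.zeros(n_states)
for _ in range(500):                          # value iteration to a fixed point
    Q = R + gamma * P @ V                     # desirability of each (state, action)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=1)                     # which action (utterance) to select
print("state values:", np.round(V, 3))
print("greedy action per state:", policy)
```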

  2. A criterion for heated pipe design by linear electric resistances

    International Nuclear Information System (INIS)

    Bloch, M.; Cruz, J.R.B.

    1984-01-01

    A criterion for the installation of linear electrical elements on horizontal tubes is obtained in this work. The criterion is based upon the calculation of the thermal stresses caused by the non-uniform temperature distribution in the tube cross section. The finite difference method and the SAP IV computer code are both used in the calculations. The criterion is applied to the thermal circuits of the IEN, which have tube diameters varying from φ 1/2 in to φ 8 in. (author) [pt

  3. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from...

  4. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  5. A Path-Independent Forming Limit Criterion for Stamping Simulations

    International Nuclear Information System (INIS)

    Zhu Xinhai; Chappuis, Laurent; Xia, Z. Cedric

    2005-01-01

    The Forming Limit Diagram (FLD) has proved to be a powerful tool for assessing necking failures in sheet metal forming analysis for the majority of stamping operations over the last three decades. However, experimental evidence and theoretical analysis suggest that its applications are limited to linear or almost linear strain paths during the deformation history. Abrupt changes or even gradual deviations from linear strain paths will shift forming limit curves from their original values, a situation that occurs in the vast majority of sequential stamping operations, such as where the drawing process is followed by flanging and re-strike processes. Various forming limit models have been put forward recently to provide remedies for the problem, notably stress-based and strain-gradient-based forming limit criteria. This study presents an alternative path-independent forming limit criterion. Instead of traditional Forming Limit Diagrams (FLD), which are constructed in terms of major-minor principal strains throughout the deformation history, the new criterion defines a critical effective strain ε̄* as the limit strain for necking, and it is shown that ε̄* can be expressed as a function of the current strain rate state and the material work hardening properties, without the need to explicitly consider strain-path effects. It is given by ε̄* = f(β, k, n), where β = dε₂/dε₁ at the current deformation state, and k and n are material strain hardening parameters if a power law is assumed. The analysis builds upon previous work by Stören and Rice [1975] and Zhu et al. [2002], with the incorporation of anisotropic yield models such as Hill'48 for quadratic orthotropic yield and Hill'79 for non-quadratic orthotropic yield. Effects of anisotropic parameters such as R-values and exponent n-values on necking are investigated in detail for a variety of strain paths. Results predicted according to the current analysis are compared against experimental data gathered from the literature

  6. A criterion and mechanism for power ramp defects

    International Nuclear Information System (INIS)

    Garlick, A.; Gravenor, J.G.

    1978-02-01

    The problem of power ramp defects in water reactor fuel pins is discussed in relation to results recently obtained from ramp experiments in the Steam Generating Heavy Water Reactor. Cladding cracks in the defected fuel pins were similar, both macro- and microstructurally, to those in unirradiated Zircaloy exposed to iodine stress-corrosion cracking (scc) conditions. Furthermore, when the measured stress levels for scc in short-term tests were taken as a criterion for ramp defects, UK fuel modelling codes were found to give a useful indication of defect probability under reactor service conditions. The likelihood of sticking between fuel and cladding is discussed, and evidence is presented which suggests that even at power a degree of adhesion may be expected in some fuel pins. The ramp defect mechanism is discussed in terms of fission product scc, initiation being by intergranular penetration and propagation by cleavage when suitably orientated grains are exposed to large dilatational stresses ahead of the main crack. (author)

  7. QV modal distance displacement - a criterion for contingency ranking

    Energy Technology Data Exchange (ETDEWEB)

    Rios, M.A.; Sanchez, J.L.; Zapata, C.J. [Universidad de Los Andes (Colombia). Dept. of Electrical Engineering], Emails: mrios@uniandes.edu.co, josesan@uniandes.edu.co, cjzapata@utp.edu.co

    2009-07-01

    This paper proposes a new methodology using concepts of fast decoupled load flow, modal analysis and contingency ranking, where the impact of each contingency is measured hourly, taking into account the influence of each contingency on the mathematical model of the system, i.e., the Jacobian matrix. The method computes the displacement of the eigenvalues of the reduced Jacobian matrix used in voltage stability analysis as a criterion for contingency ranking, considering the fact that the lowest eigenvalue under normal operating conditions is not the same as the lowest eigenvalue under an N-1 contingency condition. This is done using all branches in the system as well as specific branches selected according to the IBPF index. The test system used is the IEEE 118-node system. (author)
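
    The ranking idea can be sketched abstractly: treat each contingency as a perturbation of the reduced Jacobian and rank contingencies by how far the eigenvalue spectrum moves. The matrices below are invented toy numbers rather than the output of a fast decoupled load flow, and the Euclidean distance between sorted spectra is only one simple reading of the "modal distance displacement".

```python
import numpy as np

# Toy reduced Jacobian for a 3-bus system (hypothetical numbers); each
# contingency perturbs it by removing a branch's contribution.
J_base = np.array([[10., -4., -3.],
                   [-4., 12., -5.],
                   [-3., -5., 11.]])

contingencies = {
    "line 1-2 out": np.array([[ 6.,  0., -3.], [ 0.,  8., -5.], [-3., -5., 11.]]),
    "line 2-3 out": np.array([[10., -4., -3.], [-4.,  7.,  0.], [-3.,  0.,  6.]]),
}

base_eigs = np.linalg.eigvalsh(J_base)          # ascending eigenvalues
ranking = sorted(
    ((np.linalg.norm(np.linalg.eigvalsh(J) - base_eigs), name)
     for name, J in contingencies.items()),
    reverse=True,                               # largest displacement first
)
for dist, name in ranking:
    print(f"{name}: modal distance = {dist:.2f}")
```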

  8. Role of optimization criterion in static asymmetric analysis of lumbar spine load.

    Science.gov (United States)

    Daniel, Matej

    2011-10-01

    A common method for load estimation in biomechanics is inverse dynamics optimization, where the muscle activation pattern is found by minimizing or maximizing an optimization criterion. It has been shown that various optimization criteria predict remarkably similar muscle activation patterns and intra-articular contact forces during leg motion. The aim of this paper is to study the effect of the choice of optimization criterion on L4/L5 loading during static asymmetric loading. Upright standing with a weight in one outstretched arm was taken as a representative position. A musculoskeletal model of the lumbar spine was created from CT images of the Visible Human Project. Several criteria were tested, based on the minimization of muscle forces, muscle stresses, and spinal load. All criteria predict the same level of lumbar spine loading (the difference is below 25%), except the criterion of minimum lumbar shear force, which predicts an unrealistically high spinal load and should not be considered further. The estimated spinal load and predicted muscle force activation pattern are in accordance with intradiscal pressure measurements and EMG measurements. L4/L5 spine loads of 1312 N, 1674 N, and 1993 N were predicted for hand-held masses of 2, 5, and 8 kg, respectively, using the criterion of minimum muscle stress cubed. As the optimization criteria do not considerably affect the spinal load, their choice is not critical in further clinical or ergonomic studies, and a computationally simpler criterion can be used.
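
    The criterion of minimum muscle stress cubed amounts to a small constrained optimization: minimize the sum of (F_i/A_i)³ over the muscle forces subject to moment equilibrium. A minimal sketch with a hypothetical three-muscle planar model (moment arms, cross-sections and external moment all invented) is:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical planar model: moment arms r (m), cross-sections A (m^2),
# and an external moment M (N*m) the muscles must balance.
r = np.array([0.05, 0.04, 0.06])
A = np.array([6e-4, 4e-4, 8e-4])
M = 50.0

cost = lambda F: np.sum((F / A) ** 3)            # minimum muscle stress cubed
cons = {"type": "eq", "fun": lambda F: r @ F - M}  # moment equilibrium
bounds = [(0, None)] * 3                         # muscles can only pull

res = minimize(cost, x0=np.full(3, M / r.sum()), bounds=bounds, constraints=cons)
print("muscle forces (N):", np.round(res.x, 1))
# Crude illustration only: summing forces as a stand-in for joint compression.
print("compressive load estimate (N):", round(res.x.sum(), 1))
```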

  9. Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.

    Science.gov (United States)

    Velichkin, Vladimir A.; Zavyalov, Vladimir A.

    2018-03-01

    This article presents the results of an analysis of the functioning and control of thermal objects (heat exchanger, dryer, heat treatment chamber, etc.). The results were used to determine a mathematical model of the generalized thermal control object. An appropriate optimality criterion was chosen to make the control more energy-efficient. The mathematical programming task was formulated based on the chosen optimality criterion, the control object's mathematical model and the technological constraints. The “maximum energy efficiency” criterion made it possible to avoid solving a system of nonlinear differential equations and to solve the formulated mathematical programming problem analytically. It should be noted that in the case under review the search for the optimal control and the optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.

  10. The selected models of the mesostructure of composites percolation, clusters, and force fields

    CERN Document Server

    Herega, Alexander

    2018-01-01

    This book presents the role of mesostructure in the properties of composite materials. A complex percolation model is developed for the material structure, containing percolation clusters of phases and interior boundaries. Modeling of technological cracks and of percolation in the Sierpinski carpet is described. The interaction of the mesoscopic interior boundaries of the material is discussed, including the fractal nature of the interior boundaries, the oscillatory nature of their interaction, a stochastic model of the boundaries' interaction, and their genesis, structure, and properties. One part of the book introduces the percolation model of the long-range effect, which is based on the notion of multifractal clusters with transforming elements, and describes the theorem on the field interaction of multifractals. In addition, small clusters, their characteristic properties, and a criterion for their stability are presented.

  11. Jet pairing algorithm for the 6-jet Higgs channel via energy chi-square criterion

    International Nuclear Information System (INIS)

    Magallanes, J.B.; Arogancia, D.C.; Gooc, H.C.; Vicente, I.C.M.; Bacala, A.M.; Miyamoto, A.; Fujii, K.

    2002-01-01

    Study and discovery of the Higgs bosons at the JLC (Joint Linear Collider) is one of the tasks of the ACFA (Asian Committee for Future Accelerators)-JLC Group. The mode of Higgs production at the JLC is e⁺e⁻ → Z⁰H⁰. In this paper, studies concentrate on the Higgsstrahlung process and the selection of its signals by finding the right jet-pairing algorithm for the 6-jet final state at 300 GeV, assuming that the Higgs boson mass is 120 GeV and the luminosity is 500 fb⁻¹. The total decay width Γ(H⁰ → all) and the efficiency of the signals at the JLC are studied utilizing the 6-jet channel. Out of the 91,500 Higgsstrahlung events, 4,174 6-jet events are selected. The PYTHIA Monte Carlo generator generates the 6-jet Higgsstrahlung channel according to the Standard Model. The generated events are then simulated by Quick Simulator using the JLC parameters. After tagging all 6 quarks which correspond to the 6-jet final state of the Higgsstrahlung, the mean energies of the Z, H, and W's are obtained. Having calculated this information, the event energy chi-square is defined, and it is found that the correct combinations generally have smaller values. This criterion can be used to find the correct jet pairing and as one of the cuts against background signals later on. Other chi-square definitions are also proposed. (S. Funahashi)
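
    The combinatorial core of such a jet-pairing algorithm can be sketched as follows: enumerate the 15 ways of grouping 6 jets into three pairs, try every assignment of the pairs to the Z, H and W candidates, and keep the grouping with the smallest energy chi-square. The jet energies, expected mean energies and resolutions below are invented for illustration and are not the values used in the paper.

```python
from itertools import permutations

def pairings(jets):
    # All ways to split an even-sized list into unordered pairs (15 for 6 jets).
    if not jets:
        yield []
        return
    first, rest = jets[0], jets[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

# Hypothetical measured jet energies (GeV) and illustrative expected mean
# pair energies and resolutions for the Z, H and W candidates.
jet_E = [92.0, 55.0, 70.0, 48.0, 20.0, 15.0]
expected = [110.0, 130.0, 60.0]
sigma = [10.0, 12.0, 8.0]

def chi2(pairing):
    # Try every assignment of the three pairs to (Z, H, W); keep the best.
    return min(
        sum((a + b - e) ** 2 / s ** 2
            for (a, b), e, s in zip(assign, expected, sigma))
        for assign in permutations(pairing)
    )

best = min(pairings(jet_E), key=chi2)
print("best pairing:", best, "with chi-square", round(chi2(best), 2))
```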

  12. Heat transfer modelling and stability analysis of selective laser melting

    International Nuclear Information System (INIS)

    Gusarov, A.V.; Yadroitsev, I.; Bertrand, Ph.; Smurov, I.

    2007-01-01

    The process of direct manufacturing by selective laser melting basically consists of laser beam scanning over a thin powder layer deposited on a dense substrate. Complete remelting of the powder in the scanned zone and its good adhesion to the substrate ensure obtaining functional parts with improved mechanical properties. Experiments with single-line scanning indicate that an interval of scanning velocities exists where the remelted tracks are uniform. The tracks become broken if the scanning velocity is outside this interval. This is extremely undesirable and is referred to as the 'balling' effect. A numerical model of coupled radiation and heat transfer is proposed to analyse the observed instability. The 'balling' effect at high scanning velocities (above ∼20 cm/s for the present conditions) can be explained by the Plateau-Rayleigh capillary instability of the melt pool. Two factors stabilize the process with decreasing scanning velocity: reducing the length-to-width ratio of the melt pool and increasing the width of its contact with the substrate

  13. Ginzburg criterion for ionic fluids: the effect of Coulomb interactions.

    Science.gov (United States)

    Patsahan, O

    2013-08-01

    The effect of the Coulomb interactions on the crossover between mean-field and Ising critical behavior in ionic fluids is studied using the Ginzburg criterion. We consider the charge-asymmetric primitive model supplemented by short-range attractive interactions in the vicinity of the gas-liquid critical point. The model without Coulomb interactions exhibiting typical Ising critical behavior is used to calibrate the Ginzburg temperature of the systems comprising electrostatic interactions. Using the collective variables method, we derive a microscopic-based effective Hamiltonian for the full model. We obtain explicit expressions for all the relevant Hamiltonian coefficients within the framework of the same approximation, i.e., the one-loop approximation. Then we consistently calculate the reduced Ginzburg temperature t_G for both the purely Coulombic model (a restricted primitive model) and the purely nonionic model (a hard-sphere square-well model) as well as for the model parameters ranging between these two limiting cases. Contrary to the previous theoretical estimates, we obtain the reduced Ginzburg temperature for the purely Coulombic model to be about 20 times smaller than for the nonionic model. For the full model including both short-range and long-range interactions, we show that t_G approaches the value found for the purely Coulombic model when the strength of the Coulomb interactions becomes sufficiently large. Our results suggest a key role of Coulomb interactions in the crossover behavior observed experimentally in ionic fluids as well as confirm the Ising-like criticality in the Coulomb-dominated ionic systems.

  14. The Concept of Performance Levels in Criterion-Referenced Assessment.

    Science.gov (United States)

    Hewitson, Mal

    The concept of performance levels in criterion-referenced assessment is explored by applying the idea to different types of tests commonly used in schools, mastery tests (including diagnostic tests) and achievement tests. In mastery tests, a threshold performance standard must be established for each criterion. Attainment of this threshold…

  15. An analytic expression for the sheath criterion in magnetized plasmas with multi-charged ion species

    International Nuclear Information System (INIS)

    Hatami, M. M.

    2015-01-01

    The generalized Bohm criterion in magnetized multi-component plasmas consisting of multi-charged positive and negative ion species and electrons is analytically investigated using the hydrodynamic model. It is assumed that the electron and negative ion density distributions are Boltzmann distributions with different temperatures and that the positive ions enter the sheath region obliquely. Our results show that the positive and negative ion temperatures, the orientation of the applied magnetic field and the charge numbers of the positive and negative ions strongly affect the Bohm criterion in these multi-component plasmas. To check the validity of the derived generalized Bohm criterion, it is reduced to familiar physical conditions, and it is shown that the monotonic reduction of the positive ion density distribution leading to sheath formation occurs only when the entrance velocity of the ions into the sheath satisfies the obtained Bohm criterion. Also, as a practical application of the obtained Bohm criterion, the effects of the ionic temperature and concentration, as well as of the magnetic field, on the behavior of the charged particle density distributions, and hence on the sheath thickness of a magnetized plasma consisting of electrons and singly charged positive and negative ion species, are studied numerically

  16. Equilibrium and nonequilibrium attractors for a discrete, selection-migration model

    Science.gov (United States)

    James F. Selgrade; James H. Roberds

    2003-01-01

    This study presents a discrete-time model for the effects of selection and immigration on the demographic and genetic compositions of a population. Under biologically reasonable conditions, it is shown that the model always has an equilibrium. Although equilibria for similar models without migration must have real eigenvalues, for this selection-migration model we...

  17. Modified Schur-Cohn Criterion for Stability of Delayed Systems

    Directory of Open Access Journals (Sweden)

    Juan Ignacio Mulero-Martínez

    2015-01-01

    Full Text Available A modified Schur-Cohn criterion for time-delay linear time-invariant systems is derived. The classical Schur-Cohn criterion has two main drawbacks; namely, (i) the dimension of the Schur-Cohn matrix generates some round-off errors, eventually resulting in a polynomial of s with erroneous coefficients, and (ii) imaginary roots are very hard to detect when numerical errors creep in. In contrast to the classical Schur-Cohn criterion, an alternative approach is proposed in this paper which is based on the application of triangular matrices over a polynomial ring, in a similar way as in the Jury test of stability for discrete systems. The advantages of the proposed approach are that it halves the dimension of the polynomial and it only requires seeking real roots, making this modified criterion comparable to the Rekasius substitution criterion.
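
    For reference, the classical scalar recursion behind the Schur-Cohn/Jury test for discrete systems, which the paper generalizes to a polynomial ring for the time-delay case, can be sketched as follows; the reduction step and the strict-inequality check are the standard textbook ones, not the paper's method.

```python
def schur_stable(a, tol=1e-12):
    # Schur-Cohn/Jury-style recursion: P(z) = a[0] z^n + ... + a[n] has all
    # roots strictly inside the unit circle iff |a[n]| < |a[0]| holds at every
    # reduction step b[k] = a[0]*a[k] - a[n]*a[n-k], k = 0..n-1.
    a = list(map(float, a))
    while len(a) > 1:
        if abs(a[-1]) >= abs(a[0]) - tol:
            return False
        n = len(a) - 1
        a = [a[0] * a[k] - a[-1] * a[n - k] for k in range(n)]
    return True

print(schur_stable([1, -1.5, 0.56]))   # roots 0.7, 0.8 -> True
print(schur_stable([1, -1.7, 0.60]))   # roots 0.5, 1.2 -> False
```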

  18. Digital Materials - Evaluation of the Possibilities of using Selected Hyperelastic Models to Describe Constitutive Relations

    Science.gov (United States)

    Mańkowski, J.; Lipnicki, J.

    2017-08-01

    The authors tried to identify the parameters of numerical models of digital materials, which are a kind of composite resulting from the manufacture of a product in 3D printers. With an arrangement of several printer heads, a new material can result from the mixing of materials with radically different properties during the production of a single layer of the product. The new material has properties dependent on the base materials' properties and their proportions. The tensile characteristics of digital materials are often non-linear and qualify to be described by hyperelastic material models. The identification was conducted based on the results of tensile tests, with polynomial coefficients of various degrees fitted to the data. Drucker's stability criterion was also examined. Fourteen different materials were analyzed.

  20. Performance Measurement Model for the Supplier Selection Based on AHP

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2015-10-01

    Full Text Available The performance of suppliers is a crucial factor for the success or failure of any company. Rational and effective decision making in the supplier selection process can help the organization to optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP). Experience and studies have shown that there is no single best way of evaluating and selecting suppliers; the process varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied to the supplier selection process.
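
    The core AHP computation referred to here (not spelled out in the abstract) is the principal-eigenvector weighting of a pairwise comparison matrix plus a consistency check. A minimal sketch, with a hypothetical three-criteria comparison matrix on the Saaty scale:

```python
import numpy as np

# Hypothetical pairwise comparison matrix: A[i, j] is the importance of
# criterion i relative to criterion j on the 1-9 Saaty scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                               # priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index (partial table)
print("weights:", np.round(w, 3), " CR =", round(ci / ri, 3))  # CR < 0.1 acceptable
```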

  1. A risk-based microbiological criterion that uses the relative risk as the critical limit

    DEFF Research Database (Denmark)

    Andersen, Jens Kirk; Nørrung, Birgit; da Costa Alves Machado, Simone

    2015-01-01

    A risk-based microbiological criterion is described that is based on the relative risk associated with the analytical results of a number of samples taken from a food lot. The acceptable limit is a specific level of risk and not a specific number of microorganisms, as in other microbiological criteria. The approach requires the availability of a quantitative microbiological risk assessment model to get risk estimates for food products from sampled food lots. By relating these food lot risk estimates to the mean risk estimate associated with a representative baseline data set, a relative risk estimate can be obtained. This relative risk estimate can then be compared with a critical value, defined by the criterion. This microbiological criterion based on a relative risk limit is particularly useful when quantitative enumeration data are available and when the prevalence of the microorganism

  2. Sensitive criterion for chirality; Chiral doublet bands in 104Rh59

    International Nuclear Information System (INIS)

    Koike, T.; Starosta, K.; Vaman, C.; Ahn, T.; Fossan, D.B.; Clark, R.M.; Cromaz, M.; Lee, I.Y.; Macchiavelli, A.O.

    2003-01-01

    A particle plus triaxial rotor model was applied to odd-odd nuclei in the A ∼ 130 region in order to study the unique parity πh11/2 x νh11/2 rotational bands. With maximum triaxiality assumed and the intermediate axis chosen as the quantization axis for the model calculations, the two lowest energy eigenstates of a given spin have chiral properties. The independence of the quantity S(I) from spin can be used as a new criterion for chirality. In addition, a diminishing staggering amplitude of S(I) with increasing spin implies triaxiality in neighboring odd-A nuclei. Chiral quartet bases were constructed specifically to examine electromagnetic properties for chiral structures. A set of selection rules unique to chirality was derived. Doublet bands built on the πg9/2 x νh11/2 configuration have been discovered in odd-odd 104Rh using the 96Zr(11B, 3n) reaction. Based on the discussed criteria for chirality, it is concluded that the doublet bands observed in 104Rh exhibit characteristic chiral properties suggesting a new region of chirality around A ∼ 110. In addition, magnetic moment measurements have been performed to test the πh11/2 x νh11/2 configuration in 128Cs and the πg9/2 x νh11/2 configuration in 104Rh

  3. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

  5. An Integrated DEMATEL-QFD Model for Medical Supplier Selection

    OpenAIRE

    Mehtap Dursun; Zeynep Şener

    2014-01-01

    Supplier selection is considered one of the most critical issues encountered by operations and purchasing managers seeking to sharpen the company’s competitive advantage. In this paper, a novel fuzzy multi-criteria group decision making approach integrating quality function deployment (QFD) and the decision making trial and evaluation laboratory (DEMATEL) method is proposed for supplier selection. The proposed methodology makes it possible to consider the impacts of inner dependence among supplier assessment cr...

  6. National HIV prevalence estimates for sub-Saharan Africa: controlling selection bias with Heckman-type selection models

    Science.gov (United States)

    Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till

    2012-01-01

    Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which unlike conventional imputation can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need for increasing participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342
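
    The mechanics of a Heckman-type correction can be sketched with the classical two-step estimator for a continuous outcome: a probit selection equation, the inverse Mills ratio, and an outcome regression augmented with that ratio. For a binary outcome such as HIV status the paper's models are of the maximum-likelihood bivariate-probit type, so the sketch below only illustrates the general idea; the data-generating process is entirely synthetic.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                  # "instrument": affects testing only
x = rng.normal(size=n)
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)

tested = (0.5 + 1.0 * z + 0.5 * x + u[:, 0] > 0)   # selection equation
y = 1.0 + 0.8 * x + u[:, 1]                        # outcome, seen only if tested

# Step 1: probit for participation, then the inverse Mills ratio.
Zmat = sm.add_constant(np.column_stack([z, x]))
probit = sm.Probit(tested.astype(float), Zmat).fit(disp=0)
xb = Zmat @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome regression on the selected sample, augmented with the IMR.
naive = sm.OLS(y[tested], sm.add_constant(x[tested])).fit()
Xmat = sm.add_constant(np.column_stack([x[tested], imr[tested]]))
corrected = sm.OLS(y[tested], Xmat).fit()
print("naive slope:    ", round(naive.params[1], 3))      # biased by selection
print("corrected slope:", round(corrected.params[1], 3))  # near the true 0.8
```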

  7. Evaluation of uncertainties in selected environmental dispersion models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-01-01

    Compliance with standards of radiation dose to the general public has necessitated the use of dispersion models to predict radionuclide concentrations in the environment due to releases from nuclear facilities. Because these models are only approximations of reality and because of inherent variations in the input parameters used in these models, their predictions are subject to uncertainty. Quantification of this uncertainty is necessary to assess the adequacy of these models for use in determining compliance with protection standards. This paper characterizes the capabilities of several dispersion models to accurately predict pollutant concentrations in environmental media. Three types of models are discussed: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations

  8. Nonparametric adaptive age replacement with a one-cycle criterion

    International Nuclear Information System (INIS)

    Coolen-Schrijner, P.; Coolen, F.P.A.

    2007-01-01

    Age replacement of technical units has received much attention in the reliability literature over the last four decades. Mostly, the failure time distribution for the units is assumed to be known, and minimal cost per unit of time is used as the optimality criterion, where renewal reward theory simplifies the mathematics involved but requires the assumption that the same process and replacement strategy continues over a very large ('infinite') period of time. Recently, there has been increasing attention to adaptive strategies for age replacement, taking into account the information from the process. Although renewal reward theory can still be used to provide an intuitively and mathematically attractive optimality criterion, it is more logical to use minimal cost per unit of time over a single cycle as the optimality criterion for adaptive age replacement. In this paper, we first show that in the classical age replacement setting, with known failure time distribution with increasing hazard rate, the one-cycle criterion leads to earlier replacement than the renewal reward criterion. Thereafter, we present adaptive age replacement with a one-cycle criterion within the nonparametric predictive inferential framework. We study the performance of this approach via simulations, which are also used for comparisons with the use of the renewal reward criterion within the same statistical framework
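
    The difference between the two criteria can be made concrete with a small numerical sketch: for a Weibull lifetime with increasing hazard rate and hypothetical cost figures, minimize the renewal-reward cost rate and the expected one-cycle cost per unit of time over a grid of replacement ages. Under these assumed numbers the one-cycle optimum lies earlier, in line with the result described above.

```python
import numpy as np
from scipy import integrate, stats

cp, cf = 1.0, 10.0                            # preventive vs failure replacement cost
life = stats.weibull_min(c=2.5, scale=10.0)   # increasing hazard rate

def renewal_cost_rate(T):
    # Expected cost per unit time over an infinite horizon (renewal reward).
    exp_cycle, _ = integrate.quad(life.sf, 0, T)
    return (cp * life.sf(T) + cf * life.cdf(T)) / exp_cycle

def one_cycle_cost_rate(T):
    # Expected cost per unit time over a single cycle: E[C/L] with L = min(X, T).
    integral, _ = integrate.quad(lambda t: life.pdf(t) / t, 1e-9, T)
    return cp * life.sf(T) / T + cf * integral

Ts = np.linspace(0.5, 15, 300)
T_renewal = Ts[np.argmin([renewal_cost_rate(T) for T in Ts])]
T_one_cycle = Ts[np.argmin([one_cycle_cost_rate(T) for T in Ts])]
print(f"optimal T (renewal reward): {T_renewal:.2f}")
print(f"optimal T (one cycle):      {T_one_cycle:.2f}")   # earlier replacement
```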

  9. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Global economic consequences of selected surgical diseases: a modelling study.

    Science.gov (United States)

    Alkire, Blake C; Shrime, Mark G; Dare, Anna J; Vincent, Jeffrey R; Meara, John G

    2015-04-27

    The surgical burden of disease is substantial, but little is known about the associated economic consequences. We estimate the global macroeconomic impact of the surgical burden of disease due to injury, neoplasm, digestive diseases, and maternal and neonatal disorders from two distinct economic perspectives. We obtained mortality rate estimates for each disease for the years 2000 and 2010 from the Institute of Health Metrics and Evaluation Global Burden of Disease 2010 study, and estimates of the proportion of the burden of the selected diseases that is surgical from a paper by Shrime and colleagues. We first used the value of lost output (VLO) approach, based on the WHO's Projecting the Economic Cost of Ill-Health (EPIC) model, to project annual market economy losses due to these surgical diseases during 2015-30. EPIC attempts to model how disease affects a country's projected labour force and capital stock, which in turn are related to losses in economic output, or gross domestic product (GDP). We then used the value of lost welfare (VLW) approach, which is conceptually based on the value of a statistical life and is inclusive of non-market losses, to estimate the present value of long-run welfare losses resulting from mortality and short-run welfare losses resulting from morbidity incurred during 2010. Sensitivity analyses were performed for both approaches. During 2015-30, the VLO approach projected that surgical conditions would result in losses of 1·25% of potential GDP, or $20·7 trillion (2010 US$, purchasing power parity) in the 128 countries with data available. When expressed as a proportion of potential GDP, annual GDP losses were greatest in low-income and middle-income countries, with up to a 2·5% loss in output by 2030. When total welfare losses are assessed (VLW), the present value of economic losses is estimated to be equivalent to 17% of 2010 GDP, or $14·5 trillion in the 175 countries assessed with this approach. Neoplasm and injury account

  11. Criterion for testing multiparticle negative-partial-transpose entanglement

    International Nuclear Information System (INIS)

    Zeng, B.; Zhou, D.L.; Zhang, P.; Xu, Z.; You, L.

    2003-01-01

    We revisit the criterion of multiparticle entanglement based on the overlaps of a given quantum state ρ with maximally entangled states. For a system of m particles, each with N distinct states, we prove that ρ is m-particle negative-partial-transpose entangled if there exists a maximally entangled state |MES⟩ such that ⟨MES|ρ|MES⟩ > 1/N. While this sufficiency condition is weaker than the Peres-Horodecki criterion in all cases, it applies to multiparticle systems and becomes especially useful when the number of particles (m) is large. We also consider the converse of this criterion and illustrate its invalidity with counterexamples

  12. Criterion of damage beginning: experimental identification for laminate composite

    International Nuclear Information System (INIS)

    Thiebaud, F.; Perreux, D.; Varchon, D.; Lebras, J.

    1996-01-01

    The aim of this study is to propose a criterion for the onset of damage in laminate composites. The material is a glass-epoxy laminate [+55°, −55°]n produced by the filament winding process. First, a description of the damage is given, which allows a damage variable to be defined. Using the free-energy potential, an associated variable is defined. The damage criterion is written in terms of this associated variable. The parameter of the criterion is identified using mechanical and acoustical methods. The results are compared and show good agreement. (authors). 13 refs., 5 figs.

  13. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Science.gov (United States)

    2013-04-03

    NUCLEAR REGULATORY COMMISSION [NRC-2013-0062] Reporting Procedure for Mathematical Models Selected... The notice concerns Regulatory Guide (RG) 4.4, ``Reporting Procedure for Mathematical Models Selected to Predict Heated Effluent...,'' which describes a procedure acceptable to the NRC staff for providing summary details of mathematical modeling methods used in...

  14. Temperature profile retrieval in axisymmetric combustion plumes using multilayer perceptron modeling and spectral feature selection in the infrared CO2 emission band.

    Science.gov (United States)

    García-Cuesta, Esteban; de Castro, Antonio J; Galván, Inés M; López, Fernando

    2014-01-01

    In this work, a methodology based on the combined use of a multilayer perceptron model fed with selected spectral information is presented to invert the radiative transfer equation (RTE) and to recover the spatial temperature profile inside an axisymmetric flame. The spectral information is provided by the measurement of the infrared CO2 emission band in the 3-5 μm spectral region. A guided spectral feature selection was carried out using a joint criterion of principal component analysis and a priori physical knowledge of the radiative problem. After applying this guided feature selection, a subset of 17 wavenumbers was selected. The proposed methodology was applied to synthetic scenarios. Also, an experimental validation was carried out by measuring the spectral emission of the exhaust hot gas plume of a microjet engine with a Fourier transform-based spectroradiometer. Temperatures retrieved using the proposed methodology were compared with classical thermocouple measurements, showing good agreement between them. Results obtained using the proposed methodology are very promising and can encourage the use of sensor systems based on the spectral measurement of the CO2 emission band in the 3-5 μm spectral window to monitor combustion processes in a nonintrusive way.
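
    The shape of the pipeline (guided feature selection followed by an MLP inversion) can be sketched with scikit-learn on synthetic stand-in spectra. The channel count of 17 is taken from the abstract; everything else (data, network size) is invented, and the a priori physical screening is not modelled.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_spectra, n_channels = 400, 200
X = rng.normal(size=(n_spectra, n_channels))   # stand-in emission spectra
X[:, [40, 120]] *= 3.0                         # two physically informative channels
y = 2.0 * X[:, 40] - 1.5 * X[:, 120] + rng.normal(0, 0.1, n_spectra)  # toy target

# Guided selection: keep the channels loading most strongly on the leading PCs.
pca = PCA(n_components=5).fit(X)
scores = np.abs(pca.components_).max(axis=0)
keep = np.argsort(scores)[-17:]                # 17 wavenumbers, as in the paper

X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print("held-out R^2:", round(mlp.score(X_te, y_te), 3))
```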

  15. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    Science.gov (United States)

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses by Logan and Gjedde-Patlak. Both axes of the vi-plot have direct physiological interpretation, and the plot shows kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. The Selection of Turbulence Models for Prediction of Room Airflow

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    This paper discusses the use of different turbulence models and their advantages in given situations. As an example, it is shown that a simple zero-equation model can be used for the prediction of special situations such as flow with a low level of turbulence. A zero-equation model with compensation

  17. Stem biomass and volume models of selected tropical tree species ...

    African Journals Online (AJOL)

    Stem biomass and stem volume were modelled as a function of diameter (at breast height; Dbh) and stem height (height to the crown base). Logarithmic models are presented that utilise Dbh and height data to predict tree component biomass and stem volumes. Alternative models are given that afford prediction based on ...

  18. Model selection criteria : how to evaluate order restrictions

    NARCIS (Netherlands)

    Kuiper, R.M.

    2012-01-01

    Researchers often have ideas about the ordering of model parameters. They frequently have one or more theories about the ordering of the group means, in analysis of variance (ANOVA) models, or about the ordering of coefficients corresponding to the predictors, in regression models. A researcher might

  19. The Living Dead: Transformative Experiences in Modelling Natural Selection

    Science.gov (United States)

    Petersen, Morten Rask

    2017-01-01

    This study considers how students change their coherent conceptual understanding of natural selection through a hands-on simulation. The results show that most students change their understanding. In addition, some students also underwent a transformative experience and used their new knowledge in a leisure time activity. These transformative…

  20. Modelling the negative effects of landscape fragmentation on habitat selection

    NARCIS (Netherlands)

    Langevelde, van F.

    2015-01-01

    Landscape fragmentation constrains movement of animals between habitat patches. Fragmentation may, therefore, limit the possibilities to explore and select the best habitat patches, and some animals may have to cope with low-quality patches due to these movement constraints. If so, these individuals

  1. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    the attributes in the database into small, usually two-dimensional distributions. We describe several optimizations that can make selectivity estimation highly efficient, and we present a complete implementation inside PostgreSQL’s query optimizer. Experimental results indicate an order of magnitude better...

  2. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    Science.gov (United States)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human dominated ecosystems the presence of a given species is the result of both its ecological suitability and human footprint on nature such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) simultaneously estimates land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between SSDM and classical SDMs, with contrasted patterns according to species and spatial resolutions. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong misestimation of the actual and future probabilities of presence modelled. Beyond this selection bias, the SSDM we propose represents

  3. Importance biasing quality criterion based on contribution response theory

    International Nuclear Information System (INIS)

    Borisov, N.M.; Panin, M.P.

    2001-01-01

    The report proposes a visual criterion for importance biasing in both forward and adjoint simulation. The similarity of the random collision event distributions of contribution Monte Carlo and importance biasing is proved. The conservation of the total number of random trajectory crossings of the surfaces which separate the source and the detector is proposed as the importance biasing quality criterion. The use of this criterion is demonstrated on the example of forward vs. adjoint importance biasing in a gamma-ray deep penetration problem. Because more data have been published on forward field characteristics than on adjoint ones, the importance function for adjoint simulation can be approximated more accurately than that for forward simulation; consequently, adjoint importance simulation is more effective than forward simulation. The proposed criterion indicates this visually, showing the most uniform distribution of random trajectory crossing events for the most effective importance biasing parameters and pointing in the direction in which the importance biasing parameters should be tuned. (orig.)

  4. Bayesian Information Criterion as an Alternative way of Statistical Inference

    Directory of Open Access Journals (Sweden)

    Nadejda Yu. Gubanova

    2012-05-01

    Full Text Available The article treats the Bayesian information criterion as an alternative to traditional methods of statistical inference based on null hypothesis significance testing (NHST). The comparison of ANOVA and BIC results for a psychological experiment is discussed.
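
    The kind of comparison the article discusses can be sketched by scoring a null (grand-mean) model against a group-means (ANOVA) model with BIC; the difference in BIC approximates twice the log Bayes factor. The data below are simulated with invented group means.

```python
import numpy as np

rng = np.random.default_rng(0)
groups = [rng.normal(mu, 1.0, 50) for mu in (0.0, 0.5, 1.0)]   # toy experiment
y = np.concatenate(groups)
n = y.size

def bic(rss, k):
    # Gaussian-error BIC with k mean parameters (common variance assumed).
    return n * np.log(rss / n) + k * np.log(n)

rss_null = np.sum((y - y.mean()) ** 2)                        # one grand mean
rss_anova = sum(np.sum((g - g.mean()) ** 2) for g in groups)  # one mean per group

bic_null, bic_anova = bic(rss_null, 1), bic(rss_anova, 3)
print(f"BIC null = {bic_null:.1f}, BIC ANOVA = {bic_anova:.1f}")
print("preferred:", "group-means model" if bic_anova < bic_null else "null model")
```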

  5. Angular criterion for distinguishing between Fraunhofer and Fresnel diffraction

    International Nuclear Information System (INIS)

    Medina, Francisco F.; Garcia-Sucerquia, Jorge; Castaneda, Roman; Matteucci, Giorgio

    2003-03-01

    The distinction between Fresnel and Fraunhofer diffraction is a crucial condition for the accurate analysis of diffracting structures. In this paper we propose a criterion based on the angle subtended by the first zero of the diffraction pattern from the center of the diffracting aperture. The determination of the zero of the diffraction pattern is the crucial point for assuring the precision of the criterion. It mainly depends on the dynamic range of the detector. Therefore, the applicability of adequate thresholds for different detector types is discussed. The criterion is also generalized by expressing it in terms of the number of Fresnel zones delimited by the aperture. Simulations are reported to illustrate the feasibility of the criterion. (author)
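
    A closely related textbook rule of thumb, given here only as a rough companion to the paper's angular criterion, classifies the regime by the Fresnel number N_F = a²/(λz). The threshold of 0.1 below is an assumption for illustration, not a value taken from the paper.

```python
def fresnel_number(a, wavelength, z):
    # Number of Fresnel zones an aperture of radius a subtends at distance z.
    return a ** 2 / (wavelength * z)

a, lam = 0.5e-3, 633e-9          # 0.5 mm aperture radius, He-Ne wavelength
for z in (0.1, 1.0, 10.0, 100.0):
    nf = fresnel_number(a, lam, z)
    regime = "Fraunhofer" if nf < 0.1 else "Fresnel"
    print(f"z = {z:6.1f} m  N_F = {nf:7.3f}  -> {regime}")
```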

  6. Model selection for integrated pest management with stochasticity.

    Science.gov (United States)

    Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel

    2018-04-07

    In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
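
    Selecting among motility models against several metrics at once is a multi-objective problem, so candidate parameterizations are compared by Pareto dominance. Below is a generic non-dominated filter (assuming lower error on every metric is better), not the authors' optimizer; the error values are invented.

```python
import numpy as np

def pareto_front(errors):
    """Indices of non-dominated rows of an (n_models, n_metrics) error
    matrix; row i is dominated if some row j is <= it everywhere and
    strictly < somewhere."""
    e = np.asarray(errors, dtype=float)
    keep = []
    for i in range(len(e)):
        dominated = any(
            np.all(e[j] <= e[i]) and np.any(e[j] < e[i])
            for j in range(len(e)) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical fit errors: (translational speed, turn speed, meandering index)
errors = [[0.9, 0.2, 0.4],   # parameterization A
          [0.3, 0.5, 0.3],   # parameterization B
          [0.8, 0.6, 0.9]]   # parameterization C, dominated by B
print(pareto_front(errors))  # -> [0, 1]: A and B are optimal trade-offs
```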

  8. A Criterion to Identify Maximally Entangled Four-Qubit State

    International Nuclear Information System (INIS)

    Zha Xinwei; Song Haiyang; Feng Feng

    2011-01-01

    Paolo Facchi et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new ones. (general)

  9. Self-Adjointness Criterion for Operators in Fock Spaces

    International Nuclear Information System (INIS)

    Falconi, Marco

    2015-01-01

    In this paper we provide a criterion of essential self-adjointness for operators in the tensor product of a separable Hilbert space and a Fock space. The class of operators we consider may contain a self-adjoint part, a part that preserves the number of Fock space particles and a non-diagonal part that is at most quadratic with respect to the creation and annihilation operators. The hypotheses of the criterion are satisfied in several interesting applications.

  10. Selected Constitutive Models for Simulating the Hygromechanical Response of Wood

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund

    … the boundary conditions are discussed based on discrepancies found in similar research on moisture transport in paper stacks. Paper III: A new sorption hysteresis model suitable for implementation in a numerical method is developed. The prevailing so-called scanning curves are modeled by closed-form expressions, which depend only on the current relative humidity of the air and the current moisture content of the wood. Furthermore, the expressions for the scanning curves are formulated independently of the temperature- and species-dependent boundary curves. Paper IV: The sorption hysteresis model developed … are discussed. The constitutive moisture transport models are coupled with a heat transport model, yielding terms that describe the so-called Dufour and Soret effects, however with multiple phases and hysteresis included. Paper VII: In this paper the modeling of transverse couplings in creep of wood …

  11. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    Science.gov (United States)

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  12. Selected Aspects of Computer Modeling of Reinforced Concrete Structures

    Directory of Open Access Journals (Sweden)

    Szczecina M.

    2016-03-01

    Full Text Available The paper presents some important aspects concerning the material constants of concrete and the stages of modeling reinforced concrete structures. The problems considered are: the choice of a proper material model for concrete, the characterization of the compressive and tensile behavior of concrete, and the determination of the dilation angle, fracture energy and relaxation time for concrete. Proper values of the material constants are fixed in simple compression and tension tests. The effectiveness and correctness of the applied model are checked on the example of reinforced concrete frame corners under an opening bending moment. Calculations are performed in the Abaqus software using the Concrete Damaged Plasticity model.

  13. Exploratory regression analysis: a tool for selecting models and determining predictor importance.

    Science.gov (United States)

    Braun, Michael T; Oswald, Frederick L

    2011-06-01

    Linear regression analysis is one of the most important tools in a researcher's toolbox for creating and testing predictive models. Although linear regression analysis indicates how strongly a set of predictor variables, taken together, will predict a relevant criterion (i.e., the multiple R), the analysis cannot indicate which predictors are the most important. Although there is no definitive or unambiguous method for establishing predictor variable importance, there are several accepted methods. This article reviews those methods for establishing predictor importance and provides a program (in Excel) for implementing them (available for direct download at http://dl.dropbox.com/u/2480715/ERA.xlsm?dl=1). The program investigates all 2^p - 1 submodels and produces several indices of predictor importance. This exploratory approach to linear regression, similar to other exploratory data analysis techniques, has the potential to yield both theoretical and practical benefits.
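
    The exhaustive enumeration the program performs is easy to reproduce outside Excel. A minimal sketch, with invented data and predictor names, fitting all 2^p - 1 submodels and reporting each one's R²:

```python
import itertools
import numpy as np

def all_subsets_r2(X, y, names):
    """R^2 of an OLS fit for every non-empty predictor subset."""
    n = len(y)
    tss = (y - y.mean()) @ (y - y.mean())
    results = {}
    for k in range(1, X.shape[1] + 1):
        for cols in itertools.combinations(range(X.shape[1]), k):
            Xs = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            resid = y - Xs @ beta
            results[tuple(names[c] for c in cols)] = 1 - (resid @ resid) / tss
    return results

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=100)

for subset, r2 in sorted(all_subsets_r2(X, y, ["x1", "x2", "x3"]).items(),
                         key=lambda kv: -kv[1]):
    print(subset, round(r2, 3))  # increments in R^2 across subsets hint at importance
```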

  14. Bayesian model selection validates a biokinetic model for zirconium processing in humans

    Science.gov (United States)

    2012-01-01

    Background In radiation protection, biokinetic models for zirconium processing are of crucial importance in dose estimation and further risk analysis for humans exposed to this radioactive substance. They provide limiting values of detrimental effects and build the basis for applications in internal dosimetry, the prediction for radioactive zirconium retention in various organs as well as retrospective dosimetry. Multi-compartmental models are the tool of choice for simulating the processing of zirconium. Although easily interpretable, determining the exact compartment structure and interaction mechanisms is generally daunting. In the context of observing the dynamics of multiple compartments, Bayesian methods provide efficient tools for model inference and selection. Results We are the first to apply a Markov chain Monte Carlo approach to compute Bayes factors for the evaluation of two competing models for zirconium processing in the human body after ingestion. Based on in vivo measurements of human plasma and urine levels we were able to show that a recently published model is superior to the standard model of the International Commission on Radiological Protection. The Bayes factors were estimated by means of the numerically stable thermodynamic integration in combination with a recently developed copula-based Metropolis-Hastings sampler. Conclusions In contrast to the standard model the novel model predicts lower accretion of zirconium in bones. This results in lower levels of noxious doses for exposed individuals. Moreover, the Bayesian approach allows for retrospective dose assessment, including credible intervals for the initially ingested zirconium, in a significantly more reliable fashion than previously possible. All methods presented here are readily applicable to many modeling tasks in systems biology. PMID:22863152
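
    The evidence computation named here, thermodynamic integration, estimates the log marginal likelihood as log Z = ∫₀¹ E_β[log L] dβ, where the expectation is taken under the power posterior p(θ|D, β) ∝ L(θ)^β p(θ). A toy sketch with a one-parameter Gaussian model and a plain Metropolis sampler (not the copula-based sampler of the paper) is given below; run it per competing model and difference the results to obtain a log Bayes factor.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.0, 1.0, size=50)              # toy data set

def log_like(theta):                              # N(theta, 1) likelihood
    return -0.5 * np.sum((data - theta) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

def log_prior(theta):                             # N(0, 10^2) prior
    return -0.5 * (theta / 10.0) ** 2 - np.log(10.0 * np.sqrt(2 * np.pi))

def mean_loglike_at(beta, n_steps=4000):
    """E_beta[log L] estimated with a random-walk Metropolis chain."""
    theta, trace = 0.0, []
    for _ in range(n_steps):
        prop = theta + rng.normal(0, 0.3)
        log_ratio = (beta * (log_like(prop) - log_like(theta))
                     + log_prior(prop) - log_prior(theta))
        if np.log(rng.uniform()) < log_ratio:
            theta = prop
        trace.append(log_like(theta))
    return np.mean(trace[n_steps // 2:])          # discard burn-in

betas = np.linspace(0, 1, 11) ** 3                # denser grid near beta = 0
means = np.array([mean_loglike_at(b) for b in betas])
log_evidence = np.sum(np.diff(betas) * (means[1:] + means[:-1]) / 2)  # trapezoid rule
print(log_evidence)
```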

  15. Design of Biomass Combined Heat and Power (CHP) Systems based on Economic Risk using Minimax Regret Criterion

    Directory of Open Access Journals (Sweden)

    Ling Wen Choong

    2018-01-01

    Full Text Available It is a great challenge to identify optimum technologies for CHP systems that utilise biomass and convert it into heat and power. In this respect, industry decision makers lack confidence to invest in biomass CHP due to the economic risk posed by varying energy demand. This research work presents a systematic linear programming framework to design a biomass CHP system based on the potential loss of profit due to varying energy demand. The Minimax Regret Criterion (MRC) approach was used to assess the maximum regret between selections of a given biomass CHP design across energy demand scenarios. Based on this, the model determines the optimal biomass CHP design with minimum regret in economic opportunity. As Feed-in Tariff (FiT) rates affect the revenue of the CHP plant, a sensitivity analysis of FiT rates on the selection of the biomass CHP design was then performed. In addition, an analysis of the trend of the optimum design selected by the model was conducted. To demonstrate the proposed framework, a case study was solved using the proposed approach. The case study focused on designing a biomass CHP system for a palm oil mill (POM) due to the large energy potential of oil palm biomass in Malaysia.
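
    Minimax regret itself reduces to a few array operations once a design-by-scenario profit table is available. A bare-bones illustration with invented numbers:

```python
import numpy as np

# profit[i, j]: annual profit of CHP design i under demand scenario j (hypothetical)
profit = np.array([[4.0, 2.5, 1.0],
                   [3.0, 3.0, 2.0],
                   [1.5, 2.0, 2.8]])

best_per_scenario = profit.max(axis=0)   # best achievable in each scenario
regret = best_per_scenario - profit      # opportunity loss of each design
max_regret = regret.max(axis=1)          # worst-case regret per design
print(max_regret)                        # [1.8, 1.0, 2.5]
print(max_regret.argmin())               # design 1 minimises the maximum regret
```

    In the full framework the profit entries would themselves come from solving the underlying linear program for each design and demand scenario.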

  16. A model selection support system for numerical simulations of nuclear thermal-hydraulics

    International Nuclear Information System (INIS)

    Gofuku, Akio; Shimizu, Kenji; Sugano, Keiji; Yoshikawa, Hidekazu; Wakabayashi, Jiro

    1990-01-01

    In order to efficiently execute a dynamic simulation of a large-scale engineering system such as a nuclear power plant, it is necessary to develop an intelligent simulation support system for all phases of the simulation. This study is concerned with intelligent support for the program development phase and develops an adequate model-selection support method, applying AI (Artificial Intelligence) techniques so that a simulation is executed consistently with its purpose and conditions. A prototype expert system to support model selection for numerical simulations of nuclear thermal-hydraulics in the case of a cold-leg small-break loss-of-coolant accident in a PWR plant is under development on a personal computer. The steps to support the selection of both the fluid model and the constitutive equations for the drift flux model have been developed. Several cases of model selection were carried out and reasonable model selection results were obtained. (author)

  17. Simulation of selected genealogies.

    Science.gov (United States)

    Slade, P F

    2000-02-01

    Algorithms for generating genealogies with selection, conditional on the sample configuration of n genes in one-locus, two-allele haploid and diploid models, are presented. Enhanced integro-recursions using the ancestral selection graph, introduced by S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237) as the non-neutral analogue of the coalescent, enable accessible simulation of the embedded genealogy. A Monte Carlo simulation scheme based on that of R. C. Griffiths and S. Tavaré (1996, Math. Comput. Modelling 23, 141-158) is adopted to consider the estimation of ancestral times under selection. Simulations show that selection alters the expected depth of the conditional ancestral trees, depending on the mutation-selection balance. As a consequence, branch lengths are shown to be an ineffective criterion for detecting the presence of selection. Several examples are given which quantify the effects of selection on the conditional expected time to the most recent common ancestor. Copyright 2000 Academic Press.

  18. Optimal selection of Orbital Replacement Unit on-orbit spares - A Space Station system availability model

    Science.gov (United States)

    Schwaab, Douglas G.

    1991-01-01

    A mathematical programming model is presented to optimize the selection of Orbital Replacement Unit on-orbit spares for the Space Station. The model maximizes system availability under the constraints of logistics resupply cargo weight and volume allocations.
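
    The record gives no formulation details, but the flavour of the optimization can be sketched as a small constrained search: pick spare counts that maximise availability under weight and volume budgets. The series-system availability formula, the exhaustive search and all numbers below are simplifying assumptions, not the paper's model.

```python
import itertools

# Hypothetical ORUs: (failure probability over a resupply cycle, weight, volume)
orus = [(0.10, 120, 0.8), (0.05, 60, 0.5), (0.20, 200, 1.2)]
MAX_WEIGHT, MAX_VOLUME = 400, 2.5

def availability(spares):
    """Series system: an ORU causes downtime only if it fails more times
    than we stock spares; P(shortage) ~ p**(s + 1) for failure prob. p."""
    a = 1.0
    for (p, _, _), s in zip(orus, spares):
        a *= 1.0 - p ** (s + 1)
    return a

feasible = (
    s for s in itertools.product(range(4), repeat=len(orus))
    if sum(n * w for (_, w, _), n in zip(orus, s)) <= MAX_WEIGHT
    and sum(n * v for (_, _, v), n in zip(orus, s)) <= MAX_VOLUME
)
best = max(feasible, key=availability)
print(best, round(availability(best), 4))  # chosen spare mix and its availability
```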

  19. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
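
    The record breaks off mid-sentence, but a common baseline in this literature is to rank predictors by impurity-based importance and refit on the top-ranked subset. A scikit-learn sketch with invented data (not the assessment protocol of the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 10))                     # 10 candidate predictors
y = 2 * X[:, 0] - X[:, 3] + rng.normal(size=300)   # only columns 0 and 3 matter

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
top = ranking[:2]                                  # keep the top-ranked predictors
print(top)                                         # columns 0 and 3 should dominate

rf_small = RandomForestRegressor(n_estimators=500, random_state=0).fit(X[:, top], y)
print(rf_small.score(X[:, top], y))                # in-sample R^2 of the reduced model
```

    Stability of such selections, the concern of the paper, is typically probed by repeating this procedure over bootstrap resamples and checking how often the same variables are retained.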

  20. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using

  1. Earing Prediction in Cup Drawing using the BBC2008 Yield Criterion

    Science.gov (United States)

    Vrh, Marko; Halilovič, Miroslav; Starman, Bojan; Štok, Boris; Comsa, Dan-Sorin; Banabic, Dorel

    2011-08-01

    The paper deals with the constitutive modelling of highly anisotropic sheet metals. It presents FEM-based earing predictions in cup drawing simulations of highly anisotropic aluminium alloys where more than four ears occur. For that purpose the BBC2008 yield criterion, a plane-stress yield criterion formulated in the form of a finite series, is used. The criterion thus defined can be expanded to retain more or fewer terms, depending on the amount of available experimental data. In order to use the model in sheet metal forming simulations, we have implemented it in the general-purpose finite element code ABAQUS/Explicit via a VUMAT subroutine, considering alternatively eight or sixteen parameters (8p and 16p versions). For the integration of the constitutive model the explicit NICE (Next Increment Corrects Error) integration scheme has been used. Owing to the effectiveness of the scheme, the CPU time consumed by a simulation is comparable to that of built-in constitutive models. Two aluminium alloys, namely AA5042-H2 and AA2090-T3, have been used for validation of the model. For both alloys the parameters of the BBC2008 model have been identified with a purpose-developed numerical procedure based on minimization of a cost function. For both materials, the predictions of the BBC2008 model prove to be in very good agreement with the experimental results. The flexibility and the accuracy of the model, together with the identification and integration procedures, guarantee the applicability of the BBC2008 yield criterion in industrial applications.

  2. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    Science.gov (United States)

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability [Formula: see text] and no effect with probability (1 - [Formula: see text]). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP
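
    The computational heart of the method is that, once the genotype matrix is factorized as X = U D Vᵀ, shrinkage estimates of marker effects need no iteration. The sketch below shows the ridge (SNP-BLUP-like) special case with invented data; the posterior-probability reweighting that makes the approach approximate BayesC is omitted.

```python
import numpy as np

def svd_marker_effects(X, y, lam):
    """Ridge solution via SVD, with no iteration and no MCMC:
    beta = V diag(d / (d^2 + lam)) U^T y."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    shrink = d / (d ** 2 + lam)
    return Vt.T @ (shrink * (U.T @ y))

rng = np.random.default_rng(4)
X = rng.binomial(2, 0.3, size=(200, 1000)).astype(float)  # 1000 markers, 200 animals
X -= X.mean(axis=0)                                       # centre genotype codes
true = np.zeros(1000)
true[[10, 500]] = 0.5                                     # two simulated QTL
y = X @ true + rng.normal(size=200)

beta = svd_marker_effects(X, y, lam=50.0)
print(np.argsort(np.abs(beta))[-5:])  # largest |effects|; the QTL should rank highly
```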

  3. Varying Coefficient Panel Data Model in the Presence of Endogenous Selectivity and Fixed Effects

    OpenAIRE

    Malikov, Emir; Kumbhakar, Subal C.; Sun, Yiguo

    2013-01-01

    This paper considers a flexible panel data sample selection model in which (i) the outcome equation is permitted to take a semiparametric, varying coefficient form to capture potential parameter heterogeneity in the relationship of interest, (ii) both the outcome and (parametric) selection equations contain unobserved fixed effects and (iii) selection is generalized to a polychotomous case. We propose a two-stage estimator. Given consistent parameter estimates from the selection equation obta...

  4. Required experimental accuracy to select between supersymmetrical models

    Science.gov (United States)

    Grellscheid, David

    2004-03-01

    We will present a method to decide a priori whether various supersymmetrical scenarios can be distinguished based on sparticle mass data alone. For each model, a scan over all free SUSY breaking parameters reveals the extent of that model's physically allowed region of sparticle-mass-space. Based on the geometrical configuration of these regions in mass-space, it is possible to obtain an estimate of the required accuracy of future sparticle mass measurements to distinguish between the models. We will illustrate this algorithm with an example. This talk is based on work done in collaboration with B C Allanach (LAPTH, Annecy) and F Quevedo (DAMTP, Cambridge).

  5. Thermodynamic criteria for heat exchanger network design

    Energy Technology Data Exchange (ETDEWEB)

    Guiglion, C.; Farhat, S.; Pibouleau, L.; Domenech, S. (Ecole Nationale Superieure d' Ingenieurs de Genie Chimique, 31 - Toulouse (France))

    1994-03-01

    The problem under consideration consists of selecting a heat exchanger network able to carry out a given demand for heating and cooling, in steady-state operation at constant pressure, using cold and hot utilities if necessary, and under the constraint ΔT ≥ e imposed to limit investment costs. The exchanged energy and the produced entropy are compared in terms of operating costs. Depending on the demand to be satisfied and the constraints on utility consumption, it is shown that the goal of minimizing the produced entropy agrees more or less with the goal of minimizing the exchanged energy. In the last part, the case where the cost of utility use is assumed to be proportional to the flow rate, with a proportionality constant depending only on the inlet thermodynamic state, is studied thoroughly. Under this assumption, the minimization of operating costs is compatible with the minimization of exchanged energy, and can be obtained by maximizing the difficulty of the part of the demand met without using utilities. This point is based on the notion of one demand being easier than another, which makes explicit the rather vague idea that a demand is easier the less heating it requires at high temperatures and the less cooling at low temperatures. (author). 5 refs., 1 fig.

  6. Development of Solar Drying Model for Selected Cambodian Fish Species

    OpenAIRE

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the promising techniques for fish processing in Cambodia. Solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. The mean solar drying temperature and the drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h...

  7. Identification of landscape features influencing gene flow: How useful are habitat selection models?

    Science.gov (United States)

    Gretchen H. Roffler; Michael K. Schwartz; Kristine Pilgrim; Sandra L. Talbot; George K. Sage; Layne G. Adams; Gordon Luikart

    2016-01-01

    Understanding how dispersal patterns are influenced by landscape heterogeneity is critical for modeling species connectivity. Resource selection function (RSF) models are increasingly used in landscape genetics approaches. However, because the ecological factors that drive habitat selection may be different from those influencing dispersal and gene flow, it is...

  8. The AP diameter of the pelvis: a new criterion for continence in the exstrophy complex?

    International Nuclear Information System (INIS)

    Ait-Ameur, A.; Kalifa, G.; Adamsbaum, C.; Wakim, A.; Dubousset, J.

    2001-01-01

    Reconstructive surgery of bladder exstrophy remains a challenge. Using CT of the pelvis, we suggest a new pre- and post-operative investigative procedure that defines the AP diameter (APD) as a predictive criterion for continence in this anomaly. Patients and methods: Three axial CT slices were selected in nine children with exstrophy who had undergone neonatal reconstructive surgery. The three levels selected were the first sacral plate, the mid-acetabular plane and the superior pubic spine. We used the combined slices to measure: (1) the APD, the distance between the first sacral vertebra and the pubic symphysis; (2) the pubic diastasis (PD); and (3) three angles defined in the transverse plane of the first sacral vertebra: the iliac wing angle, the sacropubic angle and the acetabular version. In exstrophy, the angles demonstrate opening of the iliac wings and the pubic ramus, and acetabular retroversion compared to controls. Comparisons between controls, continent and incontinent patients reveal that in continent patients the APD increases with growth and seems to be a predictive criterion for continence, independent of diastasis of the pubic symphysis. We believe that CT of the pelvis with measurement of the APD should be performed in all neonates with bladder exstrophy before reconstructive surgery and for better understanding of the malformation. The APD seems to be predictive and may be a major criterion for continence, independent of PD. (orig.)

  9. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  10. A Neuronal Network Model for Pitch Selectivity and Representation

    OpenAIRE

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among c...

  11. Fuzzy Multicriteria Model for Selection of Vibration Technology

    Directory of Open Access Journals (Sweden)

    María Carmen Carnero

    2016-01-01

    Full Text Available The benefits of applying a vibration analysis program have been well known for decades. A large number of contributions have been produced discussing new diagnostic, signal treatment, technical parameter analysis, and prognosis techniques. However, to obtain the expected benefits from a vibration analysis program, it is necessary to choose the instrumentation which guarantees the best results. Despite its importance, there are no models in the literature to assist in this decision. This research describes an objective model using the Fuzzy Analytic Hierarchy Process (FAHP) to choose the most suitable technology among portable vibration analysers. The aim is to create an easy-to-use model for processing, manufacturing, services, and research organizations, to guarantee adequate decision-making in the choice of vibration analysis technology. The model described recognises that judgements are often based on ambiguous, imprecise, or inadequate information that cannot provide precise values. The model incorporates judgements from several decision-makers who are experts in the field of vibration analysis, maintenance, and electronic devices. The model has been applied to a health care organization.
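
    In classical AHP, on which the fuzzy variant builds, criterion weights come from the principal eigenvector of a pairwise-comparison matrix. The crisp sketch below (with an invented comparison matrix over four hypothetical criteria) shows that core step; the paper's fuzzification of the judgements is omitted.

```python
import numpy as np

# Hypothetical Saaty-scale comparisons of four criteria, e.g. accuracy,
# cost, usability, vendor support: A[i, j] = importance of i relative to j.
A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)       # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # normalised criterion weights
print(np.round(w, 3))

ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print(round(ci / 0.90, 3))        # consistency ratio (RI = 0.90 for n = 4); < 0.1 is acceptable
```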

  12. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    Science.gov (United States)

    Harwati

    2017-06-01

    Supplier selection is a decision with many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes supplier selection models difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytic Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria suffice for an easy and simple supplier selection: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real-case simulation shows that the simple model provides the same decision as a more complex model.
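
    With the weights fixed, applying the simple model is just a weighted sum over normalised supplier scores. A minimal sketch using the reported weights and invented scores:

```python
# Criterion weights reported in the record:
weights = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

# Hypothetical normalised scores (0-1, higher is better) for three suppliers:
suppliers = {
    "S1": {"price": 0.9, "shipment": 0.6, "quality": 0.7, "service": 0.5},
    "S2": {"price": 0.7, "shipment": 0.9, "quality": 0.6, "service": 0.8},
    "S3": {"price": 0.5, "shipment": 0.7, "quality": 0.9, "service": 0.9},
}

rank = sorted(suppliers,
              key=lambda s: sum(weights[c] * suppliers[s][c] for c in weights),
              reverse=True)
print(rank)  # best supplier first under the weighted-sum model
```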

  13. Corrections to the Eckhaus' stability criterion for one-dimensional stationary structures

    Science.gov (United States)

    Malomed, B. A.; Staroselsky, I. E.; Konstantinov, A. B.

    1989-01-01

    Two amendments to the well-known Eckhaus stability criterion for small-amplitude non-linear structures, generated by weak instability of a spatially uniform state of a non-equilibrium one-dimensional system against small perturbations with finite wavelengths, are obtained. Firstly, we evaluate small corrections to the main Eckhaus term which, in contrast to that term, do not have a universal form. Comparison of these non-universal corrections with experimental or numerical results makes it possible to select a more relevant form of the effective nonlinear evolution equation. In particular, comparison with such results for convective rolls and Taylor vortices gives arguments in favor of the Swift-Hohenberg equation. Secondly, we derive an analog of the Eckhaus criterion for systems that are degenerate in the sense that, in an expansion of their nonlinear parts in powers of the dynamical variables, the second- and third-degree terms are absent.
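
    For reference, the classical criterion being amended can be stated for the real Ginzburg-Landau equation, the standard amplitude equation near the onset of such instabilities. This is the textbook result, independent of the paper's corrections:

```latex
% Real Ginzburg--Landau equation for the amplitude A(x,t), control parameter \epsilon > 0:
%   \partial_t A = \epsilon A + \partial_x^2 A - |A|^2 A .
% Plane waves A_q(x) = \sqrt{\epsilon - q^2}\, e^{iqx} exist for q^2 < \epsilon;
% the Eckhaus criterion confines the stable ones to the narrower band
\[
  q^2 < \frac{\epsilon}{3},
\]
% i.e. only the inner part of the existence band is modulationally stable.
```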

  14. Calibration model maintenance in melamine resin production: Integrating drift detection, smart sample selection and model adaptation.

    Science.gov (United States)

    Nikzad-Langerodi, Ramin; Lughofer, Edwin; Cernuda, Carlos; Reischer, Thomas; Kantner, Wolfgang; Pawliczek, Marcin; Brandstetter, Markus

    2018-07-12

    selection of samples by active learning (AL) for subsequent model adaptation is advantageous compared to passive (random) selection in cases where a drift leads to persistent prediction bias, allowing more rapid adaptation at lower reference-measurement rates. Fully unsupervised adaptation using FLEXFIS-PLS could improve predictive accuracy significantly for light drifts, but was not able to fully compensate for prediction bias in the case of a significant lack of fit w.r.t. the latent variable space. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Selecting a climate model subset to optimise key ensemble properties

    Science.gov (United States)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
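
    As a toy version of the subset search, the sketch below scans all k-member subsets of an invented ensemble for the one whose mean best matches an observational reference; the real cost functions also account for spread and model interdependence, and exhaustive search gives way to smarter optimisation for large ensembles.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
obs = rng.normal(size=200)                        # observational reference field
models = (obs
          + rng.normal(0.3, 0.6, size=(12, 1))    # per-model bias
          + rng.normal(0.0, 0.3, size=(12, 200))) # internal variability

def subset_rmse(idx):
    return np.sqrt(np.mean((models[list(idx)].mean(axis=0) - obs) ** 2))

k = 4
best = min(itertools.combinations(range(len(models)), k), key=subset_rmse)
print(best, round(subset_rmse(best), 3))           # chosen subset and its RMSE
print(round(subset_rmse(range(len(models))), 3))   # full-ensemble mean, for comparison
```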

  17. APPLICATION OF THE CERNE MODEL FOR THE ESTABLISHMENT OF INCUBATION SELECTION CRITERIA IN TECHNOLOGY-BASED BUSINESSES: A STUDY IN TECHNOLOGY-BASED INCUBATORS IN BRAZIL

    Directory of Open Access Journals (Sweden)

    Clobert Jefferson Passoni

    2017-03-01

    Full Text Available Business incubators are a great source of encouragement for innovative projects, enabling the development of new technologies and providing infrastructure, advice and support, which are key elements for the success of new businesses. There are 154 technology-based firm (TBF) incubators in Brazil, each with its own mechanism for selecting companies for incubation. Because of the different forms of incubator management, the business model CERNE (Reference Center for Support for New Projects) was created by Anprotec and Sebrae in order to standardize procedures and increase the chances of successful incubation. The objective of this study is to propose selection criteria for incubation, considering CERNE's five dimensions, to support decision-making in the assessment of candidate companies at a TBF incubator. The research was conducted from the public notices of 20 TBF incubators, in which 38 selection criteria were identified and classified. Managers of TBF incubators rated the importance of 26 of these criteria via online questionnaires. As a result, favorable ratings were obtained for 25 of them; only one criterion differed from the others, receiving an unfavorable rating.

  18. Selected developments and applications of Leontief models in industrial ecology

    International Nuclear Information System (INIS)

    Stroemman, Anders Hammer

    2005-01-01

    Thesis Outline: This thesis investigates environmental repercussions for processes at three spatial scales: a single process plant, a regional value chain and the global economy. The first paper investigates environmental repercussions caused by a single process plant using an open Leontief model with combined physical and monetary units, in what is commonly referred to as a hybrid life cycle model. Physical capital requirements are treated like any other good. Resources and environmental stressors, thousands in total, are accounted for and assessed by aggregation using standard life cycle impact assessment methods. The second paper presents a methodology for establishing and combining input-output matrices and life-cycle inventories in a hybrid life cycle inventory. Information contained within different requirements matrices is combined, and the issues of double counting that arise are addressed, with methods for eliminating them developed and presented. The third paper is an extension of the first paper. Here the system analyzed is extended from a single plant and component in the production network to a series of nodes constituting a value chain. The hybrid framework proposed in paper two is applied to analyze the use of natural gas, methanol and hydrogen as transportation fuels. The fourth paper presents the development of a World Trade Model with Bilateral Trade, an extension of the World Trade Model (Duchin, 2005). The model is based on comparative advantage and is formulated as a linear program. It endogenously determines the regional output of sectors and the bilateral trade flows between regions. The model may be considered a Leontief substitution model in which substitution of production is allowed between regions. The primal objective of the model requires the minimization of global factor costs. The fifth paper demonstrates how the World Trade Model with Bilateral Trade can be applied to address questions relevant to industrial ecology. The model is
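
    The World Trade Model's primal is a linear program: minimize global factor costs subject to meeting demand and respecting regional factor endowments. A two-region, one-good toy with invented coefficients, intended only to show the shape of such a model:

```python
import numpy as np
from scipy.optimize import linprog

# Two regions producing one good; decision x = [output_region1, output_region2].
factor_cost = np.array([3.0, 2.0])  # factor cost per unit of output (region 2 cheaper)
capacity = np.array([60.0, 50.0])   # factor endowments cap regional output
world_demand = 90.0

res = linprog(
    c=factor_cost,                           # minimise global factor costs
    A_ub=np.eye(2), b_ub=capacity,           # regional capacity constraints
    A_eq=[[1.0, 1.0]], b_eq=[world_demand],  # world output meets world demand
    bounds=[(0, None), (0, None)],
    method="highs",
)
print(res.x)    # the cheaper region produces to capacity; the other covers the rest
print(res.fun)  # minimised global factor cost
```

    Comparative advantage shows up in the solution pattern: production shifts to the low-cost region until its endowment binds.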

  19. Concrete beams reinforced with Dendrocalamus giganteus bamboo. II: modeling and design criteria [Vigas de concreto reforçadas com bambu Dendrocalamus giganteus. II: modelagem e critérios de dimensionamento]

    Directory of Open Access Journals (Sweden)

    Humberto C. Lima Júnior

    2005-12-01

    Full Text Available This paper corresponds to the second part of a publication concerning the structural behaviour of concrete beams reinforced with bamboo. The modelling of these structures is presented and discussed, followed by suggestions and hypotheses for the design of such structural elements. For this purpose, a computational model based on the Finite Element Method was used, into which subroutines with the constitutive laws of bamboo and concrete were incorporated. The model was calibrated with experimental data from eight concrete beams reinforced with bamboo splints, and the results obtained with the computational model showed close agreement with the experimental ones. Finally, design criteria are suggested and applied in a practical example.

  20. Model validity and frequency band selection in operational modal analysis

    Science.gov (United States)

    Au, Siu-Kui

    2016-12-01

    Experimental modal analysis aims at identifying the modal properties (e.g., natural frequencies, damping ratios, mode shapes) of a structure using vibration measurements. Two basic questions are encountered when operating in the frequency domain: Is there a mode near a particular frequency? If so, how much spectral data near the frequency can be included for modal identification without incurring significant modeling error? For data with high signal-to-noise (s/n) ratios these questions can be addressed using empirical tools such as singular value spectrum. Otherwise they are generally open and can be challenging, e.g., for modes with low s/n ratios or close modes. In this work these questions are addressed using a Bayesian approach. The focus is on operational modal analysis, i.e., with 'output-only' ambient data, where identification uncertainty and modeling error can be significant and their control is most demanding. The approach leads to 'evidence ratios' quantifying the relative plausibility of competing sets of modeling assumptions. The latter involves modeling the 'what-if-not' situation, which is non-trivial but is resolved by systematic consideration of alternative models and using maximum entropy principle. Synthetic and field data are considered to investigate the behavior of evidence ratios and how they should be interpreted in practical applications.