WorldWideScience

Sample records for model selection criterion

  1. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the estimation's computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on determining the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
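As a rough illustration of a covariance-based selection rule of this kind (not the authors' exact criterion), the sketch below uses a scalar Kalman update to pick, among hypothetical candidate sensors, the measurement whose update most reduces the posterior variance:

```python
# Illustrative covariance-based measurement selection (scalar Kalman
# update). Sensor names and noise values are hypothetical.

def posterior_variance(p_prior, r_meas):
    # Scalar Kalman update: K = p / (p + r), p_post = (1 - K) * p_prior
    k = p_prior / (p_prior + r_meas)
    return (1.0 - k) * p_prior

p_prior = 4.0  # prior state variance
candidate_noise = {"sensor_a": 1.0, "sensor_b": 0.25, "sensor_c": 9.0}

# Select the measurement yielding the smallest posterior variance.
best = min(candidate_noise,
           key=lambda s: posterior_variance(p_prior, candidate_noise[s]))
print(best)  # the lowest-noise candidate reduces variance the most
```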

  2. Financial performance as a decision criterion of credit scoring models selection [doi: 10.21529/RECADM.2017004]

    Directory of Open Access Journals (Sweden)

    Rodrigo Alves Silva

    2017-09-01

    Full Text Available This paper aims to show the importance of using financial metrics in the decision-making involved in credit scoring model selection. To achieve this, we consider an automatic approval system approach and carry out a performance analysis of financial metrics on the theoretical portfolios generated by seven credit scoring models based on the main statistical learning techniques. The models were estimated on the German Credit dataset and the results were analyzed based on four metrics: total accuracy, error cost, risk-adjusted return on capital and the Sharpe index. The results show that total accuracy, widely used as a criterion for selecting credit scoring models, is unable to select the most profitable model for the company, indicating the need to incorporate financial metrics into the credit scoring model selection process. Keywords: Credit risk; Model selection; Statistical learning.
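The paper's central point, that total accuracy and financial performance can rank models differently, can be sketched with hypothetical confusion-matrix counts and an assumed cost ratio (neither taken from the paper):

```python
# Toy illustration (hypothetical numbers): a scorecard that wins on
# total accuracy can still lose on a cost-weighted financial criterion
# when false negatives (bad loans approved) are much costlier than
# false positives (good loans rejected).

def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def error_cost(fp, fn, cost_fp=1.0, cost_fn=5.0):
    # Assumed cost ratio: a defaulted loan costs 5x a rejected good one.
    return fp * cost_fp + fn * cost_fn

# Confusion counts (tp, fp, fn, tn) for two hypothetical scorecards.
model_a = (850, 60, 40, 50)   # higher accuracy, more bad loans approved
model_b = (820, 90, 20, 70)   # lower accuracy, fewer bad loans approved

acc_a, acc_b = accuracy(*model_a), accuracy(*model_b)
cost_a = error_cost(model_a[1], model_a[2])
cost_b = error_cost(model_b[1], model_b[2])

print(f"A: accuracy={acc_a:.3f}, cost={cost_a:.0f}")
print(f"B: accuracy={acc_b:.3f}, cost={cost_b:.0f}")
# A wins on accuracy, but B has the lower expected error cost.
```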

  3. Selecting Items for Criterion-Referenced Tests.

    Science.gov (United States)

    Mellenbergh, Gideon J.; van der Linden, Wim J.

    1982-01-01

    Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)

  4. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  5. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang; Wang, Suojin; Huang, Jianhua Z.

    2013-01-01

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions.

  6. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape (GAMLSS). Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (the Akaike information criterion, the Bayesian information criterion, the generalized Akaike information criterion GAIC(3), and cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with the generalized Akaike information criterion was the most efficient model selection procedure (i.e., it required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.
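Apart from cross-validation, the fit criteria compared in the study are instances of a generalized AIC that differ only in the penalty charged per parameter; a minimal sketch, with made-up likelihood values:

```python
import math

def gaic(loglik, k, n, penalty):
    # Generalized Akaike information criterion: -2*loglik + penalty*k.
    # penalty=2 gives AIC, penalty=log(n) gives BIC, penalty=3 gives
    # GAIC(3) as used in GAMLSS-style norming.
    return -2.0 * loglik + penalty * k

# Hypothetical fitted model: log-likelihood -250 with 6 parameters on
# n = 200 observations.
loglik, k, n = -250.0, 6, 200
aic = gaic(loglik, k, n, 2.0)
gaic3 = gaic(loglik, k, n, 3.0)
bic = gaic(loglik, k, n, math.log(n))
print(aic, gaic3, bic)  # penalties increase: AIC < GAIC(3) < BIC here
```

For n > e³ ≈ 20, log(n) exceeds 3, so BIC penalizes extra parameters more heavily than GAIC(3), which in turn penalizes more than AIC.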

  7. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    Science.gov (United States)

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
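As a minimal sketch of the global DIC computed from posterior draws, using a toy Gaussian mean model rather than the paper's spatial models: DIC = D̄ + pD, where D̄ is the posterior mean deviance and pD = D̄ − D(θ̄) is the effective number of parameters. (The paper's local DIC further partitions these sums over observations.)

```python
import math

def deviance(theta, data, sigma=1.0):
    # Deviance = -2 * Gaussian log-likelihood of `data` at mean `theta`.
    ll = sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
             - (x - theta) ** 2 / (2 * sigma ** 2) for x in data)
    return -2.0 * ll

def dic(posterior_draws, data):
    # Dbar: posterior mean deviance; pD = Dbar - D(posterior mean).
    dbar = sum(deviance(t, data) for t in posterior_draws) / len(posterior_draws)
    theta_bar = sum(posterior_draws) / len(posterior_draws)
    p_d = dbar - deviance(theta_bar, data)
    return dbar + p_d, p_d

data = [0.8, 1.1, 0.9, 1.3, 1.0]          # toy observations
draws = [0.9, 1.0, 1.05, 1.1, 0.95]       # pretend MCMC output
dic_value, p_d = dic(draws, data)
print(round(dic_value, 3), round(p_d, 3))
```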

  8. A new diagnostic accuracy measure and cut-point selection criterion.

    Science.gov (United States)

    Dong, Tuochuan; Attwood, Kristopher; Hutson, Alan; Liu, Song; Tian, Lili

    2017-12-01

    Most diagnostic accuracy measures and criteria for selecting optimal cut-points are only applicable to diseases with binary or three stages. Currently, there exist two diagnostic measures for diseases with general k stages: the hypervolume under the manifold and the generalized Youden index. While hypervolume under the manifold cannot be used for cut-points selection, generalized Youden index is only defined upon correct classification rates. This paper proposes a new measure named maximum absolute determinant for diseases with k stages ([Formula: see text]). This comprehensive new measure utilizes all the available classification information and serves as a cut-points selection criterion as well. Both the geometric and probabilistic interpretations for the new measure are examined. Power and simulation studies are carried out to investigate its performance as a measure of diagnostic accuracy as well as cut-points selection criterion. A real data set from Alzheimer's Disease Neuroimaging Initiative is analyzed using the proposed maximum absolute determinant.
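For the binary (k = 2) special case, the generalized Youden index reduces to J = sensitivity + specificity − 1, and cut-point selection maximizes J over candidate thresholds; a sketch with made-up scores:

```python
# Binary Youden-index cut-point selection (the paper generalizes the
# idea to diseases with k stages). Scores below are hypothetical.

def youden_cutpoint(scores_diseased, scores_healthy):
    best_j, best_c = -1.0, None
    candidates = sorted(set(scores_diseased) | set(scores_healthy))
    for c in candidates:
        sens = sum(s >= c for s in scores_diseased) / len(scores_diseased)
        spec = sum(s < c for s in scores_healthy) / len(scores_healthy)
        j = sens + spec - 1.0          # Youden index at this cut-point
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

diseased = [2.1, 2.6, 3.0, 3.4, 2.9]
healthy = [1.0, 1.4, 2.2, 1.8, 1.6]
cut, j = youden_cutpoint(diseased, healthy)
print(cut, j)
```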

  9. Integral criterion for selecting nonlinear crystals for frequency conversion

    International Nuclear Information System (INIS)

    Grechin, Sergei G

    2009-01-01

    An integral criterion, which takes into account all parameters determining the conversion efficiency, is offered for selecting nonlinear crystals for frequency conversion. The angular phase-matching width is shown to be related to the beam walk-off angle. (nonlinear optical phenomena)

  10. Wavelength selection in injection-driven Hele-Shaw flows: A maximum amplitude criterion

    Science.gov (United States)

    Dias, Eduardo; Miranda, Jose

    2013-11-01

    As in most interfacial flow problems, the standard theoretical procedure to establish wavelength selection in the viscous fingering instability is to maximize the linear growth rate. However, there are important discrepancies between previous theoretical predictions and existing experimental data. In this work we perform a linear stability analysis of the radial Hele-Shaw flow system that takes into account the combined action of viscous normal stresses and wetting effects. Most importantly, we introduce an alternative selection criterion for which the selected wavelength is determined by the maximum of the interfacial perturbation amplitude. The effectiveness of such a criterion is substantiated by the significantly improved agreement between theory and experiments. We thank CNPq (Brazilian Sponsor) for financial support.

  11. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  12. An Empirical Model Building Criterion Based on Prediction with Applications in Parametric Cost Estimation.

    Science.gov (United States)

    1980-08-01

    If the mean of the dependent variable is denoted by ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σ(yᵢ − ȳ)², i = 1, …, n (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2ᵖ possible models is labeled "best"; such criteria are discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of
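The R² criterion named in this abstract is straightforward to compute; a sketch with hypothetical data:

```python
# Coefficient of determination: R^2 = 1 - SSE/SSTO, where
# SSTO = sum (y_i - ybar)^2 and SSE = sum (y_i - yhat_i)^2.
# Data below are made up for illustration.

def r_squared(y, y_hat):
    ybar = sum(y) / len(y)
    ssto = sum((yi - ybar) ** 2 for yi in y)           # total SS
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # error SS
    return 1.0 - sse / ssto

y = [1.0, 2.0, 3.0, 4.0]
y_hat = [1.1, 1.9, 3.2, 3.8]   # hypothetical fitted values
print(round(r_squared(y, y_hat), 4))
```

Because adding regressors can only decrease SSE, R² alone favors the largest model, which is why penalized criteria are used for selection.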

  13. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier approaches have drawbacks: the optimization loop involves three phases and relies on empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select points located in a feasible region with high model uncertainty as well as points along the boundary of the constraint at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not located within extremely small regions, as in super-EGO. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
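Surrogate optimizers of the EGO family typically use an infill criterion such as expected improvement; the sketch below implements standard expected improvement (EI) as background, not the paper's united criterion, which additionally handles constraint boundaries:

```python
import math

# Expected improvement for minimization: given the surrogate's
# prediction (mean mu, standard deviation sigma) at a candidate point
# and the best objective value f_min seen so far,
#   EI = (f_min - mu) * Phi(z) + sigma * phi(z),  z = (f_min - mu)/sigma.

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_min):
    if sigma <= 0.0:
        return 0.0                      # no uncertainty, no improvement
    z = (f_min - mu) / sigma
    return (f_min - mu) * normal_cdf(z) + sigma * normal_pdf(z)

# A point with high predicted uncertainty can beat one with a better mean.
print(expected_improvement(mu=1.2, sigma=0.1, f_min=1.0),
      expected_improvement(mu=1.5, sigma=1.0, f_min=1.0))
```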

  14. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  15. Criterion learning in rule-based categorization: simulation of neural mechanism and new data.

    Science.gov (United States)

    Helie, Sebastien; Ell, Shawn W; Filoteo, J Vincent; Maddox, W Todd

    2015-04-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define 'long' and 'short'). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule-selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing the rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1); and (2) changing the relevant rule dimension and learning a new criterion is more difficult, but is also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL's implications for future research on rule learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. A graphical criterion for working fluid selection and thermodynamic system comparison in waste heat recovery

    International Nuclear Information System (INIS)

    Xi, Huan; Li, Ming-Jia; He, Ya-Ling; Tao, Wen-Quan

    2015-01-01

    In the present study, we propose a graphical criterion called the CE diagram, obtained from the Pareto optimal solutions of annual cash flow and exergy efficiency. This new graphical criterion enables both working fluid selection and thermodynamic system comparison for waste heat recovery. It is better than the existing criterion based on single-objective optimization because it is graphical and intuitive in the form of a diagram. The features of the CE diagram were illustrated by studying 5 examples with different heat-source temperatures (ranging between 100 °C and 260 °C), 26 chlorine-free working fluids and two typical ORC systems: the basic organic Rankine cycle (BORC) and the recuperative organic Rankine cycle (RORC). It is found that the proposed graphical criterion is feasible and can be applied to any closed-loop waste heat recovery thermodynamic system and working fluid. - Highlights: • A graphical method for ORC system comparison/working fluid selection was proposed. • A multi-objective genetic algorithm (MOGA) was applied for optimizing ORC systems. • Application cases were performed to demonstrate the usage of the proposed method.

  17. Criterion of Semi-Markov Dependent Risk Model

    Institute of Scientific and Technical Information of China (English)

    Xiao Yun MO; Xiang Qun YANG

    2014-01-01

    A rigorous definition of the semi-Markov dependent risk model is given. This model is a generalization of the Markov dependent risk model. A criterion and necessary conditions for the semi-Markov dependent risk model are obtained. The results clarify the relations among the elements of the semi-Markov dependent risk model and are also applicable to the Markov dependent risk model.

  18. 76 FR 15961 - Funding Priorities and Selection Criterion; Disability and Rehabilitation Research Projects and...

    Science.gov (United States)

    2011-03-22

    ... priorities and a selection criterion for the Disability and Rehabilitation Research Projects and Centers... outcomes for underserved populations; (4) identify research gaps; (5) identify mechanisms of integrating research and practice; and (6) disseminate findings. This notice proposes two priorities and a selection...

  19. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs which ensures a high mean performance of progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept as measure of the gain that can be obtained from a specific cross accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion compared to selection based on mean genomic estimated breeding values. An efficient and improved method to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model is suggested. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, like selection based on mean genomic estimated breeding values and optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviation compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
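The usefulness criterion discussed here is commonly written U = μ + i·σg, with predicted progeny mean μ, predicted progeny genetic standard deviation σg, and selection intensity i (about 2.665 when the best 1% of progenies are selected); a sketch with hypothetical crosses:

```python
# Usefulness criterion for ranking crosses: U = mu + i * sigma_g.
# The paper estimates sigma_g from MCMC samples of marker effects;
# the values below are hypothetical.

def usefulness(mu, sigma_g, intensity=2.665):
    # intensity ~ 2.665 corresponds to selecting the top 1% of progeny.
    return mu + intensity * sigma_g

# Two hypothetical crosses: B has the lower mean but higher variance.
cross_a = usefulness(mu=10.0, sigma_g=1.0)
cross_b = usefulness(mu=9.5, sigma_g=1.5)
print(cross_a, cross_b)  # B ranks higher despite its lower mean
```

This is exactly the effect described in the abstract: ranking by mean genomic estimated breeding values alone would pick cross A, while the usefulness criterion rewards the larger progeny variance of cross B.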

  20. Accuracy of a selection criterion for glass forming ability in the Ni–Nb–Zr system

    International Nuclear Information System (INIS)

    Déo, L.P.; Oliveira, M.F. de

    2014-01-01

    Highlights: • We applied a selection criterion in the Ni–Nb–Zr system to find alloys with high GFA. • We used the thermal parameter γ m to evaluate the GFA of alloys. • The correlation between the γ m parameter and R c in the studied system is poor. • The effect of oxygen impurity dramatically reduced the GFA of the alloys. • Unknown intermetallic compounds reduced the accuracy of the criterion. - Abstract: Several theories have been developed and applied to metallic systems in order to find the stoichiometries with the highest glass forming ability; however, there is no universal theory to predict the glass forming ability of metallic systems. Recently, a selection criterion was applied in the Zr–Ni–Cu system and some correlation was found between experimental and theoretical data. This criterion correlates the critical cooling rate for glass formation with the topological instability of stable crystalline structures, the average work function difference, and the average electron density difference among the constituent elements of the alloy. In the present work, this criterion was applied to the Ni–Nb–Zr system. The influence of factors not considered in the calculation, such as unknown intermetallic compounds and oxygen contamination, on the accuracy of the criterion was investigated. Bulk amorphous specimens were produced by injection casting. The amorphous nature was analyzed by X-ray diffraction and differential scanning calorimetry; oxygen contamination was quantified by the inert gas fusion method.

  1. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    Science.gov (United States)

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process modeling are often deteriorated by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, and thus offers potential application merit for GSH fermentation process modeling.

  2. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    Science.gov (United States)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.

  3. Drinking Water Quality Criterion - Based site Selection of Aquifer Storage and Recovery Scheme in Chou-Shui River Alluvial Fan

    Science.gov (United States)

    Huang, H. E.; Liang, C. P.; Jang, C. S.; Chen, J. S.

    2015-12-01

    Land subsidence due to groundwater exploitation is an urgent environmental problem in the Choushui river alluvial fan in Taiwan. Aquifer storage and recovery (ASR), where excess surface water is injected into subsurface aquifers for later recovery, is one promising strategy for managing surplus water and may overcome water shortages. The performance of an ASR scheme is generally evaluated in terms of recovery efficiency, which is defined as the percentage of water injected into a system at an ASR site that fulfills the targeted water quality criterion. Site selection for an ASR scheme typically faces great challenges due to the spatial variability of groundwater quality and hydrogeological conditions. This study proposes a novel method for ASR site selection based on a drinking water quality criterion. Simplified groundwater flow and contaminant transport models are used to obtain spatial distributions of the recovery efficiency, taking into account groundwater quality, hydrogeological conditions and ASR operation. The results of this study may provide government administrators with a basis for establishing a reliable ASR scheme.

  4. Using the Predictability Criterion for Selecting Extended Verbs for Shona Dictionaries

    Directory of Open Access Journals (Sweden)

    Emmanuel Chabata

    2012-09-01

    Full Text Available

    The paper examines the "predictability criterion", a classificatory tool which is used in selecting affixed word forms for dictionary entries. It focuses on the criterion as it has been used by the African Languages Lexical (ALLEX) Project for selecting extended verbs to enter as headwords in the Project's first monolingual Shona dictionary, Duramazwi ReChiShona. The article also examines the status of Shona verbal extensions in terms of their semantic input to the verb stems they are attached to. The paper was originally motivated by two observations: (a) that predictability seems to be a matter of degree; and (b) that the predictability criterion tended to be used inconsistently in the selection of extended verbs and senses for Duramazwi ReChiShona. An analysis of 412 productively extended verbs that were entered as headwords in Duramazwi ReChiShona shows that verbal extensions can bring both predictable and unpredictable senses to the verb stems they are attached to. The paper demonstrates that for an effective use of the predictability criterion for selecting extended verbs for Shona dictionaries, there is a need for the lexicographer to have an in-depth understanding of the kinds of semantic movements that are caused when verb stems are extended. It shows the need to view verbal extensions in Shona as derivational morphemes, not inflectional morphemes as some earlier scholars have concluded.

     

     

    The use of the predictability criterion to select extended verbs for Shona dictionaries

    This article examines the "predictability criterion", a classification tool used to select affixed word forms as dictionary entries. It focuses on the criterion as it was used by the African Languages Lexical (ALLEX) Project for selecting extended verbs as lemmas in the Project's first monolingual Shona dictionary, Duramazwi ReChiShona. In

  5. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative in cases where the Akaike Information Criterion might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  6. Fruit Phenolic Profiling: A New Selection Criterion in Olive Breeding Programs.

    Science.gov (United States)

    Pérez, Ana G; León, Lorenzo; Sanz, Carlos; de la Rosa, Raúl

    2018-01-01

    Olive growing is mainly based on traditional varieties selected by the growers across the centuries. The few attempts so far reported to obtain new varieties by systematic breeding have been mainly focused on improving the olive adaptation to different growing systems, the productivity and the oil content. However, the improvement of oil quality has rarely been considered as selection criterion and only in the latter stages of the breeding programs. Due to their health promoting and organoleptic properties, phenolic compounds are one of the most important quality markers for Virgin olive oil (VOO) although they are not commonly used as quality traits in olive breeding programs. This is mainly due to the difficulties for evaluating oil phenolic composition in large number of samples and the limited knowledge on the genetic and environmental factors that may influence phenolic composition. In the present work, we propose a high throughput methodology to include the phenolic composition as a selection criterion in olive breeding programs. For that purpose, the phenolic profile has been determined in fruits and oils of several breeding selections and two varieties ("Picual" and "Arbequina") used as control. The effect of three different environments, typical for olive growing in Andalusia, Southern Spain, was also evaluated. A high genetic effect was observed on both fruit and oil phenolic profile. In particular, the breeding selection UCI2-68 showed an optimum phenolic profile, which sums up to a good agronomic performance previously reported. A high correlation was found between fruit and oil total phenolic content as well as some individual phenols from the two different matrices. The environmental effect on phenolic compounds was also significant in both fruit and oil, although the low genotype × environment interaction allowed similar ranking of genotypes on the different environments. In summary, the high genotypic variance and the simplified procedure of the

  7. Fruit Phenolic Profiling: A New Selection Criterion in Olive Breeding Programs

    Directory of Open Access Journals (Sweden)

    Ana G. Pérez

    2018-02-01

    Full Text Available Olive growing is mainly based on traditional varieties selected by growers over the centuries. The few attempts reported so far to obtain new varieties by systematic breeding have mainly focused on improving olive adaptation to different growing systems, productivity and oil content. However, the improvement of oil quality has rarely been considered as a selection criterion, and only in the later stages of breeding programs. Due to their health-promoting and organoleptic properties, phenolic compounds are among the most important quality markers for virgin olive oil (VOO), although they are not commonly used as quality traits in olive breeding programs. This is mainly due to the difficulty of evaluating oil phenolic composition in large numbers of samples and the limited knowledge of the genetic and environmental factors that may influence phenolic composition. In the present work, we propose a high-throughput methodology to include phenolic composition as a selection criterion in olive breeding programs. For that purpose, the phenolic profile has been determined in fruits and oils of several breeding selections and of two varieties (“Picual” and “Arbequina”) used as controls. The effect of three different environments, typical of olive growing in Andalusia, Southern Spain, was also evaluated. A strong genetic effect was observed on both the fruit and oil phenolic profiles. In particular, the breeding selection UCI2-68 showed an optimum phenolic profile, which adds to the good agronomic performance previously reported. A high correlation was found between fruit and oil total phenolic content, as well as between some individual phenols from the two matrices. The environmental effect on phenolic compounds was also significant in both fruit and oil, although the low genotype × environment interaction allowed a similar ranking of genotypes across the different environments. In summary, the high genotypic variance and the simplified procedure of the

  8. DESTRUCTION CRITERION IN MODEL OF NON-LINEAR ELASTIC PLASTIC MEDIUM

    Directory of Open Access Journals (Sweden)

    O. L. Shved

    2014-01-01

    Full Text Available The paper considers a destruction criterion in a specific phenomenological model of an elastic plastic medium which differs significantly from the known criteria. Under a vector interpretation of rank-2 symmetric tensors, the yield surface in the Cauchy stress space is formed by closed piecewise-concave surfaces of its deviator sections, with due account of experimental data. The section surface is determined by a normal vector, which is selected from two eigenvectors of the criterial “deviator” operator. Such a selection is not always possible when anisotropy grows. It is expected that destruction can only start when the process point in the stress space lies in the current deviator section of the yield surface. This occurs when a critical point appears in the section: an eigenvalue of the operator becomes multiple at that point, which determines the eigenvector corresponding to the normal vector. A unique and reasonable selection of the normal vector becomes impossible at the critical point, and the yield criterion loses its meaning there. When the destruction initiation is determined, a special case is possible due to the proposed conic form of the yield surface: the deviator section degenerates into a point at the apex of the yield surface. The criterion formulation at the apex states that there is no physically correct solution when the state equation is used with respect to elastic distortion measures with a fixed tensor of elastic rotation. Such use of the equation is always possible for the remaining points of the yield surface and is considered an obligatory condition for determining the deviator section. For isotropic material, a critical point is generally absent at any deviator section of the yield surface. A limiting value of the mean stress has been calculated for uniform tension.

  9. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0–1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
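
    The paper's closing illustration can be sketched concretely: a simple linear regression is simulated, AIC picks between an intercept-only model and the full model, and the reported fit is then a post-model-selection estimate. The sample size, noise level and candidate models below are illustrative assumptions, not the paper's actual simulation settings.

```python
import math, random

random.seed(1)

def ols_fit(xs, ys, with_slope):
    # Least-squares fit; returns (rss, (intercept, slope))
    n = len(ys)
    if not with_slope:
        mean = sum(ys) / n
        return sum((y - mean) ** 2 for y in ys), (mean, 0.0)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))
    return rss, (a, b)

def aic(rss, n, k):
    # Gaussian AIC up to an additive constant
    return n * math.log(rss / n) + 2 * k

# Simulate y = 1 + 0.5 x + noise, then let AIC choose the model
n = 50
xs = [i / n for i in range(n)]
ys = [1.0 + 0.5 * x + random.gauss(0.0, 0.2) for x in xs]

rss0, fit0 = ols_fit(xs, ys, with_slope=False)
rss1, fit1 = ols_fit(xs, ys, with_slope=True)
aic0, aic1 = aic(rss0, n, 1), aic(rss1, n, 2)
# The post-model-selection estimate is whatever the selected model reports
chosen = fit1 if aic1 < aic0 else fit0
print(aic0, aic1, chosen)
```

The point of the paper is that the distribution of `chosen` mixes the two candidate estimators in a data-dependent way, which is what makes its risk hard to analyze.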

  10. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently owing to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  11. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently owing to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  12. An Elasto-Plastic Damage Model for Rocks Based on a New Nonlinear Strength Criterion

    Science.gov (United States)

    Huang, Jingqi; Zhao, Mi; Du, Xiuli; Dai, Feng; Ma, Chao; Liu, Jingbo

    2018-05-01

    The strength and deformation characteristics of rocks are among the most important mechanical properties for rock engineering constructions. A new nonlinear strength criterion is developed for rocks by combining the Hoek-Brown (HB) criterion and the nonlinear unified strength criterion (NUSC). The proposed criterion accounts for the intermediate principal stress effect, unlike the HB criterion, and is nonlinear in the meridian plane, unlike NUSC. Only three parameters need to be determined by experiments, including the two HB parameters σ_c and m_i. The failure surface of the proposed criterion is continuous, smooth and convex. The proposed criterion fits true triaxial test data well and performs better than the three other existing criteria. Then, by introducing the Geological Strength Index, the proposed criterion is extended to rock masses and predicts the test data well. Finally, based on the proposed criterion, a triaxial elasto-plastic damage model for intact rock is developed. The plastic part is based on the effective stress, whose yield function is developed from the proposed criterion. For the damage part, the evolution function is assumed to have an exponential form. The constitutive model shows good agreement with the results of experimental tests.
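
    For reference, the classical HB criterion that the new criterion generalizes is easy to evaluate. The sketch below uses the standard intact-rock form with the two HB parameters σ_c and m_i; the numerical values are illustrative, not taken from the paper.

```python
def hoek_brown_sigma1(sigma3, sigma_c, m_i, s=1.0, a=0.5):
    # Generalized Hoek-Brown: major principal stress at failure
    #   sigma1 = sigma3 + sigma_c * (m_i * sigma3 / sigma_c + s) ** a
    # s = 1, a = 0.5 recovers the original criterion for intact rock.
    return sigma3 + sigma_c * (m_i * sigma3 / sigma_c + s) ** a

# Illustrative intact-rock values: sigma_c = 100 MPa, m_i = 25
uniaxial = hoek_brown_sigma1(0.0, 100.0, 25.0)   # unconfined: equals sigma_c
confined = hoek_brown_sigma1(10.0, 100.0, 25.0)  # strength rises with confinement
print(uniaxial, confined)
```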

  13. Multi-Criterion Two-Sided Matching of Public–Private Partnership Infrastructure Projects: Criteria and Methods

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-04-01

    Full Text Available Two kinds of evaluative criteria are associated with Public–Private Partnership (PPP) infrastructure projects, i.e., private evaluative criteria and public evaluative criteria. These evaluative criteria are inversely related: the higher the public benefits, the lower the private surplus. To balance evaluative criteria in the Two-Sided Matching (TSM) decision, this paper develops a quantitative matching decision model to select an optimal matching scheme for PPP infrastructure projects based on the Hesitant Fuzzy Set (HFS) under unknown evaluative criterion weights. In the model, HFS is introduced to describe the values of the evaluative criteria, and the multi-criterion information given by groups is fully considered. The optimization model is built and solved by maximizing the whole deviation of each criterion, so that the evaluative criterion weights are determined objectively. Then, the match-degree of the two sides is calculated and a multi-objective optimization model is introduced to select an optimal matching scheme via a min-max approach. The results provide new insights into, and implications of, the influence of evaluative criteria on the TSM decision.

  14. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)

    2016-12-09

    In order to shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if there are multiple stationary restrictions in a model, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion.

  15. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Full Text Available Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper’s falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike’s information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
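
    Two of the five comparison methods above reduce to one-line formulas and already illustrate how criteria can disagree. The log-likelihoods below are hypothetical numbers, chosen so that AIC and BIC pick different models.

```python
import math

def aic(loglik, k):
    # Akaike's information criterion: 2k - 2 ln L
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion: k ln n - 2 ln L
    return k * math.log(n) - 2 * loglik

# Hypothetical fits on n = 100 observations: model A (2 parameters)
# vs. a better-fitting but larger model B (5 parameters)
n = 100
ll_a, k_a = -120.0, 2
ll_b, k_b = -115.0, 5
print(aic(ll_a, k_a), aic(ll_b, k_b))        # AIC prefers B
print(bic(ll_a, k_a, n), bic(ll_b, k_b, n))  # BIC's harsher penalty prefers A
```

Because BIC's penalty grows with ln n while AIC's is constant, BIC favors smaller models on all but very small samples.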

  16. A criterion for selecting renewable energy processes

    International Nuclear Information System (INIS)

    Searcy, Erin; Flynn, Peter C.

    2010-01-01

    We propose that minimum incremental cost per unit of greenhouse gas (GHG) reduction, in essence the carbon credit required to economically sustain a renewable energy plant, is the most appropriate social criterion for choosing from a myriad of alternatives. The application of this criterion is illustrated for four processing alternatives for straw/corn stover: production of power by direct combustion and biomass integrated gasification and combined cycle (BIGCC), and production of transportation fuel via lignocellulosic ethanol and Fischer Tropsch (FT) syndiesel. Ethanol requires a lower carbon credit than FT, and direct combustion a lower credit than BIGCC. For comparing processes that make a different form of end use energy, in this study ethanol vs. electrical power via direct combustion, the lowest carbon credit depends on the relative values of the two energy forms. When power is 70 $/MWh, ethanol production has a lower required carbon credit at oil prices greater than 600 $/t (80 $/bbl). (author)
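
    The criterion itself is a simple ratio: the incremental cost of the renewable option over its fossil alternative, divided by the GHG emissions avoided. The annualized figures below are hypothetical, for illustration only.

```python
def required_carbon_credit(renewable_cost, fossil_cost, ghg_avoided_t):
    """Carbon credit ($ per tonne CO2e) needed for the renewable
    option to break even against the fossil alternative."""
    return (renewable_cost - fossil_cost) / ghg_avoided_t

# Hypothetical annual figures for a straw-fired plant vs. a coal baseline:
# $12M incremental cost, 300,000 t CO2e avoided per year
credit = required_carbon_credit(52e6, 40e6, 300e3)
print(credit)  # 40.0 $/t CO2e
```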

  17. A termination criterion for parameter estimation in stochastic models in systems biology.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2015-11-01

    Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model. Copyright © 2015. Published by Elsevier Ireland Ltd.
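
    The proposed criterion can be sketched in a few lines: stop once the spread of objective values across the population is comparable to the evaluation noise at the best member. The toy objective, noise level and tolerance factor below are assumptions for illustration, not the authors' settings.

```python
import random, statistics

random.seed(0)

def noisy_objective(theta):
    # Toy stochastic objective: true minimum at theta = 2, plus evaluation noise
    return (theta - 2.0) ** 2 + random.gauss(0.0, 0.05)

def should_terminate(population, n_repeats=30, factor=4.0):
    """Stop when the variance of the objective over the population is no larger
    than `factor` times the variance of repeated evaluations at the best member
    (the generous factor tolerates sampling noise in both variance estimates)."""
    scores = [(noisy_objective(t), t) for t in population]
    pop_var = statistics.variance(s for s, _ in scores)
    best = min(scores)[1]
    repeat_var = statistics.variance(noisy_objective(best) for _ in range(n_repeats))
    return pop_var <= factor * repeat_var

spread_out = [0.0, 1.0, 2.0, 3.5, 4.0, 0.5, 2.5, 3.0, 1.5, 0.2, 3.8, 2.2]
converged = [2.0 + 0.001 * i for i in range(12)]
print(should_terminate(spread_out), should_terminate(converged))
```

For a spread-out population the objective variance is dominated by the fitness differences, so the test fails and optimization continues; once the population has collapsed near the optimum, the remaining spread is mostly evaluation noise and the test passes.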

  18. Omega-optimized portfolios: applying stochastic dominance criterion for the selection of the threshold return

    Directory of Open Access Journals (Sweden)

    Renaldas Vilkancas

    2016-05-01

    Full Text Available Purpose of the article: When using asymmetric risk-return measures, an important role is played by the selection of the investor's required, or threshold, rate of return. The scientific literature usually states that every investor should define this rate according to their degree of risk aversion. In this paper, the problem is approached from a different perspective: the empirical research aims to determine the influence of the threshold rate of return on portfolio characteristics. Methodology/methods: A stochastic dominance criterion was used to determine the threshold rate of return. The results are verified using the commonly applied method of backtesting. Scientific aim: The aim of this paper is to propose a method for selecting the threshold rate of return reliably and objectively. Findings: Empirical research confirms that stochastic dominance criteria can be successfully applied to determine the rate of return preferred by the investor. Conclusions: A risk-free investment rate, or simply a zero rate of return, commonly used in practice is often justified neither by theoretical nor by empirical studies. This work suggests determining the threshold rate of return by applying the stochastic dominance criterion.
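
    The Omega ratio that the threshold feeds into is straightforward to compute: it divides expected gains above the threshold by expected losses below it, so raising the threshold lowers the ratio. The return series below is hypothetical.

```python
def omega_ratio(returns, threshold):
    # Omega(threshold) = E[(r - threshold)^+] / E[(threshold - r)^+]
    gains = sum(max(r - threshold, 0.0) for r in returns)
    losses = sum(max(threshold - r, 0.0) for r in returns)
    return gains / losses

rets = [0.04, -0.02, 0.01, 0.03, -0.01, 0.02]  # hypothetical monthly returns
print(omega_ratio(rets, 0.0))   # gains 0.10 vs losses 0.03
print(omega_ratio(rets, 0.01))  # a higher threshold lowers the ratio
```

This is why the choice of threshold matters so much: it directly re-weights which observations count as gains and which as losses.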

  19. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple

  20. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    Science.gov (United States)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  1. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    Summary: We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
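
    The core of a permutation approach can be sketched without the full LASSO machinery: permuting the response destroys any real signal, so the smallest penalty that zeroes out every coefficient on permuted data estimates how large the penalty must be to suppress noise alone. This is a minimal sketch of the idea under simplifying assumptions (standardized predictors, a plain least-squares inner product), not the authors' exact procedure.

```python
import random

random.seed(3)

def lambda_zero(X, y):
    # Smallest LASSO penalty that sets every coefficient to zero:
    # lambda_max = max_j |x_j . y| / n  (for standardized columns)
    n = len(y)
    p = len(X[0])
    return max(abs(sum(row[j] * yi for row, yi in zip(X, y))) for j in range(p)) / n

def permutation_lambda(X, y, n_perm=100):
    # Median over permutations of lambda_max on permuted (signal-free) responses
    lams = []
    y_perm = list(y)
    for _ in range(n_perm):
        random.shuffle(y_perm)
        lams.append(lambda_zero(X, y_perm))
    lams.sort()
    return lams[len(lams) // 2]

# Synthetic data: only the first of five predictors carries signal
n, p = 60, 5
X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
y = [row[0] + random.gauss(0.0, 0.5) for row in X]
lam_real = lambda_zero(X, y)
lam_perm = permutation_lambda(X, y)
print(lam_perm, lam_real)  # permutation penalty sits well below the real-signal one
```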

  2. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    Directory of Open Access Journals (Sweden)

    Ryan P Franckowiak

    Full Text Available In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results revealed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables, and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and the different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.

  3. Failure criterion effect on solid production prediction and selection of completion solution

    Directory of Open Access Journals (Sweden)

    Dariush Javani

    2017-12-01

    Full Text Available Production of fines together with reservoir fluid is called solid production. It varies from a few grams or less per ton of reservoir fluid, posing only minor problems, to catastrophic amounts, possibly leading to erosion and complete filling of the borehole. This paper assesses solid production potential in a carbonate gas reservoir located in the south of Iran. Petrophysical logs obtained from the vertical well were employed to construct a mechanical earth model. Then, two failure criteria, i.e. Mohr–Coulomb and Mogi–Coulomb, were used to investigate the solid production potential of the well in the initial and depleted conditions of the reservoir. Using these two criteria, we estimated the critical collapse pressure and compared it to the reservoir pressure. Solid production occurs if the collapse pressure is greater than the pore pressure. Results indicate that the two failure criteria give different estimations of the solid production potential of the studied reservoir. The Mohr–Coulomb failure criterion predicted solid production in both initial and depleted conditions, whereas the Mogi–Coulomb criterion predicted no solid production in the initial condition of the reservoir. Based on the Mogi–Coulomb criterion, the well may not require completion solutions such as a perforated liner until at least 60% of the reservoir pressure is depleted, which reduces operation cost and time.

  4. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  5. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  6. Fulfillment of the kinetic Bohm criterion in a quasineutral particle-in-cell model

    International Nuclear Information System (INIS)

    Ahedo, Eduardo; Santos, Robert; Parra, Felix I.

    2010-01-01

    Quasineutral particle-in-cell models of ions must fulfill the kinetic Bohm criterion, in its inequality form, at the domain boundary in order to match correctly with solutions of the Debye sheaths tied to the walls. The simple, fluid form of the Bohm criterion is shown to be a bad approximation of the exact, kinetic form when the ion velocity distribution function has a significant dispersion and involves different charge numbers. The fulfillment of the Bohm criterion is measured by a weighting algorithm at the boundary, but linear weighting algorithms have difficulty reproducing the nonlinear behavior around the sheath edge. A surface weighting algorithm with an extended temporal weighting is proposed and shown to behave better than the standard volumetric weighting. Still, this must be supplemented by an algorithm forcing the kinetic Bohm criterion, which postulates a small potential fall in a supplementary, thin transition layer. The electron-wall interaction is shown to be of little relevance to the fulfillment of the Bohm criterion.
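
    The gap between the fluid and kinetic forms can be checked numerically. In normalized units where the Bohm speed is 1 (and assuming singly charged ions), the kinetic criterion reads ⟨1/v²⟩ ≤ 1, while the fluid form only requires ⟨v⟩ ≥ 1. The sketch below, with assumed sample distributions, shows a dispersed distribution satisfying the fluid form while violating the kinetic one.

```python
import random

random.seed(7)

def kinetic_margin(vs):
    # Kinetic Bohm criterion (normalized): <1/v^2> must be <= 1
    return sum(1.0 / v ** 2 for v in vs) / len(vs)

def fluid_margin(vs):
    # Fluid form: <v> >= 1, i.e. 1/<v>^2 <= 1
    mean = sum(vs) / len(vs)
    return 1.0 / mean ** 2

# Narrow ion velocity distribution: fluid and kinetic forms agree
narrow = [random.gauss(1.2, 0.05) for _ in range(10000)]
# Broad distribution with a slow-ion tail: the kinetic form is far more restrictive,
# because slow ions contribute enormously to <1/v^2>
broad = [abs(random.gauss(1.2, 0.6)) + 0.05 for _ in range(10000)]
print(kinetic_margin(narrow), fluid_margin(narrow))
print(kinetic_margin(broad), fluid_margin(broad))
```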

  7. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC), which includes the latent variables in the focus of the analysis, and the marginalized DIC (mDIC), which integrates them out. Despite the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can easily be conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
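
    The generic DIC computation that both variants specialize is short: average the deviance over posterior draws, then add the effective number of parameters p_D. The normal toy model and stand-in posterior draws below are assumptions for illustration; in the paper's setting, the deviance would be the conditional or marginalized hierarchical likelihood.

```python
import math, random

random.seed(5)

def deviance(theta, data):
    # -2 log-likelihood of a Normal(theta, 1) model (toy focus parameter)
    return sum((x - theta) ** 2 + math.log(2 * math.pi) for x in data)

def dic(posterior_draws, data):
    dbar = sum(deviance(t, data) for t in posterior_draws) / len(posterior_draws)
    theta_bar = sum(posterior_draws) / len(posterior_draws)
    p_d = dbar - deviance(theta_bar, data)  # effective number of parameters
    return dbar + p_d

data = [0.2, -0.4, 0.9, 0.3]
draws = [random.gauss(0.25, 0.2) for _ in range(500)]  # stand-in MCMC output
print(dic(draws, data))
```

Since the deviance here is convex in theta, p_D is nonnegative, so DIC is always at least the mean deviance; the cDIC/mDIC distinction is about which variables are integrated out before this computation.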

  8. Stochastic Learning and the Intuitive Criterion in Simple Signaling Games

    DEFF Research Database (Denmark)

    Sloth, Birgitte; Whitta-Jacobsen, Hans Jørgen

    A stochastic learning process for signaling games with two types, two signals, and two responses gives rise to equilibrium selection which is in remarkable accordance with the selection obtained by the intuitive criterion.

  9. Using Akaike's information theoretic criterion in mixed-effects modeling of pharmacokinetic data: a simulation study [version 3; referees: 2 approved, 1 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Erik Olofsen

    2015-07-01

    Full Text Available Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect - thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, both with respect to lowest AICc and best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
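
    The small-sample correction that distinguishes AICc from AIC is a closed-form term, and with only 11 observations it grows quickly with model dimension. The log-likelihood of zero below is a placeholder; only the correction term is of interest.

```python
def aic(loglik, k):
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    # AICc adds 2k(k+1)/(n - k - 1); it diverges as k approaches n - 1
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

n = 11  # concentration measurements per data set, as in the simulations
for k in (2, 4, 6):
    print(k, aicc(0.0, k, n) - aic(0.0, k))  # correction grows rapidly with k
```

With n = 11, the correction is 1.5 for a 2-parameter model but already 21 for a 6-parameter model, which is why AICc punishes extra exponential terms far more heavily than plain AIC in this setting.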

  10. Does the Lowest Bid Price Evaluation Criterion Make for a More Efficient Public Procurement Selection Criterion? (Case of the Czech Republic)

    Directory of Open Access Journals (Sweden)

    Ochrana František

    2015-06-01

    Full Text Available Through the institute of public procurement a considerable volume of financial resources is allocated. It is therefore in the interest of contracting entities to seek ways of how to achieve an efficient allocation of resources. Some public contract-awarding entities, along with some public-administration authorities in the Czech Republic, believe that the use of a single evaluation criterion (the lowest bid price) results in a more efficient tender for a public contract. It was found that contracting entities in the Czech Republic strongly prefer to use the lowest bid price criterion. Within the examined sample, 86.5 % of public procurements were evaluated this way. The analysis of the examined sample of public contracts proved that the choice of an evaluation criterion, even the preference of the lowest bid price criterion, does not have any obvious impact on the final cost of a public contract. The study concludes that it is inappropriate to prefer the criterion of the lowest bid price within the evaluation of public contracts that are characterised by their complexity (including public contracts for construction works and public service contracts). The findings of the Supreme Audit Office related to the inspection of public contracts indicate that when using the lowest bid price as an evaluation criterion, a public contract may indeed be tendered with the lowest bid price, but not necessarily the best offer in terms of supplied quality. It is therefore not appropriate to use the lowest bid price evaluation criterion to such an extent for the purpose of evaluating work and services. Any improvement to this situation requires a corresponding amendment to the Law on Public Contracts and mainly a radical change in the attitude of the Office for the Protection of Competition towards proposed changes, as indicated within the conclusions and recommendations proposed by this study.

  11. [On the problems of the evolutionary optimization of life history. II. To justification of optimization criterion for nonlinear Leslie model].

    Science.gov (United States)

    Pasekov, V P

    2013-03-01

    The paper considers the problems in the adaptive evolution of life-history traits for individuals in the nonlinear Leslie model of age-structured population. The possibility to predict adaptation results as the values of organism's traits (properties) that provide for the maximum of a certain function of traits (optimization criterion) is studied. An ideal criterion of this type is Darwinian fitness as a characteristic of success of an individual's life history. Criticism of the optimization approach is associated with the fact that it does not take into account the changes in the environmental conditions (in a broad sense) caused by evolution, thereby leading to losses in the adequacy of the criterion. In addition, the justification for this criterion under stationary conditions is not usually rigorous. It has been suggested to overcome these objections in terms of the adaptive dynamics theory using the concept of invasive fitness. The reasons are given that favor the application of the average number of offspring for an individual, R(L), as an optimization criterion in the nonlinear Leslie model. According to the theory of quantitative genetics, the selection for fertility (that is, for a set of correlated quantitative traits determined by both multiple loci and the environment) leads to an increase in R(L). In terms of adaptive dynamics, the maximum R(L) corresponds to the evolutionary stability and, in certain cases, convergent stability of the values for traits. The search for evolutionarily stable values on the background of limited resources for reproduction is a problem of linear programming.

  12. Experiments and modeling of ballistic penetration using an energy failure criterion

    Directory of Open Access Journals (Sweden)

    Dolinski M.

    2015-01-01

    Full Text Available One of the most intricate problems in terminal ballistics is the physics underlying penetration and perforation. Several penetration modes are well identified, such as petalling, plugging, spall failure and fragmentation (Sedgwick, 1968). In most cases, the final target failure will combine those modes. Some of the failure modes can be due to brittle material behavior, but penetration of ductile targets by blunt projectiles, involving plugging in particular, is caused by excessive localized plasticity, with emphasis on adiabatic shear banding (ASB). Among the theories regarding the onset of ASB, new evidence was recently brought by Rittel et al. (2006), according to whom shear bands initiate as a result of dynamic recrystallization (DRX), a local softening mechanism driven by the stored energy of cold work. As such, ASB formation results from microstructural transformations, rather than from thermal softening. In our previous work (Dolinski et al., 2010), a failure criterion based on plastic strain energy density was presented and applied to model four different classical examples of dynamic failure involving ASB formation. According to this criterion, a material point starts to fail when the total plastic strain energy density reaches a critical value. Thereafter, the strength of the element decreases gradually to zero to mimic the actual material mechanical behavior. The goal of this paper is to present a new combined experimental-numerical study of ballistic penetration and perforation, using the above-mentioned failure criterion. Careful experiments are carried out using a single combination of AISI 4340 FSP projectiles and 25[mm] thick RHA steel plates, while the impact velocity, and hence the imparted damage, are systematically varied. We show that our failure model, which includes only one adjustable parameter in this present work, can faithfully reproduce each of the experiments without any further adjustment.
Moreover, it is shown that the

  13. An evolutionary algorithm for model selection

    Energy Technology Data Exchange (ETDEWEB)

    Bicker, Karl [CERN, Geneva (Switzerland); Chung, Suh-Urk; Friedrich, Jan; Grube, Boris; Haas, Florian; Ketzer, Bernhard; Neubert, Sebastian; Paul, Stephan; Ryabchikov, Dimitry [Technische Univ. Muenchen (Germany)

    2013-07-01

    When performing partial-wave analyses of multi-body final states, the choice of the fit model, i.e. the set of waves to be used in the fit, can significantly alter the results of the partial wave fit. Traditionally, the models were chosen based on physical arguments and by observing the changes in log-likelihood of the fits. To reduce possible bias in the model selection process, an evolutionary algorithm was developed based on a Bayesian goodness-of-fit criterion which takes into account the model complexity. Starting from systematically constructed pools of waves which contain significantly more waves than the typical fit model, the algorithm yields a model with an optimal log-likelihood and with a number of partial waves which is appropriate for the number of events in the data. Partial waves with small contributions to the total intensity are penalized and likely to be dropped during the selection process, as are models where excessive correlations between single waves occur. Due to the automated nature of the model selection, a much larger part of the model space can be explored than would be possible in a manual selection. In addition, the method allows one to assess the dependence of the fit result on the fit model, which is an important contribution to the systematic uncertainty.

  14. Rank-based model selection for multiple ions quantum tomography

    International Nuclear Information System (INIS)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and datasets such as are currently produced in multiple ions experiments. We test the performance of AIC and BIC on randomly chosen low rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one ion states. We then apply the methods to real data from a four ions experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ² test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)
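A rank-selection procedure of the kind described above can be sketched with the BIC. The parameter count 2dr - r² - 1 for a rank-r density matrix of dimension d is standard, but the dimension d = 4 and the log-likelihoods below are invented for brevity (the four-ion case has d = 16):

```python
import numpy as np

def bic(log_lik, k, n):
    # BIC = k ln(n) - 2 ln(L): penalizes parameters more heavily than AIC
    return k * np.log(n) - 2 * log_lik

# A rank-r density matrix of dimension d has 2*d*r - r**2 - 1 free real
# parameters; hypothetical log-likelihoods for fits of ranks 1..4
d, n_obs = 4, 1000
log_liks = [-520.0, -470.0, -455.0, -452.0]
n_params = [2 * d * r - r ** 2 - 1 for r in range(1, 5)]
bics = [bic(ll, k, n_obs) for ll, k in zip(log_liks, n_params)]
best_rank = 1 + int(np.argmin(bics))
```

The selected rank is the one minimizing the BIC; with these numbers the small likelihood gain from rank 3 to rank 4 does not justify the extra parameter.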

  15. Continuous-Time Portfolio Selection and Option Pricing under Risk-Minimization Criterion in an Incomplete Market

    Directory of Open Access Journals (Sweden)

    Xinfeng Ruan

    2013-01-01

    Full Text Available We study option pricing with risk-minimization criterion in an incomplete market where the dynamics of the risky underlying asset are governed by a jump diffusion equation. We obtain the Radon-Nikodym derivative in the minimal martingale measure and a partial integrodifferential equation (PIDE of European call option. In a special case, we get the exact solution for European call option by Fourier transformation methods. Finally, we employ the pricing kernel to calculate the optimal portfolio selection by martingale methods.

  16. Building a maintenance policy through a multi-criterion decision-making model

    Science.gov (United States)

    Faghihinia, Elahe; Mollaverdi, Naser

    2012-08-01

    A major competitive advantage of production and service systems is establishing a proper maintenance policy. Therefore, maintenance managers should make maintenance decisions that best fit their systems. Multi-criterion decision-making methods can take into account a number of aspects associated with the competitiveness factors of a system. This paper presents a multi-criterion decision-aided maintenance model with three criteria that have more influence on decision making: reliability, maintenance cost, and maintenance downtime. The Bayesian approach has been applied to confront maintenance failure data shortage. Therefore, the model seeks to make the best compromise between these three criteria and establish replacement intervals using Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE II), integrating the Bayesian approach with regard to the preference of the decision maker to the problem. Finally, using a numerical application, the model has been illustrated, and for a visual realization and an illustrative sensitivity analysis, PROMETHEE GAIA (the visual interactive module) has been used. Use of PROMETHEE II and PROMETHEE GAIA has been made with Decision Lab software. A sensitivity analysis has been made to verify the robustness of certain parameters of the model.

  17. Suboptimal Criterion Learning in Static and Dynamic Environments.

    Directory of Open Access Journals (Sweden)

    Elyse H Norton

    2017-01-01

    Full Text Available Humans often make decisions based on uncertain sensory information. Signal detection theory (SDT) describes detection and discrimination decisions as a comparison of stimulus "strength" to a fixed decision criterion. However, recent research suggests that current responses depend on the recent history of stimuli and previous responses, suggesting that the decision criterion is updated trial-by-trial. The mechanisms underpinning criterion setting remain unknown. Here, we examine how observers learn to set a decision criterion in an orientation-discrimination task under both static and dynamic conditions. To investigate mechanisms underlying trial-by-trial criterion placement, we introduce a novel task in which participants explicitly set the criterion, and compare it to a more traditional discrimination task, allowing us to model this explicit indication of criterion dynamics. In each task, stimuli were ellipses with principal orientations drawn from two categories: Gaussian distributions with different means and equal variance. In the covert-criterion task, observers categorized a displayed ellipse. In the overt-criterion task, observers adjusted the orientation of a line that served as the discrimination criterion for a subsequently presented ellipse. We compared performance to the ideal Bayesian learner and several suboptimal models that varied in both computational and memory demands. Under static and dynamic conditions, we found that, in both tasks, observers used suboptimal learning rules. In most conditions, a model in which the recent history of past samples determines a belief about category means fit the data best for most observers and on average. Our results reveal dynamic adjustment of discrimination criterion, even after prolonged training, and indicate how decision criteria are updated over time.
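For reference, the ideal-observer criterion against which such suboptimal learners are compared has a simple closed form in the equal-variance Gaussian case; a sketch (the orientation units and prior values are illustrative, not from the experiment):

```python
import math

def ideal_criterion(mu_a, mu_b, sigma, prior_a=0.5):
    # Equal-variance Gaussian categories: the ideal observer responds "A"
    # whenever p(A)f_A(x) > p(B)f_B(x); the boundary (criterion) is the
    # midpoint of the means, shifted toward the less probable category
    midpoint = 0.5 * (mu_a + mu_b)
    shift = sigma ** 2 * math.log(prior_a / (1.0 - prior_a)) / (mu_b - mu_a)
    return midpoint + shift

c_equal = ideal_criterion(mu_a=-5.0, mu_b=5.0, sigma=10.0)
c_biased = ideal_criterion(mu_a=-5.0, mu_b=5.0, sigma=10.0, prior_a=0.8)
```

With equal priors the criterion sits at the midpoint of the category means; raising the prior on category A widens the range of stimuli classified as A.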

  18. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    Science.gov (United States)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well-established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as the AIC.

  19. Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion

    Science.gov (United States)

    Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.

    2017-09-01

    Assessment of discriminant validity is a must in any research that involves latent variables, for the prevention of multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for establishing the discriminant validity assessment: the heterotrait-monotrait (HTMT) ratio of correlations method. Therefore, this article presents the results of discriminant validity assessment using these methods. Data from a previous study was used that involved 429 respondents for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, the convergent, divergent and discriminant validity were established and admissible using the Fornell and Larcker criterion. However, discriminant validity is an issue when employing the HTMT criterion. This shows that the latent variables under study faced the issue of multicollinearity and should be looked into in further detail. This also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discriminant validity among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
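The Fornell and Larcker check described above reduces to comparing each construct's √AVE against its correlations with the other constructs; a minimal sketch with made-up AVE and correlation values for two latent variables:

```python
import numpy as np

def fornell_larcker_ok(ave, corr):
    # Discriminant validity holds if, for every construct, the square root
    # of its AVE exceeds its correlation with every other construct
    root_ave = np.sqrt(ave)
    n = len(ave)
    return all(root_ave[i] > abs(corr[i, j])
               for i in range(n) for j in range(n) if i != j)

# Made-up AVEs and construct correlations
ave = np.array([0.62, 0.58])
corr = np.array([[1.0, 0.70],
                 [0.70, 1.0]])
ok = fornell_larcker_ok(ave, corr)
```

The HTMT criterion is stricter: it bounds a ratio of heterotrait to monotrait correlations (commonly against 0.85 or 0.90), which is why it can flag problems this check misses.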

  20. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk.

    Science.gov (United States)

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2016-07-23

    This study focuses on multiple linear regression models relating six climate indices (temperature humidity THI, environmental stress ESI, equivalent temperature index ETI, heat load HLI, modified HLI (HLInew), and respiratory rate predictor RRP) with three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage selection operator (LASSO) and the Akaike information criterion (AIC) techniques are applied to select the best model for milk predictands with the smallest number of climate predictors. Uncertainty estimation is employed by applying bootstrapping through resampling. Cross validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months from April to September, 2002 to 2010 are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R² (0.50, 0.49) respectively. In summer, milk yield with independent variables of THI, ETI, and ESI shows the highest relation (p-value < 0.001) with R² (0.69). For fat and protein the results are only marginal. This method is suggested for impact studies of climate variability/change on agriculture and food science when short time series or data with large uncertainty are available.
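AIC-guided predictor selection of the kind used above can be illustrated without a LASSO optimizer by an exhaustive AIC search over small predictor subsets. Everything below (data, predictor indices) is synthetic, and the Gaussian-OLS form of the AIC is assumed:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Synthetic stand-in for three climate predictors and a milk-yield response;
# predictor 2 is pure noise, so the AIC should normally exclude it
n = 60
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def aic_ols(X_sub, y):
    # Gaussian AIC for OLS: n*ln(RSS/n) + 2k, k = coefficients incl. intercept
    n = len(y)
    A = np.column_stack([np.ones(n), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    return n * np.log(rss / n) + 2 * A.shape[1]

subsets = [c for r in range(1, 4) for c in combinations(range(3), r)]
best = min(subsets, key=lambda c: aic_ols(X[:, c], y))
```

An exhaustive search is only feasible for a handful of predictors; LASSO scales this idea to many predictors by shrinking coefficients instead of enumerating subsets.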

  1. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model can not only be used to select the best project, but also to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion, based on the influential network relation map.

  2. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002 does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and Bayesian information criterion (BIC are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE. We also study the implications of different levels of inclusion probabilities by simulations.

  3. An extended geometric criterion for chaos in the Dicke model

    International Nuclear Information System (INIS)

    Li Jiangdan; Zhang Suying

    2010-01-01

    We extend HBLSL's (Horwitz, Ben Zion, Lewkowicz, Schiffer and Levitan) new Riemannian geometric criterion for chaotic motion to Hamiltonian systems of weak coupling of potential and momenta by defining the 'mean unstable ratio'. We discuss the Dicke model of an unstable Hamiltonian system in detail and show that our results are in good agreement with that of the computation of Lyapunov characteristic exponents.

  4. Ginsburg criterion for an equilibrium superradiant model in the dynamic approach

    International Nuclear Information System (INIS)

    Trache, M.

    1991-10-01

    Some critical properties of an equilibrium superradiant model are discussed, taking into account the quantum fluctuations of the field variables. The critical region is calculated using the Ginsburg criterion, underlining the role of the atomic concentration as a control parameter of the phase transition. (author). 16 refs, 1 fig

  5. Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.

    Science.gov (United States)

    Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H

    2018-01-01

    Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
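The modified-BIC idea, adding extra weight on the complexity penalty so that a feedforward path must earn its parameters, can be sketched as follows (the penalty weight alpha, parameter counts, and log-likelihoods are hypothetical, not the paper's calibrated values):

```python
import math

def modified_bic(log_lik, k, n, alpha=1.0):
    # BIC with an adjustable weight alpha on the complexity penalty;
    # alpha = 1 recovers the classical BIC
    return alpha * k * math.log(n) - 2 * log_lik

# Hypothetical fits: feedback-only (k = 5) vs feedback + feedforward (k = 8)
n = 2000
ll_fb, ll_ff = -1500.0, -1488.0

classical_ff = modified_bic(ll_ff, 8, n) < modified_bic(ll_fb, 5, n)
weighted_ff = modified_bic(ll_ff, 8, n, alpha=2.0) < modified_bic(ll_fb, 5, n, alpha=2.0)
```

With these numbers the classical BIC (alpha = 1) prefers the feedforward model while the weighted criterion does not, mirroring the false-positive suppression described above.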

  6. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.

  7. Adjustment Criterion and Algorithm in Adjustment Model with Uncertainty

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is extended with a new method for processing observational data with uncertainty.

  8. Characteristics of Criterion-Referenced Instruments: Implications for Materials Selection for the Learning Disabled.

    Science.gov (United States)

    Blasi, Joyce F.

    Discussed are characteristics of criterion-referenced reading tests for use with learning disabled (LD) children, and analyzed are the Basic Educational Skills Inventory (BESI), the Prescriptive Reading Inventory (PRI), and the Cooper-McGuire Diagnostic Work-Analysis Test (CooperMcGuire). Criterion-referenced tests are defined; and problems in…

  9. Fuzzy decision-making: a new method in model selection via various validity criteria

    International Nuclear Information System (INIS)

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered as the first step in scientific investigations. Several alternative models may be candidates to express a phenomenon. Scientists use various criteria to select one model from among the competing models. Based on the solution of a Fuzzy Decision-Making problem, this paper proposes a new method of model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper Possibility Distribution Function for each criterion. Finally, minimization of a utility function composed of the Possibility Distribution Functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran.

  10. A Novel Non-Invasive Selection Criterion for the Preservation of Primitive Dutch Konik Horses.

    Science.gov (United States)

    May-Davis, Sharon; Brown, Wendy Y; Shorter, Kathleen; Vermeulen, Zefanja; Butler, Raquel; Koekkoek, Marianne

    2018-02-01

    The Dutch Konik is valued from a genetic conservation perspective and also for its role in preservation of natural landscapes. The primary management objective for the captive breeding of this primitive horse is to maintain its genetic purity, whilst also maintaining the nature reserves on which they graze. Breeding selection has traditionally been based on phenotypic characteristics consistent with the breed description, and the selection of animals for removal from the breeding program is problematic at times due to high uniformity within the breed, particularly in height at the wither, colour (mouse to grey dun) and presence of primitive markings. With the objective of identifying an additional non-invasive selection criterion with potential uniqueness to the Dutch Konik, this study investigates the anatomic parameters of the distal equine limb, with a specific focus on the relative lengths of the individual splint bones. Post-mortem dissections performed on distal limbs of Dutch Konik (n = 47) and modern domesticated horses (n = 120) revealed significant differences in relation to the length and symmetry of the 2nd and 4th Metacarpals and Metatarsals. Distal limb characteristics with apparent uniqueness to the Dutch Konik are described which could be an important tool in the selection and preservation of the breed.

  11. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types.
The example analysis illustrates that, while sometimes computationally intense, a
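The DIC used for model selection in this record combines the posterior mean deviance with an effective-parameter penalty; a minimal sketch with simulated deviance draws (all numbers illustrative, not from the caribou analysis):

```python
import numpy as np

def dic(deviance_samples, deviance_at_post_mean):
    # DIC = Dbar + pD, with pD = Dbar - D(theta_bar) the effective number
    # of parameters; lower DIC = better fit/complexity trade-off
    d_bar = float(np.mean(deviance_samples))
    p_d = d_bar - deviance_at_post_mean
    return d_bar + p_d

# Hypothetical posterior deviance draws for two candidate selection models
rng = np.random.default_rng(1)
with_hetero = dic(rng.normal(420.0, 2.0, size=5000), 415.0)
no_hetero = dic(rng.normal(430.0, 2.0, size=5000), 428.0)
```

Unlike AIC, the complexity penalty pD is estimated from the posterior itself, which is what makes the DIC convenient for hierarchical random-effects models of this kind.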

  12. Criterion for the selection of a system of treatment of residues and their application to the wine of the alcoholic industry

    International Nuclear Information System (INIS)

    Caicedo M, Luis Alfonso; Fonseca, Jose Joaquin; Rodriguez, Gerardo

    1996-01-01

    The selection of a residue treatment system should follow the criterion of the process denominated BATEA (best available and technically and economically feasible process). Because its application is difficult in the absence of objective evaluation parameters, a method is presented that classifies the evaluation criteria into general and specific ones. For the quantification of these aspects, factors such as FQO, FCI, FTR, FD and the factor of applicability of the treatment (FAT) are used. The method, applied to the wine residues of the alcoholic industry, allows one to conclude that evaporation is the best treatment system for this process, while other systems remain undeveloped or increase the recompensing rate.

  13. A decision model for energy resource selection in China

    International Nuclear Information System (INIS)

    Wang Bing; Kocaoglu, Dundar F.; Daim, Tugrul U.; Yang Jiting

    2010-01-01

    This paper evaluates coal, petroleum, natural gas, nuclear energy and renewable energy resources as energy alternatives for China through the use of a hierarchical decision model in which expert judgments are quantified. The criteria used for the evaluations are availability, current energy infrastructure, price, safety, environmental impacts and social impacts. The results indicate that although coal is still the major preferred energy alternative, it is followed closely by renewable energy. The sensitivity analysis indicates that the most critical criterion for energy selection is the current energy infrastructure.
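The core of a hierarchical decision model of this kind reduces, at the bottom level, to weighted scoring of alternatives against criteria. The sketch below shows that mechanic only; all weights and scores are made-up illustrative numbers, not the expert judgments elicited in the paper:

```python
# Criteria from the abstract; weights and scores are hypothetical placeholders.
criteria = ["availability", "infrastructure", "price",
            "safety", "environment", "social"]
weights = [0.15, 0.30, 0.15, 0.15, 0.15, 0.10]  # infrastructure weighted highest

scores = {  # alternative -> 0-1 score on each criterion, in the order above
    "coal":      [0.9, 0.9, 0.8, 0.4, 0.2, 0.4],
    "renewable": [0.7, 0.4, 0.5, 0.9, 0.9, 0.8],
    "gas":       [0.6, 0.5, 0.5, 0.7, 0.6, 0.6],
    "petroleum": [0.5, 0.6, 0.4, 0.6, 0.3, 0.5],
    "nuclear":   [0.6, 0.3, 0.6, 0.5, 0.7, 0.4],
}

def utility(alt):
    """Weighted-sum utility of one alternative."""
    return sum(w * s for w, s in zip(weights, scores[alt]))

ranking = sorted(scores, key=utility, reverse=True)
print(ranking)  # with these numbers: coal first, renewable a close second
```

With these invented inputs the ranking mirrors the paper's qualitative finding (coal narrowly ahead of renewables), and a sensitivity analysis would perturb the weights to find which criterion flips the ranking.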

  14. Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness

    Science.gov (United States)

    Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan

    To address the trustworthiness problem of industry software, an approach that builds an industry software trustworthiness criterion around the business process is proposed. Based on the triangle model of "trustworthy grade definition - trustworthy evidence model - trustworthy evaluation", the idea of business trustworthiness is embodied in the different aspects of the trustworthy triangle model for a specific industry software system, the power producing management system (PPMS). Business trustworthiness is the center of the constructed industry trustworthy software criterion. By fusing international standards and industry rules, the constructed trustworthy criterion strengthens operability and reliability. A quantitative evaluation method makes the evaluation results intuitive and comparable.

  15. Plasma sheath criterion in thermal electronegative plasmas

    International Nuclear Information System (INIS)

    Ghomi, Hamid; Khoramabadi, Mansour; Ghorannevis, Mahmod; Shukla, Padma Kant

    2010-01-01

    The sheath formation criterion in electronegative plasmas is examined. Using a multifluid model, it is shown that in a collisional sheath there are upper as well as lower limits for the sheath velocity criterion. However, the parameters of the negative ions affect only the lower limit.

  16. Inviscid criterion for decomposing scales

    Science.gov (United States)

    Zhao, Dongxiao; Aluie, Hussein

    2018-05-01

    The proper scale decomposition in flows with significant density variations is not as straightforward as in incompressible flows, with many possible ways to define a "length scale." A choice can be made according to the so-called inviscid criterion [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009]. It is a kinematic requirement that a scale decomposition yield negligible viscous effects at large enough length scales. It has recently been proved [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009] that a Favre decomposition satisfies the inviscid criterion, which is necessary to unravel inertial-range dynamics and the cascade. Here we present numerical demonstrations of those results. We also show that two other commonly used decompositions can violate the inviscid criterion and, therefore, are not suitable to study inertial-range dynamics in variable-density and compressible turbulence. Our results have practical modeling implications, showing that viscous terms in large eddy simulations do not need to be modeled and can be neglected.
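For reference, the Favre (density-weighted) decomposition that the paper shows to satisfy the inviscid criterion is standardly defined as follows (standard definition, not quoted from the paper):

```latex
\tilde{f}_\ell \;=\; \frac{\overline{\rho f}_\ell}{\overline{\rho}_\ell},
\qquad
f = \tilde{f}_\ell + f''_\ell ,
```

where $\overline{(\cdot)}_\ell$ denotes low-pass filtering at length scale $\ell$ and $\rho$ is the density. The inviscid criterion then demands that viscous contributions to the filtered dynamics of $\tilde{f}_\ell$ become negligible at sufficiently large $\ell$.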

  17. A scale invariance criterion for LES parametrizations

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs

    2015-01-01

    Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant. The criterion also proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance, the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.

  18. Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2017-11-01

    The α-jerk model is an effective model for tracking maneuvering targets, one of the most critical issues in target tracking. Non-Gaussian noise often exists in the tracking process, which usually leads to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. The weighted least squares solution is presented and the standard Kalman filter is deduced first. A novel Kalman filter with weighted least squares based on the maximum correntropy criterion is then deduced. The robustness of the maximum correntropy criterion is analyzed with the influence function and compared with the Huber-based filter. Moreover, since the kernel size of the Gaussian kernel plays an important role in the filter algorithm, a new adaptive kernel method is proposed in this paper to adjust the parameter in real time. Finally, simulation results indicate the validity and efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model.
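The robustness that the maximum correntropy criterion lends to the filter comes from its Gaussian-kernel weighting of residuals, which drives the influence of gross outliers toward zero. A minimal sketch of that mechanism (a scalar location estimate via fixed-point iteration, not the full Kalman recursion of the paper):

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Correntropy-induced weight for a residual e with kernel size sigma."""
    return np.exp(-e**2 / (2 * sigma**2))

def mcc_location_estimate(y, sigma=1.0, iters=30):
    """Estimate a constant from noisy samples y by maximizing the correntropy
    sum_i exp(-(y_i - x)^2 / (2 sigma^2)), via the fixed-point iteration
    x <- sum(w_i * y_i) / sum(w_i), with w_i the kernel weights."""
    x = np.median(y)  # robust starting point
    for _ in range(iters):
        w = gaussian_kernel(y - x, sigma)
        x = np.sum(w * y) / np.sum(w)
    return x

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(5.0, 0.5, 95),   # nominal Gaussian noise
                    rng.normal(50.0, 1.0, 5)])  # 5% gross outliers
print(mcc_location_estimate(y))  # stays near 5; the plain mean is pulled to ~7
```

As the abstract notes, the kernel size sigma is critical: a very large sigma recovers ordinary least squares (all weights near 1), while a small sigma discounts even moderate residuals, which is what motivates the paper's adaptive kernel.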

  19. Optimization of multi-environment trials for genomic selection based on crop models.

    Science.gov (United States)

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials in order to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling thanks to crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this aim and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed estimating the genetic parameters with lower error, leading to higher QTL detection power and higher prediction accuracies. METs defined with OptiMET were on average more efficient than random METs composed of twice as many environments, in terms of the quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit METs and the phenotyping tools that are currently developed.

  20. The Distributed Criterion Design

    Science.gov (United States)

    McDougall, Dennis

    2006-01-01

    This article describes and illustrates a novel form of the changing criterion design called the distributed criterion design, which represents perhaps the first advance in the changing criterion design in four decades. The distributed criterion design incorporates elements of the multiple baseline and A-B-A-B designs and is well suited to applied…

  1. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
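AIC itself is simple to compute: AIC = 2k − 2 ln L̂, which for least-squares fits with Gaussian errors reduces (up to a constant shared by all candidate models) to n·ln(RSS/n) + 2k. A small sketch comparing polynomial fits of increasing degree (the data and model set are invented for illustration):

```python
import numpy as np

def aic_gaussian(rss, n, k):
    """AIC for a least-squares fit with Gaussian errors, up to an additive
    constant shared by all candidate models: n*ln(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 60)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.05, x.size)  # quadratic truth

aics = {}
for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 2  # polynomial coefficients plus the error variance
    aics[degree] = aic_gaussian(rss, x.size, k)

best = min(aics, key=aics.get)
print(best)  # the AIC-minimizing degree
```

The 2k penalty is what implements the bias correction described above: higher-degree fits always lower the RSS, but only a model whose extra parameters reduce the deviance by more than 2 apiece improves the criterion.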

  2. A new objective criterion for IRIS localization

    International Nuclear Information System (INIS)

    Basit, A.

    2010-01-01

    Iris localization is the most important step in iris recognition systems. For commonly used databases, exact ground-truth data describing the true localization results are not given. To cope with this problem, a new objective criterion for iris localization, based on the human visual system, is proposed in this paper. A specific number of points are selected on the pupil boundary, iris boundary, upper eyelid and lower eyelid using the original image, and the distance from these points to the result of the complete iris localization is calculated. If the determined distance is below a certain threshold, the iris localization is considered correct. Experimental results show that the proposed criterion is very effective. (author)

  3. On the Jeans criterion

    International Nuclear Information System (INIS)

    Whitworth, A.P.

    1980-01-01

    The Jeans criterion is first stated and distinguished from the Virial Theorem. It is then discussed how the Jeans criterion can be derived from the Virial Theorem, along with the inherent shortcomings of this derivation. Finally, it is indicated how these shortcomings might be overcome. The Jeans criterion is a fragmentation, or condensation, criterion. An expression is given connecting the fragmentation of an unstable extended medium into masses M_J. Rather than picturing the background medium fragmenting, it is probably more appropriate to envisage these masses M_J 'condensing' out of the background medium. In the condensation picture, some fraction of the background material separates out into coherent bound nodules under the pull of its self-gravity. For this reason the Jeans criterion is discussed as a condensation condition, reserving the term fragmentation for a different process. The Virial Theorem provides a contraction criterion. This is described with reference to a spherical cloud and is developed to derive the Jeans criterion. (U.K.)
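For context, the standard textbook form of the Jeans criterion (not quoted from the record itself) states that a cloud of temperature $T$ and density $\rho$ condenses under self-gravity when its mass exceeds the Jeans mass:

```latex
M_J \;\simeq\; \left(\frac{5 k_B T}{G \mu m_H}\right)^{3/2}
               \left(\frac{3}{4\pi\rho}\right)^{1/2},
```

where $\mu$ is the mean molecular weight and $m_H$ the hydrogen mass; perturbations with $M > M_J$ are gravitationally unstable, which is the condensation condition the record discusses.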

  4. Unitary Evolution as a Uniqueness Criterion

    Science.gov (United States)

    Cortez, J.; Mena Marugán, G. A.; Olmedo, J.; Velhinho, J. M.

    2015-01-01

    It is well known that the process of quantizing field theories is plagued with ambiguities. First, there is ambiguity in the choice of basic variables describing the system. Second, once a choice of field variables has been made, there is ambiguity concerning the selection of a quantum representation of the corresponding canonical commutation relations. The natural strategy to remove these ambiguities is to demand positivity of energy and to invoke symmetries, namely by requiring that classical symmetries become unitarily implemented in the quantum realm. The success of this strategy depends, however, on the existence of a sufficiently large group of symmetries, usually including time-translation invariance. These criteria are therefore generally insufficient in non-stationary situations, as is typical for free fields in curved spacetimes. Recently, the criterion of unitary implementation of the dynamics has been proposed in order to select a unique quantization in the context of manifestly non-stationary systems. Specifically, the unitarity criterion, together with the requirement of invariance under spatial symmetries, has been successfully employed to remove the ambiguities in the quantization of linearly polarized Gowdy models, as well as in the quantization of a scalar field with time-varying mass propagating in a static background whose spatial topology is either that of a d-sphere (with d = 1, 2, 3) or a three-torus. Following Ref. 3, we will see here that the symmetry and unitarity criteria allow for a complete removal of the ambiguities in the quantization of scalar fields propagating in static spacetimes with compact spatial sections, obeying field equations with an explicitly time-dependent mass, of the form φ̈ − Δφ + s(t)φ = 0. These results apply in particular to free fields in spacetimes which, like e.g. the closed FRW models, are conformal to a static spacetime by means of an exclusively time-dependent conformal factor. In fact, in such

  5. A multipole acceptability criterion for electronic structure theory

    International Nuclear Information System (INIS)

    Schwegler, E.; Challacombe, M.; Head-Gordon, M.

    1998-01-01

    Accurate and computationally inexpensive estimates of multipole expansion errors are crucial to the success of several fast electronic structure methods. In this paper, a new nonempirical multipole acceptability criterion is described that is directly applicable to expansions of high order moments. Several model calculations typical of electronic structure theory are presented to demonstrate its performance. For cases involving small translation distances, accuracies are increased by up to five orders of magnitude over an empirical criterion. The new multipole acceptance criterion is on average within an order of magnitude of the exact expansion error. Use of the multipole acceptance criterion in hierarchical multipole based methods as well as in traditional electronic structure methods is discussed. copyright 1998 American Institute of Physics

  6. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in the prediction of time series is proposed. Apart from the conventional criterion of minimizing the RMS error, the method also minimizes the error on the distribution of singularities, evaluated through the local Hölder estimates and its probability density spectrum. Predictions of two simulated and one real time series have been done using kernel principal component regression (KPCR), and the model parameters of KPCR have been selected employing the proposed as well as the conventional method. Results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

  7. An advanced constitutive model in the sheet metal forming simulation: the Teodosiu microstructural model and the Cazacu Barlat yield criterion

    International Nuclear Information System (INIS)

    Alves, J.L.; Oliveira, M.C.; Menezes, L.F.

    2004-01-01

    Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming processes are studied: a recently proposed advanced constitutive model based on the Teodosiu microstructural model and the Cazacu-Barlat yield criterion is compared with a more classical one, based on the Swift law and the Hill 1948 yield criterion. These constitutive models are implemented in DD3IMP, a finite element home code specifically developed to simulate sheet metal forming processes, which generically is a 3-D elastoplastic finite element code with an updated Lagrangian formulation, following a fully implicit time integration scheme, large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts the final shape (mean height and ears profile) of the formed part more accurately, as one can conclude from the comparison with the experimental results.

  8. Failure Criterion for Brick Masonry: A Micro-Mechanics Approach

    Directory of Open Access Journals (Sweden)

    Kawa Marek

    2015-02-01

    The paper deals with the formulation of a failure criterion for in-plane loaded masonry. Using a micro-mechanics approach, the strength estimation for a masonry microstructure with constituents obeying the Drucker-Prager criterion is determined numerically. The procedure invokes lower-bound analysis: for assumed stress fields constructed within a masonry periodic cell, the critical load is obtained as the solution of a constrained optimization problem. The analysis is carried out for many different loading conditions at different orientations of the bed joints. The performance of the approach is verified against solutions obtained for the corresponding layered and block microstructures, which provide the upper and lower strength bounds for the masonry microstructure, respectively. Subsequently, a phenomenological anisotropic strength criterion for the masonry microstructure is proposed. The criterion has the form of a conjunction of the Jaeger critical plane condition and the Tsai-Wu criterion. The proposed model is identified by fitting the numerical results obtained from the microstructural analysis. The identified criterion is then verified against results obtained for different loading orientations. It appears that the strength of the masonry microstructure can be satisfactorily described by the proposed criterion.
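For reference, the Drucker-Prager criterion assumed for the constituents is standardly written in terms of the stress invariants (standard form; the parameters are not taken from the paper):

```latex
f(\boldsymbol{\sigma}) \;=\; \sqrt{J_2} \;+\; \alpha\, I_1 \;-\; k \;\le\; 0,
```

where $I_1 = \operatorname{tr}\boldsymbol{\sigma}$ is the first stress invariant, $J_2$ is the second invariant of the stress deviator, and $\alpha$, $k$ are material constants typically calibrated from the friction angle and cohesion of the constituent.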

  9. Decision models for use with criterion-referenced tests

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1980-01-01

    The problem of mastery decisions and optimizing cutoff scores on criterion-referenced tests is considered. This problem can be formalized as an (empirical) Bayes problem with decisions rules of a monotone shape. Next, the derivation of optimal cutoff scores for threshold, linear, and normal ogive

  10. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated from experimental design, we use trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results over various real-life data sets have demonstrated the superiority of the proposed algorithms.
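The covariance-size idea described above can be sketched greedily: at each step, add the feature that most shrinks the trace of the regularized parameter covariance (the A-optimality measure from experimental design). In this sketch a plain ridge term stands in for the paper's Laplacian regularizer, and all names are illustrative:

```python
import numpy as np

def greedy_a_optimal(X, k, lam=1e-2):
    """Greedily pick k features (columns of X) minimizing
    trace((X_S' X_S + lam*I)^-1), i.e. the size of the regularized
    parameter covariance. Ridge replaces the Laplacian term; sketch only."""
    selected = []
    for _ in range(k):
        best_j, best_trace = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            S = selected + [j]
            cov = np.linalg.inv(X[:, S].T @ X[:, S] + lam * np.eye(len(S)))
            if np.trace(cov) < best_trace:
                best_j, best_trace = j, np.trace(cov)
        selected.append(best_j)
    return selected
```

Informative (high-energy) features yield a smaller covariance trace and are picked first; swapping `np.trace` for a log-determinant gives the D-optimality variant also mentioned in the abstract.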

  11. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Decision criterion dynamics in animals performing an auditory detection task.

    Directory of Open Access Journals (Sweden)

    Robert W Mill

    Full Text Available Classical signal detection theory attributes bias in perceptual decisions to a threshold criterion, against which sensory excitation is compared. The optimal criterion setting depends on the signal level, which may vary over time, and about which the subject is naïve. Consequently, the subject must optimise its threshold by responding appropriately to feedback. Here a series of experiments was conducted, and a computational model applied, to determine how the decision bias of the ferret in an auditory signal detection task tracks changes in the stimulus level. The time scales of criterion dynamics were investigated by means of a yes-no signal-in-noise detection task, in which trials were grouped into blocks that alternately contained easy- and hard-to-detect signals. The responses of the ferrets implied both long- and short-term criterion dynamics. The animals exhibited a bias in favour of responding "yes" during blocks of harder trials, and vice versa. Moreover, the outcome of each single trial had a strong influence on the decision at the next trial. We demonstrate that the single-trial and block-level changes in bias are a manifestation of the same criterion update policy by fitting a model, in which the criterion is shifted by fixed amounts according to the outcome of the previous trial and decays strongly towards a resting value. The apparent block-level stabilisation of bias arises as the probabilities of outcomes and shifts on single trials mutually interact to establish equilibrium. To gain an intuition into how stable criterion distributions arise from specific parameter sets we develop a Markov model which accounts for the dynamic effects of criterion shifts. Our approach provides a framework for investigating the dynamics of decisions at different timescales in other species (e.g., humans and in other psychological domains (e.g., vision, memory.

  13. Numerical and Experimental Validation of a New Damage Initiation Criterion

    Science.gov (United States)

    Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.

    2017-09-01

    Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion. 7 out of 9 fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion will be discussed shortly.

  14. Modeling a failure criterion for U-Mo/Al dispersion fuel

    Science.gov (United States)

    Oh, Jae-Yong; Kim, Yeon Soo; Tahk, Young-Wook; Kim, Hyun-Jung; Kong, Eui-Hyun; Yim, Jeong-Sik

    2016-05-01

    The breakaway swelling in U-Mo/Al dispersion fuel is known to be caused by large pore formation enhanced by interaction layer (IL) growth between fuel particles and Al matrix. In this study, a critical IL thickness was defined as a criterion for the formation of a large pore in U-Mo/Al dispersion fuel. Specifically, the critical IL thickness is given when two neighboring fuel particles come into contact with each other in the developed IL. The model was verified using the irradiation data from the RERTR tests and KOMO-4 test. The model application to full-sized sample irradiations such as IRISs, FUTURE, E-FUTURE, and AFIP-1 tests resulted in conservative predictions. The parametric study revealed that the fuel particle size and the homogeneity of the fuel particle distribution are influential for fuel performance.

  15. Modeling a failure criterion for U–Mo/Al dispersion fuel

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong, E-mail: tylor@kaeri.re.kr [Korea Atomic Energy Research Institute, 111, Daedeok-Daero 989 Beon-Gil, Yuseong-Gu, Daejeon 305-353 (Korea, Republic of); Kim, Yeon Soo [Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Tahk, Young-Wook; Kim, Hyun-Jung; Kong, Eui-Hyun; Yim, Jeong-Sik [Korea Atomic Energy Research Institute, 111, Daedeok-Daero 989 Beon-Gil, Yuseong-Gu, Daejeon 305-353 (Korea, Republic of)

    2016-05-15

    The breakaway swelling in U–Mo/Al dispersion fuel is known to be caused by large pore formation enhanced by interaction layer (IL) growth between fuel particles and Al matrix. In this study, a critical IL thickness was defined as a criterion for the formation of a large pore in U–Mo/Al dispersion fuel. Specifically, the critical IL thickness is given when two neighboring fuel particles come into contact with each other in the developed IL. The model was verified using the irradiation data from the RERTR tests and KOMO-4 test. The model application to full-sized sample irradiations such as IRISs, FUTURE, E-FUTURE, and AFIP-1 tests resulted in conservative predictions. The parametric study revealed that the fuel particle size and the homogeneity of the fuel particle distribution are influential for fuel performance.

  16. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.

  17. The applicability of fair selection models in the South African context

    Directory of Open Access Journals (Sweden)

    G. K. Huysamen

    1995-06-01

    This article reviews several models that are aimed at achieving fair selection in situations in which underrepresented groups tend to obtain lower scores on selection tests. Whereas predictive bias is a statistical concept that refers to systematic errors in the prediction of individuals' criterion scores, selection fairness pertains to the extent to which selection results meet certain socio-political demands. The regression and equal-risk models adjust for differences in the criterion-on-test regression lines of different groups. The constant ratio, conditional probability and equal probability models manipulate the test cutoff scores of different groups so that certain ratios formed between different selection outcomes (correct acceptances, correct rejections, incorrect acceptances, incorrect rejections) are the same for such groups. The decision-theoretic approach requires that utilities be attached to these different outcomes for different groups. These procedures are not only eminently suited to accommodate calls for affirmative action, but they also serve the cause of transparency. Opsomming (Afrikaans summary, translated): This article provides an overview of several models for achieving fair selection in situations where under-represented groups tend to perform worse on selection tests. Whereas prediction bias is a statistical concept concerning systematic errors in the prediction of individuals' criterion scores, selection fairness concerns the extent to which selection results satisfy certain socio-political requirements. The regression and equal-risk models adjust for differences in the criterion-on-test regression lines of different groups. The constant-ratio, conditional-probability and equal-probability models manipulate the test cutoff points of different groups so that certain ratios between selection outcomes (correct acceptances, incorrect acceptances, correct rejections

  18. Neutron shielding calculations in a proton therapy facility based on Monte Carlo simulations and analytical models: Criterion for selecting the method of choice

    International Nuclear Information System (INIS)

    Titt, U.; Newhauser, W. D.

    2005-01-01

    Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results for which solutions have not converged, e.g. owing to too few particle histories, can be excluded and deterministic models used at those locations instead. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of the Monte Carlo simulation to be taken into account and reject all solutions with uncertainties larger than the design safety margins. In this study, an optimum rejection criterion of 10% was found. The mean ratio was 26; 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)

  19. Unified Bohm criterion

    Energy Technology Data Exchange (ETDEWEB)

    Kos, L. [LECAD Laboratory, Faculty of Mechanical Engineering, University of Ljubljana, SI-1000 Ljubljana (Slovenia); Tskhakaya, D. D.; Jelić, N. [Institute for Theoretical Physics, Fusion@ÖAW, University of Innsbruck, A-6020 Innsbruck (Austria)

    2015-09-15

    Recent decades have seen research, in the fluid and kinetic approximations separately, into the conditions necessary for the formation of a monotonic potential shape in the sheath that appears at plasma boundaries such as walls. Although either approach yields a formulation commonly known as the much-acclaimed Bohm criterion (BC), the respective results involve essentially different physical quantities to describe the behavior of the ion gas. In the fluid approach, such a quantity is clearly identified as the ion directional velocity. In the kinetic approach, the ion behavior is formulated via a quantity (the squared inverse velocity averaged over the ion distribution function) without any clear physical significance, which is, moreover, impractical. In the present paper, we try to explain this difference by deriving a condition called here the Unified Bohm Criterion, which combines an advanced fluid model with an upgraded explicit kinetic formula in a new form of the BC. By introducing a generalized polytropic coefficient function, the unified BC can be interpreted in a form that holds irrespective of whether the ions are described kinetically or in the fluid approximation.

  20. Key Determinant Derivations for Information Technology Disaster Recovery Site Selection by the Multi-Criterion Decision Making Method

    Directory of Open Access Journals (Sweden)

    Chia-Lee Yang

    2015-05-01

    Full Text Available Disaster recovery sites are an important mechanism for continuous IT system operations. Such mechanisms can sustain IT availability and reduce business losses during natural or human-made disasters. Concerning cost and risk aspects, IT disaster-recovery site selection problems are multi-criterion decision making (MCDM) problems in nature. For such problems, the decision aspects include the availability of the service, recovery time requirements, service performance, and more. The importance and complexity of IT disaster recovery sites increase with advances in IT and in the categories of possible disasters, so the modern IT disaster recovery site selection process requires further investigation. However, to the best of the authors' knowledge, very few researchers have studied these issues in recent years. Thus, this paper aims to derive the aspects and criteria for evaluating and selecting a modern IT disaster recovery site. A hybrid MCDM framework consisting of the Decision Making Trial and Evaluation Laboratory (DEMATEL) and the Analytic Network Process (ANP) is proposed to construct the complex influence relations between aspects as well as criteria and, further, to derive the weight associated with each aspect and criterion. The criteria with higher weights can be used for evaluating and selecting the most suitable IT disaster recovery sites. In the future, the proposed analytic framework can be used for evaluating and selecting a disaster recovery site for data centers by public institutes or private firms.

  1. P2-17: Individual Differences in Dynamic Criterion Shifts during Perceptual Decision Making

    Directory of Open Access Journals (Sweden)

    Issac Rhim

    2012-10-01

    Full Text Available Perceptual decision-making involves placing an optimal criterion on the axis of encoded sensory evidence to maximize the outcomes of choices. Optimal criterion setting becomes critical particularly when neural representations of sensory inputs are noisy and feedback on perceptual choices varies over time in an unpredictable manner. Here we monitored the time courses of decision criteria adopted by human subjects while abruptly shifting the criterion of stochastic feedback on their perceptual choices by a set amount, in an unpredictable direction and at an unpredictable point in time. Subjects viewed a brief (0.3 s), thin (0.07 deg) annulus around the fixation point and were forced to judge whether the annulus was smaller or larger than an unknown boundary. We estimated moment-to-moment criteria by fitting a cumulative Gaussian function to the data within a sliding window of trials locked to a shift in the feedback criterion. Unpredictable shifts in the feedback criterion successfully induced shifts of the actual decision criterion towards the optimal criterion for most subjects, though the time delay and amount of shift varied across individuals. There were disproportionately more overshooters (reaching and then surpassing the optimal criterion) than undershooters (falling short of it), with a significant anti-correlation between overshoot and sensory sensitivity. To find a mechanism that generates these individual differences, we developed a dynamic criterion learning model by modifying a reinforcement learning model, which assumes that the criterion is adjusted on every trial by a weighted discrepancy between actual and expected rewards.
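    The trial-by-trial updating rule sketched in the last sentence can be illustrated with a few lines of code. This is a minimal sketch, not the authors' implementation: the feedback boundary, learning rate, noise level and uniform stimulus range are invented for illustration, and the update is reduced to a delta rule that shifts the criterion only on error trials.

```python
import random

def simulate_criterion_learning(feedback_boundary=1.0, n_trials=4000,
                                learning_rate=0.02, noise_sd=0.2, seed=7):
    """Trial-by-trial criterion update driven by feedback errors.

    The observer judges whether a noisy percept exceeds its internal
    criterion; feedback is scored against a hidden feedback boundary.
    On an error the criterion shifts so the same mistake becomes less
    likely -- a delta-rule simplification of the reinforcement-learning
    account in the abstract.
    """
    rng = random.Random(seed)
    criterion = 0.0                                   # start far from the boundary
    for _ in range(n_trials):
        stimulus = rng.uniform(0.0, 2.0)              # annulus size (arbitrary units)
        percept = stimulus + rng.gauss(0.0, noise_sd) # noisy sensory encoding
        said_larger = percept > criterion
        correct = said_larger == (stimulus > feedback_boundary)
        if not correct:
            # "larger" error -> criterion too low; "smaller" error -> too high.
            criterion += learning_rate if said_larger else -learning_rate
    return criterion

final_c = simulate_criterion_learning()
```

    With these (invented) settings the criterion drifts from its initial value toward the hidden feedback boundary and then fluctuates around it, which is the qualitative behavior the abstract describes.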

  2. SOCIO-PSYCHOLOGICAL CRITERIA OF FAMILY LIFESTYLE TYPOLOGY

    Directory of Open Access Journals (Sweden)

    Yekaterina Anatolievna Yumkina

    2015-02-01

    Full Text Available The purpose of this article is to present socio-psychological criteria for a typology of family lifestyle, which were established through theoretical modelling and empirical research. This is important in both fundamental and practical respects. St. Petersburg students (n = 116, 19 to 21 years old) were examined with a special questionnaire, «Family relationship and home» (Kunitsina V.N., Yumkina Ye.A., 2012), which measures different aspects of family lifestyle. We also used a set of methods that gave us information about personal values, self-rating and parent-child relationships. The data were divided into six groups according to three main criteria of family lifestyle typology: the social environment of family life, family activity, and family interpersonal relationships. Statistically significant differences were found between pairs of groups on every criterion. The results can be useful in spheres dealing with family crisis, family development, family traditions, etc.

  3. Time to Criterion: An Experimental Study.

    Science.gov (United States)

    Anderson, Lorin W.

    The purpose of the study was to investigate the magnitude of individual differences in time-to-criterion and the stability of these differences. Time-to-criterion was defined in two ways: the amount of elapsed time required to attain the criterion level and the amount of on-task time required to attain the criterion level. Ninety students were…

  4. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that are based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherent in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and compare double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used in a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality with smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from the double BOOT estimates having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
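    The single-bootstrap (BOOT) idea — reselect the AIC-best model in each resample, record the PM effect it gives, and average across resamples — can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' code; the candidate predictor sets, sample size and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "time series": mortality driven by PM plus a confounder.
n = 300
pm   = rng.normal(size=n)
temp = rng.normal(size=n)
hum  = rng.normal(size=n)
y = 0.5 * pm + 0.3 * temp + rng.normal(scale=1.0, size=n)

# Candidate models: PM always included, confounders optional.
candidates = [["pm"], ["pm", "temp"], ["pm", "temp", "hum"]]
columns = {"pm": pm, "temp": temp, "hum": hum}

def fit_ols(names, rows):
    """OLS fit on a bootstrap index set; returns (coefficients, AIC)."""
    X = np.column_stack([np.ones(len(rows))] + [columns[v][rows] for v in names])
    beta, *_ = np.linalg.lstsq(X, y[rows], rcond=None)
    resid = y[rows] - X @ beta
    rss = float(resid @ resid)
    k = X.shape[1] + 1                      # coefficients + error variance
    aic = len(rows) * np.log(rss / len(rows)) + 2 * k
    return beta, aic

# BOOT: in each bootstrap resample, pick the AIC-best model and keep its
# PM coefficient; averaging allows for model-selection uncertainty.
B = 200
pm_effects = []
for _ in range(B):
    rows = rng.integers(0, n, size=n)
    fits = [fit_ols(names, rows) for names in candidates]
    best_beta, _ = min(fits, key=lambda f: f[1])
    pm_effects.append(best_beta[1])         # PM is always column 1
boot_estimate = float(np.mean(pm_effects))
```

    Double BOOT nests a second resampling loop inside each bootstrap iteration; the outer structure stays the same.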

  5. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling

    DEFF Research Database (Denmark)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief

    2018-01-01

    by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded...

  6. A systems evaluation model for selecting spent nuclear fuel storage concepts

    International Nuclear Information System (INIS)

    Postula, F.D.; Finch, W.C.; Morissette, R.P.

    1982-01-01

    This paper describes a system evaluation approach used to identify and evaluate monitored, retrievable fuel storage concepts that fulfill ten key criteria for meeting the functional requirements and system objectives of the National Nuclear Waste Management Program. The selection criteria include health and safety, schedules, costs, socio-economic factors and environmental factors. The methodology used to establish the selection criteria, develop a weight of importance for each criterion and assess the relative merit of each storage system is discussed. The impact of cost relative to technical criteria is examined along with experience in obtaining relative merit data and its application in the model. Topics considered include spent fuel storage requirements, functional requirements, preliminary screening, and Monitored Retrievable Storage (MRS) system evaluation. It is concluded that the proposed system evaluation model is universally applicable when many concepts in various stages of design and cost development need to be evaluated

  7. A simple stability criterion for CANDU figure-of-eight flow oscillations

    International Nuclear Information System (INIS)

    Gulshani, P.; Spinks, N.J.

    1983-01-01

    Potential flow oscillations in CANDU reactor primary heat transport system are analyzed in terms of a simple, linearized model. A simple, algebraic stability criterion is obtained. The model predictions are found to be in good agreement with those of thermohydraulic codes for high pressure natural circulation conditions. For normal operating conditions the criterion predicts the correct trend but overlooks important stabilizing effects. The model clarifies the instability mechanism; namely the response of enthalpy and, hence, pressure in the boiling region to flow change

  8. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, for both the training and test data sets (P > .05), and identified similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.
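    The core of the algorithm — multicollinearity reduction by the variance inflation factor (VIF), then a search for the model minimizing the Bayesian information criterion (BIC) — can be sketched as below. For brevity this sketch uses ordinary linear regression and an exhaustive subset search in place of the paper's ordinal logistic regression and genetic algorithm; all variable names and data are synthetic.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Synthetic cohort: two informative factors, one redundant (collinear) one.
n = 200
dose_mean = rng.normal(size=n)
dose_max  = dose_mean + rng.normal(scale=0.05, size=n)   # nearly collinear
age       = rng.normal(size=n)
smoker    = rng.integers(0, 2, size=n).astype(float)
y = 1.0 * dose_mean + 0.8 * smoker + rng.normal(scale=0.5, size=n)

features = {"dose_mean": dose_mean, "dose_max": dose_max,
            "age": age, "smoker": smoker}

def vif(name, names):
    """Variance inflation factor of one predictor against the others."""
    others = [v for v in names if v != name]
    X = np.column_stack([np.ones(n)] + [features[v] for v in others])
    beta, *_ = np.linalg.lstsq(X, features[name], rcond=None)
    resid = features[name] - X @ beta
    centered = features[name] - features[name].mean()
    r2 = 1.0 - float(resid @ resid) / float(centered @ centered)
    return 1.0 / (1.0 - r2)

# Step 1: iteratively drop the worst predictor while any VIF exceeds 10.
kept = list(features)
while True:
    vifs = {v: vif(v, kept) for v in kept}
    worst = max(vifs, key=vifs.get)
    if vifs[worst] <= 10.0 or len(kept) <= 2:
        break
    kept.remove(worst)

def bic(names):
    X = np.column_stack([np.ones(n)] + [features[v] for v in names])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss = float(resid @ resid)
    return n * np.log(rss / n) + (X.shape[1] + 1) * np.log(n)

# Step 2: exhaustive search over the surviving factors, minimizing BIC.
subsets = [list(c) for r in range(1, len(kept) + 1)
           for c in combinations(kept, r)]
best = min(subsets, key=bic)
```

    One of the two collinear dose variables is removed by the VIF step, and the BIC search then retains the genuinely predictive factors.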

  9. Earing Prediction in Cup Drawing using the BBC2008 Yield Criterion

    Science.gov (United States)

    Vrh, Marko; Halilovič, Miroslav; Starman, Bojan; Štok, Boris; Comsa, Dan-Sorin; Banabic, Dorel

    2011-08-01

    The paper deals with constitutive modelling of highly anisotropic sheet metals. It presents FEM-based earing predictions in cup drawing simulations of highly anisotropic aluminium alloys where more than four ears occur. For that purpose the BBC2008 yield criterion, a plane-stress yield criterion formulated in the form of a finite series, is used. The criterion thus defined can be expanded to retain more or fewer terms, depending on the amount of experimental data available. In order to use the model in sheet metal forming simulations we have implemented it in the general-purpose finite element code ABAQUS/Explicit via a VUMAT subroutine, considering alternatively eight or sixteen parameters (8p and 16p versions). For the integration of the constitutive model the explicit NICE (Next Increment Corrects Error) integration scheme has been used. Owing to the scheme's effectiveness, the CPU time consumed by a simulation is comparable to that of the built-in constitutive models. Two aluminium alloys, namely AA5042-H2 and AA2090-T3, have been used for validation of the model. For both alloys the parameters of the BBC2008 model have been identified with a developed numerical procedure, based on minimization of a purpose-built cost function. For both materials, the predictions of the BBC2008 model prove to be in very good agreement with the experimental results. The flexibility and accuracy of the model, together with the identification and integration procedures, guarantee the applicability of the BBC2008 yield criterion in industrial applications.

  10. On the upper bound in the Bohm sheath criterion

    Energy Technology Data Exchange (ETDEWEB)

    Kotelnikov, I. A., E-mail: I.A.Kotelnikov@inp.nsk.su; Skovorodin, D. I., E-mail: D.I.Skovorodin@inp.nsk.su [Russian Academy of Sciences, Budker Institute of Nuclear Physics, Siberian Branch (Russian Federation)

    2016-02-15

    The existence of an upper bound in the Bohm sheath criterion is discussed; according to this criterion, the Debye sheath at the interface between a plasma and a negatively charged electrode is stable only if the ion flow velocity in the plasma exceeds the ion sound velocity. It is stated that, with the exception of some artificial ionization models, the Bohm sheath criterion is satisfied as an equality at the lower bound and the ion flow velocity is equal to the speed of sound. In the one-dimensional theory, a supersonic flow appears only in an unrealistic model of a localized ion source whose size is less than the Debye length; however, supersonic flows seem to be possible in the two- and three-dimensional cases. The available numerical codes used to simulate charged-particle sources with a plasma emitter do not assume the presence of an upper bound in the Bohm sheath criterion; however, correspondence with experimental data is usually achieved when the ion flow velocity in the plasma is close to the ion sound velocity.

  11. Fuel-pin cladding transient failure strain criterion

    International Nuclear Information System (INIS)

    Bard, F.E.; Duncan, D.R.; Hunter, C.W.

    1983-01-01

    A criterion for cladding failure based on accumulated strain was developed for mixed uranium-plutonium oxide fuel pins and used to interpret the calculated strain results from failed transient fuel pin experiments conducted in the Transient Reactor Test (TREAT) facility. The new STRAIN criterion replaced a stress-based criterion that depends on the DORN parameter and that incorrectly predicted fuel pin failure for transient-tested fuel pins. This paper describes the STRAIN criterion and compares its predictions with those of the stress-based criterion

  12. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
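    A minimal version of the comparison described above — a single Gaussian versus a two-component mixture fitted by EM, scored by AIC — might look like this. The data and initialization are invented, and Middleton's Class A model itself is not implemented; this only shows the AIC-with-EM mechanics.

```python
import numpy as np

rng = np.random.default_rng(2)
# Data from a clearly bimodal two-component mixture.
x = np.concatenate([rng.normal(-3.0, 1.0, 400), rng.normal(3.0, 1.0, 400)])
n = x.size

# Model 1: single Gaussian, maximum-likelihood fit in closed form.
mu, var = x.mean(), x.var()
loglik_single = -0.5 * n * (np.log(2 * np.pi * var) + 1.0)
aic_single = 2 * 2 - 2 * loglik_single            # k = 2 (mean, variance)

# Model 2: two-component Gaussian mixture fitted by EM.
w, mu1, mu2, v1, v2 = 0.5, x.min(), x.max(), x.var(), x.var()
for _ in range(200):
    # E-step: responsibility of component 1 for each point.
    p1 = w * np.exp(-0.5 * (x - mu1) ** 2 / v1) / np.sqrt(2 * np.pi * v1)
    p2 = (1 - w) * np.exp(-0.5 * (x - mu2) ** 2 / v2) / np.sqrt(2 * np.pi * v2)
    r = p1 / (p1 + p2)
    # M-step: responsibility-weighted ML updates.
    w = r.mean()
    mu1 = (r * x).sum() / r.sum()
    mu2 = ((1 - r) * x).sum() / (1 - r).sum()
    v1 = (r * (x - mu1) ** 2).sum() / r.sum()
    v2 = ((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum()
loglik_mix = np.log(p1 + p2).sum()                # from the final E-step
aic_mix = 2 * 5 - 2 * loglik_mix                  # k = 5 parameters
```

    On bimodal data like this the mixture's AIC is far lower than the single Gaussian's, so the AIC correctly favors the mixture despite its extra parameters.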

  13. Development of failure criterion for Kevlar-epoxy fabric laminates

    Science.gov (United States)

    Tennyson, R. C.; Elliott, W. G.

    1984-01-01

    The development of the tensor polynomial failure criterion for composite laminate analysis is discussed. In particular, emphasis is given to the fabrication and testing of Kevlar-49 fabric (Style 285)/Narmco 5208 Epoxy. The quadratic failure criterion with F(12)=0 provides accurate estimates of failure stresses for the Kevlar/Epoxy investigated. The cubic failure criterion was re-cast into an operationally easier form, providing the engineer with design curves that can be applied to laminates fabricated from unidirectional prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exists at present to generalize this approach for all unidirectional prepregs, and its use must be restricted to the generic materials investigated to date.

  14. Zero mass field quantization and Kibble's long-range force criterion for the Goldstone theorem

    International Nuclear Information System (INIS)

    Wright, S.H.

    1981-01-01

    The central theme of the dissertation is an investigation of the long-range force criterion used by Kibble in his discussion of the Goldstone Theorem. This investigation is broken up into the following sections: I. Introduction. Spontaneous symmetry breaking, the Goldstone Theorem and the conditions under which it holds are discussed. II. Massless Wave Expansions. In order to make explicit calculations of the operator commutators used in applying Kibble's criterion, it is necessary to work out the operator expansions for a massless field. Unusual results are obtained which include operators corresponding to classical macroscopic field modes. III. The Kibble Criterion for Simple Models Exhibiting Spontaneously Broken Symmetries. The results of the previous section are applied to simple models with spontaneously broken symmetries, namely, the real scalar massless field and the Goldstone model without gauge coupling. IV. The Higgs Mechanism in Classical Field Theory. It is shown that the Higgs Mechanism has a simple interpretation in terms of classical field theory, namely, that it arises from a derivative coupling term between the Goldstone fields and the gauge fields. V. The Higgs Mechanism and Kibble's Criterion. This section draws together the material discussed in sections II to IV. Explicit calculations are made to evaluate Kibble's criterion on a Goldstone-Higgs type of model in the Coulomb gauge. It is found, as expected, that the criterion is not met, but not for reasons relating to the range of the mediating force. By referring to the findings of sections III and IV, it is concluded that the common denominator underlying both the Higgs Mechanism and the failure of Kibble's criterion is a structural aspect of the field equations: derivative coupling between fields

  15. Critical Length Criterion and the Arc Chain Model for Calculating the Arcing Time of the Secondary Arc Related to AC Transmission Lines

    International Nuclear Information System (INIS)

    Cong Haoxi; Li Qingmin; Xing Jinyuan; Li Jinsong; Chen Qiang

    2015-01-01

    The prompt extinction of the secondary arc is critical to the single-phase reclosing of AC transmission lines, including half-wavelength power transmission lines. In this paper, a low-voltage physical experimental platform was established and the motion process of the secondary arc was recorded by a high-speed camera. It was found that the arcing time of the secondary arc shows a close relationship with its arc length. Through an input and output power energy analysis of the secondary arc, a new critical length criterion for the arcing time was proposed. The arc chain model was then adopted to calculate the arcing time with both the traditional and the proposed critical length criteria, and the simulation results were compared with the experimental data. The study showed that the arcing time calculated from the new critical length criterion gives more accurate results, which can provide a reliable arcing-time criterion for modeling and simulation of the secondary arc associated with power transmission lines. (paper)

  16. Methods for selecting fixed-effect models for heterogeneous codon evolution, with comments on their application to gene and genome data.

    Science.gov (United States)

    Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P

    2007-02-08

    Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori
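    The two information criteria compared in the study are easy to state in code. The formulas are the standard ones (AIC = 2k − 2 ln L; AICc adds the small-sample correction 2k(k+1)/(n − k − 1)); the log-likelihood and parameter counts in the demonstration are arbitrary, not taken from the paper.

```python
def aic(loglik, k):
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    """Corrected AIC; the extra penalty matters when n/k is small."""
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

# Same fit quality, k = 10 parameters: the correction is large for a
# small data set but negligible for a large one.
small_n_penalty = aicc(-100.0, 10, 30) - aic(-100.0, 10)
large_n_penalty = aicc(-100.0, 10, 3000) - aic(-100.0, 10)
```

    This is why AICc is worth reporting alongside AIC when the number of codon sites per partition is modest relative to the number of model parameters.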

  17. Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.

    Science.gov (United States)

    Velichkin, Vladimir A.; Zavyalov, Vladimir A.

    2018-03-01

    This article presents the results of an analysis of thermal object control (heat exchangers, dryers, heat treatment chambers, etc.). The results were used to determine a mathematical model of a generalized thermal control object. An appropriate optimality criterion was chosen to make the control more energy-efficient. A mathematical programming task was formulated based on the chosen optimality criterion, the mathematical model of the control object, and the technological constraints. The “maximum energy efficiency” criterion made it possible to avoid solving a system of nonlinear differential equations and to solve the formulated mathematical programming problem analytically. It should be noted that, in the case under review, the search for the optimal control and optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.

  18. Sensor Calibration Design Based on D-Optimality Criterion

    Directory of Open Access Journals (Sweden)

    Hajiyev Chingiz

    2016-09-01

    Full Text Available In this study, a procedure is proposed for the optimal selection of measurement points, using the D-optimality criterion, to find the best calibration curves of measurement sensors. The coefficients of the calibration curve are evaluated by applying the classical Least Squares Method (LSM). As an example, the problem of optimally selecting standard pressure setters when calibrating a differential pressure sensor is solved. The values obtained at the D-optimal measurement points for calibration of the differential pressure sensor are compared with those from actual experiments, and the calibration errors corresponding to the D-optimal, A-optimal and equidistant calibration curves are compared.
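    For a straight-line calibration curve y = a + b·x, the D-optimality criterion (maximize det(XᵀX)) can be applied by brute force over a small grid of candidate set points, with the classical LSM fit then done at the selected points. This is an illustrative sketch, not the paper's procedure; the grid, design size and calibration coefficients are invented. For a linear model the search recovers the textbook result that the points split evenly between the endpoints of the range.

```python
import numpy as np
from itertools import combinations_with_replacement

# Candidate set points (pressure normalized to [0, 1]); model y = a + b*x,
# so each measurement contributes a design row (1, x).
grid = [0.0, 0.25, 0.5, 0.75, 1.0]

def d_value(design):
    """D-optimality objective: determinant of the information matrix."""
    X = np.array([[1.0, x] for x in design])
    return np.linalg.det(X.T @ X)

# Enumerate all 4-point designs and keep the one maximizing det(X'X).
best = max(combinations_with_replacement(grid, 4), key=d_value)

# Classical LSM fit at the chosen points (noise-free example readings
# from a hypothetical sensor with a = 2, b = 3).
X = np.array([[1.0, x] for x in best])
y = 2.0 + 3.0 * X[:, 1]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    Duplicated end points are not wasted measurements: for a two-parameter line, replicating the extremes minimizes the variance of the fitted coefficients.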

  19. Prediction of Hot Tearing Using a Dimensionless Niyama Criterion

    Science.gov (United States)

    Monroe, Charles; Beckermann, Christoph

    2014-08-01

    The dimensionless form of the well-known Niyama criterion is extended to include the effect of applied strain. Under applied tensile strain, the pressure drop in the mushy zone is enhanced and pores grow beyond typical shrinkage porosity without deformation. This porosity growth can be expected to align perpendicular to the applied strain and to contribute to hot tearing. A model to capture this coupled effect of solidification shrinkage and applied strain on the mushy zone is derived. The dimensionless Niyama criterion can be used to determine the critical liquid fraction value below which porosity forms. This critical value is a function of alloy properties, solidification conditions, and strain rate. Once a dimensionless Niyama criterion value is obtained from thermal and mechanical simulation results, the corresponding shrinkage and deformation pore volume fractions can be calculated. The novelty of the proposed method lies in using the critical liquid fraction at the critical pressure drop within the mushy zone to determine the onset of hot tearing. The magnitude of pore growth due to shrinkage and deformation is plotted as a function of the dimensionless Niyama criterion for an Al-Cu alloy as an example. Furthermore, a typical hot tear "lambda"-shaped curve showing deformation pore volume as a function of alloy content is produced for two Niyama criterion values.
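    For reference, the classic (dimensional) Niyama value that the dimensionless criterion builds on is simply the local thermal gradient divided by the square root of the cooling rate. A small helper with illustrative units; the dimensionless extension in the paper additionally folds in alloy properties, the critical pressure drop and the applied strain rate, and is not reproduced here.

```python
import math

def niyama(thermal_gradient, cooling_rate):
    """Classic Niyama criterion Ny = G / sqrt(Tdot).

    thermal_gradient: G, e.g. in K/m; cooling_rate: |dT/dt|, e.g. in K/s.
    Low values flag a risk of shrinkage porosity in the mushy zone.
    """
    return thermal_gradient / math.sqrt(cooling_rate)

ny = niyama(1000.0, 4.0)   # example values, not from the paper
```

    Note that Ny scales linearly with the gradient but only with the inverse square root of the cooling rate, which is why slowly cooled, shallow-gradient regions are the usual porosity hot spots.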

  20. The Bohm criterion for rf discharges

    International Nuclear Information System (INIS)

    Meijer, P.M.; Goedheer, W.J.

    1991-01-01

    The well-known dc Bohm criterion is extended to rf discharges. Both the low-frequency (ω_rf ≪ ω_pi) and high-frequency (ω_pi ≪ ω_rf) regimes are considered. For low frequencies, the dc Bohm criterion holds. This criterion states that the initial energy of the ions entering the sheath must exceed a limit in order to obtain a stable sheath. For high frequencies, a modified limit is derived, which is somewhat lower than that of the dc Bohm criterion. The resulting ion current density in a high-frequency sheath is only a few percent lower than that for the dc case
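    In the low-frequency limit, where the dc Bohm criterion holds, the bound is the ion sound speed c_s = sqrt(k_B·T_e/m_i) in the cold-ion limit. A minimal numeric check, assuming a hydrogen plasma; the 10 eV electron temperature is an arbitrary example value, not taken from the record.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C (exact SI value)
M_PROTON = 1.67262192e-27    # proton mass, kg

def bohm_speed(te_ev, ion_mass=M_PROTON):
    """Lower bound of the dc Bohm criterion: the ion sound speed
    c_s = sqrt(k_B * T_e / m_i) in the cold-ion limit.
    T_e is given in eV, so k_B * T_e = e * T_e joules."""
    return math.sqrt(te_ev * E_CHARGE / ion_mass)

cs = bohm_speed(10.0)   # ~3.1e4 m/s for a 10 eV hydrogen plasma
```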

  1. Applying a new criterion to predict glass forming alloys in the Zr–Ni–Cu ternary system

    Energy Technology Data Exchange (ETDEWEB)

    Déo, L.P., E-mail: leonardopratavieira@gmail.com [Universidade de São Paulo, EESC, SMM - Av. Trabalhador São Carlense, 400 – São Carlos, SP 13566-590 (Brazil); Mendes, M.A.B., E-mail: marcio.andreato@gmail.com [Universidade Federal de São Carlos, DEMa - Rod. Washington Luiz, Km 235 – São Carlos, SP 13565-905 (Brazil); Costa, A.M.S., E-mail: alexmatos1980@gmail.com [Universidade de São Paulo, DEMAR, EEL – Polo Urbo-Industrial Gleba AI-6, s/n – Lorena, SP 12600-970 (Brazil); Campos Neto, N.D., E-mail: nelsonddcn@gmail.com [Universidade de São Paulo, EESC, SMM - Av. Trabalhador São Carlense, 400 – São Carlos, SP 13566-590 (Brazil); Oliveira, M.F. de, E-mail: falcao@sc.usp.br [Universidade de São Paulo, EESC, SMM - Av. Trabalhador São Carlense, 400 – São Carlos, SP 13566-590 (Brazil)

    2013-03-15

    Highlights: ► A calculation to predict and select the glass forming ability (GFA) of metallic alloys in the Zr–Ni–Cu system. ► Good correlation between theoretical and experimental GFA of the samples. ► Samples characterized mainly by a combination of X-ray diffraction (XRD) and differential scanning calorimetry (DSC). ► Oxygen impurity dramatically reduced the GFA. ► The selection criterion used opens the possibility of obtaining new amorphous alloys, reducing trial-and-error experimentation. -- Abstract: A new criterion has recently been proposed to predict and select the glass forming ability (GFA) of metallic alloys. It was found that the critical cooling rate for glass formation (R_c) correlates well with a proper combination of two factors: the minimum topological instability (λ_min) and the thermodynamic parameter (Δh). The λ_min criterion is based on the concept of topological instability of stable crystalline structures, while Δh depends on the average work function difference (Δϕ) and the average electron density difference (Δn_ws^(1/3)) among the constituent elements of the alloy. In the present work, the selection criterion was applied to the Zr–Ni–Cu system and its predictability was analyzed experimentally. Ribbon-shaped and splat-shaped samples were produced by melt-spinning and splat-cooling techniques, respectively. The crystallization content and behavior were analyzed by X-ray diffraction (XRD) and differential scanning calorimetry (DSC), respectively. The results showed a good correlation between the theoretical GFA values and the amorphous phase percentages found in different alloy compositions.

  2. Towards chaos criterion in quantum field theory

    OpenAIRE

    Kuvshinov, V. I.; Kuzmin, A. V.

    2002-01-01

    A chaos criterion for quantum field theory is proposed. Its correspondence with the classical chaos criterion in the semi-classical regime is shown. It is demonstrated for a real scalar field that the proposed chaos criterion can be used to investigate the stability of classical solutions of the field equations.

  3. A New Multiaxial High-Cycle Fatigue Criterion Based on the Critical Plane for Ductile and Brittle Materials

    Science.gov (United States)

    Wang, Cong; Shang, De-Guang; Wang, Xiao-Wei

    2015-02-01

    An improved high-cycle multiaxial fatigue criterion based on the critical plane is proposed in this paper. The critical plane is defined as the plane of maximum shear stress (MSS) in the proposed multiaxial fatigue criterion, which differs from the traditional critical plane based on the MSS amplitude. The proposed criterion is extended into a fatigue life prediction model applicable to both ductile and brittle materials. The fatigue life prediction model based on the proposed high-cycle multiaxial fatigue criterion was validated with experimental results obtained from tests of 7075-T651 aluminum alloy and from the literature.

  4. Optimal order and time-step criterion for Aarseth-type N-body integrators

    International Nuclear Information System (INIS)

    Makino, Junichiro

    1991-01-01

    How the selection of the time-step criterion and the order of the integrator change the efficiency of Aarseth-type N-body integrators is discussed. An alternative to Aarseth's scheme based on the direct calculation of the time derivative of the force using the Hermite interpolation is compared to Aarseth's scheme, which uses the Newton interpolation to construct the predictor and corrector. How the number of particles in the system changes the behavior of integrators is examined. The Hermite scheme allows a time step twice as large as that for the standard Aarseth scheme for the same accuracy. The calculation cost of the Hermite scheme per time step is roughly twice as much as that of the standard Aarseth scheme. The optimal order of the integrators depends on both the particle number and the accuracy required. The time-step criterion of the standard Aarseth scheme is found to be inapplicable to higher-order integrators, and a more uniformly reliable criterion is proposed. 18 refs
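
    The contrast between the two schemes can be sketched in code. Below is a minimal sketch (not Makino's implementation) of one 4th-order Hermite predictor-corrector step for a test particle orbiting a fixed point mass with G = 1. The force routine returns both the acceleration and its analytic time derivative (the jerk), which is the quantity the standard Aarseth scheme would instead reconstruct by Newton interpolation of past force values.

```python
import math

def accel_jerk(x, v, M=1.0):
    """Acceleration and jerk of a test particle about a fixed mass M at the origin (G = 1)."""
    r2 = x[0] * x[0] + x[1] * x[1]
    r3 = r2 * math.sqrt(r2)
    rv = x[0] * v[0] + x[1] * v[1]
    a = (-M * x[0] / r3, -M * x[1] / r3)
    # d(a)/dt = -M * (v / r^3 - 3 (r.v) r / r^5)
    j = (-M * (v[0] / r3 - 3 * rv * x[0] / (r3 * r2)),
         -M * (v[1] / r3 - 3 * rv * x[1] / (r3 * r2)))
    return a, j

def hermite_step(x, v, dt, M=1.0):
    """One 4th-order Hermite predictor-corrector step."""
    a0, j0 = accel_jerk(x, v, M)
    # Predictor: Taylor expansion including the jerk.
    xp = tuple(x[i] + v[i] * dt + a0[i] * dt**2 / 2 + j0[i] * dt**3 / 6 for i in range(2))
    vp = tuple(v[i] + a0[i] * dt + j0[i] * dt**2 / 2 for i in range(2))
    # Evaluate force and jerk at the predicted point, then correct.
    a1, j1 = accel_jerk(xp, vp, M)
    vc = tuple(v[i] + (a0[i] + a1[i]) * dt / 2 + (j0[i] - j1[i]) * dt**2 / 12 for i in range(2))
    xc = tuple(x[i] + (v[i] + vc[i]) * dt / 2 + (a0[i] - a1[i]) * dt**2 / 12 for i in range(2))
    return xc, vc
```

    For a circular orbit of radius 1 around a unit mass, stepping with dt = 0.001 conserves the energy E = v²/2 − 1/r essentially to machine precision over a full orbit, consistent with the scheme's 4th-order accuracy.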

  5. The stressor criterion for posttraumatic stress disorder: Does it matter?

    Science.gov (United States)

    Roberts, Andrea L.; Dohrenwend, Bruce P.; Aiello, Allison; Wright, Rosalind J.; Maercker, Andreas; Galea, Sandro; Koenen, Karestan C.

    2013-01-01

    Objective The definition of the stressor criterion for posttraumatic stress disorder (“Criterion A1”) is hotly debated with major revisions being considered for DSM-V. We examine whether symptoms, course, and consequences of PTSD vary predictably with the type of stressful event that precipitates symptoms. Method We used data from the 2009 PTSD diagnostic subsample (N=3,013) of the Nurses Health Study II. We asked respondents about exposure to stressful events qualifying under 1) DSM-III, 2) DSM-IV, or 3) not qualifying under DSM Criterion A1. Respondents selected the event they considered worst and reported subsequent PTSD symptoms. Among participants who met all other DSM-IV PTSD criteria, we compared distress, symptom severity, duration, impairment, receipt of professional help, and nine physical, behavioral, and psychiatric sequelae (e.g. physical functioning, unemployment, depression) by precipitating event group. Various assessment tools were used to determine fulfillment of PTSD Criteria B through F and to assess these 14 outcomes. Results Participants with PTSD from DSM-III events reported on average 1 more symptom (DSM-III mean=11.8 symptoms, DSM-IV=10.7, non-DSM=10.9) and more often reported symptoms lasted one year or longer compared to participants with PTSD from other groups. However, sequelae of PTSD did not vary systematically with precipitating event type. Conclusions Results indicate the stressor criterion as defined by the DSM may not be informative in characterizing PTSD symptoms and sequelae. In the context of ongoing DSM-V revision, these results suggest that Criterion A1 could be expanded in DSM-V without much consequence for our understanding of PTSD phenomenology. Events not considered qualifying stressors under the DSM produced PTSD as consequential as PTSD following DSM-III events, suggesting PTSD may be an aberrantly severe but nonspecific stress response syndrome. PMID:22401487

  6. A complete graphical criterion for the adjustment formula in mediation analysis.

    Science.gov (United States)

    Shpitser, Ilya; VanderWeele, Tyler J

    2011-03-04

    Various assumptions have been used in the literature to identify natural direct and indirect effects in mediation analysis. These effects are of interest because they allow for effect decomposition of a total effect into a direct and indirect effect even in the presence of interactions or non-linear models. In this paper, we consider the relation and interpretation of various identification assumptions in terms of causal diagrams interpreted as a set of non-parametric structural equations. We show that for such causal diagrams, two sets of assumptions for identification that have been described in the literature are in fact equivalent in the sense that if either set of assumptions holds for all models inducing a particular causal diagram, then the other set of assumptions will also hold for all models inducing that diagram. We moreover build on prior work concerning a complete graphical identification criterion for covariate adjustment for total effects to provide a complete graphical criterion for using covariate adjustment to identify natural direct and indirect effects. Finally, we show that this criterion is equivalent to the two sets of independence assumptions used previously for mediation analysis.

  7. Surrogate screening models for the low physical activity criterion of frailty.

    Science.gov (United States)

    Eckel, Sandrah P; Bandeen-Roche, Karen; Chaves, Paulo H M; Fried, Linda P; Louis, Thomas A

    2011-06-01

    Low physical activity, one of five criteria in a validated clinical phenotype of frailty, is assessed by a standardized, semiquantitative questionnaire on up to 20 leisure time activities. Because of the time demanded to collect the interview data, it has been challenging to translate to studies other than the Cardiovascular Health Study (CHS), for which it was developed. Considering subsets of activities, we identified and evaluated streamlined surrogate assessment methods and compared them to one implemented in the Women's Health and Aging Study (WHAS). Using data on men and women ages 65 and older from the CHS, we applied logistic regression models to rank activities by "relative influence" in predicting low physical activity. We considered subsets of the most influential activities as inputs to potential surrogate models (logistic regressions). We evaluated predictive accuracy and predictive validity using the area under receiver operating characteristic curves, and assessed criterion validity using proportional hazards models relating frailty status (defined using the surrogate) to mortality. Walking for exercise and moderately strenuous household chores were highly influential for both genders. Women required fewer activities than men for accurate classification. The WHAS model (8 CHS activities) was an effective surrogate, but a surrogate using 6 activities (walking, chores, gardening, general exercise, mowing and golfing) was also highly predictive. We recommend a 6-activity questionnaire to assess physical activity for men and women. If efficiency is essential and the study involves only women, fewer activities can be included.

  8. A characterization of optimal portfolios under the tail mean-variance criterion

    OpenAIRE

    Owadally, I.; Landsman, Z.

    2013-01-01

    The tail mean–variance model was recently introduced for use in risk management and portfolio choice; it involves a criterion that focuses on the risk of rare but large losses, which is particularly important when losses have heavy-tailed distributions. If returns or losses follow a multivariate elliptical distribution, the use of risk measures that satisfy certain well-known properties is equivalent to risk management in the classical mean–variance framework. The tail mean–variance criterion...

  9. Role of optimization criterion in static asymmetric analysis of lumbar spine load.

    Science.gov (United States)

    Daniel, Matej

    2011-10-01

    A common method for load estimation in biomechanics is inverse dynamics optimization, where the muscle activation pattern is found by minimizing or maximizing an optimization criterion. It has been shown that various optimization criteria predict remarkably similar muscle activation patterns and intra-articular contact forces during leg motion. The aim of this paper is to study the effect of the choice of optimization criterion on L4/L5 loading during static asymmetric loading. Upright standing with a weight in one outstretched arm was taken as a representative position. A musculoskeletal model of the lumbar spine was created from CT images of the Visible Human Project. Several criteria were tested, based on the minimization of muscle forces, muscle stresses, and spinal load. All criteria provide the same level of lumbar spine loading (differences below 25%), except the criterion of minimum lumbar shear force, which predicts an unrealistically high spinal load and should not be considered further. The estimated spinal load and predicted muscle activation pattern are in accordance with intradiscal pressure and EMG measurements. Using the criterion of minimum muscle stress cubed, L4/L5 spinal loads of 1312 N, 1674 N, and 1993 N were predicted for hand-held masses of 2, 5, and 8 kg, respectively. As the optimization criteria do not considerably affect the spinal load, their choice is not critical in further clinical or ergonomic studies, and a computationally simpler criterion can be used.
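
    A sketch of how such a criterion yields muscle forces: minimizing the sum of cubed muscle stresses Σ(F_i/A_i)³ subject to a single moment-equilibrium constraint Σ r_i·F_i = M has a closed-form KKT solution. The two-muscle reduction and the numbers below are illustrative, not the paper's lumbar model.

```python
import math

def min_stress_cubed_forces(areas, arms, moment):
    """Muscle forces minimizing sum((F_i / A_i)**3) subject to
    sum(r_i * F_i) == moment with F_i >= 0 (closed-form KKT solution).

    areas: physiological cross-sections A_i; arms: moment arms r_i."""
    # Stationarity gives F_i proportional to A_i**1.5 * sqrt(r_i);
    # the equilibrium constraint fixes the common scale factor.
    scale = moment / sum((r * a) ** 1.5 for a, r in zip(areas, arms))
    return [scale * a ** 1.5 * math.sqrt(r) for a, r in zip(areas, arms)]
```

    The solution equalizes the marginal cost of moment production, 3(F_i/A_i)²/(A_i·r_i), across all active muscles, which is what drives the characteristic load sharing of stress-based criteria.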

  10. A convenient accuracy criterion for time domain FE-calculations

    DEFF Research Database (Denmark)

    Jensen, Morten Skaarup

    1997-01-01

    An accuracy criterion that is well suited to time domain finite element (FE) calculations is presented. It is then used to develop a method for selecting time steps and element meshes that produce accurate results without significantly overburdening the computer. Use of this method is illustrated...... with a simple example, where comparison with an analytical solution shows that results are sufficiently accurate, which is not always the case with more primitive methods for determining the discretisation....

  11. A Criterion for Stability of Synchronization and Application to Coupled Chua's Systems

    International Nuclear Information System (INIS)

    Wang Haixia; Lu Qishao; Wang Qingyun

    2009-01-01

    We investigate synchronization in an array network of nearest-neighbor coupled chaotic oscillators. Using the Lyapunov stability theory and matrix theory, a criterion for the stability of complete synchronization is deduced. Meanwhile, an estimate of the critical coupling strength is obtained that ensures chaos synchronization is achieved. As an example application, a model of coupled Chua's circuits with linearly bidirectional coupling is studied to verify the validity of the criterion. (general)

  12. A work criterion for plastic collapse

    International Nuclear Information System (INIS)

    Muscat, Martin; Mackenzie, Donald; Hamilton, Robert

    2003-01-01

    A new criterion for evaluating limit and plastic loads in pressure vessel design by analysis is presented. The proposed criterion is based on the plastic work dissipated in the structure as loading progresses and may be used for structures subject to a single load or a combination of multiple loads. Example analyses show that limit and plastic loads given by the plastic work criterion are robust and consistent. The limit and plastic loads are determined purely by the inelastic response of the structure and are not influenced by the initial elastic response: a problem with some established plastic criteria

  13. Multiaxial fatigue criterion based on parameters from torsion and axial S-N curve

    Directory of Open Access Journals (Sweden)

    M. Margetin

    2016-07-01

    Full Text Available Multiaxial high-cycle fatigue is a topic that concerns nearly all industrial domains. In recent years, many recommendations on how to address multiaxial fatigue lifetime estimation have been made, and considerable progress in the field has been achieved. Until now, however, no universal criterion for multiaxial fatigue has been proposed. Addressing this situation, this paper offers the design of a new multiaxial criterion for high-cycle fatigue. The criterion is based on a critical plane search. The damage parameter consists of a combination of the normal and shear stresses on the critical plane (the plane with the maximal shear stress amplitude). The material parameters used in the proposed criterion are obtained from the torsion and axial S-N curves. The proposed criterion correctly calculates lifetime for the boundary loading conditions (pure torsion and pure axial loading). Application of the proposed model is demonstrated on biaxial loading, and the results are verified in a testing program using specimens made from S355 steel. Fatigue material parameters for the proposed criterion and multiple sets of data for different combinations of axial and torsional loading were obtained during the experiment.
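
    The critical-plane search at the heart of such criteria can be sketched for 2D plane stress: scan candidate plane orientations, resolve the stress history onto each plane with the standard transformation equations, and keep the plane with the largest shear stress amplitude. This is an illustrative skeleton, not the paper's full damage parameter.

```python
import math

def critical_plane(history, n_planes=180):
    """Scan candidate plane orientations under 2D plane stress and return
    (angle_deg, shear_amplitude, max_normal_stress) for the plane with the
    largest shear stress amplitude.  history: list of (sx, sy, txy) states."""
    best = None
    for k in range(n_planes):
        th = math.pi * k / n_planes
        c2, s2 = math.cos(2 * th), math.sin(2 * th)
        # Stress transformation: shear and normal stress on the plane at angle th.
        tau = [-(sx - sy) / 2 * s2 + txy * c2 for sx, sy, txy in history]
        sn = [(sx + sy) / 2 + (sx - sy) / 2 * c2 + txy * s2 for sx, sy, txy in history]
        amp = (max(tau) - min(tau)) / 2
        if best is None or amp > best[1]:
            best = (math.degrees(th), amp, max(sn))
    return best
```

    For a fully reversed uniaxial history the routine recovers the textbook result: the critical plane lies at 45° to the loading axis with a shear amplitude of half the stress amplitude.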

  14. Some properties of the computable cross-norm criterion for separability

    International Nuclear Information System (INIS)

    Rudolph, Oliver

    2003-01-01

    The computable cross-norm (CCN) criterion is a powerful analytical and computable separability criterion for bipartite quantum states, which is also known to systematically detect bound entanglement. In certain aspects this criterion complements the well-known Peres positive partial transpose (PPT) criterion. In the present paper we study important analytical properties of the CCN criterion. We show that in contrast to the PPT criterion it is not sufficient in dimension 2x2. In higher dimensions, theorems connecting the fidelity of a quantum state with the CCN criterion are proved. We also analyze the behavior of the CCN criterion under local operations and identify the operations that leave it invariant. It turns out that the CCN criterion is in general not invariant under local operations

  15. Statistical criterion for Bubbly-slug flow transition

    Energy Technology Data Exchange (ETDEWEB)

    Zigler, J; Elias, E [Technion-Israel Inst. of Tech., Haifa (Israel). Dept. of Mechanical Engineering

    1996-12-01

    The investigation of flow pattern transitions remains an interesting problem in multiphase flow research. It has been studied theoretically, and experimental confirmation of the models has been found by many investigators. The present paper deals with a statistical approach to bubbly-slug transitions in vertical upward two-phase flow, and a new transition criterion is deduced from experimental data (authors).

  16. Simulation of selected genealogies.

    Science.gov (United States)

    Slade, P F

    2000-02-01

    Algorithms for generating genealogies with selection conditional on the sample configuration of n genes in one-locus, two-allele haploid and diploid models are presented. Enhanced integro-recursions using the ancestral selection graph, introduced by S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237) as the non-neutral analogue of the coalescent, enable accessible simulation of the embedded genealogy. A Monte Carlo simulation scheme based on that of R. C. Griffiths and S. Tavaré (1996, Math. Comput. Modelling 23, 141-158) is adopted to consider the estimation of ancestral times under selection. Simulations show that selection alters the expected depth of the conditional ancestral trees, depending on the mutation-selection balance. As a consequence, branch lengths are shown to be an ineffective criterion for detecting the presence of selection. Several examples are given which quantify the effects of selection on the conditional expected time to the most recent common ancestor. Copyright 2000 Academic Press.
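
    As a baseline for such simulations, the neutral coalescent (no selection) is easy to simulate directly: with k lineages the waiting time to the next coalescence is exponential with rate k(k−1)/2 in units of N generations, and the expected time to the MRCA is 2(1 − 1/n). The sketch below covers this neutral case only; the ancestral selection graph used in the paper additionally inserts branching events at a rate tied to the selection coefficient.

```python
import random

def tmrca(n, rng):
    """One draw of the time to the MRCA of an n-sample under the neutral
    coalescent (time in units of N generations)."""
    t, k = 0.0, n
    while k > 1:
        t += rng.expovariate(k * (k - 1) / 2)  # coalescence rate with k lineages
        k -= 1
    return t

def mean_tmrca(n, reps=20000, seed=1):
    """Monte Carlo estimate of E[T_MRCA]; the exact value is 2 * (1 - 1/n)."""
    rng = random.Random(seed)
    return sum(tmrca(n, rng) for _ in range(reps)) / reps
```

    Comparing the simulated mean against 2(1 − 1/n) is a quick sanity check before layering selection on top.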

  17. Discussion on verification criterion and method of human factors engineering for nuclear power plant controller

    International Nuclear Information System (INIS)

    Yang Hualong; Liu Yanzi; Jia Ming; Huang Weijun

    2014-01-01

    In order to prevent or reduce human error and ensure the safe operation of nuclear power plants, control device should be verified from the perspective of human factors engineering (HFE). The domestic and international human factors engineering guidelines about nuclear power plant controller were considered, the verification criterion and method of human factors engineering for nuclear power plant controller were discussed and the application examples were provided for reference in this paper. The results show that the appropriate verification criterion and method should be selected to ensure the objectivity and accuracy of the conclusion. (authors)

  18. On the Modified Barkhausen Criterion

    DEFF Research Database (Denmark)

    Lindberg, Erik; Murali, K.

    2016-01-01

    Oscillators are normally designed according to the Modified Barkhausen Criterion, i.e. the complex pole pair is moved out into the RHP so that the linear circuit becomes unstable. By means of the Mancini Phaseshift Oscillator it is demonstrated that the distortion of the oscillator may be minimized by introducing a nonlinear "Hewlett Resistor" so that the complex pole-pair is in the RHP for small signals and in the LHP for large signals, i.e. the complex pole pair of the instant linearized small-signal model is moving around the imaginary axis in the complex frequency plane.

  19. Combining epidemiologic and biostatistical tools to enhance variable selection in HIV cohort analyses.

    Directory of Open Access Journals (Sweden)

    Christopher Rentsch

    Full Text Available BACKGROUND: Variable selection is an important step in building a multivariate regression model, for which several methods and statistical packages are available. A comprehensive approach for variable selection in complex multivariate regression analyses within HIV cohorts is explored by utilizing both epidemiological and biostatistical procedures. METHODS: Three different methods for variable selection were illustrated in a study comparing survival time between subjects in the Department of Defense's HIV Natural History Study and the Atlanta Veterans Affairs Medical Center's HIV Atlanta VA Cohort Study. The first two methods were stepwise selection procedures, based either on significance tests (Score test) or on information theory (Akaike Information Criterion), while the third method employed a Bayesian argument (Bayesian Model Averaging). RESULTS: All three methods resulted in a similar parsimonious survival model. Three of the covariates previously used in the multivariate model were not included in the final model suggested by the three approaches. When comparing the parsimonious model to the previously published model, there was evidence of less variance in the main survival estimates. CONCLUSIONS: The variable selection approaches considered in this study allowed building a model based on significance tests, on an information criterion, and on averaging models using their posterior probabilities. A parsimonious model that balanced these three approaches was found to provide a better fit than the previously reported model.
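
    A minimal sketch of the second method, stepwise selection by AIC, for a linear model: greedily add the covariate that most lowers AIC = n·ln(RSS/n) + 2k and stop when no addition improves it. The covariate names and data are illustrative, and a real analysis would use a statistics package rather than this toy OLS solver.

```python
import math

def _ols_rss(X, y):
    """Residual sum of squares of an OLS fit via the normal equations
    (tiny Gaussian elimination; fine for a sketch, not for ill-conditioned data)."""
    n, p = len(y), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    c = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv], c[col], c[piv] = A[piv], A[col], c[piv], c[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (c[r] - sum(A[r][k] * beta[k] for k in range(r + 1, p))) / A[r][r]
    return sum((y[i] - sum(X[i][j] * beta[j] for j in range(p))) ** 2 for i in range(n))

def forward_select_aic(candidates, y):
    """Greedy forward selection: each round add the covariate that lowers
    AIC = n*ln(RSS/n) + 2k the most; stop when no addition improves AIC."""
    n = len(y)
    def aic_of(names):
        X = [[1.0] + [candidates[v][i] for v in names] for i in range(n)]
        k = len(names) + 2  # slopes + intercept + error variance
        return n * math.log(_ols_rss(X, y) / n) + 2 * k
    chosen, best = [], aic_of([])
    while True:
        trials = [(aic_of(chosen + [v]), v) for v in candidates if v not in chosen]
        if not trials or min(trials)[0] >= best:
            return chosen, best
        best, v = min(trials)
        chosen.append(v)
```

    On synthetic data where only one covariate carries signal, the truly informative variable is picked up first.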

  20. Development of a Model for Dynamic Recrystallization Consistent with the Second Derivative Criterion

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2017-11-01

    Full Text Available Dynamic recrystallization (DRX) processes are widely used in industrial hot working operations, not only to keep the forming forces low but also to control the microstructure and final properties of the workpiece. According to the second derivative criterion (SDC) by Poliak and Jonas, the onset of DRX can be detected from an inflection point in the strain-hardening rate as a function of flow stress. Various models are available that can predict the evolution of flow stress from incipient plastic flow up to steady-state deformation in the presence of DRX. Some of these models have been implemented into finite element codes and are widely used for the design of metal forming processes, but their consistency with the SDC has not been investigated. This work identifies three sources of inconsistency that models for DRX may exhibit. For consistent modeling of the DRX kinetics, a new strain-hardening model for the hardening stages III to IV is proposed and combined with consistent recrystallization kinetics. The model is devised in the Kocks-Mecking space based on characteristic transitions in the strain-hardening rate. A linear variation of the transition and inflection points is observed for alloy 800H at all tested temperatures and strain rates. The comparison of experimental and model results shows that the model follows the course of the strain-hardening rate very precisely, such that highly accurate flow stress predictions are obtained.
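
    The second derivative criterion itself is straightforward to apply numerically: estimate the strain-hardening rate θ = dσ/dε by finite differences, then locate the sign change of d²θ/dσ². The sketch below runs on a synthetic flow curve constructed so that θ(σ) has its inflection at σ = 200 (illustrative units, not alloy 800H data).

```python
def drx_onset(strain, stress):
    """Second derivative criterion: locate the inflection of the strain-hardening
    rate theta = d(sigma)/d(epsilon) as a function of flow stress sigma."""
    theta = [(stress[i + 1] - stress[i]) / (strain[i + 1] - strain[i])
             for i in range(len(stress) - 1)]
    sig = [(stress[i + 1] + stress[i]) / 2 for i in range(len(stress) - 1)]
    # First and second finite-difference derivatives of theta w.r.t. sigma.
    d1 = [(theta[i + 1] - theta[i]) / (sig[i + 1] - sig[i]) for i in range(len(theta) - 1)]
    s1 = [(sig[i + 1] + sig[i]) / 2 for i in range(len(theta) - 1)]
    d2 = [(d1[i + 1] - d1[i]) / (s1[i + 1] - s1[i]) for i in range(len(d1) - 1)]
    for i in range(len(d2) - 1):
        if d2[i] * d2[i + 1] < 0:  # sign change: inflection point
            return s1[i + 1]
    return None
```

    On measured curves the finite differences would need smoothing first; the criterion is only as robust as the derivative estimates it is fed.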

  1. Improving data analysis in herpetology: Using Akaike's information criterion (AIC) to assess the strength of biological hypotheses

    Science.gov (United States)

    Mazerolle, M.J.

    2006-01-01

    In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools have surfaced since the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case of the Akaike information criterion (AIC), which is markedly superior to hypothesis-based approaches for model selection (i.e., variable selection). It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework, with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
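
    The model-averaging step can be made concrete: from the AIC of each candidate model one forms Δᵢ = AICᵢ − AICmin, Akaike weights wᵢ = exp(−Δᵢ/2) / Σⱼ exp(−Δⱼ/2), and a weighted average of the per-model estimates with an unconditional standard error. A minimal sketch of these standard formulas:

```python
import math

def akaike_weights(aics):
    """Delta-AIC values and Akaike weights for a set of candidate models."""
    best = min(aics)
    deltas = [a - best for a in aics]
    rel = [math.exp(-d / 2) for d in deltas]
    total = sum(rel)
    return deltas, [r / total for r in rel]

def model_averaged(estimates, ses, weights):
    """Model-averaged estimate and unconditional standard error: each model's
    SE is inflated by its estimate's deviation from the averaged estimate."""
    beta = sum(w * b for w, b in zip(weights, estimates))
    se = sum(w * math.sqrt(s * s + (b - beta) ** 2)
             for w, b, s in zip(weights, estimates, ses))
    return beta, se
```

    The weights sum to one and decay by a factor of e^(−1/2) per unit of Δ-AIC, so models more than about 10 AIC units behind the best contribute essentially nothing to the average.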

  2. Effects of task-irrelevant grouping on visual selection in partial report

    DEFF Research Database (Denmark)

    Lunau, Rasmus; Habekost, Thomas

    2017-01-01

    ...... the color of the elements in these trials. In the sorted-color condition, the color of the display elements was arranged according to the selection criterion, and in the unsorted-color condition, colors were randomly assigned. The distractor cost was inferred by subtracting performance in partial-report trials from performance in a control condition that had no distractors in the display. Across five experiments, we manipulated trial order, selection criterion, and exposure duration, and found that attentional selectivity was improved in sorted-color trials when the exposure duration was 200 ms and the selection criterion was luminance. This effect was accompanied by impaired selectivity in unsorted-color trials. Overall, the results suggest that the benefit of task-irrelevant color grouping of targets is contingent on the processing locus of the selection criterion.

  3. The qualitative criterion of transient angle stability

    DEFF Research Database (Denmark)

    Lyu, R.; Xue, Y.; Xue, F.

    2015-01-01

    In almost all the literature, the qualitative assessment of transient angle stability extracts the angle information of generators from the swing curve. As the angle (or angle difference) of concern and the threshold value rely strongly on engineering experience, the validity and robustness of these criteria are weak. Based on the stability mechanism from the extended equal area criterion (EEAC) theory, combined with abundant simulations of a real system, this paper analyzes the criteria in most of the literature and finds that the results can be too conservative or too optimistic. It is concluded...

  4. Evaluation of probabilistic flow predictions in sewer systems using grey box models and a skill score criterion

    DEFF Research Database (Denmark)

    Thordarson, Fannar Ørn; Breinholt, Anders; Møller, Jan Kloppenborg

    2012-01-01

    term and a diffusion term, respectively accounting for the deterministic and stochastic parts of the models. Furthermore, a distinction is made between the process noise and the observation noise. We compare the predictive performance of five model candidates, which differ solely in the description of the diffusion term, up to a 4 h prediction horizon, adopting the prediction performance measures reliability, sharpness, and skill score to pinpoint the preferred model. The prediction performance of a model is reliable if the observed coverage of the prediction intervals corresponds to the nominal coverage of the prediction intervals, i.e. the bias between these coverages should ideally be zero. The sharpness is a measure of the distance between the lower and upper prediction limits, and the skill score criterion makes it possible to pinpoint the preferred model by taking into account both reliability...
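
    The paper's exact skill score is not reproduced here, but the standard interval score illustrates how reliability and sharpness combine into a single criterion: the score is the interval width (sharpness) plus a penalty of (2/α) times the miss distance for each observation falling outside the central (1 − α) interval, so a lower mean score is better.

```python
def interval_score(lowers, uppers, obs, alpha=0.1):
    """Mean interval score for central (1 - alpha) prediction intervals, plus
    empirical coverage (reliability) and mean width (sharpness).  Lower score
    is better; misses are penalized by (2 / alpha) * miss distance."""
    n = len(obs)
    score = width = hits = 0.0
    for lo, up, y in zip(lowers, uppers, obs):
        w = up - lo
        width += w
        s = w
        if y < lo:
            s += 2.0 / alpha * (lo - y)
        elif y > up:
            s += 2.0 / alpha * (y - up)
        else:
            hits += 1.0
        score += s
    return score / n, hits / n, width / n
```

    Narrow intervals that frequently miss and wide intervals that always cover both score poorly, which is exactly the trade-off a skill score is meant to arbitrate.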

  5. Geometric steering criterion for two-qubit states

    Science.gov (United States)

    Yu, Bai-Chu; Jia, Zhih-Ahn; Wu, Yu-Chun; Guo, Guang-Can

    2018-01-01

    According to the geometric characterization of measurement assemblages and local hidden state (LHS) models, we propose a steering criterion which is both necessary and sufficient for two-qubit states under arbitrary measurement sets. A quantity is introduced to describe the local resources required to reconstruct a measurement assemblage for two-qubit states. We show that this quantity can be regarded as a quantification of steerability and can be used to find optimal LHS models. Finally, we propose a method to generate unsteerable states, and construct some two-qubit states which are entangled but unsteerable under all projective measurements.

  6. A Failure Criterion for Concrete

    DEFF Research Database (Denmark)

    Ottosen, N. S.

    1977-01-01

    A four-parameter failure criterion containing all three stress invariants explicitly is proposed for short-time loading of concrete. It corresponds to a smooth convex failure surface with curved meridians, which open in the negative direction of the hydrostatic axis, and the trace in the deviatoric plane....

  7. Physical and Constructive (Limiting) Criterions of Gear Wheels Wear

    Science.gov (United States)

    Fedorov, S. V.

    2018-01-01

    We suggest using a generalized model of friction: the model of elastic-plastic deformation of a body element located on the surface of the friction pair. This model is based on our new engineering approach to the problem of friction, triboergodynamics. Friction is examined as a transformative and dissipative process. A structural-energetic interpretation of friction as a process of elasto-plastic deformation and fracture of contact volumes is proposed. The model of the evolution of a Hertzian (heavily loaded) friction contact is considered. The least-wear-particle principle is formulated: the least wear particle is the mechanical (nano) quantum. The mechanical quantum represents the smallest structural form of a solid body under friction conditions. It is the dynamic oscillator of the dissipative friction structure and can be examined as the elementary nanostructure of a metallic solid body. In the state of most complete evolution of an elementary tribosystem (tribocontact), all mechanical quanta (subtribosystems), with the exception of one, elastically and reversibly transform the energy of the outer impact (mechanical motion). In these terms, only one mechanical quantum is lost: the standard of wear. From this position we can consider the physical criterion of wear and the constructive (limiting) criterion of gear teeth, along with other practical examples of tribosystem efficiency, using the new tribological notion of the mechanical (nano) quantum.

  8. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    Directory of Open Access Journals (Sweden)

    Qingwen Li

    2015-01-01

    Full Text Available In tunnel and underground space engineering, a blasting wave attenuates in the host rock from a shock wave to a stress wave to an elastic seismic wave. Correspondingly, the host rock forms a crushed zone, a fractured zone, and an elastic seismic zone under the blasting loads and waves. In this paper, an accurate mathematical dynamic loading model was built, and the crushed and fractured zones were considered as the blasting vibration source, thus deducting the partial energy consumed in crushing the host rock. This complicated dynamic problem of segmented differential blasting was thereby reduced to an equivalent elastic boundary problem by taking advantage of Saint-Venant's principle. Finally, a 3D model in the finite element software FLAC3D, using the constitutive parameters, the uniformly distributed time-varying loading, and the cylindrical attenuation law, was employed to predict the velocity and effective tensile stress curves for deriving safety criterion formulas for the surrounding rock and tunnel liner, after verifying well against the in situ monitoring data.

  9. Use of the Niyama criterion to predict porosity of the mushy zone with deformation

    Directory of Open Access Journals (Sweden)

    S. Polyakov

    2011-10-01

    Full Text Available The article presents new results on the use of the Niyama criterion to estimate porosity formation in castings under hindered shrinkage. The effect of deformation of the mushy zone on filtration is shown. A new form of the Niyama criterion, accounting for the hindered shrinkage and the range of deformation localization, has been obtained. The results of this study are illustrated by the example of the Niyama criterion calculated for Al-Cu alloys under different diffusion conditions of solidification and rates of deformation in the mushy zone. The derived equations can be used in a mathematical model of casting solidification as well as for interpretation of simulation results of casting solidification under hindered shrinkage. The presented study resulted in a new procedure for using the Niyama criterion under mushy-zone deformation.
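
    For reference, the classical Niyama criterion is the local thermal gradient divided by the square root of the local cooling rate, Ny = G/√(dT/dt), with porosity predicted wherever Ny falls below an alloy-dependent threshold. The threshold in the sketch below is purely illustrative, not a value from the article.

```python
import math

def niyama(gradient, cooling_rate):
    """Classical Niyama criterion Ny = G / sqrt(dT/dt): local thermal
    gradient G over the square root of the local cooling rate."""
    return gradient / math.sqrt(cooling_rate)

def porosity_risk(gradient, cooling_rate, threshold=1.0):
    """Flag shrinkage-porosity risk where Ny drops below an alloy-dependent
    threshold (the default of 1.0 is illustrative only)."""
    return niyama(gradient, cooling_rate) < threshold
```

    The article's extension modifies this base form to account for mushy-zone deformation; the classical form above is the starting point it generalizes.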

  10. Criterion of cleavage crack propagation and arrest in a nuclear PWR vessel steel

    International Nuclear Information System (INIS)

    Bousquet, Amaury

    2013-01-01

    The purpose of this PhD thesis is to understand the physical mechanisms of cleavage crack propagation and arrest in the 16MND5 PWR vessel steel and to propose a robust predictive model, based on a brittle fracture experimental campaign on finely instrumented laboratory specimens combined with numerical computations. First, experiments were carried out on thin CT25 specimens at five temperatures (-150 C, -125 C, -100 C, -75 C, -50 C). Two kinds of crack path, straight or branching, were observed. To characterize crack propagation and measure crack speed, a high-speed framing camera system was used, together with an experimental protocol that made it possible to observe the CT surface without icing inside the thermal chamber and on the specimen. The framing camera (520 000 fps) provided a very accurate estimate of the crack speed over the complete ligament of the CT specimen (∼ 25 mm). In addition, to analyse the experiments and study the impact of viscosity on the mechanical response around the crack tip, the elastic-viscoplastic behavior of the ferritic steel was studied up to a strain rate of 10⁴ s⁻¹ at the tested temperatures. The extended Finite Element Method (X-FEM) was used in the CAST3M FE software to model crack propagation. The numerical computations combine a local nonlinear dynamic approach with an RKR-type fracture stress criterion at a characteristic distance. The work carried out confirmed the form of the criterion proposed by Prabel at -125 C, and identified the dependence of the criterion on temperature and strain rate. From numerical analyses in 2D and 3D, a multi-temperature fracture stress criterion, an increasing function of the strain rate, was proposed. Predictive modelling was used to confirm the identified criterion on two specimen geometries (CT and compressive ring) in mode I at different temperatures. SEM observations and 3D analyses made with an optical microscope showed that the fracture mechanism was cleavage associated

  11. Corrections to the Eckhaus' stability criterion for one-dimensional stationary structures

    Science.gov (United States)

    Malomed, B. A.; Staroselsky, I. E.; Konstantinov, A. B.

    1989-01-01

    Two amendments to the well-known Eckhaus stability criterion for small-amplitude nonlinear structures, generated by a weak instability of a spatially uniform state of a non-equilibrium one-dimensional system against small perturbations with finite wavelengths, are obtained. Firstly, we evaluate small corrections to the main Eckhaus term which, in contrast to that term, do not have a universal form. Comparison of these non-universal corrections with experimental or numerical results makes it possible to select the more relevant form of an effective nonlinear evolution equation. In particular, comparison with such results for convective rolls and Taylor vortices gives arguments in favor of the Swift-Hohenberg equation. Secondly, we derive an analog of the Eckhaus criterion for systems that are degenerate in the sense that, in an expansion of their nonlinear parts in powers of the dynamical variables, the second and third degree terms are absent.

  12. The EMU debt criterion: an interpretation

    Directory of Open Access Journals (Sweden)

    R. BERNDSEN

    1997-12-01

    Full Text Available The convergence criteria specified in the Maastricht Treaty on the government deficit and debt, inflation, the exchange rate and the long-term interest rate will play an important, if not decisive, role in determining which countries move on to the third stage of the Economic and Monetary Union (EMU). The aim of this work is to provide a possible interpretation of the EMU debt criterion. The author investigates the government debt criterion which, as Article 104c(2)(b) of the Treaty shows, leaves considerable scope for interpretation. Although this subject has been discussed extensively, relatively little work has been done to develop a clear interpretation of the EMU debt criterion. A flexible approach is adopted in which parts of the relevant Treaty text are characterised using two parameters.

  13. Relative criterion for validity of a semiclassical approach to the dynamics near quantum critical points.

    Science.gov (United States)

    Wang, Qian; Qin, Pinquan; Wang, Wen-ge

    2015-10-01

    Based on an analysis of Feynman's path integral formulation of the propagator, a relative criterion is proposed for validity of a semiclassical approach to the dynamics near critical points in a class of systems undergoing quantum phase transitions. It is given by an effective Planck constant, in the relative sense that a smaller effective Planck constant implies better performance of the semiclassical approach. Numerical tests of this relative criterion are given in the XY model and in the Dicke model.

  14. An Innovative Structural Mode Selection Methodology: Application for the X-33 Launch Vehicle Finite Element Model

    Science.gov (United States)

    Hidalgo, Homero, Jr.

    2000-01-01

    An innovative methodology for determining structural target modes, i.e. mode selection based on a specific criterion, is presented. An effective approach to singling out the modes that interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). The Root-Sum-Square (RSS) displacement method presented here computes the resultant modal displacement of each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes that most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valves and engine points, for use in flight control stability analysis and in flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
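The RSS ranking procedure described in the abstract can be sketched in a few lines; the mode-shape matrix and DOF indices below are a toy example, not X-33 data.

```python
import numpy as np

def rank_modes_by_rss(mode_shapes, dof_indices):
    """Root-Sum-Square mode ranking: for each mode, take the modal
    displacements at the selected DOFs, form the resultant
    sqrt(sum of squares), and sort the modes by it, highest first."""
    selected = mode_shapes[:, dof_indices]      # (n_modes, n_selected_dof)
    rss = np.sqrt((selected ** 2).sum(axis=1))  # one resultant per mode
    order = np.argsort(rss)[::-1]               # most influential mode first
    return order, rss

# Toy example: 2 modes, 3 DOFs; rank the modes using DOFs 0 and 2.
shapes = np.array([[1.0, 0.0, 3.0],
                   [2.0, 2.0, 0.0]])
order, rss = rank_modes_by_rss(shapes, [0, 2])
```

The same call scales directly to a FEM eigenvector matrix with thousands of DOFs, with `dof_indices` pointing at the sensor or actuator locations of interest.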

  15. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach applied in the major fields of biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks the best subset of features describing the data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods that are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information-based criteria. Significant improvements in performance, computational time, and the selection of minimally sized feature subsets are achieved by combining different objectives in the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution with desirable characteristics.
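In its simplest form, a wrapper is a search loop around an arbitrary scoring function standing in for cross-validated classifier performance. The sketch below uses greedy forward selection with a hypothetical toy objective; it is a generic wrapper skeleton, not the parallel genetic-algorithm multi-wrapper of the paper.

```python
def forward_wrapper(features, score, max_size=None):
    """Greedy forward selection: repeatedly add the single feature whose
    inclusion most improves score(subset); stop when no feature helps."""
    selected, best = [], float("-inf")
    remaining = list(features)
    while remaining and (max_size is None or len(selected) < max_size):
        trial_score, trial_feature = max(
            (score(selected + [f]), f) for f in remaining
        )
        if trial_score <= best:
            break  # no remaining feature improves the wrapper objective
        selected.append(trial_feature)
        remaining.remove(trial_feature)
        best = trial_score
    return selected, best

# Toy objective: reward features "a" and "b", penalize subset size.
toy_score = lambda subset: len(set(subset) & {"a", "b"}) - 0.1 * len(subset)
chosen, value = forward_wrapper(["a", "b", "c"], toy_score)
```

Replacing `toy_score` with a classifier's cross-validated accuracy turns this skeleton into a working wrapper; combining several classifiers' scores inside `score` mirrors the multi-objective idea discussed in the abstract.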

  16. A pellet-clad interaction failure criterion

    International Nuclear Information System (INIS)

    Howl, D.A.; Coucill, D.N.; Marechal, A.J.C.

    1983-01-01

    A Pellet-Clad Interaction (PCI) failure criterion, enabling the number of fuel rod failures in a reactor core to be determined for a variety of normal and fault conditions, is required for safety analysis. The criterion currently being used for the safety analysis of the Pressurized Water Reactor planned for Sizewell in the UK is defined and justified in this paper. The criterion is based upon a threshold clad stress which diminishes with increasing fast neutron dose. This concept is consistent with the mechanism of clad failure being stress corrosion cracking (SCC); providing excess corrodant is always present, the dominant parameter determining the propagation of SCC defects is stress. In applying the criterion, the SLEUTH-SEER 77 fuel performance computer code is used to calculate the peak clad stress, allowing for concentrations due to pellet hourglassing and the effect of radial cracks in the fuel. The method has been validated by analysis of PCI failures in various in-reactor experiments, particularly in the well-characterised power ramp tests in the Steam Generating Heavy Water Reactor (SGHWR) at Winfrith. It is also in accord with out-of-reactor tests with iodine and irradiated Zircaloy clad, such as those carried out at Kjeller in Norway. (author)

  17. Nonparametric adaptive age replacement with a one-cycle criterion

    International Nuclear Information System (INIS)

    Coolen-Schrijner, P.; Coolen, F.P.A.

    2007-01-01

    Age replacement of technical units has received much attention in the reliability literature over the last four decades. Mostly, the failure time distribution for the units is assumed to be known, and minimal cost per unit of time is used as the optimality criterion, where renewal reward theory simplifies the mathematics involved but requires the assumption that the same process and replacement strategy continue over a very large ('infinite') period of time. Recently, there has been increasing attention to adaptive strategies for age replacement which take into account information from the process. Although renewal reward theory can still be used to provide an intuitively and mathematically attractive optimality criterion, it is more logical to use minimal cost per unit of time over a single cycle as the optimality criterion for adaptive age replacement. In this paper, we first show that in the classical age replacement setting, with a known failure time distribution with increasing hazard rate, the one-cycle criterion leads to earlier replacement than the renewal reward criterion. Thereafter, we present adaptive age replacement with a one-cycle criterion within the nonparametric predictive inferential framework. We study the performance of this approach via simulations, which are also used for comparisons with the use of the renewal reward criterion within the same statistical framework
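The classical renewal-reward criterion mentioned above minimizes the long-run cost rate C(T) = [c_p S(T) + c_f (1 - S(T))] / E[min(X, T)], where S is the survival function, c_p the preventive and c_f the failure replacement cost. A minimal numerical sketch is given below; the Weibull lifetime and cost values are illustrative assumptions, and the one-cycle and nonparametric variants of the paper are not reproduced.

```python
import math

def renewal_reward_cost(T, c_preventive, c_failure, survival, dt=1e-3):
    """Long-run cost per unit time for age replacement at age T:
    C(T) = [c_p * S(T) + c_f * (1 - S(T))] / E[min(X, T)],
    with E[min(X, T)] = integral of S(t) from 0 to T (rectangle rule)."""
    steps = int(T / dt)
    expected_cycle_length = sum(survival(i * dt) * dt for i in range(steps))
    s = survival(T)
    return (c_preventive * s + c_failure * (1.0 - s)) / expected_cycle_length

# Illustrative Weibull(shape=2) lifetime: increasing hazard rate.
weibull_survival = lambda t: math.exp(-t ** 2)
cost_moderate = renewal_reward_cost(1.0, c_preventive=1.0, c_failure=10.0,
                                    survival=weibull_survival)
cost_late = renewal_reward_cost(5.0, c_preventive=1.0, c_failure=10.0,
                                survival=weibull_survival)
```

With an increasing hazard rate and c_f well above c_p, an interior replacement age beats never replacing, which is exactly the situation where the choice between the one-cycle and renewal-reward criteria matters.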

  18. Novel criterion for formation of metastable phase from undercooled melt

    International Nuclear Information System (INIS)

    Kuribayashi, Kazuhiko; Nagashio, Kosuke; Niwata, Kenji; Kumar, M.S. Vijaya; Hibiya, Taketoshi

    2007-01-01

    Undercooling a melt facilitates the preferential nucleation of metastable phases. In the present study, the formation of metastable phases from undercooled melts was considered from the viewpoint of a competitive nucleation criterion. Classical nucleation theory shows that the most critical factor in forming a critical nucleus is the interface free energy σ. Furthermore, Spaepen's negentropic model of σ introduced the role of the scaling factor α, which depends on the polyhedral order in the liquid and solid phases and is prominent in simple liquids such as melts of monoatomic metals. In ionic materials such as oxides, however, in which oxygen polyhedra containing a cation at their center are the structural units in both the solid and liquid phases, the entropy of fusion, rather than α, can be expected to dominate in determining σ. In accordance with this idea, using REFeO3 as the model material (where RE denotes a rare-earth element), an entropy-undercooling regime criterion was proposed and verified

  19. Psychometric aspects of item mapping for criterion-referenced interpretation and bookmark standard setting.

    Science.gov (United States)

    Huynh, Huynh

    2010-01-01

    Locating an item on an achievement continuum (item mapping) is well-established in technical work for educational/psychological assessment. Applications of item mapping may be found in criterion-referenced (CR) testing (or scale anchoring, Beaton and Allen, 1992; Huynh, 1994, 1998a, 2000a, 2000b, 2006), computer-assisted testing, test form assembly, and in standard setting methods based on ordered test booklets. These methods include the bookmark standard setting originally used for the CTB/TerraNova tests (Lewis, Mitzel, Green, and Patz, 1999), the item descriptor process (Ferrara, Perie, and Johnson, 2002) and a similar process described by Wang (2003) for multiple-choice licensure and certification examinations. While item response theory (IRT) models such as the Rasch and two-parameter logistic (2PL) models traditionally place a binary item at its location, Huynh has argued in the cited papers that such mapping may not be appropriate in selecting items for CR interpretation and scale anchoring.

  20. ADDED VALUE AS EFFICIENCY CRITERION FOR INDUSTRIAL PRODUCTION PROCESS

    Directory of Open Access Journals (Sweden)

    L. M. Korotkevich

    2016-01-01

    Full Text Available A literature analysis has shown that the majority of researchers use classical efficiency criteria for constructing an optimization model of a production process: profit maximization; cost minimization; maximization of commercial product output; minimization of the backlog of product demand; minimization of the total time lost due to production changes. The paper proposes using an index of added value as the efficiency criterion, because it combines the economic and social interests of all the main parties to the business activity: the national government, property owners, employees and investors. The following types of added value are considered in the paper: joint-stock, market, monetary, economic and notional (gross, net, real). The paper suggests using an index of real value added as the efficiency criterion. This approach makes notional added value comparable, because added value can be increased not only by improving the efficiency of the enterprise's activity but also through environmental factors, for example when export prices rise faster than import prices. Methods for calculating real value added on a country-by-country basis (extrapolation, simple deflation and double deflation) have been analysed. On the basis of this analysis the double-deflation method has been selected, computed according to the Laspeyres, Paasche and Fisher indices. It is concluded that the available expressions do not fully take into account the economic peculiarities of the Republic of Belarus: they are inappropriate when product cost is differentiated according to marketing outlets, and they do not account for differences in the exchange rates of several currencies, which are reflected in the export price of a released product and the import prices of raw materials, supplies and component parts. Taking this into consideration, refined expressions for calculating real value added have been specified

  1. Criterion of magnetic saturation and simulation of nonlinear magnetization for a linear multi-core pulse transformer

    International Nuclear Information System (INIS)

    Zeng Zhengzhong; Kuai Bin; Sun Fengju; Cong Peitian; Qiu Aici

    2002-01-01

    The linear multi-core pulse transformer is an important primary driving source in pulsed power apparatus for the production of dense plasmas, owing to its compactness, relatively low cost and ease of handling. Evaluation of the magnetic saturation of the transformer cores is essential to transformer design, because the energy transfer efficiency of the transformer degrades significantly after magnetic saturation. This work proposes analytical formulas for the criterion of magnetic saturation of the cores when the transformer drives practical loads. Furthermore, an electric circuit model based on a dependent-source treatment, which simulates the electrical behavior of the cores associated with their nonlinear magnetization, is developed using the initial magnetization curve of the cores. Numerical simulation with this model is used to evaluate the validity of the criterion. Both the criterion and the model are found to be in agreement with the experimental data
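The underlying physics of any core-saturation criterion is the volt-second (flux-swing) balance from Faraday's law. The sketch below is that generic textbook check, not the analytical formulas derived in the paper, and all numerical values are invented for illustration.

```python
def core_stays_unsaturated(voltage_pulse, dt, turns, core_area_m2, delta_b_tesla):
    """Generic volt-second check from Faraday's law: the core saturates
    once the integral of V dt applied per winding exceeds N * A * dB,
    where dB is the available flux-density swing of the core material."""
    volt_seconds = sum(v * dt for v in voltage_pulse)
    return volt_seconds <= turns * core_area_m2 * delta_b_tesla

# Invented numbers: a 100 V flat-top pulse sampled every 1 us for 10 us,
# driving a 10-turn winding on a 10 cm^2 core with a 0.5 T usable swing.
ok = core_stays_unsaturated([100.0] * 10, dt=1e-6,
                            turns=10, core_area_m2=1e-3, delta_b_tesla=0.5)
```

The paper's contribution is precisely the load-dependent refinement of this balance; the check above only captures the first-order budget a designer starts from.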

  2. Relationships between Classroom Schedule Types and Performance on the Algebra I Criterion-Referenced Test

    Science.gov (United States)

    Murray, Gregory V.; Moyer-Packenham, Patricia S.

    2014-01-01

    One option for length of individual mathematics class periods is the schedule type selected for Algebra I classes. This study examined the relationship between student achievement, as indicated by Algebra I Criterion-Referenced Test scores, and the schedule type for Algebra I classes. Data obtained from the Utah State Office of Education included…

  3. Reactor instrumentation. Definition of the single failure criterion

    International Nuclear Information System (INIS)

    1980-12-01

    The standard defines the single failure criterion which is used in other IEC publications on reactor safety systems. The purpose of the single failure criterion is the assurance of minimum redundancy. (orig./HP) [de

  4. Numerical and Experimental Validation of a New Damage Initiation Criterion

    NARCIS (Netherlands)

    Sadhinoch, M.; Atzema, E.H.; Perdahcioglu, E.S.; Van Den Boogaard, A.H.

    2017-01-01

    Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent and this

  5. The AP diameter of the pelvis: a new criterion for continence in the exstrophy complex?

    International Nuclear Information System (INIS)

    Ait-Ameur, A.; Kalifa, G.; Adamsbaum, C.; Wakim, A.; Dubousset, J.

    2001-01-01

    Reconstructive surgery of bladder exstrophy remains a challenge. Using CT of the pelvis, we propose a new pre- and post-operative investigative procedure that defines the AP diameter (APD) as a predictive criterion for continence in this anomaly. Patients and methods: Three axial CT slices were selected in nine children with exstrophy who had undergone neonatal reconstructive surgery. The three levels selected were the first sacral plate, the mid-acetabular plane and the superior pubic spine. The combined slices were used to measure: the APD (the distance between the first sacral vertebra and the pubic symphysis); the pubic diastasis (PD); and three angles defined on the transverse plane of the first sacral vertebra (the iliac wing angle, the sacropubic angle and the acetabular version). In exstrophy, the angles demonstrate opening of the iliac wings and the pubic ramus, and acetabular retroversion compared to controls. Comparisons between controls, continent and incontinent patients reveal that in continent patients the APD increases with growth and seems to be a predictive criterion for continence, independent of diastasis of the pubic symphysis. We believe that CT of the pelvis with measurement of the APD should be performed in all neonates with bladder exstrophy before reconstructive surgery, for better understanding of the malformation. The APD seems to be predictive and may be a major criterion for continence, independent of PD. (orig.)

  6. Modified Schur-Cohn Criterion for Stability of Delayed Systems

    Directory of Open Access Journals (Sweden)

    Juan Ignacio Mulero-Martínez

    2015-01-01

    Full Text Available A modified Schur-Cohn criterion for time-delay linear time-invariant systems is derived. The classical Schur-Cohn criterion has two main drawbacks: namely, (i) the dimension of the Schur-Cohn matrix generates round-off errors eventually resulting in a polynomial in s with erroneous coefficients, and (ii) imaginary roots are very hard to detect when numerical errors creep in. In contrast to the classical Schur-Cohn criterion, an alternative approach is proposed in this paper based on the application of triangular matrices over a polynomial ring, in a similar way as in the Jury stability test for discrete systems. The advantages of the proposed approach are that it halves the dimension of the polynomial and it only requires seeking real roots, making this modified criterion comparable to the Rekasius substitution criterion.

  7. Electricity demand loads modeling using AutoRegressive Moving Average (ARMA) models

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.S. [Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi, 83 200 Samos (Greece); Ekonomou, L.; Chatzarakis, G.E. [Department of Electrical Engineering Educators, ASPETE - School of Pedagogical and Technological Education, N. Heraklion, 141 21 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24100 Kalamata (Greece); Katsikas, S.K. [Department of Technology Education and Digital Systems, University of Piraeus, 150 Androutsou Srt., 18 532 Piraeus (Greece); Liatsis, P. [Division of Electrical Electronic and Information Engineering, School of Engineering and Mathematical Sciences, Information and Biomedical Engineering Centre, City University, Northampton Square, London EC1V 0HB (United Kingdom)

    2008-09-15

    This study addresses the problem of modeling the electricity demand loads in Greece. The actual load data provided is deseasonalized and an AutoRegressive Moving Average (ARMA) model is fitted to the data off-line, using the Akaike Corrected Information Criterion (AICC). The developed model fits the data successfully. Difficulties occur when the provided data includes noise or errors and also when on-line/adaptive modeling is required. In both cases, and under the assumption that the provided data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models under the presence of noise is performed. The produced results indicate that the proposed method, which is based on multi-model partitioning theory, tackles the studied problem successfully. For validation purposes the produced results are compared with those of three other established order selection criteria, namely AICC, Akaike's Information Criterion (AIC) and Schwarz's Bayesian Information Criterion (BIC). The developed model could be useful in studies concerning electricity consumption and electricity price forecasts. (author)
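The order-selection criteria compared in the abstract can be illustrated on a plain autoregressive fit. The sketch below is a generic least-squares AR(p) fit with the standard AIC/BIC/AICC formulas (n log of the residual variance plus a parameter penalty); it is not the authors' multi-model partitioning method, and the ARMA moving-average part is omitted for brevity.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns the residual variance and the
    number of estimated parameters (one coefficient per lag)."""
    X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coeffs
    return residuals.var(), p

def aic(n, resid_var, k):
    return n * np.log(resid_var) + 2 * k

def bic(n, resid_var, k):
    return n * np.log(resid_var) + k * np.log(n)

def aicc(n, resid_var, k):
    # AICC = AIC plus a small-sample correction that vanishes as n grows.
    return aic(n, resid_var, k) + 2 * k * (k + 1) / (n - k - 1)
```

Order selection then amounts to fitting each candidate p and keeping the one that minimizes the chosen criterion; BIC's log(n) penalty favors smaller orders than AIC on long records.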

  8. Inferring phylogenetic networks by the maximum parsimony criterion: a case study.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2007-01-01

    Horizontal gene transfer (HGT) may result in genes whose evolutionary histories disagree with each other, as well as with the species tree. In this case, reconciling the species and gene trees results in a network of relationships, known as the "phylogenetic network" of the set of species. A phylogenetic network that incorporates HGT consists of an underlying species tree that captures vertical inheritance and a set of edges which model the "horizontal" transfer of genetic material. In a series of papers, Nakhleh and colleagues have recently formulated a maximum parsimony (MP) criterion for phylogenetic networks, provided an array of computationally efficient algorithms and heuristics for computing it, and demonstrated its plausibility on simulated data. In this article, we study the performance and robustness of this criterion on biological data. Our findings indicate that MP is very promising when its application is extended to the domain of phylogenetic network reconstruction and HGT detection. In all cases we investigated, the MP criterion detected the correct number of HGT events required to map the evolutionary history of a gene data set onto the species phylogeny. Furthermore, our results indicate that the criterion is robust with respect to both incomplete taxon sampling and the use of different site substitution matrices. Finally, our results show that the MP criterion is very promising in detecting HGT in chimeric genes, whose evolutionary histories are a mix of vertical and horizontal evolution. Besides the performance analysis of MP, our findings offer new insights into the evolution of 4 biological data sets and new possible explanations of HGT scenarios in their evolutionary history.

  9. FFTBM and primary pressure acceptance criterion

    International Nuclear Information System (INIS)

    Prosek, A.

    2004-01-01

    When thermal-hydraulic computer codes are used for simulation in nuclear engineering, the question arises of how to conduct an objective comparison between the code calculation and the measured data. To answer this, the fast Fourier transform based method (FFTBM) was developed, and with it acceptance criteria for the primary pressure and the total accuracy were set. In a recent study the FFTBM method was used to quantify the accuracy of RD-14M large-LOCA test B9401 calculations. The blind accuracy analysis indicated good total accuracy, while the primary pressure criterion was not fulfilled. The objective of the study was therefore to investigate the reasons for not fulfilling the primary pressure acceptance criterion and the applicability of the criterion to experimental facilities simulating heavy water reactors. The results of the open quantitative analysis showed that sensitivity analysis of the influence parameters provides sufficient information to judge in which calculations the accuracy of the primary pressure is acceptable. (author)
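The FFTBM figure of merit reported in the open literature is the average amplitude, the ratio of the summed magnitude spectrum of the calculation error to that of the measured signal. The sketch below implements that definition; the test signal is synthetic, and the specific acceptance threshold applied to the primary pressure in the study is not assumed here.

```python
import numpy as np

def fftbm_average_amplitude(measured, calculated):
    """FFTBM accuracy figure of merit:
    AA = sum|FFT(calculated - measured)| / sum|FFT(measured)|.
    Smaller AA means better agreement; AA = 0 is a perfect match."""
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    error_spectrum = np.abs(np.fft.rfft(calculated - measured))
    reference_spectrum = np.abs(np.fft.rfft(measured))
    return error_spectrum.sum() / reference_spectrum.sum()

# Synthetic check signal: a biased sine, compared against itself and
# against a calculation that is uniformly twice too large.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 2.0
aa_perfect = fftbm_average_amplitude(signal, signal)
aa_doubled = fftbm_average_amplitude(signal, 2.0 * signal)
```

A calculation equal to the measurement gives AA = 0, while one that doubles the measurement gives AA = 1, which is why the acceptance thresholds quoted for FFTBM are small fractions of unity.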

  10. Extensions and applications of the Bohm criterion

    Science.gov (United States)

    Baalrud, Scott D.; Scheiner, Brett; Yee, Benjamin; Hopkins, Matthew; Barnat, Edward

    2015-04-01

    The generalized Bohm criterion is revisited in the context of incorporating kinetic effects of the electron and ion distribution functions into the theory. The underlying assumptions and results of two different approaches are compared: the conventional ‘kinetic Bohm criterion’ and a fluid-moment hierarchy approach. The former is based on the asymptotic limit of an infinitely thin sheath (λD/l = 0), whereas the latter is based on a perturbative expansion of a sheath that is thin compared to the plasma (λD/l ≪ 1). Here λD is the Debye length, which characterizes the sheath length scale, and l is a measure of the plasma or presheath length scale. The consequences of these assumptions are discussed in terms of how they restrict the class of distribution functions to which the resulting criteria can be applied. Two examples are considered to provide concrete comparisons between the two approaches. The first is a Tonks-Langmuir model including a warm ion source (Robertson 2009 Phys. Plasmas 16 103503). This highlights a substantial difference between the conventional kinetic theory, which predicts slow ions dominate at the sheath edge, and the fluid moment approach, which predicts slow ions have little influence. The second example considers planar electrostatic probes biased near the plasma potential using model equations and particle-in-cell simulations. This demonstrates a situation where electron kinetic effects alter the Bohm criterion, leading to a subsonic ion flow at the sheath edge.
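Both approaches compared in the abstract generalize the elementary cold-ion Bohm criterion, which requires the ion flow at the sheath edge to reach the ion sound speed c_s = sqrt(k_B T_e / m_i). The sketch below evaluates only that textbook baseline, not the kinetic or fluid-moment generalizations under discussion.

```python
import math

ELEMENTARY_CHARGE = 1.602176634e-19  # C (exact SI value)
PROTON_MASS = 1.67262192369e-27      # kg

def bohm_speed(electron_temp_eV, ion_mass_kg):
    """Cold-ion sound speed c_s = sqrt(k_B * T_e / m_i); with T_e in eV,
    k_B * T_e is simply e * T_e in joules. The simplest Bohm criterion
    requires the ion drift at the sheath edge to satisfy u >= c_s."""
    return math.sqrt(ELEMENTARY_CHARGE * electron_temp_eV / ion_mass_kg)

# Hydrogen plasma with T_e = 1 eV: c_s is roughly 9.8 km/s.
c_s = bohm_speed(1.0, PROTON_MASS)
```

The kinetic and fluid-moment criteria replace this single-speed condition with integrals over the ion and electron distribution functions, which is where the two approaches in the abstract diverge.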

  11. Reliability criteria selection for integrated resource planning

    International Nuclear Information System (INIS)

    Ruiu, D.; Ye, C.; Billinton, R.; Lakhanpal, D.

    1993-01-01

    A study was conducted on the selection of a generating system reliability criterion that ensures reasonable continuity of supply while minimizing the total costs to utility customers. The study was conducted using the Institute of Electrical and Electronics Engineers (IEEE) reliability test system as the study system. The study inputs and results (study conditions and load forecast data, new supply resource data, demand-side management resource data, the resource planning criterion, criterion value selection, supply-side development, integrated resource development, and best criterion values) are tabulated and discussed. Preliminary conclusions are as follows. In the case of integrated resource planning, the best value for a given type of reliability criterion can be selected using methods similar to those used for supply-side planning. The reliability criterion values previously used for supply-side planning may not be economically justified when integrated resource planning is used; utilities may have to revise them and adopt new, perhaps lower, supply reliability criteria for integrated resource planning. More complex reliability criteria, such as energy-related indices, which take into account the magnitude, frequency and duration of the expected interruptions, are better adapted than simpler capacity-based reliability criteria such as loss of load expectation. 7 refs., 5 figs., 10 tabs

  12. Angular criterion for distinguishing between Fraunhofer and Fresnel diffraction

    International Nuclear Information System (INIS)

    Medina, Francisco F.; Garcia-Sucerquia, Jorge; Castaneda, Roman; Matteucci, Giorgio

    2003-03-01

    The distinction between Fresnel and Fraunhofer diffraction is a crucial condition for the accurate analysis of diffracting structures. In this paper we propose a criterion based on the angle subtended, from the center of the diffracting aperture, by the first zero of the diffraction pattern. The determination of this zero is the crucial point for assuring the precision of the criterion; it depends mainly on the dynamic range of the detector. Therefore, the applicability of adequate thresholds for different detector types is discussed. The criterion is also generalized by expressing it in terms of the number of Fresnel zones delimited by the aperture. Simulations illustrating the feasibility of the criterion are reported. (author)
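The Fresnel-zone formulation at the end of the abstract connects to the common textbook proxy for the same distinction, the Fresnel number N_F = a² / (λ z). The sketch below evaluates that standard rule of thumb only; it is not the angular criterion proposed in the paper, and the far-field threshold is an illustrative assumption.

```python
def fresnel_number(aperture_radius, wavelength, distance):
    """N_F = a^2 / (lambda * z): roughly the number of Fresnel zones the
    aperture subtends as seen from the observation distance z."""
    return aperture_radius ** 2 / (wavelength * distance)

def diffraction_regime(n_f, far_field_threshold=0.1):
    """Textbook rule of thumb: Fraunhofer (far field) when N_F << 1;
    the 0.1 cutoff is an illustrative choice, not a sharp boundary."""
    return "Fraunhofer" if n_f < far_field_threshold else "Fresnel"

# A 0.1 mm radius pinhole in 500 nm light observed at 1 m:
n_f = fresnel_number(1e-4, 500e-9, 1.0)
```

The paper's angular criterion refines exactly this kind of cutoff by tying it to a measurable feature of the pattern, the first diffraction zero, instead of a fixed N_F value.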

  13. Nucleation of recrystallization in fine-grained materials: an extension of the Bailey-Hirsch criterion

    Science.gov (United States)

    Favre, Julien; Fabrègue, Damien; Chiba, Akihiko; Bréchet, Yves

    2013-11-01

    A new criterion for nucleation in the case of dynamic recrystallization is proposed in order to include the contribution of the grain boundary energy stored in the microstructure in the energy balance. Due to the nucleation events, the total surface area of pre-existing grain boundaries decreases, leading to a nucleus size smaller than expected by conventional nucleation criteria. The new model provides a better prediction of the nucleus size during recrystallization of pure copper compared with the conventional nucleation criterion.

  14. The System of Objectified Judgement Analysis (SOJA). A tool in rational drug selection for formulary inclusion.

    Science.gov (United States)

    Janknegt, R; Steenhoek, A

    1997-04-01

    Rational drug selection for formulary purposes is important. Besides rational selection criteria, other factors play a role in drug decision making, such as emotional, personal financial and even unconscious criteria. It is agreed that these factors should be excluded as much as possible from the decision making process. A model for drug decision making for formulary purposes is described: the System of Objectified Judgement Analysis (SOJA). In the SOJA method, selection criteria for a given group of drugs are prospectively defined, and the extent to which each drug fulfils the requirements for each criterion is determined. Each criterion is given a relative weight; the more important a given selection criterion is considered, the higher its relative weight. Both the relative score of each drug per selection criterion and the relative weight of each criterion are determined by a panel of experts in the field. The following selection criteria are applied in all SOJA scores: clinical efficacy, incidence and severity of adverse effects, dosage frequency, drug interactions, acquisition cost, documentation, pharmacokinetics and pharmaceutical aspects. Besides these criteria, group-specific criteria are also used, such as development of resistance when a SOJA score was made for antimicrobial agents. The relative weight assigned to each criterion will always be a subject of discussion. Therefore, interactive software programs for use on a personal computer have been developed, in which users may enter their own relative weight for each selection criterion and compute a personal SOJA score. The main advantage of the SOJA method is that all nonrational selection criteria are excluded and drug decision making is based solely on rational criteria. The use of the interactive SOJA discs makes the decision process fully transparent, as it becomes clear on which criteria and weightings decisions are based.

  15. An analytic expression for the sheath criterion in magnetized plasmas with multi-charged ion species

    International Nuclear Information System (INIS)

    Hatami, M. M.

    2015-01-01

    The generalized Bohm criterion in magnetized multi-component plasmas consisting of multi-charged positive and negative ion species and electrons is analytically investigated using the hydrodynamic model. It is assumed that the electron and negative ion density distributions are Boltzmann distributions with different temperatures and that the positive ions enter the sheath region obliquely. Our results show that the positive and negative ion temperatures, the orientation of the applied magnetic field and the charge numbers of the positive and negative ions strongly affect the Bohm criterion in these multi-component plasmas. To test the validity of the derived generalized Bohm criterion, it is reduced to several familiar physical limits, and it is shown that the monotonic reduction of the positive ion density distribution that leads to sheath formation occurs only when the entrance velocity of the ions into the sheath satisfies the obtained Bohm criterion. Also, as a practical application of the obtained Bohm criterion, the effects of the ion temperatures and concentrations as well as the magnetic field on the behavior of the charged-particle density distributions, and hence on the sheath thickness, of a magnetized plasma consisting of electrons and singly charged positive and negative ion species are studied numerically.
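    For orientation, in the unmagnetized, single-ion-species limit the generalized criterion reduces to the classic Bohm condition u0 >= cs = sqrt(k*Te/mi). A back-of-the-envelope evaluation (the 2 eV electron temperature and hydrogen ions are illustrative choices, not values from the paper):

```python
import math

# Classic (unmagnetized, single-ion) Bohm speed c_s = sqrt(k_B * Te / m_i).
# Electron temperature and ion species below are illustrative assumptions.

M_P = 1.67262192e-27   # kg, proton mass
EV = 1.602176634e-19   # J per electronvolt

def bohm_speed(Te_eV, ion_mass=M_P):
    """Minimum ion entrance speed into the sheath (cold-ion limit)."""
    return math.sqrt(Te_eV * EV / ion_mass)

cs = bohm_speed(2.0)   # 2 eV electrons, hydrogen ions
print(round(cs), "m/s")
```

    Ions must reach at least this speed at the sheath edge for a monotonic sheath to form; the paper's generalized criterion modifies this bound for multiple charged species and an oblique magnetic field.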

  16. Selection Ideal Coal Suppliers of Thermal Power Plants Using the Matter-Element Extension Model with Integrated Empowerment Method for Sustainability

    Directory of Open Access Journals (Sweden)

    Zhongfu Tan

    2014-01-01

    Full Text Available In order to reduce thermal power generation costs and improve market competitiveness, this paper established an evaluation system for coal supplier selection of thermal power plants, considering fuel quality, cost, creditworthiness, and sustainable development capacity factors, and put forward coal supplier selection strategies based on integrated empowering and ideal matter-element extension models. On the one hand, the integrated empowering model can overcome the limitations of purely subjective and objective weighting methods and better balance subjective and objective information. On the other hand, since the evaluation results of the traditional matter-element extension model may fall into the same class and yield only a partial ordering, the idealistic matter-element extension model is constructed to overcome this shortcoming. It selects the classical fields of the ideal positive and negative matter-elements, uses a closeness degree to replace the traditional maximum-degree-of-membership criterion, and calculates the positive or negative distance between the matter-element to be evaluated and the ideal matter-element; it can then obtain a full ordering of the evaluation schemes. Simulated and compared with the TOPSIS method, Romania selection method, and PROMETHEE method, the numerical example results show that the method put forward in this paper is effective and reliable.

  17. Geoscience Education and Public Outreach AND CRITERION 2: MAKING A BROADER IMPACT

    Science.gov (United States)

    Marlino, M.; Scotchmoor, J. G.

    2005-12-01

    The geosciences influence our daily lives and yet often go unnoticed by the general public. From the moment we listen to the weather report and fill up our cars for the daily commute, until we return to our homes constructed from natural resources, we rely on years of scientific research. The challenge facing the geosciences is to make explicit to the public not only the criticality of the research whose benefits they enjoy, but also to actively engage them as partners in the research effort, by providing them with sufficient understanding of the scientific enterprise so that they become thoughtful and proactive when making decisions in the polling booth. Today, there is broad recognition within the science and policy community that communication needs to be more effective and more visible, and that the public communication of the scientific enterprise is critical not only to its taxpayer support, but also to the maintenance of a skilled workforce and the standard of living expected by many Americans. In 1997, the National Science Board took the first critical step in creating a cultural change in the scientific community by requiring explicit consideration of the broader impacts of research in every research proposal. The so-called Criterion 2 has catalyzed a dramatic shift in expectations within the geoscience community and created an incentive for the science research community to select education and public outreach as a venue for responding to Criterion 2. In response, a workshop organized by the University of California Museum of Paleontology and the Digital Library for Earth System Education (DLESE) was held on the Berkeley campus May 11-13, 2005. The Geoscience EPO Workshop purposefully narrowed its focus to that of education and public outreach. This workshop was based on the premise that there are proven models and best practices for effective outreach strategies that need to be identified and shared with research scientists.

  18. A proposed risk acceptance criterion for nuclear fuel waste disposal

    International Nuclear Information System (INIS)

    Mehta, K.

    1985-06-01

    The need to establish a radiological protection criterion that applies specifically to the disposal of high-level nuclear fuel wastes arises from the difficulty of applying the present ICRP recommendations. These recommendations apply to situations in which radiological detriment can be actively controlled, while a permanent waste disposal facility is meant to operate without the need for corrective actions. Also, the risks associated with waste disposal depend on events and processes that have various probabilities of occurrence. In these circumstances, it is not suitable to apply standards that are based on a single dose limit as in the present ICRP recommendations, because it will generally be possible to envisage events, perhaps rare, that would lead to doses above any selected limit. To overcome these difficulties, it is proposed to base a criterion for acceptability on a set of dose values and corresponding limiting values of probabilities; this set of values constitutes a risk-limit line. A risk-limit line suitable for waste disposal is proposed that has characteristics consistent with the basic philosophy of the ICRP and UNSCEAR recommendations and is based on levels of natural background radiation.

  19. Criterion for testing multiparticle negative-partial-transpose entanglement

    International Nuclear Information System (INIS)

    Zeng, B.; Zhou, D.L.; Zhang, P.; Xu, Z.; You, L.

    2003-01-01

    We revisit the criterion of multiparticle entanglement based on the overlaps of a given quantum state ρ with maximally entangled states. For a system of m particles, each with N distinct states, we prove that ρ is m-particle negative-partial-transpose entangled if there exists a maximally entangled state |MES>, such that the overlap <MES|ρ|MES> > 1/N. While this sufficiency condition is weaker than the Peres-Horodecki criterion in all cases, it applies to multiparticle systems and becomes especially useful when the number of particles (m) is large. We also consider the converse of this criterion and illustrate its invalidity with counterexamples.
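    The overlap condition <MES|rho|MES> > 1/N is easy to evaluate numerically. A minimal illustration for the simplest case m = 2 qubits (N = 2) uses the Werner-type state rho = p|Phi+><Phi+| + (1-p)I/4, for which the overlap with the Bell state |Phi+> has a closed form; the mixing values p tested below are arbitrary choices.

```python
# Overlap criterion for a two-qubit Werner-type state (illustrative case).
# rho = p |Phi+><Phi+| + (1 - p) I/4, so <Phi+|rho|Phi+> = p + (1 - p)/4.

def overlap_with_bell(p):
    """<Phi+|rho|Phi+> for the Werner-type state with mixing parameter p."""
    return p + (1.0 - p) * 0.25

N = 2  # two levels per particle
for p in (0.1, 0.5):
    detected = overlap_with_bell(p) > 1.0 / N
    print(p, overlap_with_bell(p), detected)
```

    For this family the criterion flags entanglement exactly when p > 1/3, which coincides with the known entanglement threshold of the two-qubit Werner state.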

  20. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.

  1. Proposing a model for safety risk assessment in the construction industry using gray multi-criterion decision-making

    Directory of Open Access Journals (Sweden)

    S. M. Abootorabi

    2014-09-01

    Full Text Available Introduction: Statistical reports of the Social Security Organization indicate that, among the various industries, the construction industry has the highest number of work-related accidents, which are severe as well as frequent. Moreover, a large number of workers are employed in this industry, which shows the necessity of paying special attention to their safety. Therefore, safety risk assessment in the construction industry is an effective step in this regard. In this study, a method for ranking safety risks under uncertainty and with small sample sizes is presented, using gray multi-criterion decision-making. Material and Method: First, the factors affecting the occurrence of hazards in the construction industry were identified. Then, appropriate criteria for ranking the risks were determined and the problem was defined as a multi-criterion decision-making problem. Gray numbers were used to weight the criteria and to evaluate the alternatives against each criterion. Finally, the problem was solved using the gray possibility degree. Results: The results show that gray multi-criterion decision-making is an effective method for ranking risks when few samples are available, compared with other MCDM methods. Conclusion: Owing to its simple calculations and the fact that no membership function needs to be defined, the proposed method is preferable to fuzzy and statistical methods under uncertainty and small sample sizes.
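    The gray possibility degree used in the final ranking step can be sketched as follows. The formula is the common definition from the grey-systems literature (an assumption; the paper's exact formulation may differ), and the interval risk scores are invented.

```python
# Sketch of the grey possibility degree for comparing interval grey numbers.
# Common definition: P{a <= b} = max(0, L* - max(0, a_hi - b_lo)) / L*,
# where L* is the sum of the two interval lengths. Intervals are invented.

def grey_possibility(a, b):
    """Possibility degree that interval grey number a <= b."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    l_star = (a_hi - a_lo) + (b_hi - b_lo)
    return max(0.0, l_star - max(0.0, a_hi - b_lo)) / l_star

risk_scaffold = (0.6, 0.9)   # hypothetical interval score: falls from scaffolding
risk_electric = (0.3, 0.7)   # hypothetical interval score: electrical hazards

# A value near 1 means the scaffold risk very likely exceeds the electric risk.
print(grey_possibility(risk_electric, risk_scaffold))
```

    Pairwise possibility degrees like this one give a full ranking of the risks without requiring large samples or membership functions.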

  2. Design of Biomass Combined Heat and Power (CHP) Systems based on Economic Risk using Minimax Regret Criterion

    Directory of Open Access Journals (Sweden)

    Ling Wen Choong

    2018-01-01

    Full Text Available It is a great challenge to identify the optimum technologies for CHP systems that utilise biomass and convert it into heat and power. In this respect, industry decision-makers lack confidence to invest in biomass CHP due to the economic risk arising from varying energy demand. This research work presents a linear programming systematic framework to design biomass CHP systems based on the potential loss of profit due to varying energy demand. The Minimax Regret Criterion (MRC) approach was used to assess the maximum regret between selections of a given biomass CHP design under different energy demands. Based on this, the model determines the optimal biomass CHP design with minimum regret in economic opportunity. As Feed-in Tariff (FiT) rates affect the revenue of the CHP plant, a sensitivity analysis of FiT rates on the selection of the biomass CHP design was then performed. In addition, a design analysis of the trend of the optimum designs selected by the model was conducted. To demonstrate the proposed framework, a case study was solved using the proposed approach, focused on designing a biomass CHP system for a palm oil mill (POM), given the large energy potential of oil palm biomass in Malaysia.
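    The minimax-regret selection step can be sketched directly. The profit matrix (design x demand scenario) below is invented for illustration; real values would come from the linear-programming model described in the paper.

```python
# Sketch of the Minimax Regret Criterion for choosing a CHP design.
# Profits (in arbitrary money units) per design and demand scenario
# are hypothetical placeholders, not the paper's data.

profits = {
    "design_1": {"low": 120, "mid": 180, "high": 200},
    "design_2": {"low": 150, "mid": 170, "high": 175},
    "design_3": {"low": 100, "mid": 190, "high": 230},
}

scenarios = ["low", "mid", "high"]
best = {s: max(p[s] for p in profits.values()) for s in scenarios}

# Regret = profit forgone relative to the best design for that scenario.
regret = {d: {s: best[s] - profits[d][s] for s in scenarios} for d in profits}
max_regret = {d: max(r.values()) for d, r in regret.items()}

# Pick the design whose worst-case regret is smallest.
choice = min(max_regret, key=max_regret.get)
print(choice, max_regret)
```

    Here the criterion favours a design that is never far from optimal in any scenario over designs that excel in one scenario but lose heavily in another.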

  3. A risk-based microbiological criterion that uses the relative risk as the critical limit

    DEFF Research Database (Denmark)

    Andersen, Jens Kirk; Nørrung, Birgit; da Costa Alves Machado, Simone

    2015-01-01

    A risk-based microbiological criterion is described that is based on the relative risk associated with the analytical results of a number of samples taken from a food lot. The acceptable limit is a specified level of risk, not a specific number of microorganisms as in other microbiological criteria. The approach requires the availability of a quantitative microbiological risk assessment model to get risk estimates for food products from sampled food lots. By relating these food lot risk estimates to the mean risk estimate associated with a representative baseline data set, a relative risk estimate can be obtained. This relative risk estimate can then be compared with a critical value defined by the criterion. This microbiological criterion based on a relative risk limit is particularly useful when quantitative enumeration data are available and when the prevalence of the microorganism...
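    The acceptance decision reduces to simple arithmetic once the risk estimates exist. All numbers below are invented placeholders for outputs of a quantitative microbiological risk assessment (QMRA) model; the critical relative risk is likewise a hypothetical choice.

```python
# Sketch of a relative-risk microbiological criterion (illustrative values).
# A lot is rejected when its estimated risk, relative to the mean baseline
# risk, exceeds the critical limit defined by the criterion.

baseline_risks = [1e-6, 2e-6, 4e-6, 1e-6]   # per-serving risks, baseline lots
lot_risk = 9e-6                              # QMRA estimate for the sampled lot
critical_relative_risk = 4.0                 # hypothetical critical limit

mean_baseline = sum(baseline_risks) / len(baseline_risks)
relative_risk = lot_risk / mean_baseline
accept = relative_risk <= critical_relative_risk
print(relative_risk, accept)
```

    The limit is thus expressed on the risk scale rather than as a count of microorganisms, which is the point of the criterion.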

  4. Sperm head's birefringence: a new criterion for sperm selection.

    Science.gov (United States)

    Gianaroli, Luca; Magli, M Cristina; Collodel, Giulia; Moretti, Elena; Ferraretti, Anna P; Baccetti, Baccio

    2008-07-01

    To investigate the characteristics of birefringence in human sperm heads and apply polarization microscopy for sperm selection at intracytoplasmic sperm injection (ICSI). Prospective randomized study. Reproductive Medicine Unit, Società Italiana Studi Medicina della Riproduzione, Bologna, Italy. A total of 112 male patients had birefringent sperm selected for ICSI (study group). The clinical outcome was compared with that obtained in 119 couples who underwent a conventional ICSI cycle (control group). The proportion of birefringent spermatozoa was evaluated before and after treatment in relation to the sperm sample quality. Embryo development and clinical outcome in the study group were compared with those in the controls. Proportion of birefringent sperm heads, rates of fertilization, cleavage, pregnancy, implantation, and ongoing implantation. The proportion of birefringent spermatozoa was significantly higher in normospermic samples when compared with oligoasthenoteratospermic samples with no progressive motility and testicular sperm extraction samples. Although fertilization and cleavage rates did not differ between the study and control groups, in the most severe male factor condition (oligoasthenoteratospermic with no progressive motility and testicular sperm extraction), the rates of clinical pregnancy, ongoing pregnancy, and implantation were significantly higher in the study group versus the controls. The analysis of birefringence in the sperm head could represent both a diagnostic tool and a novel method for sperm selection.

  5. Two novel synchronization criterions for a unified chaotic system

    International Nuclear Information System (INIS)

    Tao Chaohai; Xiong Hongxia; Hu Feng

    2006-01-01

    Two novel synchronization criteria are proposed in this paper: a drive-response synchronization scheme and an adaptive synchronization scheme. These criteria can be applied to a large class of chaotic systems and are very useful for secure communication.

  6. Electricity Consumption Forecasting Scheme via Improved LSSVM with Maximum Correntropy Criterion

    Directory of Open Access Journals (Sweden)

    Jiandong Duan

    2018-02-01

    Full Text Available In recent years, with the deepening of China’s electricity sales-side reform and the gradual opening of the electricity market, the forecasting of electricity consumption (FoEC) has become an extremely important technique for the electricity market. At present, how to forecast electricity consumption accurately and evaluate the results scientifically remain key research topics. In this paper, we propose a novel prediction scheme based on the least-square support vector machine (LSSVM) model with a maximum correntropy criterion (MCC) to forecast electricity consumption (EC). Firstly, the electricity characteristics of various industries are analyzed to determine the factors that mainly affect changes in electricity consumption, such as gross domestic product (GDP) and temperature. Secondly, given the small-sample nature of the available data, the LSSVM model is employed as the prediction model. In order to optimize the parameters of the LSSVM model, we further use the local similarity function MCC as the evaluation criterion. Thirdly, we employ K-fold cross-validation and grid searching to improve the learning ability. In the experiments, we have used the EC data of Shaanxi Province in China to evaluate the proposed prediction scheme, and the results show that it outperforms the method based on the traditional LSSVM model.
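    The correntropy measure at the heart of the MCC can be sketched as a Gaussian-kernel similarity between predictions and targets. The kernel width and the sample values below are illustrative assumptions, not the paper's data or its exact parameter-tuning procedure.

```python
import math

# Sketch of correntropy as a model-evaluation score (larger is better,
# maximum 1.0). Kernel width sigma and all sample values are illustrative.

def correntropy(y_true, y_pred, sigma=1.0):
    """Mean Gaussian-kernel similarity between targets and predictions."""
    return sum(math.exp(-(t - p) ** 2 / (2 * sigma ** 2))
               for t, p in zip(y_true, y_pred)) / len(y_true)

y = [10.0, 12.0, 11.5]
pred_a = [10.1, 11.9, 11.4]   # small errors everywhere
pred_b = [10.0, 12.0, 5.0]    # exact on two points, one gross outlier

# Unlike mean squared error, correntropy saturates on the outlier instead of
# being dominated by it, which suits noisy small-sample electricity data.
print(correntropy(y, pred_a), correntropy(y, pred_b))
```

    A grid search over model parameters that maximizes this score is robust to occasional gross errors, which is the motivation for pairing MCC with the LSSVM.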

  7. Selection of an appropriately simple storm runoff model

    Directory of Open Access Journals (Sweden)

    A. I. J. M. van Dijk

    2010-03-01

    Full Text Available An appropriately simple event runoff model for catchment hydrological studies was derived. The model was selected from several variants as having the optimum balance between simplicity and the ability to explain daily observations of streamflow from 260 Australian catchments (23–1902 km2). Event rainfall and runoff were estimated from the observations through a combination of baseflow separation and storm flow recession analysis, producing a storm flow recession coefficient (kQF). Various model structures with up to six free parameters were investigated, covering most of the equations applied in existing lumped catchment models. The performance of the alternative structures and free parameters was expressed in Akaike's Final Prediction Error Criterion (FPEC) and corresponding Nash-Sutcliffe model efficiencies (NSME) for event runoff totals. For each model variant, the number of free parameters was reduced in steps based on calculated parameter sensitivity. The resulting optimal model structure had two or three free parameters; the first describing the non-linear relationship between event rainfall and runoff (Smax), the second relating runoff to antecedent groundwater storage (CSg), and a third describing initial rainfall losses (Li), which could be set at 8 mm without affecting model performance too much. The best three-parameter model produced a median NSME of 0.64 and outperformed, for example, the Soil Conservation Service Curve Number technique (median NSME 0.30–0.41). Parameter estimation in ungauged catchments is likely to be challenging: 64% of the variance in kQF among stations could be explained by catchment climate indicators and spatial correlation, but the corresponding numbers were a modest 45% for CSg, 21% for Smax and none for Li. In gauged catchments, better...
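    The two selection statistics used above are both one-liners. The observed and modelled event-runoff series below are made up for illustration; FPE is written here in its standard textbook form, which may differ in detail from the paper's exact FPEC implementation.

```python
# Sketch of the Nash-Sutcliffe model efficiency (NSME) and Akaike's Final
# Prediction Error (FPE). The runoff series and parameter count are invented.

def nsme(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def fpe(obs, sim, n_params):
    """Akaike's FPE: residual variance inflated by a penalty on the number
    of free parameters (lower is better)."""
    n = len(obs)
    mse = sum((o - s) ** 2 for o, s in zip(obs, sim)) / n
    return mse * (n + n_params) / (n - n_params)

obs = [5.0, 8.0, 3.0, 12.0, 7.0]   # hypothetical event runoff totals
sim = [4.5, 8.5, 3.5, 11.0, 7.5]   # hypothetical model output
print(nsme(obs, sim), fpe(obs, sim, n_params=3))
```

    Because FPE penalizes extra parameters, it lets models of different complexity be compared fairly, which is how the parameter count was stepped down in the study.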

  8. Corner-point criterion for assessing nonlinear image processing imagers

    Science.gov (United States)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the perception of the CP direction of the one minority-value pixel among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded image CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. This criterion is then compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered by proposing an analysis scheme combining two methods: a CP measurement with a real-signature test target for the highly non-linear part (imaging), and conventional methods for the more linear part (displaying).

  9. Blind equalization with criterion with memory nonlinearity

    Science.gov (United States)

    Chen, Yuanjie; Nikias, Chrysostomos L.; Proakis, John G.

    1992-06-01

    Blind equalization methods usually combat the linear distortion caused by a nonideal channel via a transversal filter, without resorting to a priori known training sequences. We introduce a new criterion with memory nonlinearity (CRIMNO) for the blind equalization problem. The basic idea of this criterion is to augment the Godard [or constant modulus algorithm (CMA)] cost function with additional terms that penalize the autocorrelations of the equalizer outputs. Several variations of the CRIMNO algorithm are derived, with the variations dependent on (1) whether empirical averages or single-point estimates are used to approximate the expectations, (2) whether the recent or the delayed equalizer coefficients are used, and (3) whether the weights applied to the autocorrelation terms are fixed or are allowed to adapt. Simulation experiments show that the CRIMNO algorithm, and especially its adaptive-weight version, exhibits faster convergence than the Godard (or CMA) algorithm. Extensions of the CRIMNO criterion to accommodate the case of correlated inputs to the channel are also presented.
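    The shape of the CRIMNO cost can be sketched for real-valued signals: the Godard/CMA dispersion term plus weighted squared autocorrelations of the equalizer output. The weights, lag count, and toy sequences below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of a CRIMNO-style cost for real-valued equalizer outputs:
# Godard (CMA, p = 2) dispersion term plus penalties on output
# autocorrelations at lags 1..M. All numbers are illustrative.

def crimno_cost(y, dispersion, weights):
    n = len(y)
    # CMA term: deviation of |y|^2 from the dispersion constant.
    godard = sum((abs(v) ** 2 - dispersion) ** 2 for v in y) / n
    # Memory-nonlinearity terms: weighted squared autocorrelations.
    penalty = 0.0
    for lag, w in enumerate(weights, start=1):
        acf = sum(y[i] * y[i - lag] for i in range(lag, n)) / (n - lag)
        penalty += w * acf ** 2
    return godard + penalty

y_white = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0]
y_correlated = [1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0]

# Both sequences have unit modulus (zero CMA cost), but the correlated one
# is penalized through its lag-1 autocorrelation.
print(crimno_cost(y_white, 1.0, [0.5]), crimno_cost(y_correlated, 1.0, [0.5]))
```

    Minimizing this cost therefore drives the equalizer output toward both constant modulus and whiteness, which is the extra information CRIMNO exploits over plain CMA.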

  10. Importance biasing quality criterion based on contribution response theory

    International Nuclear Information System (INIS)

    Borisov, N.M.; Panin, M.P.

    2001-01-01

    The report proposes a visual quality criterion for importance biasing in both forward and adjoint simulation. The similarity of the random collision event distributions in contribution Monte Carlo and in importance biasing is proved. The conservation of the total number of random trajectory crossings of the surfaces separating the source and the detector is proposed as the importance biasing quality criterion. The use of this criterion is demonstrated on the example of forward vs. adjoint importance biasing in a gamma-ray deep-penetration problem. Because more data have been published on forward field characteristics than on adjoint ones, the adjoint importance function can be approximated more accurately than the forward one, so adjoint importance simulation is more effective than forward simulation. The proposed criterion indicates this visually, showing the most uniform distribution of random trajectory crossing events for the most effective importance biasing parameters and pointing in the direction of tuning the importance biasing parameters. (orig.)

  11. Criterion of damage beginning: experimental identification for laminate composite

    International Nuclear Information System (INIS)

    Thiebaud, F.; Perreux, D.; Varchon, D.; Lebras, J.

    1996-01-01

    The aim of this study is to propose a criterion for the onset of damage in laminate composites. The material is a glass-epoxy [+55°, −55°]n laminate produced by the filament winding process. First of all, a description of the damage is performed, which allows a damage variable to be defined. Thanks to the free energy potential, an associated variable is defined, and the damage criterion is written using this variable. The parameter of the criterion is identified using mechanical and acoustical methods; the results are compared and exhibit good agreement. (authors). 13 refs., 5 figs

  12. Location Selection for Hardboard Industry in Mazandaran Province

    Directory of Open Access Journals (Sweden)

    Majid Azizi

    2012-01-01

    Full Text Available This research presents an optimum framework for hardboard industry location selection in Mazandaran Province. Considering that Iran has only two depreciated hardboard plants with very old technology, the establishment of new plants is vital. To this end, Mazandaran Province enjoys priority over other provinces based on its resources of the raw lignocellulosic materials required for the wood and paper industries. The model presented in this article uses AHP benefit/cost ratios. The results indicate that the criterion of ‘material and production’ with a weight of 0.327 and the sub-criterion of ‘reliability of supply’ with a weight of 0.146 have the highest priorities, and the city of Sari is the best alternative.
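    The criterion weights quoted above come from AHP pairwise comparisons. A common way to derive such weights is the row geometric-mean approximation, sketched here on a made-up 3x3 comparison matrix (not data from the study):

```python
import math

# Sketch of AHP priority weights via the geometric-mean (logarithmic
# least squares) approximation. The pairwise matrix is a hypothetical
# example on Saaty's 1-9 scale, with reciprocal entries.

def ahp_weights(matrix):
    """Row geometric means, normalized to sum to 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

pairwise = [
    [1.0, 3.0, 5.0],   # criterion 1 vs 1, 2, 3
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]

w = ahp_weights(pairwise)
print([round(x, 3) for x in w])
```

    The dominant criterion receives the largest weight; in a full AHP benefit/cost study this is done separately for the benefit and cost hierarchies before forming the ratios.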

  13. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-01-01

    …a proper criterion seeks to find the best subset of features describing data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods which are wrapped around a classification algorithm and use a…

  14. General stability criterion for inviscid parallel flow

    International Nuclear Information System (INIS)

    Sun Liang

    2007-01-01

    Arnol'd's second stability theorem is approached from an elementary point of view. First, a sufficient criterion for stability is found analytically as either -μ1 < U''/(U - Us) < 0 or 0 < U''/(U - Us) in the flow, where Us is the velocity at the inflection point and μ1 is the eigenvalue of Poincare's problem. Second, this criterion is generalized to barotropic geophysical flows in the β plane. The connections between the present criteria and Arnol'd's nonlinear criteria are also discussed. The proofs are completely elementary and so could be used to teach undergraduate students

  15. The Concept of Performance Levels in Criterion-Referenced Assessment.

    Science.gov (United States)

    Hewitson, Mal

    The concept of performance levels in criterion-referenced assessment is explored by applying the idea to different types of tests commonly used in schools, mastery tests (including diagnostic tests) and achievement tests. In mastery tests, a threshold performance standard must be established for each criterion. Attainment of this threshold…

  16. Support Vector Feature Selection for Early Detection of Anastomosis Leakage From Bag-of-Words in Electronic Health Records.

    Science.gov (United States)

    Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert

    2016-09-01

    The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested toward feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective colorectal cancer (CRC) surgery, using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (leave-one-out-based test); 2) an intensive-computation statistical criterion (Bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC surgery (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision-making phase.
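    The general idea of ranking bag-of-words features by the weights of a linear maximum-margin classifier can be illustrated with a tiny stand-in: a perceptron (used here only because it fits in a few dependency-free lines; the paper uses a linear SVM with statistical selection criteria). The vocabulary and toy records are invented.

```python
# Illustrative stand-in for linear-classifier feature ranking on
# bag-of-words data: train a perceptron, then rank words by |weight|.
# Vocabulary, records, and labels are all hypothetical.

vocab = ["fever", "discharge", "leakage", "stable", "pain"]
records = [            # (bag-of-words counts, label); 1 = anastomosis leakage
    ([2, 0, 1, 0, 1], 1),
    ([1, 0, 2, 0, 2], 1),
    ([0, 1, 0, 2, 0], 0),
    ([0, 2, 0, 1, 1], 0),
]

w = [0.0] * len(vocab)
b = 0.0
for _ in range(20):                      # perceptron training epochs
    for x, y in records:
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        pred = 1 if score > 0 else 0
        if pred != y:                    # update only on mistakes
            for i, xi in enumerate(x):
                w[i] += (y - pred) * xi
            b += (y - pred)

# Words with the largest |weight| are the most discriminative features.
ranked = sorted(zip(vocab, w), key=lambda t: abs(t[1]), reverse=True)
print([word for word, _ in ranked])
```

    In the paper this ranking role is played by the SVM margin together with leave-one-out, bootstrap, and kernel-entropy criteria, which additionally estimate the stability of each selected feature.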

  17. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. This allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
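    The correction term at the core of the selection-normal model is the inverse Mills ratio, lambda(z) = phi(z)/Phi(z), evaluated at the selection index; loosely speaking, the selection-t model replaces this normal-based machinery with Student-t analogues. A dependency-free evaluation (the index values are purely illustrative):

```python
import math

# Sketch of the inverse Mills ratio lambda(z) = phi(z) / Phi(z), the
# selection-bias correction term in the Heckman selection-normal model.
# Index values below are illustrative.

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def inverse_mills(z):
    """Expected truncation correction at selection index z."""
    return norm_pdf(z) / norm_cdf(z)

for z in (-1.0, 0.0, 1.0):
    print(z, round(inverse_mills(z), 4))
```

    The ratio is large for observations that barely cleared selection (negative index) and small for those selected comfortably, which is exactly the bias pattern the outcome equation must absorb.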

  18. Interface Pattern Selection Criterion for Cellular Structures in Directional Solidification

    Science.gov (United States)

    Trivedi, R.; Tewari, S. N.; Kurtze, D.

    1999-01-01

    The aim of this investigation is to establish key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. We shall first address scientific concepts that are crucial in the selection of interface patterns. Next, the results of ground-based experimental studies in the Al-4.0 wt % Cu system will be described. Both experimental studies and theoretical calculations will be presented to establish the need for microgravity experiments.

  19. DATA ANALYSIS BY FORMAL METHODS OF ESTIMATION OF INDEXES OF RATING CRITERION IN PROCESS OF ACCUMULATION OF DATA ABOUT WORKING OF THE TEACHING STAFF

    Directory of Open Access Journals (Sweden)

    Alexey E. Fedoseev

    2014-01-01

    Full Text Available The article considers the development of formal methods for assessing rating criterion indexes. It deals with a mathematical model that makes it possible to connect quantitative rating criterion characteristics, measured in various scales, with an intuitive idea of them. A solution to the problem of rating criterion estimation is proposed.

  20. A criterion for heated pipe design by linear electric resistances

    International Nuclear Information System (INIS)

    Bloch, M.; Cruz, J.R.B.

    1984-01-01

    A criterion for the installation of linear electrical elements on horizontal tubes is obtained in this work. The criterion is based on the calculation of the thermal stresses caused by the non-uniform temperature distribution in the tube cross-section. The finite difference method and the SAP IV computer code are both used in the calculations. The criterion is applied to the thermal circuits of the IEN, which have tube diameters varying from φ 1/2 in to φ 8 in. (author) [pt

  1. Novel global robust stability criterion for neural networks with delay

    International Nuclear Information System (INIS)

    Singh, Vimal

    2009-01-01

    A novel criterion for the global robust stability of Hopfield-type interval neural networks with delay is presented. An example illustrating the improvement of the present criterion over several recently reported criteria is given.

  2. Selectivity criterion for pyrazolo[3,4-b]pyrid[az]ine derivatives as GSK-3 inhibitors: CoMFA and molecular docking studies.

    Science.gov (United States)

    Patel, Dhilon S; Bharatam, Prasad V

    2008-05-01

    In the development of drugs targeting GSK-3 for the treatment of diabetes mellitus, selective inhibition is an important requirement owing to the possibility of side effects arising from the inhibition of other kinases. A three-dimensional quantitative structure-activity relationship (3D-QSAR) study has been carried out on a set of pyrazolo[3,4-b]pyrid[az]ine derivatives, which includes non-selective and selective GSK-3 inhibitors. The CoMFA models were derived from a training set of 59 molecules. A test set containing 14 molecules (not used in model generation) was used to validate the CoMFA models. The best CoMFA model, generated by applying a leave-one-out (LOO) cross-validation study, gave cross-validated r(cv)(2) and conventional r(conv)(2) values of 0.60 and 0.97, respectively, and an r(pred)(2) value of 0.55, which demonstrates the predictive ability of the model. The developed models explain well (i) the observed variance in the activity and (ii) the structural differences between the selective and non-selective GSK-3 inhibitors. Validation based on molecular docking has also been carried out to explain the structural differences between the selective and non-selective molecules in the given series.
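    The LOO cross-validated statistic quoted here generalizes beyond CoMFA: hold each compound out, refit, predict it, and compare the predictive residual sum of squares (PRESS) with the total sum of squares, q² = 1 − PRESS/SS. A generic numpy sketch, with ordinary least squares standing in for the PLS regression actually used in CoMFA, on synthetic descriptor data:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated r^2: q2 = 1 - PRESS / SS."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i                      # hold compound i out
        Xd = np.column_stack([np.ones(mask.sum()), X[mask]])
        beta = np.linalg.lstsq(Xd, y[mask], rcond=None)[0]
        pred = beta[0] + X[i] @ beta[1:]              # predict the held-out compound
        press += (y[i] - pred) ** 2
    ss = np.sum((y - y.mean()) ** 2)                  # total sum of squares
    return 1.0 - press / ss

rng = np.random.default_rng(1)
X = rng.normal(size=(59, 3))                          # 59 "compounds", 3 descriptors
y = X @ np.array([1.0, -0.5, 2.0]) + 0.1 * rng.normal(size=59)
q2 = loo_q2(X, y)                                     # close to 1 for this clean data
```

    Unlike the conventional r², q² penalizes overfitting because each prediction is made for a compound the model never saw, which is why the abstract reports both values.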

  3. Joined application of a multiaxial critical plane criterion and a strain energy density criterion in low-cycle fatigue

    Directory of Open Access Journals (Sweden)

    Andrea Carpinteri

    2017-07-01

    Full Text Available In the present paper, the multiaxial fatigue life assessment of notched structural components is performed by employing a strain-based multiaxial fatigue criterion. Such a criterion, based on the critical plane concept, is extended by implementing the control volume concept related to the Strain Energy Density (SED) approach: a material point located at a certain distance from the notch tip is assumed to be the verification point at which to perform the assessment. Such a distance, measured along the notch bisector, is a function of both the biaxiality ratio (defined as the ratio between the applied shear stress amplitude and the normal stress amplitude) and the control volume radii under Mode I and Mode III. Once the position of the verification point is determined, the fatigue lifetime is assessed through an equivalent strain amplitude, acting on the critical plane, together with a unique material reference curve (i.e. the Manson-Coffin curve). Some uniaxial and multiaxial fatigue data related to V-notched round bars made of titanium grade 5 alloy (Ti-6Al-4V) are examined to validate the present criterion.

  4. Distance criterion for hydrogen bond

    Indian Academy of Sciences (India)

    In a D-H...A contact, the D...A distance must be less than the sum of the van der Waals radii of the D and A atoms for it to be a hydrogen bond.

  5. A New Infrared Color Criterion for the Selection of 0 < z < 7 AGNs: Application to Deep Fields and Implications for JWST Surveys

    Science.gov (United States)

    Messias, H.; Afonso, J.; Salvato, M.; Mobasher, B.; Hopkins, A. M.

    2012-08-01

    It is widely accepted that observations at mid-infrared (mid-IR) wavelengths enable the selection of galaxies with nuclear activity, which may not be revealed even in the deepest X-ray surveys. Many mid-IR color-color criteria have been explored to accomplish this goal and tested thoroughly in the literature. Besides missing many low-luminosity active galactic nuclei (AGNs), one of the main conclusions is that, with increasing redshift, the contamination by non-active galaxies becomes significant (especially at z ≳ 2.5). This is problematic for the study of the AGN phenomenon in the early universe, the main goal of many of the current and future deep extragalactic surveys. In this work new near- and mid-IR color diagnostics are explored, aiming for improved efficiency—better completeness and less contamination—in selecting AGNs out to very high redshifts. We restrict our study to the James Webb Space Telescope wavelength range (0.6-27 μm). The criteria are created based on the predictions of state-of-the-art galaxy and AGN templates covering a wide variety of galaxy properties, and tested against control samples with deep multi-wavelength coverage (ranging from the X-rays to radio frequencies). We show that the colors Ks - [4.5], [4.5] - [8.0], and [8.0] - [24] are ideal as AGN/non-AGN diagnostics over successive redshift ranges out to z ~ 2.5-3. However, when the source redshift is unknown, these colors should be combined. We thus develop an improved IR criterion (using the Ks and IRAC bands, KI) as a new alternative at z < 2.5 (a ~50%-90% level of successful AGN selection). We also propose KIM (using the Ks, IRAC, and MIPS 24 μm bands), which aims to select AGN hosts from local distances to as far back as the end of reionization (0 < z < 7). Overall, KIM shows a ~30%-40% completeness and a >70%-90% level of successful AGN selection. KI and KIM are built to be reliable against a ~10%-20% error in flux, are based on existing filters, and are suitable for immediate use.

  6. A model expansion criterion for treating surface topography in ray path calculations using the eikonal equation

    International Nuclear Information System (INIS)

    Ma, Ting; Zhang, Zhongjie

    2014-01-01

    Irregular surface topography has revolutionized how seismic traveltime is calculated and the data are processed. There are two main schemes for dealing with an irregular surface in the seismic first-arrival traveltime calculation: (1) expanding the model and (2) flattening the surface irregularities. In the first scheme, a notional infill medium is added above the surface to expand the physical space into a regular space, as required by the eikonal equation solver. Here, we evaluate the chosen propagation velocity in the infill medium through ray path tracking with the eikonal equation-solved traveltime field, and observe that the ray paths will be physically unrealistic for some values of this propagation velocity. The choice of a suitable propagation velocity in the infill medium is crucial for seismic processing of irregular topography. Our model expansion criterion for dealing with surface topography in the calculation of traveltime and ray paths using the eikonal equation highlights the importance of both the propagation velocity of the infill physical medium and the topography gradient. (paper)
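    Whatever infill velocity is chosen above the surface, the traveltime field itself comes from a finite-difference eikonal solver on the expanded regular grid. Below is a minimal first-order fast-sweeping solver for |∇T| = 1/v on a uniform 2-D grid, a generic sketch rather than the authors' code:

```python
import numpy as np

def fast_sweep_eikonal(vel, h, src, n_pass=4):
    """First-order fast-sweeping solver for |grad T| = 1/vel on a uniform
    n x n grid with spacing h; src is the (i, j) source node.
    Returns the first-arrival traveltime field T."""
    n = vel.shape[0]
    T = np.full((n, n), np.inf)
    T[src] = 0.0
    orders = [range(n), range(n - 1, -1, -1)]
    for _ in range(n_pass):
        for io in orders:                       # 4 alternating sweep orderings
            for jo in orders:
                for i in io:
                    for j in jo:
                        if (i, j) == src:
                            continue
                        a = min(T[i - 1, j] if i > 0 else np.inf,
                                T[i + 1, j] if i < n - 1 else np.inf)
                        b = min(T[i, j - 1] if j > 0 else np.inf,
                                T[i, j + 1] if j < n - 1 else np.inf)
                        m = min(a, b)
                        if m == np.inf:
                            continue            # no accepted neighbor yet
                        f = h / vel[i, j]       # local slowness times spacing
                        if abs(a - b) >= f:     # causal one-sided update
                            t_new = m + f
                        else:                   # two-sided quadratic update
                            t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                        if t_new < T[i, j]:
                            T[i, j] = t_new
    return T

n, h = 81, 1.0 / 80.0
T = fast_sweep_eikonal(np.ones((n, n)), h, (40, 40))   # homogeneous unit-speed test
```

    In the model-expansion setting, `vel` would hold the infill velocity above the topography, and the abstract's point is that ray paths traced back through this T field are physically realistic only for suitable choices of that infill velocity.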

  7. Criterion for the onset of quench for low-flow reflood

    International Nuclear Information System (INIS)

    Hsu, Y.Y.; Young, M.W.

    1982-07-01

    This study provides a criterion for the onset of quench for low-flow reflood. The criterion is a combination of two conditions: T_clad < T_limiting-quench, where T denotes temperature, and α < 0.95, where α denotes the void fraction. This criterion was obtained by examining temperature data from tests simulating PWR reflood, such as the FLECHT, THTF, PBF, CCTF, and FEBA tests, with void fraction data from the CCTF, FEBA, and FLECHT low-flood tests. The data show that quenching initiated at α = 0.95 and that the majority of quench occurred at void fractions near 0.85. The results show that rods can be completely quenched by entrained droplets even if the collapsed liquid level does not advance. A thorough discussion of the analysis which supports this quench criterion is given in the text of this report.
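    The two-condition criterion can be encoded directly. In the sketch below the limiting quench temperature is an illustrative placeholder, since its actual value is material- and condition-dependent and is not given in this abstract:

```python
def quench_onset(t_clad_k, void_fraction, t_limiting_quench_k=800.0):
    """Onset-of-quench test: both the clad-temperature condition and the
    void-fraction condition must hold simultaneously.
    t_limiting_quench_k (in kelvin) is an illustrative placeholder."""
    return t_clad_k < t_limiting_quench_k and void_fraction < 0.95

# a rod at 750 K in a mixture with void fraction 0.85 satisfies both conditions
ok = quench_onset(750.0, 0.85)
# the same rod surrounded by nearly pure vapor (void fraction 0.97) does not
dry = quench_onset(750.0, 0.97)
```

    The conjunction matters: per the abstract, neither a cool clad alone nor a low void fraction alone marks the onset of quench.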

  8. The alternative DSM-5 personality disorder traits criterion

    DEFF Research Database (Denmark)

    Bach, Bo; Maples-Keller, Jessica L; Bo, Sune

    2016-01-01

    The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013a) offers an alternative model for Personality Disorders (PDs) in Section III, which consists in part of a pathological personality traits criterion measured with the Personality Inventory for DSM-5 (PID-5). The PID-5 self-report instrument currently exists in the original 220-item form, a short 100-item form, and a brief 25-item form. For clinicians and researchers, the choice of a particular PID-5 form depends on feasibility, but also on reliability and validity.

  9. Self-Adjointness Criterion for Operators in Fock Spaces

    International Nuclear Information System (INIS)

    Falconi, Marco

    2015-01-01

    In this paper we provide a criterion of essential self-adjointness for operators in the tensor product of a separable Hilbert space and a Fock space. The class of operators we consider may contain a self-adjoint part, a part that preserves the number of Fock space particles and a non-diagonal part that is at most quadratic with respect to the creation and annihilation operators. The hypotheses of the criterion are satisfied in several interesting applications

  10. Criterion-based laparoscopic training reduces total training time

    OpenAIRE

    Brinkman, Willem M.; Buzink, Sonja N.; Alevizos, Leonidas; de Hingh, Ignace H. J. T.; Jakimowicz, Jack J.

    2011-01-01

    Introduction The benefits of criterion-based laparoscopic training over time-oriented training are unclear. The purpose of this study is to compare these types of training based on training outcome and time efficiency. Methods During four training sessions within 1 week (one session per day) 34 medical interns (no laparoscopic experience) practiced on two basic tasks on the Simbionix LAP Mentor virtual-reality (VR) simulator: ‘clipping and grasping’ and ‘cutting’. Group C (criterion-based) (N...

  11. The precautionary principle as a rational decision criterion

    International Nuclear Information System (INIS)

    Hovi, Jon

    2001-12-01

    The paper asks if the precautionary principle may be seen as a rational decision criterion. Six main questions are discussed. 1. Does the principle basically represent a particular set of political options or is it a genuine decision criterion? 2. If it is the latter, can it be reduced to any of the existing criteria for decision making under uncertainty? 3. In what kinds of situation is the principle applicable? 4. What is the relation between the precautionary principle and other principles for environmental regulation? 5. How plausible is the principle's claim that the burden of proof should be reversed? 6. Do the proponents of environmental regulation carry no burden of proof at all? A main conclusion is that, for now at least, the principle contains too many unclear elements to satisfy the requirements of precision and consistency that should reasonably be satisfied by a rational decision criterion. (author)

  12. Probabilistic interpretation of the reduction criterion for entanglement

    International Nuclear Information System (INIS)

    Zhang, Zhengmin; Luo, Shunlong

    2007-01-01

    Inspired by the idea of conditional probabilities, we introduce a variant of conditional density operators. But unlike the conditional probabilities which are bounded by 1, the conditional density operators may have eigenvalues exceeding 1 for entangled states. This has the consequence that although any bivariate classical probability distribution has a natural separable decomposition in terms of conditional probabilities, we do not have a quantum analogue of this separable decomposition in general. The 'nonclassical' eigenvalues of conditional density operators are indications of entanglement. The resulting separability criterion turns out to be equivalent to the reduction criterion introduced by Horodecki [Phys. Rev. A 59, 4206 (1999)] and Cerf et al. [Phys. Rev. A 60, 898 (1999)]. This supplies an intuitive probabilistic interpretation for the reduction criterion. The conditional density operators are also used to define a form of quantum conditional entropy which provides an alternative mechanism to reveal quantum discord
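    The criterion is simple to check numerically: a bipartite state ρ is entangled whenever ρ_A ⊗ I − ρ has a negative eigenvalue (equivalently, in the conditional-density-operator reading above, an eigenvalue of the conditional operator exceeds 1). A small sketch for a two-qubit Bell state:

```python
import numpy as np

# two-qubit Bell state |Phi+> = (|00> + |11>) / sqrt(2)
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(phi, phi)

# reduced state rho_A: partial trace over subsystem B
rho_a = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # equals I/2 here

# reduction criterion: rho_A (x) I - rho >= 0 for every separable state
M = np.kron(rho_a, np.eye(2)) - rho
eigs = np.linalg.eigvalsh(M)
# minimum eigenvalue is -1/2 < 0, certifying entanglement of the Bell state
```

    For any separable two-qubit state the same construction yields only non-negative eigenvalues, mirroring the classical case where conditional probabilities never exceed 1.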

  13. Slope Safety Factor Calculations With Non-Linear Yield Criterion Using Finite Elements

    DEFF Research Database (Denmark)

    Clausen, Johan; Damkilde, Lars

    2006-01-01

    The factor of safety for a slope is calculated with the finite element method using a non-linear yield criterion of the Hoek-Brown type. The parameters of the Hoek-Brown criterion are found from triaxial test data. Parameters of the linear Mohr-Coulomb criterion are calibrated to the same triaxial test data. As the triaxial tests are carried out at much higher stress levels than those present in a slope failure, this leads to the conclusion that the use of the non-linear criterion leads to a safer slope design.
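    The generalized Hoek-Brown envelope used in such analyses gives the major principal stress at failure as σ1 = σ3 + σci·(mb·σ3/σci + s)^a. A minimal evaluation sketch follows; the parameter values are illustrative, not taken from the paper's triaxial data:

```python
def hoek_brown_sigma1(sigma3, sigma_ci, mb, s, a):
    """Generalized Hoek-Brown failure criterion:
    sigma1 = sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a
    (compression positive; stresses in MPa)."""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# illustrative intact-rock parameters: sigma_ci = 30 MPa, mb = 10, s = 1, a = 0.5
s1_unconfined = hoek_brown_sigma1(0.0, 30.0, 10.0, 1.0, 0.5)   # equals sigma_ci
s1_confined = hoek_brown_sigma1(2.0, 30.0, 10.0, 1.0, 0.5)     # strength grows with confinement
```

    The curvature of this envelope is the point at issue: a straight Mohr-Coulomb line calibrated at high confinement overestimates strength at the low stress levels relevant to slope failure, which is why the non-linear criterion yields the safer design.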

  14. Criterion for traffic phases in single vehicle data and empirical test of a microscopic three-phase traffic theory

    International Nuclear Information System (INIS)

    Kerner, Boris S; Klenov, Sergey L; Hiller, Andreas

    2006-01-01

    Based on empirical and numerical microscopic analyses, the physical nature of a qualitatively different behaviour of the wide moving jam phase in comparison with the synchronized flow phase—microscopic traffic flow interruption within the wide moving jam phase—is found. A microscopic criterion for distinguishing the synchronized flow and wide moving jam phases in single vehicle data measured at a single freeway location is presented. Based on this criterion, an empirical microscopic classification of different local congested traffic states is performed. Simulations show that the microscopic criterion and the macroscopic spatiotemporal objective criteria lead to the same identification of the synchronized flow and wide moving jam phases in congested traffic. Microscopic models in the context of three-phase traffic theory have been tested against the microscopic criterion for the phases in congested traffic. It is found that microscopic three-phase traffic models can explain both microscopic and macroscopic empirical congested pattern features, and that microscopic frequency distributions for vehicle speed difference, as well as fundamental diagrams and speed correlation functions, can depend considerably on the spatial co-ordinate. It turns out that microscopic optimal velocity (OV) functions and time headway distributions are not necessarily qualitatively different, even if the local congested traffic states are qualitatively different. The reason for this is that important spatiotemporal features of congested traffic patterns are lost in these, as well as in many other macroscopic and microscopic traffic characteristics, which are widely used as the empirical basis for testing traffic flow models, specifically cellular automata traffic flow models

  15. Direct numerical simulations of non-premixed ethylene-air flames: Local flame extinction criterion

    KAUST Repository

    Lecoustre, Vivien R.

    2014-11-01

    Direct Numerical Simulations (DNS) of ethylene/air diffusion flame extinctions in decaying two-dimensional turbulence were performed. A Damköhler-number-based flame extinction criterion as provided by classical large activation energy asymptotic (AEA) theory is assessed for its validity in predicting flame extinction and compared to one based on Chemical Explosive Mode Analysis (CEMA) of the detailed chemistry. The DNS code solves compressible flow conservation equations using high order finite difference and explicit time integration schemes. The ethylene/air chemistry is simulated with a reduced mechanism that is generated based on the directed relation graph (DRG) based methods along with stiffness removal. The numerical configuration is an ethylene fuel strip embedded in ambient air and exposed to a prescribed decaying turbulent flow field. The emphasis of this study is on the several flame extinction events observed in contrived parametric simulations. A modified viscosity and changing pressure (MVCP) scheme was adopted in order to artificially manipulate the probability of flame extinction. Using MVCP, pressure was changed from the baseline case of 1 atm to 0.1 and 10 atm. In the high pressure MVCP case, the simulated flame is extinction-free, whereas in the low pressure MVCP case, the simulated flame features frequent extinction events and is close to global extinction. Results show that, despite its relative simplicity and provided that the global flame activation temperature is correctly calibrated, the AEA-based flame extinction criterion can accurately predict the simulated flame extinction events. It is also found that the AEA-based criterion provides predictions of flame extinction that are consistent with those provided by a CEMA-based criterion. This study supports the validity of a simple Damköhler-number-based criterion to predict flame extinction in engineering-level CFD models. © 2014 The Combustion Institute.

  16. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    Science.gov (United States)

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Orientation selection of equiaxed dendritic growth by three-dimensional cellular automaton model

    Energy Technology Data Exchange (ETDEWEB)

    Wei Lei [State Key Laboratory of Solidification Processing, Northwestern Polytechnical University, Xi'an 710072 (China); Lin Xin, E-mail: xlin@nwpu.edu.cn [State Key Laboratory of Solidification Processing, Northwestern Polytechnical University, Xi'an 710072 (China); Wang Meng; Huang Weidong [State Key Laboratory of Solidification Processing, Northwestern Polytechnical University, Xi'an 710072 (China)

    2012-07-01

    A three-dimensional (3-D) adaptive mesh refinement (AMR) cellular automaton (CA) model is developed to simulate the equiaxed dendritic growth of a pure substance. In order to reduce the mesh-induced anisotropy of the CA capture rules, a limited neighbor solid fraction (LNSF) method is presented. It is shown that the LNSF method reduces the mesh-induced anisotropy, based on the morphologies simulated for isotropic interface free energy. An expansion description using two interface free energy anisotropy parameters (ε1, ε2) is used in the present 3-D CA model. It is illustrated by the present 3-D CA model that positive ε1 favors dendritic growth with the ⟨100⟩ preferred directions, while negative ε2 favors dendritic growth with the ⟨110⟩ preferred directions, in good agreement with the prediction of the spherical plot of the inverse of the interfacial stiffness. Dendritic growth with orientation selection between ⟨100⟩ and ⟨110⟩ is also discussed using different values of ε1 with ε2 = -0.02. It is found that the morphologies simulated by the present CA model are as expected from the minimum stiffness criterion.

  18. Do candidate reactions relate to job performance or affect criterion-related validity? A multistudy investigation of relations among reactions, selection test scores, and job performance.

    Science.gov (United States)

    McCarthy, Julie M; Van Iddekinge, Chad H; Lievens, Filip; Kung, Mei-Chuan; Sinar, Evan F; Campion, Michael A

    2013-09-01

    Considerable evidence suggests that how candidates react to selection procedures can affect their test performance and their attitudes toward the hiring organization (e.g., recommending the firm to others). However, very few studies of candidate reactions have examined one of the outcomes organizations care most about: job performance. We attempt to address this gap by developing and testing a conceptual framework that delineates whether and how candidate reactions might influence job performance. We accomplish this objective using data from 4 studies (total N = 6,480), 6 selection procedures (personality tests, job knowledge tests, cognitive ability tests, work samples, situational judgment tests, and a selection inventory), 5 key candidate reactions (anxiety, motivation, belief in tests, self-efficacy, and procedural justice), 2 contexts (industry and education), 3 continents (North America, South America, and Europe), 2 study designs (predictive and concurrent), and 4 occupational areas (medical, sales, customer service, and technological). Consistent with previous research, candidate reactions were related to test scores, and test scores were related to job performance. Further, there was some evidence that reactions affected performance indirectly through their influence on test scores. Finally, in no cases did candidate reactions affect the prediction of job performance by increasing or decreasing the criterion-related validity of test scores. Implications of these findings and avenues for future research are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved

  19. A Criterion to Identify Maximally Entangled Four-Qubit State

    International Nuclear Information System (INIS)

    Zha Xinwei; Song Haiyang; Feng Feng

    2011-01-01

    Paolo Facchi, et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new maximally entangled four-qubit states as well. (general)

  20. A Joint Optimization Criterion for Blind DS-CDMA Detection

    Directory of Open Access Journals (Sweden)

    Sergio A. Cruces-Alvarez

    2007-01-01

    Full Text Available This paper addresses the problem of the blind detection of a desired user in an asynchronous DS-CDMA communications system with multipath propagation channels. Starting from the inverse filter criterion introduced by Tugnait and Li in 2001, we propose to tackle the problem in the context of the blind signal extraction methods for ICA. In order to improve the performance of the detector, we present a criterion based on the joint optimization of several higher-order statistics of the outputs. An algorithm that optimizes the proposed criterion is described, and its improved performance and robustness with respect to the near-far problem are corroborated through simulations. Additionally, a simulation using measurements on a real software-radio platform at 5 GHz has also been performed.

  1. A Joint Optimization Criterion for Blind DS-CDMA Detection

    Science.gov (United States)

    Durán-Díaz, Iván; Cruces-Alvarez, Sergio A.

    2006-12-01

    This paper addresses the problem of the blind detection of a desired user in an asynchronous DS-CDMA communications system with multipath propagation channels. Starting from the inverse filter criterion introduced by Tugnait and Li in 2001, we propose to tackle the problem in the context of the blind signal extraction methods for ICA. In order to improve the performance of the detector, we present a criterion based on the joint optimization of several higher-order statistics of the outputs. An algorithm that optimizes the proposed criterion is described, and its improved performance and robustness with respect to the near-far problem are corroborated through simulations. Additionally, a simulation using measurements on a real software-radio platform at 5 GHz has also been performed.

  2. Three-Dimensional Dynamic Rupture in Brittle Solids and the Volumetric Strain Criterion

    Science.gov (United States)

    Uenishi, K.; Yamachi, H.

    2017-12-01

    As pointed out by Uenishi (2016 AGU Fall Meeting), the source dynamics of ordinary earthquakes is often studied in the framework of 3D rupture in brittle solids, but our knowledge of the mechanics of actual 3D rupture is limited. Typically, criteria derived from 1D frictional observations of sliding materials or from the post-failure behavior of solids are applied in seismic simulations, and although mode-I cracks are frequently encountered in earthquake-induced ground failures, rupture in tension is in most cases ignored. Even when it is included in analyses, the classical maximum principal tensile stress rupture criterion is repeatedly used. Our recent basic experiments on the dynamic rupture of spherical or cylindrical monolithic brittle solids, loaded by high-voltage electric discharge impulses or impacts, have indicated the generation of surprisingly simple and often flat rupture surfaces in 3D specimens, even without the initial existence of planes of weakness. At the same time, however, the snapshots taken by a high-speed digital video camera have shown rather complicated histories of rupture development in these 3D solid materials, which seem difficult to explain with, for example, the maximum principal stress criterion. Instead, a (tensile) volumetric strain criterion, in which the volumetric strain (dilatation, the first invariant of the strain tensor) is the decisive parameter for rupture, seems more effective in computationally reproducing the multi-directionally propagating waves and rupture. In this study, we try to show the connection between this volumetric strain criterion and other classical rupture criteria or physical parameters employed in continuum mechanics, and indicate that the criterion has, to some degree, a physical meaning. First, we mathematically illustrate that the criterion is equivalent to a criterion based on the mean normal stress, a crucial parameter in plasticity. Then, we mention the relation between the volumetric strain criterion and the

  3. The limits of the Bohm criterion in collisional plasmas

    International Nuclear Information System (INIS)

    Valentini, H.-B.; Kaiser, D.

    2015-01-01

    The sheath formation within a low-pressure collisional plasma is analysed by means of a two-fluid model. The Bohm criterion takes into account the effects of the electric field and the inertia of the ions. Numerical results show that these effects contribute to the space-charge formation only if the collisionality is lower than a relatively small threshold. It follows that a lower and an upper limit of the drift speed of the ions exist between which the effects treated by Bohm can form a sheath. This interval becomes narrower as the collisionality increases and vanishes at the mentioned threshold. Above the threshold, the sheath is mainly created by collisions and ionisation. Under these conditions, the sheath formation cannot be described by means of Bohm-like criteria. In a few references, a so-called upper limit of the Bohm criterion is stated for collisional plasmas in which only the momentum equation of the ions is taken into account. However, the present paper shows that this limit results in an unrealistically steep increase of the space-charge density towards the wall, and therefore it yields no useful limit of the Bohm velocity
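    For reference, in its classical collisionless form the Bohm criterion requires the ion drift speed at the sheath edge to reach the ion sound (Bohm) speed, u ≥ c_s = sqrt(k_B·T_e/m_i). A quick evaluation for illustrative parameters (a 2 eV argon discharge); the paper's point is that this simple form only governs sheath formation below the collisionality threshold:

```python
import math

E_CHARGE = 1.602176634e-19   # C (also joules per eV)
AMU = 1.66053906660e-27      # kg

def bohm_speed(te_ev, ion_mass_amu):
    """Ion sound (Bohm) speed c_s = sqrt(k_B * T_e / m_i), with T_e in eV
    so that k_B * T_e = e * te_ev joules."""
    return math.sqrt(te_ev * E_CHARGE / (ion_mass_amu * AMU))

cs_argon = bohm_speed(2.0, 39.948)   # roughly 2.2 km/s
```

    The scaling sqrt(T_e/m_i) makes the cold-ion Bohm speed easy to tabulate per gas; the two-fluid analysis above replaces this single threshold with a drift-speed interval that shrinks as collisionality grows.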

  4. Sensitive criterion for chirality; Chiral doublet bands in 104Rh59

    International Nuclear Information System (INIS)

    Koike, T.; Starosta, K.; Vaman, C.; Ahn, T.; Fossan, D.B.; Clark, R.M.; Cromaz, M.; Lee, I.Y.; Macchiavelli, A.O.

    2003-01-01

    A particle plus triaxial rotor model was applied to odd-odd nuclei in the A ∼ 130 region in order to study the unique parity πh11/2 × νh11/2 rotational bands. With maximum triaxiality assumed and the intermediate axis chosen as the quantization axis for the model calculations, the two lowest energy eigenstates of a given spin have chiral properties. The independence of the quantity S(I) of spin can be used as a new criterion for chirality. In addition, a diminishing staggering amplitude of S(I) with increasing spin implies triaxiality in neighboring odd-A nuclei. Chiral quartet bases were constructed specifically to examine electromagnetic properties of chiral structures. A set of selection rules unique to chirality was derived. Doublet bands built on the πg9/2 × νh11/2 configuration have been discovered in odd-odd 104Rh using the 96Zr(11B, 3n) reaction. Based on the discussed criteria for chirality, it is concluded that the doublet bands observed in 104Rh exhibit characteristic chiral properties, suggesting a new region of chirality around A ∼ 110. In addition, magnetic moment measurements have been performed to test the πh11/2 × νh11/2 configuration in 128Cs and the πg9/2 × νh11/2 configuration in 104Rh

  5. Assessing the factor structure of posttraumatic stress disorder symptoms in war-exposed youths with and without Criterion A2 endorsement.

    Science.gov (United States)

    Armour, Cherie; Layne, Christopher M; Naifeh, James A; Shevlin, Mark; Duraković-Belko, Elvira; Djapo, Nermin; Pynoos, Robert S; Elhai, Jon D

    2011-01-01

    Posttraumatic stress disorder's (PTSD) tripartite factor structure proposed by the DSM-IV is rarely empirically supported. Other four-factor models (King et al., 1998; Simms et al., 2002) have proven to better account for PTSD's latent structure; however, results regarding model superiority are conflicting. The current study assessed whether endorsement of PTSD's Criterion A2 would impact on the factorial invariance of the King et al. (1998) model. Participants were 1572 war-exposed Bosnian secondary students who were assessed two years following the 1992-1995 Bosnian conflict. The sample was grouped by those endorsing both parts of the DSM-IV Criterion A (A2 Group) and those endorsing only A1 (Non-A2 Group). The factorial invariance of the King et al. (1998) model was not supported between the A2 vs. Non-A2 Groups; rather, the groups significantly differed on all model parameters. The impact of removing A2 on the factor structure of King et al. (1998) PTSD model is discussed in light of the proposed removal of Criterion A2 for the DSM-V. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Sampling Criterion for EMC Near Field Measurements

    DEFF Research Database (Denmark)

    Franek, Ondrej; Sørensen, Morten; Ebert, Hans

    2012-01-01

    An alternative, quasi-empirical sampling criterion for EMC near field measurements intended for close coupling investigations is proposed. The criterion is based on the maximum error caused by sub-optimal sampling of near fields in the vicinity of an elementary dipole, which is suggested as a worst-case representative of a signal trace on a typical printed circuit board. It has been found that the sampling density derived in this way is in fact very similar to that given by the antenna near field sampling theorem, if an error less than 1 dB is required. The principal advantage of the proposed formulation is its...

  7. Automated criterion-based analysis for Cole parameters assessment from cerebral neonatal electrical bioimpedance spectroscopy measurements

    International Nuclear Information System (INIS)

    Seoane, F; Lindecrantz, Kaj; Ward, L C; Lingwood, B E

    2012-01-01

    Hypothermia has been proven as an effective rescue therapy for infants with moderate or severe neonatal hypoxic ischemic encephalopathy. Hypoxia-ischemia alters the electrical impedance characteristics of the brain in neonates; therefore, spectroscopic analysis of the cerebral bioimpedance of the neonate may be useful for the detection of candidate neonates eligible for hypothermia treatment. Currently, in addition to the lack of reference bioimpedance data obtained from healthy neonates, there is no standardized approach established for bioimpedance spectroscopy data analysis. In this work, cerebral bioimpedance measurements (12 h postpartum) in a cross-section of 84 term and near-term healthy neonates were performed at the bedside in the post-natal ward. To characterize the impedance spectra, Cole parameters (R0, R∞, fC and α) were extracted from the obtained measurements using an analysis process based on a best-measurement and highest-likelihood selection process. The results obtained in this study complement previously reported work and provide a standardized criterion-based method for data analysis. The availability of electrical bioimpedance spectroscopy reference data and the automatic criterion-based analysis method might support the development of a non-invasive method for prompt selection of neonates eligible for cerebral hypothermic rescue therapy. (paper)
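The Cole parameters named in this record (R0, R∞, fC, α) define a standard single-dispersion impedance model. A minimal sketch, with hypothetical parameter values rather than reference data from the study:

```python
def cole_impedance(f, R0, Rinf, fc, alpha):
    """Single-dispersion Cole model: Z(f) = Rinf + (R0 - Rinf)/(1 + (j*f/fc)**alpha).

    R0 and Rinf are the zero- and infinite-frequency resistances, fc the
    characteristic frequency and alpha the dispersion exponent."""
    return Rinf + (R0 - Rinf) / (1 + (1j * f / fc) ** alpha)

# Hypothetical parameter values (illustration only):
R0, Rinf, fc, alpha = 80.0, 30.0, 50e3, 0.8

# |Z| falls monotonically from about R0 at low frequency toward Rinf:
for f in (1e3, 50e3, 1e6):
    print(f, round(abs(cole_impedance(f, R0, Rinf, fc, alpha)), 2))
```

Fitting these four parameters to measured spectra (e.g. by least squares) is the step that the record's criterion-based analysis automates.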

  8. Description of a developmental criterion-referenced assessment for promoting competence in internal medicine residents.

    Science.gov (United States)

    Varney, Andrew; Todd, Christine; Hingle, Susan; Clark, Michael

    2009-09-01

    End-of-rotation global evaluations can be subjective, produce inflated grades, lack interrater reliability, and offer information that lacks value. This article outlines the generation of a unique developmental criterion-referenced assessment that applies adult learning theory and the learner, manager, teacher model, and represents an innovative application of the American Board of Internal Medicine (ABIM) 9-point scale. We describe the process used by Southern Illinois University School of Medicine to develop rotation-specific, criterion-based evaluation anchors that evolved into an effective faculty development exercise. The intervention gave faculty a clearer understanding of the 6 Accreditation Council for Graduate Medical Education competencies, each rotation's educational goals, and how rotation design affects meaningful work-based assessment. We also describe easily attainable successes in evaluation design and pitfalls that other institutions may be able to avoid. Shifting the evaluation emphasis to the residents' development of competence has made the expectations of rotation faculty more transparent, has facilitated conversations between program director and residents, and has improved the specificity of the tool for feedback. Our findings showed the new approach reduced grade inflation compared with the ABIM end-of-rotation global evaluation form. We offer the new developmental criterion-referenced assessment as a unique application of the competencies to the ABIM 9-point scale and a transferable model for improving the validity and reliability of resident evaluations across graduate medical education programs.

  9. Characteristics of Criteria for Selecting Investment Projects under Uncertainty

    Directory of Open Access Journals (Sweden)

    Adrian ENCIU

    2011-07-01

    Within financial theory and practice, five main criteria are used for selecting investment projects: the net present value (NPV) criterion, the internal rate of return (IRR) criterion, the return term (RT) criterion, the profitability ratio (PR) criterion and the supplementary return (SR) criterion. This essay emphasizes several new properties of these investment-assessment indexes, taking as its starting point the hypothesis of (approximately) normal distribution of the cash flows generated by an investment project. The results obtained show that the NPV index (the analysis of this criterion was carried out in the article “The NPV Criterion for Valuing Investments under Uncertainty”, Daniel Armeanu, Leonard Lache, Economic Computation and Economic Cybernetics Studies and Research no. 4/2009, pp. 133-143), together with the IRR, PR, RT and SR indexes, follows a normal distribution, thereby simplifying investment analysis under economic uncertainty: confidence intervals can be built and probabilities assessed for the inferior limits of these investment-assessment indexes.
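The practical payoff claimed here, confidence intervals for normally distributed assessment indexes, can be sketched for the NPV case. A minimal illustration with hypothetical cash flows, assuming independent, normally distributed per-period flows:

```python
from statistics import NormalDist

# Hypothetical project: expected cash flows and standard deviations per
# period (t = 0..3), assumed independent and normal; discount rate r.
means = [-1000.0, 400.0, 450.0, 500.0]
sds = [0.0, 60.0, 70.0, 80.0]
r = 0.10

# A sum of independent normals is normal:
# E[NPV] = sum mu_t/(1+r)^t,  Var[NPV] = sum sigma_t^2/(1+r)^(2t)
mean_npv = sum(m / (1 + r) ** t for t, m in enumerate(means))
var_npv = sum(s ** 2 / (1 + r) ** (2 * t) for t, s in enumerate(sds))
npv = NormalDist(mean_npv, var_npv ** 0.5)

lo, hi = npv.inv_cdf(0.025), npv.inv_cdf(0.975)   # 95% confidence interval
p_loss = npv.cdf(0.0)                             # probability NPV < 0
print(round(mean_npv, 2), (round(lo, 2), round(hi, 2)), round(p_loss, 3))
```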

  10. Bayesian Comparison of Alternative Graded Response Models for Performance Assessment Applications

    Science.gov (United States)

    Zhu, Xiaowen; Stone, Clement A.

    2012-01-01

    This study examined the relative effectiveness of Bayesian model comparison methods in selecting an appropriate graded response (GR) model for performance assessment applications. Three popular methods were considered: deviance information criterion (DIC), conditional predictive ordinate (CPO), and posterior predictive model checking (PPMC). Using…
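Of the three methods compared, DIC is the simplest to state. A minimal sketch with hypothetical MCMC deviance draws (not values from the study):

```python
def dic(deviance_samples, deviance_at_posterior_mean):
    """Deviance information criterion: DIC = D_bar + pD, where
    pD = D_bar - D(theta_bar) is the effective number of parameters.
    Lower DIC indicates the preferred model."""
    d_bar = sum(deviance_samples) / len(deviance_samples)
    p_d = d_bar - deviance_at_posterior_mean
    return d_bar + p_d

# Hypothetical posterior deviance draws for two competing GR models:
dic_a = dic([210.0, 214.0, 212.0, 216.0], 208.0)
dic_b = dic([220.0, 224.0, 222.0, 226.0], 221.0)
print(dic_a, dic_b)   # model A is preferred here (lower DIC)
```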

  11. Model selection in periodic autoregressions

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1994-01-01

    This paper focuses on the issue of periodic autoregressive (PAR) time series model selection in practice. One aspect of model selection is the choice of the appropriate PAR order. This can be of interest for the evaluation of economic models. Further, the appropriate PAR order is important

  12. A new LP formulation of the admission control problem modelled as an MDP under average reward criterion

    Science.gov (United States)

    Pietrabissa, Antonio

    2011-12-01

    The admission control problem can be modelled as a Markov decision process (MDP) under the average cost criterion and formulated as a linear programming (LP) problem. The LP formulation is attractive in the present and future communication networks, which support an increasing number of classes of service, since it can be used to explicitly control class-level requirements, such as class blocking probabilities. On the other hand, the LP formulation suffers from scalability problems as the number C of classes increases. This article proposes a new LP formulation, which, even if it does not introduce any approximation, is much more scalable: the problem size reduction with respect to the standard LP formulation is O((C + 1)^2/2^C). Theoretical and numerical simulation results prove the effectiveness of the proposed approach.
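For reference, the baseline that such reformulations compete against is the standard average-cost MDP linear program over stationary state–action frequencies x(s, a) (generic notation, not the article's reduced formulation):

```latex
\begin{aligned}
\min_{x \ge 0}\quad & \sum_{s,a} c(s,a)\,x(s,a)\\
\text{s.t.}\quad & \sum_{a} x(j,a) \;-\; \sum_{s,a} p(j \mid s,a)\,x(s,a) \;=\; 0 \qquad \forall j,\\
& \sum_{s,a} x(s,a) \;=\; 1,
\end{aligned}
```

The optimal stationary policy randomizes in state s in proportion to x(s, a); quantities such as class blocking probabilities are linear in x, which is why class-level constraints can be imposed directly in the LP.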

  13. Interface Pattern Selection in Directional Solidification

    Science.gov (United States)

    Trivedi, Rohit; Tewari, Surendra N.

    2001-01-01

    The central focus of this research is to establish key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. Ground-based studies have established that the conditions under which cellular and dendritic microstructures form are precisely those where convection effects are dominant in bulk samples. Thus, experimental data cannot be obtained terrestrially under a pure diffusive regime. Furthermore, reliable theoretical models are not yet available which can quantitatively incorporate fluid flow in the pattern selection criterion. Consequently, microgravity experiments on cellular and dendritic growth are designed to obtain benchmark data under diffusive growth conditions that can be quantitatively analyzed and compared with a rigorous theoretical model to establish the fundamental principles that govern the selection of a specific microstructure and its length scales. In the cellular structure, different cells in an array are strongly coupled, so that the cellular pattern evolution is controlled by complex interactions between thermal diffusion, solute diffusion and interface effects. These interactions yield an infinity of solutions, of which the system selects only a narrow band. The aim of this investigation is to obtain benchmark data and develop a rigorous theoretical model that will allow us to quantitatively establish the physics of this selection process.

  14. Assessing stress-related treatment needs among girls at risk for poor functional outcomes: The impact of cumulative adversity, criterion traumas, and non-criterion events.

    Science.gov (United States)

    Lansing, Amy E; Plante, Wendy Y; Beck, Audrey N

    2017-05-01

    Despite growing recognition that cumulative adversity (total stressor exposure, including complex trauma) increases the risk for psychopathology and impacts development, assessment strategies lag behind: adversity-related mental health needs (symptoms, functional impairment, maladaptive coping) are typically assessed in response to only one qualifying Criterion-A traumatic event. This is especially problematic for youth at risk for health and academic disparities who experience cumulative adversity, including non-qualifying events (separation from caregivers) which may produce more impairing symptomatology. Data from 118 delinquent girls demonstrate: (1) an average of 14 adverse Criterion-A and non-Criterion event exposures; (2) serious maladaptive coping strategies (self-injury) directly in response to cumulative adversity; (3) more cumulative adversity-related than worst-event-related symptomatology and functional impairment; and (4) comparable symptomatology, but greater functional impairment, in response to non-Criterion events. These data support the evaluation of mental health needs in response to cumulative adversity for optimal identification and tailoring of services in high-risk populations to reduce disparities. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Theoretical modeling of CHF for near-saturated pool boiling and flow boiling from short heaters using the interfacial lift-off criterion

    International Nuclear Information System (INIS)

    Mudawar, I.; Galloway, J.E.; Gersey, C.O.

    1995-01-01

    Pool boiling and flow boiling were examined for near-saturated bulk conditions in order to determine the critical heat flux (CHF) trigger mechanism for each. Photographic studies of the wall region revealed features common to both situations. At fluxes below CHF, the vapor coalesces into a wavy layer which permits wetting only in wetting fronts, the portions of the liquid-vapor interface which contact the wall as a result of the interfacial waviness. Close examination of the interfacial features revealed the waves are generated from the lower edge of the heater in pool boiling and the heater's upstream region in flow boiling. Wavelengths follow predictions based upon the Kelvin-Helmholtz instability criterion. Critical heat flux in both cases occurs when the pressure force exerted upon the interface due to interfacial curvature, which tends to preserve interfacial contact with the wall prior to CHF, is overcome by the momentum of vapor at the site of the first wetting front, causing the interface to lift away from the wall. It is shown that this interfacial lift-off criterion facilitates accurate theoretical modeling of CHF in pool boiling and in flow boiling in both straight and curved channels.

  16. Theoretical modeling of CHF for near-saturated pool boiling and flow boiling from short heaters using the interfacial lift-off criterion

    Energy Technology Data Exchange (ETDEWEB)

    Mudawar, I.; Galloway, J.E.; Gersey, C.O. [Purdue Univ., West Lafayette, IN (United States)] [and others]

    1995-12-31

    Pool boiling and flow boiling were examined for near-saturated bulk conditions in order to determine the critical heat flux (CHF) trigger mechanism for each. Photographic studies of the wall region revealed features common to both situations. At fluxes below CHF, the vapor coalesces into a wavy layer which permits wetting only in wetting fronts, the portions of the liquid-vapor interface which contact the wall as a result of the interfacial waviness. Close examination of the interfacial features revealed the waves are generated from the lower edge of the heater in pool boiling and the heater's upstream region in flow boiling. Wavelengths follow predictions based upon the Kelvin-Helmholtz instability criterion. Critical heat flux in both cases occurs when the pressure force exerted upon the interface due to interfacial curvature, which tends to preserve interfacial contact with the wall prior to CHF, is overcome by the momentum of vapor at the site of the first wetting front, causing the interface to lift away from the wall. It is shown that this interfacial lift-off criterion facilitates accurate theoretical modeling of CHF in pool boiling and in flow boiling in both straight and curved channels.

  17. Improved Robust Stability Criterion of Networked Control Systems with Transmission Delays and Packet Loss

    Directory of Open Access Journals (Sweden)

    Shenping Xiao

    2014-01-01

    The problem of stability analysis for a class of networked control systems (NCSs) with network-induced delay and packet dropout is investigated in this paper. Based on the working mechanism of the zero-order hold, the closed-loop NCS is modeled as a continuous-time linear system with input delay. By introducing a novel Lyapunov-Krasovskii functional, which splits both the lower and upper bounds of the delay into two subintervals and utilizes the reciprocally convex combination technique, a new stability criterion is derived in terms of linear matrix inequalities. Compared with previous results in the literature, the obtained stability criterion is less conservative. Numerical examples demonstrate the validity and feasibility of the proposed method.

  18. Criterion for the engineering performance of carbon materials under neutron irradiation

    International Nuclear Information System (INIS)

    Virgil'ev, Yu.S.

    2002-01-01

    A criterion for engineering performance, and a substantiation of its applicability to reactor graphite, are proposed. A composite indicator, the ratio of the compressive to the bending strength limit, is proposed as this criterion characterizing graphite quality. Growth of this indicator points to the accumulation of large heterogeneities (microcracks) of technological or radiation origin; its decrease testifies to the growth of small heterogeneities and consequently to an increase in the graphite's engineering performance.

  19. Entanglement criterion for tripartite systems based on local sum uncertainty relations

    Science.gov (United States)

    Akbari-Kourbolagh, Y.; Azhdargalam, M.

    2018-04-01

    We propose a sufficient criterion for the entanglement of tripartite systems based on local sum uncertainty relations for arbitrarily chosen observables of subsystems. This criterion generalizes the tighter criterion for bipartite systems introduced by Zhang et al. [C.-J. Zhang, H. Nha, Y.-S. Zhang, and G.-C. Guo, Phys. Rev. A 81, 012324 (2010), 10.1103/PhysRevA.81.012324] and can be used for both discrete- and continuous-variable systems. It enables us to detect the entanglement of quantum states without having a complete knowledge of them. Its utility is illustrated by some examples of three-qubit, qutrit-qutrit-qubit, and three-mode Gaussian states. It is found that, in comparison with other criteria, this criterion is able to detect some three-qubit bound entangled states more efficiently.

  20. Mercier criterion for high-β tokamaks

    International Nuclear Information System (INIS)

    Galvao, R.M.O.

    1984-01-01

    An expression for the application of the Mercier criterion to numerical studies of diffuse high-β tokamaks (β ∼ ε, q ∼ 1) is derived which contains only leading-order contributions in the high-β tokamak approximation. (L.C.)

  1. Electronic Devices, Methods, and Computer Program Products for Selecting an Antenna Element Based on a Wireless Communication Performance Criterion

    DEFF Research Database (Denmark)

    2014-01-01

    A method of operating an electronic device includes providing a plurality of antenna elements, evaluating a wireless communication performance criterion to obtain a performance evaluation, and assigning a first one of the plurality of antenna elements to a main wireless signal reception and transmission path and a second one of the plurality of antenna elements to a diversity wireless signal reception path based on the performance evaluation.

  2. Band Subset Selection for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Chunyan Yu

    2018-01-01

    This paper develops a new approach to band subset selection (BSS) for hyperspectral image classification (HSIC), which selects multiple bands simultaneously as a band subset, referred to as simultaneous multiple band selection (SMMBS), rather than one band at a time sequentially, referred to as sequential multiple band selection (SQMBS), as most traditional band selection methods do. In doing so, a criterion is developed for BSS that can be used for HSIC: a linearly constrained minimum variance (LCMV) criterion derived from adaptive beamforming in array signal processing, which can be used to model misclassification errors as the minimum variance. To avoid an exhaustive search over all possible band subsets, two numerical algorithms, referred to as sequential (SQ) and successive (SC) algorithms, are also developed for LCMV-based SMMBS, called SQ LCMV-BSS and SC LCMV-BSS. Experimental results demonstrate that LCMV-based BSS has advantages over SQMBS.
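The LCMV idea borrowed from beamforming minimizes output variance subject to linear gain constraints. A dependency-free sketch of the single-constraint case, w = R⁻¹s/(sᵀR⁻¹s), with hypothetical numbers (the paper applies the same algebra to band-subset selection):

```python
# Minimum-variance weights with one unit-gain constraint:
# minimize w^T R w subject to s^T w = 1, giving w = R^-1 s / (s^T R^-1 s).
# Hypothetical 2x2 covariance R and constraint vector s; the inverse is
# written out explicitly to keep the sketch dependency-free.

R = [[2.0, 0.5],
     [0.5, 1.0]]
s = [1.0, 1.0]

det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
R_inv = [[ R[1][1] / det, -R[0][1] / det],
         [-R[1][0] / det,  R[0][0] / det]]

Rinv_s = [R_inv[0][0] * s[0] + R_inv[0][1] * s[1],
          R_inv[1][0] * s[0] + R_inv[1][1] * s[1]]
denom = s[0] * Rinv_s[0] + s[1] * Rinv_s[1]
w = [Rinv_s[0] / denom, Rinv_s[1] / denom]

print(w)  # the constraint s^T w = 1 holds by construction
```

The minimized output variance is 1/(sᵀR⁻¹s), which is the quantity the paper's criterion uses to model misclassification error.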

  3. Study on the quantitative rod internal pressure design criterion

    International Nuclear Information System (INIS)

    Kim, Kyu Tae; Kim, Oh Hwan; Han, Hee Tak

    1991-01-01

    The current rod internal pressure criterion permits fuel rods to operate with internal pressures in excess of system pressure only if the internal overpressure does not cause diametral gap enlargement. In this study, the generic allowable internal gas pressure not violating this criterion is estimated as a function of rod power. The results show that the generic allowable internal gas pressure decreases linearly with increasing rod power. Applying the generic allowable internal gas pressure as the rod internal pressure design criterion will simplify the current design procedure for checking the diametral gap enlargement caused by internal overpressure, because under the current procedure the cladding creep-out rate must be compared with the fuel swelling rate at each axial node at each time step whenever internal pressure exceeds the system pressure. (Author)

  4. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
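The AIC test used to choose between the single- and double-source solutions trades goodness of fit against parameter count. A minimal sketch with hypothetical log-likelihoods (not values from the study):

```python
def aic(k, log_likelihood):
    """Akaike information criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical fits: the double point source (twice the parameters) must
# raise the likelihood enough to justify its extra complexity.
single = aic(k=10, log_likelihood=-120.0)
double = aic(k=20, log_likelihood=-100.0)
best = "double" if double < single else "single"
print(single, double, best)
```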

  5. Effects of spatial and selective attention on basic multisensory integration

    DEFF Research Database (Denmark)

    Gondan, Matthias; Blurton, Steven Paul; Hughes, F.

    2011-01-01

    underlying the RSE. We investigated the role of spatial and selective attention on the RSE in audiovisual redundant signals tasks. In Experiment 1, stimuli were presented either centrally (narrow attentional focus) or at 1 of 3 unpredictable locations (wide focus). The RSE was accurately described by a coactivation model assuming linear superposition of modality-specific activation. Effects of spatial attention were explained by a shift of the evidence criterion. In Experiment 2, stimuli were presented at 3 locations; participants had to respond either to all signals regardless of location (simple response task) or to central stimuli only (selective attention task). The RSE was consistent with task-specific coactivation models; accumulation of evidence, however, differed between the 2 tasks.
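For context, the standard diagnostic separating coactivation accounts from race models in redundant-signals work is Miller's race-model inequality (a general tool of this literature, not this study's specific fit): any race model requires P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V) at every t, and violations are taken as evidence for coactivation. A sketch with hypothetical CDF values:

```python
def violates_race_bound(F_av, F_a, F_v):
    """Check empirical CDFs sampled on a common time grid against
    Miller's bound; True means the race-model inequality is violated."""
    return any(av > a + v + 1e-12 for av, a, v in zip(F_av, F_a, F_v))

# Hypothetical CDF values on a common grid of response times:
F_a  = [0.05, 0.20, 0.50, 0.80]   # auditory alone
F_v  = [0.04, 0.15, 0.45, 0.75]   # visual alone
F_av = [0.12, 0.40, 0.85, 0.99]   # redundant audiovisual

print(violates_race_bound(F_av, F_a, F_v))
```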

  6. Experimental Investigation of the Peak Shear Strength Criterion Based on Three-Dimensional Surface Description

    Science.gov (United States)

    Liu, Quansheng; Tian, Yongchao; Ji, Peiqi; Ma, Hao

    2018-04-01

    The three-dimensional (3D) morphology of joints is enormously important for the shear mechanical properties of rock. In this study, three-dimensional morphology scanning tests and direct shear tests are conducted to establish a new peak shear strength criterion. The test results show that (1) surface morphology and normal stress exert significant effects on peak shear strength and the distribution of the damage area, and (2) the damage area is located at the steepest zone facing the shear direction; as the normal stress increases, it extends from the steepest zone toward less steep zones. Via mechanical analysis, a new formula for the apparent dip angle is developed. The influences of the apparent dip angle and the average joint height on the potential contact area are discussed in turn. A new peak shear strength criterion, mainly applicable to specimens under compression, is established by using new roughness parameters and taking the effects of normal stress and the rock mechanical properties into account. A comparison of this newly established model with the JRC-JCS model and Grasselli's model shows that the new one noticeably improves the fitting performance. Compared with earlier models, the new model is simpler and more precise. All the parameters in the new model have clear physical meanings and can be directly determined from the scanned data. In addition, the indexes used in the new model are more rational.

  7. A theoretical derivation of the Hoek–Brown failure criterion for rock materials

    Directory of Open Access Journals (Sweden)

    Jianping Zuo

    2015-08-01

    This study uses a three-dimensional crack model to theoretically derive the Hoek–Brown rock failure criterion based on linear elastic fracture theory. Specifically, we argue that a failure characteristic factor needs to exceed a critical value when macro-failure occurs. This factor is a product of the micro-failure orientation angle (characterizing the density and orientation of damaged micro-cracks) and the changing rate of the angle with respect to the major principal stress (characterizing the microscopic stability of damaged cracks). We further demonstrate that the factor mathematically leads to the empirical Hoek–Brown rock failure criterion. Thus, the proposed factor successfully relates the evolution of microscopic damaged-crack characteristics to macro-failure. Based on this theoretical development, we also propose a quantitative relationship between the brittle–ductile transition point and confining pressure, which is consistent with experimental observations.
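The criterion being derived has the familiar empirical form σ₁ = σ₃ + σci·(mb·σ₃/σci + s)^a. A minimal evaluation sketch with hypothetical material parameters (not values from the paper):

```python
def hoek_brown_sigma1(sigma3, sigma_ci, mb, s, a):
    """Generalized Hoek-Brown peak strength (compression positive):
    sigma1 = sigma3 + sigma_ci * (mb*sigma3/sigma_ci + s)**a."""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Illustrative intact-rock parameters (hypothetical): for intact rock
# s = 1 and a = 0.5, so at zero confinement sigma1 equals sigma_ci.
sigma_ci, mb, s, a = 100.0, 10.0, 1.0, 0.5

for sigma3 in (0.0, 5.0, 10.0):
    print(sigma3, round(hoek_brown_sigma1(sigma3, sigma_ci, mb, s, a), 2))
```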

  8. Generalized Majority Logic Criterion to Analyze the Statistical Strength of S-Boxes

    Science.gov (United States)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-05-01

    The majority logic criterion is applicable in the evaluation process of substitution boxes used in the advanced encryption standard (AES). The performance of modified or advanced substitution boxes is predicted by processing the results of statistical analysis by the majority logic criteria. In this paper, we use the majority logic criteria to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, the majority logic criterion is applied to AES, affine power affine (APA), Gray, Lui J, residue prime, S8 AES, Skipjack, and Xyi substitution boxes. The majority logic criterion is further extended into a generalized majority logic criterion which has a broader spectrum of analyzing the effectiveness of substitution boxes in image encryption applications. The integral components of the statistical analyses used for the generalized majority logic criterion are derived from results of entropy analysis, contrast analysis, correlation analysis, homogeneity analysis, energy analysis, and mean of absolute deviation (MAD) analysis.
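One integral component, entropy analysis, is easy to make concrete: the closer a cipher image's byte entropy is to the 8-bit maximum, the better the S-box scores on this component. A minimal sketch on synthetic data (not the paper's images):

```python
from math import log2
from collections import Counter

def byte_entropy(data):
    """Shannon entropy H = -sum p_i * log2(p_i) of a byte sequence, in bits.
    A well-encrypted 8-bit image should approach the maximum of 8 bits."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A uniform byte histogram attains the 8-bit maximum:
uniform = bytes(range(256)) * 4
print(byte_entropy(uniform))          # 8.0

# A constant image carries zero bits per byte:
print(byte_entropy(bytes(1024)))
```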

  9. Conflicting views on a neutrality criterion for radioactive-waste management

    International Nuclear Information System (INIS)

    Cochran, T.B.; Bodde, D.L.

    1983-01-01

    Two essays are presented by authors who agree that risks imposed on future generations through the management of radioactive waste are acceptable if they meet a criterion of neutrality, but who disagree on the interpretation of the neutrality criterion. The first viewpoint argues that acceptable isolation of high-level radioactive waste is yet to be accomplished and that a fundamental criterion for radioactive waste disposal must include consideration of the intergenerational radiation effects. The second essay promotes balanced resource allocation, technological progress, and adequate problem-solving institutions as the solution to the problem. The debate illustrates the complexities involved in applying philosophical principles to public policy, even when the principles have been agreed upon. 23 references, 1 figure

  10. Application of the Beneish (1999) and Altman (2000) Models in Detecting Financial Statement Fraud

    Directory of Open Access Journals (Sweden)

    Rima Novi Kartikasari

    2010-08-01

    Financial fraud is costly, and it can be committed by almost anyone within (and outside of) an organization. Prevention through early detection of fraud is an important way to reduce it. The objective of this research is to test whether certain models can be used to detect financial statement fraud. This study uses the Beneish (1999) and Altman (2000) models to detect financial statement fraud. Two samples meeting pre-determined criteria were selected and examined. The results of the study show that these models can be used to detect financial statement fraud.
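Of the two models, the Altman Z-score is the more compact to state. A sketch using the classic 1968 coefficients for public manufacturing firms (Altman (2000) re-estimates variants of these; the balance-sheet figures below are hypothetical):

```python
def altman_z(wc, re, ebit, mve, sales, ta, tl):
    """Classic Altman (1968) Z-score for public manufacturing firms:
    Z = 1.2*WC/TA + 1.4*RE/TA + 3.3*EBIT/TA + 0.6*MVE/TL + 1.0*Sales/TA."""
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * mve / tl + 1.0 * sales / ta)

def zone(z):
    # Conventional cut-offs for the original model.
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "distress"
    return "grey"

# Hypothetical firm (working capital, retained earnings, EBIT, market
# value of equity, sales, total assets, total liabilities):
z = altman_z(wc=150, re=300, ebit=120, mve=800, sales=1100, ta=1000, tl=600)
print(round(z, 3), zone(z))
```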

  11. Elitism, Sharing and Ranking Choices in Evolutionary Multi-Criterion Optimisation

    OpenAIRE

    Purshouse, R.C.; Fleming, P.J.

    2002-01-01

    Elitism and sharing are two mechanisms that are believed to improve the performance of an evolutionary multi-criterion optimiser. The relative performance of the two most popular ranking strategies is largely unknown. Using a new empirical inquiry framework, this report studies the effect of elitism, sharing and ranking design choices using a benchmark suite of two-criterion problems.

  12. Validation of a Criterion for Cam Mechanisms Optimization Using Constraints upon Cam’s Curvature

    Directory of Open Access Journals (Sweden)

    Stelian Alaci

    2016-06-01

    For a mechanism with a rotating cam and knife-edge follower, an optimization criterion based on constraints imposed upon the cam's curvature is expressed in a special coordinate system. By stating the optimization criterion in the coordinate system defined by the mechanism's constructive parameters (eccentricity and minimum follower stroke), a contour is obtained for any position of the mechanism. The optimization criterion consists in establishing the position of the characteristic point of the mechanism with respect to this contour. The criterion is fulfilled when the characteristic point is positioned in the same manner with respect to all contours, and it simplifies when the envelope of the contours is considered. The method is exemplified using two mechanisms whose cams a priori satisfy the criterion.

  13. Criterion and Divergent Validity of the Sexual Minority Adolescent Stress Inventory

    Directory of Open Access Journals (Sweden)

    Jeremy T. Goldbach

    2017-11-01

    Sexual minority adolescents (SMA) consistently report health disparities compared to their heterosexual counterparts, yet the underlying mechanisms of these negative health outcomes remain unclear. The predominant explanatory model is the minority stress theory; however, this model was developed largely with adults, and no valid and comprehensive measure of minority stress has been developed for adolescents. The present study validated a newly developed instrument to measure minority stress among racially and ethnically diverse SMA. A sample of 346 SMA aged 14–17 was recruited and surveyed between February 2015 and July 2016. The focal measure of interest was the 64-item, 11-factor Sexual Minority Adolescent Stress Inventory (SMASI) developed in the initial phase of this study. Criterion validation measures included measures of depressive symptoms, suicidality and self-harm, youth problem behaviors, and substance use; the general Adolescent Stress Questionnaire (ASQ) was included as a measure of divergent validity. Analyses included Pearson and tetrachoric correlations to establish criterion and divergent validity and structural equation modeling to assess the explanatory utility of the SMASI relative to the ASQ. SMASI scores were significantly associated with all outcomes but only moderately associated with the ASQ (r = −0.13 to 0.51). Analyses revealed significant associations of a latent minority stress variable with both proximal and distal health outcomes beyond the variation explained by general stress. Results show that the SMASI is the first instrument to validly measure minority stress among SMA.

  14. Developing a Green Supplier Selection Model by Using the DANP with VIKOR

    Directory of Open Access Journals (Sweden)

    Tsai Chi Kuo

    2015-02-01

    Full Text Available This study proposes a novel hybrid multiple-criteria decision-making (MCDM) method to evaluate green suppliers in an electronics company. Seventeen criteria in two dimensions concerning environmental and management systems were identified under the Code of Conduct of the Electronic Industry Citizenship Coalition (EICC). Following this, the Decision-Making Trial and Evaluation Laboratory (DEMATEL)-based Analytic Network Process (ANP) method (known as DANP) was used to determine both the importance of the evaluation criteria in selecting suppliers and the causal relationships between them. Finally, the VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method was used to evaluate the environmental performances of suppliers and to obtain a solution under each evaluation criterion. An illustrative example of an electronics company was presented to demonstrate how to select green suppliers.

  15. A uniqueness criterion for the Fock quantization of scalar fields with time-dependent mass

    International Nuclear Information System (INIS)

    Cortez, Jeronimo; Mena Marugan, Guillermo A; Olmedo, Javier; Velhinho, Jose M

    2011-01-01

    A major problem in the quantization of fields in curved spacetimes is the ambiguity in the choice of a Fock representation for the canonical commutation relations: there exist infinitely many choices leading to different physical predictions. In stationary scenarios, a common strategy is to select a vacuum (or a family of unitarily equivalent vacua) by requiring invariance under the spacetime symmetries. When stationarity is lost, a natural generalization consists in replacing time invariance by unitarity of the evolution. We prove that, when the spatial sections are compact, the criterion of a unitary dynamics, together with invariance under the spatial isometries, suffices to select a unique family of Fock quantizations for a scalar field with time-dependent mass. (fast track communication)

  16. Shrinkage Porosity Criterion and Its Application to A 5.5 Ton Steel Ingot

    Directory of Open Access Journals (Sweden)

    Zhang C.

    2016-06-01

    Full Text Available In order to predict the distribution of shrinkage porosity in steel ingots efficiently and accurately, a criterion R√L and a method to obtain its threshold value are proposed. The criterion R√L was derived from the solidification characteristics of steel ingots and the pressure gradient in the mushy zone, taking into consideration the physical properties, the thermal parameters, the structure of the mushy zone, and the secondary dendrite arm spacing. The threshold value of the criterion was obtained by combining numerical simulation of ingot solidification with the total solidification shrinkage rate. Prediction of the shrinkage porosity in a 5.5 ton ingot of 2Cr13 steel with the criterion R√L > 0.21 m·°C^1/2·s^−3/2 agreed well with the results of experimental sectioning. Based on this criterion, the ingot was optimized by decreasing the height-to-diameter ratio and increasing the taper, which successfully eliminated the centreline porosity and further proved the applicability of the criterion.
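The abstract reduces porosity prediction to thresholding a per-cell criterion value against a calibrated limit. A minimal sketch of that final step, assuming per-cell values of R and L are already available from a solidification simulation (the variable names and sample data here are illustrative, not from the paper):

```python
# Flag mushy-zone cells whose criterion value R*sqrt(L) exceeds the
# calibrated threshold (0.21 in the units quoted by the abstract).
import math

THRESHOLD = 0.21  # m * degC^(1/2) * s^(-3/2), from the abstract

def porosity_risk(cells):
    """cells: iterable of (cell_id, R, L); returns ids predicted porous."""
    return [cid for cid, R, L in cells if R * math.sqrt(L) > THRESHOLD]

# Illustrative cells: (id, R, L)
cells = [("c1", 0.05, 4.0), ("c2", 0.30, 1.0), ("c3", 0.10, 9.0)]
print(porosity_risk(cells))  # c2: 0.30*1.0 and c3: 0.10*3.0 exceed 0.21
```

The physics, of course, lives in computing R and L; once those fields exist, the criterion itself is a one-line test per cell.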

  17. Predicting work Performance through selection interview ratings and Psychological assessment

    Directory of Open Access Journals (Sweden)

    Liziwe Nzama

    2008-11-01

    Full Text Available The aim of the study was to establish whether selection interviews used in conjunction with psychological assessments of personality traits and cognitive functioning contribute to predicting work performance. The sample consisted of 102 managers who were appointed recently in a retail organisation. The independent variables were selection interview ratings obtained on the basis of structured competency-based interview schedules by interviewing panels, five broad dimensions of personality defined by the Five Factor Model as measured by the 15 Factor Questionnaire (15FQ+), and cognitive processing variables (current level of work, potential level of work, and 12 processing competencies) measured by the Cognitive Process Profile (CPP). Work performance was measured through annual performance ratings that focused on measurable outputs of performance objectives. Only two predictor variables correlated statistically significantly with the criterion variable, namely interview ratings (r = 0.31) and CPP Verbal Abstraction (r = 0.34). Following multiple regression, only these variables contributed significantly to predicting work performance, and only 17.8% of the variance of the criterion was accounted for.

  18. Judging Criterion of Controlled Structures with Closely Spaced Natural Frequencies

    International Nuclear Information System (INIS)

    Xie Faxiang; Sun Limin

    2010-01-01

    Structures with closely spaced natural frequencies widely exist in civil engineering; however, the criterion for judging the density of closely spaced frequencies is in dispute. This paper suggests a judging criterion for structures with closely spaced natural frequencies based on the analysis of a controlled 2-DOF structure. The results indicate that the optimal control gain of the structure with velocity feedback depends on the frequency density parameter of the structure, and that the maximum attainable additional modal damping ratio is 1.72 times the frequency density parameter when state feedback is applied. Based on a brief review of previous research, a judging criterion relating the minimum frequency density parameter to the required modal damping ratio is proposed.

  19. A New Criterion for Prediction of Hot Tearing Susceptibility of Cast Alloys

    Science.gov (United States)

    Nasresfahani, Mohamad Reza; Niroumand, Behzad

    2014-08-01

    A new criterion for predicting the hot tearing susceptibility of cast alloys is suggested, which takes into account the effects of both the important mechanical and metallurgical factors and is believed to be less sensitive to the presence of volume defects such as bifilms and inclusions. The criterion was validated by studying the hot tearing tendency of an Al-Cu alloy. In conformity with the experimental results, the new criterion predicted a reduction of hot tearing tendency with increasing copper content.

  20. General Criterion for Harmonicity

    Science.gov (United States)

    Proesmans, Karel; Vandebroek, Hans; Van den Broeck, Christian

    2017-10-01

    Inspired by Kubo-Anderson Markov processes, we introduce a new class of transfer matrices whose largest eigenvalue is determined by a simple explicit algebraic equation. Applications include the free energy calculation for various equilibrium systems and a general criterion for perfect harmonicity, i.e., a free energy that is exactly quadratic in the external field. As an illustration, we construct a "perfect spring," namely, a polymer with non-Gaussian, exponentially distributed subunits which, nevertheless, remains harmonic until it is fully stretched. This surprising discovery is confirmed by Monte Carlo and Langevin simulations.

  1. [Silvicultural treatments and their selection effects].

    Science.gov (United States)

    Vincent, G

    1973-01-01

    Selection can be defined in terms of its observable consequences as the non-random differential reproduction of genotypes (Lerner 1958). In forest stands, we select during improvement fellings and reproduction treatments the individuals surpassing others in growth or in production of first-class timber. However, silvicultural treatments guarantee a permanent increase of forest production only if they follow the principles of directional (dynamic) selection. These principles require that the trees retained for further growth and for forest regeneration be selected by their hereditary properties, i.e. by their genotypes. To make this selection feasible, our study deals with the genetic parameters and gives examples of the application of the response to selection, the selection differential, heritability in the narrow and in the broad sense, and the genetic and genotypic gain. From these parameters we can estimate the economic success of several silvicultural treatments in forest stands. The examples demonstrate that selection measures of higher intensity are manifested in a higher selection differential and a higher genetic and genotypic gain, and that such measures show more distinct effects in variable populations (natural forest) than in populations characterized by smaller variability, e.g. many uniform artificially established stands. The examples of the influence of different selection on the genotype composition of populations show that genetics teaches us to differentiate the genotypes of the same species and at the same time gives us new criteria for evaluating selection treatments. From an economic point of view it is advantageous to consider these criteria in silviculture, if only because they allow us to judge the genetic composition of forest stands.

  2. The role of word choice and criterion on intentional memory.

    Science.gov (United States)

    Toyota, Hiroshi

    2015-02-01

    The relationship between the criterion for choosing and the self-choice effect (greater recall in a self-choice than in a forced-choice condition) on intentional memory was examined. Thirty-three female nursing school volunteers were administered 24 word pairs in a 2 × 2 design to assess the influence of motivation upon free recall. When word pairs were presented, participants were asked to choose the word to be remembered, either in a self-choice condition or in a forced-choice condition. Words chosen by the participants were recalled more often than those chosen by the experimenter (forced choice). Moreover, the self-choice effect was greater for words chosen with a self-reference criterion than with a metamemory criterion, supporting the integration hypothesis as the origin of the self-choice effect.

  3. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  4. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    Science.gov (United States)

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications involve four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2 to 64 Hz. The form of the criterion function was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. With proper choice of weighting, all three criterion variables can be made comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably below those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing data acquisition requirements from a 16-s to a 5.33- to 8-s breath-holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
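The linearized-uncertainty idea the abstract relies on can be sketched for the simplest case: for a model linear in its parameters, the weighted least-squares parameter covariance is the noise variance times the inverse of the normal-equation matrix J^T W J. The data, weights, and noise variance below are illustrative, not from the paper:

```python
# Weighted LS fit of y = a + b*x, with the linearized parameter
# covariance sigma^2 * (J^T W J)^{-1} computed from the 2x2 normal matrix.

def fit_line_weighted(xs, ys, ws, sigma2=1.0):
    """Returns the fitted (a, b) and the 2x2 parameter covariance matrix."""
    S = sum(ws)
    Sx = sum(w * x for w, x in zip(ws, xs))
    Sxx = sum(w * x * x for w, x in zip(ws, xs))
    Sy = sum(w * y for w, y in zip(ws, ys))
    Sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = S * Sxx - Sx * Sx          # determinant of J^T W J
    a = (Sxx * Sy - Sx * Sxy) / det
    b = (S * Sxy - Sx * Sy) / det
    cov = [[ sigma2 * Sxx / det, -sigma2 * Sx / det],
           [-sigma2 * Sx / det,   sigma2 * S / det]]
    return (a, b), cov

(a, b), cov = fit_line_weighted([0, 1, 2, 3], [1, 3, 5, 7], [1, 1, 1, 1])
print(a, b)  # exact data y = 1 + 2x, so a = 1.0, b = 2.0
```

Increasing the weight (i.e., lowering the noise) of selected points shrinks the corresponding entries of `cov`, which is exactly the lever the paper pulls when assessing experiment designs.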

  5. Development of Predictor and Criterion Measures for the NCO21 Research Program

    National Research Council Canada - National Science Library

    Knapp, Deidre

    2002-01-01

    ... incorporated into an NCO performance management system geared to 21st century job demands. This report documents the design and development of predictor and criterion measures that will be used in a criterion-related validation data collection...

  6. A Controlled Evaluation of the Distress Criterion for Binge Eating Disorder

    Science.gov (United States)

    Grilo, Carlos M.; White, Marney A.

    2011-01-01

    Objective: Research has examined various aspects of the validity of the research criteria for binge eating disorder (BED) but has yet to evaluate the utility of Criterion C, "marked distress about binge eating." This study examined the significance of the marked distress criterion for BED using 2 complementary comparison groups. Method:…

  7. Effects of Mastery Criterion on the Emergence of Derived Equivalence Relations

    Science.gov (United States)

    Fienup, Daniel M.; Brodsky, Julia

    2017-01-01

    In this study, we manipulated mastery criterion form (rolling or block) and stringency (across 6 or 12 trials) and measured the emergence of derived relations. College students learned neuroanatomy equivalence classes and experienced one of two rolling mastery criteria (6 or 12 consecutive correct responses) or a block mastery criterion (12 trials…

  8. Parametric optimal control of uncertain systems under an optimistic value criterion

    Science.gov (United States)

    Li, Bo; Zhu, Yuanguo

    2018-01-01

    It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly, so the optimal feedback control may be a complex time-oriented function. In this article, a parametric optimal control problem for an uncertain linear quadratic model under an optimistic value criterion is considered in order to simplify the expression of the optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
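For context, the Riccati characterization the abstract refers to takes the following familiar form in the standard (non-uncertain) finite-horizon LQ setting; the paper's optimistic-value variant modifies this, and the matrices here are the generic LQ data, not the paper's:

```latex
\dot{P}(t) = -A^{\top}P(t) - P(t)A + P(t)BR^{-1}B^{\top}P(t) - Q,
\qquad P(t_f) = Q_f,
\qquad u^{*}(t) = -R^{-1}B^{\top}P(t)\,x(t),
```

for dynamics $\dot{x} = Ax + Bu$ and cost $\int_{0}^{t_f}\!\left(x^{\top}Qx + u^{\top}Ru\right)dt + x(t_f)^{\top}Q_f\,x(t_f)$. When this equation has no closed-form solution, the feedback gain $-R^{-1}B^{\top}P(t)$ becomes a complicated function of time, which is the difficulty the paper's parametric approximation addresses.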

  9. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small-animal PET scanner and real patient scans are also conducted to assess clinical potential. (paper)

  10. Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process

    Science.gov (United States)

    Yan, Wei; Chang, Yuwen

    2016-12-01

    Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in the financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion, where the underlying processes follow jump-diffusion processes (Wiener and Poisson processes). The corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is then presented and its efficient frontier obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
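The mean-variance idea behind the abstract can be illustrated in its simplest closed form, without the jump-diffusion dynamics or HJB machinery: the minimum-variance portfolio of two risky assets. The volatilities and correlation below are illustrative, not from the paper:

```python
# Closed-form minimum-variance weights for two risky assets with
# standard deviations s1, s2 and correlation rho (classic Markowitz result).

def min_variance_weights(s1, s2, rho):
    """Returns (w1, w2), the fully-invested weights minimizing variance."""
    cov = rho * s1 * s2
    w1 = (s2**2 - cov) / (s1**2 + s2**2 - 2.0 * cov)
    return w1, 1.0 - w1

w1, w2 = min_variance_weights(0.2, 0.1, 0.0)
print(w1, w2)  # uncorrelated case: weights inverse to variance -> 0.2, 0.8
```

Tracing this weight as the target return varies sweeps out the efficient frontier that the paper derives in continuous time.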

  11. Slope stability analysis using limit equilibrium method in nonlinear criterion.

    Science.gov (United States)

    Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu

    2014-01-01

    In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its description of rock mass. To overcome its shortcomings, this paper combines the Hoek-Brown criterion and the limit equilibrium method and proposes an equation for calculating the safety factor of a slope with the limit equilibrium method under the Hoek-Brown criterion, through the equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, revealing a linear relation between equivalent cohesive strength and the weakening factor D, but nonlinear relations between equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σci, and the intact-rock parameter mi. The relation between the friction angle and all Hoek-Brown parameters is nonlinear. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σci is relatively small, the relation between F and σci is nonlinear, but when σci is relatively large, the relation is linear; with the increase of mi, F decreases first and then increases.
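The parameters GSI, D, and mi named in the abstract enter through the standard 2002 generalized Hoek-Brown relations, sketched below; the paper's own equivalent cohesion and friction-angle expressions are not restated here, only the widely used parameter relations they build on:

```python
# Generalized Hoek-Brown parameter relations (Hoek et al., 2002 edition):
#   mb = mi * exp((GSI - 100) / (28 - 14 D))
#   s  = exp((GSI - 100) / (9 - 3 D))
#   a  = 1/2 + (exp(-GSI/15) - exp(-20/3)) / 6
import math

def hoek_brown_params(GSI, mi, D):
    """Return (mb, s, a) for the generalized Hoek-Brown criterion."""
    mb = mi * math.exp((GSI - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((GSI - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-GSI / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

mb, s, a = hoek_brown_params(GSI=100, mi=10, D=0)
print(mb, s, a)  # intact rock (GSI=100, D=0): mb = mi = 10, s = 1, a = 0.5
```

The dependence of mb and s on D is exponential in these relations, so the linear trend the paper reports holds for the *equivalent* Mohr-Coulomb cohesion, not for mb and s themselves.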

  12. The Leadership Criterion in Technological Institute

    International Nuclear Information System (INIS)

    Carvalho, Marcelo Souza de; Cussa, Adriana Lourenco d'Avila; Suita, Julio Cezar

    2005-01-01

    This paper introduces the Direction's 'Decision Making Practice', which has recently been reviewed with the merging of the foundations of the Leadership Criterion (CE-PNQ). These changes improved the control of institutional plans of action, which result from the critical analysis of global performance and from other information associated with the Decision Making Practice. (author)

  13. Fretting-wear damage of heat exchanger tubes: a proposed damage criterion based on tube vibration response

    International Nuclear Information System (INIS)

    Yetisir, M.; McKerrow, E.; Pettigrew, M.J.

    1997-01-01

    A simple criterion is proposed to estimate fretting-wear damage in heat exchanger tubes with clearance supports. The criterion is based on parameters such as vibration frequency, mid-span vibration amplitude, span length, tube mass, and an empirical wear coefficient. It is generally accepted that fretting-wear damage is proportional to a parameter called work-rate, a measure of the dynamic interaction between a vibrating tube and its supports. Due to the complexity of the impact-sliding behavior at the clearance supports, work-rate calculations for heat exchanger tubes require specialized non-linear finite element codes that include contact models for various clearance-support geometries. Such non-linear finite element analyses are complex, expensive, and time consuming. The proposed criterion uses the results of linear vibration analysis (i.e., vibration frequency and mid-span vibration amplitude due to turbulence) and does not require a non-linear analysis. It can be used by non-specialists for a quick evaluation of the expected work-rate, and hence the fretting-wear damage, of heat exchanger tubes. The proposed criterion was obtained from an extensive parametric study conducted using a non-linear finite element program. It is shown that, by using the proposed criterion, work-rate can be estimated within a factor of two. This result, however, requires further testing with more complicated flow patterns. (author)
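The abstract's key premise, damage proportional to work-rate, can be sketched in Archard-type form. The paper's actual correlation for estimating work-rate from the linear vibration results is not given in the abstract, so the sketch below takes the work-rate itself as an input; the wear coefficient and numbers are illustrative assumptions:

```python
# Archard-type proportionality: wear volume rate = wear coefficient * work-rate.
# All inputs here are hypothetical illustration values, not the paper's.

def wear_volume_rate(work_rate, wear_coeff):
    """Volume loss rate [m^3/s] from work-rate [W] and coefficient [m^2/N]."""
    return wear_coeff * work_rate

def hours_to_volume(volume_allowed, work_rate, wear_coeff):
    """Hours until a given wear volume is reached at constant work-rate."""
    return volume_allowed / (wear_volume_rate(work_rate, wear_coeff) * 3600.0)

# e.g. K = 2e-15 m^2/N, work-rate = 5 mW, allowable wear volume = 1e-9 m^3
hours = hours_to_volume(1e-9, 5e-3, 2e-15)
print(round(hours))
```

Because damage scales linearly with work-rate in this model, the paper's "within a factor of two" accuracy on work-rate translates directly into a factor of two on predicted tube life.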

  14. Spatial frequency discrimination: visual long-term memory or criterion setting?

    Science.gov (United States)

    Lages, M; Treisman, M

    1998-02-01

    A long-term sensory memory is believed to account for spatial frequency discrimination when reference and test stimuli are separated by long intervals. We test an alternative proposal: that discrimination is determined by the range of test stimuli, through their entrainment of criterion-setting processes. Experiments 1 and 2 show that the 50% point of the psychometric function is largely determined by the midpoint of the stimulus range, not by the reference stimulus. Experiment 3 shows that discrimination of spatial frequencies is similarly affected by orthogonal contextual stimuli and parallel contextual stimuli and that these effects can be explained by criterion-setting processes. These findings support the hypothesis that discrimination over long intervals is explained by the operation of criterion-setting processes rather than by long-term sensory retention of a neural representation of the stimulus.

  15. LSSVM-Based Rock Failure Criterion and Its Application in Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Changxing Zhu

    2015-01-01

    Full Text Available A rock failure criterion is very important for predicting the failure of rocks or rock masses in rock mechanics and engineering. Least squares support vector machines (LSSVM) are a powerful tool for addressing complex nonlinear problems. This paper describes an LSSVM-based rock failure criterion for analyzing the deformation of a circular tunnel under different in situ stresses without assuming a functional form. First, LSSVM was used to represent the nonlinear relationship between the mechanical properties of rock and its failure behavior, in order to construct a rock failure criterion from experimental data. This criterion was then used in a hypothetical numerical analysis of a circular tunnel to analyze the mechanical behavior of the rock mass surrounding the tunnel. The Mohr-Coulomb and Hoek-Brown failure criteria were also used to analyze the same case, and the results were compared; these clearly indicate that LSSVM can be used to establish a rock failure criterion and to predict the failure of a rock mass during excavation of a circular tunnel.

  16. Optimal design of constant-stress accelerated degradation tests using the M-optimality criterion

    International Nuclear Information System (INIS)

    Wang, Han; Zhao, Yu; Ma, Xiaobing; Wang, Hongyu

    2017-01-01

    In this paper, we propose the M-optimality criterion for designing constant-stress accelerated degradation tests (ADTs). The newly proposed criterion concentrates on degradation mechanism equivalence rather than the evaluation precision or prediction accuracy usually considered in traditional optimization criteria. Subject to constraints on the total sample number, the test termination time, and the stress region, an optimum constant-stress ADT plan is derived by determining the combination of stress levels and the number of samples allocated to each stress level, when the degradation path comes from an inverse Gaussian (IG) process model with covariates and random effects. A numerical example is presented to verify the robustness of our proposed optimum plan and compare its efficiency with other test plans. Results show that, with a slightly relaxed requirement of evaluation precision and prediction accuracy, our proposed optimum plan reduces the dispersion of the estimated acceleration factor between the usage stress level and a higher accelerated stress level, which makes an important contribution to reliability demonstration and assessment tests. - Highlights: • We establish the necessary conditions for degradation mechanism equivalence of ADTs. • We propose the M-optimality criterion for designing constant-stress ADT plans. • The M-optimality plan reduces the dispersion of the estimated acceleration factors. • An electrical connector with its stress relaxation data is used for illustration.

  17. Program management aid for redundancy selection and operational guidelines

    Science.gov (United States)

    Hodge, P. W.; Davis, W. L.; Frumkin, B.

    1972-01-01

    Although this criterion was developed specifically for use on the shuttle program, it has application to many other multi-mission programs (e.g., aircraft or mechanisms). The methodology employed is directly applicable even if the tools (nomographs and equations) are for mission-peculiar cases. The redundancy selection criterion was developed to ensure that both the design and operational cost impacts (life cycle costs) are considered in selecting the quantity of operational redundancy. These tools were developed as aids to expedite the decision process and are not intended as automatic decision makers. This approach to redundancy selection is unique in that it enables a pseudo systems analysis to be performed on an equipment basis without waiting for all designs to be hardened.

  18. PCA criterion for SVM (MLP) classifier for flavivirus biomarker from salivary SERS spectra at febrile stage.

    Science.gov (United States)

    Radzol, A R M; Lee, Khuan Y; Mansor, W; Omar, I S

    2016-08-01

    Non-structural protein 1 (NS1) has been recognized as one of the biomarkers for flavivirus, which causes diseases with life-threatening consequences. NS1 is an antigen that allows detection of the illness at the febrile stage, currently mostly from blood samples. Our work here intends to define an optimum model for PCA-SVM with an MLP kernel for classification of the flavivirus biomarker, the NS1 molecule, from SERS spectra of saliva, which to the best of our knowledge has never been explored. Since the performance of the model depends on the PCA criterion and the MLP parameters, both are examined in tandem. The input vector to the classifier determined by each PCA criterion is subjected to exhaustive brute-force tuning of the MLP parameters. Its performance is also compared to our previous works, where linear and RBF kernels were used. It is found that the best PCA-SVM (MLP) model is defined by 5 PCs from Cattell's scree test for PCA, together with P1 and P2 values of 0.1 and -0.2 respectively, with a classification performance of [96.9%, 93.8%, 100.0%].

  19. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population, and maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean), and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all traits except test weight and heading date. The rice dataset had strong population structure, and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes within the TRS while maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicate that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
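Of the five sampling algorithms compared, stratified sampling is the simplest to sketch: allocate the training set across population strata in proportion to their size and sample within each stratum. The stratum names and sizes below are illustrative, and this sketch omits the CDmean/PEVmean optimization entirely:

```python
# Proportional stratified sampling of a training set (TRS) across
# population strata, e.g. subpopulation clusters from a structure analysis.
import random

def stratified_training_set(strata, n_total, seed=0):
    """strata: dict name -> list of genotype ids; returns sampled ids."""
    rng = random.Random(seed)
    pop = sum(len(members) for members in strata.values())
    selected = []
    for name, members in sorted(strata.items()):
        k = round(n_total * len(members) / pop)  # proportional allocation
        selected.extend(rng.sample(members, min(k, len(members))))
    return selected

strata = {"A": [f"a{i}" for i in range(60)], "B": [f"b{i}" for i in range(40)]}
picked = stratified_training_set(strata, n_total=10)
print(len(picked))  # 6 sampled from stratum A + 4 from stratum B
```

The abstract's finding is that this simple scheme wins under strong structure (rice), while the CDmean-style optimized criteria win under mild structure (wheat).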

  20. Application Of Database Program in selecting Sorghum (Sorghum bicolor L) Mutant Lines

    International Nuclear Information System (INIS)

    H, Soeranto

    2000-01-01

    Computer database software, namely MSTAT and Paradox, has been exercised in the field of mutation breeding, especially in the process of selecting plant mutant lines of sorghum. In MSTAT, mutant lines can be selected by activating the SELECTION function and then entering mathematical formulas for the selection criterion; an alternative is to apply the desired selection intensity to the output of the subprogram SORT. Including the selected mutant lines in the BRSERIES program makes their progenies easier to trace in subsequent generations. In Paradox, an application program for selecting mutant lines can be built by combining the Table, Form, and Report facilities. Selecting mutant lines with a defined selection criterion can easily be done through data filtering. As a relational database, Paradox ensures that the application program for selecting mutant lines and tracking progenies can be made easy, efficient, and interactive.
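The "selection intensity applied to sorted output" step described above amounts to ranking candidate lines on a trait and keeping the top fraction. A minimal sketch of that logic; the field names and data are illustrative, not MSTAT or Paradox syntax:

```python
# Keep the top `intensity` fraction of mutant lines, ranked on one trait
# (equivalent to applying a selection intensity to a SORT-style listing).

def select_lines(lines, trait, intensity):
    """lines: list of dicts; returns the top `intensity` fraction by `trait`."""
    n_keep = max(1, int(len(lines) * intensity))
    ranked = sorted(lines, key=lambda record: record[trait], reverse=True)
    return ranked[:n_keep]

lines = [{"id": "M1", "yield": 4.2}, {"id": "M2", "yield": 5.1},
         {"id": "M3", "yield": 3.8}, {"id": "M4", "yield": 4.9}]
best = select_lines(lines, "yield", 0.5)
print([record["id"] for record in best])  # top 50% by yield: M2, M4
```

Carrying a stable line identifier through each generation's selection, as the BRSERIES naming scheme does, is what makes the selected progenies traceable later.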

  1. VALIDITY OF EXCESS ENTROPY PRODUCTION CRITERION OF THERMODYNAMIC STABILITY FOR NONEQUILIBRIUM STEADY STATES

    Institute of Scientific and Technical Information of China (English)

    吴金平

    1991-01-01

    The relation between the excess entropy production criterion of thermodynamic stability for nonequilibrium states and the kinetic linear stability principle is discussed. It is shown that the condition required by the excess entropy production criterion is generally sufficient, but not necessary, to judge the stability of the system, and is stronger than that of the linear stability principle. Only when the product matrix between the linearized matrix of the kinetic equations and the matrix of the quadratic form of the second-order excess entropy is symmetric is the condition required by the excess entropy production criterion for asymptotic stability of the steady state (δ_x P > 0) both necessary and sufficient. The counterexample given by Fox to prove that the excess entropy, (δ²S)ss, is not a Liapunov function is incorrect; contrary to his conclusion, it is in fact a positive example proving that the excess entropy is a Liapunov function. Moreover, the excess entropy production criterion is not limited by symmetry conditions on the linearized matrix of the kinetic equations. The excess entropy around nonequilibrium steady states, (δ²S)ss, is a Liapunov function of the thermodynamic system.
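In the textbook (Glansdorff-Prigogine) formulation that the abstract builds on, the stability statement reads, in notation consistent with the abstract's (δ_x P > 0):

```latex
\delta^{2}S\big|_{ss} \le 0,
\qquad
\tfrac{1}{2}\,\frac{\partial}{\partial t}\,\delta^{2}S = \delta_{x}P \ge 0,
```

so that the second-order excess entropy δ²S is negative semidefinite and non-decreasing along trajectories near the steady state, which is exactly the pair of properties required of a Liapunov function; the excess entropy production δ_x P plays the role of its time derivative.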

  2. A model for persistency of egg production

    NARCIS (Netherlands)

    Grossman, M.; Gossman, T.N.; Koops, W.J.

    2000-01-01

    The objectives of our study were to propose a new definition for persistency of egg production and to develop a mathematical model to describe the egg production curve, one that includes a new measure for persistency, based on the proposed definition, for use as a selection criterion to improve

  3. The Mercier Criterion in Reversed Shear Tokamak Plasmas

    International Nuclear Information System (INIS)

    Kessel, C.; Chance, M.S.; Jardin, S.C.

    1999-01-01

    A recent numerical study has found that, contrary to conventional theoretical and experimental expectations, reversed shear plasmas are unstable primarily because the term proportional to the shear in the Mercier criterion is destabilizing. In the present study, the role of the magnetic shear, both local and global, is examined for various tokamak configurations with monotonic and non-monotonic safety factor profiles. The enhancement of the local shear due to the outward shift of the magnetic axis suggests that the latter are less susceptible to interchanges. Furthermore, by regrouping the terms in the criterion, the V'' term, when differentiated instead with respect to the toroidal flux, is shown to absorb the dominant shear term. No Mercier instability is found for profiles similar to those in the previous study.

  4. Criterion of independence applied to personnel responsible for in-house verification

    International Nuclear Information System (INIS)

    Pavaux, F.

    1982-01-01

    Framatome's experience has shown that one of the most difficult criteria to interpret in applying quality assurance programmes is that of ''organization''. In particular, this requires that personnel responsible for in-house verification should have ''sufficient independence''. The author examines how Framatome interprets the criterion of sufficient independence. It may seem easy to deal with this problem on paper, by redistributing the boxes of the organizational chart, but to do so is both unrealistic and deceptive; the development of reference models runs into difficulties when it comes to practical application and these difficulties alone justify trying another approach to the problem. The method advocated here consists in analysing each situation as it arises, taking into account the criterion in question, and disregarding any pre-defined model or reference situation. The analysis should involve all quality assurance functions and not, as is too often the case, only the independence of the quality assurance service. The analysis should also examine organizational freedom and independence from direct pressures of cost and schedule considerations. To support this recommendation, three standard cases are described (manufacturing control, design verification, on-site inspection team) which demonstrate how these criteria can give rise to different difficulties in different cases. The author concludes that, in contrast to other criteria so often applied by successive approximations, organizational changes should only be decided upon when absolutely necessary and after a detailed analysis of the particular case in question has been performed. (author)

  5. Extended equal areas criterion: foundations and applications

    Energy Technology Data Exchange (ETDEWEB)

    Yusheng, Xue [Nanjing Automation Research Institute, Nanjing (China)

    1994-12-31

    The extended equal area criterion (EEAC) provides analytical expressions for ultra-fast transient stability assessment, flexible sensitivity analysis, and means for preventive and emergency controls. Its outstanding performance has been demonstrated by thousands upon thousands of simulations on more than 50 real power systems and by on-line operation records in an EMS environment of the Northeast China Power System since September 1992. However, the research has so far been based mainly on heuristics and simulations. This paper lays a theoretical foundation for EEAC and brings to light the mechanism of transient stability. It is proved that the dynamic EEAC furnishes a necessary and sufficient condition for the stability of multimachine systems with arbitrarily detailed models, within the integration accuracy. This establishes a new platform for further advancing EEAC and for better understanding of the problems. An overview of EEAC applications in China is also given in this paper. (author) 30 refs.

  6. Portfolio selection with heavy tails

    NARCIS (Netherlands)

    Hyung, N.; Vries, de C.G.

    2007-01-01

    Consider the portfolio problem of choosing the mix between stocks and bonds under a downside risk constraint. Typically stock returns exhibit fatter tails than bonds corresponding to their greater downside risk. Downside risk criteria like the safety first criterion therefore often select corner

  7. The Criterion A problem revisited: controversies and challenges in defining and measuring psychological trauma.

    Science.gov (United States)

    Weathers, Frank W; Keane, Terence M

    2007-04-01

    The Criterion A problem in the field of traumatic stress refers to the stressor criterion for posttraumatic stress disorder (PTSD) and involves a number of fundamental issues regarding the definition and measurement of psychological trauma. These issues first emerged with the introduction of PTSD as a diagnostic category in the Diagnostic and Statistical Manual of Mental Disorders, Third Edition (DSM-III; American Psychiatric Association, 1980) and continue to generate considerable controversy. In this article, the authors provide an update on the Criterion A problem, with particular emphasis on the evolution of the DSM definition of the stressor criterion and the ongoing debate regarding broad versus narrow conceptualizations of traumatic events.

  8. MOCK OBSERVATIONS OF BLUE STRAGGLERS IN GLOBULAR CLUSTER MODELS

    International Nuclear Information System (INIS)

    Sills, Alison; Glebbeek, Evert; Chatterjee, Sourav; Rasio, Frederic A.

    2013-01-01

    We created artificial color-magnitude diagrams of Monte Carlo dynamical models of globular clusters and then used observational methods to determine the number of blue stragglers in those clusters. We compared these blue stragglers to various cluster properties, mimicking work that has been done for blue stragglers in Milky Way globular clusters to determine the dominant formation mechanism(s) of this unusual stellar population. We find that a mass-based prescription for selecting blue stragglers will select approximately twice as many blue stragglers as a selection criterion that was developed for observations of real clusters. However, the two numbers of blue stragglers are well-correlated, so either selection criterion can be used to characterize the blue straggler population of a cluster. We confirm previous results that the simplified prescription for the evolution of a collision or merger product in the BSE code overestimates their lifetimes. We show that our model blue stragglers follow similar trends with cluster properties (core mass, binary fraction, total mass, collision rate) as the true Milky Way blue stragglers as long as we restrict ourselves to model clusters with an initial binary fraction higher than 5%. We also show that, in contrast to earlier work, the number of blue stragglers in the cluster core does have a weak dependence on the collisional parameter Γ in both our models and in Milky Way globular clusters.

  9. A simple criterion for determining the static friction force between nanowires and flat substrates using the most-bent-state method.

    Science.gov (United States)

    Hou, Lizhen; Wang, Shiliang; Huang, Han

    2015-04-24

    A simple criterion was developed to assess the appropriateness of the currently available models that estimate the static friction force between nanowires and substrates using the 'most-bent-state' method. Our experimental testing of the static friction force between Al2O3 nanowires and Si substrate verified our theoretical analysis, as well as the establishment of the criterion. It was found that the models are valid only for the bent nanowires with the ratio of wire length over the minimum curvature radius [Formula: see text] no greater than 1. For the cases with [Formula: see text] greater than 1, the static friction force was overestimated as it neglected the effect of its tangential component.

  10. A Path-Independent Forming Limit Criterion for Stamping Simulations

    International Nuclear Information System (INIS)

    Zhu Xinhai; Chappuis, Laurent; Xia, Z. Cedric

    2005-01-01

    Forming Limit Diagram (FLD) has been proved to be a powerful tool for assessing necking failures in sheet metal forming analysis for the majority of stamping operations over the last three decades. However, experimental evidence and theoretical analysis suggest that its applications are limited to linear or almost linear strain paths during its deformation history. Abrupt changes or even gradual deviations from linear strain paths will shift forming limit curves from their original values, a situation that occurs in the vast majority of sequential stamping operations, such as where the drawing process is followed by flanging and re-strike processes. Various forming limit models have been put forward recently to provide remedies for the problem, notably stress-based and strain-gradient-based forming limit criteria. This study presents an alternative path-independent forming limit criterion. Instead of traditional Forming Limit Diagrams (FLD), which are constructed in terms of major and minor principal strains throughout deformation history, the new criterion defines a critical effective strain ε̄* as the limit strain for necking, and it is shown that ε̄* can be expressed as a function of the current strain-rate state and material work-hardening properties, without the need to explicitly consider strain-path effects. It is given by ε̄* = f(β, k, n), where β = dε₂/dε₁ at the current deformation state, and k and n are material strain-hardening parameters if a power law is assumed. The analysis is built upon previous work by Storen and Rice [1975] and Zhu et al. [2002] with the incorporation of anisotropic yield models such as Hill'48 for quadratic orthotropic yield and Hill'79 for non-quadratic orthotropic yield. Effects of anisotropic parameters such as R-values and exponent n-values on necking are investigated in detail for a variety of strain paths. Results predicted by the current analysis are compared against experimental data gathered from the literature.
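The structure of the criterion ε̄* = f(β, k, n) can be sketched as follows. The limit function used here is a deliberately crude placeholder, not the closed form derived in the paper; only the strain-ratio computation and the function signature reflect the text above.

```python
# Hedged sketch: evaluate the current strain ratio beta from strain
# increments and apply a user-supplied limit function f(beta, k, n).
# The power-law limit function f_demo is a placeholder assumption,
# NOT the closed form derived in the paper.
def strain_ratio(d_eps1, d_eps2):
    """beta = d(eps2)/d(eps1) at the current deformation state."""
    return d_eps2 / d_eps1

def necking_limit(beta, k, n, f):
    """Critical effective strain eps_bar* = f(beta, k, n)."""
    return f(beta, k, n)

# Placeholder limit function (illustrative only): scales the hardening
# exponent n with the strain ratio.
f_demo = lambda beta, k, n: n * (1.0 + 0.5 * beta)

beta = strain_ratio(0.02, -0.01)   # uniaxial-like path, beta = -0.5
print(necking_limit(beta, k=500.0, n=0.2, f=f_demo))
```

The point of the criterion is that `beta`, `k`, and `n` are all evaluated at the current state, so no strain-path history needs to be stored.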

  11. The role of decision criterion in the Deese-Roediger-McDermott (DRM) false recognition memory: False memory falls and rises as a function of restriction on criterion setting.

    Science.gov (United States)

    Jou, Jerwen; Escamilla, Eric E; Arredondo, Mario L; Pena, Liann; Zuniga, Richard; Perez, Martin; Garcia, Clarissa

    2018-02-01

    How much of the Deese-Roediger-McDermott (DRM) false memory is attributable to decision criterion is so far a controversial issue. Previous studies typically used explicit warnings against accepting the critical lure to investigate this issue. The assumption is that if the false memory results from using a liberally biased criterion, it should be greatly reduced or eliminated by an explicit warning against accepting the critical lure. Results showed that warning was generally ineffective. We asked the question of whether subjects can substantially reduce false recognition without being warned when the test forces them to make a distinction between true and false memories. Using a two-alternative forced choice in which criterion plays a relatively smaller role, we showed that subjects could indeed greatly reduce the rate of false recognition. However, when the forced-choice restriction was removed from the two-item choice test, the rate of false recognition rebounded to that of the hit for studied list words, indicating the role of criterion in false recognition.

  12. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. However, in practice, model runtime is another factor relevant to the model weights. Hence, we believe that it should be included, leading to an overall trade-off between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that more expensive models can be sampled much less under time constraints than faster models (in inverse proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
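The budget-aware idea above can be sketched numerically: under a fixed time budget a slow model gets fewer evidence samples, so its BME estimate carries a larger bootstrap error, which then discounts its weight. The model names, runtimes, toy log-likelihoods, and the specific penalty rule (subtracting the bootstrap error) are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

# Hedged sketch: runtime-aware Bayesian model weights. All numbers and
# the error-discounting rule are illustrative assumptions.
rng = np.random.default_rng(0)

def bme_with_error(log_likes, n_boot=500):
    """Sampling-based BME estimate plus a bootstrap error estimate."""
    likes = np.exp(log_likes)
    bme = likes.mean()
    boots = [rng.choice(likes, size=likes.size, replace=True).mean()
             for _ in range(n_boot)]
    return bme, float(np.std(boots))

budget_seconds = 60.0
runtimes = {"fast_ar": 0.01, "slow_pde": 1.0}            # seconds per run
samples = {m: int(budget_seconds / t) for m, t in runtimes.items()}

bmes, errs = {}, {}
for m, n in samples.items():
    ll = rng.normal(loc=-2.0, scale=0.5, size=n)         # toy log-likelihoods
    bmes[m], errs[m] = bme_with_error(ll)

# Penalize each evidence by its statistical uncertainty (one possible rule),
# then normalize to model weights.
scores = {m: max(bmes[m] - errs[m], 0.0) for m in bmes}
total = sum(scores.values())
weights = {m: s / total for m, s in scores.items()}
print(weights)
```

With only 60 runs versus 6000, the expensive model's evidence estimate is noisier, and the discounting shifts weight toward the fast model even when the raw BME values are comparable.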

  13. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  14. Temperature profile retrieval in axisymmetric combustion plumes using multilayer perceptron modeling and spectral feature selection in the infrared CO2 emission band.

    Science.gov (United States)

    García-Cuesta, Esteban; de Castro, Antonio J; Galván, Inés M; López, Fernando

    2014-01-01

    In this work, a methodology based on the combined use of a multilayer perceptron model fed using selected spectral information is presented to invert the radiative transfer equation (RTE) and to recover the spatial temperature profile inside an axisymmetric flame. The spectral information is provided by the measurement of the infrared CO2 emission band in the 3-5 μm spectral region. A guided spectral feature selection was carried out using a joint criterion of principal component analysis and a priori physical knowledge of the radiative problem. After applying this guided feature selection, a subset of 17 wavenumbers was selected. The proposed methodology was applied over synthetic scenarios. Also, an experimental validation was carried out by measuring the spectral emission of the exhaust hot gas plume in a microjet engine with a Fourier transform-based spectroradiometer. Temperatures retrieved using the proposed methodology were compared with classical thermocouple measurements, showing a good agreement between them. Results obtained using the proposed methodology are very promising and can encourage the use of sensor systems based on the spectral measurement of the CO2 emission band in the 3-5 μm spectral window to monitor combustion processes in a nonintrusive way.
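The statistical half of the "guided" feature selection described above (ranking spectral channels by principal-component loadings; the physical prior on the CO2 band is omitted here) can be sketched on synthetic spectra. The data, the informative band location, and the choice of 5 channels are all assumptions for illustration.

```python
import numpy as np

# Hedged sketch: rank spectral channels by the loadings of the first
# principal component and keep the top few. Synthetic spectra; the
# informative band (channels 10-14) and the 5-channel cut are assumptions.
rng = np.random.default_rng(3)
n_spectra, n_channels = 100, 40
spectra = rng.normal(size=(n_spectra, n_channels))
# Add a strong common factor to channels 10..14 (the "informative band").
spectra[:, 10:15] += rng.normal(size=(n_spectra, 1)) * 3.0

Xc = spectra - spectra.mean(axis=0)          # center before PCA
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = np.abs(Vt[0])                     # first principal component
selected = np.argsort(loadings)[::-1][:5]    # top-5 channels by loading
print(sorted(selected.tolist()))
```

In the paper this PCA-based ranking is combined with a priori radiative-transfer knowledge before the final 17 wavenumbers are fixed; the sketch shows only the data-driven ranking step.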

  15. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

    Full Text Available Abstract Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains infeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have
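The two ranking stages above (univariate AIC ranking, then selection frequency across bootstrap resamples) can be sketched on synthetic data. As a stand-in assumption, ordinary least squares with a Gaussian AIC replaces the paper's logistic regression, and a single forward step replaces the full step-wise procedure.

```python
import numpy as np

# Hedged sketch of the staged selection idea: (1) rank candidate
# predictors by AIC from single-variable fits; (2) rank them by how often
# a forward step picks them across bootstrap resamples. OLS/Gaussian AIC
# stands in for logistic regression; data are synthetic.
rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 4))                  # 4 candidate predictors
y = 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def aic_ols(X1, y):
    X1 = np.column_stack([np.ones(len(y)), X1])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * X1.shape[1]

# Stage 1: univariate AIC ranking (lower AIC is better).
uni = sorted(range(X.shape[1]), key=lambda j: aic_ols(X[:, [j]], y))

# Stage 2: how often each variable wins one forward step on a bootstrap
# resample of the data.
counts = np.zeros(X.shape[1], dtype=int)
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    Xb, yb = X[idx], y[idx]
    best = min(range(X.shape[1]), key=lambda j: aic_ols(Xb[:, [j]], yb))
    counts[best] += 1
print(uni, counts)
```

The strongest predictor (here column 0) should lead both rankings; in the paper the frequency ranking then drives the order of step-wise inclusion into the multiple-variable model.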

  16. Analytical Solution of Tunnel Surrounding Rock for Stress and Displacement Based on Lade–Duncan Criterion

    Directory of Open Access Journals (Sweden)

    MingZheng Zhu

    2018-01-01

    Full Text Available The deformation and failure of tunnel surrounding rock is the result of tunnel excavation disturbance and rock stress release. When the local stress of the surrounding rock exceeds the elastic limit of the rock mass, a plastic analysis of the surrounding rock must be carried out to judge the stability of the tunnel. In this study, the Lade–Duncan yield criterion is used to derive analytical solutions for the surrounding rock in a tunnel, and the radius and displacement of the plastic zone are deduced using an equilibrium equation. The plastic zone radius and displacement based on the Lade–Duncan and Mohr–Coulomb criteria were compared using a single-factor analysis method under different internal friction angles, in situ stresses, and support resistances. The results show that the radius and displacement of the plastic zone calculated by the Lade–Duncan criterion are close to those of the Mohr–Coulomb criterion under high internal friction angle and support resistance or low in situ rock stress; however, the radius and displacement of the plastic zone calculated by the Lade–Duncan criterion are larger under normal circumstances, and the Lade–Duncan criterion is more applicable to the stability analysis of the surrounding rock in a tunnel.
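The Mohr–Coulomb baseline the paper compares against has a well-known closed form (Kastner's solution) for the plastic-zone radius around a circular tunnel; a sketch of it is below. The input values (tunnel radius, in-situ stress, support pressure, cohesion, friction angle) are illustrative, not taken from the study.

```python
import math

# Hedged sketch: Kastner's classical Mohr-Coulomb plastic-zone radius for
# a circular tunnel, the baseline criterion discussed above. Inputs are
# illustrative values, not the study's.
def mc_plastic_radius(r0, p0, pi, c, phi_deg):
    """r0: tunnel radius, p0: in-situ stress, pi: support pressure,
    c: cohesion, phi_deg: internal friction angle (degrees)."""
    phi = math.radians(phi_deg)
    cot = math.cos(phi) / math.sin(phi)
    base = (p0 + c * cot) * (1 - math.sin(phi)) / (pi + c * cot)
    expo = (1 - math.sin(phi)) / (2 * math.sin(phi))
    return r0 * base ** expo

# 4 m tunnel, 10 MPa in-situ stress, 0.5 MPa support, c = 1 MPa, phi = 30 deg.
print(mc_plastic_radius(4.0, 10.0, 0.5, 1.0, 30.0))
```

As the abstract notes, increasing the friction angle or the support resistance shrinks this radius, which is exactly the regime where the two criteria give similar answers.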

  17. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  18. Multiuser hybrid switched-selection diversity systems

    KAUST Repository

    Shaqfeh, Mohammad

    2011-09-01

    A new multiuser scheduling scheme is proposed and analyzed in this paper. The proposed system combines features of conventional full-feedback selection-based diversity systems and reduced-feedback switch-based diversity systems. The new hybrid system provides flexibility in trading off the channel information feedback overhead against the prospective multiuser diversity gains. The users are clustered into groups, and the user groups are ordered into a sequence. Per-group feedback thresholds are used and optimized to maximize the system's overall achievable rate. The proposed hybrid system applies a switched diversity criterion to choose one of the groups, and a selection criterion to decide the user to be scheduled from the chosen group. Numerical results demonstrate that the system capacity increases as the number of users per group increases, but at the cost of more required feedback messages. © 2011 IEEE.
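The hybrid rule described above can be sketched as: scan the ordered groups, switch to the first group whose best channel exceeds its per-group threshold, then select the best user inside it. The thresholds, group sizes, and fading model below are illustrative assumptions, not the optimized values from the paper.

```python
import numpy as np

# Hedged sketch of the hybrid switched/selection scheduler described
# above. Thresholds and channel gains are illustrative, not optimized.
rng = np.random.default_rng(1)

def hybrid_schedule(group_gains, thresholds):
    # Switched-diversity part: take the first group passing its threshold.
    for g, gains in enumerate(group_gains):
        best_user = int(np.argmax(gains))        # selection inside the group
        if gains[best_user] >= thresholds[g]:
            return g, best_user
    # Fallback when no group passes: pick the overall best user.
    flat = [(g, int(np.argmax(gains)), float(np.max(gains)))
            for g, gains in enumerate(group_gains)]
    g, u, _ = max(flat, key=lambda t: t[2])
    return g, u

# 3 groups of 4 users; exponential power gains model Rayleigh fading.
groups = [rng.exponential(1.0, size=4) for _ in range(3)]
thresholds = [1.5, 1.2, 0.9]
print(hybrid_schedule(groups, thresholds))
```

Only the scanned groups need to feed back channel state, which is the source of the feedback savings relative to full selection diversity.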

  19. DECISION MAKING FOR RAW MATERIAL SUPPLIER SELECTION USING AN ANALYTIC HIERARCHY PROCESS APPROACH AT PR PAHALA SIDOARJO

    Directory of Open Access Journals (Sweden)

    Miftakhul Jannah

    2016-11-01

    Full Text Available Choosing the right raw material supplier(s) is essential to ensure raw material of good quality. There are several approaches used for supplier selection, including selection and evaluation methods, systems, and models. This study aimed at evaluating and developing a raw material supplier selection method at PR Pahala, Sidoarjo, Indonesia, a cigarette company. The model used in this study was the QCDFR (Quality, Cost, Delivery, Flexibility, Responsiveness) model. The stages employed in developing the model were: establishing the criteria, the supplier performance indicators for each criterion, the alternatives, the criterion weights, and the performance indicators of the supplier alternatives; and preparing the spreadsheet and report for the supplier selection evaluation. Several main suppliers of dried tobacco were evaluated in this study, coming from four different areas, namely Madura, Bondowoso, Tulungagung, and Malang. The model developed should be able to assist the company in selecting the supplier with the best performance. The most important criterion in supplier selection at PR Pahala is quality, which had the highest weight of 0.373, followed by cost at 0.266, responsiveness at 0.156, delivery at 0.128, and flexibility at 0.077. Moreover, the weights of the alternatives by area were Madura 0.311, Tulungagung 0.234, Bondowoso 0.253, and Malang 0.202.
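In AHP, criterion weights like those reported above come from the normalized principal eigenvector of a pairwise-comparison matrix. The matrix below is an illustrative reconstruction, chosen only so that quality dominates cost as in the reported weights; it is not the study's actual comparison data.

```python
import numpy as np

# Hedged sketch: AHP criterion weights as the normalized principal
# eigenvector of a pairwise-comparison matrix. The matrix entries are
# illustrative assumptions, not the study's actual judgments.
A = np.array([
    # quality  cost   resp.  deliv.  flex.
    [1.0,      2.0,   3.0,   3.0,    5.0],   # quality
    [1/2,      1.0,   2.0,   2.0,    4.0],   # cost
    [1/3,      1/2,   1.0,   1.0,    3.0],   # responsiveness
    [1/3,      1/2,   1.0,   1.0,    2.0],   # delivery
    [1/5,      1/4,   1/3,   1/2,    1.0],   # flexibility
])
vals, vecs = np.linalg.eig(A)
principal = np.real(vecs[:, np.argmax(np.real(vals))])
weights = principal / principal.sum()        # normalize to sum to 1
print(np.round(weights, 3))
```

The same computation, applied to per-criterion comparison matrices of the four supplier areas, yields the alternative weights reported in the abstract.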

  20. An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography

    Science.gov (United States)

    Ivancic, Will (Technical Monitor); Eddy, Wesley M.

    2005-01-01

    Elliptic curve cryptography (ECC) will be an important technology for electronic privacy and authentication in the near future. There are many published specifications for elliptic curve cryptosystems, most of which contain detailed descriptions of the process for the selection of domain parameters. Selecting strong domain parameters ensures that the cryptosystem is robust to attacks. Due to a limitation in several published algorithms for doubling points on elliptic curves, some ECC implementations may produce incorrect, inconsistent, and incompatible results if domain parameters are not carefully chosen under a criterion that we describe. Few documents specify the addition or doubling of points in such a manner as to avoid this problematic situation. The safety criterion we present is not listed in any ECC specification we are aware of, although several other guidelines for domain selection are discussed in the literature. We provide a simple example of how a set of domain parameters not meeting this criterion can produce catastrophic results, and outline a simple means of testing curve parameters for interoperable safety over doubling.
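One concrete way the textbook doubling formula can break is when the point has y = 0 (a point of order two): the slope λ = (3x² + a)/(2y) is undefined, and implementations that omit this check can disagree. The toy curve and points below are illustrative; the specific safety criterion of the paper is not reproduced here.

```python
# Hedged sketch: affine point doubling on y^2 = x^3 + ax + b over GF(p).
# The slope lambda = (3x^2 + a)/(2y) is undefined when y = 0, so a
# correct doubling routine must special-case it. Toy curve parameters.
p, a, b = 23, 1, 0                      # y^2 = x^3 + x over GF(23)

def double_point(P):
    if P is None:
        return None                      # point at infinity stays fixed
    x, y = P
    if y == 0:
        return None                      # order-2 point: 2P = infinity
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return (x3, y3)

print(double_point((0, 0)))              # order-2 point on this curve
print(double_point((9, 5)))              # a generic point on the curve
```

Implementations that compute `pow(2*y, -1, p)` unconditionally raise an error (or worse, return garbage) at y = 0, which is the kind of inconsistency the interoperability criterion is meant to rule out at parameter-selection time.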

  1. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's competitiveness in the global marketplace. Improper selection and evaluation of potential vendors can impair an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The new proposed model provides a suitable tool for assisting decision makers and managers in making the right decisions and selecting the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied using a real-life case study to assess its effectiveness. In addition, what-if analysis will be used for model validation purposes.

  2. Jet pairing algorithm for the 6-jet Higgs channel via energy chi-square criterion

    International Nuclear Information System (INIS)

    Magallanes, J.B.; Arogancia, D.C.; Gooc, H.C.; Vicente, I.C.M.; Bacala, A.M.; Miyamoto, A.; Fujii, K.

    2002-01-01

    Study and discovery of the Higgs bosons at the JLC (Joint Linear Collider) is one of the tasks of the ACFA (Asian Committee for Future Accelerators)-JLC Group. The mode of Higgs production at the JLC is e⁺e⁻ → Z⁰H⁰. In this paper, studies are concentrated on the Higgsstrahlung process and the selection of its signals by finding the right jet-pairing algorithm for the 6-jet final state at 300 GeV, assuming that the Higgs boson mass is 120 GeV and the luminosity is 500 fb⁻¹. The total decay width Γ(H⁰ → all) and the efficiency of the signals at the JLC are studied utilizing the 6-jet channel. Out of the 91,500 Higgsstrahlung events, 4,174 6-jet events are selected. The PYTHIA Monte Carlo generator generates the 6-jet Higgsstrahlung channel according to the Standard Model. The generated events are then simulated by the Quick Simulator using the JLC parameters. After tagging all 6 quarks which correspond to the 6-jet final state of the Higgsstrahlung, the mean energies of the Z, H, and W's are obtained. From this information, the event energy chi-square is defined, and it is found that the correct combinations generally have smaller values. This criterion can be used to find the correct jet pairing and as one of the cuts against background signals later on. Other chi-square definitions are also proposed. (S. Funahashi)

  3. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing unobserved variables which affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.

  4. A critical analysis of the Mises stress criterion used in frequency domain fatigue life prediction

    Directory of Open Access Journals (Sweden)

    Adam Niesłony

    2016-10-01

    Full Text Available Multiaxial fatigue failure criteria are formulated in the time and frequency domains. The number of frequency-domain criteria is rather small, and the most popular one is the equivalent von Mises stress criterion. This criterion was elaborated by Preumont and Piefort on the basis of the well-known von Mises stress concept, first proposed by Huber in 1907 and well accepted by the scientific community and engineers. It is important to know that the criterion was developed to determine the yield stress and material effort under static load. Therefore the direct use of the equivalent von Mises stress criterion for fatigue life prediction can lead to some incorrectness of a theoretical and practical nature. In the present study four aspects are discussed: the influence of the values of the fatigue strengths in tension and torsion, the lack of parallelism of the S-N curves, the abnormal behaviour of the criterion under biaxial tension-compression, and the influence of phase shift between particular stress state components. The information contained in this article will help to prevent improper use of this criterion and contributes to its better understanding.

  5. Information criterion for the categorization quality evaluation

    Directory of Open Access Journals (Sweden)

    Michail V. Svirkin

    2011-05-01

    Full Text Available The paper considers the possibility of using the variation of information function as a quality criterion for categorizing a collection of documents. The performance of the variation of information function is examined as a function of the number of categories and the sample volume of the test document collection.

  6. Bell's theorem based on a generalized EPR criterion of reality

    International Nuclear Information System (INIS)

    Eberhard, P.H.; Rosselet, P.

    1995-01-01

    First, the demonstration of Bell's theorem, i.e., of the nonlocal character of quantum theory, is spelled out using the EPR criterion of reality as premises and a gedanken experiment involving two particles. Then, the EPR criterion is extended to include quantities predicted almost with certainty, and Bell's theorem is demonstrated on these new premises. The same experiment is used but in conditions that become possible in real life, without the requirements of ideal efficiencies and zero background. Very high efficiencies and low background are needed, but these requirements may be met in the future

  7. Wind power forecast using wavelet neural network trained by improved Clonal selection algorithm

    International Nuclear Information System (INIS)

    Chitsaz, Hamed; Amjady, Nima; Zareipour, Hamidreza

    2015-01-01

    Highlights: • Presenting a Morlet wavelet neural network for wind power forecasting. • Proposing improved Clonal selection algorithm for training the model. • Applying Maximum Correntropy Criterion to evaluate the training performance. • Extensive testing of the proposed wind power forecast method on real-world data. - Abstract: With the integration of wind farms into electric power grids, an accurate wind power prediction is becoming increasingly important for the operation of these power plants. In this paper, a new forecasting engine for wind power prediction is proposed. The proposed engine has the structure of Wavelet Neural Network (WNN) with the activation functions of the hidden neurons constructed based on multi-dimensional Morlet wavelets. This forecast engine is trained by a new improved Clonal selection algorithm, which optimizes the free parameters of the WNN for wind power prediction. Furthermore, Maximum Correntropy Criterion (MCC) has been utilized instead of Mean Squared Error as the error measure in training phase of the forecasting model. The proposed wind power forecaster is tested with real-world hourly data of system level wind power generation in Alberta, Canada. In order to demonstrate the efficiency of the proposed method, it is compared with several other wind power forecast techniques. The obtained results confirm the validity of the developed approach
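
The Maximum Correntropy Criterion used in the training phase replaces mean squared error with a kernel-based similarity measure that is robust to outliers; a minimal sketch (the Gaussian-kernel bandwidth sigma and the toy residuals are illustrative assumptions, not the authors' settings):

```python
import math

def mse(errors):
    # Mean squared error: every residual contributes quadratically.
    return sum(e * e for e in errors) / len(errors)

def mcc(errors, sigma=1.0):
    # Maximum Correntropy Criterion: a Gaussian kernel caps the influence
    # of any single residual, down-weighting outliers (to be maximized).
    return sum(math.exp(-e * e / (2 * sigma ** 2)) for e in errors) / len(errors)

clean = [0.1, -0.2, 0.15, 0.05]
noisy = clean + [5.0]  # one gross outlier
print(mse(clean), mse(noisy))  # MSE is dominated by the outlier
print(mcc(clean), mcc(noisy))  # correntropy degrades only mildly
```

This robustness to large residuals is the motivation the abstract gives for preferring MCC over MSE when training on noisy wind power data.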

  8. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
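
The James-Stein result invoked above can be illustrated numerically; a minimal sketch (the true mean vector, unit noise, and the positive-part variant of the estimator are illustrative assumptions):

```python
import random

def james_stein(x, sigma2=1.0):
    # Positive-part James-Stein: shrink the observation vector toward zero;
    # dominates the raw observation (MLE) in risk whenever len(x) >= 3.
    p = len(x)
    norm2 = sum(v * v for v in x)
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / norm2)
    return [factor * v for v in x]

random.seed(0)
theta = [1.0, -0.5, 0.3, 0.8, -1.2]  # true mean vector (invented)
trials = 2000
err_mle = err_js = 0.0
for _ in range(trials):
    x = [t + random.gauss(0, 1) for t in theta]  # one noisy observation
    err_mle += sum((a - t) ** 2 for a, t in zip(x, theta))
    err_js += sum((a - t) ** 2 for a, t in zip(james_stein(x), theta))
print(err_js < err_mle)  # shrinkage lowers total squared error on average
```

The paper's model averaging plays an analogous role: pooling across models shrinks the post-selection estimate rather than committing fully to the single selected model.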

  9. Application of mixed models for the assessment genotype and ...

    African Journals Online (AJOL)

Application of mixed models for the assessment of genotype and environment interactions in cotton (Gossypium hirsutum) cultivars in Mozambique. ... The cultivars ISA 205, STAM 42 and REMU 40 showed superior productivity when they were selected by the Harmonic Mean of Genotypic Values (HMGV) criterion in relation ...

  10. Model selection in Bayesian segmentation of multiple DNA alignments.

    Science.gov (United States)

    Oldmeadow, Christopher; Keith, Jonathan M

    2011-03-01

    The analysis of multiple sequence alignments is allowing researchers to glean valuable insights into evolution, as well as identify genomic regions that may be functional, or discover novel classes of functional elements. Understanding the distribution of conservation levels that constitutes the evolutionary landscape is crucial to distinguishing functional regions from non-functional. Recent evidence suggests that a binary classification of evolutionary rates is inappropriate for this purpose and finds only highly conserved functional elements. Given that the distribution of evolutionary rates is multi-modal, determining the number of modes is of paramount concern. Through simulation, we evaluate the performance of a number of information criterion approaches derived from MCMC simulations in determining the dimension of a model. We utilize a deviance information criterion (DIC) approximation that is more robust than the approximations from other information criteria, and show our information criteria approximations do not produce superfluous modes when estimating conservation distributions under a variety of circumstances. We analyse the distribution of conservation for a multiple alignment comprising four primate species and mouse, and repeat this on two additional multiple alignments of similar species. We find evidence of six distinct classes of evolutionary rates that appear to be robust to the species used. Source code and data are available at http://dl.dropbox.com/u/477240/changept.zip.
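
The deviance information criterion used above is computed from MCMC output as the mean deviance plus an effective-parameter penalty; a minimal sketch for a one-parameter Gaussian model (the toy model and the direct sampling stand-in for MCMC draws are assumptions, not the authors' changept code):

```python
import math, random

def log_lik(data, mu, sigma=1.0):
    # Gaussian log-likelihood with known sigma (illustrative toy model).
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def dic(data, posterior_samples):
    # DIC = mean deviance + p_D, where the effective number of parameters
    # p_D = mean deviance - deviance at the posterior mean.
    deviances = [-2 * log_lik(data, mu) for mu in posterior_samples]
    mean_dev = sum(deviances) / len(deviances)
    mu_bar = sum(posterior_samples) / len(posterior_samples)
    p_d = mean_dev - (-2 * log_lik(data, mu_bar))
    return mean_dev + p_d, p_d

random.seed(1)
data = [random.gauss(2.0, 1.0) for _ in range(50)]
# Stand-in for MCMC draws: under a flat prior the posterior of mu is
# Normal(mean(data), 1/n), so we sample it directly.
m = sum(data) / len(data)
draws = [random.gauss(m, 1 / math.sqrt(len(data))) for _ in range(4000)]
value, p_d = dic(data, draws)
print(round(p_d, 2))  # effective number of parameters, close to 1 here
```

In the paper, candidate dimensions (numbers of mixture classes) are compared by such criteria, with the model minimizing the criterion preferred.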

  11. QV modal distance displacement - a criterion for contingency ranking

    Energy Technology Data Exchange (ETDEWEB)

    Rios, M.A.; Sanchez, J.L.; Zapata, C.J. [Universidad de Los Andes (Colombia). Dept. of Electrical Engineering], Emails: mrios@uniandes.edu.co, josesan@uniandes.edu.co, cjzapata@utp.edu.co

    2009-07-01

    This paper proposes a new methodology using concepts of fast decoupled load flow, modal analysis and ranking of contingencies, where the impact of each contingency is measured hourly taking into account the influence of each contingency over the mathematical model of the system, i.e. the Jacobian Matrix. This method computes the displacement of the reduced Jacobian Matrix eigenvalues used in voltage stability analysis, as a criterion of contingency ranking, considering the fact that the lowest eigenvalue in the normal operation condition is not the same lowest eigenvalue in N-1 contingency condition. It is made using all branches in the system and specific branches according to the IBPF index. The test system used is the IEEE 118 nodes. (author)

  12. Optimal Conformal Polynomial Projections for Croatia According to the Airy/Jordan Criterion

    Directory of Open Access Journals (Sweden)

    Dražen Tutić

    2009-05-01

    Full Text Available The paper describes optimal conformal polynomial projections for Croatia according to the Airy/Jordan criterion. A brief introduction of history and theory of conformal mapping is followed by descriptions of conformal polynomial projections and their current application. The paper considers polynomials of degrees 1 to 10. Since there are conditions in which the 1st degree polynomial becomes the famous Mercator projection, it was not considered specifically for Croatian territory. The area of Croatia was defined as a union of national territory and the continental shelf. Area definition data were taken from the Euro Global Map 1:1 000 000 for Croatia, as well as from two maritime delimitation treaties. Such an irregular area was approximated with a regular grid consisting of 11 934 ellipsoidal trapezoids 2' large. The Airy/Jordan criterion for the optimal projection is defined as minimum of weighted mean of Airy/Jordan measure of distortion in points. The value of the Airy/Jordan criterion is calculated from all 11 934 centres of ellipsoidal trapezoids, while the weights are equal to areas of corresponding ellipsoidal trapezoids. The minimum is obtained by Nelder and Mead’s method, as implemented in the fminsearch function of the MATLAB package. Maps of Croatia representing the distribution of distortions are given for polynomial degrees 2 to 6 and 10. Increasing the polynomial degree results in better projections considering the criterion, and the 6th degree polynomial provides a good ratio of formula complexity and criterion value.
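
The optimization step described above, an area-weighted mean of a pointwise distortion measure minimized over projection parameters, can be sketched with a toy one-parameter example (the distortion model, grid values, and weights are illustrative assumptions, not the paper's polynomial projections):

```python
def airy_criterion(scale_errors, weights):
    # Area-weighted mean of a pointwise squared distortion measure.
    return sum(w * e * e for e, w in zip(scale_errors, weights)) / sum(weights)

def best_scale(scales, weights, candidates):
    # Grid search for the scale constant k minimizing the weighted
    # criterion of residual distortions (k * s - 1) at each grid cell.
    best_k, best_val = None, None
    for k in candidates:
        val = airy_criterion([k * s - 1 for s in scales], weights)
        if best_val is None or val < best_val:
            best_k, best_val = k, val
    return best_k, best_val

# Toy data: local scale factors at five grid cells, cell areas as weights.
scales = [1.02, 1.05, 0.98, 1.10, 1.00]
weights = [2.0, 1.0, 3.0, 0.5, 2.5]
k, crit = best_scale(scales, weights, [x / 1000 for x in range(900, 1101)])
print(k)  # the area-weighted optimum, slightly below 1
```

The paper minimizes over many polynomial coefficients with the Nelder-Mead simplex (MATLAB's fminsearch); the one-dimensional grid search here only illustrates the shape of the weighted criterion.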

  13. Electronics. Criterion-Referenced Test (CRT) Item Bank.

    Science.gov (United States)

    Davis, Diane, Ed.

    This document contains 519 criterion-referenced multiple choice and true or false test items for a course in electronics. The test item bank is designed to work with both the Vocational Instructional Management System (VIMS) and the Vocational Administrative Management System (VAMS) in Missouri. The items are grouped into 15 units covering the…

  14. T-S Fuzzy Model-Based Approximation and Filter Design for Stochastic Time-Delay Systems with Hankel Norm Criterion

    Directory of Open Access Journals (Sweden)

    Yanhui Li

    2014-01-01

Full Text Available This paper investigates the Hankel norm filter design problem for stochastic time-delay systems, which are represented by a Takagi-Sugeno (T-S) fuzzy model. Motivated by the parallel distributed compensation (PDC) technique, a novel filtering error system is established. The objective is to design a suitable filter that guarantees the corresponding filtering error system to be mean-square asymptotically stable and to have a specified Hankel norm performance level γ. Based on the Lyapunov stability theory and the Itô differential rule, the Hankel norm criterion is first established by adopting the integral inequality method, which helps reduce conservativeness. The Hankel norm filtering problem is cast into a convex optimization problem with a convex linearization approach, which expresses all the conditions for the existence of an admissible Hankel norm filter as standard linear matrix inequalities (LMIs). The effectiveness of the proposed method is demonstrated via a numerical example.

  15. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  16. Review and selection of unsaturated flow models

    International Nuclear Information System (INIS)

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-01-01

Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  17. Satisfying the Einstein-Podolsky-Rosen criterion with massive particles

    Science.gov (United States)

    Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.

    2016-03-01

    In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully with light fields. Here, we report on the production of massive particles which meet the EPR criterion for continuous phase/amplitude variables. The created quantum state of ultracold atoms shows an EPR parameter of 0.18(3), which is 2.4 standard deviations below the threshold of 1/4. Our state presents a resource for tests of quantum nonlocality with massive particles and a wide variety of applications in the field of continuous-variable quantum information and metrology.

  18. The Bohm Criterion for Radiofrequency Discharges - a Numerical Verification Based on Poisson Equation

    NARCIS (Netherlands)

Meijer, P.M.; Goedheer, W.J.

    1993-01-01

    Recently it was shown that, by using the analysis of electrostatic waves entering the plasma-sheath edge, the direct-current (dc) Bohm criterion also holds for discharges under radio-frequency (rf) conditions. In this paper, the influence of Bohm's criterion on the sheath characteristics for

  19. An Integrated Pruning Criterion for Ensemble Learning Based on Classification Accuracy and Diversity

    DEFF Research Database (Denmark)

    Fu, Bin; Wang, Zhihai; Pan, Rong

    2013-01-01

    be further considered while designing a pruning criterion is presented, and then an effective definition of diversity is proposed. The experimental results have validated that the given pruning criterion could single out the subset of classifiers that show better performance in the process of hill...

  20. A PWR PCI failure criterion to burnups of 60 GW·d/t using the ENIGMA code

    International Nuclear Information System (INIS)

    Clarke, A.P.; Tempest, P.A.; Shea, J.H.

    2000-01-01

A fuel performance modelling code (ENIGMA) has been used to analyse the empirical PCI failure criterion in terms of a clad failure stress as a function of burnup and fast neutron dose. The Studsvik database has been analysed. Results indicate a rising and then saturating failure stress with burnup and fast neutron dose. Using the PCI failure limits, equivalent to 95/95 confidence limits, an ENIGMA stress-based methodology is used to derive PWR PCI failure limits up to 60 GW·d/t U using a conservative assumption that the failure stress does not increase at high burnup and neutron dose. In addition, experimental ramp data on gadolinia-doped fuel rods do not indicate any increased susceptibility to PCI failure, implying that the UO2 criterion can be used for gadolinia-doped fuel. (author)

  1. A design-based approximation to the Bayes Information Criterion in finite population sampling

    Directory of Open Access Journals (Sweden)

    Enrico Fabrizi

    2014-05-01

Full Text Available In this article, various issues related to the implementation of the usual Bayesian Information Criterion (BIC) are critically examined in the context of modelling a finite population. A suitable design-based approximation to the BIC is proposed in order to avoid the derivation of the exact likelihood of the sample, which is often very complex in finite population sampling. The approximation is justified using a theoretical argument and a Monte Carlo simulation study.
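
The usual BIC that the article approximates takes the familiar form k ln n - 2 ln L; a minimal numeric sketch comparing two nested Gaussian-mean models (the toy data and unit variance are illustrative assumptions, not the article's design-based version):

```python
import math

def bic(log_lik, k, n):
    # Bayesian Information Criterion: k free parameters, n observations.
    return k * math.log(n) - 2 * log_lik

def gauss_loglik(data, mu):
    # Gaussian log-likelihood with unit variance (illustrative).
    return sum(-0.5 * math.log(2 * math.pi) - (x - mu) ** 2 / 2 for x in data)

data = [0.3, -0.1, 0.2, 0.05, -0.25, 0.1, 0.0, -0.05]  # invented sample
n = len(data)
m = sum(data) / n
# Model 0: mean fixed at zero (k = 0); Model 1: mean estimated (k = 1).
bic0 = bic(gauss_loglik(data, 0.0), 0, n)
bic1 = bic(gauss_loglik(data, m), 1, n)
print(bic0 < bic1)  # the log(n) penalty favors the simpler model here
```

The article's contribution is to replace the exact sample log-likelihood in this formula, which is hard to derive under complex survey designs, with a design-based approximation.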

  2. Evidence for the Criterion Validity and Clinical Utility of the Pathological Narcissism Inventory

    Science.gov (United States)

    Thomas, Katherine M.; Wright, Aidan G. C.; Lukowitsky, Mark R.; Donnellan, M. Brent; Hopwood, Christopher J.

    2012-01-01

    In this study, the authors evaluated aspects of criterion validity and clinical utility of the grandiosity and vulnerability components of the Pathological Narcissism Inventory (PNI) using two undergraduate samples (N = 299 and 500). Criterion validity was assessed by evaluating the correlations of narcissistic grandiosity and narcissistic…

  3. Botanical Criterions of Quchan Baharkish pastureland in Khorasan ...

    African Journals Online (AJOL)

    ADOWIE PERE

Botanical Criterions of Quchan Baharkish pastureland in Khorasan Razavi Province, Iran. *1Saeed Jahedi Pour, 2Alireza Koocheki, 3Mehdi Nassiri Mahallati, 4Parviz Rezvani Moghaddam. 1Department of Agroecology and Plant Breeding, Ferdowsi University of Mashhad International Campus, ...

  4. On translational superfluidity and the Landau criterion for Bose gases in the Gross-Pitaevski limit

    International Nuclear Information System (INIS)

    Wreszinski, Walter F

    2008-01-01

    The two-fluid and Landau criteria for superfluidity are compared for trapped Bose gases. While the two-fluid criterion predicts translational superfluidity, it is suggested, on the basis of the homogeneous Gross-Pitaevski limit, that a necessary part of Landau's criterion, adequate for non-translationally invariant systems, does not hold for trapped Bose gases in the GP limit. As a consequence, if the compressibility is detected to be very large (infinite by experimental standards), the two-fluid criterion is seen to be the relevant one in case the system is a translational superfluid, while the Landau criterion is the relevant one if translational superfluidity is absent. (fast track communication)

  5. Application of Dang Van criterion to rolling contact fatigue in wind turbine roller bearings under elastohydrodynamic lubrication conditions

    DEFF Research Database (Denmark)

    Cerullo, Michele

    2014-01-01

    classic Hertzian and elastohydrodynamic lubrication theories have been used to model the pressure distribution acting on the inner raceway and results are compared according to the Dang Van multiaxial fatigue criterion. The contact on the bearing raceway is simulated by substituting the roller...

  6. [Employees in high-reliability organizations: systematic selection of personnel as a final criterion].

    Science.gov (United States)

    Oubaid, V; Anheuser, P

    2014-05-01

    Employees represent an important safety factor in high-reliability organizations. The combination of clear organizational structures, a nonpunitive safety culture, and psychological personnel selection guarantee a high level of safety. The cockpit personnel selection process of a major German airline is presented in order to demonstrate a possible transferability into medicine and urology.

  7. Low Carbon Supplier Selection in the Hotel Industry

    Directory of Open Access Journals (Sweden)

    Chia-Wei Hsu

    2014-05-01

Full Text Available This study presents a model for evaluating the carbon and energy management performance of suppliers by using multiple-criteria decision-making (MCDM). By conducting a literature review and gathering expert opinions, 10 criteria on carbon and energy performance were identified to evaluate low carbon suppliers using the Fuzzy Delphi Method (FDM). Subsequently, the decision-making trial and evaluation laboratory (DEMATEL) method was used to determine the importance of evaluation criteria in selecting suppliers and the causal relationships between them. The DEMATEL-based analytic network process (DANP) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) were adopted to evaluate the weights and performances of suppliers and to obtain a solution under each evaluation criterion. An illustrative example of a hotel company was presented to demonstrate how to select a low carbon supplier according to carbon and energy management. The proposed hybrid model can help firms become effective in facilitating low carbon supply chains in hotels.
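
The VIKOR step above ranks alternatives by closeness to an ideal compromise solution; a minimal sketch with hypothetical supplier scores and weights (the real study derives its weights via DANP, which is not reproduced here):

```python
def vikor(scores, weights, v=0.5):
    # scores[i][j]: performance of alternative i on benefit criterion j.
    f_best = [max(col) for col in zip(*scores)]
    f_worst = [min(col) for col in zip(*scores)]
    S, R = [], []
    for row in scores:
        # Weighted normalized distance from the ideal on each criterion.
        d = [w * (fb - x) / (fb - fw) if fb != fw else 0.0
             for x, w, fb, fw in zip(row, weights, f_best, f_worst)]
        S.append(sum(d))  # group utility
        R.append(max(d))  # worst individual regret
    s_best, s_worst = min(S), max(S)
    r_best, r_worst = min(R), max(R)
    # Q blends group utility and regret; v weights the majority view.
    Q = [v * (s - s_best) / (s_worst - s_best)
         + (1 - v) * (r - r_best) / (r_worst - r_best)
         for s, r in zip(S, R)]
    return Q  # lower Q = closer to the compromise solution

# Three hypothetical suppliers scored on three carbon/energy criteria.
scores = [[7, 8, 6], [9, 6, 7], [5, 9, 8]]
weights = [0.5, 0.3, 0.2]
q = vikor(scores, weights)
print(min(range(3), key=q.__getitem__))  # index of the best-ranked supplier
```

Here all criteria are treated as benefit criteria; cost criteria would swap best and worst values per column.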

  8. Bell's theorem based on a generalized EPR criterion of reality

    International Nuclear Information System (INIS)

    Eberhard, P.H.; Rosselet, P.

    1993-04-01

    First, the demonstration of Bell's theorem, i.e. of the non-local character of quantum theory, is spelled out using the EPR criterion of reality as premises and a gedanken experiment involving two particles. Then, the EPR criterion is extended to include quantities predicted almost with certainty, and Bell's theorem is demonstrated on these new premises. The same experiment is used but in conditions that become possible in real life, without the requirements of ideal efficiencies and zero background. Very high efficiencies and low background are needed, but these requirements may be met in the future. (author) 1 fig., 11 refs

  9. Direct and correlated responses to selection for total weight of lamb ...

    African Journals Online (AJOL)

    The estimated selection responses indicate that direct selection for TWW would be the most suitable selection criterion for improving reproductive performance in flocks with a high reproduction rate where an increase in the number of lambs would be undesirable. (South African Journal of Animal Science, 2001, 31(2): ...

  10. Changing the criterion for memory conformity in free recall and recognition.

    Science.gov (United States)

    Wright, Daniel B; Gabbert, Fiona; Memon, Amina; London, Kamala

    2008-02-01

    People's responses during memory studies are affected by what other people say. This memory conformity effect has been shown in both free recall and recognition. Here we examine whether accurate, inaccurate, and suggested answers are affected similarly when the response criterion is varied. In the first study, participants saw four pictures of detailed scenes and then discussed the content of these scenes with another participant who saw the same scenes, but with a couple of details changed. Participants were either told to recall everything they could and not to worry about making mistakes (lenient), or only to recall items if they were sure that they were accurate (strict). The strict instructions reduced the amount of inaccurate information reported that the other person suggested, but also reduced the number of accurate details recalled. In the second study, participants were shown a large set of faces and then their memory recognition was tested with a confederate on these and fillers. Here also, the criterion manipulation shifted both accurate and inaccurate responses, and those suggested by the confederate. The results are largely consistent with a shift in response criterion affecting accurate, inaccurate, and suggested information. In addition we varied the level of secrecy in the participants' responses. The effects of secrecy were complex and depended on the level of response criterion. Implications for interviewing eyewitnesses and line-ups are discussed.
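
The response-criterion manipulation above can be formalized in signal detection terms, where strict instructions raise the criterion c and so reduce both hits and false alarms; a sketch with invented hit and false-alarm rates (the study itself does not report this SDT computation):

```python
from statistics import NormalDist

def sdt(hit_rate, fa_rate):
    # d': sensitivity; c: response criterion (positive = conservative).
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical lenient vs strict instruction conditions.
d_len, c_len = sdt(0.85, 0.30)
d_str, c_str = sdt(0.70, 0.12)
print(c_str > c_len)  # strict instructions shift the criterion upward
```

A pure criterion shift leaves d' roughly unchanged while moving c, which matches the paper's finding that accurate, inaccurate, and suggested responses all shift together.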

  11. A PRISMA-Driven Systematic Review of Predictive Equations for Assessing Fat and Fat-Free Mass in Healthy Children and Adolescents Using Multicomponent Molecular Models as the Reference Method

    Directory of Open Access Journals (Sweden)

    Analiza M. Silva

    2013-01-01

Full Text Available Simple methods to assess both fat (FM) and fat-free mass (FFM) are required in paediatric populations. Several bioelectrical impedance instruments (BIAs) and anthropometric equations have been developed using different criterion methods (multicomponent models) for assessing FM and FFM. Through childhood, FFM density increases while FFM hydration decreases until reaching adult values. Therefore, multicomponent models should be used as the gold standard method for developing simple techniques because two-compartment models (2C model) rely on the assumed adult values of FFM density and hydration (1.1 g/cm3 and 73.2%, respectively). This study will review BIA and/or anthropometric-based equations for assessing body composition in paediatric populations. We reviewed English language articles from MEDLINE (1985–2012) with the selection of predictive equations developed for assessing FM and FFM using three-compartment (3C) and 4C models as criterion. Search terms included children, adolescent, childhood, adolescence, 4C model, 3C model, multicomponent model, equation, prediction, DXA, BIA, resistance, anthropometry, skinfold, FM, and FFM. A total of 14 studies (33 equations) were selected, with the majority developed using DXA as the criterion method and with a limited number of studies providing cross-validation results. Overall, the selected equations are useful for epidemiological studies, but some concerns still arise on an individual basis.
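
The two-compartment assumptions criticized above (adult FFM density of 1.1 g/cm3) underlie classical body-density conversions such as Siri's equation; a minimal sketch (the example body density is invented):

```python
def siri_percent_fat(body_density):
    # Two-compartment (2C) model: assumes fat density ~0.900 g/cm3 and
    # FFM density 1.100 g/cm3 (adult values; biased in children, whose
    # FFM density is lower, which is why the review prefers 3C/4C models).
    return (4.95 / body_density - 4.50) * 100.0

# Hypothetical body density from densitometry, in g/cm3.
print(round(siri_percent_fat(1.050), 1))  # → 21.4
```

At a body density of exactly 1.100 g/cm3 the formula returns 0% fat, which is precisely the assumed FFM density, illustrating how the 2C result hinges on that constant.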

  12. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from baseline, the average difference from baseline, and level evolution are the considered endpoints. Specific validation criteria based on a ±10% standardized distance in means and variances were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered jointly. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance of ±1% or less. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
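
The validation rule described, a standardized distance in means and variances within a 10% tolerance, can be sketched directly (the cholesterol numbers are invented):

```python
def moments(xs):
    # Sample mean and unbiased sample variance.
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

def valid(real, sim, tol=0.10):
    # Model passes if both the mean and the variance of the simulated
    # data lie within +/- tol (relative) of the real data's moments.
    m_r, v_r = moments(real)
    m_s, v_s = moments(sim)
    return abs(m_s - m_r) / abs(m_r) <= tol and abs(v_s - v_r) / v_r <= tol

real = [210, 225, 198, 240, 215, 230, 205, 220]  # cholesterol, mg/dL (invented)
sim = [x * 1.01 for x in real]                   # a close simulation
print(valid(real, sim), valid(real, [x * 1.2 for x in real]))
```

The tightest-fitting model in the study would minimize this distance, reaching the ±1% level reported above.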

  13. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
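
The parsimony argument can be made concrete with an information criterion such as AIC, whose complexity penalty is what favors the smaller model; a minimal sketch with made-up residual sums of squares (AIC is used here for illustration and is not the column's own analysis):

```python
import math

def aic(rss, n, k):
    # Gaussian AIC up to an additive constant: goodness-of-fit term
    # n*log(rss/n) plus a 2k penalty for model complexity.
    return n * math.log(rss / n) + 2 * k

n = 100
# Hypothetical fits: a rich ARMA model (6 parameters) reduces the residual
# sum of squares only marginally relative to a parsimonious one (2 params).
parsimonious = aic(rss=52.0, n=n, k=2)
rich = aic(rss=50.5, n=n, k=6)
print(parsimonious < rich)  # the small fit gain does not justify 4 extra parameters
```

A pure-AR strategy tends to need many lags to mimic a short mixed ARMA model, which is exactly the non-parsimonious outcome this penalty guards against.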

  14. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.

  15. Analytical criterion for shock ignition of fusion reaction in hot spot

    International Nuclear Information System (INIS)

    Ribeyre, X.; Tikhonchuk, V. T.; Breil, J.; Lafon, M.; Vallet, A.; Bel, E. L.

    2013-01-01

Shock ignition of DT capsules involves two major steps. First, the fuel is assembled by means of a low velocity conventional implosion. At stagnation, the central core has a temperature lower than the one needed for ignition. Then a second, strong spherical converging shock, launched from a high intensity laser spike, arrives at the core. This shock crosses the core, rebounds at the target center and increases the central pressure to the ignition conditions. In this work we consider this latter phase by using the Guderley self-similar solution for converging flows. Our model accounts for the fusion reaction energy deposition and for thermal and radiation losses, thus describing the basic physics of hot spot ignition. The ignition criterion derived from the analytical model is successfully compared with full scale hydrodynamic simulations. (authors)

  16. A hybrid method for information technology selection combining multi-criteria decision making (MCDM) with technology roadmapping

    OpenAIRE

    García Mejía, Jaime Andrés

    2013-01-01

    Abstract: Strategic information technology (IT) management has been recognized as vital for achieving competitive advantage. IT selection, the process of choosing the best technology alternative from a number of available options, is an important part of IT management. The IT selection is a multi-criteria decision making process, where relative importance of each criterion is determined and the degree of satisfaction of every criterion from each alternative is evaluated. Decision makers (DMs)...

  17. Modelling and numerical simulation of liquid-vapor phase transitions

    International Nuclear Information System (INIS)

    Caro, F.

    2004-11-01

This work deals with the modelling and numerical simulation of liquid-vapor phase transition phenomena. The study is divided into two parts: first we investigate phase transition phenomena with a Van Der Waals equation of state (a non-monotonic equation of state), then we adopt an alternative approach with two equations of state. In the first part, we study the classical viscous criteria for selecting weak solutions of the system used when the equation of state is non-monotonic. Those criteria do not select physical solutions, and we therefore focus on a more recent criterion: the visco-capillary criterion. We use this criterion to solve the Riemann problem exactly (which imposes solving a scalar nonlinear algebraic equation). Unfortunately, this step is quite costly in terms of CPU time, which prevents using this method as a basis for building Godunov solvers. That is why we propose an alternative approach with two equations of state. Using the least action principle, we propose a phase-changing two-phase flow model which is based on the second principle of thermodynamics. We then describe two equilibrium submodels issued from the relaxation processes when instantaneous equilibrium is assumed. Despite the weak hyperbolicity of the last submodel, we propose stable numerical schemes based on a two-step strategy involving a convective step followed by a relaxation step. We show the ability of the system to simulate vapor bubble nucleation. (author)

  18. A comparative study on the forming limit diagram prediction between Marciniak-Kuczynski model and modified maximum force criterion by using the evolving non-associated Hill48 plasticity model

    Science.gov (United States)

    Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian

    2018-05-01

    Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In the previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy in terms of flow rule and evolving features on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.

  19. Carbon emissions and an equitable emission reduction criterion

    International Nuclear Information System (INIS)

    Golomb, Dan

    1999-01-01

    In 1995, world-wide carbon emissions reached 5.8 billion metric tonnes per year (GTC/y). The Kyoto protocol calls for a reduction of carbon emissions from the developed (Annex I) countries of 6-8% below 1990 levels on average, and unspecified commitments from the less developed (non-Annex I) countries. It is doubtful that the Kyoto agreement will be ratified by some parliaments, especially the US Congress. Furthermore, it is shown that if the non-Annex I countries do not curtail their carbon emissions drastically, global emissions will soar to huge levels by the middle of the next century. An equitable emission criterion is proposed which may lead to a sustainable rate of growth of carbon emissions and be acceptable to all countries of the world. The criterion links the rate of growth of carbon emissions to the rate of growth of the Gross Domestic Product (GDP). A target criterion of R = 0.15 kgC/$GDP is proposed, which is the current average for western European countries and Japan. This allows for growth of both GDP and carbon emissions. However, to reach the target in a reasonable time, countries for which R ≤ 0.3 would be allowed a carbon emission growth rate of 1%/y, and countries for which R > 0.3 a rate of 0.75%/y. It is shown that by 2050 the world-wide carbon emissions would reach about 10 GTC/y, which is about 3 times less than the Kyoto agreement would allow. (Author)
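
    The compounding implied by these growth ceilings can be checked with a short calculation. A minimal sketch, treating the whole 1995 world total as if it grew at a single ceiling rate (the paper's per-country split is elided here):

```python
def projected_emissions(e0, rate, years):
    """Compound annual growth of carbon emissions (GTC/y)."""
    return e0 * (1 + rate) ** years

# World total of 5.8 GTC/y in 1995, compounded over the 55 years to 2050
# at the criterion's two ceiling rates (illustrative bounds only).
low = projected_emissions(5.8, 0.0075, 55)   # every country at 0.75%/y
high = projected_emissions(5.8, 0.01, 55)    # every country at 1%/y
```

    The upper bound lands at the abstract's figure of about 10 GTC/y by 2050.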

  20. Analysis of Criteria Influencing Contractor Selection Using TOPSIS Method

    Science.gov (United States)

    Alptekin, Orkun; Alptekin, Nesrin

    2017-10-01

    Selection of the most suitable contractor is an important process in public construction projects. This process is a major decision which may influence the progress and success of a construction project. Improper selection of contractors may lead to problems such as poor quality of work and delays in project duration. Especially in construction projects for public buildings, the proper choice of contractor benefits the public institution. Public procurement processes have different characteristics reflecting dissimilarities in the political, social and economic features of each country. In Turkey, Turkish Public Procurement Law PPL 4734 is the main law regulating the procurement of public buildings. According to PPL 4734, public construction administrators must contract with the lowest bidder who meets the minimum requirements of the prequalification criteria. Because of the restrictive provisions of PPL 4734, public administrators cannot always select the most qualified contractor: they have realised that selecting a contractor on the lowest bid alone is inadequate and may lead to the failure of the project in terms of time delays and poor quality standards. In order to evaluate the overall efficiency of a project, it is necessary to identify selection criteria. This study focuses on identifying the importance of criteria other than the lowest bid in the contractor selection process under PPL 4734. A survey was conducted among the staff of the Department of Construction Works of Eskisehir Osmangazi University. According to the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) analysis results, termination of construction work in previous tenders is the most important of the 12 determined criteria; the lowest bid criterion ranks fifth.
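
    TOPSIS orders alternatives by closeness to an ideal solution. A minimal sketch of the standard TOPSIS steps, with hypothetical contractor scores and weights rather than the study's 12-criterion data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix: (m alternatives x n criteria) scores
    weights: criterion weights summing to 1
    benefit: True for benefit criteria, False for cost criteria
    """
    M = np.asarray(matrix, dtype=float)
    # Vector-normalise each criterion column, then apply weights.
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights)
    # Ideal and anti-ideal points depend on criterion direction.
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)  # closeness: higher is better

# Hypothetical contractor scores on three criteria:
# past-termination rate (cost), experience (benefit), bid price (cost).
scores = [[0.2, 8.0, 100.0],
          [0.1, 6.0, 90.0],
          [0.4, 9.0, 80.0]]
closeness = topsis(scores, [0.5, 0.3, 0.2], [False, True, False])
ranking = np.argsort(-closeness)  # best alternative first
```

    The alternative with the highest closeness coefficient is ranked first; cost criteria such as bid price are handled by reversing the direction of the ideal point.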

  1. PTSD's risky behavior criterion: Relation with DSM-5 PTSD symptom clusters and psychopathology.

    Science.gov (United States)

    Contractor, Ateka A; Weiss, Nicole H; Dranger, Paula; Ruggero, Camilo; Armour, Cherie

    2017-06-01

    A new symptom criterion of reckless and self-destructive behaviors (E2) was recently added to posttraumatic stress disorder's (PTSD) diagnostic criteria in DSM-5, which is unsurprising given the well-established relation between PTSD and risky behaviors. Researchers have questioned the significance and incremental validity of this symptom criterion within PTSD's symptomatology. In what is, to our knowledge, the first such comparison, we compare trauma-exposed groups differing in their endorsement of the risky behavior symptom on several psychopathology constructs (PTSD, depression, distress tolerance, rumination, anger). The sample included 123 trauma-exposed participants seeking mental health treatment (M age=35.70; 68.30% female) who completed self-report questionnaires assessing PTSD symptoms, depression, rumination, distress tolerance, and anger. Results of independent-samples t-tests indicated that participants who endorsed the E2 criterion at a clinically significant level reported significantly greater PTSD subscale severity; depression severity; rumination facets of repetitive thoughts, counterfactual thinking, and problem-focused thinking; and anger reactions; and significantly less absorption and regulation (distress tolerance facets) compared to participants who did not. Results indicate the utility of the E2 criterion in identifying trauma-exposed individuals with greater posttraumatic distress, and emphasize the importance of targeting such behaviors in treatment. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  2. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
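
    As a point of contrast with the decision-theoretic approach described above, the classical, data-driven route can be sketched in a few lines. This hypothetical example picks the order of a polynomial model by the Bayesian information criterion (BIC), a standard likelihood-based selector; it is not the report's method, and it ignores model use entirely:

```python
import numpy as np

def fit_poly_bic(x, y, max_order):
    """Pick a polynomial order by BIC, assuming Gaussian residuals."""
    n = len(x)
    results = []
    for k in range(max_order + 1):
        coef = np.polyfit(x, y, k)
        resid = y - np.polyval(coef, x)
        sigma2 = max(np.mean(resid ** 2), 1e-12)
        # Maximised Gaussian log-likelihood, then BIC with k+1
        # polynomial coefficients plus the noise variance.
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        bic = (k + 2) * np.log(n) - 2 * loglik
        results.append((bic, k))
    return min(results)[1]          # order with the smallest BIC

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(0, 0.1, x.size)  # true order 2
best_order = fit_poly_bic(x, y, max_order=6)
```

    Because BIC looks only at fit and complexity, two candidates with similar scores are treated as interchangeable even when using the wrong one would be disastrous for the intended application, which is precisely the gap the decision-theoretic approach fills with a utility function.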

  3. Prioritising transport infrastructure projects: towards a multi-criterion ...

    African Journals Online (AJOL)

    Kirstam

    multi-criterion analysis (MCA), partial equilibrium analysis, project appraisal ... In the case of transport infrastructure projects, though, this is no mean ... vehicle ownership and mileage-based depreciation (an improved road network and/ .... urban, rural or regional development initiatives – they typically include one or more.

  4. Utilization of Durability Criterion to Develop Automotive Components

    DEFF Research Database (Denmark)

    Ricardo, Luiz Carlos Hernandes

    2010-01-01

    Today the automotive companies must reduce the time to development of new products with improvement in performance, durability and low cost reductions where possible. To achieve this goal the carmakers need to improve the design criterion of car systems like body, chassis and suspension component...

  5. On the gap-opening criterion of migrating planets in protoplanetary disks

    OpenAIRE

    Malik, Matej; Meru, Farzana; Mayer, Lucio; Meyer, Michael R.

    2015-01-01

    We perform two-dimensional hydrodynamical simulations to quantitatively explore the torque balance criterion for gap-opening (as formulated by Crida et al.) in a variety of disks when considering a migrating planet. We find that even when the criterion is satisfied, there are instances when planets still do not open gaps. We stress that gap-opening is not only dependent on whether a planet has the ability to open a gap, but whether it can do so quickly enough. This can be expressed as an addi...

  6. A second perspective on the Amann–Schmiedl–Seifert criterion for non-equilibrium in a three-state system

    International Nuclear Information System (INIS)

    Jia, Chen; Chen, Yong

    2015-01-01

    In the work of Amann, Schmiedl and Seifert (2010 J. Chem. Phys. 132 041102), the authors derived a sufficient criterion to identify a non-equilibrium steady state (NESS) in a three-state Markov system based on the coarse-grained information of two-state trajectories. In this paper, we present a mathematical derivation and provide a probabilistic interpretation of the Amann–Schmiedl–Seifert (ASS) criterion. Moreover, the ASS criterion is compared with some other criterions for a NESS. (paper)

  7. On global stability criterion for neural networks with discrete and distributed delays

    International Nuclear Information System (INIS)

    Park, Ju H.

    2006-01-01

    Based on Lyapunov functional stability analysis for differential equations and the linear matrix inequality (LMI) optimization approach, a new delay-dependent criterion for neural networks with discrete and distributed delays is derived to guarantee global asymptotic stability. The criterion is expressed in terms of LMIs, which can be solved easily by various convex optimization algorithms. Some numerical examples are given to show the effectiveness of the proposed method.
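
    The LMIs of a delay-dependent criterion need a semidefinite-programming solver, but the underlying Lyapunov test is easy to illustrate in the delay-free case. A hedged sketch (not the paper's criterion): x' = Ax is globally asymptotically stable exactly when the Lyapunov equation AᵀP + PA = -Q admits a symmetric positive-definite solution P for some Q > 0.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_stable(A, Q=None):
    """Delay-free analogue of the Lyapunov-based stability test."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
    # so passing a = A.T solves A^T P + P A = -Q.
    P = solve_continuous_lyapunov(A.T, -Q)
    eigs = np.linalg.eigvalsh((P + P.T) / 2)   # symmetrise, then check P > 0
    return bool(np.all(eigs > 0))

A_stable = np.array([[-2.0, 1.0], [0.0, -3.0]])
A_unstable = np.array([[0.5, 0.0], [0.0, -1.0]])
```

    The delay-dependent criteria in the abstract replace this single matrix inequality with a system of LMIs whose feasibility is checked numerically.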

  8. A novel delay-dependent criterion for delayed neural networks of neutral type

    International Nuclear Information System (INIS)

    Lee, S.M.; Kwon, O.M.; Park, Ju H.

    2010-01-01

    This Letter considers a robust stability analysis method for delayed neural networks of neutral type. By constructing a new Lyapunov functional, a novel delay-dependent criterion for the stability is derived in terms of LMIs (linear matrix inequalities). A less conservative stability criterion is derived by using nonlinear properties of the activation function of the neural networks. Two numerical examples are illustrated to show the effectiveness of the proposed method.

  9. Spreading Sequence Design for Multiple Cell Synchronous DS-CDMA Systems under Total Weighted Squared Correlation Criterion

    Directory of Open Access Journals (Sweden)

    Cotae Paul

    2004-01-01

    Full Text Available An algorithm for designing spreading sequences for an overloaded multicellular synchronous DS-CDMA system on the uplink is introduced. The criterion used to measure the optimality of the design is the total weighted squared correlation (TWSC), assuming the channel state information is known perfectly at both transmitter and receiver. By using this algorithm it is possible to obtain orthogonal generalized WBE sequence sets for any processing gain. The bandwidth of the initial generalized WBE signals of each cell is preserved in the extended signal space associated with the multicellular system. The mathematical formalism is illustrated by selected numerical examples.
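
    The unweighted version of the design target is easy to state: for K unit-norm sequences of length N, the total squared correlation (TSC) is bounded below by K²/N, and WBE sequence sets achieve the bound with equality. A minimal sketch using a standard DFT-based tight-frame construction (not the paper's multicell algorithm):

```python
import numpy as np

def total_squared_correlation(S):
    """TSC of a signature set S (N x K, one sequence per column):
    the sum of |inner product|^2 over all sequence pairs."""
    G = S.conj().T @ S                 # Gram matrix of cross-correlations
    return float(np.sum(np.abs(G) ** 2))

# Harmonic WBE construction for an overloaded system (K > N users):
# keep N rows of the K-point DFT matrix; the K unit-norm columns
# form a tight frame with S S^H = (K/N) I.
N, K = 4, 6
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(K)) / K)
S = F / np.sqrt(N)

tsc = total_squared_correlation(S)
welch_bound = K ** 2 / N               # minimum TSC for K unit-norm sequences
```

    The TWSC criterion of the paper generalizes this quantity by weighting the pairwise correlations according to the channel state information.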

  10. Criterion-based laparoscopic training reduces total training time

    NARCIS (Netherlands)

    Brinkman, W.M.; Buzink, S.N.; Alevizos, L.; De Hingh, I.H.J.T.; Jakimowicz, J.J.

    2011-01-01

    The benefits of criterion-based laparoscopic training over time-oriented training are unclear. The purpose of this study is to compare these types of training based on training outcome and time efficiency. Methods: During four training sessions within 1 week (one session per day), 34 medical interns

  11. A Generalized Evolution Criterion in Nonequilibrium Convective Systems

    Science.gov (United States)

    Ichiyanagi, Masakazu; Nisizima, Kunisuke

    1989-04-01

    A general evolution criterion, applicable to transport processes such as the conduction of heat and mass diffusion, is obtained as a direct version of the Le Chatelier-Braun principle for stationary states. The present theory is not based on any radical departure from the conventional one. The generalized theory is made determinate by proposing the balance equations for extensive thermodynamic variables which will reflect the character of convective systems under the assumption of local equilibrium. As a consequence of the introduction of source terms in the balance equations, there appear additional terms in the expression of the local entropy production, which are bilinear in terms of the intensive variables and the sources. In the present paper, we show that we can construct a dissipation function for such general cases, in which the premises of the Glansdorff-Prigogine theory are accumulated. The new dissipation function permits us to formulate a generalized evolution criterion for convective systems.

  12. Refocusing criterion via sparsity measurements in digital holography.

    Science.gov (United States)

    Memmolo, Pasquale; Paturzo, Melania; Javidi, Bahram; Netti, Paolo A; Ferraro, Pietro

    2014-08-15

    Several automatic approaches have been proposed in the past to compute the refocus distance in digital holography (DH). However, most of them are based on maximization or minimization of a suitable amplitude-image contrast measure, regarded as a function of the reconstruction distance parameter. Here we show that, by using a sparsity coefficient as the refocusing criterion in the holographic reconstruction, it is possible to recover the focus plane and, at the same time, establish the degree of sparsity of digital holograms, when samples of the Fresnel diffraction propagation integral are used as a sparse signal representation. We employ a sparsity measurement coefficient known as Gini's index, thus showing for the first time, to the best of our knowledge, its application in DH as an effective refocusing criterion. Demonstration is provided for different holographic configurations (i.e., lens and lensless apparatus) and for the preparation of completely different objects (i.e., a thin pure-phase microscopic object such as an in vitro cell, and macroscopic puppets).
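
    The Gini index can be sketched directly from its definition on sorted coefficient magnitudes (Hurley and Rickard's normalization); the sample vectors below are hypothetical, not holographic data:

```python
import numpy as np

def gini_index(c):
    """Gini index of a coefficient vector: 0 for a flat (least sparse)
    vector, approaching 1 for a maximally sparse one."""
    a = np.sort(np.abs(np.ravel(c)))        # ascending magnitudes
    n = a.size
    if a.sum() == 0:
        return 0.0
    k = np.arange(1, n + 1)
    return float(1 - 2 * np.sum(a / a.sum() * (n - k + 0.5) / n))

# A sharply focused reconstruction concentrates energy in few
# coefficients, so its Gini index exceeds that of a defocused one.
focused = np.array([0.0, 0.0, 9.0, 0.0, 1.0])
defocused = np.array([2.0, 2.0, 2.0, 2.0, 2.0])
```

    As a refocusing criterion, one would sweep the reconstruction distance and keep the distance at which the sparsity measure is extremal.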

  13. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.; Genton, Marc G.

    2012-01-01

    for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical

  14. Application of the MIT two-channel model to predict flow recirculation in WARD 61-pin blanket tests

    International Nuclear Information System (INIS)

    Huang, T.T.; Todreas, N.E.

    1983-01-01

    The preliminary application of MIT two-channel model to WARD sodium blanket tests was presented in this report. The criterion was employed to predict the recirculation for selected completed (transient and steady state) and proposed (transient only) tests. The heat loss was correlated from the results of the WARD zero power tests. The calculational results show that the criterion agrees with the WARD tests except for WARD RUN 718 for which the criterion predicts a different result from WARD data under bundle heat loss condition. However, if the test assembly is adiabatic, the calculations predict an operating point which is marginally close to the mixed-to-recirculation transition regime

  16. Development of a stress-induced martensitic transformation criterion for a Cu–Al–Be polycrystalline shape memory alloy undergoing uniaxial tension

    International Nuclear Information System (INIS)

    García-Castillo, F.N.; Cortés-Pérez, J.; Amigó, V.; Sánchez-Arévalo, F.M.; Lara-Rodríguez, G.A.

    2015-01-01

    This study presents a criterion for predicting the martensitic variants (MVs) that appear during the stress-induced martensitic transformation (SIMT) in a polycrystalline sample of Cu–11.5% wt. Al–0.5% wt. Be under simple tension. Our criterion is based on crystallographic parameters, such as the crystal orientation and Schmid factor (SF). The displacement vector fields (DVFs) were obtained in the observation system by a mathematical model and were used to distort the boundary of a set of grains. From the DVF, the strain tensor for each grain was obtained, and the strain ratio (SR) in the observation system was calculated. Electron backscattering diffraction (EBSD) measurements were performed to determine the crystal orientation of the grains. The inverse SF was used to determine the in-plane stress transformation diagrams (STDs) for each studied grain. The combination of a balance criterion (BC) and STD provided a criterion that allowed us to predict the possible order of stress-induced MVs formed as a function of the crystal orientation and thermomechanical parameters of the shape memory alloy (SMA) with higher accuracy than when using the criteria separately. To validate our criteria, we tested other researchers’ published results. Our results were in agreement and were capable of predicting the stress-induced MVs in a polycrystalline SMA

  17. Development of a brittle fracture acceptance criterion for the International Atomic Energy Agency (IAEA)

    International Nuclear Information System (INIS)

    Sorenson, K.B.; Salzbrenner, R.; Nickell, R.E.

    1992-01-01

    An effort has been undertaken to develop a brittle fracture acceptance criterion for structural components of nuclear material transportation casks. The need for such a criterion was twofold. First, new-generation cask designs have proposed the use of ferritic steels and other materials to replace the austenitic stainless steel commonly used for structural components in transport casks. Unlike austenitic stainless steel, which fails in a high-energy-absorbing, ductile tearing mode, it is possible for these candidate materials to fail via brittle fracture when subjected to certain combinations of elevated loading rates and low temperatures. Second, there is no established brittle fracture criterion accepted by the regulatory community that covers a broad range of structural materials. Although the existing IAEA Safety Series No. 37 addressed brittle fracture, its guidance was dated and pertained only to ferritic steels. Consultant's Services Meetings held under the auspices of the IAEA have resulted in a recommended brittle fracture criterion. The brittle fracture criterion is based on linear elastic fracture mechanics and is the result of a consensus of experts from six participating IAEA member countries. The criterion allows three approaches to determine the fracture toughness of the structural material. The three approaches present the opportunity to balance material testing requirements against the conservatism of the material's fracture toughness which must be used to demonstrate resistance to brittle fracture. This work has resulted in a revised Appendix IX to Safety Series No. 37, which will be released as an IAEA Technical Document within the coming year.

  18. Development of a site-specific water quality criterion for hexavalent chromium

    International Nuclear Information System (INIS)

    McIntyre, D.O.; Sticko, J.P.; Reash, R.J.

    1995-01-01

    The effluent of treated fly ash from a coal-fired power plant located on the Ohio River periodically exceeds its NPDES acute permit limit for hexavalent chromium of 15 µg/L. The increased levels of hexavalent chromium in the effluent are a recent occurrence, likely due to changes in the coal blends burned in the generating units. Ohio EPA determined that the use designation of the receiving stream (Limited Resource Water) was being attained, and a one-year biomonitoring program of the effluent detected no acute toxicity to Ceriodaphnia dubia or Daphnia magna. The water-effect ratio (WER) procedure was selected to develop a site-specific criterion maximum concentration for hexavalent chromium for the effluent's receiving stream. WER procedures followed those described in EPA's "Interim Guidance on Determination and Use of Water-Effect Ratios for Metals" (1994). Site water used in the WER determinations was undiluted effluent, since the receiving stream originates at the discharge point of the outfall. 48-hour acute D. magna and 96-hour acute fathead minnow toxicity tests were selected as the primary and secondary tests, respectively, for use in three seasonal WER determinations. The results of the three WER determinations and the status of the regulatory process will be presented.

  19. Mutation-selection models of codon substitution and their use to estimate selective strengths on codon usage

    DEFF Research Database (Denmark)

    Yang, Ziheng; Nielsen, Rasmus

    2008-01-01

    Current models of codon substitution are formulated at the level of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we implement a few population genetics models of codon substitution that explicitly consider mutation bias and natural selection at the DNA level. Selection on codon usage is modeled by introducing codon-fitness parameters, which together with mutation-bias parameters predict optimal codon frequencies... codon usage in mammals. Estimates of selection coefficients nevertheless suggest that selection on codon usage is weak and most mutations are nearly neutral. The sensitivity of the analysis to the assumed mutation model is discussed.

  20. Systems interaction and single failure criterion

    International Nuclear Information System (INIS)

    1981-01-01

    This report documents the results of a six-month study to evaluate the ongoing research programs of the U.S. Nuclear Regulatory Commission (NRC) and U.S. commercial nuclear station owners which address the safety significance of systems interaction and the regulatory adequacy of the single failure criterion. The evaluation of system interactions provided is the initial phase of a more detailed study leading to the development and application of methodology for quantifying the relative safety of operating nuclear plants. (Auth.)

  1. Generalized melting criterion for beam-induced amorphization

    International Nuclear Information System (INIS)

    Lam, N. Q.; Okamoto, Paul R.

    1993-09-01

    Recent studies have shown that the mean-square static atomic displacements provide a generic measure of the enthalpy stored in the lattice in the form of chemical and topological disorder, and that the effect of the displacements on the softening of shear elastic constants is identical to that of heating. This finding lends support to a generalized form of the Lindemann phenomenological melting criterion and leads to a natural interpretation of crystalline-to-amorphous transformations as defect-induced melting of metastable crystals driven beyond a critical state of disorder where the melting temperature falls below the glass-transition temperature. Application of the generalized Lindemann criterion to both the crystalline and amorphous phases indicates that the enthalpies of the two phases become identical when their shear moduli become equal. This thermo-elastic rule provides a basis for predicting the relative susceptibility of compounds to amorphization in terms of their elastic properties as measured by Debye temperatures. The present approach can explain many of the basic findings on beam-induced amorphization of intermetallic compounds as well as amorphous phase formation associated with ion implantation, ion-beam mixing and other solid-state processes

  2. Short-Cut Estimators of Criterion-Referenced Test Consistency.

    Science.gov (United States)

    Brown, James Dean

    1990-01-01

    Presents simplified methods for deriving estimates of the consistency of criterion-referenced, English-as-a-Second-Language tests, including (1) the threshold loss agreement approach using agreement or kappa coefficients, (2) the squared-error loss agreement approach using the phi(lambda) dependability approach, and (3) the domain score…
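
    The threshold loss agreement approach in (1) reduces to classifying each examinee as master or non-master on two administrations and comparing the classifications. A minimal sketch with hypothetical scores, computing the agreement proportion p0 and the kappa coefficient from the marginal master proportions:

```python
def threshold_agreement(scores1, scores2, cutoff):
    """Agreement (p0) and kappa for master/non-master classifications
    from two administrations of a criterion-referenced test."""
    n = len(scores1)
    m1 = [s >= cutoff for s in scores1]
    m2 = [s >= cutoff for s in scores2]
    p0 = sum(a == b for a, b in zip(m1, m2)) / n
    # Chance agreement from the marginal master proportions.
    pm1, pm2 = sum(m1) / n, sum(m2) / n
    pc = pm1 * pm2 + (1 - pm1) * (1 - pm2)
    kappa = (p0 - pc) / (1 - pc) if pc < 1 else 0.0
    return p0, kappa

# Hypothetical percentage scores from two administrations, cutoff 60.
admin1 = [55, 72, 80, 40, 65, 90, 58, 70]
admin2 = [50, 75, 78, 45, 62, 88, 63, 71]
p0, kappa = threshold_agreement(admin1, admin2, 60)
```

    Kappa corrects the raw agreement p0 for the agreement expected by chance, which is why the two coefficients can rank tests differently.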

  3. A new criterion of photostimulated luminescence (PSL) method to detect irradiated traditional Chinese medicinal herbs

    International Nuclear Information System (INIS)

    Zhang, Liwen; Lin, Tong; Jiang, Yingqiao; Bi, Fujun

    2013-01-01

    This work used a new criterion to analyze 162 varieties (222 batches) of traditional Chinese medicinal herbs based on the European Standard EN 13751 (2009. Foodstuffs—Detection of Irradiated Food Using Photostimulated Luminescence. European Committee for Standardization, Brussels, Belgium). The characteristics of the PSL signals are described, and a new criterion is established. Compared to EN 13751, the new criterion evaluates with clearer definitions instead of the ambiguous descriptions in the EN Standard, such as “much greater than” and “within the same order of magnitude”. Moreover, the accuracy of the new criterion is as good as or better than that of the EN Standard in classifying irradiated and non-irradiated traditional Chinese medicinal herbs. It can help to avoid false positive results when a non-irradiated herb gives a screening PSL measurement above 5000 counts/60 s. This new criterion for the photostimulated luminescence method can be applied to identify the irradiation status of traditional Chinese medicinal herbs, even if the herbs were irradiated at a low dose (0.3 kGy) or stored in the dark at room temperature for 24 months after the irradiation treatment. - Highlights: • Clearer evaluation criterion instead of the ambiguous descriptions in EN 13751. • Satisfactory accuracy. • Large sample size provides outstanding representativeness. • Systematic evaluation of the PSL method

  4. Personnel selection using group fuzzy AHP and SAW methods

    Directory of Open Access Journals (Sweden)

    Ali Reza Afshari

    2017-01-01

    Full Text Available Personnel evaluation and selection is a very important activity for enterprises. Different jobs require different abilities, and the criteria that measure those abilities differ accordingly. A suitable and flexible method is therefore needed to evaluate each candidate's performance against each criterion in line with the requirements of the job. The Analytic Hierarchy Process (AHP) is a multi-criteria decision-making method derived from paired comparisons, and Simple Additive Weighting (SAW), based on the weighted average, is the most frequently used multi-attribute decision technique. The combined approach successfully models the ambiguity and imprecision associated with the pairwise comparison process and reduces personal bias. This study analyzes the Analytic Hierarchy Process, based on a fuzzy multiple-criteria decision-making model, in order to make the recruitment process more reasonable and achieve the goal of personnel selection. Finally, an example is implemented to demonstrate the practicability of the proposed method.
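
    The SAW half of the method is just a normalised weighted average. A minimal sketch with hypothetical (crisp, not fuzzy) candidate scores and weights:

```python
import numpy as np

def saw_scores(matrix, weights, benefit):
    """Simple Additive Weighting: normalise each criterion column,
    then take the weighted average per candidate."""
    M = np.asarray(matrix, dtype=float)
    cols = []
    for j, is_benefit in enumerate(benefit):
        col = M[:, j]
        # Benefit criteria are scaled by the column max, cost criteria
        # by min/value, so 1.0 is always the best attainable score.
        cols.append(col / col.max() if is_benefit else col.min() / col)
    R = np.column_stack(cols)
    return R @ np.asarray(weights)

# Hypothetical candidates scored on experience (benefit),
# interview (benefit) and expected salary (cost).
scores = [[5.0, 8.0, 60.0],
          [9.0, 6.0, 75.0],
          [7.0, 7.0, 55.0]]
ranking = saw_scores(scores, [0.4, 0.4, 0.2], [True, True, False])
```

    In the paper's group fuzzy setting, the crisp weights above would instead come from fuzzy AHP pairwise comparisons before the SAW aggregation step.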

  5. A Computational Model of Selection by Consequences

    Science.gov (United States)

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  6. New evaluation methods for conceptual design selection using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai [University of Electronic Science and Technology of China, Chengdu (China); Xue, Lihua [Higher Education Press, Beijing (China)

    2013-03-15

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  7. New evaluation methods for conceptual design selection using computational intelligence techniques

    International Nuclear Information System (INIS)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai; Xue, Lihua

    2013-01-01

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  8. Variation and design criterion of heat load ratio of generator for air cooled lithium bromide–water double effect absorption chiller

    International Nuclear Information System (INIS)

    Li, Zeyu; Liu, Liming; Liu, Jinping

    2016-01-01

    Highlights: • Design criterion of heat load ratio of generator is vital to system performance. • Heat load ratio of generator changes with working condition. • Change of heat load ratio of generator for four systems was obtained and compared. • Design criterion of heat load ratio of generator was presented. - Abstract: The heat load ratio of generator (HLRG) is a special system parameter because it is not fixed at the design value but changes with the working condition. For the air cooled chiller, the deviation from the design working condition occurs easily due to the variation of the surrounding temperature. The system is likely to suffer from crystallization when the working condition is different from the designed one if the HLRG is designed improperly. Consequently, the design criterion of HLRG based on a broad range of working condition is essential and urgent to the development of air cooled lithium bromide–water double effect absorption chiller. This paper mainly deals with the variation of HLRG with the working condition as well as corresponding design criterion. Four types of double effect chillers named series, pre-parallel, rear parallel and reverse parallel flow system were considered. The parametric model was developed by the introduction of a new thermodynamic relationship of generator. The change of HLRG for different types of chillers with the working condition was analyzed and compared. The corresponding design criterion of HLRG was presented. This paper is helpful for further improvement of the performance and reliability of air cooled lithium bromide–water double effect absorption chiller.

  9. Selecting the Number of Principal Components in Functional Data

    KAUST Repository

    Li, Yehua

    2013-12-01

Functional principal component analysis (FPCA) has become the most widely used dimension reduction tool for functional data analysis. We consider functional data measured at random, subject-specific time points, contaminated with measurement error, allowing for both sparse and dense functional data, and propose novel information criteria to select the number of principal components in such data. We propose a Bayesian information criterion based on marginal modeling that can consistently select the number of principal components for both sparse and dense functional data. For dense functional data, we also develop an Akaike information criterion based on the expected Kullback-Leibler information under a Gaussian assumption. In connection with the time series literature, we also consider a class of information criteria proposed for factor analysis of multivariate time series and show that they are still consistent for dense functional data, if a prescribed undersmoothing scheme is undertaken in the FPCA algorithm. We perform intensive simulation studies and show that the proposed information criteria vastly outperform existing methods for this type of data. Surprisingly, our empirical evidence shows that our information criteria proposed for dense functional data also perform well for sparse functional data. An empirical example using colon carcinogenesis data is also provided to illustrate the results. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  10. A new risk-based screening criterion for treatment-demanding retinopathy of prematurity in Denmark

    DEFF Research Database (Denmark)

    Slidsborg, Carina; Forman, Julie Lyng; Rasmussen, Steen Christian

    2011-01-01

The aim of this study was to uncover the most effective and safe criterion to implement for retinopathy of prematurity screening in Denmark.

  11. Energy-efficient induction motors designing with application of a modified criterion of reduced costs

    Directory of Open Access Journals (Sweden)

    V.S. Petrushin

    2014-03-01

    Full Text Available The paper introduces a modified criterion of reduced costs that employs coefficients of operation significance and priority of ohmic loss accounting to allow matching maximum efficiency with minimum reduced costs. Impact of the inflation factor on the criterion of reduced costs is analyzed.

  12. PID controller auto-tuning based on process step response and damping optimum criterion.

    Science.gov (United States)

    Pavković, Danijel; Polak, Siniša; Zorc, Davor

    2014-01-01

    This paper presents a novel method of PID controller tuning suitable for higher-order aperiodic processes and aimed at step response-based auto-tuning applications. The PID controller tuning is based on the identification of so-called n-th order lag (PTn) process model and application of damping optimum criterion, thus facilitating straightforward algebraic rules for the adjustment of both the closed-loop response speed and damping. The PTn model identification is based on the process step response, wherein the PTn model parameters are evaluated in a novel manner from the process step response equivalent dead-time and lag time constant. The effectiveness of the proposed PTn model parameter estimation procedure and the related damping optimum-based PID controller auto-tuning have been verified by means of extensive computer simulations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  13. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  14. Elaboration of generalized criterion for zonality determination of the Chernobyl' NPP working spaces

    International Nuclear Information System (INIS)

    Simakov, A.V.; Bad'in, V.I.; Nosovskij, A.V.

    1992-01-01

Analysis of the features of radiation dose rating, the regularities of their formation, and dosimetry allows a generalized criterion to be suggested for assessing the zonality of compartments and territories, one that combines all factors and their action on operators. This criterion may be used in the design of new facilities, in the development of programs, and in carrying out the work of taking nuclear power plants out of operation. 6 refs.; 1 fig

  15. Model selection and comparison for independent sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this me...
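The idea behind BIC-style order selection can be sketched for sinusoids in noise. The example below assumes the candidate frequencies are known and simply compares least-squares fits of increasing order, which is far simpler than the asymptotic MAP/BIC rule discussed above; it illustrates the criterion, not the paper's method.

```python
import numpy as np

# Sketch of BIC-based selection of the number of sinusoids in noise, with the
# candidate frequencies assumed known (an assumption for illustration only).
rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
freqs = [0.05, 0.11, 0.23, 0.31]                # candidate frequencies (assumed)
y = 2.0 * np.sin(2 * np.pi * 0.05 * t) + 1.0 * np.cos(2 * np.pi * 0.11 * t)
y += 0.5 * rng.standard_normal(n)               # true model: k = 2 sinusoids

def bic(k):
    # Design matrix: intercept plus sine/cosine pairs for the first k frequencies.
    cols = [np.ones(n)]
    for f in freqs[:k]:
        cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

best_k = min(range(len(freqs) + 1), key=bic)
print(best_k)  # expected to recover k = 2
```

The log(n) penalty is what stops the criterion from absorbing noise into extra sinusoids; with an AIC-style constant penalty the selected order tends to be larger.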

  16. Conflicting views on a neutrality criterion for radioactive-waste management

    International Nuclear Information System (INIS)

    Bodde, D.L.; Cochran, T.B.

    1981-01-01

Public debate over the management of radioactive wastes illustrates the moral dilemma of intergenerational justice. Because of low priority, there has been no permanent disposal of high-level radioactive wastes or decontamination and decommissioning of reactors. The problem is now receiving public attention because of the near depletion of temporary storage capacity, the deferral of reprocessing, and concerns for the safe transport and disposal of hazardous materials. Two authors examine the criterion of neutrality in which the risks of radioactive wastes can be balanced by the risks future generations would face without the opportunity for nuclear power. They disagree, however, in whether the model can possibly represent the real world and whether that risk is a significant consideration. 27 references, 1 figure

  17. Synchronization criterion for Lur'e type complex dynamical networks with time-varying delay

    International Nuclear Information System (INIS)

    Ji, D.H.; Park, Ju H.; Yoo, W.J.; Won, S.C.; Lee, S.M.

    2010-01-01

In this Letter, the synchronization problem for a class of complex dynamical networks in which every identical node is a Lur'e system with time-varying delay is considered. A delay-dependent synchronization criterion is derived for the synchronization of complex dynamical networks represented by Lur'e systems with sector-restricted nonlinearities. The derived criterion is a sufficient condition for absolute stability of the error dynamics between each node and the isolated node. Using a convex representation of the nonlinearity for the error dynamics, the stability condition based on the discretized Lyapunov-Krasovskii functional is obtained via LMI formulation. The proposed delay-dependent synchronization criterion is less conservative than the existing ones. The effectiveness of our work is verified through numerical examples.

  18. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure (the selection problem), while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.
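The "tight significance levels" idea above can be illustrated with a toy backward-elimination search. This is emphatically not Autometrics, which searches multiple paths and applies specification tests; the critical value 3.29 (two-sided alpha ≈ 0.001 under a normal approximation) and the simulated data are assumptions for illustration.

```python
import numpy as np

# Toy selection with a tight significance level: backward elimination by
# t-ratio in OLS, dropping the weakest regressor until every retained one
# exceeds the critical value.
rng = np.random.default_rng(1)
n, p = 300, 8
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.standard_normal(n)  # only vars 0, 1 matter

def backward_eliminate(X, y, crit=3.29):
    keep = list(range(X.shape[1]))
    while keep:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        sigma2 = resid @ resid / (len(y) - len(keep))
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xk.T @ Xk)))
        tvals = np.abs(beta / se)
        worst = int(np.argmin(tvals))
        if tvals[worst] >= crit:
            break                      # all retained regressors are significant
        keep.pop(worst)
    return keep

selected = backward_eliminate(X, y)
print(sorted(selected))  # should retain the two relevant regressors, 0 and 1
```

With many candidate variables, a loose level such as 0.05 would retain several irrelevant regressors purely by chance, which is the false-positive excess the article warns about.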

  19. A criterion and mechanism for power ramp defects

    International Nuclear Information System (INIS)

    Garlick, A.; Gravenor, J.G.

    1978-02-01

The problem of power ramp defects in water reactor fuel pins is discussed in relation to results recently obtained from ramp experiments in the Steam Generating Heavy Water Reactor. Cladding cracks in the defected fuel pins were similar, both macro- and microstructurally, to those in unirradiated Zircaloy exposed to iodine stress-corrosion cracking (scc) conditions. Furthermore, when the measured stress levels for scc in short-term tests were taken as a criterion for ramp defects, UK fuel modelling codes were found to give a useful indication of defect probability under reactor service conditions. The likelihood of sticking between fuel and cladding is discussed and evidence presented which suggests that even at power a degree of adhesion may be expected in some fuel pins. The ramp defect mechanism is discussed in terms of fission product scc, initiation being by intergranular penetration and propagation by cleavage when suitably orientated grains are exposed to large dilatational stresses ahead of the main crack. (author)

  20. Jeans' criterion and nonextensive velocity distribution function in kinetic theory

    International Nuclear Information System (INIS)

    Du Jiulin

    2004-01-01

The effect of nonextensivity of self-gravitating systems on the Jeans' criterion for gravitational instability is studied in the framework of Tsallis statistics. The nonextensivity is introduced in the Jeans problem by a generalized q-nonextensive velocity distribution function through the equation of state of an ideal gas in nonextensive kinetic theory. A new Jeans' criterion is deduced with a factor √(2/(5-3q)) that, however, differs from the one in [Astron. Astrophys. 396 (2002) 309], and new results on gravitational instability are analyzed for the nonextensive parameter q. An understanding of the physical meaning of q, and a possible seismic observation to find astronomical evidence for a value of q different from unity, are also discussed
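The quoted correction factor √(2/(5-3q)) can be computed directly. A minimal sketch (the requirement q < 5/3 follows from the factor being real; everything else about the Jeans analysis is left out):

```python
import math

# The abstract's nonextensive correction multiplies the classical Jeans
# criterion by sqrt(2 / (5 - 3q)); at q = 1 (the extensive limit) the factor
# is 1 and the standard Jeans criterion is recovered.

def jeans_factor(q):
    if q >= 5.0 / 3.0:
        raise ValueError("factor undefined for q >= 5/3")
    return math.sqrt(2.0 / (5.0 - 3.0 * q))

print(jeans_factor(1.0))   # 1.0: recovers the classical criterion
print(jeans_factor(0.8))   # < 1: the instability threshold shifts for q < 1
```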

  1. Establishment of an equivalence acceptance criterion for accelerated stability studies.

    Science.gov (United States)

    Burdick, Richard K; Sidor, Leslie

    2013-01-01

    In this article, the use of statistical equivalence testing for providing evidence of process comparability in an accelerated stability study is advocated over the use of a test of differences. The objective of such a study is to demonstrate comparability by showing that the stability profiles under nonrecommended storage conditions of two processes are equivalent. Because it is difficult at accelerated conditions to find a direct link to product specifications, and hence product safety and efficacy, an equivalence acceptance criterion is proposed that is based on the statistical concept of effect size. As with all statistical tests of equivalence, it is important to collect input from appropriate subject-matter experts when defining the acceptance criterion.
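A minimal sketch of the equivalence-testing approach advocated above, using two one-sided tests (TOST) with the acceptance margin expressed as a multiple of variability, i.e. an effect-size style criterion. The margin multiplier, the example data, and the normal approximation are all assumptions for illustration; the article does not prescribe these formulas.

```python
from statistics import NormalDist, mean, stdev

# TOST sketch: two processes are declared equivalent if the difference in
# means is shown to lie inside (-margin, +margin), with margin set as an
# effect-size multiple of the pooled standard deviation. Uses a normal
# approximation; a real study would use t quantiles and the study design.

def tost_equivalent(x, y, margin_effect=1.0, alpha=0.05):
    nx, ny = len(x), len(y)
    diff = mean(x) - mean(y)
    sp = ((stdev(x) ** 2 * (nx - 1) + stdev(y) ** 2 * (ny - 1)) / (nx + ny - 2)) ** 0.5
    margin = margin_effect * sp               # equivalence margin as effect size
    se = sp * (1 / nx + 1 / ny) ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha)
    # Reject both "diff <= -margin" and "diff >= +margin".
    return (diff + margin) / se > z_crit and (diff - margin) / se < -z_crit

a = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.9]
b = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0]
print(tost_equivalent(a, b))  # True: difference is inside the margin
```

Note the asymmetry with a difference test: failing to detect a difference is not evidence of equivalence, which is precisely why the TOST framing is preferred here.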

  2. Does the committee peer review select the best applicants for funding? An investigation of the selection process for two European molecular biology organization programmes.

    Directory of Open Access Journals (Sweden)

    Lutz Bornmann

Full Text Available Does peer review fulfill its declared objective of identifying the best science and the best scientists? In order to answer this question we analyzed the Long-Term Fellowship and the Young Investigator programmes of the European Molecular Biology Organization. Both programmes aim to identify and support the best post doctoral fellows and young group leaders in the life sciences. We checked the association between the selection decisions and the scientific performance of the applicants. Our study involved publication and citation data for 668 applicants to the Long-Term Fellowship programme from the year 1998 (130 approved, 538 rejected) and 297 applicants to the Young Investigator programme (39 approved and 258 rejected) from the years 2001 and 2002. If quantity and impact of research publications are used as a criterion for scientific achievement, the results of (zero-truncated) negative binomial models show that the peer review process indeed selects scientists who perform on a higher level than the rejected ones subsequent to application. We determined the extent of errors due to over-estimation (type I errors) and under-estimation (type II errors) of future scientific performance. Our statistical analyses point out that between 26% and 48% of the decisions made to award or reject an application show one of the two error types. Even though for a part of the applicants, the selection committee did not correctly estimate the applicant's future performance, the results show a statistically significant association between selection decisions and the applicants' scientific achievements, if quantity and impact of research publications are used as a criterion for scientific achievement.

  3. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  4. Multicanonical simulation of the Domb-Joyce model and the Gō model: new enumeration methods for self-avoiding walks

    International Nuclear Information System (INIS)

    Shirai, Nobu C; Kikuchi, Macoto

    2013-01-01

    We develop statistical enumeration methods for self-avoiding walks using a powerful sampling technique called the multicanonical Monte Carlo method. Using these methods, we estimate the numbers of the two dimensional N-step self-avoiding walks up to N = 256 with statistical errors. The developed methods are based on statistical mechanical models of paths which include self-avoiding walks. The criterion for selecting a suitable model for enumerating self-avoiding walks is whether or not the configuration space of the model includes a set for which the number of the elements can be exactly counted. We call this set a scale fixing set. We selected the following two models which satisfy the criterion: the Gō model for lattice proteins and the Domb-Joyce model for generalized random walks. There is a contrast between these two models in the structures of the configuration space. The configuration space of the Gō model is defined as the universal set of self-avoiding walks, and the set of the ground state conformation provides a scale fixing set. On the other hand, the configuration space of the Domb-Joyce model is defined as the universal set of random walks which can be used as a scale fixing set, and the set of the ground state conformation is the same as the universal set of self-avoiding walks. From the perspective of enumeration performance, we conclude that the Domb-Joyce model is the better of the two. The reason for the performance difference is partly explained by the existence of the first-order phase transition of the Gō model
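For small N, the walk counts that the multicanonical methods above estimate statistically can be checked by exact brute-force enumeration. A minimal depth-first-search sketch (feasible only for small N; it is not the paper's method, which reaches N = 256):

```python
# Exact enumeration of 2D square-lattice self-avoiding walks by depth-first
# search: extend the walk one step at a time, backtracking whenever a step
# would revisit an occupied site.

def count_saw(n):
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total

    return extend((0, 0), {(0, 0)}, n)

print([count_saw(n) for n in range(1, 6)])  # [4, 12, 36, 100, 284]
```

The number of walks grows roughly like μ^N (μ ≈ 2.638 on the square lattice), which is why exact enumeration stalls quickly and statistical estimators become necessary.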

  5. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets between the two approaches, although model performance was high for both (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  6. Exclusion as a Criterion for Selecting Socially Vulnerable Population Groups

    Directory of Open Access Journals (Sweden)

    Aleksandra Anatol’evna Shabunova

    2016-05-01

Full Text Available The article considers theoretical aspects of a scientific research “The Mechanisms for Overcoming Mental Barriers of Inclusion of Socially Vulnerable Categories of the Population for the Purpose of Intensifying Modernization in the Regional Community” (RSF grant No. 16-18-00078). The authors analyze the essence of the category of “socially vulnerable groups” from the legal, economic and sociological perspectives. The paper shows that the economic approach that uses the criterion “the level of income and accumulated assets” when defining vulnerable population groups prevails in public administration practice. The legal field of the category based on the economic approach is defined by the concept of “the poor and socially unprotected categories of citizens”. With the help of the analysis of theoretical and methodological aspects of this issue, the authors show that these criteria are a necessary but not sufficient condition for classifying the population as being socially vulnerable. Foreign literature associates the phenomenon of vulnerability with the concept of risks, with the possibility of households responding to them and with the likelihood of losing the well-being (poverty theory; research areas related to the means of subsistence, etc.). The asset-based approaches relate vulnerability to the poverty that arises due to lack of access to tangible and intangible assets. Sociological theories presented by the concept of social exclusion pay much attention to the breakdown of social ties as a source of vulnerability. The essence of social exclusion consists in the inability of people to participate in important aspects of social life (in politics, labor markets, education and healthcare, cultural life, etc.) though they have all the rights to do so. The difference between the concepts of exclusion and poverty is manifested in the displacement of emphasis from income inequality to limited access to rights. Social exclusion is

  7. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  8. A simple criterion to predict the glass forming ability of metallic alloys

    International Nuclear Information System (INIS)

    Falcao de Oliveira, Marcelo

    2012-01-01

A new and simple criterion with which to quantitatively predict the glass forming ability (GFA) of metallic alloys is proposed. It was found that the critical cooling rate for glass formation (R_C) correlates well with a proper combination of two factors, the minimum topological instability (λ_min) and the Δh parameter, which depends on the average work function difference (Δφ) and the average electron density difference (Δn_ws^(1/3)) among the constituent elements of the alloy. A correlation coefficient (R²) of 0.76 was found between R_C and the new criterion for 68 alloys in 30 different metallic systems. The new criterion and Uhlmann's approach were used to estimate the critical amorphous thickness (Z_C) of alloys in the Cu-Zr system. The new criterion underestimated R_C in the Cu-Zr system, producing predicted Z_C values larger than those observed experimentally. However, when considering a scale factor, a remarkable similarity was observed between the predicted and the experimental behavior of the GFA in the binary Cu-Zr. When using the same scale factor and performing the calculation for the ternary Zr-Cu-Al, good agreement was found between the predicted and the actual best GFA region, as well as between the expected and the observed critical amorphous thickness.

  9. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method
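A minimal sketch of genetic-search feature selection of the kind described above. Individuals are bit masks over candidate features; the fitness function here is a toy stand-in for the cross-validated accuracy of an affective model, and the "informative" feature set is invented for illustration.

```python
import random

# Genetic search over feature subsets: elitist selection, one-point crossover,
# bit-flip mutation. The toy fitness rewards masks that include the truly
# informative features and mildly penalizes extra ones, standing in for the
# paper's model-accuracy fitness.
random.seed(42)
N_FEATURES = 10
INFORMATIVE = {1, 4, 7}   # hypothetical "useful" features for the toy fitness

def fitness(mask):
    chosen = {i for i, b in enumerate(mask) if b}
    return len(chosen & INFORMATIVE) - 0.1 * len(chosen - INFORMATIVE)

def evolve(pop_size=30, generations=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print({i for i, b in enumerate(best) if b})  # typically converges on {1, 4, 7}
```

Unlike sequential forward selection, the population can escape locally optimal subsets, which is the global-search property the abstract emphasizes.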

  10. Specific strain work as a failure criterion in plane stress state

    International Nuclear Information System (INIS)

    Zuchowski, R.; Zietkowski, L.

    1985-01-01

    An experimental verification of failure criterion based on specific strain work was performed. Thin-walled cylindrical specimens were examined by loading with constant force and constant torque moment, assuming different values for particular tests, at the same time keeping stress intensity constant, and by subjecting to thermal cycling. It was found that the critical value of failure did not depend on axial-to-shearing stresses ratio, i.e., on the type of state of stress. Thereby, the validity of the analysed failure criterion in plane stress was confirmed. Besides, a simple description of damage development in plane stress was suggested. (orig./RF)

  11. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    Science.gov (United States)

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed; (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defendable; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest to use either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
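Class (ii) above can be illustrated with greedy max-min selection, one common space-filling heuristic: repeatedly add the candidate whose minimum distance to the already-selected compounds is largest. The 2-D points below stand in for chemical descriptor vectors and are not from the article.

```python
import math
import random

# Greedy max-min space-filling subset selection over a candidate pool.
random.seed(0)
pool = [(random.random(), random.random()) for _ in range(200)]  # mock descriptors

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def maxmin_subset(points, k):
    chosen = [points[0]]                      # arbitrary seed point
    while len(chosen) < k:
        # Pick the candidate farthest from its nearest already-chosen point.
        nxt = max(points, key=lambda p: min(dist(p, c) for c in chosen))
        chosen.append(nxt)
    return chosen

subset = maxmin_subset(pool, 10)
spread = min(dist(a, b) for i, a in enumerate(subset) for b in subset[i + 1:])
rand10 = pool[:10]
rand_spread = min(dist(a, b) for i, a in enumerate(rand10) for b in rand10[i + 1:])
print(spread > rand_spread)  # the design spreads points out more than a random pick
```

In the article's framework, such designs emerge as a limiting case of optimizing prediction-error criteria for very rough response surfaces, which is what gives the heuristic a theoretical footing.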

  12. Satisfying the Einstein-Podolsky-Rosen criterion with massive particles

    DEFF Research Database (Denmark)

    Peise, Jan; Kruse, I.; Lange, K.

    2016-01-01

    In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully...

  13. Modeling HIV-1 drug resistance as episodic directional selection.

    Science.gov (United States)

    Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L; Scheffler, Konrad

    2012-01-01

    The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  14. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  15. A comparison of the criterion validity of popular measures of narcissism and narcissistic personality disorder via the use of expert ratings.

    Science.gov (United States)

    Miller, Joshua D; McCain, Jessica; Lynam, Donald R; Few, Lauren R; Gentile, Brittany; MacKillop, James; Campbell, W Keith

    2014-09-01

    The growing interest in the study of narcissism has resulted in the development of a number of assessment instruments that manifest only modest to moderate convergence. The present studies adjudicate among these measures with regard to criterion validity. In the 1st study, we compared multiple narcissism measures to expert consensus ratings of the personality traits associated with narcissistic personality disorder (NPD; Study 1; N = 98 community participants receiving psychological/psychiatric treatment) according to the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000) using 5-factor model traits as well as the traits associated with the pathological trait model according to the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; American Psychiatric Association, 2013). In Study 2 (N = 274 undergraduates), we tested the criterion validity of an even larger set of narcissism instruments by examining their relations with measures of general and pathological personality, as well as psychopathology, and compared the resultant correlations to the correlations expected by experts for measures of grandiose and vulnerable narcissism. Across studies, the grandiose dimensions from the Five-Factor Narcissism Inventory (FFNI; Glover, Miller, Lynam, Crego, & Widiger, 2012) and the Narcissistic Personality Inventory (Raskin & Terry, 1988) provided the strongest match to expert ratings of DSM-IV-TR NPD and grandiose narcissism, whereas the vulnerable dimensions of the FFNI and the Pathological Narcissism Inventory (Pincus et al., 2009), as well as the Hypersensitive Narcissism Scale (Hendin & Cheek, 1997), provided the best match to expert ratings of vulnerable narcissism. These results should help guide researchers toward the selection of narcissism instruments that are most well suited to capturing different aspects of narcissism. 
PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. VENDOR SELECTION AND DETERMINING PROCUREMENT QUOTAS IN CONDITIONS WHEN DISCOUNTS ARE OFFERED ON THE TOTAL VALUE OF THE CONTRACTED PROCUREMENT OF MANY DIFFERENT PRODUCTS

    Directory of Open Access Journals (Sweden)

    Zoran Babic

    2012-04-01

    Full Text Available Vendor selection is a very significant business problem for ensuring competitiveness on the market, which is why companies pay great attention to it. A number of quantitative methods can be applied to solve vendor selection problems. Depending on the goals of the company, vendor selection can be a mono-criterion or multi-criterion programming problem. This paper deals with the problem of vendor selection and determining procurement quotas from selected vendors under conditions where vendors offer discounts on the total order value within a specified period in which the buyer purchases many products from the vendors. The total value of procurement costs in a given period is taken as the optimization criterion. In this paper a specific flour purchase problem is solved for a company that manufactures bakery products.

  17. VENDOR SELECTION AND DETERMINING PROCUREMENT QUOTAS IN CONDITIONS WHEN DISCOUNTS ARE OFFERED ON THE TOTAL VALUE OF THE CONTRACTED PROCUREMENT OF MANY DIFFERENT PRODUCTS

    Directory of Open Access Journals (Sweden)

    Zoran Babić

    2012-04-01

    Full Text Available Vendor selection is a very significant business problem for ensuring competitiveness on the market, which is why companies pay great attention to it. A number of quantitative methods can be applied to solve vendor selection problems. Depending on the goals of the company, vendor selection can be a mono-criterion or multi-criterion programming problem. This paper deals with the problem of vendor selection and determining procurement quotas from selected vendors under conditions where vendors offer discounts on the total order value within a specified period in which the buyer purchases many products from the vendors. The total value of procurement costs in a given period is taken as the optimization criterion. In this paper a specific flour purchase problem is solved for a company that manufactures bakery products.

  18. Oil prices. Brownian motion or mean reversion? A study using a one year ahead density forecast criterion

    International Nuclear Information System (INIS)

    Meade, Nigel

    2010-01-01

    For oil related investment appraisal, an accurate description of the evolving uncertainty in the oil price is essential. For example, when using real option theory to value an investment, a density function for the future price of oil is central to the option valuation. The literature on oil pricing offers two views. The arbitrage pricing theory literature for oil suggests geometric Brownian motion and mean reversion models. Empirically driven literature suggests ARMA-GARCH models. In addition to reflecting the volatility of the market, the density function of future prices should also incorporate the uncertainty due to price jumps, a common occurrence in the oil market. In this study, the accuracy of density forecasts for up to a year ahead is the major criterion for a comparison of a range of models of oil price behaviour, both those proposed in the literature and following from data analysis. The Kullback-Leibler information criterion is used to measure the accuracy of density forecasts. Using two crude oil price series, Brent and West Texas Intermediate (WTI) representing the US market, we demonstrate that accurate density forecasts are achievable for up to nearly two years ahead using a mixture-of-two-Gaussians innovation process with GARCH and no mean reversion. (author)
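    The study above scores models by how close their forecast densities are to what is realized, using the Kullback-Leibler information criterion. As a minimal illustration of that scoring idea (not the paper's oil-price models), the snippet below discretizes candidate forecast densities on a grid and compares their KL divergences from a reference density; the Gaussian shapes and the grid are assumptions made for the sketch.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def discretize(mu, sigma, grid):
    # Renormalize the density on the grid so the discrete KL is well defined.
    raw = [gaussian_pdf(x, mu, sigma) for x in grid]
    z = sum(raw)
    return [r / z for r in raw]

def kl_divergence(p, q):
    # Discrete KL(p || q); assumes q is strictly positive on the grid.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

grid = [i / 10 for i in range(-80, 81)]
truth = discretize(0.0, 1.0, grid)        # stand-in for the realized density
forecast_a = discretize(0.1, 1.0, grid)   # nearly right location and spread
forecast_b = discretize(1.5, 2.0, grid)   # mislocated and too diffuse
# The better density forecast has the smaller KL divergence from the reference.
assert kl_divergence(truth, forecast_a) < kl_divergence(truth, forecast_b)
```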

  19. Oil prices. Brownian motion or mean reversion? A study using a one year ahead density forecast criterion

    Energy Technology Data Exchange (ETDEWEB)

    Meade, Nigel [Imperial College, Business School London (United Kingdom)

    2010-11-15

    For oil related investment appraisal, an accurate description of the evolving uncertainty in the oil price is essential. For example, when using real option theory to value an investment, a density function for the future price of oil is central to the option valuation. The literature on oil pricing offers two views. The arbitrage pricing theory literature for oil suggests geometric Brownian motion and mean reversion models. Empirically driven literature suggests ARMA-GARCH models. In addition to reflecting the volatility of the market, the density function of future prices should also incorporate the uncertainty due to price jumps, a common occurrence in the oil market. In this study, the accuracy of density forecasts for up to a year ahead is the major criterion for a comparison of a range of models of oil price behaviour, both those proposed in the literature and following from data analysis. The Kullbach Leibler information criterion is used to measure the accuracy of density forecasts. Using two crude oil price series, Brent and West Texas Intermediate (WTI) representing the US market, we demonstrate that accurate density forecasts are achievable for up to nearly two years ahead using a mixture of two Gaussians innovation processes with GARCH and no mean reversion. (author)

  20. Job shop scheduling problem with late work criterion

    Science.gov (United States)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Scheduling is considered a key task in many industries, such as project-based scheduling, crew scheduling, flight scheduling, machine scheduling, etc. In the machine scheduling area, job shop scheduling problems are considered important and highly complex, and they are characterized as NP-hard. The job shop scheduling problems with the late work criterion and non-preemptive jobs are addressed in this paper. The late work criterion is a fairly new objective function. It is a qualitative measure concerned with the late parts of jobs, unlike classical objective functions, which are quantitative measures. In this work, simulated annealing is presented to solve the scheduling problem. In addition, an operation-based representation is used to encode the solution, and a neighbourhood search structure is employed to search for new solutions. The case studies are Lawrence instances taken from the Operations Research Library. The computational results of this probabilistic meta-heuristic algorithm were compared with those of a conventional genetic algorithm, and a conclusion was drawn based on the algorithm and the problem.
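    The late work of a job is the portion of its processing performed after the job's due date, capped at the job's full processing time. A minimal single-machine sketch of evaluating this criterion for a given sequence (the full job shop version additionally tracks machine routings, and the job data here are invented):

```python
def total_late_work(sequence, processing, due):
    # Late work of a job: the part of its processing performed after its
    # due date, capped at the full processing time:
    #   Y_j = min(p_j, max(0, C_j - d_j))
    t, late = 0, 0
    for j in sequence:
        t += processing[j]                              # completion time C_j
        late += min(processing[j], max(0, t - due[j]))  # late work Y_j
    return late

processing = {"J1": 3, "J2": 2, "J3": 4}
due = {"J1": 3, "J2": 7, "J3": 5}
print(total_late_work(["J1", "J2", "J3"], processing, due))  # -> 4
```

A metaheuristic such as the paper's simulated annealing would use this evaluation inside its neighbourhood search, comparing the total late work of candidate sequences.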

  1. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    Science.gov (United States)

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
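    The paper adjusts the Bayesian information criterion (BIC) to its D-vine copula setting; that adjustment is not reproduced here. As a generic sketch of BIC-based model selection, the snippet below compares two Gaussian models of a toy sample, penalizing the maximized log-likelihood by k·ln(n); the data are invented.

```python
import math

def gaussian_loglik(data, mu, sigma):
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

def bic(loglik, k, n):
    # BIC = k * ln(n) - 2 * ln(L-hat); smaller is better.
    return k * math.log(n) - 2 * loglik

data = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3]
n = len(data)

# Model A: mean fixed at 0, only sigma estimated (1 free parameter).
sigma_a = math.sqrt(sum(x ** 2 for x in data) / n)
bic_a = bic(gaussian_loglik(data, 0.0, sigma_a), k=1, n=n)

# Model B: mean and sigma both estimated (2 free parameters).
mu_b = sum(data) / n
sigma_b = math.sqrt(sum((x - mu_b) ** 2 for x in data) / n)
bic_b = bic(gaussian_loglik(data, mu_b, sigma_b), k=2, n=n)

assert bic_b < bic_a  # the extra mean parameter earns its penalty here
```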

  2. Logical and Decisive Combining Criterion for Binary Group Decision Making

    Directory of Open Access Journals (Sweden)

    Ivan Vrana

    2010-04-01

    Full Text Available A new combining criterion, the Multiplicative Proportional Deviative Influence (MPDI), is presented for combining or aggregating multi-expert numerical judgments in Yes-or-No type ill-structured group decision making situations. This newly proposed criterion performs well in comparison with the widely used aggregation means, the Arithmetic Mean (AM) and the Geometric Mean (GM), especially in better reflecting the degree of agreement between criteria levels or numerical experts' judgments. The MPDI can be considered another class of combining criterion that accounts for the degree of agreement among multiple numerical judgments. The MPDI is applicable in integrating several collaborative or synergistic decision making systems by combining final numerical decision outputs. A discussion and generalization of the proposed MPDI is provided with a numerical example.
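    The abstract does not give the MPDI formula, so it is not reproduced here. What can be sketched is the baseline behaviour MPDI is compared against: the arithmetic mean (AM) is blind to disagreement between experts, while the geometric mean (GM) is pulled down by a dissenting judgment. The judgment values below are invented.

```python
import math

def arithmetic_mean(judgments):
    return sum(judgments) / len(judgments)

def geometric_mean(judgments):
    # Assumes positive judgment scores, e.g. confidences in (0, 1].
    return math.prod(judgments) ** (1 / len(judgments))

agreeing = [0.8, 0.8, 0.8]
disagreeing = [0.99, 0.99, 0.42]  # same arithmetic mean, one dissenter

# AM cannot tell the two panels apart; GM penalizes the disagreement.
assert abs(arithmetic_mean(agreeing) - arithmetic_mean(disagreeing)) < 1e-9
assert geometric_mean(disagreeing) < geometric_mean(agreeing)
```

An agreement-sensitive criterion like the MPDI would, by the abstract's description, separate such panels even more explicitly.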

  3. A Dynamic Model for Limb Selection

    NARCIS (Netherlands)

    Cox, R.F.A; Smitsman, A.W.

    2008-01-01

    Two experiments and a model on limb selection are reported. In Experiment 1 left-handed and right-handed participants (N = 36) repeatedly used one hand for grasping a small cube. After a clear switch in the cube’s location, perseverative limb selection was revealed in both handedness groups. In

  4. Stability Assessment as a Criterion of Stabilization of the Movement Trajectory of Mobile Crane Working Elements

    Science.gov (United States)

    Kacalak, W.; Budniak, Z.; Majewski, M.

    2018-02-01

    The article presents a stability assessment method of the mobile crane handling system based on the safety indicator values that were accepted as the trajectory optimization criterion. With the use of the mathematical model built and the model built in the integrated CAD/CAE environment, analyses were conducted of the displacements of the mass centre of the crane system, reactions of the outrigger system, stabilizing and overturning torques that act on the crane as well as the safety indicator values for the given movement trajectories of the crane working elements.
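    A common way to express a stability safety indicator of the kind mentioned above is the ratio of the stabilizing torque to the overturning torque, kept above a design margin. This is a generic static-stability sketch, not the article's CAD/CAE model; the margin and the torque values are invented.

```python
def stability_indicator(stabilizing_torque, overturning_torque):
    # Ratio of stabilizing to overturning torque about the tipping edge;
    # values above 1 mean the crane does not tip, and designs keep a margin.
    return stabilizing_torque / overturning_torque

def is_safe(m_stab, m_over, margin=1.4):
    # The margin value is illustrative, not taken from the article.
    return stability_indicator(m_stab, m_over) >= margin

print(is_safe(420.0, 250.0))  # 1.68 >= 1.4 -> True
print(is_safe(300.0, 250.0))  # 1.20 <  1.4 -> False
```

A trajectory optimizer of the kind the article describes would evaluate such an indicator along each candidate trajectory and reject those that dip below the margin.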

  5. Mass spectrometric confirmation criterion for product-ion spectra generated in flow-injection analysis. Environmental application

    NARCIS (Netherlands)

    Geerdink, R.B.; Niessen, W.M.A.; Brinkman, U.A.T.

    2001-01-01

    The suitability of a confirmation criterion recently recommended in the Netherlands for gas chromatography with mass spectrometric detection (GC-MS), was evaluated for flow-injection analysis (FIA) with atmospheric pressure chemical ionisation MS-MS detection. The main feature of the criterion is

  6. Entanglement in SU(2)-invariant quantum systems: The positive partial transpose criterion and others

    International Nuclear Information System (INIS)

    Schliemann, John

    2005-01-01

    We study entanglement in mixed bipartite quantum states which are invariant under simultaneous SU(2) transformations in both subsystems. Previous results on the behavior of such states under partial transposition are substantially extended. The spectrum of the partial transpose of a given SU(2)-invariant density matrix ρ is entirely determined by the diagonal elements of ρ in a basis of tensor-product states of both spins with respect to a common quantization axis. We construct a set of operators which act as entanglement witnesses on SU(2)-invariant states. A sufficient criterion for ρ having a negative partial transpose is derived in terms of a simple spin correlator. The same condition is a necessary criterion for the partial transpose to have the maximum number of negative eigenvalues. Moreover, we derive a series of sum rules which uniquely determine the eigenvalues of the partial transpose in terms of a system of linear equations. Finally we compare our findings with other entanglement criteria including the reduction criterion, the majorization criterion, and the recently proposed local uncertainty relations
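    A compact illustration of the positive-partial-transpose (PPT) criterion discussed above, using the standard two-qubit Werner state rather than the paper's SU(2)-invariant construction: the partial transpose of rho = p|psi-><psi-| + (1 - p) I/4 has a closed-form spectrum, and a negative eigenvalue certifies entanglement (which occurs for p > 1/3).

```python
def werner_pt_eigenvalues(p):
    # Partial transpose of the two-qubit Werner state
    #   rho = p |psi-><psi-| + (1 - p) I/4
    # has eigenvalues (1 + p)/4 (three-fold) and (1 - 3p)/4.
    return [(1 + p) / 4] * 3 + [(1 - 3 * p) / 4]

def has_negative_partial_transpose(p):
    # A negative eigenvalue of the partial transpose certifies entanglement.
    return min(werner_pt_eigenvalues(p)) < 0

assert not has_negative_partial_transpose(0.2)  # separable regime (p <= 1/3)
assert has_negative_partial_transpose(0.5)      # NPT, hence entangled
```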

  7. Generalized statistical criterion for distinguishing random optical groupings from physical multiple systems

    International Nuclear Information System (INIS)

    Anosova, Z.P.

    1988-01-01

    A statistical criterion is proposed for distinguishing between random and physical groupings of stars and galaxies. The criterion is applied to nearby wide multiple stars, triplets of galaxies in the list of Karachentsev, Karachentseva, and Shcherbanovskii, and double galaxies in the list of Dahari, in which the principal components are Seyfert galaxies. Systems that are almost certainly physical, probably physical, probably optical, and almost certainly optical are identified. The limiting difference between the radial velocities of the components of physical multiple galaxies is estimated

  8. Evaluation of pump pulsation in respirable size-selective sampling: Part III. Investigation of European standard methods.

    Science.gov (United States)

    Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A; Kashon, Michael L; Harper, Martin

    2014-10-01

    Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60-73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six medium- and seven high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min(-1) for the medium- and 4.4, 10, and 11.2 l min(-1) for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP=8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP=18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. 
For the selected test conditions, a linear regression model [PP_EN = 0.014 + 0.375 × PP_NIOSH (adjusted R² = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods.
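    The reported relationship between the two pulsation measures can be applied directly. A small helper follows; the coefficients come from the abstract, but whether PP is entered as a fraction or a percentage is not specified there, so the 0.10 scale used in the example is an assumption.

```python
def pp_en_from_pp_niosh(pp_niosh):
    # Fitted model reported in the study:
    #   PP_EN = 0.014 + 0.375 * PP_NIOSH   (adjusted R^2 = 0.871)
    return 0.014 + 0.375 * pp_niosh

# Example: a pulsation of 0.10 measured with the real-world sampling-train
# method, converted to the EN resistor-based scale (scale assumed fractional).
predicted = pp_en_from_pp_niosh(0.10)
assert 0.05 < predicted < 0.06
```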

  9. Ginzburg criterion for ionic fluids: the effect of Coulomb interactions.

    Science.gov (United States)

    Patsahan, O

    2013-08-01

    The effect of the Coulomb interactions on the crossover between mean-field and Ising critical behavior in ionic fluids is studied using the Ginzburg criterion. We consider the charge-asymmetric primitive model supplemented by short-range attractive interactions in the vicinity of the gas-liquid critical point. The model without Coulomb interactions exhibiting typical Ising critical behavior is used to calibrate the Ginzburg temperature of the systems comprising electrostatic interactions. Using the collective variables method, we derive a microscopic-based effective Hamiltonian for the full model. We obtain explicit expressions for all the relevant Hamiltonian coefficients within the framework of the same approximation, i.e., the one-loop approximation. Then we consistently calculate the reduced Ginzburg temperature t(G) for both the purely Coulombic model (a restricted primitive model) and the purely nonionic model (a hard-sphere square-well model) as well as for the model parameters ranging between these two limiting cases. Contrary to the previous theoretical estimates, we obtain the reduced Ginzburg temperature for the purely Coulombic model to be about 20 times smaller than for the nonionic model. For the full model including both short-range and long-range interactions, we show that t(G) approaches the value found for the purely Coulombic model when the strength of the Coulomb interactions becomes sufficiently large. Our results suggest a key role of Coulomb interactions in the crossover behavior observed experimentally in ionic fluids as well as confirm the Ising-like criticality in the Coulomb-dominated ionic systems.

  10. Differentiation between Superficial and Deep Lobe Parotid Tumors by Magnetic Resonance Imaging: Usefulness of the Parotid Duct Criterion

    Energy Technology Data Exchange (ETDEWEB)

    Imaizumi, A.; Kuribayashi, A.; Okochi, K.; Yoshino, N.; Kurabayashi, T. (Oral and Maxillofacial Radiology, Graduate School, Tokyo Medical and Dental Univ., Tokyo (Japan)); Ishii, J. (Maxillofacial Surgery, Graduate School, Tokyo Medical and Dental Univ., Tokyo (Japan)); Sumi, Y. (Division of Oral and Dental Surgery, Dept. of Advanced Medicine, National Center for Geriatrics and Gerontology, Aichi (Japan))

    2009-08-15

    Background: The location of a parotid tumor affects the choice of surgery, and there is a risk of damaging the facial nerve during surgery. Thus, differentiation between superficial and deep lobe parotid tumors is important for appropriate surgical planning. Purpose: To evaluate the usefulness of using the parotid duct, in addition to the retromandibular vein, for differentiating between superficial and deep lobe parotid tumors on MR images. Material and Methods: Magnetic resonance images of 42 parotid tumors in 40 patients were reviewed to determine whether the tumor was located in the superficial or deep lobe. In each case, the retromandibular vein and the parotid duct were used to locate the tumor. The parotid duct was only used in cases where the tumor and the duct were visualized on the same image. Results: Using the retromandibular vein criterion, 71% of deep lobe and 86% of superficial lobe tumors were correctly diagnosed, providing an accuracy of 81%. However, the accuracy achieved when using the parotid duct criterion was 100%, although it could be applied to only 28 of the 42 cases. Based on these results, we defined the following diagnostic method: the parotid duct criterion is first applied, and for cases in which it cannot be applied, the retromandibular vein criterion is used. The accuracy of this method was 88%, which was better than that achieved using the retromandibular vein criterion alone. Conclusion: The parotid duct criterion is useful for determining the location of parotid tumors. Combining the parotid duct criterion with the retromandibular vein criterion might improve the diagnostic accuracy of parotid tumor location compared to using the latter criterion alone

  11. Differentiation between Superficial and Deep Lobe Parotid Tumors by Magnetic Resonance Imaging: Usefulness of the Parotid Duct Criterion

    International Nuclear Information System (INIS)

    Imaizumi, A.; Kuribayashi, A.; Okochi, K.; Yoshino, N.; Kurabayashi, T.; Ishii, J.; Sumi, Y.

    2009-01-01

    Background: The location of a parotid tumor affects the choice of surgery, and there is a risk of damaging the facial nerve during surgery. Thus, differentiation between superficial and deep lobe parotid tumors is important for appropriate surgical planning. Purpose: To evaluate the usefulness of using the parotid duct, in addition to the retromandibular vein, for differentiating between superficial and deep lobe parotid tumors on MR images. Material and Methods: Magnetic resonance images of 42 parotid tumors in 40 patients were reviewed to determine whether the tumor was located in the superficial or deep lobe. In each case, the retromandibular vein and the parotid duct were used to locate the tumor. The parotid duct was only used in cases where the tumor and the duct were visualized on the same image. Results: Using the retromandibular vein criterion, 71% of deep lobe and 86% of superficial lobe tumors were correctly diagnosed, providing an accuracy of 81%. However, the accuracy achieved when using the parotid duct criterion was 100%, although it could be applied to only 28 of the 42 cases. Based on these results, we defined the following diagnostic method: the parotid duct criterion is first applied, and for cases in which it cannot be applied, the retromandibular vein criterion is used. The accuracy of this method was 88%, which was better than that achieved using the retromandibular vein criterion alone. Conclusion: The parotid duct criterion is useful for determining the location of parotid tumors. Combining the parotid duct criterion with the retromandibular vein criterion might improve the diagnostic accuracy of parotid tumor location compared to using the latter criterion alone

  12. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    Science.gov (United States)

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
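    The MCP solution surface requires the coordinate-descent machinery the abstract describes, which is omitted here. What the snippet sketches is only the selection step: a rank-based AUC and the choice of the tuning parameter whose out-of-fold scores maximize it. The labels and score vectors are invented.

```python
def auc(labels, scores):
    # Rank-based AUC: the probability that a random positive outscores a
    # random negative, counting ties as one half.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical out-of-fold scores for two tuning-parameter values.
labels = [1, 1, 1, 0, 0, 0]
oof_scores = {
    0.1: [0.9, 0.8, 0.4, 0.6, 0.3, 0.2],  # one positive ranked below a negative
    1.0: [0.9, 0.8, 0.7, 0.6, 0.3, 0.2],  # clean separation
}
best_lambda = max(oof_scores, key=lambda lam: auc(labels, oof_scores[lam]))
assert best_lambda == 1.0
```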

  13. Functional Quality Criterion of Rock Handling Mechanization at Open-pit Mines

    Directory of Open Access Journals (Sweden)

    Voronov Yuri

    2017-01-01

    Full Text Available Overburden and mining operations at open-pit mines are performed mainly by powerful shovel-truck systems (STSs). One of the main problems of the STSs is a rather low level of their operating quality, mainly due to unjustified over-trucking. In this article, a functional criterion for assessing the quality of the STS operation at open-pit mines is formulated, derived and analyzed. We introduce the rationale and general principles for the functional criterion formation, its general form, as well as variations for various STS structures: a mixed truck fleet and a homogeneous shovel fleet, a mixed shovel fleet and a homogeneous truck fleet, and mixed truck and shovel fleets. The possibility of assessing the quality of the STS operation is of great importance for identifying the main directions for improving their operational performance and operating quality, optimizing the main performance indicators by the quality criterion and, as a result, for possible saving of material and technical resources for open-pit mining. Improvement of the quality of the STS operation also allows increasing mining safety and decreasing atmospheric pollution by reducing the number of operating trucks.

  14. Functional Quality Criterion of Rock Handling Mechanization at Open-pit Mines

    Science.gov (United States)

    Voronov, Yuri; Voronov, Artyoni

    2017-11-01

    Overburden and mining operations at open-pit mines are performed mainly by powerful shovel-truck systems (STSs). One of the main problems of the STSs is a rather low level of their operating quality, mainly due to unjustified over-trucking. In this article, a functional criterion for assessing the quality of the STS operation at open-pit mines is formulated, derived and analyzed. We introduce the rationale and general principles for the functional criterion formation, its general form, as well as variations for various STS structures: a mixed truck fleet and a homogeneous shovel fleet, a mixed shovel fleet and a homogeneous truck fleet, and mixed truck and shovel fleets. The possibility of assessing the quality of the STS operation is of great importance for identifying the main directions for improving their operational performance and operating quality, optimizing the main performance indicators by the quality criterion and, as a result, for possible saving of material and technical resources for open-pit mining. Improvement of the quality of the STS operation also allows increasing mining safety and decreasing atmospheric pollution by reducing the number of operating trucks.

  15. The New DSM-5 Impairment Criterion: A Challenge to Early Autism Spectrum Disorder Diagnosis?

    Science.gov (United States)

    Zander, Eric; Bölte, Sven

    2015-01-01

    The possible effect of the DSM-5 impairment criterion on diagnosing autism spectrum disorder (ASD) in young children was examined in 127 children aged 20-47 months with a DSM-IV-TR clinical consensus diagnosis of ASD. The composite score of the Vineland Adaptive Behavior Scales (VABS) served as a proxy for the DSM-5 impairment criterion. When…

  16. Local Thermodynamic Equilibrium in Laser-Induced Breakdown Spectroscopy: Beyond the McWhirter criterion

    International Nuclear Information System (INIS)

    Cristoforetti, G.; De Giacomo, A.; Dell'Aglio, M.; Legnaioli, S.; Tognoni, E.; Palleschi, V.; Omenetto, N.

    2010-01-01

    In the Laser-Induced Breakdown Spectroscopy (LIBS) technique, the existence of Local Thermodynamic Equilibrium (LTE) is the essential requisite for meaningful application of theoretical Boltzmann-Maxwell and Saha-Eggert expressions that relate fundamental plasma parameters and concentration of analyte species. The most popular criterion reported in the literature dealing with plasma diagnostics, and usually invoked as a proof of the existence of LTE in the plasma, is the McWhirter criterion [R.W.P. McWhirter, in: Eds. R.H. Huddlestone, S.L. Leonard, Plasma Diagnostic Techniques, Academic Press, New York, 1965, pp. 201-264]. However, as pointed out in several papers, this criterion is known to be a necessary but not a sufficient condition to ensure LTE. The considerations reported here are meant to briefly review the theoretical analysis underlying the concept of thermodynamic equilibrium and the derivation of the McWhirter criterion, and to critically discuss its application to a transient and non-homogeneous plasma, like that created by a laser pulse on solid targets. Specific examples are given of theoretical expressions involving relaxation times and diffusion coefficients, as well as a discussion of different experimental approaches involving space- and time-resolved measurements that could be used to complement a positive result of the calculation of the minimum electron number density required for LTE using the McWhirter formula. It is argued that these approaches will allow a more complete assessment of the existence of LTE and therefore permit a better quantitative result. It is suggested that the mere use of the McWhirter criterion to assess the existence of LTE in laser-induced plasmas should be discontinued.
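    For reference, the McWhirter criterion in the form usually quoted in the LIBS literature sets a lower bound on the electron number density: n_e ≥ 1.6 × 10¹² · T^(1/2) · (ΔE)³ cm⁻³, with T in kelvin and ΔE the largest adjacent-level energy gap in eV. The sketch below evaluates it for illustrative plasma parameters; as the abstract stresses, satisfying this bound is necessary but not sufficient for LTE.

```python
import math

def mcwhirter_min_ne(T_kelvin, delta_E_eV):
    # Necessary (not sufficient) LTE condition:
    #   n_e >= 1.6e12 * sqrt(T) * (dE)^3   [cm^-3]
    # with T in kelvin and dE the largest adjacent-level gap in eV.
    return 1.6e12 * math.sqrt(T_kelvin) * delta_E_eV ** 3

# Illustrative LIBS-like parameters: T = 10000 K, dE = 4 eV.
ne_min = mcwhirter_min_ne(10000.0, 4.0)
assert ne_min == 1.6e12 * 100.0 * 64.0  # on the order of 1e16 cm^-3
```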

  17. A Criterion-Referenced Approach to Student Ratings of Instruction

    Science.gov (United States)

    Meyer, J. Patrick; Doromal, Justin B.; Wei, Xiaoxin; Zhu, Shi

    2017-01-01

    We developed a criterion-referenced student rating of instruction (SRI) to facilitate formative assessment of teaching. It involves four dimensions of teaching quality that are grounded in current instructional design principles: Organization and structure, Assessment and feedback, Personal interactions, and Academic rigor. Using item response…

  18. Implementasi Perbandingan Metode Simple Additive Weighting Dengan Weighted Sum Model Dalam Pemilihan Siswa Berprestasi

    OpenAIRE

    Siregar, M. Fajrul Falah

    2015-01-01

    The Good Performance Student Selection Program of MIN Tanjung Sari aims to increase students' interest in learning. The selection is based on predetermined criteria, and a decision support system is needed to assist the process. The methods used are Simple Additive Weighting and the Weighted Sum Model. In this research, the results of both methods are tested against three periods of good-performance student data from MIN Tanjung Sari Medan Selayang. This s...

  19. High Voltage Overhead Power Line Routing under an Objective Observability Criterion

    Directory of Open Access Journals (Sweden)

    L. Alfredo Fernandez-Jimenez

    2017-10-01

    Full Text Available The construction of new high voltage overhead power lines (HVOPLs) has become a controversial issue for electricity companies due to social opposition. Citizens are concerned about how these power lines may affect their lives, essentially through effects on health and safety, and visual impact is one of the most easily perceived effects. Although several published works deal with the assessment of the visual impact produced by HVOPLs, no methodology has been proposed to assess this impact from an objective perspective. This work presents an original methodology that helps to identify optimal routes for a new HVOPL under an objective observability criterion, enabling the selection of those with the lowest visibility in a zone. The proposed methodology yields a set of routes linking the origin and destination points of the new HVOPL, creating a corridor that includes all possible routes whose towers' observability is below a threshold limit. The methodology is illustrated with a real-life case: the selection of the route with the least observability for a new power line in La Rioja (Spain). The results obtained may help to achieve a consensus between key stakeholders, since the methodology focuses on the specific issues of the planned HVOPL and its observability from an objective perspective.

  20. Bayesian Information Criterion as an Alternative way of Statistical Inference

    Directory of Open Access Journals (Sweden)

    Nadejda Yu. Gubanova

    2012-05-01

    Full Text Available The article treats the Bayesian information criterion as an alternative to traditional methods of statistical inference based on null hypothesis significance testing (NHST). The comparison of ANOVA and BIC results for a psychological experiment is discussed.
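
    To make the comparison concrete, here is a minimal numpy sketch (our own illustration, not from the article) of BIC-based model choice between a linear fit and an overparameterized polynomial fit. For Gaussian errors, BIC up to an additive constant is n·ln(RSS/n) + k·ln(n); the k·ln(n) penalty steers the choice toward the simpler true model.

```python
import numpy as np

def bic(rss: float, n: int, k: int) -> float:
    # Gaussian BIC up to an additive constant: lower is better.
    # The k*ln(n) term penalizes extra parameters more harshly than AIC's 2k.
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + rng.normal(0.0, 1.0, 200)   # the true model is linear

scores = {}
for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    scores[degree] = bic(rss, len(x), degree + 1)  # k = degree + intercept

best_degree = min(scores, key=scores.get)          # BIC favors degree 1
```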

  1. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  3. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we will state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we will look at previous competitions and their outcomes, use qualitative interviews with top performers from Kaggle, and use previous personal experiences from competing in Kaggle competitions. The stated hypotheses about feature engineering, ensembling, overfitting, model complexity and evaluation metrics give indications and guidelines on how to select a proper model for performing well in data analysis competitions.

  4. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and the two types of employees, smokers and non-smokers, taking into account that the consumers’ decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System software: one without taxes on tobacco consumption and another with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  5. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to one whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
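
    The two-step idea can be sketched in numpy under simplifying assumptions (mean regression instead of quantile regression, plain coordinate descent, hand-picked penalties, independent Gaussian design): an ℓ1 screening step followed by an adaptive-LASSO step with weights 1/|β̂₁ⱼ|, implemented by rescaling columns, solving, and scaling back.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Plain coordinate-descent LASSO for (1/2)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    resid = y.copy()
    for _ in range(n_iter):
        for j in range(p):
            resid += X[:, j] * beta[j]              # add back j's contribution
            rho = X[:, j] @ resid
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            resid -= X[:, j] * beta[j]
    return beta

rng = np.random.default_rng(1)
n, p = 100, 300                                     # p >> n
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [4.0, 3.0, -2.0]
y = X @ beta_true + rng.normal(0.0, 0.5, n)

# Step 1: LASSO screening -- keep a small model that covers the true one.
beta1 = lasso_cd(X, y, lam=40.0)
keep = np.flatnonzero(np.abs(beta1) > 1e-8)

# Step 2: adaptive LASSO on the reduced model, weights w_j = 1/|beta1_j|.
w = np.abs(beta1[keep])
beta2_w = lasso_cd(X[:, keep] * w, y, lam=40.0)
beta2 = np.zeros(p)
beta2[keep] = beta2_w * w

support = set(np.flatnonzero(np.abs(beta2) > 1e-8))
```

    Spurious variables surviving step 1 carry tiny |β̂₁ⱼ|, so their effective penalty in step 2 is huge and they are zeroed out.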

  6. Fatigue Assessment of Nickel-Titanium Peripheral Stents: Comparison of Multi-Axial Fatigue Models

    Science.gov (United States)

    Allegretti, Dario; Berti, Francesca; Migliavacca, Francesco; Pennati, Giancarlo; Petrini, Lorenza

    2018-02-01

    Peripheral Nickel-Titanium (NiTi) stents exploit super-elasticity to treat femoropopliteal artery atherosclerosis. The stent is subject to cyclic loads, which may lead to fatigue fracture and treatment failure. The complexity of the loading conditions and device geometry, coupled with the nonlinear material behavior, may induce multi-axial and non-proportional deformation. Finite element analysis can assess the fatigue risk, by comparing the device state of stress with the material fatigue limit. The most suitable fatigue model is not fully understood for NiTi devices, due to its complex thermo-mechanical behavior. This paper assesses the fatigue behavior of NiTi stents through computational models and experimental validation. Four different strain-based models are considered: the von Mises criterion and three critical plane models (Fatemi-Socie, Brown-Miller, and Smith-Watson-Topper models). Two stents, made of the same material with different cell geometries are manufactured, and their fatigue behavior is experimentally characterized. The comparison between experimental and numerical results highlights an overestimation of the failure risk by the von Mises criterion. On the contrary, the selected critical plane models, even if based on different damage mechanisms, give a better fatigue life estimation. Further investigations on crack propagation mechanisms of NiTi stents are required to properly select the most reliable fatigue model.
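
    For illustration, the von Mises branch of such an assessment reduces each cyclic strain state to a scalar equivalent amplitude, ε_eq = √(2/3 ε′:ε′) on the deviatoric strain. A minimal numpy sketch (the fatigue limit below is a made-up placeholder, not a NiTi material constant, and critical-plane models like Fatemi-Socie would evaluate shear and normal quantities on candidate planes instead):

```python
import numpy as np

def vm_equivalent_strain(eps: np.ndarray) -> float:
    """Von Mises equivalent strain sqrt(2/3 * e':e') of a 3x3 strain tensor."""
    dev = eps - np.trace(eps) / 3.0 * np.eye(3)
    return float(np.sqrt(2.0 / 3.0 * np.tensordot(dev, dev)))

# Uniaxial, incompressible load cycle: strain alternates between two states.
e_max = np.diag([0.010, -0.005, -0.005])
e_min = np.diag([0.002, -0.001, -0.001])
eps_amp = vm_equivalent_strain((e_max - e_min) / 2.0)   # scalar strain amplitude

fatigue_limit = 0.0045       # illustrative constant-life strain limit (assumed)
safe = eps_amp < fatigue_limit
```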

  8. Automated sample plan selection for OPC modeling

    Science.gov (United States)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving the quality of the data collected with regard to how well that data represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Framing the pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.
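
    The clustering baseline that the abstract contrasts with its optimization approach can be sketched as follows: cluster candidate patterns in a descriptor space and take the pattern nearest each centroid as the metrology sample plan. This is a simplified stand-in (our own k-means implementation on invented 2-D descriptors), not the paper's co-optimization.

```python
import numpy as np

def kmeans(points, k, n_iter=50, seed=0):
    """Lloyd's algorithm; returns centroids and per-point labels."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):                 # keep old centroid if empty
                centroids[c] = points[labels == c].mean(axis=0)
    return centroids, labels

# Synthetic "pattern descriptors" (e.g., image-parameter coordinates per site)
rng = np.random.default_rng(42)
patterns = rng.normal(size=(500, 2))

k = 20                                              # metrology budget: 20 sites
centroids, labels = kmeans(patterns, k)

# Pick the pattern closest to each centroid as the measurement plan.
dist = np.linalg.norm(patterns[:, None, :] - centroids[None, :, :], axis=2)
plan = np.unique(dist.argmin(axis=0))               # indices of selected patterns
```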

  9. Trait-specific long-term consequences of genomic selection in beef cattle.

    Science.gov (United States)

    de Rezende Neves, Haroldo Henrique; Carvalheiro, Roberto; de Queiroz, Sandra Aidar

    2018-02-01

    Simulation studies allow addressing consequences of selection schemes, helping to identify effective strategies to enable genetic gain and maintain genetic diversity. The aim of this study was to evaluate the long-term impact of genomic selection (GS) on genetic progress and genetic diversity of beef cattle. Forward-in-time simulation generated a population with a pattern of linkage disequilibrium close to that previously reported for real beef cattle populations. Different scenarios of GS and traditional pedigree-based BLUP (PBLUP) selection were simulated for 15 generations, mimicking selection for female reproduction and meat quality. For GS scenarios, an alternative selection criterion was simulated (wGBLUP), intended to enhance long-term gains by attributing more weight to favorable alleles with low frequency. GS allowed genetic progress up to 40% greater than PBLUP for female reproduction and meat quality. The alternative criterion wGBLUP did not increase long-term response, although it allowed reducing inbreeding rates and the loss of favorable alleles. The results suggest that GS outperforms PBLUP when the selected trait has a less polygenic background and that attributing more weight to low-frequency favorable alleles can reduce inbreeding rates and the loss of favorable alleles in GS.

  10. A comparison of peer video modeling and self video modeling to teach textual responses in children with autism.

    Science.gov (United States)

    Marcus, Alonna; Wilder, David A

    2009-01-01

    Peer video modeling was compared to self video modeling to teach 3 children with autism to respond appropriately to (i.e., identify or label) novel letters. A combination multiple baseline and multielement design was used to compare the two procedures. Results showed that all 3 participants met the mastery criterion in the self-modeling condition, whereas only 1 of the participants met the mastery criterion in the peer-modeling condition. In addition, the participant who met the mastery criterion in both conditions reached the criterion more quickly in the self-modeling condition. Results are discussed in terms of their implications for teaching new skills to children with autism.

  11. Improving the Classification Accuracy for Near-Infrared Spectroscopy of Chinese Salvia miltiorrhiza Using Local Variable Selection

    Directory of Open Access Journals (Sweden)

    Lianqing Zhu

    2018-01-01

    Full Text Available In order to improve the classification accuracy of Chinese Salvia miltiorrhiza using near-infrared spectroscopy, a novel local variable selection strategy is proposed. Combining the strengths of the local algorithm and interval partial least squares, the spectral data are first divided into several pairs of classes in the sample direction and equidistant subintervals in the variable direction. Then, a local classification model is built, and the most appropriate spectral region is selected based on a new evaluation criterion considering both the classification error rate and the predictive ability under a leave-one-out cross-validation scheme for each pair of classes. Finally, each observation can be assigned to a class according to the statistical analysis of the classification results of the local classification model built on the selected variables. The performance of the proposed method was demonstrated on near-infrared spectra of cultivated or wild Salvia miltiorrhiza collected from 8 geographical origins in 5 provinces of China. For comparison, soft independent modelling of class analogy and partial least squares discriminant analysis methods were employed as the classification models. Experimental results showed that the classification performance of the model with local variable selection was clearly better than that of the model without variable selection.

  12. Benchmarking whole-building energy performance with multi-criteria technique for order preference by similarity to ideal solution using a selective objective-weighting approach

    International Nuclear Information System (INIS)

    Wang, Endong

    2015-01-01

    Highlights: • A TOPSIS based multi-criteria whole-building energy benchmarking is developed. • A selective objective-weighting procedure is used for a cost-accuracy tradeoff. • Results from a real case validated the benefits of the presented approach. - Abstract: This paper develops a robust multi-criteria Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based building energy efficiency benchmarking approach. The approach is explicitly selective, considering the cost-accuracy trade-off to avoid the multicollinearity trap caused by subjectivity in selecting energy variables. It objectively weights the relative importance of the individual pertinent efficiency-measuring criteria using either multiple linear regression or principal component analysis, contingent on metadata quality. Through this approach, building energy performance is comprehensively evaluated and optimized, and the significant challenges associated with conventional single-criterion benchmarking models can be avoided. Together with a clustering algorithm applied to a three-year panel dataset, the benchmarking case of 324 single-family dwellings demonstrated the improved robustness of the presented multi-criteria benchmarking approach over conventional single-criterion ones.
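
    The core TOPSIS ranking step can be sketched as follows: normalize the decision matrix, weight it, locate the ideal and anti-ideal solutions, and score each alternative by relative closeness. This is our own minimal implementation with invented example numbers; the paper's objective weighting via regression or PCA is not reproduced here.

```python
import numpy as np

def topsis(X, weights, benefit):
    """TOPSIS closeness scores in [0, 1]; higher means closer to the ideal.

    X: alternatives x criteria matrix; benefit[j] True if larger-is-better."""
    R = X / np.sqrt((X ** 2).sum(axis=0))            # vector normalization
    V = R * weights                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Three dwellings scored on two cost-type criteria (lower is better), e.g.
# annual energy use intensity and CO2 intensity, with equal weights.
X = np.array([[100.0, 0.50],
              [200.0, 0.90],
              [150.0, 0.70]])
scores = topsis(X, weights=np.array([0.5, 0.5]), benefit=np.array([False, False]))
ranking = np.argsort(-scores)    # most efficient dwelling first
```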

  13. An Elementary Proof of a Criterion for Linear Disjointness

    Science.gov (United States)

    Dobbs, David E.

    2013-01-01

    An elementary proof using matrix theory is given for the following criterion: if "F"/"K" and "L"/"K" are field extensions, with "F" and "L" both contained in a common extension field, then "F" and "L" are linearly disjoint over "K" if (and only if) some…

  14. multivariate time series modeling of selected childhood diseases

    African Journals Online (AJOL)

    2016-06-17

    Jun 17, 2016 ... KEYWORDS: Multivariate Approach, Pre-whitening, Vector Time Series ... Alternatively, the process may be written in mean-adjusted form ... The AIC criterion asymptotically overestimates the order with positive probability, whereas the BIC and HQC criteria ...
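
    The order-selection behavior mentioned in the snippet can be illustrated with a small numpy experiment on a synthetic AR(2) series (our own sketch; the AIC and BIC forms below are the standard Gaussian ones up to additive constants).

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns residual sum of squares and sample count."""
    Y = x[p:]
    Z = np.column_stack([x[p - i:len(x) - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    rss = float(((Y - Z @ coef) ** 2).sum())
    return rss, len(Y)

rng = np.random.default_rng(3)
n = 500
x = np.zeros(n)
e = rng.normal(size=n)
for t in range(2, n):                        # true process is AR(2)
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]

aic, bic = {}, {}
for p in range(1, 7):
    rss, m = fit_ar(x, p)
    aic[p] = m * np.log(rss / m) + 2 * p     # lighter penalty: may overfit
    bic[p] = m * np.log(rss / m) + p * np.log(m)

order_bic = min(bic, key=bic.get)
```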

  15. Location selection of agricultural-residuals particleboard industry through group decision: The case study of northern Iran

    Directory of Open Access Journals (Sweden)

    Majid Azizi

    2016-12-01

    Full Text Available This paper presents a framework for locating the agricultural-residuals particleboard industry in the northern provinces of Iran. The particleboard industry is the only Iranian wood and paper industry with export potential, and the use of agricultural residuals as raw material can help increase production in this industry while reducing the damage to forest resources. The northern provinces of Iran are agricultural centers with ample amounts of agricultural residues. These provinces are, therefore, preferable to other provinces as construction sites for particleboard plants. In the location selection model presented in this paper, the Analytical Hierarchy Process (AHP) method is used, and the results indicate that the criterion of ‘material and production’ and the sub-criterion of ‘reliability of supply’ have the highest priorities, and that Golestan province is the best alternative.
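
    The AHP priority computation such a framework relies on can be sketched as follows: extract the principal eigenvector of a pairwise comparison matrix and check Saaty's consistency ratio. This is a minimal power-iteration implementation with a deliberately consistent, invented judgment matrix; random indices are hard-coded for small n.

```python
import numpy as np

def ahp_weights(A, n_iter=100):
    """Principal-eigenvector priorities of a pairwise comparison matrix,
    plus Saaty's consistency ratio (random index 0.58 for n=3, etc.)."""
    n = A.shape[0]
    w = np.ones(n) / n
    for _ in range(n_iter):                  # power iteration
        w = A @ w
        w /= w.sum()
    lam_max = float((A @ w / w).mean())      # principal eigenvalue estimate
    ci = (lam_max - n) / (n - 1)
    cr = ci / {3: 0.58, 4: 0.90, 5: 1.12}[n]
    return w, cr

# Hypothetical judgments for three site-selection criteria, built to be
# perfectly consistent with priorities (0.5, 0.3, 0.2): a_ij = w_i / w_j.
target = np.array([0.5, 0.3, 0.2])
A = target[:, None] / target[None, :]
weights, cr = ahp_weights(A)                 # recovers target; cr ~ 0
```

    In practice the judgment matrix comes from expert comparisons and is rarely perfectly consistent; a consistency ratio above about 0.1 is usually taken as a signal to revisit the judgments.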

  16. Multiscale analysis of potential fields by a ridge consistency criterion: the reconstruction of the Bishop basement

    Science.gov (United States)

    Fedi, M.; Florio, G.; Cascone, L.

    2012-01-01

    We use a multiscale approach as a semi-automated interpretation tool for potential fields. The depth to the source and the structural index are estimated in two steps: first the depth to the source, as the intersection of the field ridges (lines built by joining the extrema of the field at various altitudes), and secondly the structural index, by the scale function. We introduce a new criterion, called 'ridge consistency', in this strategy. The criterion is based on the principle that the structural index estimates on all the ridges converging towards the same source should be consistent. If these estimates are significantly different, field differentiation is used to lessen the interference effects from nearby sources or regional fields and obtain a consistent set of estimates. In our multiscale framework, vertical differentiation is naturally joined to the low-pass filtering properties of upward continuation, and is therefore a stable process. Before applying our criterion, we studied carefully the errors in upward continuation caused by the finite size of the survey area. To this end, we analysed the complex magnetic synthetic case known as the Bishop model, and evaluated the best extrapolation algorithm and the optimal width of the area extension needed to obtain accurate upward continuation. Afterwards, we applied the method to the depth estimation of the whole Bishop basement bathymetry. The result is a good reconstruction of the complex basement and of the shape properties of the source at the estimated points.

  17. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Full Text Available Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed using Fisher’s linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion provided a total classification accuracy of 86.67% and an area (Az) of 0.9096 under the receiver operating characteristic curve, superior to the results obtained by either Fisher’s linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). Such results demonstrate the merits of bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for the analysis of knee joint VAG signals.
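
    The combination of kernel density estimation with a maximal posterior probability rule can be sketched in numpy: fit a Gaussian-kernel density per class in the bivariate feature space, then assign each observation to the class maximizing log prior plus log density. The data, bandwidth, and class means below are invented for the sketch, not the paper's VAG features.

```python
import numpy as np

def gaussian_kde_logpdf(train, query, bandwidth=0.5):
    """Log-density of a Gaussian-kernel estimate fitted on `train`, at `query`."""
    d = train.shape[1]
    diff = query[:, None, :] - train[None, :, :]
    log_k = (-0.5 * (diff ** 2).sum(axis=2) / bandwidth ** 2
             - d * np.log(bandwidth * np.sqrt(2.0 * np.pi)))
    m = log_k.max(axis=1, keepdims=True)          # stable log-mean-exp
    return m[:, 0] + np.log(np.exp(log_k - m).mean(axis=1))

rng = np.random.default_rng(7)
normal_feats = rng.normal([0.0, 0.0], 0.6, size=(60, 2))     # "healthy" cloud
abnormal_feats = rng.normal([3.0, 3.0], 0.6, size=(60, 2))   # "disorder" cloud
log_prior = np.log([0.5, 0.5])

def classify(points):
    """Maximal posterior probability rule: argmax of log prior + log density."""
    scores = np.stack([log_prior[0] + gaussian_kde_logpdf(normal_feats, points),
                       log_prior[1] + gaussian_kde_logpdf(abnormal_feats, points)])
    return scores.argmax(axis=0)                  # 0 = normal, 1 = abnormal

labels = classify(np.array([[0.2, -0.1], [2.8, 3.1]]))
```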

  18. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.

  19. AUTOMATIC RECOGNITION OF FALLS IN GAIT-SLIP: A HARNESS LOAD CELL BASED CRITERION

    OpenAIRE

    Yang, Feng; Pai, Yi-Chung

    2011-01-01

    Overhead-harness systems, equipped with load cell sensors, are essential to participants’ safety and to outcome assessment in perturbation training. The purpose of this study was first to develop an automatic outcome recognition criterion among young adults for gait-slip training and then to verify this criterion among older adults. Each of 39 young and 71 older subjects, all protected by safety harness, experienced 8 unannounced, repeated slips while walking on a 7-m walkway. Each tri...

  20. Extending Structural Analyses of the Rosenberg Self-Esteem Scale to Consider Criterion-Related Validity: Can Composite Self-Esteem Scores Be Good Enough?

    Science.gov (United States)

    Donnellan, M Brent; Ackerman, Robert A; Brecheen, Courtney

    2016-01-01

    Although the Rosenberg Self-Esteem Scale (RSES) is the most widely used measure of global self-esteem in the literature, there are ongoing disagreements about its factor structure. This methodological debate informs how the measure should be used in substantive research. Using a sample of 1,127 college students, we test the overall fit of previously specified models for the RSES, including a newly proposed bifactor solution (McKay, Boduszek, & Harvey, 2014 ). We extend previous work by evaluating how various latent factors from these structural models are related to a set of criterion variables frequently studied in the self-esteem literature. A strict unidimensional model poorly fit the data, whereas models that accounted for correlations between negatively and positively keyed items tended to fit better. However, global factors from viable structural models had similar levels of association with criterion variables and with the pattern of results obtained with a composite global self-esteem variable calculated from observed scores. Thus, we did not find compelling evidence that different structural models had substantive implications, thereby reducing (but not eliminating) concerns about the integrity of the self-esteem literature based on overall composite scores for the RSES.

  1. A new repair criterion for steam generator tubes with axial cracks based on probabilistic integrity assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun-Su; Oh, Chang-Kyun [KEPCO Engineering and Construction Company, Inc., 269, Hyeoksin-ro, Gimcheon, Gyeongsangbuk-do 39660 (Korea, Republic of); Chang, Yoon-Suk, E-mail: yschang@khu.ac.kr [Department of Nuclear Engineering, College of Engineering, Kyung Hee University, 1732 Deokyoungdaero, Giheung, Yongin, Gyeonggi 446-701 (Korea, Republic of)

    2017-03-15

    Highlights: • Probabilistic assessment was performed for axially cracked steam generator tubes. • The threshold crack sizes were determined based on burst pressures of the tubes. • A new repair criterion was suggested as a function of operation time. - Abstract: The steam generator is one of the major components in a nuclear power plant, and it consists of thousands of thin-walled tubes. The operating record of steam generators indicates that axial cracks due to stress corrosion have frequently been detected in the tubes. Since the tubes are closely related to both the safety and the efficiency of a nuclear power plant, establishing an appropriate repair criterion for the defective tubes is necessary. The objective of this paper is to develop an accurate repair criterion for tubes with axial cracks. To do this, a thorough review is performed of the key parameters affecting tube integrity, and a probabilistic integrity assessment is then carried out considering the various uncertainties. In addition, the critical crack sizes are determined by comparing the burst pressure of the cracked tube with the required performance criterion. Based on this result, the new repair criterion for axially cracked tubes is defined from a reasonably conservative value such that the required performance criterion in terms of burst pressure can be met during the next operating period.
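
    The probabilistic comparison of burst pressure against a performance criterion can be sketched with a Monte Carlo limit state. The burst model and every number below are invented for illustration (a simple flow-stress-times-ligament form), not the paper's model; the point is only the mechanics of estimating a failure probability under parameter uncertainty.

```python
import numpy as np

# Illustrative limit state: burst pressure approximated as flow stress times
# remaining ligament over radius, P_b = sigma_flow * (t - a) / R.
rng = np.random.default_rng(11)
N = 100_000
t, R = 1.07, 8.5                          # wall thickness and mean radius, mm
p_criterion = 3 * 8.6                     # performance limit: 3x normal dP, MPa

sigma_flow = rng.normal(620.0, 40.0, N)   # flow stress uncertainty, MPa
z = rng.normal(size=N)                    # shared depth noise (common random numbers)

def prob_of_burst(depth_mean):
    """P(burst pressure < performance criterion) for a given mean crack depth."""
    a = np.clip(depth_mean + 0.1 * t * z, 0.0, 0.95 * t)
    p_burst = sigma_flow * (t - a) / R
    return float((p_burst < p_criterion).mean())

pof_shallow = prob_of_burst(0.2 * t)      # shallow crack: rare failures
pof_deep = prob_of_burst(0.7 * t)         # deep crack: much higher failure probability
```

    A repair threshold would then be the deepest crack whose failure probability stays below an acceptance target over the next operating period.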

  2. Ductile Crack Initiation Criterion with Mismatched Weld Joints Under Dynamic Loading Conditions.

    Science.gov (United States)

    An, Gyubaek; Jeong, Se-Min; Park, Jeongung

    2018-03-01

    Brittle failure of high toughness steel structures tends to occur after ductile crack initiation/propagation. Damages to steel structures were reported in the Hanshin Great Earthquake. Several brittle failures were observed in beam-to-column connection zones with geometrical discontinuity. It is widely known that triaxial stresses accelerate the ductile fracture of steels. The study examined the effects of geometrical heterogeneity and strength mismatches (both of which elevate plastic constraints due to heterogeneous plastic straining) and loading rate on critical conditions initiating ductile fracture. This involved applying the two-parameter criterion (involving equivalent plastic strain and stress triaxiality) to estimate ductile cracking for strength mismatched specimens under static and dynamic tensile loading conditions. Ductile crack initiation testing was conducted under static and dynamic loading conditions using circumferentially notched specimens (Charpy type) with/without strength mismatches. The results indicated that the condition for ductile crack initiation using the two parameter criterion was a transferable criterion to evaluate ductile crack initiation independent of the existence of strength mismatches and loading rates.
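
    A two-parameter check of this kind compares the accumulated equivalent plastic strain against a fracture locus expressed as a function of stress triaxiality. A hedged sketch, using an exponential Rice-Tracey-type locus with made-up material constants (the paper's actual locus and constants are not reproduced here):

```python
import math

# Illustrative fracture locus: critical equivalent plastic strain as a function
# of stress triaxiality eta, eps_f = A * exp(-B * eta). A, B are assumed values.
A, B = 1.2, 1.5

def ductile_crack_initiates(eps_plastic: float, triaxiality: float) -> bool:
    """True when the (triaxiality, plastic strain) state reaches the locus."""
    return eps_plastic >= A * math.exp(-B * triaxiality)

# Uniaxial tension (eta = 1/3) vs. a highly constrained notch root (eta = 1.0):
low = ductile_crack_initiates(0.5, 1 / 3)    # locus ~0.73: no initiation
high = ductile_crack_initiates(0.5, 1.0)     # locus ~0.27: initiation
```

    The same plastic strain is safe at low constraint but critical at high constraint, which is the qualitative effect the strength mismatch and geometry introduce in the abstract.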

  3. The selected models of the mesostructure of composites percolation, clusters, and force fields

    CERN Document Server

    Herega, Alexander

    2018-01-01

    This book presents the role of mesostructure in the properties of composite materials. A complex percolation model is developed for a material structure containing percolation clusters of phases and interior boundaries. Modeling of technological cracks and percolation in the Sierpinski carpet are described. The interaction of mesoscopic interior boundaries of the material is discussed, including the fractal nature of interior boundaries, the oscillatory nature of their interaction, and a stochastic model of the interior boundaries' interaction, together with their genesis, structure, and properties. One part of the book introduces the percolation model of the long-range effect, which is based on the notion of multifractal clusters with transforming elements, and describes the theorem on the field interaction of multifractals. In addition, small clusters, their characteristic properties, and a criterion of stability are presented.

  4. Criterion III: Maintenance of rangeland productive capacity [Chapter 4

    Science.gov (United States)

    G. R. Evans; R. A. Washington-Allen; R. D. Child; J. E. Mitchell; B. R. Bobowski; R. V. Loper; B. H. Allen-Diaz; D. W. Thompson; G. R. Welling; T. B. Reuwsaat

    2010-01-01

    Maintenance of rangeland productive capacity is one of five criteria established by the Sustainable Rangelands Roundtable (SRR) to monitor and assess rangeland sustainable management. Within this criterion, six indicators were developed through the Delphi Process and the expert opinions of academicians, rangeland scientists, rangeland management agency personnel, non-...

  5. Online Identification with Reliability Criterion and State of Charge Estimation Based on a Fuzzy Adaptive Extended Kalman Filter for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Zhongwei Deng

    2016-06-01

    Full Text Available In the field of state of charge (SOC) estimation, the Kalman filter has been widely used for many years, although its performance strongly depends on the accuracy of the battery model as well as the noise covariance. The Kalman gain determines the confidence coefficient of the battery model by adjusting the weight of the open circuit voltage (OCV) correction, and has a strong correlation with the measurement noise covariance (R). In this paper, an online identification method is applied to acquire the real model parameters under different operating conditions. A criterion based on the OCV error is proposed to evaluate the reliability of the online parameters. Besides, the equivalent circuit model produces an intrinsic model error which depends on the load current, and it can be observed that a high battery current or a large current change induces a large model error. Based on this prior knowledge, a fuzzy model is established to compensate for the model error by updating R. Combining the positive strategy (i.e., online identification) and the negative strategy (i.e., fuzzy model), a more reliable and robust SOC estimation algorithm is proposed. The experimental results verify the proposed reliability criterion and SOC estimation method under various conditions for LiFePO4 batteries.
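
    The fuzzy compensation of the measurement noise covariance can be illustrated with a scalar sketch: when the load current or its rate of change is large, R is inflated, which shrinks the Kalman gain and weakens the OCV correction. The linear membership shapes and all numeric limits below are illustrative assumptions; the paper's actual fuzzy rule base is not reproduced here.

```python
def fuzzy_R(r_base, current, d_current, i_max=10.0, di_max=5.0):
    """Inflate measurement noise R when the load current or its change is
    large, reflecting the larger intrinsic model error in the predicted
    terminal voltage (assumed linear memberships, up to 10x r_base)."""
    m = max(min(abs(current) / i_max, 1.0), min(abs(d_current) / di_max, 1.0))
    return r_base * (1.0 + 9.0 * m)

def kalman_gain(p_prior, r):
    """Scalar Kalman gain for measurement model z = x + v, v ~ N(0, r)."""
    return p_prior / (p_prior + r)

# A large current pulse raises R, which lowers the gain: the filter then
# applies a weaker OCV correction, since the voltage residual is assumed
# to be dominated by equivalent-circuit model error rather than by noise.
g_rest = kalman_gain(0.01, fuzzy_R(0.001, 0.5, 0.1))
g_pulse = kalman_gain(0.01, fuzzy_R(0.001, 9.0, 4.0))
print(g_rest, g_pulse)
```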

  6. A comparative calculation of the wind turbines capacities on the basis of the L-{sigma} criterion

    Energy Technology Data Exchange (ETDEWEB)

    Menet, Jean-Luc; Valdes, Laurent-Charles; Menart, Bruno [Universite de Valenciennes et du Hainaut-Cambresis, Groupe de Recherche Energies et Environnement, Valenciennes, 59 (France)

    2001-04-01

    Usually, wind sites are equipped with fast-running Horizontal Axis Wind Turbines of the airscrew type, which have a high efficiency. In this article, the argument is put forward that the choice of a wind turbine must not be based only on high efficiency. We propose a comparative criterion adapted to the comparison of a horizontal axis wind turbine with a vertical axis wind turbine: the L-{sigma} criterion. This criterion consists of comparing wind turbines which intercept the same front width of wind, by allocating them the same reference value of the maximal mechanical stress on the blades or the paddles. On the basis of this criterion, a quantitative comparison points to a clear advantage of the Savonius rotors, because of their lower angular velocity, and provides some elements for the improvement of their rotor. (Author)

  7. Multi-criteria site selection for fire services: the interaction with analytic hierarchy process and geographic information systems

    Directory of Open Access Journals (Sweden)

    T. Erden

    2010-10-01

    Full Text Available This study combines AHP and GIS to provide decision makers with a model to ensure optimal site locations for fire stations. The roles of AHP and GIS in determining optimal locations are explained, criteria for site selection are outlined, and case study results for finding the optimal fire station locations in Istanbul, Turkey are included. The city of Istanbul has about 13 million residents and is the largest and most populated city in Turkey. The rapid and constant growth of Istanbul has resulted in an increased number of fire-related cases. Fire incidents tend to increase year by year in parallel with city expansion, population and hazardous material facilities. Istanbul has seen a rise in reported fire incidents from 12 769 in 1994 to 30 089 in 2009 according to the interim report of the Istanbul Metropolitan Municipality Department of Fire Brigade. The average response time was approximately 7 min 3 s in 2009. The goal of this study is to propose optimal sites for new fire stations to allow the Fire Brigade in Istanbul to reduce the average response time to 5 min or less. After determining the necessity of suggesting additional fire stations, six criteria are considered in this analysis: High Population Density (HPD); Proximity to Main Roads (PMR); Distance from Existing Fire Stations (DEF); Distance from Hazardous Material Facilities (DHM); Wooden Building Density (WBD); and Distance from the Areas Subjected to Earthquake Risk (DER). The DHM criterion, with a weight of 40%, is the most important criterion in this analysis. The remaining criteria have weights ranging from 9% to 16%. Moreover, the following steps are performed: representation of criterion map layers in a GIS environment; classification of raster datasets; calculation of the resulting raster map (suitability map) for potential fire stations; and a model that supports decision makers in selecting fire station sites.
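
    The overlay step of such an AHP-GIS analysis, combining reclassified criterion rasters with AHP weights into a suitability map, can be sketched as a weighted linear sum. Only the 40% DHM weight is stated in the abstract; the remaining weights below are illustrative placeholders within the reported 9-16% range, and the rasters are random toy data rather than real Istanbul layers.

```python
import numpy as np

# Hypothetical weights: only DHM = 0.40 is given in the abstract; the rest
# are invented values inside the reported 9-16% range, summing to 1.
weights = {"HPD": 0.16, "PMR": 0.09, "DEF": 0.12,
           "DHM": 0.40, "WBD": 0.12, "DER": 0.11}

def suitability(layers, weights):
    """Weighted linear overlay: each layer is a raster reclassified to
    [0, 1], where 1 = most suitable for a new fire station on that
    criterion; the result is the cellwise weighted sum."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * layer for name, layer in layers.items())

rng = np.random.default_rng(0)
layers = {name: rng.random((4, 4)) for name in weights}   # toy 4x4 rasters
s = suitability(layers, weights)
print(s.shape, float(s.min()), float(s.max()))
```

Because the weights form a convex combination, the suitability values stay in the same [0, 1] scale as the input layers, which makes the final map directly rankable.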

  8. Stability Criterion for a Finned Spinning Projectile

    OpenAIRE

    S. D. Naik

    2000-01-01

    The state-of-the-art in gun projectile technology has been used for aerodynamic stabilisation. This approach is acceptable for guided and controlled rockets, but free-flight rockets suffer from unacceptable dispersion. Sabot projectiles with both spin and fins, developed during the last decade, need careful analysis. In this study, the second method of Liapunov has been used to develop a stability criterion for a projectile designed with small fins and made to spin in flight. This...

  9. The Goiania accident: release from hospital criterion

    International Nuclear Information System (INIS)

    Falcao, R.C.; Hunt, J.

    1990-01-01

    On the thirteenth of September 1987, a 1357 Ci cesium source was removed from the 'Instituto de Radiologia de Goiania'; probably two or three days later the source was opened, causing the internal and external contamination of 247 people and of part of the city of Goiania. This paper describes the release-from-hospital criterion for the contaminated patients, based on radiation protection principles which were developed for this case. The estimate of the biological half-life of cesium is also described. (author) [pt

  10. Optimized Irregular Low-Density Parity-Check Codes for Multicarrier Modulations over Frequency-Selective Channels

    Directory of Open Access Journals (Sweden)

    Valérian Mannoni

    2004-09-01

    Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM) over frequency-selective channels using irregular low-density parity-check (LDPC) codes. Firstly, we introduce a new characterization of the LDPC code irregularity called the “irregularity profile.” Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.

  11. Traditional and alternative nonlinear models for estimating the growth of Morada Nova sheep

    Directory of Open Access Journals (Sweden)

    Laaina de Andrade Souza

    2013-09-01

    Full Text Available In the present study, alternative and traditional nonlinear models to describe growth curves of Morada Nova sheep reared in the state of Bahia, Brazil, were applied. The nonlinear models were: Schnute, Mitscherlich, Gompertz, Logistic, Meloun I, Meloun II, Meloun III, Gamito and Meloun IV. The model adjustment was evaluated using: the adjusted coefficient of determination (R²aj), the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the mean squared error of prediction (MEP) and the coefficient of determination of prediction (R²p). The selection of the best model was based on cluster analysis, using the evaluators as variables. Six of the nine tested models converged, and Meloun I and Meloun IV were equally effective in explaining animal growth, without significant influence of sex or type of parturition on the curve parameters. The models Meloun I and IV have the best adjustment and reveal a remarkable reduction of weight gain after 150 days of age, which indicates special attention should be given to feeding at this stage.
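
    Fitting competing nonlinear growth curves and ranking them by an information criterion, as in the study, can be sketched as follows. The Gompertz and Logistic forms are standard; the toy ages, weights, and starting values are invented for illustration and are not the Morada Nova data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

def logistic(t, A, b, k):
    return A / (1.0 + b * np.exp(-k * t))

def aic(y, y_hat, n_params):
    """AIC for a least-squares fit: n*log(RSS/n) + 2k (constants dropped);
    lower is better."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return n * np.log(rss / n) + 2 * n_params

age = np.arange(0, 301, 30, dtype=float)        # days (toy data)
weight = logistic(age, 30.0, 15.0, 0.03)        # kg, logistic-shaped growth
weight += np.random.default_rng(1).normal(0, 0.3, age.size)

scores = {}
for name, f in [("Gompertz", gompertz), ("Logistic", logistic)]:
    p, _ = curve_fit(f, age, weight, p0=[30.0, 10.0, 0.02], maxfev=10000)
    scores[name] = aic(weight, f(age, *p), n_params=3)
print(min(scores, key=scores.get), scores)
```

Since the toy data are generated from a logistic curve, the logistic model attains the lower AIC; on real data the same machinery ranks all candidate models at once.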

  12. The stressor criterion in DSM-IV posttraumatic stress disorder: an empirical investigation.

    Science.gov (United States)

    Breslau, N; Kessler, R C

    2001-11-01

    The DSM-IV two-part definition of posttraumatic stress disorder (PTSD) widened the variety of stressors (A1) and added a subjective component (A2). The effects of the revised stressor criterion on estimates of exposure and PTSD in a community sample are evaluated. A representative sample of 2181 persons in southeast Michigan was interviewed about lifetime history of traumatic events and PTSD. The evaluation of the revised two-part definition is based on a randomly selected sample of events that represents the total pool of traumatic events experienced in the community. The enlarged definition of stressors in A1 increased the total number of events that can be used to diagnose PTSD by 59%. The majority of A1 events (76.6%) involved the emotional response in A2. Females were more likely than males to endorse A2 (adjusted odds ratio = 2.66; 95% confidence interval 1.92, 3.71). Of all PTSD cases resulting from the representative sample of events, 38% were attributable to the expansion of qualifying events in A1. The identification of exposures that lead to PTSD was not improved materially by A2; however, events that did not involve A2 rarely resulted in PTSD. Compared to previous definitions, the wider variety of stressors in A1 markedly increased the number of events experienced in the community that can be used to diagnose PTSD. Furthermore, A2 might be useful as a separate criterion, an acute response necessary for the emergence of PTSD, and might serve as an early screen for identifying a subset of recently exposed persons at virtually no risk for PTSD. The utility of A2 as a screen must be tested prospectively.
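
    An odds ratio with a Woolf-type (log-scale) 95% confidence interval, of the kind reported above, can be computed from a 2x2 table as follows. The counts are hypothetical, chosen only to give a crude OR near the reported value; the paper's 2.66 is an adjusted estimate, which this sketch does not reproduce.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio from a 2x2 table with a Woolf (log) 95% CI.
    Table layout: exposed/outcome+ = a, exposed/outcome- = b,
                  unexposed/outcome+ = c, unexposed/outcome- = d."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: females vs males endorsing the A2 criterion.
print(odds_ratio_ci(300, 100, 150, 130))
```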

  13. Handoff Triggering and Network Selection Algorithms for Load-Balancing Handoff in CDMA-WLAN Integrated Networks

    Directory of Open Access Journals (Sweden)

    Khalid Qaraqe

    2008-10-01

    Full Text Available This paper proposes a novel vertical handoff algorithm between WLAN and CDMA networks to enable the integration of these networks. The proposed vertical handoff algorithm assumes a handoff decision process (handoff triggering and network selection). The handoff trigger is decided based on the received signal strength (RSS). To reduce the likelihood of unnecessary false handoffs, the distance criterion is also considered. As a network selection mechanism, based on the wireless channel assignment algorithm, this paper proposes a context-based network selection algorithm and the corresponding communication algorithms between WLAN and CDMA networks. This paper focuses on a handoff triggering criterion which uses both the RSS and distance information, and a network selection method which uses context information such as the dropping probability, blocking probability, grade of service (GoS), and number of handoff attempts. As a decision-making criterion, the velocity threshold is determined to optimize the system performance. The optimal velocity threshold is adjusted to assign the available channels to the mobile stations using four handoff strategies. The four handoff strategies are evaluated and compared with each other in terms of GoS. Finally, the proposed scheme is validated by computer simulations.
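
    The combined RSS-and-distance trigger described above can be sketched as a conjunction of two conditions: the handoff fires only when the signal is weak AND the mobile is near the WLAN cell edge, which suppresses false triggers from momentary fades inside the cell. The thresholds below are illustrative, not taken from the paper.

```python
def handoff_trigger(rss_dbm, distance_m, rss_threshold=-80.0, radius_m=100.0):
    """Trigger a WLAN -> CDMA handoff only when the received signal strength
    is below threshold AND the mobile is within the outer 10% of the WLAN
    coverage radius (both thresholds are assumed example values)."""
    return rss_dbm < rss_threshold and distance_m > 0.9 * radius_m

print(handoff_trigger(-85.0, 95.0))   # weak signal at the cell edge: handoff
print(handoff_trigger(-85.0, 20.0))   # deep fade near the AP: no handoff
```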

  14. Handoff Triggering and Network Selection Algorithms for Load-Balancing Handoff in CDMA-WLAN Integrated Networks

    Directory of Open Access Journals (Sweden)

    Kim Jang-Sub

    2008-01-01

    Full Text Available This paper proposes a novel vertical handoff algorithm between WLAN and CDMA networks to enable the integration of these networks. The proposed vertical handoff algorithm assumes a handoff decision process (handoff triggering and network selection). The handoff trigger is decided based on the received signal strength (RSS). To reduce the likelihood of unnecessary false handoffs, the distance criterion is also considered. As a network selection mechanism, based on the wireless channel assignment algorithm, this paper proposes a context-based network selection algorithm and the corresponding communication algorithms between WLAN and CDMA networks. This paper focuses on a handoff triggering criterion which uses both the RSS and distance information, and a network selection method which uses context information such as the dropping probability, blocking probability, grade of service (GoS), and number of handoff attempts. As a decision-making criterion, the velocity threshold is determined to optimize the system performance. The optimal velocity threshold is adjusted to assign the available channels to the mobile stations using four handoff strategies. The four handoff strategies are evaluated and compared with each other in terms of GoS. Finally, the proposed scheme is validated by computer simulations.

  15. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
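
    The discriminative scoring described above, a melody n-gram model contrasted with a background model, can be sketched with smoothed bigrams over pitch classes: the selected track is the one with the highest log-likelihood ratio. The training sequences, smoothing constant, and 12-symbol vocabulary are toy assumptions, not the letter's actual setup.

```python
import math
from collections import Counter

def bigram_model(sequences, vocab_size=12, alpha=1.0):
    """Additively smoothed bigram probabilities over pitch classes (0-11)."""
    big, uni = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            big[(a, b)] += 1
            uni[a] += 1
    return lambda a, b: (big[(a, b)] + alpha) / (uni[a] + alpha * vocab_size)

def score(track, melody_p, background_p):
    """Discriminative score: log-likelihood ratio of melody vs background."""
    return sum(math.log(melody_p(a, b) / background_p(a, b))
               for a, b in zip(track, track[1:]))

# Toy training data: melodies move in small steps, accompaniment leaps.
melodies = [[0, 2, 4, 5, 7, 5, 4, 2, 0]] * 20
accomp = [[0, 7, 0, 7, 4, 0, 7, 0]] * 20
mel_p, bg_p = bigram_model(melodies), bigram_model(accomp)

candidates = {"track0": [0, 2, 4, 5, 7], "track1": [0, 7, 0, 7, 0]}
best = max(candidates, key=lambda t: score(candidates[t], mel_p, bg_p))
print(best)
```

Dividing by the background model is what makes the score discriminative: a bigram common to all tracks contributes little, while melody-specific patterns dominate the decision.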

  16. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR)...

  17. Signal Detection with Criterion Noise: Applications to Recognition Memory

    Science.gov (United States)

    Benjamin, Aaron S.; Diaz, Michael; Wee, Serena

    2009-01-01

    A tacit but fundamental assumption of the theory of signal detection is that criterion placement is a noise-free process. This article challenges that assumption on theoretical and empirical grounds and presents the noisy decision theory of signal detection (ND-TSD). Generalized equations for the isosensitivity function and for measures of…

  18. Valence electron structure of cast iron and graphitization behaviour criterion of elements

    Institute of Scientific and Technical Information of China (English)

    刘志林; 李志林; 孙振国; 杨晓平; 陈敏

    1995-01-01

    The valence electron structure of common alloy elements in phases of cast iron is calculated. The relationship between the electron structure of alloy elements and equilibrium solidification, non-equilibrium solidification and graphitization is revealed by defining the bond energy of the strongest bond in a phase as the structure formation factor S. A criterion for the graphitization behaviour of elements is advanced with the critical value of the structure formation factor of graphite and the n of the strongest covalent bond in cementite. It is found that this theory conforms to practice very well when the criterion is applied to the common alloy elements.

  19. [X-ray evaluation of renal function in children with hydronephrosis as a criterion in the selection of therapeutic tactics].

    Science.gov (United States)

    Bosin, V Iu; Murvanidze, D D; Sturua, D G; Nabokov, A K; Soloshenko, V N

    1989-01-01

    The anatomic parameters of the kidneys and the rate of glomerular filtration were measured in 77 children with unilateral hydronephrosis and in 27 children with nonobstructive diseases of the urinary tract according to the clearance of an opaque medium during excretory urography. Alterations in the anatomic parameters of the kidneys in obstructive affection did not reflect the gravity of functional disorders. It has been established that there is a possibility of carrying out a separate assessment of filtration function of the hydronephrotic and contralateral kidneys. A new diagnostic criterion is offered, namely an index of relative clearance, which enables one to measure the degree of compensatory phenomena in the preserved glomeruli and the extent of sclerotic process. It has been demonstrated that accurate measurement of the functional parameters of the affected kidney should underlie the treatment choice in children with unilateral hydronephrosis.

  20. Prediction of fracture initiation in square cup drawing of DP980 using an anisotropic ductile fracture criterion

    Science.gov (United States)

    Park, N.; Huh, H.; Yoon, J. W.

    2017-09-01

    This paper deals with the prediction of fracture initiation in square cup drawing of DP980 steel sheet with the thickness of 1.2 mm. In an attempt to consider the influence of material anisotropy on the fracture initiation, an uncoupled anisotropic ductile fracture criterion is developed based on the Lou-Huh ductile fracture criterion. Tensile tests are carried out at different loading directions of 0°, 45°, and 90° to the rolling direction of the sheet using various specimen geometries including pure shear, dog-bone, and flat grooved specimens so as to calibrate the parameters of the proposed fracture criterion. Equivalent plastic strain distribution on the specimen surface is computed using the Digital Image Correlation (DIC) method until surface crack initiation. The proposed fracture criterion is implemented into the commercial finite element code ABAQUS/Explicit by developing the Vectorized User-defined MATerial (VUMAT) subroutine which features the non-associated flow rule. Simulation results of the square cup drawing test clearly show that the proposed fracture criterion is capable of predicting the fracture initiation with sufficient accuracy considering the material anisotropy.

  1. Definition of the generalized criterion of estimation of ecological purity of textile products

    International Nuclear Information System (INIS)

    Gintibidze, N.; Valishvili, T.

    2009-01-01

    One topical problem is the estimation of the hygienic and ecological properties of fabrics on the basis of data on the properties of the initial fiber. In the present article, the definition of a generalized criterion for the estimation of the ecological purity of textile products is discussed. The estimation is based on the International Standard EKO-TEX-100, which regulates the content of inorganic and organic compounds in textile production. The determination of all listed substances is made according to appropriate techniques for each parameter. The quantity of substances is determined and compared with norms. The judgement about ecological purity is made by separate parameters; there is no uniform parameter which could estimate the degree of ecological purity of textile products. For calculating the generalized criterion of estimation of ecological purity of textile products, it is proposed to score each criterion with points corresponding to each factor. The textile product is recognized as ecologically pure (environmentally friendly) if the total estimate is more than 1. (author)

  2. Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework.

    Science.gov (United States)

    Daemi, Mehdi; Harris, Laurence R; Crawford, J Douglas

    2016-01-01

    Animals try to make sense of sensory information from multiple modalities by categorizing them into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral, experimental results in tasks where the participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities; (2) predict the behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features; and (3) illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations.
Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such

  3. Diel habitat selection of largemouth bass following woody structure installation in Table Rock Lake, Missouri

    Science.gov (United States)

    Harris, J.M.; Paukert, Craig P.; Bush, S.C.; Allen, M.J.; Siepker, Michael

    2018-01-01

    Largemouth bass Micropterus salmoides (Lacepède) use of installed habitat structure was evaluated in a large Midwestern USA reservoir to determine whether or not these structures were used in similar proportion to natural habitats. Seventy largemouth bass (>380 mm total length) were surgically implanted with radio transmitters and a subset was relocated monthly during day and night for one year. The top habitat selection models (based on Akaike's information criterion) suggest largemouth bass select 2–4 m depths during night and 4–7 m during day, whereas littoral structure selection was similar across diel periods. Largemouth bass selected boat docks at twice the rate of other structures. Installed woody structure was selected at similar rates to naturally occurring complex woody structure, whereas both were selected at a higher rate than simple woody structure. The results suggest the addition of woody structure may concentrate largemouth bass and mitigate the loss of woody habitat in a large reservoir.

  4. Direct and correlated responses to selection for total weight of lamb ...

    African Journals Online (AJOL)

    Unknown

    productivity and that each of these components can be used as a selection criterion, as each has a direct impact on total ewe ... of lamb weaned per ewe joined is more efficient than selection for number of lambs born, number of lambs weaned or weaning ... The estimated grazing capacity is 5.5 ha per small stock unit.

  5. Evaluation of the Barr & Stroud FP15 and Criterion 400 laser dendrometers for measuring upper stem diameters and heights

    Science.gov (United States)

    Michael S. Williams; Kenneth L. Cormier; Ronald G. Briggs; Donald L. Martinez

    1999-01-01

    Calibrated Barr & Stroud FP15 and Criterion 400 laser dendrometers were tested for reliability in measuring upper stem diameters and heights under typical field conditions. Data were collected in the Black Hills National Forest, which covers parts of South Dakota and Wyoming in the United States. Mixed effects models were employed to account for differences between...

  6. An Integrated model for Product Quality Development—A case study on Quality functions deployment and AHP based approach

    Science.gov (United States)

    Maitra, Subrata; Banerjee, Debamalya

    2010-10-01

    The present article is based on application of product quality and design improvement related to the nature of failure of machinery and plant operational problems of an industrial blower fan company. The project aims at developing the product on the basis of standardized production parameters for selling its products in the market. Special attention is also paid to the blower fans which have been ordered directly by the customer on the basis of the installed capacity of air to be provided by the fan. Application of quality function deployment is primarily a customer-oriented approach. The proposed model of QFD integrated with AHP selects and ranks the decision criteria on commercial and technical factors and measures the decision parameters for selection of the best product in the competitive environment. The present AHP-QFD model justifies the selection of a blower fan with the help of a group of experts' opinions by pairwise comparison of the customer's and ergonomics-based technical design requirements. The steps involved in implementation of the QFD-AHP and selection of weighted criteria may be helpful for all similar-purpose industries maintaining cost and utility for a competitive product.
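
    The AHP step described above, deriving criterion weights from pairwise comparisons and checking their consistency, can be sketched with the principal-eigenvector method. The 3x3 comparison matrix (say, cost vs. air volume vs. noise) and the use of Saaty's random-index table are illustrative assumptions, not the case study's actual judgments.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix: normalized
    principal eigenvector, plus the consistency ratio
    CR = ((lambda_max - n) / (n - 1)) / RI (Saaty; CR < 0.1 is acceptable)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty random index
    ci = (eigvals[k].real - n) / (n - 1)
    return w, (ci / ri if ri else 0.0)

# Hypothetical judgments: cost moderately outranks air volume (3),
# strongly outranks noise (5); air volume moderately outranks noise (3).
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w, cr = ahp_weights(A)
print(w.round(3), round(cr, 3))
```

A reciprocal matrix that is perfectly consistent yields CR = 0; here the small positive CR confirms the (slightly inconsistent) judgments are still usable.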

  7. THERMODYNAMIC DEPRESSION OF IONIZATION POTENTIALS IN NONIDEAL PLASMAS: GENERALIZED SELF-CONSISTENCY CRITERION AND A BACKWARD SCHEME FOR DERIVING THE EXCESS FREE ENERGY

    International Nuclear Information System (INIS)

    Zaghloul, Mofreh R.

    2009-01-01

    Accurate and consistent prediction of thermodynamic properties is of great importance in high-energy density physics and in modeling stellar atmospheres and interiors as well. Modern descriptions of thermodynamic properties of such nonideal plasma systems are sophisticated and/or full of pitfalls that make them difficult, if not impossible, to reproduce. The use of the Saha equation modified at high densities by incorporating simple expressions for the depression of ionization potentials is very convenient in that context. However, as is commonly known, the incorporation of ad hoc or empirical expressions for the depression of ionization potentials in the Saha equation leads to thermodynamic inconsistencies. The problem of the thermodynamic consistency of ionization potential depression in nonideal plasmas is investigated and a criterion is derived which shows immediately whether a particular model for the ionization potential depression is self-consistent, that is, whether it can be directly related to a modification of the free-energy function, or not. A backward scheme is introduced which can be utilized to derive nonideality corrections to the free-energy function from formulas for ionization potential depression derived from plasma microfields or in an ad hoc or empirical fashion, provided that the aforementioned self-consistency criterion is satisfied. The value and usefulness of such a backward method are pointed out and discussed. The above-mentioned criterion is applied to investigate the thermodynamic consistency of some historic models in the literature and an optional routine is introduced to recover their thermodynamic consistency while maintaining the same functional dependence on the species densities as in the original models. Sample computational problems showing the effect of the proposed modifications on the computed plasma composition are worked out and presented.

  8. A new criterion for defective used nuclear fuel in dry storage condition

    Energy Technology Data Exchange (ETDEWEB)

    Desgranges, Lionel [CEA Cadarache Bat 316 13108Saint-Paul lez Durance (France); Poulesquen, Arnaud; Ferry, Cecile [CEA Saclay Bat. 450 point courrier no 40 - 91191 Gif-sur-Yvette cedex (France)

    2008-07-01

    In the frame of the PRECCI program, the mechanisms associated to the oxidation of used nuclear fuel were studied. Experiments on un-irradiated UO{sub 2}, evidenced that the oxidation proceeded in three stages (namely UO{sub 2} {yields} U{sub 4}O{sub 9}, U{sub 4}O{sub 9} {yields} U{sub 3}O{sub 7} and U{sub 3}O{sub 7} {yields} U{sub 3}O{sub 8}) instead of 2 as previously proposed in literature. Experiments on irradiated fuel fragments evidenced the existence of an oxidation front inside the fuel fragment associated to some fission gas release. The degradation of a fuel rod slice was simulated in situ with CROCODILE experiment, and shown to be due to the fuel swelling. Finally a new criterion was proposed defining a safe duration for defective used fuel in a dry air facility in the case of an accident scenario where a breach in a container would put a defective fuel rod in contact with atmosphere. The criterion is related to the time needed to form a given thickness of U{sub 4}O{sub 9{gamma}}{sub +{psi}} with a higher stoichiometry than a given value, in irradiated grains. This U{sub 4}O{sub 9{gamma}}{sub +{psi}} layer thickness is assumed to simulate the onset of the ceramic fragmentation. This safety duration was calculated thanks to a new modeling of fuel fragment oxidation.

  9. Advance Pricing Agreements and the Selectivity Criterion in EU State Aid Rules

    OpenAIRE

    Härö, O

    2017-01-01

    The Commission of the EU has recently decided that Advance Pricing Agreement rulings (the APA rulings) that Ireland, Luxembourg and the Netherlands have granted for Apple, Fiat and Starbucks (respectively) constitute illegal State aid according to Article 107 of the Treaty on the Functioning of the European Union (TFEU). The Commission claims that the APA rulings deviate from the arm's length principle and that they grant economic benefit for the beneficiary undertakings in a selective manner...

  10. Satisfying the Einstein–Podolsky–Rosen criterion with massive particles

    Science.gov (United States)

    Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.

    2015-01-01

    In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, where a measurement of one subsystem seemingly allows for a prediction of the second subsystem beyond the Heisenberg uncertainty relation. Up to now, continuous-variable EPR correlations have only been created with photons, while the demonstration of such strongly correlated states with massive particles is still outstanding. Here we report on the creation of an EPR-correlated two-mode squeezed state in an ultracold atomic ensemble. The state shows an EPR entanglement parameter of 0.18(3), which is 2.4 s.d. below the threshold 1/4 of the EPR criterion. We also present a full tomographic reconstruction of the underlying many-particle quantum state. The state presents a resource for tests of quantum nonlocality and a wide variety of applications in the field of continuous-variable quantum information and metrology. PMID:26612105
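
    For an idealized two-mode squeezed vacuum, the EPR product of inferred variances has a closed form, which makes the 1/4 threshold quoted above concrete. The normalization used here (vacuum quadrature variance 1/2, quadratures combined with a 1/sqrt(2) factor) is an assumption of this sketch; the paper's entanglement parameter is defined for its specific atomic measurement scheme.

```python
import math

def epr_product(r):
    """EPR variance product for an ideal two-mode squeezed vacuum with
    squeezing parameter r. With X- = (X1 - X2)/sqrt(2) and
    P+ = (P1 + P2)/sqrt(2), both combinations have variance exp(-2r)/2,
    so the product is exp(-4r)/4; values below 1/4 satisfy the criterion."""
    var_xm = 0.5 * math.exp(-2.0 * r)
    var_pp = 0.5 * math.exp(-2.0 * r)
    return var_xm * var_pp

# r = 0 (no squeezing) sits exactly at the 1/4 threshold; any r > 0 drops
# below it. In this idealized model the reported value of about 0.18
# would correspond to a modest squeezing of r ~ 0.08.
print(epr_product(0.0), epr_product(0.082))
```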

  12. Criterion-Referenced Values of Grip Strength and Usual Gait Speed Using Instrumental Activities of Daily Living Disability as the Criterion.

    Science.gov (United States)

    Lee, Meng-Chih; Hsu, Chih-Cheng; Tsai, Yi-Fen; Chen, Ching-Yu; Lin, Cheng-Chieh; Wang, Ching-Yi

    Current evidence suggests that grip strength and usual gait speed (UGS) are important predictors of instrumental activities of daily living (IADL) disability. Knowing the optimum cut points of these tests for discriminating people with and without IADL disability could help clinicians or researchers to better interpret the test results and make medical decisions. The purpose of this study was to determine the cutoff values of grip strength and UGS for best discriminating community-dwelling older adults with and without IADL disability, separately for men and women, and to investigate their association with IADL disability. We conducted secondary data analysis on a national dataset collected in the Sarcopenia and Translational Aging Research in Taiwan (START). The data used in this study consisted of health data of 2420 community-dwelling older adults 65 years and older with no history of stroke and with complete data. IADL disability was defined as at least 1 IADL item scored as "need help" or "unable to perform." Receiver operating characteristics analysis was used to estimate the optimum grip strength and UGS cut points for best discriminating older adults with/without IADL disability. The association between each physical performance (grip strength and UGS) and IADL disability was assessed with odds ratios (ORs). With IADL disability as the criterion, the optimal cutoff values of grip strength were 28.7 kg for men and 16.0 kg for women, and those for UGS were 0.76 m/s for men and 0.66 m/s for women. The grip strength test showed satisfactory discriminant validity (area under the curve > 0.7) in men and a strong association with IADL disability (OR > 4). Our cut points using IADL disability as the criterion were close to those indicating frailty or sarcopenia. Our reported cutoffs can serve as criterion-referenced values, along with those previously determined using different indicators, and provide important landmarks on the performance continua of older adults.
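
    A common way to extract a single cutoff from a ROC analysis, as in the study above, is to maximize Youden's J (sensitivity + specificity − 1). A minimal pure-Python sketch; the grip-strength values and disability labels below are invented for illustration, not the study's data:

    ```python
    def youden_cutoff(values, has_disability):
        """Return the cutoff maximizing Youden's J = sensitivity + specificity - 1.

        Assumes lower test values accompany disability: a value below the
        cutoff is classified as 'disability'.
        """
        best_cut, best_j = None, -1.0
        for cut in sorted(set(values)):
            tp = sum(1 for v, d in zip(values, has_disability) if d and v < cut)
            fn = sum(1 for v, d in zip(values, has_disability) if d and v >= cut)
            tn = sum(1 for v, d in zip(values, has_disability) if not d and v >= cut)
            fp = sum(1 for v, d in zip(values, has_disability) if not d and v < cut)
            sens = tp / (tp + fn) if tp + fn else 0.0
            spec = tn / (tn + fp) if tn + fp else 0.0
            j = sens + spec - 1.0
            if j > best_j:                       # keep the first maximizer on ties
                best_j, best_cut = j, cut
        return best_cut, best_j

    # Hypothetical grip-strength data (kg): lower values accompany disability.
    grip = [32, 30, 29, 27, 25, 24, 22, 20, 18, 15]
    dis = [False, False, False, False, True, False, True, True, True, True]
    cut, j = youden_cutoff(grip, dis)
    ```

    In practice the candidate cutoffs would come from a ROC routine (e.g., scikit-learn's `roc_curve`), but the maximization step is the same.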

  13. APPLICATION OF THE CERNE MODEL FOR THE ESTABLISHMENT OF INCUBATION SELECTION CRITERIA IN TECHNOLOGY-BASED BUSINESSES: A STUDY IN BRAZILIAN TECHNOLOGY-BASED INCUBATORS

    Directory of Open Access Journals (Sweden)

    Clobert Jefferson Passoni

    2017-03-01

    Full Text Available Business incubators are a great source of encouragement for innovative projects, enabling the development of new technologies and providing infrastructure, advice and support, which are key elements for the success of new businesses. There are 154 technology-based firm incubators (TBFs) in Brazil, and each has its own mechanism for selecting companies for incubation. Because incubators are managed in different ways, the business model CERNE - Reference Center for Support for New Projects - was created by Anprotec and Sebrae in order to standardize procedures and improve the chances of successful incubation. The objective of this study is to propose selection criteria for incubation, considering CERNE's five dimensions, to support decision-making when assessing candidate companies for a TBF incubator. The research was conducted from the public notices of 20 TBF incubators, in which 38 selection criteria were identified and classified. Managers of TBF incubators rated the importance of 26 of these criteria via online questionnaires. As a result, favorable ratings were obtained for 25 of them; only one criterion differed from the others, receiving an unfavorable rating.

  14. Rayleigh Number Criterion for Formation of A-Segregates in Steel Castings and Ingots

    DEFF Research Database (Denmark)

    Rad, M. Torabi; Kotas, Petr; Beckermann, C.

    2013-01-01

    A Rayleigh number-based criterion is developed for predicting the formation of A-segregates in steel castings and ingots. The criterion is calibrated using available experimental data for ingots involving 27 different steel compositions. The critical Rayleigh number above which A-segregates can be … ; the primary reason for this over-prediction is presumed to be the presence of a central zone of equiaxed grains in the casting sections. A-segregates do not form when the grain structure is equiaxed. © The Minerals, Metals & Materials Society and ASM International 2013...

  15. Exploring DSM-5 criterion A in Acute Stress Disorder symptoms following natural disaster.

    Science.gov (United States)

    Lavenda, Osnat; Grossman, Ephraim S; Ben-Ezra, Menachem; Hoffman, Yaakov

    2017-10-01

    The present study examines the DSM-5 Acute Stress Disorder (ASD) diagnostic criteria of exposure, in the context of a natural disaster. The study is based on the reports of 1001 Filipinos following the aftermath of super typhoon Haiyan in 2013. Participants reported exposure to injury, psychological distress and ASD symptoms. Findings indicated the association of criterion A with the prevalence of meeting all other ASD diagnostic criteria and high psychological distress. The diagnostic properties of Criterion A are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. A phenomenological prediction of dryout based on the churn-to-annular flow transition criterion in uniformly heated vertical tubes

    International Nuclear Information System (INIS)

    Hong, Sung-Deok; Chun, Se-Young; Yang, Sun-Kyu; Chung, Moon-Ki; Lashgari, Farbod

    2000-01-01

    A phenomenological model is proposed to predict dryout in uniformly heated vertical tubes. The major point of the study was refining the initial conditions at the onset-of-annular-flow location, which starts the liquid film dryout process. The void fraction at the onset of annular flow has been derived from the vapor superficial velocity obtained by the churn-to-annular flow criterion with the help of the void-quality relationship. The thermodynamic equilibrium quality was calculated through iteration of the flow quality using the profile-fit model to find the accurate starting point of annular flow in a tube. The present method was validated against worldwide data covering wide parametric ranges: a diameter of 5.1-37.5, exit quality over 10%, a flow rate of 183-5261 kg/m²s and a system pressure of 0.5-17.7 MPa. The churn-to-annular flow transition criterion of Taitel et al. shows better prediction results than the other transition criteria. The present model improved the CHF prediction capability, with a mean of 0.97 and a root mean square error of 11% for the 3883 experimental data, and extended the applicable range to the relatively low quality region. (author)

  17. A Business Process Management System based on a General Optimium Criterion

    Directory of Open Access Journals (Sweden)

    Vasile MAZILESCU

    2009-01-01

    Full Text Available Business Process Management Systems (BPMS) provide a broad range of facilities to manage operational business processes. These systems should provide support for the complete Business Process Management (BPM) life-cycle [16]: (re)design, configuration, execution, control, and diagnosis of processes. BPMS can be seen as successors of Workflow Management (WFM) systems. However, already in the seventies people were working on office automation systems which are comparable with today's WFM systems. Recently, WFM vendors started to position their systems as BPMS. Our paper's goal is a proposal for a Tasks-to-Workstations Assignment Algorithm (TWAA) for assembly lines, which is a special implementation of a stochastic descent technique, in the context of BPMS, especially at the control level. Both cases, single and mixed-model, are treated. For a family of product models having the same generic structure, the mixed-model assignment problem can be formulated through an equivalent single-model problem. A general optimum criterion is considered. As with assembly line balancing, this kind of optimisation problem leads to a graph partitioning problem meeting precedence and feasibility constraints. The proposed definition for the "neighbourhood" function involves an efficient way of treating the partition and precedence constraints. Moreover, the Stochastic Descent Technique (SDT) allows an implicit treatment of the feasibility constraint. The proposed algorithm converges with probability 1 to an optimal solution.
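
    The abstract does not spell out the TWAA itself; the following is only a toy sketch of a stochastic descent on a tasks-to-workstations assignment, minimizing the maximum station load (cycle time) and omitting the paper's precedence and feasibility constraints. Task durations are invented:

    ```python
    import random

    def stochastic_descent_assignment(task_times, n_stations, iters=2000, seed=1):
        """Toy tasks-to-workstations assignment by stochastic descent."""
        rng = random.Random(seed)
        assign = [rng.randrange(n_stations) for _ in task_times]

        def cycle_time(a):
            loads = [0.0] * n_stations
            for t, s in zip(task_times, a):
                loads[s] += t
            return max(loads)            # the station that bottlenecks the line

        best = cycle_time(assign)
        for _ in range(iters):
            i = rng.randrange(len(task_times))     # pick a random task
            old = assign[i]
            assign[i] = rng.randrange(n_stations)  # move it to a random station
            cand = cycle_time(assign)
            if cand <= best:                       # keep non-worsening moves
                best = cand
            else:
                assign[i] = old                    # undo worsening moves
        return assign, best

    tasks = [4, 3, 6, 2, 5, 4]   # hypothetical task durations
    assignment, c = stochastic_descent_assignment(tasks, n_stations=3)
    ```

    Accepting equal-cost moves lets the search wander plateaus, which is one simple way such descent schemes escape shallow local minima.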

  18. Early Stop Criterion from the Bootstrap Ensemble

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan; Fog, Torben L.

    1997-01-01

    This paper addresses the problem of generalization error estimation in neural networks. A new early stop criterion based on a Bootstrap estimate of the generalization error is suggested. The estimate does not require the network to be trained to the minimum of the cost function, as required by other methods based on asymptotic theory. Moreover, in contrast to methods based on cross-validation, which require data to be left out for testing and thus bias the estimate, the Bootstrap technique does not have this disadvantage. The potential of the suggested technique is demonstrated on various time...

  19. Instance selection in digital soil mapping: a study case in Rio Grande do Sul, Brazil

    Directory of Open Access Journals (Sweden)

    Elvio Giasson

    2015-09-01

    Full Text Available A critical issue in digital soil mapping (DSM) is the selection of the data sampling method for model training. One emerging approach applies instance selection to reduce the size of the dataset by drawing only relevant samples, in order to obtain a representative subset that is still large enough to preserve relevant information but small enough to be easily handled by learning algorithms. Although there are suggestions to distribute data sampling as a function of the location of soil map unit (MU) boundaries, research recommendations still contradict one another on locating samples either closer to or more distant from soil MU boundaries. A study was conducted to evaluate instance selection methods based on spatially-explicit data collection, using location in relation to soil MU boundaries as the main criterion. Decision tree analysis was performed for modeling digital soil class mapping using two different sampling schemes: (a) selecting sampling points located outside buffers near soil MU boundaries, and (b) selecting sampling points located within buffers near soil MU boundaries. Data were prepared for generating classification trees to include only data points located within or outside buffers with widths of 60, 120, 240, 360, 480, and 600 m near MU boundaries. Both spatial instance selection methods were effective in reducing the size of the dataset used for calibrating the classification tree models, but failed to provide advantages for digital soil mapping because of the potential reduction in the accuracy of the classification tree models.
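
    The two sampling schemes reduce to filtering candidate points by their distance to the nearest map-unit boundary. A minimal sketch; the point names, distances, and helper name are invented for illustration:

    ```python
    def select_instances(points, boundary_dist, buffer_width, inside=True):
        """Select training instances by location relative to soil MU boundaries.

        `boundary_dist` maps each point to its distance (m) from the nearest
        map-unit boundary. inside=True keeps points within the buffer
        (scheme b above); inside=False keeps points outside it (scheme a).
        """
        if inside:
            return [p for p in points if boundary_dist[p] <= buffer_width]
        return [p for p in points if boundary_dist[p] > buffer_width]

    # Hypothetical distances to the nearest MU boundary, in metres.
    dist = {"p1": 30, "p2": 100, "p3": 250, "p4": 500}
    near = select_instances(list(dist), dist, 120, inside=True)   # within 120 m
    far = select_instances(list(dist), dist, 120, inside=False)   # outside 120 m
    ```

    In a real DSM workflow the distances would come from a GIS buffer operation on the soil map polygons rather than a lookup table.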

  20. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than for a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.
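
    For Gaussian regression models the criteria compared above can be computed directly from the residual sum of squares. A sketch with invented fit results, in which a complex model buys a small RSS reduction with three extra parameters; note how BIC's log(n) penalty punishes the extra parameters more heavily than AIC's constant penalty of 2 per parameter:

    ```python
    import math

    def aic_bic(rss, n, k):
        """Gaussian AIC and BIC from the residual sum of squares.

        k counts all estimated parameters (including the error variance).
        """
        ll = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)  # max log-likelihood
        return 2 * k - 2 * ll, k * math.log(n) - 2 * ll

    n = 100
    # Hypothetical fits: the complex model lowers RSS slightly at the cost of
    # three extra parameters.
    aic_std, bic_std = aic_bic(rss=52.0, n=n, k=4)
    aic_cplx, bic_cplx = aic_bic(rss=50.0, n=n, k=7)
    ```

    With these numbers both criteria prefer the standard model, but the BIC margin is several times larger, which mirrors the study's finding that BIC-type criteria favour the simpler data generating process.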

  1. Basis note for the development of a test criterion for the storage of radioactive waste

    International Nuclear Information System (INIS)

    1987-09-01

    Aspects are described which may play a role in the development of a criterion for the final storage of radioactive waste. Such a criterion consists of a set of requirements concerning the quality of the environment and public health, against which a proposal for the final storage of the waste has to be compared. The criterion should be able to give decisive answers regarding the acceptability of a storage proposal, but should also be useful for judging the risk analyses that this requires. 10 refs.; 9 figs.; 3 tabs

  2. The precautionary principle as a rational decision criterion; Foere var-prinsippet som rasjonelt beslutningsgrunnlag

    Energy Technology Data Exchange (ETDEWEB)

    Hovi, Jon

    2001-12-01

    The paper asks whether the precautionary principle may be seen as a rational decision criterion. Six main questions are discussed. 1. Does the principle basically represent a particular set of political options or is it a genuine decision criterion? 2. If it is the latter, can it be reduced to any of the existing criteria for decision making under uncertainty? 3. In what kinds of situations is the principle applicable? 4. What is the relation between the precautionary principle and other principles for environmental regulation? 5. How plausible is the principle's claim that the burden of proof should be reversed? 6. Do the proponents of environmental regulation carry no burden of proof at all? A main conclusion is that, for now at least, the principle contains too many unclear elements to satisfy the requirements of precision and consistency that should reasonably be satisfied by a rational decision criterion. (author)

  3. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    Full Text Available This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces; the model can also be used in auctions. The supplier selection process has become complex and difficult over the last twenty years, as the Internet plays an important role in business management. Companies have to concentrate their efforts on their core activities, and the other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than considering a single factor, cost.

  4. MID-INFRARED SELECTION OF ACTIVE GALACTIC NUCLEI WITH THE WIDE-FIELD INFRARED SURVEY EXPLORER. I. CHARACTERIZING WISE-SELECTED ACTIVE GALACTIC NUCLEI IN COSMOS

    International Nuclear Information System (INIS)

    Stern, Daniel; Assef, Roberto J.; Eisenhardt, Peter; Benford, Dominic J.; Blain, Andrew; Cutri, Roc; Griffith, Roger L.; Jarrett, T. H.; Masci, Frank; Tsai, Chao-Wei; Yan, Lin; Dey, Arjun; Lake, Sean; Petty, Sara; Wright, E. L.; Stanford, S. A.; Harrison, Fiona; Madsen, Kristin

    2012-01-01

    The Wide-field Infrared Survey Explorer (WISE) is an extremely capable and efficient black hole finder. We present a simple mid-infrared color criterion, W1 – W2 ≥ 0.8 (i.e., [3.4]–[4.6] ≥ 0.8, Vega), which identifies 61.9 ± 5.4 active galactic nucleus (AGN) candidates per deg² to a depth of W2 ∼ 15.0. This implies a much larger census of luminous AGNs than found by typical wide-area surveys, attributable to the fact that mid-infrared selection identifies both unobscured (type 1) and obscured (type 2) AGNs. Optical and soft X-ray surveys alone are highly biased toward only unobscured AGNs, while this simple WISE selection likely identifies even heavily obscured, Compton-thick AGNs. Using deep, public data in the COSMOS field, we explore the properties of WISE-selected AGN candidates. At the mid-infrared depth considered, 160 μJy at 4.6 μm, this simple criterion identifies 78% of Spitzer mid-infrared AGN candidates according to the criteria of Stern et al., and the reliability is 95%. We explore the demographics, multiwavelength properties and redshift distribution of WISE-selected AGN candidates in the COSMOS field.
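
    The color criterion itself is a one-line cut on catalog magnitudes. A minimal sketch applying the W1 − W2 ≥ 0.8 (Vega) cut from the abstract; the source names and magnitudes below are invented:

    ```python
    def wise_agn_candidates(sources):
        """Flag AGN candidates with the mid-IR color cut W1 - W2 >= 0.8 (Vega).

        `sources` maps a source name to its (W1, W2) Vega magnitudes.
        """
        return [name for name, (w1, w2) in sources.items() if w1 - w2 >= 0.8]

    # Hypothetical catalog entries: (W1, W2) Vega magnitudes.
    cat = {"objA": (13.9, 12.8), "objB": (14.2, 14.0), "objC": (15.2, 14.3)}
    agn = wise_agn_candidates(cat)
    ```

    A production selection would also apply the survey's depth limit (W2 ∼ 15.0 here) and signal-to-noise cuts before the color criterion.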

  5. Criterion for vortex breakdown on shock wave and streamwise vortex interactions.

    Science.gov (United States)

    Hiejima, Toshihiko

    2014-05-01

    The interactions between supersonic streamwise vortices and oblique shock waves are investigated theoretically and numerically with the three-dimensional (3D) Navier-Stokes equations. Based on two inequalities, a criterion for shock-induced breakdown of the streamwise vortex is proposed. The simple breakdown condition depends on the Mach number, the swirl number, the velocity deficit, and the shock angle. According to the proposed criterion, the breakdown region expands as the Mach number increases. In numerical simulations, vortex breakdown appeared under conditions of multiple pressure increases, and the helicity disappeared behind the oblique shock wave along the line of the vortex center. The numerical results are consistent with the predicted breakdown condition at Mach numbers 2.0 and 3.0. This study also found that the axial velocity deficit is important for classifying the breakdown configuration.

  6. An adapted yield criterion for the evolution of subsequent yield surfaces

    Science.gov (United States)

    Küsters, N.; Brosius, A.

    2017-09-01

    In numerical analysis of sheet metal forming processes, the anisotropic material behaviour is often modelled with isotropic work hardening and an average Lankford coefficient. In contrast, experimental observations show an evolution of the Lankford coefficients, which can be associated with a change of the yield surface due to kinematic and distortional hardening. Commonly, extensive efforts are carried out to describe these phenomena. In this paper an isotropic material model based on the Yld2000-2d criterion is adapted with an evolving yield exponent in order to change the yield surface shape. The yield exponent is linked to the accumulated plastic strain. This change has the effect of rotating the yield surface normal. As the normal is directly related to the Lankford coefficient, the change can be used to model the evolution of the Lankford coefficient during yielding. The paper will focus on the numerical implementation of the adapted material model for the FE-code LS-Dyna, mpi-version R7.1.2-d. A recently introduced identification scheme [1] is used to obtain the parameters for the evolving yield surface and will be briefly described for the proposed model. The suitability for numerical analysis will be discussed for deep drawing processes in general. Efforts for material characterization and modelling will be compared to other common yield surface descriptions. Besides experimental efforts and achieved accuracy, the potential of flexibility in material models and the risk of ambiguity during identification are of major interest in this paper.

  7. Study of Post-Peak Strain Softening Mechanical Behaviour of Rock Material Based on Hoek–Brown Criterion

    OpenAIRE

    Qibin Lin; Ping Cao; Peixin Wang

    2018-01-01

    In order to build a post-peak strain softening model of rock, the evolution laws of the rock parameters m, s were obtained by using a piecewise linear function of the maximum principal stress. Based on the nonlinear Hoek–Brown criterion, the analytical relationship between the rock strength parameters m, s, the cohesion c, and the friction angle φ has been developed by theoretical derivation. According to the analysis of the four different types of rock, it is found that, within ...

  8. Site selection and evaluation of nuclear power units in Egypt

    International Nuclear Information System (INIS)

    Bonnefille, R.

    1980-01-01

    The selection of sites for nuclear power units in Egypt by SOFRATOME for the Nuclear Plants Authority is carried out using a method based on the interaction between different criteria. The method and the main results on the criterion 'radio-ecological impact' are sketched briefly [fr

  9. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate the uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification...

  10. A practical criterion of irreducibility of multi-loop Feynman integrals

    International Nuclear Information System (INIS)

    Baikov, P.A.

    2006-01-01

    A practical criterion for the irreducibility (with respect to integration-by-parts identities) of a particular Feynman integral to a given set of integrals is presented. The irreducibility is shown to be related to the existence of stable (zero-gradient) points of a specially constructed polynomial.

  11. Distinct anatomical correlates of discriminability and criterion setting in verbal recognition memory revealed by lesion-symptom mapping

    NARCIS (Netherlands)

    Biesbroek, J Matthijs; van Zandvoort, Martine J E; Kappelle, L Jaap; Schoo, Linda; Kuijf, Hugo J; Velthuis, Birgitta K; Biessels, Geert Jan; Postma, Albert

    2014-01-01

    Recognition memory, that is, the ability to judge whether an item has been previously encountered in a particular context, depends on two factors: discriminability and criterion setting. Discriminability draws on memory processes while criterion setting (i.e., the application of a threshold

  13. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that jointly optimize more than one predefined objective. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, based on the sequential probability ratio test with an indifference zone. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.
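
    SPRINT-Race builds on Wald's sequential probability ratio test; the ternary-decision, indifference-zone variant it uses is more involved, but the classical two-hypothesis SPRT conveys the core idea of stopping as soon as the evidence is strong enough. A sketch for a Bernoulli win rate, with invented rates and data:

    ```python
    import math

    def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
        """Wald's SPRT for a Bernoulli rate: H0 (p = p0) vs H1 (p = p1).

        Processes observations one at a time and stops as soon as the
        log-likelihood ratio crosses an acceptance boundary.
        """
        upper = math.log((1 - beta) / alpha)   # cross it: accept H1
        lower = math.log(beta / (1 - alpha))   # cross it: accept H0
        llr = 0.0
        for n, x in enumerate(samples, 1):
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "undecided", len(samples)

    # A stream of pairwise "wins" consistent with the higher rate p1 = 0.8.
    decision, used = sprt([1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1], p0=0.2, p1=0.8)
    ```

    The racing setting runs one such test per pair of models and retires a model as soon as its test concludes, which is what saves computation relative to fixed-sample comparison.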

  14. Optimizing phonon space in the phonon-coupling model

    Science.gov (United States)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.

    2017-08-01

    We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use here the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence with the new criterion and can work out a natural (optimal) cutoff parameter. Then we use the freshly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei, looking also at the dependence of the results when varying the Skyrme parametrization.

  15. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
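
    The model-averaging step described above typically uses Akaike weights: each model's AIC is converted to a relative likelihood and normalized, and per-site profiles are then averaged with these weights instead of committing to the single best model. A minimal sketch with invented AIC values:

    ```python
    import math

    def akaike_weights(aic_values):
        """Akaike weights: exp(-delta_i / 2) normalized to sum to 1,
        where delta_i is each model's AIC minus the minimum AIC."""
        best = min(aic_values)
        rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
        total = sum(rel)
        return [r / total for r in rel]

    # Hypothetical AICs for three candidate clustering models.
    w = akaike_weights([210.2, 212.2, 218.9])
    ```

    The same construction works with AICc or BIC in place of AIC; only the per-model score changes.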

  16. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.
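
    A small sketch of why an impulse input is so convenient for FIR identification: the noise-free output of an FIR channel driven by a unit impulse is exactly the tap sequence. The channel taps below are invented:

    ```python
    def convolve_fir(taps, x):
        """Output of an FIR channel: y[n] = sum_k taps[k] * x[n - k]."""
        y = []
        for n in range(len(x)):
            y.append(sum(t * x[n - k] for k, t in enumerate(taps) if n - k >= 0))
        return y

    true_taps = [0.9, 0.4, -0.2]          # hypothetical channel
    impulse = [1.0, 0.0, 0.0, 0.0, 0.0]   # impulse at the start of observation
    y = convolve_fir(true_taps, impulse)
    estimated = y[:len(true_taps)]        # impulse response = the taps themselves
    ```

    With noisy measurements one would instead average repeated impulse responses or solve a least-squares problem, but the identifiability argument is the same.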

  17. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
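
    The pseudolikelihood idea can be sketched as follows; this is a simplified stand-in, not the authors' open-source software. Candidate working correlation matrices are scored by the Gaussian log-density of the cluster residuals, and the structure with the larger value is kept:

    ```python
    import numpy as np

    def gaussian_pseudo_loglik(resid_by_cluster, R):
        """Sum of multivariate normal log-densities of cluster residuals under
        working correlation matrix R (unit variances assumed for simplicity)."""
        _, logdet = np.linalg.slogdet(R)
        Rinv = np.linalg.inv(R)
        ll = 0.0
        for r in resid_by_cluster:
            ll += -0.5 * (logdet + r @ Rinv @ r + len(r) * np.log(2 * np.pi))
        return ll

    # Simulate 200 clusters of size 4 with exchangeable correlation 0.6
    rng = np.random.default_rng(1)
    rho, m = 0.6, 4
    R_exch = np.full((m, m), rho) + (1 - rho) * np.eye(m)
    L = np.linalg.cholesky(R_exch)
    resids = [L @ rng.standard_normal(m) for _ in range(200)]

    ll_indep = gaussian_pseudo_loglik(resids, np.eye(m))  # independence working model
    ll_exch = gaussian_pseudo_loglik(resids, R_exch)      # exchangeable working model
    best = "exchangeable" if ll_exch > ll_indep else "independence"
    ```

    In a real GEE analysis the residuals would come from the fitted mean model and the correlation parameters would be estimated, but the selection step compares candidate structures in the same way.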

  18. An adaptive pruning algorithm for the discrete L-curve criterion

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Toke Koldborg; Rodriguez, Giuseppe

    2007-01-01

    We describe a robust and adaptive implementation of the L-curve criterion, i.e., for locating the corner of a discrete L-curve consisting of a log-log plot of corresponding residual and solution norms of regularized solutions from a method with a discrete regularization parameter (such as truncated...
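
    A simple stand-in for corner location (not the adaptive pruning algorithm itself, whose details go beyond this abstract): treat the interior point of the log-log curve with the sharpest angle between its neighbors as the corner:

    ```python
    import numpy as np

    def lcurve_corner(res_norms, sol_norms):
        """Index of the discrete L-curve corner: the interior point in the
        log-log plane whose neighbor vectors make the smallest angle."""
        x, y = np.log(res_norms), np.log(sol_norms)
        best_i, best_angle = 1, np.inf
        for i in range(1, len(x) - 1):
            v1 = np.array([x[i - 1] - x[i], y[i - 1] - y[i]])
            v2 = np.array([x[i + 1] - x[i], y[i + 1] - y[i]])
            cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            angle = np.arccos(np.clip(cosang, -1.0, 1.0))
            if angle < best_angle:      # sharpest corner = smallest angle
                best_angle, best_i = angle, i
        return best_i

    # Synthetic L-shaped curve: residual norms grow as solution norms shrink,
    # with a corner at index 3
    res = np.array([1e-3, 1e-2, 1e-1, 1.0, 2.0, 3.0])
    sol = np.array([1e3, 1e2, 1e1, 1.0, 0.9, 0.8])
    corner = lcurve_corner(res, sol)
    ```

    The adaptive pruning of the paper improves on this kind of local-angle heuristic, which can be fooled by small clusters of points on a fine-grained discrete L-curve.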

  19. Applying Four Different Risk Models in Local Ore Selection

    International Nuclear Information System (INIS)

    Richmond, Andrew

    2002-01-01

    Given the uncertainty in grade at a mine location, a financially risk-averse decision-maker may prefer to incorporate this uncertainty into the ore selection process. A FORTRAN program risksel is presented to calculate local risk-adjusted optimal ore selections using a negative exponential utility function and three dominance models: mean-variance, mean-downside risk, and stochastic dominance. All four methods are demonstrated in a grade control environment. In the case study, optimal selections vary with the magnitude of financial risk that a decision-maker is prepared to accept. Except for the stochastic dominance method, the risk models reassign material from higher cost to lower cost processing options as the aversion to financial risk increases. The stochastic dominance model was usually unable to determine the optimal local selection.
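
    The certainty-equivalent comparison underlying utility-based selection can be sketched as follows; the grade distribution, profit functions, and option names are hypothetical, not taken from risksel. Under a negative exponential utility with risk aversion a, the certainty equivalent of an uncertain profit is CE = -(1/a) log E[exp(-a * profit)], and the option with the larger CE is selected:

    ```python
    import numpy as np

    def certainty_equivalent(profits, risk_aversion):
        """Certainty equivalent under negative exponential utility."""
        a = risk_aversion
        return -np.log(np.mean(np.exp(-a * profits))) / a

    # Simulated block grades and two hypothetical processing options:
    # milling (higher revenue, higher cost) vs. leaching (lower on both)
    rng = np.random.default_rng(2)
    grades = rng.normal(1.2, 0.4, size=20000)
    profit_mill = 80 * grades - 60
    profit_leach = 35 * grades - 15

    def best_option(a):
        ce_mill = certainty_equivalent(profit_mill, a)
        ce_leach = certainty_equivalent(profit_leach, a)
        return "mill" if ce_mill > ce_leach else "leach"
    ```

    At low risk aversion the higher-mean milling option wins; as risk aversion grows, the selection shifts to the lower-variance, lower-cost option, mirroring the reassignment behavior reported in the case study.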

  20. Detecting consistent patterns of directional adaptation using differential selection codon models.

    Science.gov (United States)

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.
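
    The mutation-selection logic that such codon models build on can be illustrated with the classical Halpern-Bruno fixation factor; this is a toy sketch, not the authors' Bayesian MCMC implementation, and the selection coefficients and condition labels below are invented. A mutation with scaled selection coefficient S substitutes at the neutral rate multiplied by S / (1 - exp(-S)), so condition-specific fitness preferences translate directly into condition-specific substitution rates:

    ```python
    import numpy as np

    def fixation_factor(S):
        """Relative fixation rate of a mutation with scaled selection
        coefficient S (Halpern-Bruno form); 1.0 in the neutral limit."""
        if abs(S) < 1e-8:
            return 1.0
        return S / (1.0 - np.exp(-S))

    mu = 1e-3  # mutation rate for a given codon change (hypothetical)

    # Hypothetical site-specific fitness effect of the change under two
    # conditions (e.g. hosts with vs. without a given HLA allele):
    # deleterious in one condition, favored in the other
    S_by_condition = {"condition_A": -2.0, "condition_B": +2.0}

    rates = {c: mu * fixation_factor(S) for c, S in S_by_condition.items()}
    ```

    A differentially selected position is one where such condition-specific rates (equivalently, amino acid preferences) differ credibly between conditions, which is what the model's site- and condition-specific parameters capture.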