WorldWideScience

Sample records for models permutation methods

  1. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...

  2. A chronicle of permutation statistical methods 1920–2000, and beyond

    CERN Document Server

    Berry, Kenneth J; Mielke Jr, Paul W

    2014-01-01

    The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally-intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, ana...

  3. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
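
    As a rough illustration of the selection rule (a sketch of our reading, not the authors' implementation): for a column-standardized design matrix, the smallest LASSO penalty that keeps every coefficient at zero is max_j |x_j^T y| / n, so permuting the response and collecting these thresholds yields a null reference distribution for the penalty. The function name, the NumPy usage, and the median default below are our own choices.

      import numpy as np

      def permutation_select_lambda(X, y, n_perm=100, quantile=0.5, seed=None):
          # For each permuted response, record the smallest penalty that
          # excludes all predictors (lambda_max = max|X^T y_perm| / n for
          # column-standardized X), then take a quantile of these values.
          rng = np.random.default_rng(seed)
          n = len(y)
          lams = [np.max(np.abs(X.T @ rng.permutation(y))) / n
                  for _ in range(n_perm)]
          return float(np.quantile(lams, quantile))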

  4. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Directory of Open Access Journals (Sweden)

    Hossein Karimi

    2011-04-01

    The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The proposed method is examined using numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better for solving APM.

  5. Determination of Pavement Rehabilitation Activities through a Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Sangyum Lee

    2013-01-01

    This paper presents a mathematical programming model for optimal pavement rehabilitation planning. The model maximizes the rehabilitation area through a newly developed permutation algorithm based on the procedures outlined in the harmony search (HS) algorithm. The proposed algorithm serves as an optimal solution method for the problem of multilocation rehabilitation activities on pavement structures, using empirical deterioration and rehabilitation effectiveness models under a limited maintenance budget. Thus, nonlinear pavement performance and rehabilitation activity decision models were used to maximize the objective function of the rehabilitation area within a limited budget through the permutation algorithm. Our results showed that the heuristic permutation algorithm provided a good optimum in terms of maximizing the rehabilitation area compared with the worst-first maintenance method currently used in Seoul.

  6. Permutations

    International Nuclear Information System (INIS)

    Arnold, Vladimir I

    2009-01-01

    Decompositions into cycles for random permutations of a large number of elements are very different (in their statistics) from the same decompositions for algebraic permutations (defined by linear or projective transformations of finite sets). This paper presents tables giving both these and other statistics, as well as a comparison of them with the statistics of involutions or permutations with all their cycles of even length. The inclusions of a point in cycles of various lengths turn out to be equiprobable events for random permutations. The number of permutations of 2N elements with all cycles of even length turns out to be the square of an integer (namely, of (2N-1)!!). The number of cycles of projective permutations (over a field with an odd prime number of elements) is always even. These and other empirically discovered theorems are proved in the paper. Bibliography: 6 titles.
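
    The squared double factorial claim is easy to verify by brute force for small N; the following check (our own code, not from the paper) counts permutations of 2N elements whose cycles all have even length and compares the count with ((2N-1)!!)².

      from itertools import permutations

      def all_cycles_even(perm):
          # perm maps i -> perm[i]; walk each cycle and check its length
          seen = [False] * len(perm)
          for i in range(len(perm)):
              length, j = 0, i
              while not seen[j]:
                  seen[j] = True
                  j = perm[j]
                  length += 1
              if length % 2 == 1:
                  return False
          return True

      def double_factorial(k):
          return 1 if k <= 0 else k * double_factorial(k - 2)

      for N in (1, 2, 3):
          count = sum(all_cycles_even(p) for p in permutations(range(2 * N)))
          assert count == double_factorial(2 * N - 1) ** 2  # 1, 9, 225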

  7. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.

  8. A permutation test for the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias

    2010-01-01

    ... signals. Several statistical procedures have been used for testing the race model inequality. However, the commonly employed procedure does not control the Type I error. In this article a permutation test is described that keeps the Type I error at the desired level. Simulations show that the power ...

  9. EPC: A Provably Secure Permutation Based Compression Function

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid

    2010-01-01

    The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of the compression function is larger than the input-length of the permutation function. In this paper, we consider permutation-based compression functions that have input lengths sh ...

  10. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Background: Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often correlated in complicated ways. Accurate type I error control adjusting for multiple testing requires the joint null distribution of the test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of their computational ease and intuitive interpretation. Results: In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using Caenorhabditis elegans dauer developmental data. Conclusion: Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
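
    The paper's procedure is built around the Storey et al. goodness-of-fit statistic; purely as an illustration of how a joint permutation null is used for multiplicity adjustment, a generic single-step max-T correction (in the Westfall-Young spirit, not necessarily the authors' exact algorithm) looks like this:

      import numpy as np

      def max_t_adjust(obs_stats, perm_stats):
          # obs_stats: observed statistic per gene, shape (n_genes,)
          # perm_stats: statistics recomputed under permuted labels,
          #             shape (n_perm, n_genes)
          # Each gene is compared against the permutation distribution of
          # the maximum statistic over all genes.
          max_null = perm_stats.max(axis=1)
          return np.array([(1 + np.sum(max_null >= t)) / (len(max_null) + 1)
                           for t in obs_stats])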

  11. Tensor models, Kronecker coefficients and permutation centralizer algebras

    Science.gov (United States)

    Geloun, Joseph Ben; Ramgoolam, Sanjaye

    2017-11-01

    We show that the counting of observables and correlators for a 3-index tensor model is organized by the structure of a family of permutation centralizer algebras. These algebras are shown to be semi-simple and their Wedderburn-Artin decompositions into matrix blocks are given in terms of Clebsch-Gordan coefficients of symmetric groups. The matrix basis for the algebras also gives an orthogonal basis for the tensor observables which diagonalizes the Gaussian two-point functions. The centres of the algebras are associated with correlators which are expressible in terms of Kronecker coefficients (Clebsch-Gordan multiplicities of symmetric groups). The color-exchange symmetry present in the Gaussian model, as well as a large class of interacting models, is used to refine the description of the permutation centralizer algebras. This discussion is extended to a general number of colors d: it is used to prove the integrality of an infinite family of number sequences related to color-symmetrizations of colored graphs, and expressible in terms of symmetric group representation theory data. Generalizing a connection between matrix models and Belyi maps, correlators in Gaussian tensor models are interpreted in terms of covers of singular 2-complexes. There is an intriguing difference, between matrix and higher rank tensor models, in the computational complexity of superficially comparable correlators of observables parametrized by Young diagrams.

  12. PERMutation Using Transposase Engineering (PERMUTE): A Simple Approach for Constructing Circularly Permuted Protein Libraries.

    Science.gov (United States)

    Jones, Alicia M; Atkinson, Joshua T; Silberg, Jonathan J

    2017-01-01

    Rearrangements that alter the order of a protein's sequence are used in the lab to study protein folding, improve activity, and build molecular switches. One of the simplest ways to rearrange a protein sequence is through random circular permutation, where native protein termini are linked together and new termini are created elsewhere through random backbone fission. Transposase mutagenesis has emerged as a simple way to generate libraries encoding different circularly permuted variants of proteins. With this approach, a synthetic transposon (called a permuteposon) is randomly inserted throughout a circularized gene to generate vectors that express different permuted variants of a protein. In this chapter, we outline the protocol for constructing combinatorial libraries of circularly permuted proteins using transposase mutagenesis, and we describe the different permuteposons that have been developed to facilitate library construction.

  13. Invalid Permutation Tests

    Directory of Open Access Journals (Sweden)

    Mikel Aickin

    2010-01-01

    Permutation tests are often presented in a rather casual manner, in both introductory and advanced statistics textbooks. The appeal of the cleverness of the procedure seems to replace the need for a rigorous argument that it produces valid hypothesis tests. The consequence of this educational failing has been a widespread belief in a “permutation principle”, which is supposed invariably to give tests that are valid by construction, under an absolute minimum of statistical assumptions. Several lines of argument are presented here to show that the permutation principle itself can be invalid, concentrating on the Fisher-Pitman permutation test for two means. A simple counterfactual example illustrates the general problem, and a slightly more elaborate counterfactual argument is used to explain why the main mathematical proof of the validity of permutation tests is mistaken. Two modifications of the permutation test are suggested to be valid in a very modest simulation. In instances where simulation software is readily available, investigating the validity of a specific permutation test can be done easily, requiring only a minimum understanding of statistical technicalities.
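
    For concreteness, the Fisher-Pitman test whose validity the article scrutinizes is usually implemented as a Monte Carlo permutation of group labels; a standard sketch of that procedure (the object under discussion, not a construction the article endorses):

      import numpy as np

      def fisher_pitman_two_means(x, y, n_perm=9999, seed=None):
          # Pool the observations, repeatedly reassign group labels, and
          # count how often the permuted |mean difference| reaches the
          # observed one.
          rng = np.random.default_rng(seed)
          pooled = np.concatenate([x, y])
          n, observed = len(x), abs(np.mean(x) - np.mean(y))
          hits = 0
          for _ in range(n_perm):
              p = rng.permutation(pooled)
              hits += abs(p[:n].mean() - p[n:].mean()) >= observed
          return (hits + 1) / (n_perm + 1)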

  14. Complete permutation Gray code implemented by finite state machine

    Directory of Open Access Journals (Sweden)

    Li Peng

    2014-09-01

    An enumerating method for complete permutation arrays is proposed. The list of n! permutations based on a Gray code defined over the finite symbol set Z(n) = {1, 2, …, n} is implemented by a finite state machine, named n-RPGCF. An RPGCF can be used to search permutation codes and provide improved lower bounds on the maximum cardinality of a permutation code in some cases.
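
    The paper's finite-state n-RPGCF construction is not reproduced here, but the classical Steinhaus-Johnson-Trotter scheme illustrates what a permutation Gray code is: successive permutations differ by a single adjacent transposition. A minimal generator:

      def sjt(n):
          # Steinhaus-Johnson-Trotter with directed elements: repeatedly
          # move the largest "mobile" element one step in its direction.
          perm = list(range(1, n + 1))
          dirs = [-1] * n  # every element initially points left
          yield tuple(perm)
          while True:
              mobile, mi = -1, -1
              for i, v in enumerate(perm):
                  j = i + dirs[i]
                  if 0 <= j < n and perm[j] < v and v > mobile:
                      mobile, mi = v, i
              if mobile == -1:
                  return
              j = mi + dirs[mi]
              perm[mi], perm[j] = perm[j], perm[mi]
              dirs[mi], dirs[j] = dirs[j], dirs[mi]
              for i, v in enumerate(perm):
                  if v > mobile:
                      dirs[i] = -dirs[i]
              yield tuple(perm)

      # list(sjt(3)) -> (1,2,3) (1,3,2) (3,1,2) (3,2,1) (2,3,1) (2,1,3)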

  15. A transposase strategy for creating libraries of circularly permuted proteins.

    Science.gov (United States)

    Mehta, Manan M; Liu, Shirley; Silberg, Jonathan J

    2012-05-01

    A simple approach for creating libraries of circularly permuted proteins is described that is called PERMutation Using Transposase Engineering (PERMUTE). In PERMUTE, the transposase MuA is used to randomly insert a minitransposon that can function as a protein expression vector into a plasmid that contains the open reading frame (ORF) being permuted. A library of vectors that express different permuted variants of the ORF-encoded protein is created by: (i) using bacteria to select for target vectors that acquire an integrated minitransposon; (ii) excising the ensemble of ORFs that contain an integrated minitransposon from the selected vectors; and (iii) circularizing the ensemble of ORFs containing integrated minitransposons using intramolecular ligation. Construction of a Thermotoga neapolitana adenylate kinase (AK) library using PERMUTE revealed that this approach produces vectors that express circularly permuted proteins with distinct sequence diversity from existing methods. In addition, selection of this library for variants that complement the growth of Escherichia coli with a temperature-sensitive AK identified functional proteins with novel architectures, suggesting that PERMUTE will be useful for the directed evolution of proteins with new functions.

  16. Visual recognition of permuted words

    Science.gov (United States)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

    In the current study we examine how letter permutation affects visual recognition of words for two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental-level processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison between the reading behavior of people in these languages is also presented. We present our study in the context of dual-route theories of reading, and we observe that dual-route theory is consistent with our hypothesis of a distinction in the underlying cognitive behavior for reading permuted and non-permuted words. We conducted three experiments with lexical decision tasks to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and a t-test to determine the significant differences in response time latencies for the two classes of data. Results showed that recognition accuracy for permuted words is decreased by 31% in the case of Urdu and 11% in the case of German. We also found a considerable difference in reading behavior for cursive and alphabetic languages, and observed that reading of Urdu is comparatively slower than reading of German due to characteristics of the cursive script.

  17. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    Science.gov (United States)

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.

  18. Permutational distribution of the log-rank statistic under random censorship with applications to carcinogenicity assays.

    Science.gov (United States)

    Heimann, G; Neuhaus, G

    1998-03-01

    In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.

  19. Permutation importance: a corrected feature importance measure.

    Science.gov (United States)

    Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas

    2010-05-15

    In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and random forest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. Contact: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
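
    A minimal sketch of the PIMP recipe (the reference implementation is the R script linked above; the regressor choice, the names, and the purely empirical p-values here are our simplifications of it):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def pimp_pvalues(X, y, n_perm=50, seed=None, **rf_kwargs):
          # Null importances come from refitting on permuted responses, so
          # any importance that remains is due to chance structure alone.
          rng = np.random.default_rng(seed)
          observed = RandomForestRegressor(**rf_kwargs).fit(X, y).feature_importances_
          null = np.stack([
              RandomForestRegressor(**rf_kwargs)
              .fit(X, rng.permutation(y)).feature_importances_
              for _ in range(n_perm)
          ])
          # upper-tail empirical p-value per feature
          return (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)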

  20. Infinite permutations vs. infinite words

    Directory of Open Access Journals (Sweden)

    Anna E. Frid

    2011-08-01

    I am going to compare well-known properties of infinite words with those of infinite permutations, a new object studied since the mid-2000s. Basically, it was Sergey Avgustinovich who invented this notion, although in an early study by Davis et al. permutations appear in a very similar framework as early as 1977. I am going to talk about the periodicity of permutations, their complexity according to several definitions, and their automatic properties, that is, about the usual parameters of words, now extended to permutations and behaving sometimes similarly to those for words, sometimes not. Another series of results concerns permutations generated by infinite words and their properties. Although this direction of research is young, many people, including two other speakers of this meeting, have participated in it, and I believe that several more topics for further study are really promising.

  21. Linear algebra of the permutation invariant Crow-Kimura model of prebiotic evolution.

    Science.gov (United States)

    Bratus, Alexander S; Novozhilov, Artem S; Semenov, Yuri S

    2014-10-01

    A particular case of the famous quasispecies model - the Crow-Kimura model with a permutation invariant fitness landscape - is investigated. Using the fact that the mutation matrix in the case of a permutation invariant fitness landscape has a special tridiagonal form, a change of the basis is suggested such that in the new coordinates a number of analytical results can be obtained. In particular, using the eigenvectors of the mutation matrix as the new basis, we show that the quasispecies distribution approaches a binomial one and give simple estimates for the speed of convergence. Another consequence of the suggested approach is a parametric solution to the system of equations determining the quasispecies. Using this parametric solution we show that our approach leads to exact asymptotic results in some cases, which are not covered by the existing methods. In particular, we are able to present not only the limit behavior of the leading eigenvalue (mean population fitness), but also the exact formulas for the limit quasispecies eigenvector for special cases. For instance, this eigenvector has a geometric distribution in the case of the classical single peaked fitness landscape. On the biological side, we propose a mathematical definition, based on the closeness of the quasispecies to the binomial distribution, which can be used as an operational definition of the notorious error threshold. Using this definition, we suggest two approximate formulas to estimate the critical mutation rate after which the quasispecies delocalization occurs. Copyright © 2014 Elsevier Inc. All rights reserved.

  22. Image encryption based on permutation-substitution using chaotic map and Latin Square Image Cipher

    Science.gov (United States)

    Panduranga, H. T.; Naveen Kumar, S. K.; Kiran

    2014-06-01

    In this paper we present an image encryption scheme based on permutation-substitution using a chaotic map and a Latin square image cipher. The proposed method consists of a permutation process and a substitution process. In the permutation process, the plain image is permuted according to a chaotic sequence generated using a chaotic map. In the substitution process, a Latin Square Image Cipher (LSIC) is generated from a 256-bit secret key and used as a key image, and an XOR operation is performed between the permuted image and the key image. The proposed method can be applied to any plain image, including images of unequal width and height, and resists statistical and differential attacks. Experiments were carried out for different images of different sizes. The proposed method possesses a large key space to resist brute-force attack.
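
    A toy sketch of the permutation-substitution structure, assuming 8-bit grayscale images of equal size: ranking a logistic-map trajectory gives the key-dependent pixel permutation, and a key image (standing in for the LSIC, whose construction from the 256-bit key is not reproduced here) is XORed in the substitution stage. The map parameters are illustrative only.

      import numpy as np

      def chaotic_permutation(n, x0=0.3567, r=3.99):
          # Iterate the logistic map and rank the trajectory to obtain a
          # permutation of the n pixel indices.
          xs, x = np.empty(n), x0
          for i in range(n):
              x = r * x * (1 - x)
              xs[i] = x
          return np.argsort(xs)

      def encrypt(plain, key_image, x0=0.3567, r=3.99):  # uint8 arrays
          order = chaotic_permutation(plain.size, x0, r)
          permuted = plain.ravel()[order]                # permutation stage
          return (permuted ^ key_image.ravel()).reshape(plain.shape)  # XOR stage

      def decrypt(cipher, key_image, x0=0.3567, r=3.99):
          order = chaotic_permutation(cipher.size, x0, r)
          permuted = cipher.ravel() ^ key_image.ravel()
          plain = np.empty_like(permuted)
          plain[order] = permuted                        # invert the permutation
          return plain.reshape(cipher.shape)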

  23. Permutation orbifolds and chaos

    NARCIS (Netherlands)

    Belin, A.

    2017-01-01

    We study out-of-time-ordered correlation functions in permutation orbifolds at large central charge. We show that they do not decay at late times for arbitrary choices of low-dimension operators, indicating that permutation orbifolds are non-chaotic theories. This is in agreement with the fact they ...

  24. Permutation-based inference for the AUC: A unified approach for continuous and discontinuous data.

    Science.gov (United States)

    Pauly, Markus; Asendorf, Thomas; Konietschke, Frank

    2016-11-01

    We investigate rank-based studentized permutation methods for the nonparametric Behrens-Fisher problem, that is, inference methods for the area under the ROC curve. We hereby prove that the studentized permutation distribution of the Brunner-Munzel rank statistic is asymptotically standard normal, even under the alternative. This incidentally provides the hitherto missing theoretical foundation for the Neubert and Brunner studentized permutation test. In particular, we do not only show its consistency, but also that confidence intervals for the underlying treatment effects can be computed by inverting this permutation test. In addition, we derive permutation-based range-preserving confidence intervals. Extensive simulation studies show that the permutation-based confidence intervals appear to maintain the preassigned coverage probability quite accurately (even for rather small sample sizes). For a convenient application of the proposed methods, a freely available software package for the statistical software R has been developed. A real data example illustrates the application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  25. Gray Code for Cayley Permutations

    Directory of Open Access Journals (Sweden)

    J.-L. Baril

    2003-10-01

    A length-n Cayley permutation p of a totally ordered set S is a length-n sequence of elements from S, subject to the condition that if an element x appears in p then all elements y < x also appear in p. In this paper, we give a Gray code list for the set of length-n Cayley permutations. Two successive permutations in this list differ in at most two positions.

  26. Permuting sparse rectangular matrices into block-diagonal form

    Energy Technology Data Exchange (ETDEWEB)

    Aykanat, Cevdet; Pinar, Ali; Catalyurek, Umit V.

    2002-12-09

    This work investigates the problem of permuting a sparse rectangular matrix into block-diagonal form. Block-diagonal form of a matrix grants an inherent parallelism for the solution of the underlying problem, as recently investigated in the context of mathematical programming, LU factorization and QR factorization. We propose graph and hypergraph models to represent the nonzero structure of a matrix, which reduce the permutation problem to those of graph partitioning by vertex separator and hypergraph partitioning, respectively. Besides proposing the models to represent sparse matrices and investigating related combinatorial problems, we provide a detailed survey of relevant literature to bridge the gap between different communities, investigate existing techniques for partitioning and propose new ones, and finally present a thorough empirical study of these techniques. Our experiments on a wide range of matrices, using the state-of-the-art graph and hypergraph partitioning tools MeTiS and PaToH, revealed that the proposed methods yield very effective solutions both in terms of solution quality and run time.
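
    The paper's contribution is the hard, balanced case attacked via graph and hypergraph partitioning (MeTiS, PaToH). For intuition only: when a matrix decomposes at all, its diagonal blocks are the connected components of the bipartite row-column graph of the nonzero pattern, and the permutations can be read off directly. A SciPy sketch:

      import numpy as np
      from scipy.sparse import coo_matrix
      from scipy.sparse.csgraph import connected_components

      def block_diagonal_permutation(A):
          # Vertices 0..m-1 are rows, m..m+n-1 are columns; every nonzero
          # A[i, j] is an edge, and each component is one diagonal block.
          m, n = A.shape
          r, c = A.nonzero()
          bip = coo_matrix((np.ones(len(r)), (r, c + m)), shape=(m + n, m + n))
          _, labels = connected_components(bip, directed=False)
          row_perm = np.argsort(labels[:m], kind='stable')
          col_perm = np.argsort(labels[m:], kind='stable')
          return row_perm, col_perm  # A[row_perm][:, col_perm] is block diagonal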

  27. Sorting permutations by prefix and suffix rearrangements.

    Science.gov (United States)

    Lintzmayer, Carla Negri; Fertin, Guillaume; Dias, Zanoni

    2017-02-01

    Some interesting combinatorial problems have been motivated by genome rearrangements, which are mutations that affect large portions of a genome. When we represent genomes as permutations, the goal is to transform a given permutation into the identity permutation with the minimum number of rearrangements. When they affect segments from the beginning (respectively end) of the permutation, they are called prefix (respectively suffix) rearrangements. This paper presents results for rearrangement problems that involve prefix and suffix versions of reversals and transpositions considering unsigned and signed permutations. We give 2-approximation and ([Formula: see text])-approximation algorithms for these problems, where [Formula: see text] is a constant divided by the number of breakpoints (pairs of consecutive elements that should not be consecutive in the identity permutation) in the input permutation. We also give bounds for the diameters concerning these problems and provide ways of improving the practical results of our algorithms.

  28. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and the feature selection result. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.

  29. On the use of permutation in and the performance of a class of nonparametric methods to detect differential gene expression.

    Science.gov (United States)

    Pan, Wei

    2003-07-22

    Recently a class of nonparametric statistical methods, including the empirical Bayes (EB) method, the significance analysis of microarray (SAM) method and the mixture model method (MMM), has been proposed to detect differential gene expression for replicated microarray experiments conducted under two conditions. All the methods depend on constructing a test statistic Z and a so-called null statistic z. The null statistic z is used to provide some reference distribution for Z such that statistical inference can be accomplished. A common way of constructing z is to apply Z to randomly permuted data. Here we point out that the distribution of z may not approximate the null distribution of Z well, leading to possibly too conservative inference. This observation may apply to other permutation-based nonparametric methods. We propose a new method of constructing a null statistic that aims to estimate the null distribution of a test statistic directly. Using simulated data and real data, we assess and compare the performance of the existing method and our new method when applied in EB, SAM and MMM. Some interesting findings on operating characteristics of EB, SAM and MMM are also reported. Finally, by combining the idea of SAM and MMM, we outline a simple nonparametric method based on the direct use of a test statistic and a null statistic.

  30. An AUC-based permutation variable importance measure for random forests.

    Science.gov (United States)

    Janitza, Silke; Strobl, Carolin; Boulesteix, Anne-Laure

    2013-04-05

    The random forest (RF) method is a commonly used tool for classification with high dimensional data as well as for ranking candidate predictors based on the so-called random forest variable importance measures (VIMs). However, the classification performance of RF is known to be suboptimal in case of strongly unbalanced data, i.e. data where response class sizes differ considerably. Suggestions were made to obtain better classification performance based either on sampling procedures or on cost sensitivity analyses. However, to our knowledge the performance of the VIMs has not yet been examined in the case of unbalanced response classes. In this paper we explore the performance of the permutation VIM for unbalanced data settings and introduce an alternative permutation VIM based on the area under the curve (AUC) that is expected to be more robust towards class imbalance. We investigated the performance of the standard permutation VIM and of our novel AUC-based permutation VIM for different class imbalance levels using simulated data and real data. The results suggest that the new AUC-based permutation VIM outperforms the standard permutation VIM for unbalanced data settings while both permutation VIMs have equal performance for balanced data settings. The standard permutation VIM loses its ability to discriminate between associated predictors and predictors not associated with the response for increasing class imbalance. It is outperformed by our new AUC-based permutation VIM for unbalanced data settings, while the performance of both VIMs is very similar in the case of balanced classes. The new AUC-based VIM is implemented in the R package party for the unbiased RF variant based on conditional inference trees. The codes implementing our study are available from the companion website: http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/070_drittmittel/janitza/index.html.
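
    A simplified sketch of an AUC-based permutation importance (the paper's VIM is computed per tree on out-of-bag samples inside the forest; this version permutes one column of held-out data at a time for an already fitted classifier, a coarser but related measure):

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def auc_permutation_vim(model, X, y, seed=None):
          # Importance of feature j = drop in AUC after permuting column j.
          rng = np.random.default_rng(seed)
          base = roc_auc_score(y, model.predict_proba(X)[:, 1])
          vim = np.empty(X.shape[1])
          for j in range(X.shape[1]):
              Xp = X.copy()
              Xp[:, j] = rng.permutation(Xp[:, j])
              vim[j] = base - roc_auc_score(y, model.predict_proba(Xp)[:, 1])
          return vim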

  31. The coupling analysis between stock market indices based on permutation measures

    Science.gov (United States)

    Shi, Wenbin; Shang, Pengjian; Xia, Jianan; Yeh, Chien-Hung

    2016-04-01

    Many information-theoretic methods have been proposed for analyzing the coupling dependence between time series, and it is significant to quantify the correlation relationship between financial sequences since the financial market is a complex evolving dynamic system. Recently, we developed a new permutation-based entropy, called cross-permutation entropy (CPE), to detect the coupling structures between two synchronous time series. In this paper, we extend the CPE method to weighted cross-permutation entropy (WCPE), to address some of CPE's limitations, mainly its inability to differentiate between distinct patterns of a certain motif and the sensitivity of patterns close to the noise floor. It shows more stable and reliable results than CPE does when applied to spiky data and AR(1) processes. Besides, we adapt the CPE method to infer the complexity of short-length time series by freely changing the time delay, and test it with Gaussian random series and random walks. The modified method shows advantages in reducing deviations of entropy estimation compared with the conventional one. Finally, the weighted cross-permutation entropy of eight important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.

  32. Permutation based decision making under fuzzy environment using Tabu search

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-04-01

    One of the techniques used for Multiple Criteria Decision Making (MCDM) is the permutation method. In the classical form of permutation, it is assumed that weights and decision matrix components are crisp. However, when group decision making is under consideration and decision makers cannot agree on crisp values for weights and decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of permutation is its long computational time, so a Tabu Search (TS)-based algorithm is proposed to reduce the computational time. A numerical example illustrates the proposed approach clearly. Then, some benchmark instances extracted from the literature are solved by the proposed TS. The analyses of the results show the proper performance of the proposed method.

  33. MCPerm: a Monte Carlo permutation method for accurately correcting the multiple testing in a meta-analysis of genetic association studies.

    Directory of Open Access Journals (Sweden)

    Yongshuai Jiang

    Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci, as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) the calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html.
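
    The two-step hypergeometric shuffle can be sketched directly with NumPy (argument names and layout are ours; the reference implementation is the MCPerm R package): given total genotype counts (AA, Aa, aa), first decide how many AA genotypes fall into the cases, then split the heterozygotes among the remaining case slots; the aa counts follow by subtraction.

      import numpy as np

      def shuffle_genotype_counts(n_case, n_control, aa, ab, bb, seed=None):
          # Draws case genotype counts under the null of no association,
          # using only the summary counts (no individual-level data).
          rng = np.random.default_rng(seed)
          case_aa = rng.hypergeometric(aa, ab + bb, n_case)
          case_ab = rng.hypergeometric(ab, bb, n_case - case_aa)
          case_bb = n_case - case_aa - case_ab
          control = (aa - case_aa, ab - case_ab, bb - case_bb)
          return (case_aa, case_ab, case_bb), control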

  34. A studentized permutation test for three-arm trials in the 'gold standard' design.

    Science.gov (United States)

    Mütze, Tobias; Konietschke, Frank; Munk, Axel; Friede, Tim

    2017-03-15

    The 'gold standard' design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This trial design is recommended when ethically justifiable, as it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied plentifully over the past years. However, these methods often tend to be liberal or conservative when distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the 'gold standard' design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations. Emphasis is put on whether the studentized permutation test meets the target significance level. For comparison purposes, commonly used Wald-type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that the presented studentized permutation test for assessing non-inferiority in three-arm trials in the 'gold standard' design outperforms its competitors, for instance the test based on a quasi-Poisson model, for count data. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN). Copyright © 2016 John Wiley & Sons, Ltd.

  35. 1-Colored Archetypal Permutations and Strings of Degree n

    Directory of Open Access Journals (Sweden)

    Gheorghe Eduard Tara

    2012-10-01

    New notions related to permutations are introduced here. We present the string of a 1-colored permutation as a closed planar curve, and the fundamental 1-colored permutation as an equivalence class related to the equivalence in strings of the 1-colored permutations. We give formulas for the number of 1-colored archetypal permutations of degree n. We establish an algorithm to identify the 1-colored archetypal permutations of degree n, and we present the atlas of the 1-colored archetypal strings of degree n, n ≤ 7, based on this algorithm.

  36. Finite Cycle Gibbs Measures on Permutations of Z^d

    Science.gov (United States)

    Armendáriz, Inés; Ferrari, Pablo A.; Groisman, Pablo; Leonardi, Florencia

    2015-03-01

    We consider Gibbs distributions on the set of permutations of Z^d associated to the Hamiltonian H(σ) = Σ_x V(σ(x) − x), where σ is a permutation and V is a strictly convex potential. Call finite-cycle those permutations composed by finite cycles only. We give conditions on V ensuring that for large enough temperature there exists a unique infinite volume ergodic Gibbs measure concentrating mass on finite-cycle permutations; this measure is equal to the thermodynamic limit of the specifications with identity boundary conditions. We construct this measure as the unique invariant measure of a Markov process on the set of finite-cycle permutations that can be seen as a loss-network, a continuous-time birth and death process of cycles interacting by exclusion, an approach proposed by Fernández, Ferrari and Garcia. Define the shift permutations θ_i by θ_i(x) = x + i. In the Gaussian case V(x) = |x|², we show that for each i the measure obtained from the specifications with θ_i boundary conditions is an ergodic Gibbs measure equal to their thermodynamic limit. For a general potential V, we prove the existence of Gibbs measures when the temperature is bigger than some V-dependent value.

  37. Error-free holographic frames encryption with CA pixel-permutation encoding algorithm

    Science.gov (United States)

    Li, Xiaowei; Xiao, Dan; Wang, Qiong-Hua

    2018-01-01

    The security of video data is necessary in network transmission; hence, cryptography is the technique used to make video data secure and unreadable to unauthorized users. In this paper, we propose a holographic frame encryption technique based on a cellular automata (CA) pixel-permutation encoding algorithm. The concise pixel-permutation algorithm is used to address the drawbacks of traditional CA encoding methods. The effectiveness of the proposed video encoding method is demonstrated by simulation examples.

  38. Tensor Permutation Matrices in Finite Dimensions

    OpenAIRE

    Christian, Rakotonirina

    2005-01-01

    We have generalised the properties, with respect to the tensor product, of a 4x4 permutation matrix that we call a tensor commutation matrix. Tensor commutation matrices can be constructed with or without calculus. A formula that allows us to construct a tensor permutation matrix, which is a generalisation of the tensor commutation matrix, has been established. The expression of an element of a tensor commutation matrix has been generalised to the case of any element of a tensor permutation ma...

  39. A method for generating permutation distribution of ranks in a k ...

    African Journals Online (AJOL)

    ... in a combinatorial sense the distribution of the ranks is obtained via its generating function. The formulas are defined recursively to speed up computations using the computer algebra system Mathematica. Key words: Partitions, generating functions, combinatorics, permutation test, exact tests, computer algebra, k-sample, ...

  40. Sorting signed permutations by short operations.

    Science.gov (United States)

    Galvão, Gustavo Rodrigues; Lee, Orlando; Dias, Zanoni

    2015-01-01

    During evolution, global mutations may alter the order and the orientation of the genes in a genome. Such mutations are referred to as rearrangement events, or simply operations. In unichromosomal genomes, the most common operations are reversals, which are responsible for reversing the order and orientation of a sequence of genes, and transpositions, which are responsible for switching the location of two contiguous portions of a genome. The problem of computing the minimum sequence of operations that transforms one genome into another - which is equivalent to the problem of sorting a permutation into the identity permutation - is a well-studied problem that finds application in comparative genomics. There are a number of works concerning this problem in the literature, but they generally do not take into account the length of the operations (i.e. the number of genes affected by the operations). Since it has been observed that short operations are prevalent in the evolution of some species, algorithms that efficiently solve this problem in the special case of short operations are of interest. In this paper, we investigate the problem of sorting a signed permutation by short operations. More precisely, we study four flavors of this problem: (i) the problem of sorting a signed permutation by reversals of length at most 2; (ii) the problem of sorting a signed permutation by reversals of length at most 3; (iii) the problem of sorting a signed permutation by reversals and transpositions of length at most 2; and (iv) the problem of sorting a signed permutation by reversals and transpositions of length at most 3. We present polynomial-time solutions for problems (i) and (iii), a 5-approximation for problem (ii), and a 3-approximation for problem (iv). Moreover, we show that the expected approximation ratio of the 5-approximation algorithm is not greater than 3 for random signed permutations with more than 12 elements. Finally, we present experimental results that show ...
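
    For problems (i) and (iii) it helps to note that a length-1 reversal flips one sign and a length-2 reversal swaps adjacent entries while flipping both signs. A naive sorter built from these two moves (an upper-bound construction for illustration, not the paper's polynomial-time optimal algorithm):

      def sort_by_short_reversals(perm):
          # perm is a signed permutation, e.g. [-2, 3, 1]; returns the list
          # of operations applied: ('rev2', j) swaps positions j, j+1 and
          # flips both signs, ('rev1', j) flips the sign at position j.
          p, ops, n = list(perm), [], len(perm)
          for _ in range(n):  # bubble sort on absolute values
              for j in range(n - 1):
                  if abs(p[j]) > abs(p[j + 1]):
                      p[j], p[j + 1] = -p[j + 1], -p[j]
                      ops.append(('rev2', j))
          for j in range(n):  # clean up the remaining negative signs
              if p[j] < 0:
                  p[j] = -p[j]
                  ops.append(('rev1', j))
          assert p == list(range(1, n + 1))
          return ops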

  41. PERMUTATION-BASED POLYMORPHIC STEGO-WATERMARKS FOR PROGRAM CODES

    Directory of Open Access Journals (Sweden)

    Denys Samoilenko

    2016-06-01

    Purpose: One of the most current trends in program code protection is code marking. The problem consists in creating digital “watermarks” which allow distinguishing different copies of the same program code. Such marks can be useful for authorship protection, for numbering code copies, for monitoring program propagation, and for information security purposes in client-server communication processes. Methods: We used methods of digital steganography adapted for program codes as text objects. The same-shape-symbols method was transformed into a same-semantic-element method due to features of codes that make them different from ordinary texts. We use a dynamic principle of mark forming, making codes polymorphic. Results: We examined the combinatorial capacity of permutations possible in program codes. As a result, it was shown that a set of 5-7 polymorphic variables is suitable for most modern network applications. Mark creation and restoration algorithms are proposed and discussed. The main algorithm is based on full and partial permutations of variable names and their declaration order. The algorithm for partial permutation enumeration was optimized for computational complexity. PHP code fragments which realize the algorithms are listed. Discussion: The method proposed in this work allows distinguishing each client-server connection. If a clone of some network resource is found, the method can give information about the included marks and thereby data on the IP address, date and time, and authentication information of the client that copied the resource. Usage of polymorphic stego-watermarks should improve information security in network communications.
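
    One simple way to realize such a mark (our illustration, not the paper's PHP algorithm) is to encode an integer identifier in the declaration order of k variables through the Lehmer code, which uses the full k! combinatorial capacity discussed above:

      from math import factorial

      def id_to_permutation(mark, names):
          # Encode 0 <= mark < k! as an ordering of the k variable names.
          names, out = list(names), []
          for i in range(len(names), 0, -1):
              idx, mark = divmod(mark, factorial(i - 1))
              out.append(names.pop(idx))
          return out

      def permutation_to_id(permuted, names):
          # Recover the watermark from an observed declaration order.
          names, mark = list(names), 0
          for i, name in enumerate(permuted):
              idx = names.index(name)
              mark += idx * factorial(len(permuted) - 1 - i)
              names.pop(idx)
          return mark

      # id_to_permutation(5, ['alpha', 'beta', 'gamma'])
      #   -> ['gamma', 'beta', 'alpha'];  permutation_to_id inverts it.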

  42. Secure physical layer using dynamic permutations in cognitive OFDMA systems

    DEFF Research Database (Denmark)

    Meucci, F.; Wardana, Satya Ardhy; Prasad, Neeli R.

    2009-01-01

    This paper proposes a novel lightweight mechanism for a secure Physical (PHY) layer in a Cognitive Radio Network (CRN) using Orthogonal Frequency Division Multiplexing (OFDM). The user's data symbols are mapped over the physical subcarriers with a permutation formula. The PHY layer is secured with a random and dynamic subcarrier permutation which is based on a single piece of pre-shared information and depends on Dynamic Spectrum Access (DSA). The dynamic subcarrier permutation varies over time, geographical location and environment status, resulting in a very robust protection that ensures confidentiality. The method is shown to be effective also for existing non-cognitive systems. The proposed mechanism is effective against eavesdropping even if the eavesdropper adopts long-time pattern analysis, thus protecting the cryptography techniques of higher layers. The correlation properties ...
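
    A sketch of the shared-secret mapping idea, assuming both ends derive the permutation from the pre-shared information plus time- and location-dependent context (the key-derivation scheme, names, and parameters are illustrative, not those of the paper):

      import hashlib
      import numpy as np

      def subcarrier_map(n_subcarriers, shared_secret, time_slot, location):
          # Hash the shared secret together with the current context into a
          # PRNG seed; transmitter and receiver derive identical maps.
          material = f'{shared_secret}|{time_slot}|{location}'.encode()
          seed = int.from_bytes(hashlib.sha256(material).digest()[:8], 'big')
          return np.random.default_rng(seed).permutation(n_subcarriers)

      tx = subcarrier_map(64, 'pre-shared-key', time_slot=17, location='cell-42')
      rx = subcarrier_map(64, 'pre-shared-key', time_slot=17, location='cell-42')
      assert (tx == rx).all()  # both ends agree without transmitting the map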

  43. Permutation Tests for Stochastic Ordering and ANOVA

    CERN Document Server

    Basso, Dario; Salmaso, Luigi; Solari, Aldo

    2009-01-01

    Permutation testing for multivariate stochastic ordering and ANOVA designs is a fundamental issue in many scientific fields such as medicine, biology, pharmaceutical studies, engineering, economics, psychology, and social sciences. This book presents advanced methods and related R codes to perform complex multivariate analyses ...

  44. Permutation parity machines for neural synchronization

    International Nuclear Information System (INIS)

    Reyes, O M; Kopitzke, I; Zimmermann, K-H

    2009-01-01

    Synchronization of neural networks has been studied in recent years as an alternative to cryptographic applications such as the realization of symmetric key exchange protocols. This paper presents a first view of the so-called permutation parity machine, an artificial neural network proposed as a binary variant of the tree parity machine. The dynamics of the synchronization process by mutual learning between permutation parity machines is analytically studied and the results are compared with those of tree parity machines. It will turn out that for neural synchronization, permutation parity machines form a viable alternative to tree parity machines.

  45. A discriminative syntactic model for source permutation via tree transduction

    NARCIS (Netherlands)

    Khalilov, M.; Sima'an, K.; Wu, D.

    2010-01-01

    A major challenge in statistical machine translation is mitigating the word order differences between source and target strings. While reordering and lexical translation choices are often conducted in tandem, source string permutation prior to translation is attractive for studying reordering using ...

  46. Weighted multiscale Rényi permutation entropy of nonlinear time series

    Science.gov (United States)

    Chen, Shijian; Shang, Pengjian; Wu, Yue

    2018-04-01

    In this paper, based on Rényi permutation entropy (RPE), which has recently been suggested as a relative measure of complexity in nonlinear systems, we propose multiscale Rényi permutation entropy (MRPE) and weighted multiscale Rényi permutation entropy (WMRPE) to quantify the complexity of nonlinear time series over multiple time scales. First, we apply MRPE and WMRPE to synthetic data, compare the modified methods with RPE, and discuss the influence of parameter changes. We also explain why it is necessary to consider not only multiple scales but also weights that take the amplitude into account. MRPE and WMRPE are then applied to the closing prices of financial stock markets from different areas. By observing the WMRPE curves and analyzing common statistics, the stock markets can be divided into four groups: (1) DJI, S&P500, and HSI; (2) NASDAQ and FTSE100; (3) DAX40 and CAC40; and (4) ShangZheng and ShenCheng. Results show that the standard deviations of the weighted methods are smaller, indicating that WMRPE makes the results more robust. Moreover, WMRPE captures rich dynamical properties of complex systems and helps reveal their intrinsic mechanisms.
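
    A single-scale sketch of the weighted ordinal-pattern core as we read it (ordinal-pattern probabilities weighted by window variance, then a Rényi entropy of order q; the paper's exact weighting and the coarse-graining step that makes the measure multiscale are omitted):

      import numpy as np
      from collections import defaultdict

      def weighted_renyi_perm_entropy(x, m=3, tau=1, q=2.0):
          # Ordinal patterns of embedding dimension m and delay tau,
          # weighted by each window's variance (q != 1).
          x = np.asarray(x, dtype=float)
          weights = defaultdict(float)
          for i in range(len(x) - (m - 1) * tau):
              window = x[i:i + (m - 1) * tau + 1:tau]
              weights[tuple(np.argsort(window))] += np.var(window)
          p = np.array(list(weights.values()))
          p = p / p.sum()
          return np.log(np.sum(p ** q)) / (1 - q)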

  47. Defects and permutation branes in the Liouville field theory

    DEFF Research Database (Denmark)

    Sarkissian, Gor

    2009-01-01

    The defects and permutation branes for the Liouville field theory are considered. By exploiting the cluster condition, equations satisfied by permutation branes and defect reflection amplitudes are obtained. It is shown that two types of solutions exist, discrete and continuous families.

  48. EXPLICIT SYMPLECTIC-LIKE INTEGRATORS WITH MIDPOINT PERMUTATIONS FOR SPINNING COMPACT BINARIES

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Junjie; Wu, Xin; Huang, Guoqing [Department of Physics and Institute of Astronomy, Nanchang University, Nanchang 330031 (China); Liu, Fuyao, E-mail: xwu@ncu.edu.cn [School of Fundamental Studies, Shanghai University of Engineering Science, Shanghai 201620 (China)]

    2017-01-01

    We refine the recently developed fourth-order extended phase space explicit symplectic-like methods for inseparable Hamiltonians using Yoshida’s triple product combined with a midpoint permuted map. The midpoint between the original variables and their corresponding extended variables at every integration step is readjusted as the initial values of the original variables and their corresponding extended ones at the next step integration. The triple-product construction is apparently superior to the composition of two triple products in computational efficiency. Above all, the new midpoint permutations are more effective in restraining the equality of the original variables and their corresponding extended ones at each integration step than the existing sequent permutations of momenta and coordinates. As a result, our new construction shares the benefit of implicit symplectic integrators in the conservation of the second post-Newtonian Hamiltonian of spinning compact binaries. Especially for the chaotic case, it can work well, but the existing sequent permuted algorithm cannot. When dissipative effects from the gravitational radiation reaction are included, the new symplectic-like method has a secular drift in the energy error of the dissipative system for the orbits that are regular in the absence of radiation, as an implicit symplectic integrator does. In spite of this, it is superior to the same-order implicit symplectic integrator in accuracy and efficiency. The new method is particularly useful in discussing the long-term evolution of inseparable Hamiltonian problems.

  9. A Symmetric Chaos-Based Image Cipher with an Improved Bit-Level Permutation Strategy

    Directory of Open Access Journals (Sweden)

    Chong Fu

    2014-02-01

    Full Text Available Very recently, several chaos-based image ciphers using bit-level permutation have been suggested and have shown promising results. Owing to the diffusion effect introduced in the permutation stage, the workload of the time-consuming diffusion stage is reduced, and hence the performance of the cryptosystem is improved. In this paper, a symmetric chaos-based image cipher with a 3D cat-map-based spatial bit-level permutation strategy is proposed. Compared with recently proposed bit-level permutation methods, the diffusion effect of the new method is superior, as the bits are shuffled among different bit-planes rather than within the same bit-plane. Moreover, the diffusion key stream extracted from a hyperchaotic system is related to both the secret key and the plain image, which enhances the security against known/chosen-plaintext attacks. Extensive security analysis has been performed on the proposed scheme, including key space analysis, key sensitivity analysis, plaintext sensitivity analysis and various statistical analyses, all of which demonstrate the satisfactory security of the proposed scheme.
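
    The diffusion advantage of shuffling bits across bit-planes is easy to see in a toy version. The sketch below uses a generic keyed pseudo-random permutation of all pixel bits in place of the paper's 3D cat map and chaotic keystream (both are assumptions here); one flipped plaintext bit can land anywhere in the image, in any bit-plane.

```python
import numpy as np

def bit_permute(img, key_seed):
    """Permute every bit of a grayscale image across all bit-planes."""
    bits = np.unpackbits(img.astype(np.uint8).ravel())   # 8 * n_pixels bits
    perm = np.random.default_rng(key_seed).permutation(bits.size)
    return np.packbits(bits[perm]).reshape(img.shape), perm

def bit_unpermute(img, perm):
    """Invert the permutation to recover the original image."""
    bits = np.unpackbits(img.astype(np.uint8).ravel())
    restored = np.empty_like(bits)
    restored[perm] = bits
    return np.packbits(restored).reshape(img.shape)

img = (np.arange(16, dtype=np.uint8) * 16).reshape(4, 4)
scrambled, perm = bit_permute(img, key_seed=42)
assert np.array_equal(bit_unpermute(scrambled, perm), img)
```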

  10. Permutation parity machines for neural cryptography.

    Science.gov (United States)

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-06-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  11. Permutation parity machines for neural cryptography

    International Nuclear Information System (INIS)

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-01-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  12. Permutations of massive vacua

    Energy Technology Data Exchange (ETDEWEB)

    Bourget, Antoine [Department of Physics, Universidad de Oviedo, Avenida Calvo Sotelo 18, 33007 Oviedo (Spain); Troost, Jan [Laboratoire de Physique Théorique de l’École Normale Supérieure, CNRS, PSL Research University, Sorbonne Universités, 75005 Paris (France)

    2017-05-09

    We discuss the permutation group G of massive vacua of four-dimensional gauge theories with N=1 supersymmetry that arises upon tracing loops in the space of couplings. We concentrate on superconformal N=4 and N=2 theories with N=1 supersymmetry preserving mass deformations. The permutation group G of massive vacua is the Galois group of characteristic polynomials for the vacuum expectation values of chiral observables. We provide various techniques to effectively compute characteristic polynomials in given theories, and we deduce the existence of varying symmetry breaking patterns of the duality group depending on the gauge algebra and matter content of the theory. Our examples give rise to interesting field extensions of spaces of modular forms.

  13. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Kaixuan, E-mail: kaixuanxubjtu@yeah.net; Wang, Jun

    2017-02-26

    In this paper, the recently introduced permutation entropy and sample entropy are further developed to their fractional cases, weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in the two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. The effectiveness analysis of the proposed methods on synthetic data and real-world data reveals that tuning the fractional order allows a higher sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the nonlinear complexity behaviors are compared numerically between the return series of the Potts financial model and actual stock markets, and the empirical results confirm the feasibility of the proposed model. - Highlights: • Two new entropy approaches for estimating nonlinear complexity are proposed for the financial market. • Effectiveness analysis of the proposed methods is presented and their respective features are studied. • Empirical research applies the proposed analysis to seven world financial market indices. • Numerical simulation of Potts financial dynamics is performed to study nonlinear complexity behaviors.

  14. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    International Nuclear Information System (INIS)

    Xu, Kaixuan; Wang, Jun

    2017-01-01

    In this paper, the recently introduced permutation entropy and sample entropy are further developed to their fractional cases, weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in the two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. The effectiveness analysis of the proposed methods on synthetic data and real-world data reveals that tuning the fractional order allows a higher sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the nonlinear complexity behaviors are compared numerically between the return series of the Potts financial model and actual stock markets, and the empirical results confirm the feasibility of the proposed model. - Highlights: • Two new entropy approaches for estimating nonlinear complexity are proposed for the financial market. • Effectiveness analysis of the proposed methods is presented and their respective features are studied. • Empirical research applies the proposed analysis to seven world financial market indices. • Numerical simulation of Potts financial dynamics is performed to study nonlinear complexity behaviors.

  15. A Comparison of Multiscale Permutation Entropy Measures in On-Line Depth of Anesthesia Monitoring.

    Science.gov (United States)

    Su, Cui; Liang, Zhenhu; Li, Xiaoli; Li, Duan; Li, Yongwang; Ursino, Mauro

    2016-01-01

    Multiscale permutation entropy (MSPE) has become an interesting tool for exploring neurophysiological mechanisms in recent years. In this study, six MSPE measures were proposed for on-line depth of anesthesia (DoA) monitoring to quantify the anesthetic effect on real-time EEG recordings. The performance of these measures in describing the transient characteristics of simulated neural populations and clinical anesthesia EEG was evaluated and compared. Six MSPE algorithms, derived from Shannon permutation entropy (SPE), Rényi permutation entropy (RPE) and Tsallis permutation entropy (TPE) combined with the decomposition procedures of the coarse-graining (CG) method and moving-average (MA) analysis, were studied. A thalamo-cortical neural mass model (TCNMM) was used to generate noise-free EEG under anesthesia to quantitatively assess the robustness of each MSPE measure against noise. Then, clinical anesthesia EEG recordings from 20 patients were analyzed with these measures. To validate their effectiveness, the six measures were compared in terms of tracking the dynamical changes in EEG data and their performance in state discrimination. The Pearson correlation coefficient (R) was used to assess the relationship among the MSPE measures. CG-based MSPEs failed in on-line DoA monitoring in the multiscale analysis. In on-line EEG analysis, the MA-based MSPE measures at 5 decomposed scales could track the transient changes of the EEG recordings and statistically distinguish the awake, unconscious and recovery of consciousness (RoC) states. Compared with single-scale SPE and RPE, the MSPEs had better anti-noise ability, and MA-RPE at scale 5 performed best in this respect. MA-TPE outperformed the other measures with a faster tracking speed at the loss of consciousness. MA-based multiscale permutation entropies thus have potential for on-line anesthesia EEG analysis owing to their simple computation and sensitivity to drug-effect changes, whereas CG-based multiscale permutation entropies do not.
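
    The two decomposition procedures differ mainly in how much data survives at scale s, which is plausibly why the MA variants behave better on short on-line EEG windows. A sketch of both, under the standard definitions:

```python
import numpy as np

def coarse_grain(x, s):
    """CG: non-overlapping window means; output length shrinks to len(x)//s."""
    n = len(x) // s
    return x[:n * s].reshape(n, s).mean(axis=1)

def moving_average(x, s):
    """MA: overlapping window means; output keeps len(x) - s + 1 points."""
    return np.convolve(x, np.ones(s) / s, mode='valid')
```

    For an EEG epoch at scale 5, CG leaves one fifth of the samples for the ordinal-pattern statistics while MA loses only four points, so the MA-based entropies are estimated from far more patterns per window.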

  16. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States)

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation-invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. The PIP-NN method is successfully demonstrated in three atom-triatom reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.

  17. Permutations avoiding an increasing number of length-increasing forbidden subsequences

    Directory of Open Access Journals (Sweden)

    Elena Barcucci

    2000-12-01

    Full Text Available A permutation π is said to be τ-avoiding if it does not contain any subsequence having all the same pairwise comparisons as τ. This paper concerns the characterization and enumeration of permutations which avoid a set F_j of subsequences increasing both in number and in length at the same time. Let F_j be the set of subsequences of the form σ(j+1)(j+2), σ being any permutation on {1,...,j}. For j=1 the only subsequence in F_1 is 123, and the 123-avoiding permutations are enumerated by the Catalan numbers; for j=2 the subsequences in F_2 are 1234 and 2134, and the (1234, 2134)-avoiding permutations are enumerated by the Schröder numbers; for each other value of j greater than 2, F_j contains j! subsequences of length j+2, and the permutations avoiding them are enumerated by a number sequence {a_n} such that C_n ≤ a_n ≤ n!, C_n being the nth Catalan number. For each j we determine the generating function of permutations avoiding the subsequences in F_j according to the length, the number of left minima and the number of non-inversions.
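
    A brute-force check of the small cases is straightforward and reproduces the counts quoted above (the function names are ours):

```python
from itertools import combinations, permutations

def contains_pattern(perm, pattern):
    """True if perm has a subsequence with the same pairwise comparisons."""
    k = len(pattern)
    for idx in combinations(range(len(perm)), k):
        window = [perm[i] for i in idx]
        if all((window[a] < window[b]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def count_avoiders(n, patterns):
    """Count permutations of 1..n avoiding every pattern in `patterns`."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not any(contains_pattern(p, q) for q in patterns))

print([count_avoiders(n, [(1, 2, 3)]) for n in range(1, 7)])
# -> [1, 2, 5, 14, 42, 132], the Catalan numbers; with patterns
#    [(1, 2, 3, 4), (2, 1, 3, 4)] one gets the Schröder numbers instead.
```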

  18. Patterns in Permutations and Words

    CERN Document Server

    Kitaev, Sergey

    2011-01-01

    There has been considerable interest recently in the subject of patterns in permutations and words, a new branch of combinatorics with its roots in the works of Rotem, Rogers, and Knuth in the 1970s. Consideration of the patterns in question has been extremely interesting from the combinatorial point of view, and it has proved to be a useful language in a variety of seemingly unrelated problems, including the theory of Kazhdan-Lusztig polynomials, singularities of Schubert varieties, interval orders, Chebyshev polynomials, models in statistical mechanics, and various sorting algorithms.

  19. Rolling Bearing Fault Diagnosis Based on ELCD Permutation Entropy and RVM

    Directory of Open Access Journals (Sweden)

    Jiang Xingmeng

    2016-01-01

    Full Text Available Aiming at the nonstationary characteristics of gear fault vibration signals, a recognition method based on the permutation entropy of ensemble local characteristic-scale decomposition (ELCD) and a relevance vector machine (RVM) is proposed. First, the vibration signal is decomposed by ELCD to obtain a series of intrinsic scale components (ISCs). Second, the principal ISCs are selected according to their kurtosis, the permutation entropy of the principal ISCs is calculated, and the values are combined into a feature vector. Finally, the feature vectors are fed into the RVM classifier for training and testing to identify the type of rolling bearing fault. Experimental results show that this method can effectively diagnose four kinds of working conditions, and the effect is better than that of the local characteristic-scale decomposition (LCD) method.

  20. Some topics on permutable subgroups in infinite groups

    OpenAIRE

    Ialenti, Roberto

    2017-01-01

    The aim of this thesis is to study permutability in different aspects of the theory of infinite groups. In particular, the structure of groups in which all the members of a relevant system of subgroups satisfy a suitable generalized permutability condition is studied.

  1. Development of isothermal-isobaric replica-permutation method for molecular dynamics and Monte Carlo simulations and its application to reveal temperature and pressure dependence of folded, misfolded, and unfolded states of chignolin

    Science.gov (United States)

    Yamauchi, Masataka; Okumura, Hisashi

    2017-11-01

    We developed a two-dimensional replica-permutation molecular dynamics method in the isothermal-isobaric ensemble. The replica-permutation method is a better alternative to the replica-exchange method and was originally developed in the canonical ensemble. It employs the Suwa-Todo algorithm, instead of the Metropolis algorithm, to perform permutations of temperatures and pressures among more than two replicas so that the rejection ratio can be minimized. We showed that the isothermal-isobaric replica-permutation method achieves better sampling efficiency than the isothermal-isobaric replica-exchange method and the infinite swapping method. We applied this method to a β-hairpin mini-protein, chignolin. In this simulation, we observed not only the folded state but also the misfolded state. We calculated the temperature and pressure dependence of the fractions of the folded, misfolded, and unfolded states. Differences in partial molar enthalpy, internal energy, entropy, partial molar volume, and heat capacity were also determined and agreed well with experimental data. We observed a new phenomenon: misfolded chignolin becomes more stable under high-pressure conditions. We also revealed the mechanism of this stability: the TYR2 and TRP9 side chains cover the hydrogen bonds that form the β-hairpin structure, protecting them from the water molecules that approach the protein as the pressure increases.

  2. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.

  3. Multiscale Permutation Entropy Based Rolling Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jinde Zheng

    2014-01-01

    Full Text Available A new rolling bearing fault diagnosis approach based on multiscale permutation entropy (MPE), Laplacian score (LS), and support vector machines (SVMs) is proposed in this paper. Permutation entropy (PE) was recently proposed to measure the randomness of and detect dynamical changes in time series. However, owing to the complexity of mechanical systems, the randomness and dynamic changes of the vibration signal exist at different scales. Thus, the definition of MPE is introduced and employed to extract the nonlinear fault characteristics from the bearing vibration signal at different scales. The SVM is then utilized to accomplish the fault feature classification and complete the diagnostic procedure automatically. Meanwhile, to avoid a high dimension of features, the Laplacian score (LS) is used to refine the feature vector by ranking the features according to their importance and correlations with the main fault information. Finally, the rolling bearing fault diagnosis method based on MPE, LS, and SVM is applied to experimental data. The analysis results indicate that the proposed method can identify the fault categories effectively.

  4. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    Science.gov (United States)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule that optimizes a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical at each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem of minimizing the total weighted quadratic completion time. Under a probabilistic hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst-case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, in which a new crossover method with multiple-point insertion improves the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
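
    The objective and the WSPT ordering are easy to state concretely. A sketch using the standard permutation flow-shop recursion; the consistency condition and the asymptotic argument are beyond it, and the tie-breaking in wspt_order is an assumption:

```python
import numpy as np

def completion_times(p, order):
    """p[i, j] = processing time of job j on machine i; `order` is the
    common job permutation. Standard flow-shop recursion:
    C[i, k] = max(C[i-1, k], C[i, k-1]) + p[i, order[k]]."""
    seq = p[:, order].astype(float)
    C = np.zeros_like(seq)
    for i in range(seq.shape[0]):
        for k in range(seq.shape[1]):
            C[i, k] = max(C[i - 1, k] if i else 0.0,
                          C[i, k - 1] if k else 0.0) + seq[i, k]
    return C[-1]                      # completion times on the last machine

def twqct(p, w, order):
    """Total weighted quadratic completion time of a permutation."""
    return float((w[order] * completion_times(p, order) ** 2).sum())

def wspt_order(p, w):
    """WSPT-style sequence: total work over weight, ascending."""
    return np.argsort(p.sum(axis=0) / w)

p = np.array([[3, 1, 2], [2, 4, 1]])    # 2 machines, 3 jobs
w = np.array([1.0, 2.0, 1.0])
order = wspt_order(p, w)
print(order, twqct(p, w, order))        # -> [1 2 0] 150.0
```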

  5. Rank-based permutation approaches for non-parametric factorial designs.

    Science.gov (United States)

    Umlauft, Maria; Konietschke, Frank; Pauly, Markus

    2017-11-01

    Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up, Wald-type statistics and ANOVA-type statistics are the current state of the art. The former is asymptotically exact but a rather liberal statistical testing procedure for small to moderate sample sizes, while the latter is only an approximation which does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies support these theoretical findings, and a real data set exemplifies the method's applicability.
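
    A minimal sketch of the permutation principle on a rank statistic, using the classical Kruskal-Wallis H as the test statistic; the paper's studentized statistics and asymptotic theory are not reproduced here.

```python
import numpy as np
from scipy.stats import kruskal

def perm_test_ranks(groups, n_perm=2000, seed=0):
    """Permutation test with the Kruskal-Wallis H rank statistic.

    Group labels are permuted, which is exact when the pooled data are
    exchangeable under the null (the finite-exactness setting)."""
    rng = np.random.default_rng(seed)
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    labels = np.concatenate([np.full(len(g), i) for i, g in enumerate(groups)])

    def h_stat(lab):
        return kruskal(*(data[lab == i] for i in range(len(groups)))).statistic

    obs = h_stat(labels)
    exceed = sum(h_stat(rng.permutation(labels)) >= obs for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)   # standard conservative estimator

rng = np.random.default_rng(1)
print(perm_test_ranks([rng.normal(0, 1, 15),
                       rng.normal(0.8, 1, 15),
                       rng.normal(0, 1, 15)]))
```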

  6. Circular Permutation of a Chaperonin Protein: Biophysics and Application to Nanotechnology

    Science.gov (United States)

    Paavola, Chad; Chan, Suzanne; Li, Yi-Fen; McMillan, R. Andrew; Trent, Jonathan

    2004-01-01

    We have designed five circular permutants of a chaperonin protein derived from the hyperthermophilic organism Sulfolobus shibatae. These permuted proteins were expressed in E. coli and are well folded. Furthermore, all the permutants assemble into 18-mer double rings of the same form as the wild-type protein. We characterized the thermodynamics of folding for each permutant by both guanidine denaturation and differential scanning calorimetry. We also examined the assembly of chaperonin rings into higher-order structures that may be used as nanoscale templates. The results show that circular permutation can be used to tune the thermodynamic properties of a protein template as well as to facilitate the fusion of peptides, binding proteins or enzymes onto nanostructured templates.

  7. The magic of universal quantum computing with permutations

    OpenAIRE

    Planat, Michel; Rukhsan-Ul-Haq

    2017-01-01

    The role of permutation gates for universal quantum computing is investigated. The 'magic' of computation is clarified in the permutation gates, their eigenstates, the Wootters discrete Wigner function and state-dependent contextuality (following many contributions on this subject). A first classification of main types of resulting magic states in low dimensions $d \le 9$ is performed.

  8. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
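
    The largest order value rule mentioned above is just a ranking of the continuous vector; a sketch (decreasing order is assumed, as is common for LOV):

```python
import numpy as np

def largest_order_value(v):
    """LOV rule: the largest component gets the first position in the
    job permutation, the second largest the next, and so on."""
    return np.argsort(-np.asarray(v, dtype=float))

print(largest_order_value([0.2, 1.5, 0.9]))  # -> [1 2 0]: job 1 is scheduled first
```

    Sampling the continuous vector from the mixed Gaussian-Cauchy model and decoding it this way is what lets a continuous EDA search the discrete permutation space.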

  9. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    Science.gov (United States)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

    The optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997 when noise is not taken into account. In particular, these quantum gates were chosen to perform the permutation algorithm in hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the problem of the permutation parity faster than a classical algorithm, without requiring entanglement between particles. The only requirement for achieving the speedup is the use of a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by considering an average over the optimal field centered at different values of the reference detuning, which follow a Gaussian distribution. When the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme can also be used for the practical realization of different quantum algorithms in DQDs.

  10. Permutation orbifolds

    International Nuclear Information System (INIS)

    Bantay, P.

    2002-01-01

    A general theory of permutation orbifolds is developed for arbitrary twist groups. Explicit expressions for the number of primaries, the partition function, the genus-one characters, the matrix elements of modular transformations and the fusion rule coefficients are presented, together with the relevant mathematical concepts, such as Λ-matrices and twisted dimensions. The arithmetic restrictions implied by the theory for the allowed modular representations in CFT are discussed. The simplest nonabelian example, with twist group S_3, is described to illustrate the general theory.

  11. A permutations representation that knows what "Eulerian" means

    Directory of Open Access Journals (Sweden)

    Roberto Mantaci

    2001-12-01

    Full Text Available Eulerian numbers (and "alternate Eulerian numbers") are often interpreted as distributions of statistics defined over the symmetric group. The main purpose of this paper is to define a way of representing permutations that provides some other combinatorial interpretations of these numbers. This representation uses a one-to-one correspondence between permutations and the so-called subexceedant functions.
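
    Subexceedant functions are maps f with f(i) ≤ i, so there are exactly n! of them on {1, ..., n}. One standard bijection with permutations, a Fisher-Yates-style product of transpositions, can be sketched as below; the paper's exact correspondence may differ.

```python
from itertools import product

def perm_from_subexceedant(f):
    """Decode a subexceedant function (0-indexed, 0 <= f[i] <= i) into a
    permutation by applying the transpositions (i, f(i)) Fisher-Yates
    style. There are n! such functions, and the map is a bijection."""
    a = list(range(len(f)))
    for i in range(len(f) - 1, 0, -1):
        a[i], a[f[i]] = a[f[i]], a[i]
    return tuple(a)

def all_subexceedant(n):
    """Enumerate all n! subexceedant functions on {0, ..., n-1}."""
    return product(*(range(i + 1) for i in range(n)))

# Sanity check: the 4! = 24 functions map onto 24 distinct permutations.
assert len({perm_from_subexceedant(f) for f in all_subexceedant(4)}) == 24
```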

  12. The Magic of Universal Quantum Computing with Permutations

    Directory of Open Access Journals (Sweden)

    Michel Planat

    2017-01-01

    Full Text Available The role of permutation gates for universal quantum computing is investigated. The “magic” of computation is clarified in the permutation gates, their eigenstates, the Wootters discrete Wigner function, and state-dependent contextuality (following many contributions on this subject). A first classification of a few types of resulting magic states in low dimensions d≤9 is performed.

  13. Young module multiplicities and classifying the indecomposable Young permutation modules

    OpenAIRE

    Gill, Christopher C.

    2012-01-01

    We study the multiplicities of Young modules as direct summands of permutation modules on cosets of Young subgroups. Such multiplicities have become known as the p-Kostka numbers. We classify the indecomposable Young permutation modules and, applying the Brauer construction for p-permutation modules, we give some new reductions for p-Kostka numbers. In particular, we prove that p-Kostka numbers are preserved under multiplying partitions by p, and we strengthen a known reduction given by Henke.

  14. The Structure of a Thermophilic Kinase Shapes Fitness upon Random Circular Permutation.

    Science.gov (United States)

    Jones, Alicia M; Mehta, Manan M; Thomas, Emily E; Atkinson, Joshua T; Segall-Shapiro, Thomas H; Liu, Shirley; Silberg, Jonathan J

    2016-05-20

    Proteins can be engineered for synthetic biology through circular permutation, a sequence rearrangement in which native protein termini become linked and new termini are created elsewhere through backbone fission. However, it remains challenging to anticipate a protein's functional tolerance to circular permutation. Here, we describe new transposons for creating libraries of randomly circularly permuted proteins that minimize peptide additions at their termini, and we use transposase mutagenesis to study the tolerance of a thermophilic adenylate kinase (AK) to circular permutation. We find that libraries expressing permuted AKs with either short or long peptides appended to their N-terminus yield distinct sets of active variants, and we present evidence that this trend arises because permuted protein expression varies across libraries. Mapping all sites that tolerate backbone cleavage onto the AK structure reveals that the largest contiguous regions of sequence that lack cleavage sites are proximal to the phosphotransfer site. A comparison of our results with a range of structure-derived parameters further showed that retention of function correlates most strongly with the distance to the phosphotransfer site, amino acid variability in an AK family sequence alignment, and residue-level deviations in superimposed AK structures. Our work illustrates how permuted protein libraries can be created with minimal peptide additions using transposase mutagenesis, and it reveals a challenge of maintaining consistent expression across permuted variants in a library that minimizes peptide additions. Furthermore, these findings provide a basis for interpreting responses of thermophilic phosphotransferases to circular permutation by calibrating how different structure-derived parameters relate to retention of function in a cellular selection.

  15. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    Science.gov (United States)

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms for computing the minimal reversal distance have been proposed, culminating in the currently best-known theoretical ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first uses the technique of opposition-based learning, leading to an opposition-based memetic algorithm (OBMA); the second improves on it by applying the heuristic of two-breakpoint elimination, leading to a hybrid approach (Hybrid-OBMA). Several experiments were performed with one hundred randomly generated permutations, single benchmark permutations, and biological permutations. The results show that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Hybrid-OBMA also improves on the results of OBMA for permutations of length greater than or equal to 60. The applicability of our algorithms was checked by processing permutations based on biological data, in which case OBMA gave the best average results for all instances.
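
    The breakpoint-elimination heuristic rests on the classical breakpoint lower bound, which is compact enough to show (the function name is ours):

```python
def breakpoints(perm):
    """Breakpoints of an unsigned permutation of 1..n: frame it with 0
    and n+1, then count adjacent entries that differ by more than 1.
    One reversal removes at most two breakpoints, so the reversal
    distance is at least breakpoints(perm) / 2."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

print(breakpoints([3, 1, 2, 4]))   # -> 3, so at least 2 reversals are needed
```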

  16. Permutation groups

    CERN Document Server

    Passman, Donald S

    2012-01-01

    This volume by a prominent authority on permutation groups consists of lecture notes that provide a self-contained account of distinct classification theorems. A ready source of frequently quoted but usually inaccessible theorems, it is ideally suited for professional group theorists as well as students with a solid background in modern algebra. The three-part treatment begins with an introductory chapter and advances to an economical development of the tools of basic group theory, including group extensions, transfer theorems, and group representations and characters.

  17. EPEPT: A web service for enhanced P-value estimation in permutation tests

    Directory of Open Access Journals (Sweden)

    Knijnenburg Theo A

    2011-10-01

    Full Text Available Abstract Background In computational biology, permutation tests have become a widely used tool to assess the statistical significance of an event under investigation. However, the common way of computing the P-value, which expresses the statistical significance, requires a very large number of permutations when small (and thus interesting) P-values are to be accurately estimated. This is computationally expensive and often infeasible. Recently, we proposed an alternative estimator, which requires far fewer permutations than the standard empirical approach while still reliably estimating small P-values [1]. Results The proposed P-value estimator has been enriched with additional functionalities and is made available to the general community through a public website and web service, called EPEPT. This means that the EPEPT routines can be accessed not only via a website, but also programmatically using any programming language that can interact with the web. Examples of web service clients in multiple programming languages can be downloaded. Additionally, EPEPT accepts data of various common experiment types used in computational biology. For these experiment types, EPEPT first computes the permutation values and then performs the P-value estimation. Finally, the source code of EPEPT can be downloaded. Conclusions Different types of users, such as biologists, bioinformaticians and software engineers, can use the method in an appropriate and simple way. Availability: http://informatics.systemsbiology.net/EPEPT/
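
    For orientation, here are the standard empirical estimator and a tail-approximation variant in the spirit of EPEPT's enhanced estimator. The generalized Pareto tail fit follows the idea behind the cited estimator [1]; the service's exact procedure, defaults and the n_tail choice are assumptions here.

```python
import numpy as np
from scipy.stats import genpareto

def empirical_pvalue(obs, null_stats):
    """Standard permutation P-value with the (b + 1)/(B + 1) correction."""
    b = int(np.sum(np.asarray(null_stats) >= obs))
    return (b + 1) / (len(null_stats) + 1)

def tail_pvalue(obs, null_stats, n_tail=250):
    """Tail-approximation P-value: fit a generalized Pareto distribution
    to the largest permutation values and extrapolate beyond them.
    Only meaningful when `obs` lies past the tail threshold."""
    null_stats = np.sort(np.asarray(null_stats, dtype=float))
    threshold = null_stats[-n_tail - 1]
    c, _, scale = genpareto.fit(null_stats[-n_tail:] - threshold, floc=0.0)
    p_tail = n_tail / len(null_stats)                 # P(stat > threshold)
    return p_tail * genpareto.sf(obs - threshold, c, loc=0.0, scale=scale)
```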

  18. SCOPES: steganography with compression using permutation search

    Science.gov (United States)

    Boorboor, Sahar; Zolfaghari, Behrouz; Mozafari, Saadat Pour

    2011-10-01

    LSB (least significant bit) is a widely used method for image steganography, which hides the secret message as a bit stream in the LSBs of pixel bytes in the cover image. This paper proposes a variant of LSB named SCOPES that encodes and compresses the secret message while it is being hidden, by storing addresses instead of message bytes. Reducing the length of the stored message improves the storage capacity and makes the stego image visually less suspicious to a third party. The main idea behind the SCOPES approach is dividing the message into 3-character segments, seeking each segment in the cover image and storing the address of the position containing the segment instead of the segment itself. In this approach, every permutation of the 3 bytes (if found) can be stored along with some extra bits indicating the permutation. In some rare cases the segment may not be found in the image, causing the message to be expanded by some overhead bits instead of being compressed. Experimental results show, however, that SCOPES performs better overall than traditional LSB, even in the worst cases.
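
    The address-instead-of-bytes idea can be sketched in a few lines. The stride-1 scan and the permutation-index encoding are our assumptions; the paper additionally embeds the stored addresses via LSB, which is omitted here.

```python
from itertools import permutations

def find_segment(cover, segment):
    """Scan the cover for any permutation of a 3-byte segment; return
    (address, permutation index) or None. SCOPES stores the address
    plus the few permutation-ID bits instead of the 3 bytes."""
    perms = list(permutations(segment))
    for addr in range(len(cover) - 2):
        window = tuple(cover[addr:addr + 3])
        if window in perms:
            return addr, perms.index(window)
    return None

print(find_segment(b"xxbcaxx", b"abc"))   # -> (2, 3): 'bca' found at offset 2
```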

  19. A discrete firefly meta-heuristic with local search for makespan minimization in permutation flow shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Nader Ghaffari-Nasab

    2010-07-01

    Full Text Available During the past two decades, there has been increasing interest in permutation flow shops with different types of objective functions, such as minimizing the makespan, the weighted mean flow time, etc. The permutation flow shop is formulated as a mixed integer program and is classified as an NP-hard problem. Therefore, a direct solution is not available, and meta-heuristic approaches need to be used to find near-optimal solutions. In this paper, we present a new discrete firefly meta-heuristic to minimize the makespan for the permutation flow-shop scheduling problem. The results of the proposed method are compared with those of an existing ant colony optimization technique. The preliminary results indicate that the new method performs better than the ant colony for some well-known benchmark problems.

  20. A Fast Algorithm for Generating Permutation Distribution of Ranks in ...

    African Journals Online (AJOL)

    ... function of the distribution of the ranks. This further gives insight into the permutation distribution of a rank statistic. The algorithm is implemented with the aid of the computer algebra system Mathematica. Key words: Combinatorics, generating function, permutation distribution, rank statistics, partitions, computer algebra.

  1. Quantile-based permutation thresholds for quantitative trait loci hotspots.

    Science.gov (United States)

    Neto, Elias Chaibub; Keller, Mark P; Broman, Andrew F; Attie, Alan D; Jansen, Ritsert C; Broman, Karl W; Yandell, Brian S

    2012-08-01

    Quantitative trait loci (QTL) hotspots (genomic locations affecting many traits) are a common feature in genetical genomics studies and are biologically interesting since they may harbor critical regulators. Therefore, statistical procedures to assess the significance of hotspots are of key importance. One approach, randomly allocating observed QTL across the genomic locations separately by trait, implicitly assumes all traits are uncorrelated. Recently, an empirical test for QTL hotspots was proposed on the basis of the number of traits that exceed a predetermined LOD value, such as the standard permutation LOD threshold. The permutation null distribution of the maximum number of traits across all genomic locations preserves the correlation structure among the phenotypes, avoiding the detection of spurious hotspots due to nongenetic correlation induced by uncontrolled environmental factors and unmeasured variables. However, by considering only the number of traits above a threshold, without accounting for the magnitude of the LOD scores, relevant information is lost. In particular, biologically interesting hotspots composed of a moderate to small number of traits with strong LOD scores may be neglected as nonsignificant. In this article we propose a quantile-based permutation approach that simultaneously accounts for the number and the LOD scores of traits within the hotspots. By considering a sliding scale of mapping thresholds, our method can assess the statistical significance of both small and large hotspots. Although the proposed approach can be applied to any type of heritable high-volume "omic" data set, we restrict our attention to expression (e)QTL analysis. We assess and compare the performances of these three methods in simulations and we illustrate how our approach can effectively assess the significance of moderate and small hotspots with strong LOD scores in a yeast expression data set.

  2. Permutation entropy and statistical complexity in characterising low-aspect-ratio reversed-field pinch plasma

    International Nuclear Information System (INIS)

    Onchi, T; Fujisawa, A; Sanpei, A; Himura, H; Masamune, S

    2017-01-01

    Permutation entropy and statistical complexity are measures for complex time series. The Bandt–Pompe methodology evaluates the probability distribution using permutations; the method is robust and effective for quantifying the information in time series data. Statistical complexity is the product of the Jensen–Shannon divergence and the permutation entropy. These parameters are introduced to analyse time series of emission and magnetic fluctuations in low-aspect-ratio reversed-field pinch (RFP) plasma. The observed time-series data aggregate in a region of the plane determined by entropy versus complexity, the so-called C–H plane. The C–H plane is a representation space used for distinguishing periodic, chaotic, stochastic and noisy processes in time series data. The characteristics of the emissions and magnetic fluctuations change under different RFP-plasma conditions. The statistical complexities of soft X-ray emissions and magnetic fluctuations depend on the relationships between the reversal and pinch parameters.

  3. Research of Planetary Gear Fault Diagnosis Based on Permutation Entropy of CEEMDAN and ANFIS

    Directory of Open Access Journals (Sweden)

    Moshen Kuai

    2018-03-01

    Full Text Available Because the planetary gear has the characteristics of small volume, light weight and large transmission ratio, it is widely used in high-speed, high-power mechanical systems. Poor working conditions result in frequent failures of planetary gears. A method is proposed for diagnosing faults in planetary gears based on the permutation entropy of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). The original signal is decomposed into 6 intrinsic mode functions (IMFs) and residual components by CEEMDAN. Since the IMFs contain the main characteristic information of planetary gear faults, the time complexity of the IMFs is reflected by permutation entropies to quantify the fault features. The permutation entropies of each IMF component are defined as the input of the ANFIS, and its parameters and membership functions are adaptively adjusted according to training samples. Finally, the fuzzy inference rules are determined and the optimal ANFIS is obtained. The overall recognition rate of the test samples used for the ANFIS is 90%, and the recognition rate for gears with one missing tooth is relatively high. The recognition rates for different fault gears based on the method also achieve good results. Therefore, the proposed method can be applied effectively to planetary gear fault diagnosis.

  4. Research of Planetary Gear Fault Diagnosis Based on Permutation Entropy of CEEMDAN and ANFIS.

    Science.gov (United States)

    Kuai, Moshen; Cheng, Gang; Pang, Yusong; Li, Yong

    2018-03-05

    Because the planetary gear has the characteristics of small volume, light weight and large transmission ratio, it is widely used in high-speed, high-power mechanical systems. Poor working conditions result in frequent failures of planetary gears. A method is proposed for diagnosing faults in planetary gears based on the permutation entropy of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). The original signal is decomposed into 6 intrinsic mode functions (IMFs) and residual components by CEEMDAN. Since the IMFs contain the main characteristic information of planetary gear faults, the time complexity of the IMFs is reflected by permutation entropies to quantify the fault features. The permutation entropies of each IMF component are defined as the input of the ANFIS, and its parameters and membership functions are adaptively adjusted according to training samples. Finally, the fuzzy inference rules are determined and the optimal ANFIS is obtained. The overall recognition rate of the test samples used for the ANFIS is 90%, and the recognition rate for gears with one missing tooth is relatively high. The recognition rates for different fault gears based on the method also achieve good results. Therefore, the proposed method can be applied effectively to planetary gear fault diagnosis.

  5. Permutation groups and transformation semigroups : results and problems

    OpenAIRE

    Araujo, Joao; Cameron, Peter Jephson

    2015-01-01

    J.M. Howie, the influential St Andrews semigroupist, claimed that we value an area of pure mathematics to the extent that (a) it gives rise to arguments that are deep and elegant, and (b) it has interesting interconnections with other parts of pure mathematics. This paper surveys some recent results on the transformation semigroup generated by a permutation group $G$ and a single non-permutation $a$. Our particular concern is the influence that properties of $G$ (related to homogeneity, trans...

  6. Refined composite multiscale weighted-permutation entropy of financial time series

    Science.gov (United States)

    Zhang, Yongping; Shang, Pengjian

    2018-04-01

    For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimated values fluctuate strongly under slight variations of the data location and differ significantly only for time series of different lengths. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing the RCMWPE results with those of other methods on both synthetic data and financial time series shows that RCMWPE not only inherits the advantages of MWPE but is also less sensitive to the data location, more stable and much less dependent on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on daily price return series from Asian and European stock markets. There are significant differences between the Asian and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method is supported by simulations on generated and real data. It can be applied to a variety of fields to quantify the complexity of systems over multiple scales more accurately.
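
    The "refined composite" step can be sketched by averaging the weighted ordinal-pattern distributions over all s coarse-graining offsets before taking a single entropy. This is the spirit of the construction; exact estimator details, such as the weighting and normalization, may differ from the paper.

```python
import numpy as np
from math import factorial, log
from collections import Counter

def wpe_distribution(x, m=3, tau=1):
    """Variance-weighted ordinal-pattern distribution, as in WPE."""
    c = Counter()
    for i in range(len(x) - (m - 1) * tau):
        w = x[i:i + (m - 1) * tau + 1:tau]
        c[tuple(np.argsort(w))] += w.var()
    total = sum(c.values())
    return {pat: v / total for pat, v in c.items()}

def rcmwpe(x, s, m=3, tau=1):
    """Refined composite value at scale s: average the pattern
    distributions over all s coarse-graining offsets, then take one
    normalized Shannon entropy of the averaged distribution."""
    x = np.asarray(x, dtype=float)
    avg = Counter()
    for k in range(s):
        n = (len(x) - k) // s
        y = x[k:k + n * s].reshape(n, s).mean(axis=1)
        for pat, p in wpe_distribution(y, m, tau).items():
            avg[pat] += p / s
    return -sum(p * log(p) for p in avg.values() if p > 0) / log(factorial(m))
```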

  7. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

    Science.gov (United States)

    Traversaro, Francisco; O. Redelico, Francisco

    2018-04-01

    In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the permutation entropy. But there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity as a characterization of time series. The literature describes some resampling methods for quantities used in nonlinear dynamics, such as the largest Lyapunov exponent, but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the permutation entropy estimator. We perform several time series simulations given by well-known stochastic processes, the 1/f^α noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the permutation entropy, has been used extensively in epilepsy research to detect dynamical changes in the electroencephalogram (EEG) signal, with no consideration of the variability of this complexity measure. As an application, the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.

  8. Generalized composite multiscale permutation entropy and Laplacian score based rolling bearing fault diagnosis

    Science.gov (United States)

    Zheng, Jinde; Pan, Haiyang; Yang, Shubao; Cheng, Junsheng

    2018-01-01

    Multiscale permutation entropy (MPE) is a recently proposed nonlinear dynamic method for measuring the randomness and detecting nonlinear dynamic changes of time series, and it can be used effectively to extract nonlinear dynamic fault features from vibration signals of rolling bearings. To overcome the drawback of the coarse-graining process in MPE, an improved MPE method called generalized composite multiscale permutation entropy (GCMPE) is proposed in this paper. The influence of the parameters on GCMPE and its comparison with MPE are studied by analyzing simulated data. GCMPE was applied to fault feature extraction from the vibration signal of rolling bearings; then, based on GCMPE, the Laplacian score for feature selection and a particle swarm optimization based support vector machine, a new fault diagnosis method for rolling bearings is put forward. Finally, the proposed method was applied to experimental data of rolling bearings. The analysis results show that the proposed method can effectively realize the fault diagnosis of rolling bearings and has a higher fault recognition rate than existing methods.

  9. A Flexible Computational Framework Using R and Map-Reduce for Permutation Tests of Massive Genetic Analysis of Complex Traits.

    Science.gov (United States)

    Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker

    2017-01-01

    In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs for calculating the significance level are immense: 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple-QTL scans with epistatic interactions, which has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, based on the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, performing 2×10^5 permutations for a 2D QTL problem in 15 hours using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.

  10. Determining the parity of a permutation using an experimental NMR qutrit

    International Nuclear Information System (INIS)

    Dogra, Shruti; Arvind; Dorai, Kavita

    2014-01-01

    We present the NMR implementation of a recently proposed quantum algorithm to find the parity of a permutation. In the usual qubit model of quantum computation, it is widely believed that computational speedup requires the presence of entanglement and thus cannot be achieved by a single qubit. On the other hand, a qutrit is qualitatively more quantum than a qubit because of the existence of quantum contextuality and a single qutrit can be used for computing. We use the deuterium nucleus oriented in a liquid crystal as the experimental qutrit. This is the first experimental exploitation of a single qutrit to carry out a computational task. - Highlights: • NMR implementation of a quantum algorithm to determine the parity of a permutation. • Algorithm implemented on a single qutrit. • Computational speedup achieved without quantum entanglement. • Single qutrit shows quantum contextuality

  11. NDPA: A generalized efficient parallel in-place N-Dimensional Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Muhammad Elsayed Ali

    2015-09-01

    Full Text Available N-dimensional transpose/permutation is a very important operation in many large-scale data-intensive and scientific applications. These applications include, but are not limited to, the oil industry (e.g. seismic data processing), nuclear medicine, media production, digital signal processing and business intelligence. This paper proposes an efficient in-place N-dimensional permutation algorithm. The algorithm is based on a novel 3D transpose algorithm that was published recently. The proposed algorithm has been tested on 3D, 4D, 5D, 6D and 7D data sets as a proof of concept; breaking the dimension limitation of the base algorithm is the first contribution of this work. The suggested algorithm exploits the idea of mixing both logical and physical permutations together. In the logical permutation, the address map is transposed for each data-unit access; in the physical permutation, actual data elements are swapped. Both permutation levels exploit the fast on-chip memory bandwidth by transferring large amounts of data and allowing for fine-grain SIMD (Single Instruction, Multiple Data) operations. Thus, the performance is improved, as evident from the experimental results section. The algorithm is implemented on an NVidia GeForce GTS 250 GPU (Graphics Processing Unit) containing 128 cores. The rapid increase in GPU performance, coupled with the recent and continuous improvements in its programmability, proves that GPUs are the right choice for computationally demanding tasks; their use is the second contribution of this work. The third contribution is improving the proposed algorithm's performance to its peak, as discussed in the results section.

  12. Permutational symmetries for coincidence rates in multimode multiphotonic interferometry

    Science.gov (United States)

    Khalid, Abdullah; Spivak, Dylan; Sanders, Barry C.; de Guise, Hubert

    2018-06-01

    We obtain coincidence rates for passive optical interferometry by exploiting the permutational symmetries of partially distinguishable input photons, and our approach elucidates qualitative features of multiphoton coincidence landscapes. We treat the interferometer input as a product state of any number of photons in each input mode with photons distinguished by their arrival time. Detectors at the output of the interferometer count photons from each output mode over a long integration time. We generalize and prove the claim of Tillmann et al. [Phys. Rev. X 5, 041015 (2015), 10.1103/PhysRevX.5.041015] that coincidence rates can be elegantly expressed in terms of immanants. Immanants are functions of matrices that exhibit permutational symmetries and the immanants appearing in our coincidence-rate expressions share permutational symmetries with the input state. Our results are obtained by employing representation theory of the symmetric group to analyze systems of an arbitrary number of photons in arbitrarily sized interferometers.

  13. Permuted tRNA genes of Cyanidioschyzon merolae, the origin of the tRNA molecule and the root of the Eukarya domain.

    Science.gov (United States)

    Di Giulio, Massimo

    2008-08-07

    An evolutionary analysis is conducted on the permuted tRNA genes of Cyanidioschyzon merolae, in which the 5' half of the tRNA molecule is codified at the 3' end of the gene and its 3' half is codified at the 5' end. This analysis has shown that permuted genes cannot be considered derived traits but seem to possess characteristics that suggest they are ancestral traits, i.e. they originated when tRNA molecule genes originated for the first time. In particular, if the hypothesis that permuted genes are a derived trait were true, then we should not have been able to observe that the most frequent class of permuted genes is that of the anticodon loop type, for the simple reason that this class would derive by random permutation from a class of non-permuted tRNA genes, which instead is the rarest. This would not explain the high frequency with which permuted tRNA genes with perfectly separate 5' and 3' halves were observed. Clearly the mechanism that produced this class of permuted genes would envisage the existence, in an advanced stage of evolution, of minigenes codifying for the 5' and 3' halves of tRNAs which were assembled in a permuted way at the origin of the tRNA molecule, thus producing a high frequency of permuted genes of the class referred to here. Therefore, this evidence supports the hypothesis that the genes of the tRNA molecule were assembled by minigenes codifying for hairpin-like RNA molecules, as suggested by one model for the origin of tRNA [Di Giulio, M., 1992. On the origin of the transfer RNA molecule. J. Theor. Biol. 159, 199-214; Di Giulio, M., 1999. The non-monophyletic origin of tRNA molecule. J. Theor. Biol. 197, 403-414]. Moreover, the late assembly of the permuted genes of C. merolae, as well as their ancestrality, strengthens the hypothesis of the polyphyletic origins of these genes. Finally, on the basis of the uniqueness and the ancestrality of these permuted genes, I suggest that the root of the Eukarya domain is in the super…

  14. Successful attack on permutation-parity-machine-based neural cryptography.

    Science.gov (United States)

    Seoane, Luís F; Ruttor, Andreas

    2012-02-01

    An algorithm is presented which implements a probabilistic attack on the key-exchange protocol based on permutation parity machines. Instead of imitating the synchronization of the communicating partners, the strategy consists of a Monte Carlo method to sample the space of possible weights during inner rounds and an analytic approach to convey the extracted information from one outer round to the next one. The results show that the protocol under attack fails to synchronize faster than an eavesdropper using this algorithm.

  15. Computing the Jones index of quadratic permutation endomorphisms of O_2

    DEFF Research Database (Denmark)

    Szymanski, Wojciech; Conti, Roberto

    2009-01-01

    We compute the index of the type III_{1/2} factors arising from endomorphisms of the Cuntz algebra O_2 associated to the rank-two permutation matrices.

  16. Analysis of crude oil markets with improved multiscale weighted permutation entropy

    Science.gov (United States)

    Niu, Hongli; Wang, Jun; Liu, Cheng

    2018-03-01

    Entropy measures have recently been used extensively to study the complexity of nonlinear systems. Weighted permutation entropy (WPE) overcomes the fact that PE ignores the amplitude information of a time series, and shows a distinctive ability to extract complexity information from data with abrupt changes in magnitude. The improved (sometimes called composite) multi-scale (MS) method has the advantage of reducing errors and improving accuracy when evaluating multiscale entropy values of time series that are not sufficiently long. In this paper, we combine the merits of WPE and the improved MS method to propose the improved multiscale weighted permutation entropy (IMWPE) method for investigating the complexity of a time series. It is then validated on artificial data (white noise and 1/f noise) and on real market data for Brent and Daqing crude oil. Meanwhile, the complexity properties of crude oil markets are explored for the return series, the volatility series with multiple exponents, and the EEMD-produced intrinsic mode functions (IMFs), which represent different frequency components of the return series. Moreover, the instantaneous amplitude and frequency of Brent and Daqing crude oil are analyzed via the Hilbert transform applied to each IMF.
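
    A minimal single-scale sketch of the weighted permutation entropy underlying IMWPE, assuming the common choice of window variance as the weight; the improved multiscale (coarse-graining) layer of the paper is not reproduced here.

      import numpy as np
      from math import factorial, log

      def weighted_permutation_entropy(x, m=4, tau=1):
          # each length-m window contributes its ordinal pattern, weighted by
          # the window variance so that amplitude information is retained
          x = np.asarray(x, dtype=float)
          acc, total = {}, 0.0
          for t in range(len(x) - (m - 1) * tau):
              w = x[t:t + m * tau:tau]
              key = tuple(np.argsort(w))
              var = np.var(w)
              acc[key] = acc.get(key, 0.0) + var
              total += var
          if total == 0.0:          # constant signal: no amplitude information
              return 0.0
          p = np.array(list(acc.values())) / total
          return -np.sum(p * np.log(p)) / log(factorial(m))   # normalized to [0, 1]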

  17. Sorting signed permutations by inversions in O(nlogn) time.

    Science.gov (United States)

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
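
    As a concrete point of contact, the classic lower bound in this literature comes from counting breakpoints of a signed permutation: framing the permutation with 0 and n+1, every adjacent pair that is not of the form (a, a+1) is a breakpoint, and each inversion can remove at most two of them. A small illustrative sketch (a standard textbook bound, not the O(nlogn) algorithm of the paper):

      def breakpoints(perm):
          # perm: signed permutation of 1..n, e.g. [3, -2, 1]; frame it with 0
          # and n+1, then count adjacencies that are not of the form (a, a+1)
          ext = [0] + list(perm) + [len(perm) + 1]
          return sum(1 for a, b in zip(ext, ext[1:]) if b - a != 1)

      def inversion_distance_lower_bound(perm):
          # every inversion removes at most two breakpoints
          return (breakpoints(perm) + 1) // 2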

  18. On permutation polynomials over finite fields: differences and iterations

    DEFF Research Database (Denmark)

    Anbar Meidl, Nurdagül; Odzak, Almasa; Patel, Vandita

    2017-01-01

    The Carlitz rank of a permutation polynomial f over a finite field Fq is a simple concept that was introduced in the last decade. Classifying permutations over Fq with respect to their Carlitz ranks has some advantages, for instance f with a given Carlitz rank can be approximated by a rational li...

  19. Permutation-invariant distance between atomic configurations

    Science.gov (United States)

    Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel

    2015-09-01

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant under permutations of atoms, avoiding the time-consuming minimization required by other common criteria (like the root mean square distance). Finally, invariance under global rotations is accounted for by a minimization procedure in the space of rotations, solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose satisfies the properties of a metric on the space of atomic configurations. Two examples of applications are proposed. The first consists in evaluating the faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity.
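
    For contrast, the conventional criterion the abstract alludes to minimizes the RMSD over atom matchings; with the Hungarian algorithm (SciPy assumed) that minimization is polynomial, though still performed per pair of configurations. A brief sketch of this baseline, not of the functional-representation distance proposed in the paper:

      import numpy as np
      from scipy.optimize import linear_sum_assignment
      from scipy.spatial.distance import cdist

      def assignment_rmsd(a, b):
          # a, b: (n, 3) position arrays for the same atomic species; the
          # Hungarian algorithm picks the atom matching minimizing the RMSD
          cost = cdist(a, b) ** 2
          rows, cols = linear_sum_assignment(cost)
          return np.sqrt(cost[rows, cols].mean())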

  20. Permutation-invariant distance between atomic configurations

    International Nuclear Information System (INIS)

    Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel

    2015-01-01

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant under permutations of atoms, avoiding the time-consuming minimization required by other common criteria (like the root mean square distance). Finally, invariance under global rotations is accounted for by a minimization procedure in the space of rotations, solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose satisfies the properties of a metric on the space of atomic configurations. Two examples of applications are proposed. The first consists in evaluating the faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity.

  1. Ordered groups and infinite permutation groups

    CERN Document Server

    1996-01-01

    The subjects of ordered groups and of infinite permutation groups have long enjoyed a symbiotic relationship. Although the two subjects come from very different sources, they have in certain ways come together, and each has derived considerable benefit from the other. My own personal contact with this interaction began in 1961. I had done Ph.D. work on sequence convergence in totally ordered groups under the direction of Paul Conrad. In the process, I had encountered "pseudo-convergent" sequences in an ordered group G, which are like Cauchy sequences, except that the differences between terms of large index approach not 0 but a convex subgroup C of G. If C is normal, then such sequences are conveniently described as Cauchy sequences in the quotient ordered group G/C. If C is not normal, of course G/C has no group structure, though it is still a totally ordered set. The best that can be said is that the elements of G permute G/C in an order-preserving fashion. In independent investigations around that t...

  2. Necklaces, Periodic Points and Permutation Representations

    Indian Academy of Sciences (India)

    Necklaces, Periodic Points and Permutation Representations - Fermat's Little Theorem. Somnath Basu, Anindita Bose, Sumit Kumar Sinha and Pankaj Vishe. General Article, Volume 6, Issue 11, November 2001, pp. 18-26.

  3. A hybrid genetic algorithm for the distributed permutation flowshop scheduling problem

    Directory of Open Access Journals (Sweden)

    Jian Gao

    2011-08-01

    The Distributed Permutation Flowshop Scheduling Problem (DPFSP) is a recently proposed scheduling problem that generalizes the classical permutation flow shop scheduling problem. The DPFSP is NP-hard in general, and the study of algorithms for solving it is still at an early stage. In this paper, we propose a GA-based algorithm, denoted GA_LS, for solving this problem with the objective of minimizing the maximum completion time. In the proposed GA_LS, crossover and mutation operators are designed to suit the representation of DPFSP solutions, in which a set of partial job sequences is employed. Furthermore, GA_LS utilizes an efficient local search method to explore neighboring solutions. The local search method uses three proposed rules that move jobs within a factory or between two factories. Intensive experiments are carried out on benchmark instances extended from the Taillard instances. The results indicate that the proposed hybrid genetic algorithm obtains better solutions than all existing algorithms for the DPFSP: it achieves a better relative percentage deviation, and the differences in the results are statistically significant. Best-known solutions for most instances are also updated by our algorithm. Moreover, we show the efficiency of GA_LS by comparing it with similar genetic algorithms using the existing local search methods.
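
    For reference, the objective value of a DPFSP solution can be evaluated with the standard permutation-flowshop recursion applied per factory, the overall makespan being the worst factory makespan. A minimal sketch (the GA operators and local search rules of the paper are not reproduced):

      def flowshop_makespan(seq, p):
          # p[j][k]: processing time of job j on machine k; seq: job order
          m = len(p[0])
          c = [0.0] * m
          for j in seq:
              c[0] += p[j][0]
              for k in range(1, m):
                  c[k] = max(c[k], c[k - 1]) + p[j][k]
          return c[-1]

      def dpfsp_makespan(factory_sequences, p):
          # DPFSP objective: the largest completion time over all factories
          return max(flowshop_makespan(seq, p) for seq in factory_sequences)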

  4. Tag-KEM from Set Partial Domain One-Way Permutations

    Science.gov (United States)

    Abe, Masayuki; Cui, Yang; Imai, Hideki; Kurosawa, Kaoru

    Recently, a framework called Tag-KEM/DEM was introduced to construct efficient hybrid encryption schemes. Although it is known that the generic encode-then-encrypt construction of chosen-ciphertext secure public-key encryption also applies to secure Tag-KEM construction, and some known encoding methods like OAEP can be used for this purpose, it is worth pursuing more efficient encoding methods dedicated to Tag-KEM construction. This paper proposes an encoding method that yields efficient Tag-KEM schemes when combined with set partial domain one-way permutations such as RSA and Rabin's encryption scheme. To our knowledge, this leads to the most practical hybrid encryption scheme of this type. We also present an efficient Tag-KEM which is CCA-secure under the general factoring assumption rather than the Blum factoring assumption.

  5. Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.

    Science.gov (United States)

    Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo

    2017-12-01

    The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgements of DMs in the typical analytic hierarchy process could be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, the interval number judgements are associated with the limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that the interval number judgements are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives, and a vector of interval weights is determined. A new algorithm of solving decision making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.

  6. Permutation entropy of fractional Brownian motion and fractional Gaussian noise

    International Nuclear Information System (INIS)

    Zunino, L.; Perez, D.G.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2008-01-01

    We have worked out theoretical curves for the permutation entropy of fractional Brownian motion and fractional Gaussian noise by using the Bandt and Shiha [C. Bandt, F. Shiha, J. Time Ser. Anal. 28 (2007) 646] theoretical predictions for their corresponding relative frequencies. Comparisons with numerical simulations show an excellent agreement. Furthermore, the entropy gap in the transition between these processes, previously observed via numerical results, is here validated theoretically. Also, we have analyzed the behaviour of the permutation entropy of fractional Gaussian noise for different time delays.
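
    The H = 1/2 boundary of these two families corresponds to plain white noise (fractional Gaussian noise) and the ordinary random walk (fractional Brownian motion), so the entropy gap can be glimpsed numerically in a few lines; a sketch, not the theoretical curves of the paper:

      import numpy as np
      from math import factorial, log

      def permutation_entropy(x, m=3, tau=1):
          counts = {}
          for t in range(len(x) - (m - 1) * tau):
              key = tuple(np.argsort(x[t:t + m * tau:tau]))
              counts[key] = counts.get(key, 0) + 1
          p = np.array(list(counts.values()), dtype=float)
          p /= p.sum()
          return -np.sum(p * np.log(p)) / log(factorial(m))

      rng = np.random.default_rng(0)
      noise = rng.standard_normal(100000)        # fGn with H = 1/2 is white noise
      walk = np.cumsum(noise)                    # fBm with H = 1/2 is a random walk
      print(permutation_entropy(noise))          # close to 1: all patterns equally likely
      print(permutation_entropy(walk))           # visibly smaller: the entropy gap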

  7. Permutation symmetry and the origin of fermion mass hierarchy

    International Nuclear Information System (INIS)

    Babu, K.S.; Mohapatra, R.N.

    1990-01-01

    A realization of the "flavor-democracy" approach to quark and lepton masses is provided in the context of the standard model with a horizontal S_3 permutation symmetry. In this model, the t and b quarks pick up mass at the tree level; the c- and s-quark and τ-lepton masses arise at the one-loop level; the u, d, and μ masses at the two-loop level; and the electron mass at the three-loop level, thus reproducing the observed hierarchical structure without fine tuning of the Yukawa couplings. The pattern of quark mixing angles also emerges naturally, with V_us, V_cb ~ O(ε) and V_ub ~ O(ε^2), where ε is a loop expansion parameter.

  8. On Permuting Cut with Contraction

    OpenAIRE

    Borisavljevic, Mirjana; Dosen, Kosta; Petric, Zoran

    1999-01-01

    The paper presents a cut-elimination procedure for intuitionistic propositional logic in which cut is eliminated directly, without introducing the multiple-cut rule mix, and in which pushing cut above contraction is one of the reduction steps. The presentation of this procedure is preceded by an analysis of Gentzen's mix-elimination procedure, made in the perspective of permuting cut with contraction. It is also shown that in the absence of implication, pushing cut above contraction doesn't p...

  9. A novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism

    Science.gov (United States)

    Ye, Ruisong

    2011-10-01

    This paper proposes a novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism, in which permuting the positions of image pixels is combined with changing their gray values to confuse the relationship between the cipher-image and the plain-image. In the permutation process, a generalized Arnold map is utilized to generate a chaotic orbit used to obtain two index order sequences for permuting the image pixel positions; in the diffusion process, a generalized Arnold map and a generalized Bernoulli shift map are employed to yield two pseudo-random gray value sequences for a two-way diffusion of gray values. The yielded gray value sequences are not only sensitive to the control parameters and initial conditions of the considered chaotic maps, but also strongly depend on the plain-image processed, so the proposed scheme can resist statistical attack, differential attack, and known-plaintext as well as chosen-plaintext attacks. Experimental results are presented with detailed analysis to demonstrate that the proposed image encryption scheme also possesses a large key space to resist brute-force attack.
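
    A toy sketch of the permutation-diffusion architecture described here, with the logistic map standing in for the generalized Arnold and Bernoulli maps of the paper (an assumption for brevity), a single index-order permutation, and one-way XOR diffusion; decryption applies the inverse XOR followed by the inverse permutation.

      import numpy as np

      def chaotic_sequence(n, x0, r=3.99):
          # logistic map used here as a stand-in chaotic generator
          x, out = x0, np.empty(n)
          for i in range(n):
              x = r * x * (1.0 - x)
              out[i] = x
          return out

      def encrypt(img, key=0.31):
          # img: 2-D uint8 array (grayscale image)
          flat = img.flatten()
          order = np.argsort(chaotic_sequence(flat.size, x0=key))      # permutation stage
          stream = (chaotic_sequence(flat.size, x0=key / 2) * 256).astype(np.uint8)
          return (flat[order] ^ stream).reshape(img.shape)             # diffusion stage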

  10. Comparative analysis of automotive paints by laser induced breakdown spectroscopy and nonparametric permutation tests

    International Nuclear Information System (INIS)

    McIntee, Erin; Viglino, Emilie; Rinke, Caitlin; Kumor, Stephanie; Ni Liqiang; Sigman, Michael E.

    2010-01-01

    Laser-induced breakdown spectroscopy (LIBS) has been investigated for the discrimination of automobile paint samples. Paint samples from automobiles of different makes, models, and years were collected and separated into sets based on the color, presence or absence of effect pigments and the number of paint layers. Twelve LIBS spectra were obtained for each paint sample, each an average of five single-shot 'drill down' spectra from consecutive laser ablations at the same spot on the sample. Analyses by a nonparametric permutation test and a parametric Wald test were performed to determine the extent of discrimination within each set of paint samples. The discrimination power and Type I error were assessed for each data analysis method. Conversion of the spectral intensity to a log scale (base 10) resulted in a higher overall discrimination power while maintaining the same significance level. Working on the log scale, the nonparametric permutation tests gave an overall 89.83% discrimination power with a Type I error rate of 4.44% at the nominal significance level of 5%. White paint samples, as a group, were the most difficult to differentiate, with a power of only 86.56%, followed by 95.83% for black paint samples. Parametric analysis of the data set produced lower discrimination power (85.17%) with 3.33% Type I errors, and is not recommended on both theoretical and practical grounds. The nonparametric testing method is applicable across many analytical comparisons, the specific application described here being the pairwise comparison of automotive paint samples.

  11. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; van 't Veld, Aart A.; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
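
    A minimal sketch of the permutation-testing idea for model performance, with scikit-learn assumed and an L1-penalized logistic regression standing in for the LASSO-built NTCP model; X is a dose/clinical feature matrix and y a binary complication outcome (names are illustrative):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict

      def cv_auc(X, y):
          model = LogisticRegression(penalty='l1', solver='liblinear')
          prob = cross_val_predict(model, X, y, cv=5, method='predict_proba')[:, 1]
          return roc_auc_score(y, prob)

      def permutation_pvalue(X, y, n_perm=200, seed=0):
          # permuting outcomes breaks any dose-outcome association, giving a
          # null distribution for the cross-validated AUC
          rng = np.random.default_rng(seed)
          observed = cv_auc(X, y)
          null = [cv_auc(X, rng.permutation(y)) for _ in range(n_perm)]
          return observed, (1 + sum(s >= observed for s in null)) / (n_perm + 1)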

  12. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS

    CERN Document Server

    O'Gorman, Thomas W

    2012-01-01

    Provides the tools needed to successfully perform adaptive tests across a broad range of datasets. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS illustrates the power of adaptive tests and showcases their ability to adjust the testing method to suit a particular set of data. The book utilizes state-of-the-art software to demonstrate the practicality and benefits for data analysis in various fields of study. Beginning with an introduction, the book moves on to explore the underlying concepts of adaptive tests, including: smoothing methods and normalizing transforma

  13. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    Science.gov (United States)

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
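
    A compact NumPy sketch of the MRPP itself (not the SPSS macros of the article), using Euclidean distances and group-size weights, with a Monte Carlo permutation p-value; groups are assumed to have at least two members:

      import numpy as np

      def mrpp(data, labels, n_perm=2000, seed=0):
          # data: (n, p) array; labels: group membership of each row
          data, labels = np.asarray(data, float), np.asarray(labels)
          diff = data[:, None, :] - data[None, :, :]
          d = np.sqrt((diff ** 2).sum(axis=-1))       # pairwise Euclidean distances

          def delta(lab):
              # weighted mean of average within-group pairwise distances
              total = 0.0
              for g in np.unique(lab):
                  idx = np.where(lab == g)[0]
                  sub = d[np.ix_(idx, idx)]
                  within = sub[np.triu_indices(len(idx), k=1)].mean()
                  total += (len(idx) / len(lab)) * within
              return total

          obs = delta(labels)
          rng = np.random.default_rng(seed)
          null = np.array([delta(rng.permutation(labels)) for _ in range(n_perm)])
          p_value = (1 + (null <= obs).sum()) / (n_perm + 1)   # small delta = tight groups
          return obs, p_value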

  14. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Science.gov (United States)

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many common MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To match the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions satisfying the constraints can be generated step by step following the permutation-tree-shaped structure, with problem-related heuristic information introduced in the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  15. Permutation Entropy: New Ideas and Challenges

    Directory of Open Access Journals (Sweden)

    Karsten Keller

    2017-03-01

    Over recent years, some new variants of permutation entropy have been introduced and applied to EEG analysis, including a conditional variant and variants using some additional metric information or based on entropies other than the Shannon entropy. In some situations, it is not completely clear what kind of information the new measures and their algorithmic implementations provide. We discuss the new developments and illustrate them for EEG data.

  16. A Weak Quantum Blind Signature with Entanglement Permutation

    Science.gov (United States)

    Lou, Xiaoping; Chen, Zhigang; Guo, Ying

    2015-09-01

    Motivated by the permutation encryption algorithm, a weak quantum blind signature (QBS) scheme is proposed. It involves three participants, the sender Alice, the signatory Bob and the trusted entity Charlie, in four phases: an initializing phase, a blinding phase, a signing phase and a verifying phase. In a small-scale quantum computation network, Alice blinds the message based on a quantum entanglement permutation encryption algorithm that embraces a chaotic position string. Bob signs the blinded message with private parameters shared beforehand, while Charlie verifies the signature's validity and recovers the original message. Analysis shows that the proposed scheme achieves secure blindness for the signer and traceability for the message owner with the aid of the authentic arbitrator, who plays a crucial role when a dispute arises. In addition, the signature can neither be forged nor disavowed by malicious attackers. It has wide application to e-voting and e-payment systems.

  17. All ternary permutation constraint satisfaction problems parameterized above average have kernels with quadratic numbers of variables

    DEFF Research Database (Denmark)

    Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias

    2010-01-01

    A ternary Permutation-CSP is specified by a subset Π of the symmetric group S3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes the number of triples whose rearrangement (under α) follows a permutation in Π. We prove that all ternary Permutation-CSPs parameterized above average have kernels with quadratic numbers of variables.
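
    A brute-force illustration of the problem statement (exponential in |V|, so only for tiny instances; the paper's contribution is the kernel, not this search):

      from itertools import permutations

      def max_satisfied(variables, constraints, pi):
          # pi: subset of S3 as tuples; a triple t is satisfied when the ranks
          # of (t[0], t[1], t[2]) under the ordering realize a pattern in pi
          best = 0
          for order in permutations(variables):
              rank = {v: i for i, v in enumerate(order)}
              sat = sum(tuple(sorted(range(3), key=lambda i: rank[t[i]])) in pi
                        for t in constraints)
              best = max(best, sat)
          return best

      pi_betweenness = {(0, 1, 2), (2, 1, 0)}    # t[1] lies between t[0] and t[2]
      print(max_satisfied('abcd', [('a', 'b', 'c'), ('d', 'b', 'a')], pi_betweenness))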

  18. A non-permutation flowshop scheduling problem with lot streaming: A Mathematical model

    Directory of Open Access Journals (Sweden)

    Daniel Rossit

    2016-06-01

    In this paper we investigate the use of lot streaming in non-permutation flowshop scheduling problems. The objective is to minimize the makespan subject to the standard flowshop constraints, but where it is now permitted to reorder jobs between machines. In addition, the jobs can be divided into manageable sublots, a strategy known as lot streaming. Computational experiments show that lot streaming reduces the makespan by up to 43% for a wide range of instances when compared to the case in which no job splitting is applied. The benefits grow as the number of stages in the production process increases, but reach a limit. Beyond a certain point, the division of jobs into additional sublots does not improve the solution.

  19. Statistical Significance of the Contribution of Variables to the PCA Solution: An Alternative Permutation Strategy

    Science.gov (United States)

    Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.

    2011-01-01

    In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…

  20. Permutationally invariant state reconstruction

    DEFF Research Database (Denmark)

    Moroder, Tobias; Hyllus, Philipp; Tóth, Géza

    2012-01-01

    Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood, and is solved via convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.

  1. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    Science.gov (United States)

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve its performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220
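
    The NEH heuristic invoked here is simple enough to sketch: order jobs by decreasing total processing time, then insert each job at the position that currently minimizes the makespan. A minimal Python rendering (the bat-algorithm layers of the paper are not reproduced):

      def makespan(seq, p):
          m = len(p[0])
          c = [0] * m
          for j in seq:
              c[0] += p[j][0]
              for k in range(1, m):
                  c[k] = max(c[k], c[k - 1]) + p[j][k]
          return c[-1]

      def neh(p):
          # order jobs by decreasing total processing time, then insert each job
          # at the position of the partial sequence that minimizes the makespan
          jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
          seq = []
          for j in jobs:
              seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                        key=lambda s: makespan(s, p))
          return seq, makespan(seq, p)

      print(neh([[3, 4, 6], [6, 5, 2], [2, 7, 5], [5, 3, 4]]))   # 4 jobs, 3 machines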

  3. Testing for changes using permutations of U-statistics

    Czech Academy of Sciences Publication Activity Database

    Horvath, L.; Hušková, Marie

    2005-01-01

    Vol. 2005, No. 128 (2005), pp. 351-371. ISSN 0378-3758. R&D Projects: GA ČR GA201/00/0769. Institutional research plan: CEZ:AV0Z10750506. Keywords: U-statistics; permutations; change-point; weighted approximation; Brownian bridge. Subject RIV: BD - Theory of Information. Impact factor: 0.481, year: 2005

  4. Analyzing Permutations for AES-like Ciphers: Understanding ShiftRows

    DEFF Research Database (Denmark)

    Beierle, Christof; Jovanovic, Philipp; Lauridsen, Martin Mehl

    2015-01-01

    Designing block ciphers and hash functions in a manner that resembles the AES in many aspects has been very popular since Rijndael was adopted as the Advanced Encryption Standard. However, in sharp contrast to the MixColumns operation, the security implications of the way the state is permuted by the operation resembling ShiftRows have never been studied in depth. Here, we provide the first structured study of the influence of ShiftRows-like operations, or more generally, word-wise permutations, in AES-like ciphers with respect to diffusion properties and resistance towards differential and linear cryptanalysis ... normal form. Using a mixed-integer linear programming approach, we obtain optimal parameters for a wide range of AES-like ciphers, and show improvements on parameters for Rijndael-192, Rijndael-256, PRIMATEs-80 and Prøst-128. As a separate result, we show for specific cases of the state geometry...

  5. Infinity-Norm Permutation Covering Codes from Cyclic Groups

    OpenAIRE

    Karni, Ronen; Schwartz, Moshe

    2017-01-01

    We study covering codes of permutations with the $\\ell_\\infty$-metric. We provide a general code construction, which uses smaller building-block codes. We study cyclic transitive groups as building blocks, determining their exact covering radius, and showing linear-time algorithms for finding a covering codeword. We also bound the covering radius of relabeled cyclic transitive groups under conjugation.

  6. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Rui, E-mail: lirui1401@bjtu.edu.cn; Wang, Jun

    2016-01-08

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of real stock markets and of the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) with MWPE is also investigated. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is first applied to investigate the dynamics of the stock market. • MWPE is employed to explore the complexity of the fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.
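
    The LZC ingredient admits a compact sketch; one standard variant (LZ76-style exhaustive parsing of a symbol string, e.g. the signs of returns) simply counts phrases and is often normalized by n / log2(n):

      def lempel_ziv_complexity(s):
          # count phrases in a left-to-right parsing where each new phrase is the
          # shortest continuation not yet seen in the (self-referential) history
          i, c, n = 0, 0, len(s)
          while i < n:
              length = 1
              while i + length <= n and s[i:i + length] in s[:i + length - 1]:
                  length += 1
              c += 1
              i += length
          return c

      # example: binarize a return series before parsing
      # bits = ''.join('1' if r > 0 else '0' for r in returns)
      # normalized = lempel_ziv_complexity(bits) * math.log2(len(bits)) / len(bits)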

  7. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    International Nuclear Information System (INIS)

    Li, Rui; Wang, Jun

    2016-01-01

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of real stock markets and of the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) with MWPE is also investigated. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is first applied to investigate the dynamics of the stock market. • MWPE is employed to explore the complexity of the fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.

  8. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  9. Simultaneous and Sequential MS/MS Scan Combinations and Permutations in a Linear Quadrupole Ion Trap.

    Science.gov (United States)

    Snyder, Dalton T; Szalwinski, Lucas J; Cooks, R Graham

    2017-10-17

    Methods of performing precursor ion scans as well as neutral loss scans in a single linear quadrupole ion trap have recently been described. In this paper we report methodology for performing permutations of MS/MS scan modes, that is, ordered combinations of precursor, product, and neutral loss scans following a single ion injection event. Only particular permutations are allowed; the sequences demonstrated here are (1) multiple precursor ion scans, (2) precursor ion scans followed by a single neutral loss scan, (3) precursor ion scans followed by product ion scans, and (4) segmented neutral loss scans; (5) in addition, the common product ion scan can be performed earlier in these sequences under certain conditions. Simultaneous scans can also be performed. These include multiple precursor ion scans, precursor ion scans with an accompanying neutral loss scan, and multiple neutral loss scans. We argue that the new capability to perform complex simultaneous and sequential MS^n operations on single ion populations represents a significant step in increasing the selectivity of mass spectrometry.

  10. Optimization and experimental realization of the quantum permutation algorithm

    Science.gov (United States)

    Yalçınkaya, I.; Gedik, Z.

    2017-12-01

    The quantum permutation algorithm provides a computational speed-up over classical algorithms for determining the parity of a given cyclic permutation. For its n-qubit implementations, the number of required quantum gates scales quadratically with n due to the quantum Fourier transforms included. We show here for the n-qubit case that the algorithm can be simplified so that it requires only O(n) quantum gates, which theoretically reduces the complexity of the implementation. To test our results experimentally, we utilize IBM's 5-qubit quantum processor to realize the algorithm by using the original and simplified recipes for the 2-qubit case. It turns out that the latter results in a significantly higher success probability, which allows us to verify the algorithm more precisely than the previous experimental realizations. We also verify the algorithm for the first time for the 3-qubit case, with a considerable success probability, by taking advantage of our simplified scheme.
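
    For comparison with the single-query quantum approach, the classical computation of a permutation's parity goes through its cycle structure (a k-cycle is a product of k-1 transpositions); a small sketch:

      def permutation_parity(perm):
          # perm: tuple of 0-based images; each k-cycle contributes k-1 transpositions
          seen, parity = [False] * len(perm), 0
          for i in range(len(perm)):
              j, length = i, 0
              while not seen[j]:
                  seen[j] = True
                  j = perm[j]
                  length += 1
              if length:
                  parity ^= (length - 1) & 1
          return parity          # 0 = even, 1 = odd

      print(permutation_parity((1, 2, 3, 0)))   # one 4-cycle: odd, prints 1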

  11. Sampling solution traces for the problem of sorting permutations by signed reversals

    Science.gov (United States)

    2012-01-01

    Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we therefore can represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of big permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is however conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists in a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results

  12. Use of spatial symmetry in atomic-integral calculations: an efficient permutational approach

    International Nuclear Information System (INIS)

    Rouzo, H.L.

    1979-01-01

    The minimal number of independent nonzero atomic integrals that occur over arbitrarily oriented basis orbitals of the form R(r)·Y_lm(Ω) is theoretically derived. The corresponding method can be easily applied to any point group, including the molecular continuous groups C_∞v and D_∞h. On the basis of this (theoretical) lower bound, the efficiency of the permutational approach in generating sets of independent integrals is discussed. It is proved that lobe orbitals are always more efficient than the familiar Cartesian Gaussians, in the sense that GLOS provide the shortest integral lists. Moreover, it appears that the new axial GLOS often lead to a number of integrals which is the theoretical lower bound previously defined. With AGLOS, the numbers of two-electron integrals to be computed, stored, and processed are divided by factors of 2.9 (NH_3), 4.2 (C_5H_5), and 3.6 (C_6H_6) with reference to the corresponding CGTOS calculations. Remembering that in the permutational approach atomic integrals are directly computed without any four-index transformation, it appears that its utilization in connection with AGLOS provides one of the most powerful tools for treating symmetrical species. 34 references.

  13. Students' Errors in Solving the Permutation and Combination Problems Based on Problem Solving Steps of Polya

    Science.gov (United States)

    Sukoriyanto; Nusantara, Toto; Subanji; Chandra, Tjang Daniel

    2016-01-01

    This article was written based on the results of a study evaluating students' errors in solving permutation and combination problems in terms of the problem-solving steps of Polya. Twenty-five students were asked to do four problems related to permutation and combination. The research results showed that the students still made mistakes in…

  14. Discrete Chebyshev nets and a universal permutability theorem

    International Nuclear Information System (INIS)

    Schief, W K

    2007-01-01

    The Pohlmeyer-Lund-Regge system which was set down independently in the contexts of Lagrangian field theories and the relativistic motion of a string and which played a key role in the development of a geometric interpretation of soliton theory is known to appear in a variety of important guises such as the vectorial Lund-Regge equation, the O(4) nonlinear σ-model and the SU(2) chiral model. Here, it is demonstrated that these avatars may be discretized in such a manner that both integrability and equivalence are preserved. The corresponding discretization procedure is geometric and algebraic in nature and based on discrete Chebyshev nets and generalized discrete Lelieuvre formulae. In connection with the derivation of associated Bäcklund transformations, it is shown that a generalized discrete Lund-Regge equation may be interpreted as a universal permutability theorem for integrable equations which admit commuting matrix Darboux transformations acting on su(2) linear representations. Three-dimensional coordinate systems and lattices of 'Lund-Regge' type related to particular continuous and discrete Zakharov-Manakov systems are obtained as a by-product of this analysis

  15. Discriminating chaotic and stochastic dynamics through the permutation spectrum test

    Energy Technology Data Exchange (ETDEWEB)

    Kulp, C. W., E-mail: Kulp@lycoming.edu [Department of Astronomy and Physics, Lycoming College, Williamsport, Pennsylvania 17701 (United States); Zunino, L., E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata—CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina)

    2014-09-01

    In this paper, we propose a new heuristic symbolic tool for unveiling chaotic and stochastic dynamics: the permutation spectrum test. Several numerical examples allow us to confirm the usefulness of the introduced methodology. Indeed, we show that it is robust in situations in which other techniques fail (intermittent chaos, hyperchaotic dynamics, stochastic linear and nonlinear correlated dynamics, and deterministic non-chaotic noise-driven dynamics). We illustrate the applicability and reliability of this pragmatic method by examining real complex time series from diverse scientific fields. Taking into account that the proposed test has the advantages of being conceptually simple and computationally fast, we think that it can be of practical utility as an alternative test for determinism.
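
    A related ordinal-pattern diagnostic can be sketched in a few lines: deterministic maps exhibit forbidden patterns while uncorrelated noise realizes them all. This illustrates the spirit of the test; the paper's actual statistic (the standard deviation of windowed permutation spectra) is not reproduced here.

      import numpy as np
      from itertools import permutations

      def ordinal_patterns_present(x, m=3, tau=1):
          seen = set()
          for t in range(len(x) - (m - 1) * tau):
              seen.add(tuple(np.argsort(x[t:t + m * tau:tau])))
          return seen

      rng = np.random.default_rng(1)
      noise = rng.standard_normal(5000)
      x = np.empty(5000); x[0] = 0.4
      for i in range(1, 5000):
          x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])    # fully chaotic logistic map

      full = set(permutations(range(3)))
      print(full - ordinal_patterns_present(noise))   # expected: empty set
      print(full - ordinal_patterns_present(x))       # expected: {(2, 1, 0)} is forbidden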

  16. A permutation testing framework to compare groups of brain networks.

    Science.gov (United States)

    Simpson, Sean L; Lyday, Robert G; Hayasaka, Satoru; Marsh, Anthony P; Laurienti, Paul J

    2013-01-01

    Brain network analyses have moved to the forefront of neuroimaging research over the last decade. However, methods for statistically comparing groups of networks have lagged behind. These comparisons have great appeal for researchers interested in gaining further insight into complex brain function and how it changes across different mental states and disease conditions. Current comparison approaches generally either rely on a summary metric or on mass-univariate nodal or edge-based comparisons that ignore the inherent topological properties of the network, yielding little power and failing to make network level comparisons. Gleaning deeper insights into normal and abnormal changes in complex brain function demands methods that take advantage of the wealth of data present in an entire brain network. Here we propose a permutation testing framework that allows comparing groups of networks while incorporating topological features inherent in each individual network. We validate our approach using simulated data with known group differences. We then apply the method to functional brain networks derived from fMRI data.

  17. Symbolic Detection of Permutation and Parity Symmetries of Evolution Equations

    KAUST Repository

    Alghamdi, Moataz

    2017-06-18

    We introduce a symbolic computational approach to detecting all permutation and parity symmetries in any general evolution equation, and to generating associated invariant polynomials, from given monomials, under the action of these symmetries. Traditionally, discrete point symmetries of differential equations are systematically found by solving complicated nonlinear systems of partial differential equations; in the presence of Lie symmetries, the process can be simplified further. Here, we show how to find parity- and permutation-type discrete symmetries purely based on algebraic calculations. Furthermore, we show that such symmetries always form groups, thereby allowing for the generation of new group-invariant conserved quantities from known conserved quantities. This work also contains an implementation of the said results in Mathematica. In addition, it includes, as a motivation for this work, an investigation of the connection between variational symmetries, described by local Lie groups, and conserved quantities in Hamiltonian systems.

  18. Magic informationally complete POVMs with permutations

    Science.gov (United States)

    Planat, Michel; Gedik, Zafer

    2017-09-01

    Eigenstates of permutation gates are either stabilizer states (for gates in the Pauli group) or magic states, thus allowing universal quantum computation (Planat, Rukhsan-Ul-Haq 2017 Adv. Math. Phys. 2017, 5287862 (doi:10.1155/2017/5287862)). We show in this paper that a subset of such magic states, when acting on the generalized Pauli group, define (asymmetric) informationally complete POVMs. Such informationally complete POVMs, investigated in dimensions 2-12, exhibit simple finite geometries in their projector products and, for dimensions 4, 8 and 9, relate to two-qubit, three-qubit and two-qutrit contextuality.

  19. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic

    Science.gov (United States)

    Xie, Kun; Fox, Grace E.; Liu, Jun; Lyu, Cheng; Lee, Jason C.; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z.

    2016-01-01

    There is considerable scientific interest in understanding how cell assemblies—the long-presumed computational motif—are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i - 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors—the synaptic switch for learning and memory—were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques—which preferentially encode specific and low-combinatorial features and project inter-cortically—is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6—which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems—is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain’s basic computational

  20. Tolerance of a knotted near infrared fluorescent protein to random circular permutation

    Science.gov (United States)

    Pandey, Naresh; Kuypers, Brianna E.; Nassif, Barbara; Thomas, Emily E.; Alnahhas, Razan N.; Segatori, Laura; Silberg, Jonathan J.

    2016-01-01

    Bacteriophytochrome photoreceptors (BphP) are knotted proteins that have been developed as near-infrared fluorescent protein (iRFP) reporters of gene expression. To explore how rearrangements in the peptides that interlace into the knot within the BphP photosensory core affect folding, we subjected iRFP to random circular permutation using an improved transposase mutagenesis strategy and screened for variants that fluoresce. We identified twenty-seven circularly permuted iRFP variants that display biliverdin-dependent fluorescence in Escherichia coli. The variants with the brightest whole-cell fluorescence initiated translation at residues near the domain linker and knot tails, although fluorescent variants were discovered that initiated translation within the PAS and GAF domains. Circularly permuted iRFP retained sufficient cofactor affinity to fluoresce in tissue culture without the addition of biliverdin, and one variant displayed enhanced fluorescence when expressed in bacteria and tissue culture. This variant displayed a similar quantum yield to iRFP, but exhibited increased resistance to chemical denaturation, suggesting that the observed signal increase arose from more efficient protein maturation. These results show how the contact order of a knotted BphP can be altered without disrupting chromophore binding and fluorescence, an important step towards the creation of near-infrared biosensors with expanded chemical-sensing functions for in vivo imaging. PMID:27304983
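
    Conceptually, circular permutation relinks the native termini and opens new ones elsewhere, which at the sequence level is just a rotation; a toy sketch for generating candidate permutants (the sequence and linker are illustrative assumptions):

      def circular_permutants(seq, linker=''):
          # fuse the original termini (optionally through a linker) and open a
          # new N-terminus at every position of the circularized sequence
          joined = seq + linker
          return [joined[i:] + joined[:i] for i in range(len(joined))]

      variants = circular_permutants('MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ')   # toy sequence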

  1. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition, carried out in the "standardized experimental situation" associated with the program, to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test, which has been the primary statistical tool for assessing goodness of fit in the EST research program but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test proposed as an alternative to the chi-square test, being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives and of Kendall's tau(b), a rank-order correlation coefficient. The fifth procedure is a new rank-order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank-order tests for assessing the goodness of fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from a different experimental paradigm, the "network exchange" paradigm, and describe how our procedures may be applied to this data set.
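
    The "Average Absolute Deviation" idea admits a compact Monte Carlo sketch for a multinomial model: the statistic is the mean absolute gap between observed proportions and model probabilities, referenced against datasets resampled from the model itself (a generic rendering, not necessarily the authors' exact procedure):

      import numpy as np

      def aad_test(observed_counts, model_probs, n_sim=10000, seed=0):
          # statistic: mean absolute deviation of observed proportions from the
          # model probabilities; reference distribution drawn from the model
          rng = np.random.default_rng(seed)
          obs = np.asarray(observed_counts, dtype=float)
          p = np.asarray(model_probs, dtype=float)
          n = int(obs.sum())
          stat = np.abs(obs / n - p).mean()
          sims = rng.multinomial(n, p, size=n_sim) / n
          null = np.abs(sims - p).mean(axis=1)
          return stat, (1 + (null >= stat).sum()) / (n_sim + 1)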

  2. On the representation matrices of the spin permutation group. [for atomic and molecular electronic structures

    Science.gov (United States)

    Wilson, S.

    1977-01-01

    A method is presented for the determination of the representation matrices of the spin permutation group (symmetric group), a detailed knowledge of these matrices being required in the study of the electronic structure of atoms and molecules. The method is characterized by the use of two different coupling schemes. Unlike the Yamanouchi spin algebraic scheme, the method is not recursive. The matrices for the fundamental transpositions can be written down directly in one of the two bases. The method results in a computationally significant reduction in the number of matrix elements that have to be stored when compared with, say, the standard Young tableaux group theoretical approach.

  3. Permutation Entropy for Random Binary Sequences

    Directory of Open Access Journals (Sweden)

    Lingfeng Liu

    2015-12-01

    In this paper, we generalize the permutation entropy (PE) measure, which is based on Shannon's entropy, to binary sequences, and theoretically analyze this measure for random binary sequences. We deduce the theoretical value of PE for random binary sequences, which can be used to measure the randomness of binary sequences. We also reveal the relationship between this PE measure and other randomness measures, such as Shannon's entropy and Lempel–Ziv complexity. The results show that PE is consistent with these two measures. Furthermore, we use PE as one of the randomness measures to evaluate the randomness of chaotic binary sequences.

  4. Formulation of a Cyclic Permutation Model with Multinomial Objects

    Directory of Open Access Journals (Sweden)

    Sukma Adi Perdana

    2016-10-01

    This study aims at constructing a mathematical model to count the number of arrangements of objects in a cyclic permutation that has multinomial objects. The model constructed is limited to cyclic permutations with multinomial objects in which at least one object type has a single member. The modelling is based on the mathematical structure of cyclic permutations and multinomial permutations. A cyclic permutation model with multinomial objects has been formulated. The model was proved by validating its structure and by validating its outcome, comparing the model's counting results with manual enumeration. A theorem on cyclic permutations with multinomial objects has also been developed. Keywords: modelling, cyclic permutation, multinomial permutation
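
    The record omits the formula itself, but under the stated condition that at least one object type is a singleton, no nontrivial rotation can fix an arrangement, so the number of distinct circular arrangements reduces to (n-1)!/(n1!n2!...nk!). A small sketch of that standard count:

      from math import factorial
      from functools import reduce

      def circular_multiset_arrangements(multiplicities):
          # Distinct circular arrangements of a multiset, assuming at least
          # one object type occurs exactly once (the record's condition):
          # the singleton pins down the rotation, giving (n-1)!/prod(ni!).
          if 1 not in multiplicities:
              raise ValueError("formula assumes at least one singleton type")
          n = sum(multiplicities)
          denom = reduce(lambda a, b: a * b,
                         (factorial(m) for m in multiplicities), 1)
          return factorial(n - 1) // denom

      # Example: objects {A, A, B, B, C} -> multiplicities (2, 2, 1).
      print(circular_multiset_arrangements((2, 2, 1)))  # 4!/(2!*2!) = 6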

  5. Permutation 2-groups I: structure and splitness

    OpenAIRE

    Elgueta, Josep

    2013-01-01

    By a 2-group we mean a groupoid equipped with a weakened group structure. It is called split when it is equivalent to the semidirect product of a discrete 2-group and a one-object 2-group. By a permutation 2-group we mean the 2-group $\mathbb{S}ym(\mathcal{G})$ of self-equivalences of a groupoid $\mathcal{G}$ and natural isomorphisms between them, with the product given by composition of self-equivalences. These generalize the symmetric groups $\mathsf{S}_n$, $n\geq 1$, obtained when $\mathcal{G}$ is a discrete groupoid.

  6. A permutationally invariant full-dimensional ab initio potential energy surface for the abstraction and exchange channels of the H + CH4 system

    International Nuclear Information System (INIS)

    Li, Jun; Chen, Jun; Zhao, Zhiqiang; Zhang, Dong H.; Xie, Daiqian; Guo, Hua

    2015-01-01

    We report a permutationally invariant global potential energy surface (PES) for the H + CH4 system based on ∼63 000 data points calculated at a high ab initio level (UCCSD(T)-F12a/AVTZ) using the recently proposed permutation invariant polynomial-neural network method. The small fitting error (5.1 meV) indicates a faithful representation of the ab initio points over a large configuration space. The rate coefficients calculated on the PES using tunneling corrected transition-state theory and quasi-classical trajectory are found to agree well with the available experimental and previous quantum dynamical results. The calculated total reaction probabilities (J_tot = 0) including the abstraction and exchange channels using the new potential by a reduced dimensional quantum dynamic method are essentially the same as those on the Xu-Chen-Zhang PES [Chin. J. Chem. Phys. 27, 373 (2014)].

  7. Hippocampal activation during face-name associative memory encoding: blocked versus permuted design

    International Nuclear Information System (INIS)

    De Vogelaere, Frederick; Vingerhoets, Guy; Santens, Patrick; Boon, Paul; Achten, Erik

    2010-01-01

    The contribution of the hippocampal subregions to episodic memory through the formation of new associations between previously unrelated items such as faces and names is established but remains under discussion. Block design studies in this area of research generally tend to show posterior hippocampal activation during encoding of novel associational material while event-related studies emphasize anterior hippocampal involvement. We used functional magnetic resonance imaging to assess the involvement of anterior and posterior hippocampus in the encoding of novel associational material compared to the viewing of previously seen associational material. We used two different experimental designs, a block design and a permuted block design, and applied it to the same associative memory task to perform valid statistical comparisons. Our results indicate that the permuted design was able to capture more anterior hippocampal activation compared to the block design, which emphasized more posterior hippocampal involvement. These differences were further investigated and attributed to a combination of the polymodal stimuli we used and the experimental design. Activation patterns during encoding in both designs occurred along the entire longitudinal axis of the hippocampus, but with different centers of gravity. The maximal activated voxel in the block design was situated in the posterior half of the hippocampus while in the permuted design this was located in the anterior half. (orig.)

  9. Tolerance of a Knotted Near-Infrared Fluorescent Protein to Random Circular Permutation.

    Science.gov (United States)

    Pandey, Naresh; Kuypers, Brianna E; Nassif, Barbara; Thomas, Emily E; Alnahhas, Razan N; Segatori, Laura; Silberg, Jonathan J

    2016-07-12

    Bacteriophytochrome photoreceptors (BphP) are knotted proteins that have been developed as near-infrared fluorescent protein (iRFP) reporters of gene expression. To explore how rearrangements in the peptides that interlace into the knot within the BphP photosensory core affect folding, we subjected iRFPs to random circular permutation using an improved transposase mutagenesis strategy and screened for variants that fluoresce. We identified 27 circularly permuted iRFPs that display biliverdin-dependent fluorescence in Escherichia coli. The variants with the brightest whole cell fluorescence initiated translation at residues near the domain linker and knot tails, although fluorescent variants that initiated translation within the PAS and GAF domains were discovered. Circularly permuted iRFPs retained sufficient cofactor affinity to fluoresce in tissue culture without the addition of biliverdin, and one variant displayed enhanced fluorescence when expressed in bacteria and tissue culture. This variant displayed a quantum yield similar to that of iRFPs but exhibited increased resistance to chemical denaturation, suggesting that the observed increase in the magnitude of the signal arose from more efficient protein maturation. These results show how the contact order of a knotted BphP can be altered without disrupting chromophore binding and fluorescence, an important step toward the creation of near-infrared biosensors with expanded chemical sensing functions for in vivo imaging.

  10. Diversification of Protein Cage Structure Using Circularly Permuted Subunits.

    Science.gov (United States)

    Azuma, Yusuke; Herger, Michael; Hilvert, Donald

    2018-01-17

    Self-assembling protein cages are useful as nanoscale molecular containers for diverse applications in biotechnology and medicine. To expand the utility of such systems, there is considerable interest in customizing the structures of natural cage-forming proteins and designing new ones. Here we report that a circularly permuted variant of lumazine synthase, a cage-forming enzyme from Aquifex aeolicus (AaLS) affords versatile building blocks for the construction of nanocompartments that can be easily produced, tailored, and diversified. The topologically altered protein, cpAaLS, self-assembles into spherical and tubular cage structures with morphologies that can be controlled by the length of the linker connecting the native termini. Moreover, cpAaLS proteins integrate into wild-type and other engineered AaLS assemblies by coproduction in Escherichia coli to form patchwork cages. This coassembly strategy enables encapsulation of guest proteins in the lumen, modification of the exterior through genetic fusion, and tuning of the size and electrostatics of the compartments. This addition to the family of AaLS cages broadens the scope of this system for further applications and highlights the utility of circular permutation as a potentially general strategy for tailoring the properties of cage-forming proteins.

  11. Transformative decision rules, permutability, and non-sequential framing of decision problems

    NARCIS (Netherlands)

    Peterson, M.B.

    2004-01-01

    The concept of transformative decision rules provides a useful tool for analyzing what is often referred to as the 'framing', or 'problem specification', or 'editing' phase of decision making. In the present study we analyze a fundamental aspect of transformative decision rules, viz. permutability.

  12. Multiple comparisons permutation test for image based data mining in radiotherapy

    NARCIS (Netherlands)

    Chen, Chun; Witte, Marnix; Heemsbergen, Wilma; van Herk, Marcel

    2013-01-01

    Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling the multiple comparisons problem.
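
    The record is cut off, but the general shape of a permutation test over images that controls for multiple comparisons can be sketched with the classic max-statistic (Westfall-Young) construction below; this is an illustration, not necessarily the authors' exact test, and the toy arrays are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      def maxstat_permutation_test(images_a, images_b, n_perm=2000):
          # images_a, images_b: (n_subjects, n_voxels) arrays, e.g. flattened
          # dose distributions for patients with / without an outcome.
          # Returns voxel-wise FWER-adjusted p-values.
          data = np.vstack([images_a, images_b])
          n_a = len(images_a)

          def group_diff(x, idx_a):
              mask = np.zeros(len(x), dtype=bool)
              mask[idx_a] = True
              return x[mask].mean(axis=0) - x[~mask].mean(axis=0)

          observed = group_diff(data, np.arange(n_a))
          max_null = np.empty(n_perm)
          for i in range(n_perm):
              idx = rng.permutation(len(data))[:n_a]   # relabel the groups
              max_null[i] = np.abs(group_diff(data, idx)).max()
          # Adjusted p-value: how often the null maximum beats each voxel.
          return (max_null[:, None] >= np.abs(observed)[None, :]).mean(axis=0)

      # Toy data: 10 + 10 "images" of 50 voxels, shifted region in group A.
      a = rng.normal(0, 1, (10, 50)); a[:, :5] += 1.5
      b = rng.normal(0, 1, (10, 50))
      print(maxstat_permutation_test(a, b).round(3)[:10])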

  13. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    Science.gov (United States)

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective, and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the summary-statistic test.
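
    The summary-statistic idea (combining marker Z-statistics while accounting for LD through their correlation matrix) admits a simple textbook construction: whiten the Z-vector with a Cholesky factor of the correlation matrix and form a chi-squared statistic. The sketch below shows that construction; it is our illustration, not necessarily the authors' exact test, and the numbers are hypothetical.

      import numpy as np
      from scipy.stats import chi2

      def summary_stat_gene_test(z, R):
          # z: marker Z-statistics from univariate regressions
          # R: estimated marker correlation matrix (LD structure)
          # If z ~ N(0, R) under the null, L^{-1} z ~ N(0, I) for R = L L^T,
          # so the sum of squares of the whitened scores is chi-squared.
          L = np.linalg.cholesky(R)
          w = np.linalg.solve(L, z)          # whitened Z-scores
          stat = float(w @ w)
          return stat, chi2.sf(stat, df=len(z))

      # Hypothetical: three markers in LD, modest signals.
      R = np.array([[1.0, 0.6, 0.3],
                    [0.6, 1.0, 0.5],
                    [0.3, 0.5, 1.0]])
      z = np.array([2.1, 1.8, 0.4])
      print(summary_stat_gene_test(z, R))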

  14. Permutation entropy based time series analysis: Equalities in the input signal can lead to false conclusions

    Energy Technology Data Exchange (ETDEWEB)

    Zunino, Luciano, E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata – CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina); Olivares, Felipe, E-mail: olivaresfe@gmail.com [Instituto de Física, Pontificia Universidad Católica de Valparaíso (PUCV), 23-40025 Valparaíso (Chile); Scholkmann, Felix, E-mail: Felix.Scholkmann@gmail.com [Research Office for Complex Physical and Biological Systems (ROCoS), Mutschellenstr. 179, 8038 Zurich (Switzerland); Biomedical Optics Research Laboratory, Department of Neonatology, University Hospital Zurich, University of Zurich, 8091 Zurich (Switzerland); Rosso, Osvaldo A., E-mail: oarosso@gmail.com [Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970, Maceió, Alagoas (Brazil); Instituto Tecnológico de Buenos Aires (ITBA) and CONICET, C1106ACD, Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires (Argentina); Complex Systems Group, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Av. Mons. Álvaro del Portillo 12.455, Las Condes, Santiago (Chile)

    2017-06-15

    A symbolic encoding scheme, based on the ordinal relation between the amplitude of neighboring values of a given data sequence, should be implemented before estimating the permutation entropy. Consequently, equalities in the analyzed signal, i.e. repeated equal values, deserve special attention and treatment. In this work, we carefully study the effect that the presence of equalities has on permutation entropy estimated values when these ties are symbolized, as it is commonly done, according to their order of appearance. On the one hand, the analysis of computer-generated time series is initially developed to understand the incidence of repeated values on permutation entropy estimations in controlled scenarios. The presence of temporal correlations is erroneously concluded when true pseudorandom time series with low amplitude resolutions are considered. On the other hand, the analysis of real-world data is included to illustrate how the presence of a significant number of equal values can give rise to false conclusions regarding the underlying temporal structures in practical contexts. - Highlights: • Impact of repeated values in a signal when estimating permutation entropy is studied. • Numerical and experimental tests are included for characterizing this limitation. • Non-negligible temporal correlations can be spuriously concluded by repeated values. • Data digitized with low amplitude resolutions could be especially affected. • Analysis with shuffled realizations can help to overcome this limitation.

  15. A permutation information theory tour through different interest rate maturities: the Libor case.

    Science.gov (United States)

    Bariviera, Aurelio Fernández; Guercio, María Belén; Martinez, Lisana B; Rosso, Osvaldo A

    2015-12-13

    This paper analyses Libor interest rates for seven different maturities and referred to operations in British pounds, euros, Swiss francs and Japanese yen, during the period 2001-2015. The analysis is performed by means of two quantifiers derived from information theory: the permutation Shannon entropy and the permutation Fisher information measure. An anomalous behaviour in the Libor is detected in all currencies except euros during the years 2006-2012. The stochastic switch is more severe in one, two and three months maturities. Given the special mechanism of Libor setting, we conjecture that the behaviour could have been produced by the manipulation that was uncovered by financial authorities. We argue that our methodology is pertinent as a market overseeing instrument. © 2015 The Author(s).

  16. Information transmission and signal permutation in active flow networks

    Science.gov (United States)

    Woodhouse, Francis G.; Fawcett, Joanna B.; Dunkel, Jörn

    2018-03-01

    Recent experiments show that both natural and artificial microswimmers in narrow channel-like geometries will self-organise to form steady, directed flows. This suggests that networks of flowing active matter could function as novel autonomous microfluidic devices. However, little is known about how information propagates through these far-from-equilibrium systems. Through a mathematical analogy with spin-ice vertex models, we investigate here the input–output characteristics of generic incompressible active flow networks (AFNs). Our analysis shows that information transport through an AFN is inherently different from conventional pressure or voltage driven networks. Active flows on hexagonal arrays preserve input information over longer distances than their passive counterparts and are highly sensitive to bulk topological defects, whose presence can be inferred from marginal input–output distributions alone. This sensitivity further allows controlled permutations on parallel inputs, revealing an unexpected link between active matter and group theory that can guide new microfluidic mixing strategies facilitated by active matter and aid the design of generic autonomous information transport networks.

  17. A Novel Method of Fault Diagnosis for Rolling Bearing Based on Dual Tree Complex Wavelet Packet Transform and Improved Multiscale Permutation Entropy

    Directory of Open Access Journals (Sweden)

    Guiji Tang

    2016-01-01

    A novel method of fault diagnosis for rolling bearings, which combines the dual tree complex wavelet packet transform (DTCWPT), the improved multiscale permutation entropy (IMPE), and the linear local tangent space alignment (LLTSA) with the extreme learning machine (ELM), is put forward in this paper. In this method, in order to effectively discover the underlying feature information, DTCWPT, which has attractive properties such as near shift invariance and reduced aliasing, is first utilized to decompose the original signal into a set of subband signals. Then, IMPE, which is designed to reduce the variability of entropy measures, is applied to characterize the properties of each obtained subband signal at different scales. Furthermore, the feature vectors are constructed by combining the IMPE of each subband signal. After the feature vector construction, LLTSA is employed to compress the high dimensional vectors of the training and testing samples into low dimensional vectors with better distinguishability. Finally, the ELM classifier is used to automatically accomplish the condition identification with the low dimensional feature vectors. The experimental data analysis results validate the effectiveness of the presented diagnosis method and demonstrate that this method can be applied to distinguish the different fault types and fault degrees of rolling bearings.

  18. Transportation Mode Detection Based on Permutation Entropy and Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2015-01-01

    With the increasing prevalence of GPS devices and mobile phones, transportation mode detection based on GPS data has become a hot topic in GPS trajectory data analysis. Transportation modes such as walking, driving, bus, and taxi denote an important characteristic of the mobile user. Longitude, latitude, speed, acceleration, and direction are usually used as features in transportation mode detection. In this paper, first, we explore the possibility of using the Permutation Entropy (PE) of speed, a measure of the complexity and uncertainty of a GPS trajectory segment, as a feature for transportation mode detection. Second, we employ an Extreme Learning Machine (ELM) to distinguish GPS trajectory segments of different transportation modes. Finally, to evaluate the performance of the proposed method, we conduct experiments on the GeoLife dataset. Experimental results show that we can get more than 50% accuracy when only using PE as a feature to characterize the trajectory sequence. PE can indeed be effectively used to detect transportation mode from GPS trajectories. The proposed method has much better accuracy and faster running time than methods based on the other features and an SVM classifier.

  19. Generalized permutation symmetry and the flavour problem in SU(2)_L × U(1)

    International Nuclear Information System (INIS)

    Ecker, G.

    1984-01-01

    A generalized permutation group is introduced as a possible horizontal symmetry for SU(2)_L × U(1) gauge theories. It leads to the unique two-generation quark mass matrices with a correct prediction for the Cabibbo angle. For three generations the model exhibits spontaneous CP violation, correlates the Kobayashi-Maskawa mixing parameters s_1 and s_3 and predicts an upper bound for the running top quark mass of approximately 45 GeV. The hierarchy of generations is due to a hierarchy of vacuum expectation values rather than of Yukawa coupling constants. (orig.)

  20. Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation.

    Science.gov (United States)

    Azami, Hamed; Escudero, Javier

    2016-05-01

    Signal segmentation and spike detection are two important biomedical signal processing applications. Often, non-stationary signals must be segmented into piece-wise stationary epochs or spikes need to be found among a background of noise before being further analyzed. Permutation entropy (PE) has been proposed to evaluate the irregularity of a time series. PE is conceptually simple, structurally robust to artifacts, and computationally fast. It has been extensively used in many applications, but it has two key shortcomings. First, when a signal is symbolized using the Bandt-Pompe procedure, only the order of the amplitude values is considered and information regarding the amplitudes is discarded. Second, in the PE, the effect of equal amplitude values in each embedded vector is not addressed. To address these issues, we propose a new entropy measure based on PE: the amplitude-aware permutation entropy (AAPE). AAPE is sensitive to the changes in the amplitude, in addition to the frequency, of the signals thanks to it being more flexible than the classical PE in the quantification of the signal motifs. To demonstrate how the AAPE method can enhance the quality of the signal segmentation and spike detection, a set of synthetic and realistic synthetic neuronal signals, electroencephalograms and neuronal data are processed. We compare the performance of AAPE in these problems against state-of-the-art approaches and evaluate the significance of the differences with a repeated ANOVA with post hoc Tukey's test. In signal segmentation, the accuracy of AAPE-based method is higher than conventional segmentation methods. AAPE also leads to more robust results in the presence of noise. The spike detection results show that AAPE can detect spikes well, even when presented with single-sample spikes, unlike PE. For multi-sample spikes, the changes in AAPE are larger than in PE. We introduce a new entropy metric, AAPE, that enables us to consider amplitude information in the

  1. Information sets as permutation cycles for quadratic residue codes

    Directory of Open Access Journals (Sweden)

    Richard A. Jenson

    1982-01-01

    The two cases p=7 and p=23 are the only known cases where the automorphism group of the [p+1, (p+1)/2] extended binary quadratic residue code, Q(p), properly contains PSL(2,p). These codes have some of their information sets represented as permutation cycles from Aut(Q(p)). Analysis proves that all information sets of Q(7) are so represented but those of Q(23) are not.

  2. Structural analysis of papain-like NlpC/P60 superfamily enzymes with a circularly permuted topology reveals potential lipid binding sites.

    Directory of Open Access Journals (Sweden)

    Qingping Xu

    Full Text Available NlpC/P60 superfamily papain-like enzymes play important roles in all kingdoms of life. Two members of this superfamily, LRAT-like and YaeF/YiiX-like families, were predicted to contain a catalytic domain that is circularly permuted such that the catalytic cysteine is located near the C-terminus, instead of at the N-terminus. These permuted enzymes are widespread in virus, pathogenic bacteria, and eukaryotes. We determined the crystal structure of a member of the YaeF/YiiX-like family from Bacillus cereus in complex with lysine. The structure, which adopts a ligand-induced, "closed" conformation, confirms the circular permutation of catalytic residues. A comparative analysis of other related protein structures within the NlpC/P60 superfamily is presented. Permutated NlpC/P60 enzymes contain a similar conserved core and arrangement of catalytic residues, including a Cys/His-containing triad and an additional conserved tyrosine. More surprisingly, permuted enzymes have a hydrophobic S1 binding pocket that is distinct from previously characterized enzymes in the family, indicative of novel substrate specificity. Further analysis of a structural homolog, YiiX (PDB 2if6 identified a fatty acid in the conserved hydrophobic pocket, thus providing additional insights into possible function of these novel enzymes.

  3. Predecessor and permutation existence problems for sequential dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C. L. (Christopher L.); Hunt, H. B. (Harry B.); Marathe, M. V. (Madhav V.); Rosenkrantz, D. J. (Daniel J.); Stearns, R. E. (Richard E.)

    2002-01-01

    A class of finite discrete dynamical systems, called Sequential Dynamical Systems (SDSs), was introduced in [BMR99, BR99] as a formal model for analyzing simulation systems. An SDS S is a triple (G, F, π), where (i) G(V,E) is an undirected graph with n nodes, each node having a state, (ii) F = (f1, f2, ..., fn), with fi denoting a function associated with node vi ∈ V, and (iii) π is a permutation of (or total order on) the nodes in V. A configuration of an SDS is an n-vector (b1, b2, ..., bn), where bi is the value of the state of node vi. A single SDS transition from one configuration to another is obtained by updating the states of the nodes by evaluating the function associated with each of them in the order given by π. Here, we address the complexity of two basic problems and their generalizations for SDSs. Given an SDS S and a configuration C, the PREDECESSOR EXISTENCE (or PRE) problem is to determine whether there is a configuration C' such that S has a transition from C' to C. (If C has no predecessor, C is known as a garden of Eden configuration.) Our results provide separations between efficiently solvable and computationally intractable instances of the PRE problem. For example, we show that the PRE problem can be solved efficiently for SDSs with Boolean state values when the node functions are symmetric and the underlying graph is of bounded treewidth. In contrast, we show that allowing just one non-symmetric node function renders the problem NP-complete even when the underlying graph is a tree (which has a treewidth of 1). We also show that the PRE problem is efficiently solvable for SDSs whose state values are from a field and whose node functions are linear. Some of the polynomial algorithms also extend to the case where we want to find an ancestor configuration that precedes a given configuration by a logarithmic number of steps. Our results extend some of the earlier results by Sutner [Su95] and Green [Gr87] on the complexity of
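
    A toy rendering of the definitions above, with a brute-force check of PREDECESSOR EXISTENCE for a small Boolean SDS; the graph, local functions and update order are illustrative only.

      from itertools import product

      def sds_step(state, neighbors, functions, order):
          # One SDS transition: nodes update in the order given by the
          # permutation, each applying its local function to its own state
          # and its neighbors' current states.
          state = list(state)
          for v in order:
              inputs = [state[v]] + [state[u] for u in neighbors[v]]
              state[v] = functions[v](inputs)
          return tuple(state)

      def predecessors(config, neighbors, functions, order):
          # Brute-force PREDECESSOR EXISTENCE for small Boolean SDSs.
          n = len(config)
          return [c for c in product((0, 1), repeat=n)
                  if sds_step(c, neighbors, functions, order) == tuple(config)]

      # Toy SDS on a 3-node path with symmetric (majority) local functions.
      neighbors = {0: [1], 1: [0, 2], 2: [1]}
      maj = lambda xs: int(sum(xs) * 2 > len(xs))
      functions = {v: maj for v in range(3)}
      print(predecessors((1, 1, 1), neighbors, functions, order=[0, 1, 2]))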

  4. Random walk generated by random permutations of {1, 2, 3, ..., n + 1}

    International Nuclear Information System (INIS)

    Oshanin, G; Voituriez, R

    2004-01-01

    We study properties of a non-Markovian random walk X_l^(n), l = 0, 1, 2, ..., n, evolving in discrete time l on a one-dimensional lattice of integers, whose moves to the right or to the left are prescribed by the rise-and-descent sequences characterizing random permutations π of [n + 1] = {1, 2, 3, ..., n + 1}. We determine exactly the probability of finding the end-point X_n = X_n^(n) of the trajectory of such a permutation-generated random walk (PGRW) at site X, and show that in the limit n → ∞ it converges to a normal distribution with a smaller, compared to the conventional Polya random walk, diffusion coefficient. We formulate, as well, an auxiliary stochastic process whose distribution is identical to the distribution of the intermediate points X_l^(n), l < n, which enables us to obtain the probability measure of different excursions and to define the asymptotic distribution of the number of 'turns' of the PGRW trajectories
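
    The walk is easy to simulate: a random permutation's rise-and-descent sequence prescribes the steps. The sketch below also checks the reduced spreading numerically; the n/3 variance it compares against is our reading of the anticorrelation between neighbouring steps, not a figure quoted from the record.

      import random
      import statistics

      random.seed(0)

      def pgrw_endpoint(n):
          # A random permutation of {1, ..., n+1}; each rise prescribes a
          # step to the right, each descent a step to the left.
          pi = random.sample(range(1, n + 2), n + 1)
          return sum(1 if pi[i + 1] > pi[i] else -1 for i in range(n))

      n, trials = 100, 20_000
      ends = [pgrw_endpoint(n) for _ in range(trials)]
      # Neighbouring steps are anticorrelated, giving a variance near n/3,
      # i.e. a diffusion coefficient a third of the simple +/-1 walk's.
      print(statistics.variance(ends), n / 3)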

  5. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.; Spoto, F.; Scollo, Giuseppe; Nijholt, Antinus

    2003-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n\geq 1}$, satisfying $L(G_n)=L_n$ for $n\geq 1$, with

  6. Generating all permutations by context-free grammars in Chomsky normal form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2006-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\geq1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n\geq1}$, satisfying $L(G_n)=L_n$ for $n\geq1$, with

  7. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2004-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n\geq1}$, satisfying $L(G_n)=L_n$ for $n\geq 1$, with
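
    To make the construction concrete, here is a hand-written Chomsky-normal-form grammar for L_3 together with a brute-force check that it generates exactly the 3! permutations; this toy grammar is only an illustration and is not one of the families studied in these papers.

      from itertools import permutations

      # CNF grammar generating L_3, the 3! permutations of a, b, c:
      # terminal rules A -> a, B -> b, C -> c; S pairs a first symbol with
      # a nonterminal deriving the two remaining symbols in either order.
      rules = {
          "S":   [("A", "Xbc"), ("B", "Xac"), ("C", "Xab")],
          "Xbc": [("B", "C"), ("C", "B")],
          "Xac": [("A", "C"), ("C", "A")],
          "Xab": [("A", "B"), ("B", "A")],
          "A": ["a"], "B": ["b"], "C": ["c"],
      }

      def derive(symbol):
          # All terminal strings derivable from a nonterminal (finite here).
          out = set()
          for rhs in rules[symbol]:
              if isinstance(rhs, str):            # terminal rule A -> a
                  out.add(rhs)
              else:                               # binary rule X -> Y Z
                  out |= {u + v for u in derive(rhs[0]) for v in derive(rhs[1])}
          return out

      assert derive("S") == {"".join(p) for p in permutations("abc")}
      print(sorted(derive("S")))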

  8. On non-permutation solutions to some two machine flow shop scheduling problems

    NARCIS (Netherlands)

    V. Strusevich (Vitaly); P.J. Zwaneveld (Peter)

    1994-01-01

    In this paper, we study two versions of the two machine flow shop scheduling problem, where schedule length is to be minimized. First, we consider the two machine flow shop with setup, processing, and removal times separated. It is shown that an optimal solution need not be a permutation schedule.

  9. Multivariate Multi-Scale Permutation Entropy for Complexity Analysis of Alzheimer’s Disease EEG

    Directory of Open Access Journals (Sweden)

    Isabella Palamara

    2012-07-01

    An original multivariate multi-scale methodology for assessing the complexity of physiological signals is proposed. The technique is able to incorporate the simultaneous analysis of multi-channel data as a unique block within a multi-scale framework. The basic complexity measure is obtained by using Permutation Entropy, a methodology for time series processing based on ordinal analysis. Permutation Entropy is conceptually simple, structurally robust to noise and artifacts, and computationally very fast, which is relevant for designing portable diagnostics. Since time series derived from biological systems show structures on multiple spatial-temporal scales, the proposed technique can be useful for other types of biomedical signal analysis. In this work, the possibility of distinguishing among the brain states of Alzheimer's disease patients, Mild Cognitive Impairment subjects, and normal healthy elderly subjects is checked on a real, although quite limited, experimental database.

  10. Gene set analysis: limitations in popular existing methods and proposed improvements.

    Science.gov (United States)

    Mishra, Pashupati; Törönen, Petri; Leino, Yrjö; Holm, Liisa

    2014-10-01

    Gene set analysis is the analysis of a set of genes that collectively contribute to a biological process. Most popular gene set analysis methods are based on empirical P-values that require a large number of permutations. Despite the numerous gene set analysis methods developed in the past decade, the most popular methods still suffer from serious limitations. We present a gene set analysis method (mGSZ) based on the Gene Set Z-scoring function (GSZ) and asymptotic P-values. Asymptotic P-value calculation requires fewer permutations, and thus speeds up the gene set analysis process. We compare the GSZ-scoring function with seven popular gene set scoring functions and show that GSZ stands out as the best scoring function. In addition, we show improved performance of the GSA method when the max-mean statistic is replaced by the GSZ scoring function. We demonstrate the importance of both gene and sample permutations by showing the consequences in the absence of one or the other. A comparison of asymptotic and empirical methods of P-value estimation demonstrates a clear advantage of asymptotic P-values over empirical P-values. We show that mGSZ outperforms the state-of-the-art methods based on two different evaluations. We compared mGSZ results with permutation and rotation tests and show that rotation does not improve our asymptotic P-values. We also propose well-known asymptotic distribution models for three of the compared methods. mGSZ is available as an R package from cran.r-project.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Permutation entropy analysis of financial time series based on Hill's diversity number

    Science.gov (United States)

    Zhang, Yali; Shang, Pengjian

    2017-12-01

    In this paper the permutation entropy based on Hill's diversity number (N_{n,r}) is introduced as a new way to assess the complexity of a complex dynamical system such as a stock market. We test the performance of this method with simulated data. Results show that N_{n,r} with appropriate parameters is more sensitive to changes in the system and describes the trends of complex systems clearly. In addition, we study stock closing price series from six indices, three US stock indices and three Chinese stock indices, over different periods; N_{n,r} can quantify the changes in complexity of stock market data. Moreover, we obtain richer information from N_{n,r}, as well as some properties concerning the differences between the US and Chinese stock indices.
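
    Assuming N_{n,r} is the classical Hill diversity number applied to the distribution of ordinal patterns (our reading of the record), a sketch could look as follows; the Gaussian series merely stands in for real closing-price data.

      import numpy as np
      from collections import Counter

      def ordinal_pattern_probs(series, order=3, delay=1):
          # Relative frequencies of Bandt-Pompe ordinal patterns.
          counts = Counter()
          for i in range(len(series) - (order - 1) * delay):
              w = series[i:i + order * delay:delay]
              counts[tuple(np.argsort(w, kind="stable"))] += 1
          total = sum(counts.values())
          return np.array([c / total for c in counts.values()])

      def hill_number(p, r):
          # Hill's diversity number N_r = (sum p_i^r)^(1/(1-r));
          # the limit r -> 1 recovers exp(Shannon entropy).
          p = p[p > 0]
          if np.isclose(r, 1.0):
              return np.exp(-np.sum(p * np.log(p)))
          return np.sum(p ** r) ** (1.0 / (1.0 - r))

      rng = np.random.default_rng(0)
      x = rng.standard_normal(5000)       # stand-in for a price/return series
      probs = ordinal_pattern_probs(x, order=4)
      for r in (0.5, 1.0, 2.0):
          print(r, hill_number(probs, r))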

  12. Brain computation is organized via power-of-two-based permutation logic

    Directory of Open Access Journals (Sweden)

    Kun Xie

    2016-11-01

    There is considerable scientific interest in understanding how cell assemblies - the long-presumed computational motif - are organized so that the brain can generate cognitive behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i − 1), giving rise to the specific-to-general cell-assembly organization capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based computational logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social cognitions. However, modulatory neurons, such as dopaminergic neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact even when the NMDA receptors – the synaptic switch for learning and memory – were deleted throughout adulthood, suggesting that it is likely developmentally pre-configured. Moreover, this logic is implemented in the cortex vertically via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques – which preferentially encode specific and low-combinatorial features and project inter-cortically – is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination, and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the non-randomness in layers 5/6 – which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems – is ideal for robust feedback-control of motivation, emotion, consciousness, and behaviors. These observations suggest that the brain's basic

  13. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage and selection operator).

  14. A novel particle swarm optimization algorithm for permutation flow-shop scheduling to minimize makespan

    International Nuclear Information System (INIS)

    Lian Zhigang; Gu Xingsheng; Jiao Bin

    2008-01-01

    It is well known that the flow-shop scheduling problem (FSSP) is a branch of production scheduling and is NP-hard. Many different approaches have been applied to permutation flow-shop scheduling to minimize makespan, but even moderate size problems cannot be solved with guaranteed optimality by current algorithms. Some literature reports on PSO for continuous optimization problems, but papers applying PSO to discrete scheduling problems are few. In this paper, according to the discrete characteristics of the FSSP, a novel particle swarm optimization (NPSO) algorithm is presented and successfully applied to permutation flow-shop scheduling to minimize makespan. Computational experiments on seven representative instances (Taillard) based on practical data were carried out; comparing the NPSO with a standard GA, we find that the NPSO is clearly more efficacious than the standard GA for the FSSP with makespan minimization.

  15. Index of French nuclear literature: IBM 360 programmes for preparing the permuted index of French titles

    International Nuclear Information System (INIS)

    Chonez, Nicole

    1968-12-01

    This report contains the assembly list, the flow chart and some comments about each of the IBM 360 assembler language programmes used for preparing one of the subject indexes contained in the bibliographical bulletin entitled: 'Index de la Litterature nucleaire francaise'; this bulletin has been produced by the French C.E.A. since 1968. Only the processing phases from the magnetic tape file of the bibliographical references, assumed correct, to the printing out of the permuted index obtained with the French titles of the documents on the tape are considered here. This permuted index has the peculiarity of automatically regrouping synonyms and certain grammatical variations of the words. (author) [fr

  16. Permutation entropy with vector embedding delays

    Science.gov (United States)

    Little, Douglas J.; Kane, Deb M.

    2017-12-01

    Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D − 1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked, when the embedding delay is constrained to scalar form.

  17. Multiscale permutation entropy analysis of electrocardiogram

    Science.gov (United States)

    Liu, Tiebing; Yao, Wenpo; Wu, Min; Shi, Zhaorong; Wang, Jun; Ning, Xinbao

    2017-04-01

    To make a comprehensive nonlinear analysis of the ECG, multiscale permutation entropy (MPE) was applied to ECG feature extraction. Three kinds of ECG from the PhysioNet database are analyzed in this paper: congestive heart failure (CHF) patients, healthy young subjects and healthy elderly subjects. We set the embedding dimension to 4, adjust the scale factor from 2 to 100 with a step size of 2, and compare MPE with multiscale entropy (MSE). As the scale factor increases, the MPE complexity of the three ECG signals first decreases and then increases. When the scale factor is between 10 and 32, the complexities of the three ECGs differ most: on average, the entropy of the elderly is 0.146 less than that of the CHF patients and 0.025 larger than that of the healthy young, in line with normal physiological characteristics. Test results show that MPE can be applied effectively in ECG nonlinear analysis and can distinguish the different ECG signals.
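
    A minimal sketch of multiscale permutation entropy: coarse-grain by non-overlapping window means, then apply ordinal-pattern entropy at each scale. The embedding dimension 4 and even scale factors follow the record; the test signal is synthetic.

      import numpy as np
      from collections import Counter
      from math import factorial, log

      def coarse_grain(x, scale):
          # Non-overlapping window means, the usual multiscale procedure.
          n = len(x) // scale
          return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

      def perm_entropy(x, order=4, delay=1):
          c = Counter(
              tuple(np.argsort(x[i:i + order * delay:delay], kind="stable"))
              for i in range(len(x) - (order - 1) * delay))
          total = sum(c.values())
          h = -sum(v / total * log(v / total) for v in c.values())
          return h / log(factorial(order))

      def mpe(x, order=4, scales=range(2, 21, 2)):
          # Multiscale PE: entropy of the coarse-grained series per scale.
          return {s: perm_entropy(coarse_grain(x, s), order) for s in scales}

      rng = np.random.default_rng(0)
      signal = np.sin(np.linspace(0, 60, 6000)) + 0.5 * rng.standard_normal(6000)
      for s, h in mpe(signal).items():
          print(s, round(h, 3))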

  18. Novel Approach for Lithium-Ion Battery On-Line Remaining Useful Life Prediction Based on Permutation Entropy

    Directory of Open Access Journals (Sweden)

    Luping Chen

    2018-04-01

    The degradation of lithium-ion batteries often leads to electrical system failure. Battery remaining useful life (RUL) prediction can effectively prevent this failure. Battery capacity is usually utilized as a health indicator (HI) for RUL prediction. However, battery capacity is difficult to obtain by monitoring on-line parameters. Therefore, there is a great need for a simple, on-line prediction method to solve this issue. In this paper, permutation entropy (PE), extracted from the discharge voltage curve, is proposed as a novel HI for analyzing battery degradation. The similarity between PE and battery capacity is then judged by Pearson and Spearman correlation analyses. Experimental results illustrate the effectiveness and the excellent similarity performance of the novel HI for indicating battery fading. Furthermore, we propose a hybrid approach combining the variational mode decomposition (VMD) denoising technique with autoregressive integrated moving average (ARIMA) and GM(1,1) models for RUL prediction. Experimental results illustrate the accuracy of the proposed approach for on-line lithium-ion battery RUL prediction.

  19. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    Science.gov (United States)

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling, which belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves itself by the standard PSO and then updates each subpopulation by using different local search schemes such as variable neighborhood search (VNS) and individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using estimation of distribution algorithm (EDA) and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSP instances taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP.

  1. Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.

    Science.gov (United States)

    Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang

    2015-01-01

    Most epileptic EEG classification algorithms are supervised and require large training datasets, which hinders their use in real time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm in efficiency. In addition, three classifiers, K-means, MSK-means and support vector machine (SVM), are used to identify seizures and localize the epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizures with the MSK-means algorithm and delay permutation entropy achieves 4.7% higher accuracy than K-means and 0.7% higher accuracy than the SVM.

  2. Remark on Hopf images in quantum permutation groups $S_n^+$

    OpenAIRE

    Józiak, Paweł

    2016-01-01

    Motivated by a question of A. Skalski and P.M. Sołtan about the inner faithfulness of S. Curran's map, we revisit the results and techniques of T. Banica and J. Bichon's Crelle paper and study some group-theoretic properties of the quantum permutation group on 4 points. This enables us not only to answer the aforementioned question in the positive for the case n=4, k=2, but also to classify the automorphisms of $S_4^+$, describe all the embeddings $O_{-1}(2)\subset S_4^+$ and show that all the ...

  3. A fault diagnosis scheme for planetary gearboxes using adaptive multi-scale morphology filter and modified hierarchical permutation entropy

    Science.gov (United States)

    Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang

    2018-05-01

    The fault diagnosis of planetary gearboxes is crucial to reduce the maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is firstly adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, Laplacian score (LS) approach is employed to refine the fault features. In the end, the obtained features are fed into the binary tree support vector machine (BT-SVM) to accomplish the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.

  4. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.

  5. A Colour Image Encryption Scheme Using Permutation-Substitution Based on Chaos

    Directory of Open Access Journals (Sweden)

    Xing-Yuan Wang

    2015-06-01

    An encryption scheme for colour images using a spatiotemporal chaotic system is proposed. Initially, we use the R, G and B components of a colour plain-image to form a matrix. Then the matrix is permuted by using zigzag path scrambling. The resultant matrix is then passed through a substitution process. Finally, the ciphered colour image is obtained from the confused matrix. Theoretical analysis and experimental results indicate that the proposed scheme is both secure and practical, which makes it suitable for encrypting colour images of any size.
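
    The zigzag permutation stage can be illustrated on a small matrix: entries are read along alternating anti-diagonals and written back row by row. The sketch shows only this permutation step (applied per colour channel for RGB data), not the chaotic substitution stage, and the traversal convention is one plausible choice.

      import numpy as np

      def zigzag_indices(rows, cols):
          # Visit matrix positions along anti-diagonals, alternating
          # direction, as in a zigzag scrambling path.
          idx = []
          for d in range(rows + cols - 1):
              diag = [(i, d - i) for i in range(rows) if 0 <= d - i < cols]
              idx.extend(diag if d % 2 == 0 else diag[::-1])
          return idx

      def zigzag_permute(img):
          # Read pixels along the zigzag path, refill the image row by row.
          flat = np.array([img[i, j] for i, j in zigzag_indices(*img.shape)])
          return flat.reshape(img.shape)

      def zigzag_unpermute(img):
          # Inverse: place row-major pixels back onto the zigzag path.
          out = np.empty_like(img)
          flat = img.reshape(-1)
          for k, (i, j) in enumerate(zigzag_indices(*img.shape)):
              out[i, j] = flat[k]
          return out

      m = np.arange(16).reshape(4, 4)
      assert np.array_equal(zigzag_unpermute(zigzag_permute(m)), m)
      print(zigzag_permute(m))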

  6. A Studentized Permutation Test for the Comparison of Spatial Point Patterns

    DEFF Research Database (Denmark)

    Hahn, Ute

    Empirical K-functions are compared by a permutation test using a studentized test statistic. The proposed test performs convincingly in terms of empirical level and power in a simulation study, even for point patterns where the K-function estimates on neighboring subsamples are not strictly exchangeable. It also shows improved behavior compared to a test suggested by Diggle et al. (1991, 2000) for the comparison of groups of independently replicated point patterns. In an application to two point patterns from pathology that represent capillary positions in sections of healthy and tumorous tissue, our...

  7. PerMallows: An R Package for Mallows and Generalized Mallows Models

    Directory of Open Access Journals (Sweden)

    Ekhine Irurozki

    2016-08-01

    In this paper we present the R package PerMallows, which is a complete toolbox to work with permutations, distances and some of the most popular probability models for permutations: the Mallows and the Generalized Mallows models. The Mallows model is an exponential location model, considered analogous to the Gaussian distribution. It is based on the definition of a distance between permutations. The Generalized Mallows model is its best-known extension. The package includes functions for making inference, sampling and learning such distributions. The distances considered in PerMallows are Kendall's τ, Cayley, Hamming and Ulam.
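
    PerMallows itself is an R package; purely to illustrate the underlying model, the following sketch computes Kendall's tau distance and brute-force Mallows probabilities for a small n in Python.

      import math
      from itertools import permutations

      def kendall_distance(sigma, pi):
          # Kendall's tau distance: number of pairwise discordances between
          # two permutations given as tuples of the same items.
          pos = {item: k for k, item in enumerate(pi)}
          s = [pos[item] for item in sigma]
          return sum(1 for i in range(len(s)) for j in range(i + 1, len(s))
                     if s[i] > s[j])

      def mallows_pmf(center, theta, n):
          # Exact Mallows probabilities P(sigma) ~ exp(-theta * d(sigma,
          # center)), normalised by brute force (feasible for small n only).
          perms = list(permutations(range(n)))
          w = [math.exp(-theta * kendall_distance(p, center)) for p in perms]
          z = sum(w)
          return {p: wi / z for p, wi in zip(perms, w)}

      pmf = mallows_pmf(center=(0, 1, 2, 3), theta=1.0, n=4)
      best = max(pmf, key=pmf.get)
      print(best, round(pmf[best], 4))   # the central ranking is the mode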

  8. A simplified formalism of the algebra of partially transposed permutation operators with applications

    Science.gov (United States)

    Mozrzymas, Marek; Studziński, Michał; Horodecki, Michał

    2018-03-01

    Herein we continue the study of the representation theory of the algebra of permutation operators acting on the n -fold tensor product space, partially transposed on the last subsystem. We develop the concept of partially reduced irreducible representations, which allows us to significantly simplify previously proved theorems and, most importantly, derive new results for irreducible representations of the mentioned algebra. In our analysis we are able to reduce the complexity of the central expressions by getting rid of sums over all permutations from the symmetric group, obtaining equations which are much more handy in practical applications. We also find relatively simple matrix representations for the generators of the underlying algebra. The obtained simplifications and developments are applied to derive the characteristics of a deterministic port-based teleportation scheme written purely in terms of irreducible representations of the studied algebra. We solve an eigenproblem for the generators of the algebra, which is the first step towards a hybrid port-based teleportation scheme and gives us new proofs of the asymptotic behaviour of teleportation fidelity. We also show a connection between the density operator characterising port-based teleportation and a particular matrix composed of an irreducible representation of the symmetric group, which encodes properties of the investigated algebra.

  9. Towards tricking a pathogen's protease into fighting infection: the 3D structure of a stable circularly permuted onconase variant cleaved by HIV-1 protease.

    Directory of Open Access Journals (Sweden)

    Mariona Callís

    Onconase® is a highly cytotoxic amphibian homolog of Ribonuclease A. Here, we describe the construction of circularly permuted Onconase® variants by connecting the N- and C-termini of this enzyme with amino acid residues that are recognized and cleaved by the human immunodeficiency virus protease. Uncleaved circularly permuted Onconase® variants are unusually stable, non-cytotoxic and can internalize in human T-lymphocyte Jurkat cells. The structure, stability and dynamics of an intact and a cleaved circularly permuted Onconase® variant were determined by Nuclear Magnetic Resonance spectroscopy and provide valuable insight into the changes in catalytic efficiency caused by the cleavage. The understanding of the structural environment and the dynamics of the activation process represents a first step toward the development of more effective drugs for the treatment of diseases related to pathogens expressing a specific protease. By taking advantage of the protease's activity to initiate a cytotoxic cascade, this approach is thought to be less susceptible to known resistance mechanisms.

  10. Characterization of the block permutations that represent reversible one-dimensional cellular automata

    Energy Technology Data Exchange (ETDEWEB)

    Seck Tuoh Mora, J. C. [Instituto Politecnico Nacional, Mexico, D. F. (Mexico)

    2001-06-01

    We present a review of reversible one-dimensional cellular automata and their representation by block permutations. We analyze the behavior of these block permutations in detail to obtain their characterization.

  11. Research on the Construction Method of Square Permutation Foundation in Photovoltaic Power Station

    Institute of Scientific and Technical Information of China (English)

    余祖良; 陈魁

    2013-01-01

    A power station that uses solar cell arrays to convert solar radiation into electrical energy is called a solar photovoltaic power station. By mode of operation, such stations are classified as stand-alone (independent) or grid-connected. This article briefly describes several construction methods for the array foundations of photovoltaic power stations.

  12. Molecular symmetry: Why permutation-inversion (PI) groups don't render the point groups obsolete

    Science.gov (United States)

    Groner, Peter

    2018-01-01

    The analysis of spectra of molecules with internal large-amplitude motions (LAMs) requires molecular symmetry (MS) groups that are larger than and significantly different from the more familiar point groups. MS groups are described often by the permutation-inversion (PI) group method. It is shown that point groups still can and should play a significant role together with the PI groups for a class of molecules with internal rotors. In molecules of this class, several simple internal rotors are attached to a rigid molecular frame. The PI groups for this class are semidirect products of the form H ⋊ F, where the invariant subgroup H is a direct product of cyclic groups and F is a point group. This result is used to derive meaningful labels for MS groups, and to derive correlation tables between MS groups and point groups. MS groups of this class have many parallels to space groups of crystalline solids.

  13. A Scalable Permutation Approach Reveals Replication and Preservation Patterns of Network Modules in Large Datasets.

    Science.gov (United States)

    Ritchie, Scott C; Watts, Stephen; Fearnley, Liam G; Holt, Kathryn E; Abraham, Gad; Inouye, Michael

    2016-07-01

    Network modules (topologically distinct groups of edges and nodes) that are preserved across datasets can reveal common features of organisms, tissues, cell types, and molecules. Many statistics to identify such modules have been developed, but testing their significance requires heuristics. Here, we demonstrate that current methods for assessing module preservation are systematically biased and produce skewed p values. We introduce NetRep, a rapid and computationally efficient method that uses a permutation approach to score module preservation without assuming data are normally distributed. NetRep produces unbiased p values and can distinguish between true and false positives during multiple hypothesis testing. We use NetRep to quantify preservation of gene coexpression modules across murine brain, liver, adipose, and muscle tissues. Complex patterns of multi-tissue preservation were revealed, including a liver-derived housekeeping module that displayed adipose- and muscle-specific association with body weight. Finally, we demonstrate the broader applicability of NetRep by quantifying preservation of bacterial networks in gut microbiota between men and women. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
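
    As a loose illustration of the approach, not NetRep's actual statistics or interface, the sketch below scores one simple preservation statistic, the mean intramodular edge weight in a test network, against a permutation null built from random node sets of equal size; all names are hypothetical:

```python
import numpy as np

def module_preservation_pvalue(adj_test, module_nodes, n_perm=10000, rng=None):
    """Permutation p-value for one toy preservation statistic: the mean
    intramodular edge weight of a discovered module in a test network."""
    rng = np.random.default_rng(rng)
    nodes = np.asarray(module_nodes)
    n = adj_test.shape[0]
    off_diag = ~np.eye(len(nodes), dtype=bool)

    def mean_weight(idx):
        sub = adj_test[np.ix_(idx, idx)]
        return sub[off_diag].mean()

    observed = mean_weight(nodes)
    null = np.empty(n_perm)
    for i in range(n_perm):
        # null: random node sets of the same size drawn from the test network
        null[i] = mean_weight(rng.choice(n, size=len(nodes), replace=False))
    pvalue = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, pvalue
```

    A full preservation analysis would combine several such statistics (density, connectivity correlation, and so on); this sketch only conveys the permutation logic.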

  14. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
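
    For reference, the c-statistic at issue is the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non-event. A minimal illustrative implementation (ours, with ties counted as one half):

```python
import numpy as np

def c_statistic(risk, outcome):
    """c-statistic (equivalently, AUC): P(risk_event > risk_nonevent)."""
    risk = np.asarray(risk, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    events, nonevents = risk[outcome], risk[~outcome]
    diff = events[:, None] - nonevents[None, :]  # all event/non-event pairs
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

print(c_statistic([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0: perfect discrimination
```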

  15. True-and-error models violate independence and yet they are testable

    Directory of Open Access Journals (Sweden)

    Michael H. Birnbaum

    2013-11-01

    Full Text Available Birnbaum (2011) criticized tests of transitivity that are based entirely on binary choice proportions. When assumptions of independence and stationarity (iid) of choice responses are violated, choice proportions could lead to wrong conclusions. Birnbaum (2012a) proposed two statistics (correlation and variance of preference reversals) to test iid, using random permutations to simulate p-values. Cha, Choi, Guo, Regenwetter, and Zwilling (2013) defended methods based on marginal proportions but conceded that such methods wrongly diagnose hypothetical examples of Birnbaum (2012a). However, they also claimed that "true and error" models also satisfy independence and also fail in such cases unless they become untestable. This article presents correct true-and-error models; it shows how these models violate iid, how they might correctly identify cases that would be misdiagnosed by marginal proportions, and how they can be tested and rejected. This note also refutes other arguments of Cha et al. (2013), including contentions that other tests failed to violate iid "with flying colors", that violations of iid "do not replicate", that type I errors are not appropriately estimated by the permutation method, and that independence assumptions are not critical to interpretation of marginal choice proportions.

  16. Electromyographic permutation entropy quantifies diaphragmatic denervation and reinnervation.

    Directory of Open Access Journals (Sweden)

    Christopher Kramer

    Full Text Available Spontaneous reinnervation after diaphragmatic paralysis due to trauma, surgery, tumors and spinal cord injuries is frequently observed. A possible explanation could be collateral reinnervation, since the diaphragm is commonly double-innervated by the (accessory) phrenic nerve. Permutation entropy (PeEn), a complexity measure for time series, may reflect a functional state of neuromuscular transmission by quantifying the complexity of interactions across neural and muscular networks. In an established rat model, electromyographic signals of the diaphragm after phrenicotomy were analyzed using PeEn to quantify denervation and reinnervation. Thirty-three anesthetized rats were unilaterally phrenicotomized. After 1, 3, 9, 27 and 81 days, diaphragmatic electromyographic PeEn was analyzed in vivo from sternal, mid-costal and crural areas of both hemidiaphragms. After euthanasia of the animals, both hemidiaphragms were dissected for fiber type evaluation. The electromyographic incidence of an accessory phrenic nerve was 76%. At day 1 after phrenicotomy, PeEn (normalized values) was significantly diminished in the sternal (median: 0.69; interquartile range: 0.66-0.75) and mid-costal area (0.68; 0.66-0.72) compared to the non-denervated side (0.84; 0.78-0.90) at threshold p<0.05. In the crural area, innervated by the accessory phrenic nerve, PeEn remained unchanged (0.79; 0.72-0.86). During reinnervation over 81 days, PeEn normalized in the mid-costal area (0.84; 0.77-0.86), whereas it remained reduced in the sternal area (0.77; 0.70-0.81). Fiber type grouping, a histological sign of reinnervation, was found in the mid-costal area in 20% after 27 days and in 80% after 81 days. Collateral reinnervation can restore diaphragm activity after phrenicotomy. Electromyographic PeEn represents a new, distinctive assessment characterizing intramuscular function following denervation and reinnervation.
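
    Permutation entropy itself is simple to compute: tally the ordinal patterns of length D occurring in the signal and take the Shannon entropy of their relative frequencies, normalized by log D! so that values fall in [0, 1]. A minimal sketch (ours, not the study's analysis pipeline):

```python
import math
import random
from collections import Counter

def permutation_entropy(x, D=3, tau=1):
    """Normalized permutation entropy of a 1-D signal.

    Counts ordinal patterns of length D (lag tau) and returns their Shannon
    entropy divided by log(D!), so white noise approaches 1 and a constant
    or strictly monotone signal approaches 0.
    """
    patterns = Counter(
        tuple(sorted(range(D), key=lambda k: x[i + k * tau]))
        for i in range(len(x) - (D - 1) * tau)
    )
    total = sum(patterns.values())
    H = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return H / math.log(math.factorial(D))

print(permutation_entropy([random.gauss(0, 1) for _ in range(10000)], D=3))  # ~1.0
```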

  17. Structural consequences of cutting a binding loop: two circularly permuted variants of streptavidin

    International Nuclear Information System (INIS)

    Le Trong, Isolde; Chu, Vano; Xing, Yi; Lybrand, Terry P.; Stayton, Patrick S.; Stenkamp, Ronald E.

    2013-01-01

    The crystal structures of two circularly permuted streptavidins probe the role of a flexible loop in the tight binding of biotin. Molecular-dynamics calculations for one of the mutants suggest that increased fluctuations in a hydrogen bond between the protein and biotin are associated with cleavage of the binding loop. Circular permutation of streptavidin was carried out in order to investigate the role of a main-chain amide in stabilizing the high-affinity complex of the protein and biotin. Mutant proteins CP49/48 and CP50/49 were constructed to place new N-termini at residues 49 and 50 in a flexible loop involved in stabilizing the biotin complex. Crystal structures of the two mutants show that half of each loop closes over the binding site, as observed in wild-type streptavidin, while the other half adopts the open conformation found in the unliganded state. The structures are consistent with kinetic and thermodynamic data and indicate that the loop plays a role in enthalpic stabilization of the bound state via the Asn49 amide–biotin hydrogen bond. In wild-type streptavidin, the entropic penalties of immobilizing a flexible portion of the protein to enhance binding are kept to a manageable level by using a contiguous loop of medium length (six residues) which is already constrained by its anchorage to strands of the β-barrel protein. A molecular-dynamics simulation for CP50/49 shows that cleavage of the binding loop results in increased structural fluctuations for Ser45 and that these fluctuations destabilize the streptavidin–biotin complex.

  18. Comparing vector-based and Bayesian memory models using large-scale datasets: User-generated hashtag and tag prediction on Twitter and Stack Overflow.

    Science.gov (United States)

    Stanley, Clayton; Byrne, Michael D

    2016-12-01

    The growth of social media and user-created content on online sites provides unique opportunities to study models of human declarative memory. By framing the task of choosing a hashtag for a tweet and tagging a post on Stack Overflow as a declarative memory retrieval problem, 2 cognitively plausible declarative memory models were applied to millions of posts and tweets and evaluated on how accurately they predict a user's chosen tags. An ACT-R based Bayesian model and a random permutation vector-based model were tested on the large data sets. The results show that past user behavior of tag use is a strong predictor of future behavior. Furthermore, past behavior was successfully incorporated into the random permutation model that previously used only context. Also, ACT-R's attentional weight term was linked to an entropy-weighting natural language processing method used to attenuate high-frequency words (e.g., articles and prepositions). Word order was not found to be a strong predictor of tag use, and the random permutation model performed comparably to the Bayesian model without including word order. This shows that the strength of the random permutation model is not in the ability to represent word order, but rather in the way in which context information is successfully compressed. The results of the large-scale exploration show how the architecture of the 2 memory models can be modified to significantly improve accuracy, and may suggest task-independent general modifications that can help improve model fit to human data in a much wider range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Mass textures and wolfenstein parameters from breaking the flavour permutational symmetry

    Energy Technology Data Exchange (ETDEWEB)

    Mondragon, A; Rivera, T. [Instituto de Fisica, Universidad Nacional Autonoma de Mexico,Mexico D.F. (Mexico); Rodriguez Jauregui, E. [Deutsches Elekronen-Synchrotron, Theory Group, Hamburg (Germany)

    2001-12-01

    We will give an overview of recent progress in the phenomenological study of quark mass matrices, quark flavour mixings and CP-violation, with emphasis on the possibility of an underlying discrete, flavour permutational symmetry and its breaking, from which realistic models of mass generation could be built. The quark mixing angles and CP-violating phase, as well as the Wolfenstein parameters, are given in terms of four quark mass ratios and only two parameters (Z^{1/2}, φ) characterizing the symmetry-breaking pattern. Excellent agreement with all current experimental data is found.

  20. Statistical Methods for Comparative Phenomics Using High-Throughput Phenotype Microarrays

    KAUST Repository

    Sturino, Joseph

    2010-01-24

    We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second approach employs functional principal component analysis. Properties of the proposed methods are investigated on both simulated data and data sets from the PM platform.
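
    A hedged sketch of the first approach (names and details ours, not the authors' implementation): measure the L2 distance between the two treatments' mean curves, then rebuild that distance under random relabelings of the curves to obtain a permutation p-value.

```python
import numpy as np

def mean_curve_distance_test(curves_a, curves_b, n_perm=5000, rng=None):
    """Permutation test for the L2 distance between two treatments' mean curves.

    curves_a, curves_b: arrays of shape (n_curves, n_timepoints) on a common grid.
    """
    rng = np.random.default_rng(rng)
    pooled = np.vstack([curves_a, curves_b])
    n_a = len(curves_a)

    def stat(a, b):
        return np.sqrt(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))

    observed = stat(curves_a, curves_b)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(pooled))  # random relabeling of curves
        null[i] = stat(pooled[idx[:n_a]], pooled[idx[n_a:]])
    return observed, (1 + np.sum(null >= observed)) / (n_perm + 1)
```

    The variant based on areas under mean curves replaces `stat` with the difference of trapezoidal areas; the permutation machinery is unchanged.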

  1. Efficiency and credit ratings: a permutation-information-theory analysis

    International Nuclear Information System (INIS)

    Bariviera, Aurelio Fernandez; Martinez, Lisana B; Zunino, Luciano; Belén Guercio, M; Rosso, Osvaldo A

    2013-01-01

    The role of credit rating agencies has been under severe scrutiny after the subprime crisis. In this paper we explore the relationship between credit ratings and informational efficiency of a sample of thirty-nine corporate bonds of US oil and energy companies from April 2008 to November 2012. For this purpose we use a powerful statistical tool, relatively new in the financial literature: the complexity–entropy causality plane. This representation space allows us to graphically classify the different bonds according to their degree of informational efficiency. We find that this classification agrees with the credit ratings assigned by Moody's. In particular, we detect the formation of two clusters, which correspond to the global categories of investment and speculative grades. Regarding the latter cluster, two subgroups reflect distinct levels of efficiency. Additionally, we also find an intriguing absence of correlation between informational efficiency and firm characteristics. This allows us to conclude that the proposed permutation-information-theory approach provides an alternative practical way to justify bond classification. (paper)

  2. Computational fitness landscape for all gene-order permutations of an RNA virus.

    Directory of Open Access Journals (Sweden)

    Kwang-il Lim

    2009-02-01

    Full Text Available How does the growth of a virus depend on the linear arrangement of genes in its genome? Answering this question may enhance our basic understanding of virus evolution and advance applications of viruses as live attenuated vaccines, gene-therapy vectors, or anti-tumor therapeutics. We used a mathematical model for vesicular stomatitis virus (VSV), a prototype RNA virus that encodes five genes (N-P-M-G-L), to simulate the intracellular growth of all 120 possible gene-order variants. Simulated yields of virus infection varied by 6,000-fold and were found to be most sensitive to gene-order permutations that increased levels of the L gene transcript or reduced levels of the N gene transcript, the lowest and highest expressed genes of the wild-type virus, respectively. Effects of gene order on virus growth also depended upon the host-cell environment, reflecting different resources for protein synthesis and different cell susceptibilities to infection. Moreover, by computationally deleting intergenic attenuations, which define a key mechanism of transcriptional regulation in VSV, the variation in growth associated with the 120 gene-order variants was drastically narrowed from 6,000- to 20-fold, and many variants produced higher progeny yields than wild-type. These results suggest that regulation by intergenic attenuation preceded or co-evolved with the fixation of the wild-type gene order in the evolution of VSV. In summary, our models have begun to reveal how gene functions, gene regulation, and genomic organization of viruses interact with their host environments to define processes of viral growth and evolution.

  3. On optimal (non-Trojan) semi-Latin squares with side n and block size n: Construction procedure and admissible permutations

    International Nuclear Information System (INIS)

    Chigbu, P.E.; Ukekwe, E.C.; Ikekeonwu, G.A.M.

    2006-12-01

    There is a special family of the (n x n)/k semi-Latin squares called the Trojan squares which are optimal among semi-Latin squares of equivalent sizes. Unfortunately, Trojan squares do not exist for all k; for instance, there is no Trojan square for k ≥ n. However, the need usually arises for constructing optimal semi-Latin squares where no Trojan squares exist. Bailey made a conjecture on optimal semi-Latin squares for k ≥ n and, based on this conjecture, optimal non-Trojan semi-Latin squares are here constructed for k = n, considering the inherent Trojan squares for k < n. A lemma substantiating this conjecture for k = n is given and proved. In addition, the properties of the admissible permutation sets used in constructing these optimal squares are made evident based on the systematic group-theoretic algorithm of Bailey and Chigbu. Algorithms for identifying the admissible permutations as well as constructing the optimal non-Trojan (n x n)/k = n semi-Latin squares for odd n and n = 4 are given. (author)

  4. Widespread occurrence of organelle genome-encoded 5S rRNAs including permuted molecules.

    Science.gov (United States)

    Valach, Matus; Burger, Gertraud; Gray, Michael W; Lang, B Franz

    2014-12-16

    5S Ribosomal RNA (5S rRNA) is a universal component of ribosomes, and the corresponding gene is easily identified in archaeal, bacterial and nuclear genome sequences. However, organelle gene homologs (rrn5) appear to be absent from most mitochondrial and several chloroplast genomes. Here, we re-examine the distribution of organelle rrn5 by building mitochondrion- and plastid-specific covariance models (CMs) with which we screened organelle genome sequences. We not only recover all organelle rrn5 genes annotated in GenBank records, but also identify more than 50 previously unrecognized homologs in mitochondrial genomes of various stramenopiles, red algae, cryptomonads, malawimonads and apusozoans, and surprisingly, in the apicoplast (highly derived plastid) genomes of the coccidian pathogens Toxoplasma gondii and Eimeria tenella. Comparative modeling of RNA secondary structure reveals that mitochondrial 5S rRNAs from brown algae adopt a permuted triskelion shape that has not been seen elsewhere. Expression of the newly predicted rrn5 genes is confirmed experimentally in 10 instances, based on our own and published RNA-Seq data. This study establishes that particularly mitochondrial 5S rRNA has a much broader taxonomic distribution and a much larger structural variability than previously thought. The newly developed CMs will be made available via the Rfam database and the MFannot organelle genome annotator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. A combinatorial and probabilistic study of initial and end heights of descents in samples of geometrically distributed random variables and in permutations

    Directory of Open Access Journals (Sweden)

    Helmut Prodinger

    2007-01-01

    Full Text Available In words generated by independent geometrically distributed random variables, we study the l-th descent, which is, roughly speaking, the l-th occurrence of a neighbouring pair ab with a>b. The value a is called the initial height, and b the end height. We study these two random variables (and some similar ones) by combinatorial and probabilistic tools. We find in all instances a generating function Ψ(v,u), where the coefficient of v^j u^i refers to the j-th descent (ascent), and i to the initial (end) height. From this, various conclusions can be drawn, in particular expected values. In the probabilistic part, a Markov chain model is used, which allows us to obtain explicit expressions for the heights of the second descent. In principle, one could go further, but the complexity of the results forbids it. This is extended to permutations of a large number of elements. Methods from q-analysis are used to simplify the expressions. This is the reason that we confine ourselves to the geometric distribution only. For general discrete distributions, no such tools are available.
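
    The objects involved are easy to simulate. In a word of geometric random variables, a descent is an adjacent pair with a > b; a is its initial height and b its end height. A short Python simulation (ours) estimating the mean heights of the l-th descent:

```python
import numpy as np

rng = np.random.default_rng(0)

def lth_descent(word, l):
    """(initial, end) heights of the l-th adjacent pair with a > b, or None."""
    count = 0
    for a, b in zip(word, word[1:]):
        if a > b:
            count += 1
            if count == l:
                return a, b
    return None

# words of geometric(p) letters; estimate heights of the 2nd descent
p, n, trials, l = 0.5, 50, 20000, 2
pairs = [h for _ in range(trials)
         if (h := lth_descent(rng.geometric(p, size=n), l)) is not None]
init = np.mean([a for a, _ in pairs])
end = np.mean([b for _, b in pairs])
print(f"2nd descent: mean initial height {init:.3f}, mean end height {end:.3f}")
```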

  6. Method for mapping population-based case-control studies: an application using generalized additive models

    Directory of Open Access Journals (Sweden)

    Aschengrau Ann

    2006-06-01

    Full Text Available Abstract Background Mapping spatial distributions of disease occurrence and risk can serve as a useful tool for identifying exposures of public health concern. Disease registry data are often mapped by town or county of diagnosis and contain limited data on covariates. These maps often possess poor spatial resolution, the potential for spatial confounding, and the inability to consider latency. Population-based case-control studies can provide detailed information on residential history and covariates. Results Generalized additive models (GAMs) provide a useful framework for mapping point-based epidemiologic data. Smoothing on location while controlling for covariates produces adjusted maps. We generate maps of odds ratios using the entire study area as a reference. We smooth using a locally weighted regression smoother (loess), a method that combines the advantages of nearest neighbor and kernel methods. We choose an optimal degree of smoothing by minimizing Akaike's Information Criterion. We use a deviance-based test to assess the overall importance of location in the model and pointwise permutation tests to locate regions of significantly increased or decreased risk. The method is illustrated with synthetic data and data from a population-based case-control study, using S-Plus and ArcView software. Conclusion Our goal is to develop practical methods for mapping population-based case-control and cohort studies. The method described here performs well for our synthetic data, reproducing important features of the data and adequately controlling the covariate. When applied to the population-based case-control data set, the method suggests spatial confounding and identifies statistically significant areas of increased and decreased odds ratios.

  7. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Directory of Open Access Journals (Sweden)

    Jeffrey A. Walker

    2016-10-01

    Full Text Available Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS

  8. A bootstrap based space-time surveillance model with an application to crime occurrences

    Science.gov (United States)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population at risk data to generate expected values, have resulting hotspots bounded by administrative area units and are of limited use for near-real time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population at risk data, (1) is free from administrative area restriction, (2) enables more frequent surveillance for continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, in the year 2000.

  9. Permutation Tests of Hierarchical Cluster Analyses of Carrion Communities and Their Potential Use in Forensic Entomology.

    Science.gov (United States)

    van der Ham, Joris L

    2016-05-19

    Forensic entomologists can use carrion communities' ecological succession data to estimate the postmortem interval (PMI). Permutation tests of hierarchical cluster analyses of these data provide a conceptual method to estimate part of the PMI, the post-colonization interval (post-CI). This multivariate approach produces a baseline of statistically distinct clusters that reflect changes in the carrion community composition during the decomposition process. Carrion community samples of unknown post-CIs are compared with these baseline clusters to estimate the post-CI. In this short communication, I use data from previously published studies to demonstrate the conceptual feasibility of this multivariate approach. Analyses of these data produce series of significantly distinct clusters, which represent carrion communities during 1- to 20-day periods of the decomposition process. For 33 carrion community samples, collected over an 11-day period, this approach correctly estimated the post-CI within an average range of 3.1 days. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Turn-Based War Chess Model and Its Search Algorithm per Turn

    Directory of Open Access Journals (Sweden)

    Hai Nan

    2016-01-01

    Full Text Available War chess gaming has so far received insufficient attention but is a significant component of turn-based strategy games (TBS) and is studied in this paper. First, a common game model is proposed through various existing war chess types. Based on the model, we propose a theoretical framework involving combinatorial optimization on the one hand and game-tree search on the other. We also discuss a key problem, namely, that the branching factor of each turn in the game tree is huge. Then, we propose two algorithms for searching within one turn to solve the problem: (1) enumeration by order; (2) enumeration by recursion. The main difference between the two is the permutation method used: the former uses the dictionary-sequence method, while the latter uses the recursive permutation method. Finally, we prove that both algorithms are optimal, and we analyze the difference between their efficiencies. An important factor is the total number of expansions each unit makes over its reachable positions. In terms of this factor, enumeration by recursion is better than enumeration by order in all situations.
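
    The two enumeration strategies correspond to two standard ways of generating permutations of the units' move order; a Python sketch of the distinction (ours; the game-specific move evaluation and expansion logic are omitted):

```python
from itertools import permutations

def enumerate_by_order(units):
    """Dictionary-sequence (lexicographic) enumeration of unit move orders."""
    return list(permutations(units))

def enumerate_by_recursion(units):
    """Recursive enumeration: fix each unit as the first mover, permute the rest."""
    if len(units) <= 1:
        return [tuple(units)]
    result = []
    for i, u in enumerate(units):
        for rest in enumerate_by_recursion(units[:i] + units[i + 1:]):
            result.append((u,) + rest)
    return result

units = ["knight", "archer", "mage"]
# both strategies cover the same n! move orders
assert sorted(enumerate_by_order(units)) == sorted(enumerate_by_recursion(units))
print(enumerate_by_recursion(units))
```

    In a real game-tree search, each partial order would be expanded and pruned as it is generated rather than materialized in a list; the recursive form makes that interleaving natural.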

  11. Quantum tests for the linearity and permutation invariance of Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Hillery, Mark [Department of Physics, Hunter College of the City University of New York, 695 Park Avenue, New York, New York 10021 (United States); Andersson, Erika [SUPA, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom)

    2011-12-15

    The goal in function property testing is to determine whether a black-box Boolean function has a certain property or is ε-far from having that property. The performance of the algorithm is judged by how many calls need to be made to the black box in order to determine, with high probability, which of the two alternatives is the case. Here we present two quantum algorithms, the first to determine whether the function is linear and the second to determine whether it is symmetric (invariant under permutations of the arguments). Both require of order ε^{-2/3} calls to the oracle, which is better than known classical algorithms. In addition, in the case of linearity testing, if the function is linear, the quantum algorithm identifies which linear function it is. The linearity test combines the Bernstein-Vazirani algorithm and amplitude amplification, while the test to determine whether a function is symmetric uses projective measurements and amplitude amplification.

  12. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression.

    Science.gov (United States)

    Walker, Jeffrey A

    2016-01-01

    Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS distributions suggest that the GLS results in

  13. Index to Nuclear Safety. A technical progress review by chronology, permuted title, and author. Vol. 11, No. 1 through Vol. 15, No. 6

    International Nuclear Information System (INIS)

    Cottrell, W.B.; Klein, A.

    1975-04-01

    This issue of the Index to Nuclear Safety covers only articles included in Nuclear Safety, Vol. 11, No. 1, through Vol. 15, No. 6. This index is presented in three sections as follows: Chronological List of Articles by Volume; Permuted Title (KWIC) Index; and Author Index. (U.S.)

  14. Electronic transport on the spatial structure of the protein: Three-dimensional lattice model

    International Nuclear Information System (INIS)

    Sarmento, R.G.; Frazão, N.F.; Macedo-Filho, A.

    2017-01-01

    Highlights: • The electronic transport on the structure of a three-dimensional lattice model of the protein is studied. • The signature of the current–voltage characteristics is directly affected by permutations of the weak bonds in the structure. • The semiconducting behavior of the proteins suggests a potential application in the development of novel biosensors. - Abstract: We report a numerical analysis of the electronic transport in a protein chain consisting of thirty-six standard amino acids. The protein chains studied have a three-dimensional structure, which can present itself in three distinct conformations; the difference consists in the presence or absence of thirteen hydrogen bonds. Our theoretical method uses an electronic tight-binding Hamiltonian model, appropriate to describe the protein segments modeled by the amino acid chain. We note that the presence of, and permutations between, weak bonds in the structure of proteins are directly related to the signature of the current–voltage characteristics. Furthermore, the electronic transport depends on the effect of temperature. In addition, we have found semiconducting behavior in the models investigated, which suggests a potential application in the development of novel biosensors for molecular diagnostics.

  15. Electronic transport on the spatial structure of the protein: Three-dimensional lattice model

    Energy Technology Data Exchange (ETDEWEB)

    Sarmento, R.G. [Departamento de Ciências Biológicas, Universidade Federal do Piauí, 64800-000 Floriano, PI (Brazil); Frazão, N.F. [Centro de Educação e Saúde, Universidade Federal de Campina Grande, 581750-000 Cuité, PB (Brazil); Macedo-Filho, A., E-mail: amfilho@gmail.com [Campus Prof. Antonio Geovanne Alves de Sousa, Universidade Estadual do Piauí, 64260-000 Piripiri, PI (Brazil)

    2017-01-30

    Highlights: • The electronic transport on the structure of a three-dimensional lattice model of the protein is studied. • The signature of the current–voltage characteristics is directly affected by permutations of the weak bonds in the structure. • The semiconducting behavior of the proteins suggests a potential application in the development of novel biosensors. - Abstract: We report a numerical analysis of the electronic transport in a protein chain consisting of thirty-six standard amino acids. The protein chains studied have a three-dimensional structure, which can present itself in three distinct conformations; the difference consists in the presence or absence of thirteen hydrogen bonds. Our theoretical method uses an electronic tight-binding Hamiltonian model, appropriate to describe the protein segments modeled by the amino acid chain. We note that the presence of, and permutations between, weak bonds in the structure of proteins are directly related to the signature of the current–voltage characteristics. Furthermore, the electronic transport depends on the effect of temperature. In addition, we have found semiconducting behavior in the models investigated, which suggests a potential application in the development of novel biosensors for molecular diagnostics.

  16. The behaviour of random forest permutation-based variable importance measures under predictor correlation.

    Science.gov (United States)

    Nicodemus, Kristin K; Malley, James D; Strobl, Carolin; Ziegler, Andreas

    2010-02-27

    Random forests (RF) have been increasingly used in applications such as genome-wide association and microarray studies where predictor correlation is frequently observed. Recent works on permutation-based variable importance measures (VIMs) used in RF have come to apparently contradictory conclusions. We present an extended simulation study to synthesize results. When both predictor correlation was present and predictors were associated with the outcome (HA), the unconditional RF VIM attributed a higher share of importance to correlated predictors, while under the null hypothesis that no predictors are associated with the outcome (H0) the unconditional RF VIM was unbiased. Conditional VIMs showed a decrease in VIM values for correlated predictors versus the unconditional VIMs under HA and were unbiased under H0. Scaled VIMs were clearly biased under both HA and H0. Unconditional unscaled VIMs are a computationally tractable choice for large datasets and are unbiased under the null hypothesis. Whether the observed increased VIMs for correlated predictors may be considered a "bias" (because they do not directly reflect the coefficients in the generating model) or a beneficial attribute of these VIMs depends on the application. For example, in genetic association studies, where correlation between markers may help to localize the functionally relevant variant, the increased importance of correlated predictors may be an advantage. On the other hand, we show examples where this increased importance may result in spurious signals.
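
    The unconditional permutation VIM under discussion amounts to permuting one predictor at a time and recording the drop in predictive accuracy. A hedged scikit-learn sketch with correlated predictors (ours; the cited simulations used R implementations):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# outcome driven by x0 only; x1 is correlated with x0, x2..x4 are pure noise
n = 1000
x0 = rng.normal(size=n)
x1 = x0 + 0.3 * rng.normal(size=n)
X = np.column_stack([x0, x1, rng.normal(size=(n, 3))])
y = (x0 + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
base = rf.score(X_te, y_te)

for j in range(X.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break x_j's link to the outcome
    print(f"x{j}: permutation importance = {base - rf.score(X_perm, y_te):.3f}")
```

    In line with the abstract, the correlated but non-causal x1 tends to receive non-trivial importance here, illustrating the behavior the study characterizes.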

  17. Permutation entropy of finite-length white-noise time series.

    Science.gov (United States)

    Little, Douglas J; Kane, Deb M

    2016-08-01

    Permutation entropy (PE) is commonly used to discriminate complex structure from white noise in a time series. While the PE of white noise is well understood in the long time-series limit, analysis in the general case is currently lacking. Here the expectation value and variance of white-noise PE are derived as functions of the number of ordinal pattern trials, N, and the embedding dimension, D. It is demonstrated that the probability distribution of the white-noise PE converges to a χ² distribution with D!-1 degrees of freedom as N becomes large. It is further demonstrated that the PE variance for an arbitrary time series can be estimated as the variance of a related metric, the Kullback-Leibler entropy (KLE), allowing the qualitative N≫D! condition to be recast as a quantitative estimate of the N required to achieve a desired PE calculation precision. Application of this theory to statistical inference is demonstrated in the case of an experimentally obtained noise series, where the probability of obtaining the observed PE value was calculated assuming a white-noise time series. Standard statistical inference can be used to draw conclusions whether the white-noise null hypothesis can be accepted or rejected. This methodology can be applied to other null hypotheses, such as discriminating whether two time series are generated from different complex system states.
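
    That χ² limit yields a direct test of the white-noise null. Under our reading of the result, with K = D! patterns and N pattern trials, the likelihood-ratio statistic 2N(ln K − H) is asymptotically χ² with K − 1 degrees of freedom; a sketch that treats the pattern counts as multinomial (overlapping windows make this approximate):

```python
import math
import random
from collections import Counter
from scipy.stats import chi2

def white_noise_pe_test(x, D=3):
    """Test H0: all D! ordinal patterns equiprobable (white noise)."""
    patterns = Counter(
        tuple(sorted(range(D), key=lambda k: x[i + k]))
        for i in range(len(x) - D + 1)
    )
    N = sum(patterns.values())
    K = math.factorial(D)
    H = -sum((c / N) * math.log(c / N) for c in patterns.values())  # nats
    G = 2 * N * (math.log(K) - H)  # likelihood-ratio statistic
    return G, chi2.sf(G, K - 1)

G, p = white_noise_pe_test([random.gauss(0, 1) for _ in range(5000)])
print(f"G = {G:.2f}, p = {p:.3f}")  # large p: cannot reject the white-noise null
```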

  18. Probabilistic methods in combinatorial analysis

    CERN Document Server

    Sachkov, Vladimir N

    2014-01-01

    This 1997 work explores the role of probabilistic methods for solving combinatorial problems. These methods not only provide the means of efficiently using such notions as characteristic and generating functions, the moment method and so on but also let us use the powerful technique of limit theorems. The basic objects under investigation are nonnegative matrices, partitions and mappings of finite sets, with special emphasis on permutations and graphs, and equivalence classes specified on sequences of finite length consisting of elements of partially ordered sets; these specify the probabilist

  19. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  20. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper, namely multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta based cross-sample entropy and permutation based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals a clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences among stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions, respectively, that is, Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
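
    The pipeline, a dissimilarity matrix fed into multidimensional scaling, can be sketched with any entropy-based dissimilarity plugged in. Here a toy stand-in (an L1 distance between ordinal-pattern distributions, not the paper's modified cross-sample entropies) is embedded with scikit-learn's metric MDS:

```python
import numpy as np
from itertools import permutations
from sklearn.manifold import MDS

def ordinal_distribution(x, D=3):
    """Relative frequencies of the D! ordinal patterns of a series."""
    keys = {p: i for i, p in enumerate(permutations(range(D)))}
    counts = np.zeros(len(keys))
    for i in range(len(x) - D + 1):
        counts[keys[tuple(np.argsort(x[i:i + D]))]] += 1
    return counts / counts.sum()

# six random walks and six white-noise series as stand-ins for stock indices
series = [np.random.default_rng(s).normal(size=2000).cumsum() for s in range(6)] \
       + [np.random.default_rng(s).normal(size=2000) for s in range(6)]
dists = [ordinal_distribution(x) for x in series]
dmat = np.array([[np.abs(a - b).sum() for b in dists] for a in dists])

coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(dmat)
print(coords.round(2))  # the walks and the noise separate into two clusters
```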

  1. A simple method for combining genetic mapping data from multiple crosses and experimental designs.

    Directory of Open Access Journals (Sweden)

    Jeremy L Peirce

    Full Text Available BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results for multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population, we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved 51% for Hipp1a and 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine-mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
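
    Fisher's combination test, used above to merge position-by-position probabilities, combines k independent P-values through X = −2 Σ ln p_i, which is χ² with 2k degrees of freedom under the joint null. A minimal sketch (ours):

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvalues):
    """Fisher's method: combine independent P-values for the same hypothesis.

    X = -2 * sum(log p_i) ~ chi^2 with 2k degrees of freedom under the joint null.
    """
    p = np.asarray(pvalues, dtype=float)
    stat = -2.0 * np.log(p).sum()
    return stat, chi2.sf(stat, df=2 * len(p))

# e.g., position-wise P-values for one locus from two mapping populations
print(fisher_combine([0.04, 0.11]))
```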

  2. A two-dimensional iterative panel method and boundary layer model for bio-inspired multi-body wings

    Science.gov (United States)

    Blower, Christopher J.; Dhruv, Akash; Wickenheiser, Adam M.

    2014-03-01

    The increased use of Unmanned Aerial Vehicles (UAVs) has created a continuous demand for improved flight capabilities and range of use. During the last decade, engineers have turned to bio-inspiration for new and innovative flow control methods for gust alleviation, maneuverability, and stability improvement using morphing aircraft wings. The bio-inspired wing design considered in this study mimics the flow manipulation techniques performed by birds to extend the operating envelope of UAVs through the installation of an array of feather-like panels across the airfoil's upper and lower surfaces while replacing the trailing edge flap. Each flap has the ability to deflect into both the airfoil and the inbound airflow using hinge points with a single degree of freedom, situated at 20%, 40%, 60% and 80% of the chord. The installation of the surface flaps offers configurations that enable advantageous maneuvers while alleviating gust disturbances. Due to the number of possible permutations available for the flap configurations, an iterative constant-strength doublet/source panel method has been developed with an integrated boundary layer model to calculate the pressure distribution and viscous drag over the wing's surface. As a result, the lift, drag and moment coefficients for each airfoil configuration can be calculated. The flight coefficients of this numerical method are validated using experimental data from a low-speed suction wind tunnel operating at a Reynolds number of 300,000. This method enables the aerodynamic assessment of a morphing wing profile to be performed accurately and efficiently in comparison to Computational Fluid Dynamics methods and experiments as discussed herein.

  3. Semi-Supervised Generation with Cluster-aware Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Fraccaro, Marco; Winther, Ole

    2017-01-01

    Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning. Many real-life data sets contain a small amount of labelled data points, which are typically disregarded when training generative models. We propose the Cluster-aware Generative Model, which achieves a log-likelihood of −79.38 nats on permutation-invariant MNIST while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised, and still improves the log-likelihood performance with respect to related methods.

  4. Lower Bounds in the Asymmetric External Memory Model

    DEFF Research Database (Denmark)

    Jacob, Riko; Sitchinava, Nodari

    2017-01-01

    Motivated by the asymmetric read and write costs of emerging non-volatile memory technologies, we study lower bounds for the problems of sorting, permuting and multiplying a sparse matrix by a dense vector in the asymmetric external memory model (AEM). Given an AEM with internal (symmetric) memory of size M, transfers between symmetric and asymmetric memory in blocks of size B, and a ratio ω between write and read costs, we show an Ω(min{N, (ωN/B) log_{ωM/B}(N/B)}) lower bound for the cost of permuting N input elements. This lower bound also applies to the problem of sorting N elements. This proves

  5. Index to Nuclear Safety: a technical progress review by chronology, permuted title, and author. Vol. 11(1)--Vol. 18(6)

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, W.B.; Klein, A.

    1978-04-11

    This index to Nuclear Safety covers articles published in Nuclear Safety, Vol. 11, No. 1 (January-February 1970), through Vol. 18, No. 6 (November-December 1977). It is divided into three sections: a chronological list of articles (including abstracts) followed by a permuted-title (KWIC) index and an author index. Nuclear Safety, a bimonthly technical progress review prepared by the Nuclear Safety Information Center (NSIC), covers all safety aspects of nuclear power reactors and associated facilities. Over 450 technical articles published in Nuclear Safety in the last eight years are listed in this index.

  6. A new Nawaz-Enscore-Ham-based heuristic for permutation flow-shop problems with bicriteria of makespan and machine idle time

    Science.gov (United States)

    Liu, Weibo; Jin, Yan; Price, Mark

    2016-10-01

    A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving the permutation flow-shop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness and kurtosis of processing times, in order to fully describe their distribution. A new tie-breaking rule is also introduced to achieve effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate the better solution quality of the proposed algorithm compared to existing benchmark heuristics.
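
    For context, the classic NEH procedure underneath: order jobs by decreasing total processing time, then insert each job at the position in the partial sequence that minimizes makespan. A plain-Python sketch of that baseline (the article's new priority and tie-breaking rules are not reproduced here):

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine; p[j][m] = time of job j on machine m."""
    m = len(p[0])
    completion = [0.0] * m  # completion[k]: last finished job's time on machine k
    for j in seq:
        completion[0] += p[j][0]
        for k in range(1, m):
            completion[k] = max(completion[k], completion[k - 1]) + p[j][k]
    return completion[-1]

def neh(p):
    """Classic NEH heuristic for the permutation flow shop (minimizing makespan)."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))  # decreasing total work
    seq = [jobs[0]]
    for j in jobs[1:]:
        # try every insertion position and keep the best partial sequence
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq

p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]  # 4 jobs x 3 machines
best = neh(p)
print(best, makespan(best, p))
```

    Tie-breaking matters because `min` keeps only the first best insertion; the article's contribution lies in how such ties and the initial ordering are handled.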

  7. Index to Nuclear Safety: a technical progress review by chronology, permuted title, and author. Vol. 11(1)--Vol. 18(6)

    International Nuclear Information System (INIS)

    Cottrell, W.B.; Klein, A.

    1978-01-01

    This index to Nuclear Safety covers articles published in Nuclear Safety, Vol. 11, No. 1 (January-February 1970), through Vol. 18, No. 6 (November-December 1977). It is divided into three sections: a chronological list of articles (including abstracts) followed by a permuted-title (KWIC) index and an author index. Nuclear Safety, a bimonthly technical progress review prepared by the Nuclear Safety Information Center (NSIC), covers all safety aspects of nuclear power reactors and associated facilities. Over 450 technical articles published in Nuclear Safety in the last eight years are listed in this index

  8. Statistical Methods for Comparative Phenomics Using High-Throughput Phenotype Microarrays

    KAUST Repository

    Sturino, Joseph; Zorych, Ivan; Mallick, Bani; Pokusaeva, Karina; Chang, Ying-Ying; Carroll, Raymond J; Bliznuyk, Nikolay

    2010-01-01

    We propose statistical methods for comparing phenomics data generated by the Biolog Phenotype Microarray (PM) platform for high-throughput phenotyping. Instead of the routinely used visual inspection of data with no sound inferential basis, we develop two approaches. The first approach is based on quantifying the distance between mean or median curves from two treatments and then applying a permutation test; we also consider a permutation test applied to areas under mean curves. The second approach employs functional principal component analysis.

  9. Tunneling and Speedup in Quantum Optimization for Permutation-Symmetric Problems

    Directory of Open Access Journals (Sweden)

    Siddharth Muthukrishnan

    2016-07-01

    Full Text Available Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather the semiclassical potential arising from the spin-coherent path-integral formalism. We then provide an example where the shape of the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided-level crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide an example where it provides an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin-vector dynamics, is at least as efficient as diabatic QA. Finally, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin-vector dynamics.

  10. The Multimorbidity Cluster Analysis Tool: Identifying Combinations and Permutations of Multiple Chronic Diseases Using a Record-Level Computational Analysis

    Directory of Open Access Journals (Sweden)

    Kathryn Nicholson

    2017-12-01

    Full Text Available Introduction: Multimorbidity, or the co-occurrence of multiple chronic health conditions within an individual, is an increasingly dominant presence and burden in modern health care systems. To fully capture its complexity, further research is needed to uncover the patterns and consequences of these co-occurring health states. As such, the Multimorbidity Cluster Analysis Tool and the accompanying Multimorbidity Cluster Analysis Toolkit have been created to allow researchers to identify distinct clusters that exist within a sample of participants or patients living with multimorbidity. Development: The Tool and Toolkit were developed at Western University in London, Ontario, Canada. This open-access computational program (JAVA code and executable file) was developed and tested to support an analysis of thousands of individual records and up to 100 disease diagnoses or categories. Application: The computational program can be adapted to the methodological elements of a research project, including type of data, type of chronic disease reporting, measurement of multimorbidity, sample size and research setting. The computational program will identify all existing, and mutually exclusive, combinations and permutations within the dataset. An application of this computational program is provided as an example, in which more than 75,000 individual records and 20 chronic disease categories resulted in the detection of 10,411 unique combinations and 24,647 unique permutations among female and male patients. Discussion: The Tool and Toolkit are now available for use by researchers interested in exploring the complexities of multimorbidity. Its careful use, and the comparison between results, will be valuable additions to the nuanced understanding of multimorbidity.
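
    A minimal sketch of the core counting step, assuming each record is a tuple of diagnosis codes (the Tool itself is a JAVA program; this Python fragment only illustrates the combination/permutation distinction):

```python
from collections import Counter

records = [
    ("diabetes", "hypertension"),
    ("hypertension", "diabetes"),          # same combination, different order
    ("asthma", "diabetes", "hypertension"),
]

combinations = Counter(tuple(sorted(r)) for r in records)  # order-free
permutations = Counter(tuple(r) for r in records)          # order-sensitive

print(len(combinations), "unique combinations")   # 2
print(len(permutations), "unique permutations")   # 3
```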

  11. label.switching: An R Package for Dealing with the Label Switching Problem in MCMC Outputs

    Directory of Open Access Journals (Sweden)

    Panagiotis Papastamoulis

    2016-02-01

    Full Text Available Label switching is a well-known and fundamental problem in Bayesian estimation of mixture or hidden Markov models. If the prior distribution of the model parameters is the same for all states, then both the likelihood and the posterior distribution are invariant to permutations of the parameters. This property makes Markov chain Monte Carlo (MCMC) samples simulated from the posterior distribution non-identifiable. In this paper, the label.switching package is introduced. It contains one probabilistic and seven deterministic relabeling algorithms for post-processing a given MCMC sample provided by the user. Each method returns a set of permutations that can be used to reorder the MCMC output. Then, any parametric function of interest can be inferred using the reordered MCMC sample. A set of user-defined permutations is also accepted, allowing the researcher to benchmark new relabeling methods against the available ones.
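
    A minimal sketch of the post-processing step the package automates, assuming a relabeling algorithm has already produced one permutation per MCMC iteration (the arrays below are synthetic):

```python
import numpy as np

n_iter, n_states = 1000, 3
rng = np.random.default_rng(0)
mu = rng.normal(size=(n_iter, n_states))        # raw MCMC draws; labels may switch
perms = np.array([rng.permutation(n_states)     # stand-in for a relabeler's output
                  for _ in range(n_iter)])

# Apply the i-th permutation to the i-th draw to reorder the MCMC output.
mu_relabelled = np.take_along_axis(mu, perms, axis=1)

# Any parametric function of interest is then inferred from the reordered sample.
print(mu_relabelled.mean(axis=0))
```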

  12. Symmetry analysis of many-body wave functions, with applications to the nuclear shell model

    International Nuclear Information System (INIS)

    Novoselsky, A.; Katriel, J.

    1995-01-01

    The weights of the different permutational symmetry components of a nonsymmetry-adapted many-particle wave function are evaluated in terms of the expectation values of the symmetric-group class sums. This facilitates the evaluation of the weights without the construction of a complete set of symmetry adapted functions. Subspace projection operators are introduced, to be used when prior knowledge about the symmetry-species composition of a wave function is available. The permutational weight analysis of a recursively angular-momentum coupled (shell model) wave function is presented as an illustration

  13. Index to Nuclear Safety: a technical progress review by chronology, permuted title, and author, Volume 18 (1) through Volume 22 (6)

    International Nuclear Information System (INIS)

    Cottrell, W.B.; Passiakos, M.

    1982-06-01

    This index to Nuclear Safety covers articles published in Nuclear Safety, Volume 18, Number 1 (January-February 1977) through Volume 22, Number 6 (November-December 1981). The index is divided into three sections: a chronological list of articles (including abstracts), a permuted-title (KWIC) index, and an author index. Nuclear Safety, a bimonthly technical progress review prepared by the Nuclear Safety Information Center, covers all safety aspects of nuclear power reactors and associated facilities. Over 300 technical articles published in Nuclear Safety in the last 5 years are listed in this index

  14. Index to Nuclear Safety: a technical progress review by chronology, permuted title, and author, Volume 18 (1) through Volume 22 (6)

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, W.B.; Passiakos, M.

    1982-06-01

    This index to Nuclear Safety covers articles published in Nuclear Safety, Volume 18, Number 1 (January-February 1977) through Volume 22, Number 6 (November-December 1981). The index is divided into three sections: a chronological list of articles (including abstracts), a permuted-title (KWIC) index, and an author index. Nuclear Safety, a bimonthly technical progress review prepared by the Nuclear Safety Information Center, covers all safety aspects of nuclear power reactors and associated facilities. Over 300 technical articles published in Nuclear Safety in the last 5 years are listed in this index.

  15. Robust inference from multiple test statistics via permutations: a better alternative to the single test statistic approach for randomized trials.

    Science.gov (United States)

    Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie

    2013-01-01

    Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
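
    A minimal sketch of the idea for a two-arm comparison with two candidate statistics (difference in means and difference in medians; a real trial would pre-specify its own set, e.g. including the logrank statistic):

```python
import numpy as np

rng = np.random.default_rng(1)

def stats(x, y):
    # Two candidate test statistics; a real trial would pre-specify its own set.
    return np.array([abs(x.mean() - y.mean()), abs(np.median(x) - np.median(y))])

def min_p_test(x, y, n_perm=2000):
    obs, pooled, n = stats(x, y), np.concatenate([x, y]), len(x)
    null = np.array([stats(*np.split(rng.permutation(pooled), [n]))
                     for _ in range(n_perm)])
    p_obs = (null >= obs).mean(axis=0)                 # per-statistic p-values
    ranks = null.argsort(axis=0).argsort(axis=0)       # 0 = smallest statistic
    p_null = 1.0 - ranks / n_perm                      # p-values of permuted stats
    return (p_null.min(axis=1) <= p_obs.min()).mean()  # p-value of min-p statistic

x, y = rng.normal(0.0, 1.0, 30), rng.normal(0.8, 1.0, 30)
print(min_p_test(x, y))
```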

  16. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting

    Directory of Open Access Journals (Sweden)

    Ozonoff Al

    2010-07-01

    Full Text Available Abstract Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM), which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power, though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log-odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions The GAM

  17. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting.

    Science.gov (United States)

    Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F

    2010-07-19

    A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log-odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. The GAM permutation testing methods provide a regression

  18. Multiple Memory Structure Bit Reversal Algorithm Based on Recursive Patterns of Bit Reversal Permutation

    Directory of Open Access Journals (Sweden)

    K. K. L. B. Adikaram

    2014-01-01

    Full Text Available With the increasing demand for online/inline data processing, efficient Fourier analysis becomes more and more relevant. Because the bit reversal process requires a considerable share of the processing time of the Fast Fourier Transform (FFT) algorithm, it is vital to optimize the bit reversal algorithm (BRA). This paper introduces an efficient BRA with multiple memory structures. In 2009, Elster showed the relation between the first and the second halves of the bit reversal permutation (BRP) and stated that, if implemented, it may have a serious impact on the cache performance of the computer. We found exceptions, especially when the said index mapping was implemented with multiple one-dimensional memory structures instead of a multidimensional or a single one-dimensional memory structure. We also found a new index mapping, even after the recursive splitting of the BRP into equal-sized slots. The four-array and four-vector versions of the BRA with the new index mapping achieved 34% and 16% performance improvements, respectively, relative to similar versions of Elster's linear BRA, which uses a single one-dimensional memory structure.
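
    For reference, a minimal sketch of the bit-reversal permutation itself (the article's contribution, the multi-array index mapping, is not reproduced here):

```python
def bit_reverse_permutation(n_bits):
    """Return the bit-reversal permutation of the indices 0..2**n_bits - 1."""
    n = 1 << n_bits
    perm = [0] * n
    for i in range(n):
        rev, x = 0, i
        for _ in range(n_bits):          # reverse the n_bits bits of i
            rev = (rev << 1) | (x & 1)
            x >>= 1
        perm[i] = rev
    return perm

print(bit_reverse_permutation(3))  # [0, 4, 2, 6, 1, 5, 3, 7]
```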

  19. Monitoring the informational efficiency of European corporate bond markets with dynamical permutation min-entropy

    Science.gov (United States)

    Zunino, Luciano; Bariviera, Aurelio F.; Guercio, M. Belén; Martinez, Lisana B.; Rosso, Osvaldo A.

    2016-08-01

    In this paper the permutation min-entropy has been implemented to unveil the presence of temporal structures in the daily values of European corporate bond indices from April 2001 to August 2015. More precisely, the evolution of the informational efficiency of the prices of fifteen sectorial indices has been carefully studied by estimating this information-theory-derived symbolic tool over a sliding time window. Such a dynamical analysis makes it possible to draw relevant conclusions about the effect that the 2008 credit crisis has had on the different European corporate bond sectors. It is found that the informational efficiency of some sectors, namely banks, financial services, insurance, and basic resources, was strongly reduced by the financial crisis, whereas another set of sectors, comprising chemicals, automobiles, media, energy, construction, industrial goods & services, technology, and telecommunications, suffered only a transitory loss of efficiency. Last but not least, the food & beverage, healthcare, and utilities sectors show behavior close to a random walk over practically the whole period of analysis, confirming a remarkable immunity against the 2008 financial crisis.
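
    A minimal sketch of the underlying Bandt-Pompe construction: map each window of the series to the ordinal pattern of its values, then compute entropies of the pattern distribution (the min-entropy variant replaces the Shannon sum with -log of the largest pattern probability):

```python
import math
from collections import Counter

def permutation_entropies(series, order=3, delay=1):
    """Normalised Shannon and min permutation entropies, both in [0, 1]."""
    patterns = Counter()
    for i in range(len(series) - (order - 1) * delay):
        window = series[i:i + order * delay:delay]
        # Ordinal pattern: the ranking of the values inside the window.
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    probs = [c / total for c in patterns.values()]
    norm = math.log(math.factorial(order))
    shannon = -sum(p * math.log(p) for p in probs) / norm
    min_entropy = -math.log(max(probs)) / norm
    return shannon, min_entropy

# Estimating this over a sliding window of, e.g., daily index values gives the
# kind of dynamical efficiency measure used in the study.
print(permutation_entropies([4, 7, 9, 10, 6, 11, 3], order=3))
```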

  20. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design-based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
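
    A minimal sketch of a design-based Monte Carlo randomization test under permuted-block randomization (raw outcomes are used here; in the article's regression setting the statistic would be computed from model residuals):

```python
import numpy as np

rng = np.random.default_rng(2)

def permuted_block_assignment(n, block_size=4):
    # Balanced permuted blocks: each block holds block_size/2 of each arm.
    blocks = [rng.permutation([0] * (block_size // 2) + [1] * (block_size // 2))
              for _ in range(-(-n // block_size))]
    return np.concatenate(blocks)[:n]

def randomization_test(outcome, treatment, n_monte_carlo=5000):
    obs = abs(outcome[treatment == 1].mean() - outcome[treatment == 0].mean())
    hits = 0
    for _ in range(n_monte_carlo):
        # Re-generate assignments under the actual randomization procedure.
        t = permuted_block_assignment(len(outcome))
        hits += abs(outcome[t == 1].mean() - outcome[t == 0].mean()) >= obs
    return hits / n_monte_carlo

y = rng.normal(size=40)
t = permuted_block_assignment(40)
print(randomization_test(y, t))
```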

  1. Index to Nuclear Safety: a technical progress review by chronology, permuted title, and author, Volume 11(1) through Volume 20(6)

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, W B; Passiakos, M

    1980-06-01

    This index to Nuclear Safety, a bimonthly technical progress review, covers articles published in Nuclear Safety, Volume 11, No. 1 (January-February 1970), through Volume 20, No. 6 (November-December 1979). It is divided into three sections: a chronological list of articles (including abstracts) followed by a permuted-title (KWIC) index and an author index. Nuclear Safety, prepared by the Nuclear Safety Information Center (NSIC), covers all safety aspects of nuclear power reactors and associated facilities. Over 600 technical articles published in Nuclear Safety in the last ten years are listed in this index.

  2. D-term Spectroscopy in Realistic Heterotic-String Models

    CERN Document Server

    Dedes, Athanasios

    2000-01-01

    The emergence of free fermionic string models with solely the MSSM charged spectrum below the string scale provides further evidence for the assertion that the true string vacuum is connected to the Z_2 x Z_2 orbifold in the vicinity of the free fermionic point in the Narain moduli space. An important property of the Z_2 x Z_2 orbifold is the cyclic permutation symmetry between the three twisted sectors. If preserved in three-generation models, the cyclic permutation symmetry results in a family-universal anomalous U(1)_A, which is instrumental in explaining squark degeneracy, provided that the dominant component of supersymmetry breaking arises from the U(1)_A D-term. Interestingly, the contribution of the family-universal D_A-term to the squark masses may be intra-family non-universal, and may differ from the usual (universal) boundary conditions assumed in the MSSM. We contemplate how D_A-term spectroscopy may be instrumental in studying superstring models irrespective of our ignorance of the details ...

  3. Permutation entropy analysis of heart rate variability for the assessment of cardiovascular autonomic neuropathy in type 1 diabetes mellitus.

    Science.gov (United States)

    Carricarte Naranjo, Claudia; Sanchez-Rodriguez, Lazaro M; Brown Martínez, Marta; Estévez Báez, Mario; Machado García, Andrés

    2017-07-01

    Heart rate variability (HRV) analysis is a relevant tool for the diagnosis of cardiovascular autonomic neuropathy (CAN). To our knowledge, no previous investigation of CAN has assessed the complexity of HRV from an ordinal perspective. Therefore, the aim of this work is to explore the potential of permutation entropy (PE) analysis of HRV complexity for the assessment of CAN. For this purpose, we performed a short-term PE analysis of HRV in healthy subjects and type 1 diabetes mellitus patients, including patients with CAN. Standard HRV indicators were also calculated in the control group. A discriminant analysis was used to select the combination of variables with the best discriminative power between the control and CAN patient groups, as well as for classifying cases. We found that, for some specific temporal scales, PE indicators were significantly lower in CAN patients than in controls. In such cases, there were ordinal patterns with high probabilities of occurrence, while others were hardly found. We posit this behavior occurs due to a decrease of HRV complexity in the diseased system. Discriminant functions based on PE measures or on probabilities of occurrence of ordinal patterns provided an average of 75% and 96% classification accuracy, respectively. Correlations between PE and HRV measures were found to depend only on the temporal scale, regardless of pattern length. PE analysis at some specific temporal scales seems to provide information additional to that obtained with traditional HRV methods. We conclude that PE analysis of HRV is a promising method for the assessment of CAN. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Characterisation of the Effects of Sleep Deprivation on the Electroencephalogram Using Permutation Lempel–Ziv Complexity, a Non-Linear Analysis Tool

    Directory of Open Access Journals (Sweden)

    Pinar Deniz Tosun

    2017-12-01

    Full Text Available Specific patterns of brain activity during sleep and waking are recorded in the electroencephalogram (EEG). Time-frequency analysis methods have been widely used to analyse the EEG and have identified characteristic oscillations for each vigilance state (VS), i.e., wakefulness, rapid-eye-movement (REM) and non-rapid-eye-movement (NREM) sleep. However, other aspects such as changes of patterns associated with brain dynamics may not be captured unless a non-linear analysis method is used. In this pilot study, Permutation Lempel–Ziv complexity (PLZC), a novel symbolic dynamics analysis method, was used to characterise the changes in the EEG in sleep and wakefulness during baseline and recovery from sleep deprivation (SD). The results obtained with PLZC were contrasted with a related non-linear method, Lempel–Ziv complexity (LZC). Both measure the emergence of new patterns. However, LZC is dependent on the absolute amplitude of the EEG, while PLZC is only dependent on the relative amplitude due to the symbolisation procedure and is thus more resistant to noise. We showed that PLZC discriminates activated brain states associated with wakefulness and REM sleep, which both displayed higher complexity, compared to NREM sleep. Additionally, significantly lower PLZC values were measured in NREM sleep during the recovery period following SD compared to baseline, suggesting a reduced emergence of new activity patterns in the EEG. These findings were validated using PLZC on surrogate data. By contrast, LZC merely reflected changes in the spectral composition of the EEG. Overall, this study implies that PLZC is a robust non-linear complexity measure that is not dependent on amplitude variations in the signal and that may be useful for further assessing EEG alterations induced by environmental or pharmacological manipulations.
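
    A minimal sketch of the idea: symbolise the signal by its ordinal patterns, which makes the measure independent of absolute amplitude, and then count novel phrases in the symbol stream. A simplified dictionary (LZ78-style) parse is used below for brevity; published PLZC uses the LZ76 parse, so treat this as illustrative:

```python
import numpy as np

def ordinal_symbols(x, order=3):
    # Each length-`order` window is replaced by its ordinal (rank) pattern.
    return [tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)]

def phrase_complexity(seq):
    """Number of distinct phrases in a left-to-right dictionary parse."""
    phrases, i = set(), 0
    while i < len(seq):
        j = i + 1
        while j <= len(seq) and tuple(seq[i:j]) in phrases:
            j += 1                        # extend until the phrase is novel
        phrases.add(tuple(seq[i:j]))
        i = j
    return len(phrases)

rng = np.random.default_rng(0)
print(phrase_complexity(ordinal_symbols(rng.normal(size=500))))          # higher: noise
print(phrase_complexity(ordinal_symbols(np.sin(0.3 * np.arange(500)))))  # lower: regular
```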

  5. Variance-based sensitivity indices for stochastic models with correlated inputs

    Energy Technology Data Exchange (ETDEWEB)

    Kala, Zdeněk [Brno University of Technology, Faculty of Civil Engineering, Department of Structural Mechanics Veveří St. 95, ZIP 602 00, Brno (Czech Republic)

    2015-03-10

    The goal of this article is to formulate the principles of one possible strategy for implementing correlation between input random variables, in a form usable for algorithm development and for the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for the input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics.
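
    As context, a minimal sketch of a Monte Carlo pick-freeze estimate of Sobol's first-order index for independent inputs; the article's strategy (averaging over permutations of the decorrelation order for correlated inputs) builds on estimators of this kind. The toy model below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):                      # hypothetical stand-in for a resistance model
    return x[:, 0] + 0.5 * x[:, 1] ** 2

def first_order_sobol(model, dim, i, n=200_000):
    a, b = rng.normal(size=(n, dim)), rng.normal(size=(n, dim))
    ab = a.copy()
    ab[:, i] = b[:, i]             # ab shares only input i with b
    # Cov(f(b), f(ab)) isolates the variance contribution of input i.
    return np.cov(model(b), model(ab))[0, 1] / model(a).var()

# Analytical values for this toy model: S_0 = 1/1.5, S_1 = 0.5/1.5.
print([round(first_order_sobol(model, 2, i), 3) for i in range(2)])
```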

  6. Variance-based sensitivity indices for stochastic models with correlated inputs

    International Nuclear Information System (INIS)

    Kala, Zdeněk

    2015-01-01

    The goal of this article is to formulate the principles of one possible strategy for implementing correlation between input random variables, in a form usable for algorithm development and for the evaluation of Sobol’s sensitivity analysis. With regard to the types of stochastic computational models commonly found in structural mechanics, an algorithm was designed for effective use in conjunction with Monte Carlo methods. Sensitivity indices are evaluated for all possible permutations of the decorrelation procedures for the input parameters. The evaluation of Sobol’s sensitivity coefficients is illustrated on an example in which a computational model was used for the analysis of the resistance of a steel bar in tension with statistically dependent input geometric characteristics

  7. Heuristics methods for the flow shop scheduling problem with separated setup times

    Directory of Open Access Journals (Sweden)

    Marcelo Seido Nagano

    2012-06-01

    Full Text Available This paper deals with the permutation flow shop scheduling problem with separated machine setup times. As a result of an investigation of the problem characteristics, four heuristic methods are proposed that construct the sequencing solution by analogy with the asymmetric traveling salesman problem, with the objective of minimizing makespan. Experimental results show that one of the new heuristic methods provides high-quality solutions in comparison with the methods evaluated from the literature.
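
    For context, a minimal sketch of the makespan evaluation that all such heuristics share (setup times are omitted here for brevity; the article's methods treat them separately):

```python
import numpy as np

def makespan(p, sequence):
    """p[j, m] = processing time of job j on machine m; sequence is a job order."""
    n_machines = p.shape[1]
    c = np.zeros(n_machines)           # completion time of the last job per machine
    for j in sequence:
        c[0] += p[j, 0]
        for m in range(1, n_machines):
            # A job starts on machine m when both the machine is free (c[m])
            # and the job has finished on the previous machine (c[m-1]).
            c[m] = max(c[m], c[m - 1]) + p[j, m]
    return c[-1]

p = np.array([[5, 7, 6], [9, 2, 4], [3, 3, 8]])   # rows: jobs, cols: machines
print(makespan(p, [2, 0, 1]))
```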

  8. Comparison of permutationally invariant polynomials, neural networks, and Gaussian approximation potentials in representing water interactions through many-body expansions

    Science.gov (United States)

    Nguyen, Thuong T.; Székely, Eszter; Imbalzano, Giulio; Behler, Jörg; Csányi, Gábor; Ceriotti, Michele; Götz, Andreas W.; Paesani, Francesco

    2018-06-01

    The accurate representation of multidimensional potential energy surfaces is a necessary requirement for realistic computer simulations of molecular systems. The continued increase in computer power accompanied by advances in correlated electronic structure methods nowadays enables routine calculations of accurate interaction energies for small systems, which can then be used as references for the development of analytical potential energy functions (PEFs) rigorously derived from many-body (MB) expansions. Building on the accuracy of the MB-pol many-body PEF, we investigate here the performance of permutationally invariant polynomials (PIPs), neural networks, and Gaussian approximation potentials (GAPs) in representing water two-body and three-body interaction energies, denoting the resulting potentials PIP-MB-pol, Behler-Parrinello neural network-MB-pol, and GAP-MB-pol, respectively. Our analysis shows that all three analytical representations exhibit similar levels of accuracy in reproducing both two-body and three-body reference data as well as interaction energies of small water clusters obtained from calculations carried out at the coupled cluster level of theory, the current gold standard for chemical accuracy. These results demonstrate the synergy between interatomic potentials formulated in terms of a many-body expansion, such as MB-pol, that are physically sound and transferable, and machine-learning techniques that provide a flexible framework to approximate the short-range interaction energy terms.

  9. Generator coordinate method for triaxial quadrupole collective dynamics in strontium isotopes

    International Nuclear Information System (INIS)

    Bonche, P.; Dobaczewski, J.; Flocard, H.; Heenen, P.H.

    1991-01-01

    We discuss the algebraic structure of the generator coordinate method for triaxial quadrupole collective motion. The collective solutions are classified according to the representations of the permutation group of the intrinsic axes. Our method amounts to an approximate angular momentum projection. We apply it to a study of the spherical to deformed shape transition in light even strontium isotopes 78-88 Sr. We find that triaxial configurations play a significant role in explaining the structure of the transitional isotopes 80-82 Sr

  10. Mirror bootstrap method for testing hypotheses of one mean

    OpenAIRE

    Varvak, Anna

    2012-01-01

    The general philosophy for bootstrap or permutation methods for testing hypotheses is to simulate the variation of the test statistic by generating the sampling distribution which assumes both that the null hypothesis is true, and that the data in the sample is somehow representative of the population. This philosophy is inapplicable for testing hypotheses for a single parameter like the population mean, since the two assumptions are contradictory (e.g., how can we assume both that the mean o...

  11. Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model

    Directory of Open Access Journals (Sweden)

    Mi-Yuan Shan

    2013-01-01

    Full Text Available We propose a combinatorial clustering algorithm of cloud model and quantum-behaved particle swarm optimization (COCQPSO) to solve the stochastic clustering problem. The algorithm employs a novel probability model as well as a permutation-based local search method. The parameters of COCQPSO are set based on a design of experiments. In a comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. By benchmarking the combinatorial clustering algorithm against state-of-the-art algorithms, we show that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of cloud model and quantum-behaved particle swarm optimization (FCOCQPSO) in vague sets (IVSs) is more expressive than other fuzzy sets. Finally, numerical examples show the clustering effectiveness of the COCQPSO and FCOCQPSO clustering algorithms, which is remarkable.

  12. Forest Disturbance Mapping Using Dense Synthetic Landsat/MODIS Time-Series and Permutation-Based Disturbance Index Detection

    Directory of Open Access Journals (Sweden)

    David Frantz

    2016-03-01

    Full Text Available Spatio-temporal information on process-based forest loss is essential for a wide range of applications. Despite remote sensing being the only feasible means of monitoring forest change at regional or greater scales, there is no retrospectively available remote sensor that meets the demand of monitoring forests with the required spatial detail and guaranteed high temporal frequency. As an alternative, we employed the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to produce a dense synthetic time series by fusing Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) nadir Bidirectional Reflectance Distribution Function (BRDF) adjusted reflectance. Forest loss was detected by applying a multi-temporal disturbance detection approach implementing a Disturbance Index-based detection strategy. The detection thresholds were permuted with random numbers drawn from the normal distribution in order to generate a multi-dimensional threshold confidence area. As a result, a more robust parameterization and a spatially more coherent detection could be achieved. (i) The original Landsat time series; (ii) the synthetic time series; and (iii) a combined hybrid approach were used to identify the timing and extent of disturbances. The identified clearings in the Landsat detection were verified using an annual woodland clearing dataset from Queensland’s Statewide Landcover and Trees Study. Disturbances caused by stand-replacing events were successfully identified. The increased temporal resolution of the synthetic time series indicated promising additional information on disturbance timing. The results of the hybrid detection unified the benefits of both approaches, i.e., the spatial quality and general accuracy of the Landsat detection and the increased temporal information of the synthetic time series. Results indicated that a temporal improvement in the detection of the disturbance date could be achieved relative to the irregularly spaced Landsat

  13. Graphical Gaussian models with edge and vertex symmetries

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Lauritzen, Steffen L

    2008-01-01

    We introduce new types of graphical Gaussian models by placing symmetry restrictions on the concentration or correlation matrix. The models can be represented by coloured graphs, where parameters that are associated with edges or vertices of the same colour are restricted to being identical. We study the properties of such models and derive the necessary algorithms for calculating maximum likelihood estimates. We identify conditions for restrictions on the concentration and correlation matrices being equivalent. This is for example the case when symmetries are generated by permutation

  14. Development of a type 2 diabetes risk model from a panel of serum biomarkers from the Inter99 cohort

    DEFF Research Database (Denmark)

    Kolberg, Janice A; Jørgensen, Torben; Gerwien, Robert W

    2009-01-01

    who did not were tested. An ultrasensitive immunoassay was used to measure 58 candidate biomarkers in multiple diabetes-associated pathways, along with six routine clinical variables. Statistical learning methods and permutation testing were used to select the most informative biomarkers. Risk … This model has a bootstrap-estimated area under the curve of 0.76, which is greater than that for A1C, fasting plasma glucose, fasting serum insulin, BMI, sex-adjusted waist circumference, a model using fasting glucose and insulin, and a noninvasive clinical model. CONCLUSIONS: A model incorporating six circulating biomarkers provides an objective and quantitative estimate of the 5-year risk of developing type 2 diabetes, performs better than single risk indicators and a noninvasive clinical model, and provides better stratification than fasting plasma glucose alone.

  15. The resonating group method three cluster approach to the ground state 9 Li nucleus structure

    International Nuclear Information System (INIS)

    Filippov, G.F.; Pozdnyakov, Yu.A.; Terenetsky, K.O.; Verbitsky, V.P.

    1994-01-01

    The three-cluster approach for light atomic nuclei is formulated in the framework of the algebraic version of the resonating group method. Overlap integral and Hamiltonian matrix elements on generating functions are obtained for the 9 Li nucleus. All distinct cluster nucleon permutations of 9 Li permitted by the Pauli principle were taken into account in the calculations. The results obtained can easily be generalised to any three-cluster system up to 12 C. The matrix elements obtained in this work were used in variational calculations of the energetic and geometric characteristics of the 9 Li ground state. It is shown that the 9 Li ground state is not adequately described by the shell-model limit and has a pronounced three-cluster structure. (author). 16 refs., 4 tab., 2 figs

  16. The minimal extension of the Standard Model with S3 symmetry

    International Nuclear Information System (INIS)

    Lee, C.E.; Lin, C.; Yang, Y.W.

    1991-01-01

    In this paper the two Higgs-doublet extension of the standard electroweak model with S 3 symmetry is presented. The flavour-changing neutral Higgs interactions are automatically absent. A permutation symmetry breaking scheme is discussed. The correction to Bjorken's approximation and the CP-violation factor J are given within this scheme

  17. Models and methods in thermoluminescence

    International Nuclear Information System (INIS)

    Furetta, C.

    2005-01-01

    This work presents a lecture covering the principles of the luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model, and mixed first- and second-order kinetics, as well as methods for evaluating the kinetic parameters, such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on the analysis of the glow curve shape. (Author)

  18. Models and methods in thermoluminescence

    Energy Technology Data Exchange (ETDEWEB)

    Furetta, C. [ICN, UNAM, A.P. 70-543, Mexico D.F. (Mexico)

    2005-07-01

    This work presents a lecture covering the principles of the luminescence phenomena and the mathematical treatment of thermoluminescent light emission, including the Randall-Wilkins model, the Garlick-Gibson model, the Adirovitch model, the May-Partridge model, the Braunlich-Scharman model, and mixed first- and second-order kinetics, as well as methods for evaluating the kinetic parameters, such as the initial rise method, the various heating rates method, the isothermal decay method, and methods based on the analysis of the glow curve shape. (Author)

  19. Cervical spine motion in manual versus Jackson table turning methods in a cadaveric global instability model.

    Science.gov (United States)

    DiPaola, Matthew J; DiPaola, Christian P; Conrad, Bryan P; Horodyski, MaryBeth; Del Rossi, Gianluca; Sawers, Andrew; Bloch, David; Rechtine, Glenn R

    2008-06-01

    A study of spine biomechanics in a cadaver model. To quantify motion in multiple axes created by transfer methods from stretcher to operating table in the prone position in a cervical global instability model. Patients with an unstable cervical spine remain at high risk for further secondary injury until their spine is adequately surgically stabilized. Previous studies have revealed that collars have significant, but limited, benefit in preventing cervical motion when manually transferring patients. The literature proposes multiple methods of patient transfer, although no one method has been universally adopted. To date, no study has effectively evaluated the relationship between spine motion and various methods of patient transfer to an operating room table for prone positioning. A global instability was surgically created at C5-6 in 4 fresh cadavers with no history of spine pathology. All cadavers were tested both with and without a rigid cervical collar in the intact and unstable states. Three headrest permutations were evaluated: Mayfield (SM USA Inc), Prone View (Dupaco, Oceanside, CA), and Foam Pillow (OSI, Union City, CA). A trained group of medical staff performed each of 2 transfer methods: the "manual" and the "Jackson table" transfer. The manual technique entailed performing a standard rotation of the supine patient on a stretcher to the prone position on the operating room table with in-line manual cervical stabilization. The "Jackson" technique involved sliding the supine patient to the Jackson table (OSI, Union City, CA) with manual in-line cervical stabilization, securing them to the table, then initiating the table's lock and turn mechanism and rotating them into the prone position. An electromagnetic tracking device captured angular motion between the C5 and C6 vertebral segments. Repeated measures statistical analysis was performed to evaluate the following conditions: collar use (2 levels), headrest (3 levels), and turning technique (2 levels). For all

  20. Coexpression of Human α- and Circularly Permuted β-Globins Yields a Hemoglobin with Normal R State but Modified T State Properties†

    Science.gov (United States)

    Asmundson, Anna L.; Taber, Alexandria M.; van der Walde, Adella; Lin, Danielle H.; Olson, John S.; Anthony-Cahill, Spencer J.

    2009-01-01

    For the first time, a circularly permuted human β-globin (cpβ) has been coexpressed with human α-globin in bacterial cells and shown to associate to form α-cpβ hemoglobin in solution. Flash photolysis studies of α-cpβ show markedly biphasic CO and O2 kinetics with the amplitudes for the fast association phases being dominant due to the presence of large amounts of high-affinity liganded hemoglobin dimers. Extensive dimerization of liganded but not deoxygenated α-cpβ was observed by gel chromatography. The rate constants for O2 and CO binding to the R state forms of α-cpβ are almost identical to those of native HbA (k′R(CO) ≈ 5.0 μM−1 s−1; k′R(O2) ≈ 50 μM−1 s−1), and the rate of O2 dissociation from fully oxygenated α-cpβ is also very similar to that observed for HbA (kR(O2) ≈ 21–28 s−1). When the equilibrium deoxyHb form of α-cpβ is reacted with CO in rapid mixing experiments, the observed time courses are monophasic and the observed bimolecular association rate constant is ∼1.0 μM−1 s−1, which is intermediate between the R state rate measured in partial photolysis experiments (∼5 μM−1 s−1) and that observed for T state deoxyHbA (k′T(CO) ≈ 0.1 to 0.2 μM−1 s−1). Thus the deoxygenated permuted β subunits generate an intermediate, higher-affinity, deoxyHb quaternary state. This conclusion is supported by equilibrium oxygen binding measurements in which α-cpβ exhibits a P50 of ∼1.5 mmHg and a low n-value (∼1.3) at pH 7, 20 °C, compared to 8.5 mmHg and n ≈ 2.8 for native HbA under identical, dilute conditions. PMID:19397368

  1. Integer Set Compression and Statistical Modeling

    DEFF Research Database (Denmark)

    Larsson, N. Jesper

    2014-01-01

    Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, and explore the effects of permuting the enumeration order based on element probabilities.

  2. A Novel Technique for Steganography Method Based on Improved Genetic Algorithm Optimization in Spatial Domain

    Directory of Open Access Journals (Sweden)

    M. Soleimanpour-moghadam

    2013-06-01

    Full Text Available This paper devotes itself to the study of secret message delivery using a cover image and introduces a novel steganographic technique based on a genetic algorithm to find a near-optimum structure for the pair-wise least-significant-bit (LSB) matching scheme. A survey of the related literature shows that the LSB matching method developed by Mielikainen employs a binary function to reduce the number of changes of LSB values. This method verifiably reduces the probability of detection and also improves the visual quality of stego images. Our proposal draws on Mielikainen's technique to present an enhanced dual-state scoring model, structured upon a genetic algorithm, which assesses the performance of different orders for LSB matching and searches for a near-optimum solution among all the permutation orders. Experimental results confirm the superiority of the new approach over Mielikainen's pair-wise LSB matching scheme.
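
    A minimal sketch of the pair-wise LSB matching step being optimised, following Mielikainen's binary function f(a, b) = LSB(⌊a/2⌋ + b); the genetic algorithm's search over embedding orders is not shown, and pixel boundary checks are omitted:

```python
def f(a, b):
    return ((a // 2) + b) & 1

def embed_pair(x1, x2, m1, m2):
    """Embed bits (m1, m2) into pixel pair (x1, x2) with at most one +/-1 change."""
    if (x1 & 1) == m1:
        if f(x1, x2) != m2:
            x2 += 1                # -1 works equally well; boundary checks omitted
    else:
        # Changing x1 by +/-1 flips its LSB either way; pick the direction
        # that also makes the binary function carry m2.
        x1 = x1 - 1 if f(x1 - 1, x2) == m2 else x1 + 1
    return x1, x2

def extract_pair(y1, y2):
    return y1 & 1, f(y1, y2)

y = embed_pair(100, 101, 1, 0)
print(y, extract_pair(*y))         # recovered bits should be (1, 0)
```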

  3. Permutation-symmetric three-particle hyper-spherical harmonics based on the S3 ⊗ SO(3)rot ⊂ O(2) ⊗ SO(3)rot ⊂ U(3) ⋊ S2 ⊂ O(6) subgroup chain

    Directory of Open Access Journals (Sweden)

    Igor Salom

    2017-07-01

    Full Text Available We construct the three-body permutation symmetric hyperspherical harmonics to be used in the non-relativistic three-body Schrödinger equation in three spatial dimensions (3D). We label the state vectors according to the S3 ⊗ SO(3)rot ⊂ O(2) ⊗ SO(3)rot ⊂ U(3) ⋊ S2 ⊂ O(6) subgroup chain, where S3 is the three-body permutation group and S2 is its two-element subgroup containing the transposition of the first two particles, O(2) is the "democracy transformation", or "kinematic rotation", group for three particles, SO(3)rot is the 3D rotation group, and U(3), O(6) are the usual Lie groups. We discuss the good quantum numbers implied by the above chain of algebras, as well as their relation to the S3 permutation properties of the harmonics, particularly in view of the SO(3)rot ⊂ SU(3) degeneracy. We provide a definite, practically implementable algorithm for the calculation of harmonics with arbitrary finite integer values of the hyper angular momentum K, and show an explicit example of this construction in a specific case with degeneracy, as well as tables of K ≤ 6 harmonics. All harmonics are expressed as homogeneous polynomials in the Jacobi vectors (λ, ρ) with coefficients given as algebraic numbers, unless the "operator method" is chosen for the lifting of the SO(3)rot ⊂ SU(3) multiplicity and the dimension of the degenerate subspace is greater than four, in which case one must resort to numerical diagonalization; the latter condition is not met by any K ≤ 15 harmonic, or by any L ≤ 7 harmonic with arbitrary K. We also calculate a certain type of matrix elements (the Gaunt integrals) of products of three harmonics in two ways: (1) by explicit evaluation of integrals and (2) by reduction to known SU(3) Clebsch–Gordan coefficients. In this way we complete the calculation of the ingredients sufficient for the solution of the quantum-mechanical three-body bound state problem.

  4. A cellular automata model of bone formation.

    Science.gov (United States)

    Van Scoy, Gabrielle K; George, Estee L; Opoku Asantewaa, Flora; Kerns, Lucy; Saunders, Marnie M; Prieto-Langarica, Alicia

    2017-04-01

    Bone remodeling is an elegantly orchestrated process by which osteocytes, osteoblasts and osteoclasts function as a syncytium to maintain or modify bone. On the microscopic level, bone consists of cells that create, destroy and monitor the bone matrix. These cells interact in a coordinated manner to maintain a tightly regulated homeostasis. It is this regulation that is responsible for the observed increase in bone gain in the dominant arm of a tennis player and the observed increase in bone loss associated with spaceflight and osteoporosis. The manner in which these cells interact to bring about a change in bone quality and quantity has yet to be fully elucidated. But efforts to understand the multicellular complexity can ultimately lead to eradication of metabolic bone diseases such as osteoporosis and improved implant longevity. Experimentally validated mathematical models that simulate functional activity and offer eventual predictive capabilities offer tremendous potential in understanding multicellular bone remodeling. Here we undertake the initial challenge of developing a mathematical model of bone formation validated with in vitro data obtained from osteoblastic bone cells induced to mineralize and quantified at 26 days of culture. A cellular automata model was constructed to simulate the in vitro characterization. Permutation tests were performed to compare the distribution of the mineralization in the cultures with the distribution of the mineralization in the mathematical models. The results of the permutation test show that the distributions of mineralization from the characterization and the mathematical model come from the same probability distribution, thereby validating the cellular automata model. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Use of different marker pre-selection methods based on single SNP regression in the estimation of Genomic-EBVs

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-01-01

    Full Text Available Two methods of SNP pre-selection based on single-marker regression for the estimation of genomic breeding values (G-EBVs) were compared using simulated data provided by the XII QTL-MAS workshop: (i) Bonferroni correction of the significance threshold and (ii) a permutation test to obtain the reference distribution of the null hypothesis and identify significant markers at the P<0.01 and P<0.001 significance thresholds. From the set of markers significant at P<0.001, random subsets of 50% and 25% of the markers were extracted to evaluate the effect of further reducing the number of significant SNPs on G-EBV predictions. The Bonferroni correction method identified 595 significant SNPs, which gave the best G-EBV accuracies in the prediction generations (82.80%). The permutation methods gave slightly lower G-EBV accuracies even though a larger number of SNPs were significant (2,053 and 1,352 for the 0.01 and 0.001 significance thresholds, respectively). Interestingly, halving or quartering the number of SNPs significant at P<0.001 resulted in only a slight decrease in G-EBV accuracy. The genetic structure of the simulated population, with few QTL carrying large effects, might have favoured the Bonferroni method.
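
    A minimal sketch contrasting the two pre-selection strategies on synthetic data: a Bonferroni cut-off on single-marker |t| statistics versus an empirical threshold from the permutation distribution of the maximum statistic (SciPy is assumed for the t quantile):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, n_snps = 200, 500
geno = rng.integers(0, 3, size=(n, n_snps)).astype(float)   # 0/1/2 genotypes
pheno = 0.5 * geno[:, 0] + rng.normal(size=n)               # one true marker

def single_snp_t(y, g):
    # |t| statistic of the slope in a univariate regression, for every SNP at once.
    gz = (g - g.mean(0)) / g.std(0)
    yz = (y - y.mean()) / y.std()
    r = gz.T @ yz / len(y)
    return np.abs(r) * np.sqrt((len(y) - 2) / (1 - r ** 2))

t_obs = single_snp_t(pheno, geno)

# (i) Bonferroni threshold at family-wise alpha = 0.05
bonf = stats.t.isf(0.05 / n_snps / 2, df=n - 2)

# (ii) permutation threshold: null distribution of the maximum |t| over SNPs
max_null = np.array([single_snp_t(rng.permutation(pheno), geno).max()
                     for _ in range(200)])
perm = np.quantile(max_null, 0.95)

print((t_obs > bonf).sum(), (t_obs > perm).sum())   # markers passing each threshold
```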

  6. Revisiting the NEH algorithm- the power of job insertion technique for optimizing the makespan in permutation flow shop scheduling

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2016-04-01

    Full Text Available Permutation flow shop scheduling problems have been an interesting area of research for over six decades. Among the several objectives, minimization of makespan has been studied the most over the years. The problems are widely regarded as NP-complete if the number of machines is three or more. As the computation time grows exponentially with the problem size, heuristics and meta-heuristics that give reasonably accurate and acceptable results have been proposed by many authors. The NEH algorithm, proposed in 1983, is still considered one of the best simple, constructive heuristics for the minimization of makespan. This paper analyses the powerful job insertion technique used by the NEH algorithm and proposes seven new variants whose complexity level remains the same. The 120 benchmark problem instances proposed by Taillard were used to validate the algorithms. Out of the seven, three produce better results than the original NEH algorithm.
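
    A minimal sketch of the original NEH construction that the variants modify (tie-breaking between insertion positions with equal makespan, the focus of several variants, is left to Python's min, i.e. first-found):

```python
import numpy as np

def makespan(p, seq):
    c = np.zeros(p.shape[1])
    for j in seq:
        c[0] += p[j, 0]
        for m in range(1, p.shape[1]):
            c[m] = max(c[m], c[m - 1]) + p[j, m]
    return c[-1]

def neh(p):
    order = np.argsort(-p.sum(axis=1))        # jobs by decreasing total time
    seq = [order[0]]
    for j in order[1:]:
        # Try every insertion position and keep the one with the best makespan.
        candidates = [seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(p, s))
    return seq, makespan(p, seq)

p = np.array([[5, 7, 6], [9, 2, 4], [3, 3, 8], [6, 6, 2]])
print(neh(p))
```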

  7. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  8. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; here we augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics for choosing the model order and kernel scale in terms of signal-to-noise ratio (SNR).
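
    A minimal sketch of the permutation idea behind Parallel Analysis, applied here to a Gaussian kernel matrix: permute each feature column independently to destroy inter-feature structure, recompute the eigenvalue spectrum, and retain components whose eigenvalues exceed the permutation quantile (the data and kernel scale below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)

def centred_kernel_eigs(X, scale):
    # Gaussian kernel matrix, double-centred, eigenvalues in decreasing order.
    k = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * scale ** 2))
    n = len(X)
    h = np.eye(n) - np.ones((n, n)) / n
    return np.sort(np.linalg.eigvalsh(h @ k @ h))[::-1]

X = np.column_stack([rng.normal(size=100), rng.normal(size=100)])
X = np.column_stack([X, X[:, 0] + 0.1 * rng.normal(size=100)])  # one dependent feature

eigs = centred_kernel_eigs(X, scale=1.0)
null = np.array([
    centred_kernel_eigs(np.column_stack([rng.permutation(c) for c in X.T]), 1.0)
    for _ in range(50)
])
threshold = np.quantile(null, 0.95, axis=0)
print("retained components:", int((eigs > threshold).sum()))
```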

  9. Preon representations and composite models

    International Nuclear Information System (INIS)

    Kang, Kyungsik

    1982-01-01

    This is a brief report on the preon models investigated by In-Gyu Koh, A. N. Schellekens and myself, based on complex, anomaly-free and asymptotically free representations of SU(3) to SU(8), SO(4N+2) and E 6 with no more than two different preons. A complete list of the representations that are complex, anomaly-free and asymptotically free has been given by E. Eichten, I.-G. Koh and myself. The assumptions made about the ground-state composites and the role of Fermi statistics in determining the metaflavor wave functions are discussed in some detail. We explain the method of decomposition of tensor products with definite permutation properties, which has been developed for this purpose by I.-G. Koh, A.N. Schellekens and myself. An example based on an anomaly-free representation of the confining metacolor group SU(5) is discussed

  10. An Enhanced Discrete Artificial Bee Colony Algorithm to Minimize the Total Flow Time in Permutation Flow Shop Scheduling with Limited Buffers

    Directory of Open Access Journals (Sweden)

    Guanlong Deng

    2016-01-01

    Full Text Available This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with buffer capacity. First, the solution is represented as a discrete job permutation that converts directly to an active schedule. Then, we present a simple and effective scheme called best insertion for the employed and onlooker bees and introduce a combined local search exploring both the insertion and swap neighborhoods. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances; computations and comparisons show that the proposed algorithm not only solves the benchmark set better than the existing discrete differential evolution and iterated greedy algorithms, but also outperforms two recently proposed discrete artificial bee colony algorithms.

  11. A statistical method (cross-validation) for bone loss region detection after spaceflight

    Science.gov (United States)

    Zhao, Qian; Li, Wenjun; Li, Caixia; Chu, Philip W.; Kornak, John; Lang, Thomas F.

    2010-01-01

    Astronauts experience bone loss after long spaceflight missions. Identifying specific regions that undergo the greatest losses (e.g. the proximal femur) could reveal information about the processes of bone loss in disuse and disease. The detection of such regions, however, remains an open problem. This paper focuses on statistical methods to detect such regions. We perform statistical parametric mapping to obtain t-maps of changes in images, and propose a new cross-validation method to select an optimum suprathreshold for forming clusters of pixels. Once these candidate clusters are formed, we use permutation testing of longitudinal labels to derive significant changes. PMID:20632144

  12. Review of various dynamic modeling methods and development of an intuitive modeling method for dynamic systems

    International Nuclear Information System (INIS)

    Shin, Seung Ki; Seong, Poong Hyun

    2008-01-01

    Conventional static reliability analysis methods are inadequate for modeling dynamic interactions between components of a system. Various techniques such as dynamic fault tree, dynamic Bayesian networks, and dynamic reliability block diagrams have been proposed for modeling dynamic systems based on improvement of the conventional modeling methods. In this paper, we review these methods briefly and introduce dynamic nodes to the existing Reliability Graph with General Gates (RGGG) as an intuitive modeling method to model dynamic systems. For a quantitative analysis, we use a discrete-time method to convert an RGGG to an equivalent Bayesian network and develop a software tool for generation of probability tables

  13. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    Science.gov (United States)

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, in conjunction with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validation. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified through analysis of the frequency with which chemical descriptors were used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrate that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model could be expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.

  14. Simple model for multiple-choice collective decision making.

    Science.gov (United States)

    Lee, Ching Hua; Lucas, Andrew

    2014-11-01

    We describe a simple model of heterogeneous, interacting agents making decisions between n≥2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E. We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.

  15. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, current use is limited, and answers to questions such as which methods to use, and when, remain underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them, or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise, and inputs from potential users. Twenty-eight different methods were identified and characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and the four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature on assessing the applicability of particular approaches to health services management, where modelling and simulation skills, let alone money and time, are scarce. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular, it addresses which method is most appropriate to which specific health services management problem, what the user might expect to obtain from the method, and what is required to use it. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.

  16. Credit market Jitters in the course of the financial crisis: A permutation entropy approach in measuring informational efficiency in financial assets

    Science.gov (United States)

    Siokis, Fotios M.

    2018-06-01

    We explore the evolution of informational efficiency for specific instruments of the U.S. money, bond and stock exchange markets, before and after the outbreak of the Great Recession. We utilize permutation entropy and the complexity-entropy causality plane to rank the time series and measure the degree of informational efficiency. We find that after the credit crunch and the collapse of Lehman Brothers, the efficiency level of specific money market instruments' yields falls considerably. This is evidence of less uncertainty in predicting the related yields throughout the financial disarray. A similar trend appears in the stock exchange indices, although their efficiency remains at much higher levels. Bond market instruments, on the other hand, maintained their efficiency levels even after the outbreak of the crisis, which can be interpreted as greater randomness and less predictability in their yields.
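
    Permutation entropy, the ranking statistic used in this record, is straightforward to compute: slide a window over the series, record the ordinal pattern (the permutation that sorts the window), and take the Shannon entropy of the pattern distribution. A minimal sketch, normalised to [0, 1] so that values near 1 indicate a maximally unpredictable (informationally efficient) series:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D series: the Shannon entropy
    of the distribution of ordinal patterns, scaled to [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        # Ordinal pattern = argsort of the delayed window
        pattern = tuple(np.argsort(x[i : i + order * delay : delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-(p * np.log2(p)).sum() / np.log2(factorial(order)))

rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=5000)))           # ~1: unpredictable
print(permutation_entropy(np.sin(0.05 * np.arange(5000))))  # << 1: predictable
```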

  17. Permutation on hybrid natural inflation

    Science.gov (United States)

    Carone, Christopher D.; Erlich, Joshua; Ramos, Raymundo; Sher, Marc

    2014-09-01

    We analyze a model of hybrid natural inflation based on the smallest non-Abelian discrete group S3. Leading invariant terms in the scalar potential have an accidental global symmetry that is spontaneously broken, providing a pseudo-Goldstone boson that is identified as the inflaton. The S3 symmetry restricts both the form of the inflaton potential and the couplings of the inflaton field to the waterfall fields responsible for the end of inflation. We identify viable points in the model parameter space. Although the power in tensor modes is small in most of the parameter space of the model, we identify parameter choices that yield potentially observable values of r without super-Planckian initial values of the inflaton field.

  18. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form, not using gradient information on the original limit state function or only using this information once, a drastic reduction in the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  19. PsiQuaSP-A library for efficient computation of symmetric open quantum systems.

    Science.gov (United States)

    Gegg, Michael; Richter, Marten

    2017-11-24

    In a recent publication we showed that permutation symmetry reduces the numerical complexity of Lindblad quantum master equations for identical multi-level systems from exponential to polynomial scaling. This is important for open system dynamics including realistic system-bath interactions and dephasing in, for instance, the Dicke model, multi-Λ system setups, etc. Here we present an object-oriented C++ library that allows one to set up and solve arbitrary quantum optical Lindblad master equations, especially those that are permutationally symmetric in the multi-level systems. PsiQuaSP (Permutation symmetry for identical Quantum Systems Package) uses the PETSc package for sparse linear algebra methods and differential equations as its basis. The aim of PsiQuaSP is to provide flexible, storage-efficient and scalable code while being as user friendly as possible. It is easily applied to many quantum optical or quantum information systems with more than one multi-level system. We first review the basics of the permutation symmetry for multi-level systems in quantum master equations. The application of PsiQuaSP to quantum dynamical problems is illustrated with several typical, simple examples of open quantum optical systems.
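
    The claimed reduction from exponential to polynomial scaling can be illustrated with a back-of-the-envelope count, independent of the library itself: the full Liouville space of N identical d-level systems has d^(2N) density-matrix elements, while the permutation-symmetric subspace grows only like a binomial coefficient that is polynomial in N. A small sketch, assuming the multiset-counting formula C(N + d² − 1, d² − 1) for the symmetric basis:

```python
from math import comb

def liouville_dims(N, d):
    """Compare the size of the full Lindblad problem for N identical d-level
    systems with the permutation-symmetric subspace actually needed."""
    full = d ** (2 * N)                       # exponential in N
    symmetric = comb(N + d**2 - 1, d**2 - 1)  # polynomial in N (degree d^2 - 1)
    return full, symmetric

for N in (2, 5, 10, 20):
    full, sym = liouville_dims(N, d=2)
    print(f"N={N:2d}: full={full:.3e}  symmetric={sym}")
```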

  20. ADOxx Modelling Method Conceptualization Environment

    Directory of Open Access Journals (Sweden)

    Nesat Efendioglu

    2017-04-01

    The importance of Modelling Method Engineering is rising along with the importance of domain-specific languages (DSLs) and individual modelling approaches. In order to capture the relevant semantic primitives for a particular domain, it is necessary to involve both (a) domain experts, who identify relevant concepts, and (b) method engineers, who compose a valid and applicable modelling approach. This process consists of the conceptual design of a formal or semi-formal modelling method as well as reliable, migratable, maintainable and user-friendly software development of the resulting modelling tool. The Modelling Method Engineering cycle is often under-estimated, as the conceptual architecture requires formal verification and the tool implementation requires practical usability; hence we propose a guideline and corresponding tools to support actors with different backgrounds along this complex engineering process. Based on practical experience in business, more than twenty research projects within the EU framework programmes and a number of bilateral research initiatives, this paper introduces the phases, a corresponding toolbox and lessons learned, with the aim of supporting the engineering of a modelling method. The proposed approach is illustrated and validated within use cases from three different EU-funded research projects in the fields of (1) Industry 4.0, (2) e-learning and (3) cloud computing. The paper discusses the approach, the evaluation results and derived outlooks.

  1. Mechatronic Systems Design Methods, Models, Concepts

    CERN Document Server

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...

  2. Origami rules for the construction of localized eigenstates of the Hubbard model in decorated lattices

    Science.gov (United States)

    Dias, R. G.; Gouveia, J. D.

    2015-11-01

    We present a method for constructing exact localized many-body eigenstates of the Hubbard model in decorated lattices, both for U = 0 and U → ∞. These states are localized with respect to both hole and particle movement. The starting point of the method is the construction of a plaquette, or a set of plaquettes, with a higher symmetry than that of the whole lattice. Using a simple set of rules, the tight-binding localized state in such a plaquette can be divided, folded and unfolded into new plaquette geometries. This set of rules is also valid for the construction of a localized state for one hole in the U → ∞ limit of the same plaquette, assuming a spin configuration which is a uniform linear combination of all possible permutations of the set of spins in the plaquette.

  3. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    The paper describes a pattern-oriented approach to evaluating modelling methods and comparing various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modelling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First the core ("method-neutral") meaning of each principle is described. Then the methods are examined with regard to the principle. Afterwards the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modelling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template, following the description style of object-oriented design patterns. The approach is demonstrated by evaluating a business process modelling method.

  4. Graph modeling systems and methods

    Science.gov (United States)

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.

  5. A Global Network Alignment Method Using Discrete Particle Swarm Optimization.

    Science.gov (United States)

    Huang, Jiaxiang; Gong, Maoguo; Ma, Lijia

    2016-10-19

    Molecular interaction data increase exponentially with the advance of biotechnology. This makes it possible and necessary to comparatively analyse such data at a network level. Global network alignment is an important network comparison approach for identifying conserved subnetworks and gaining insight into evolutionary relationships across species. Network alignment, which is analogous to subgraph isomorphism, is known to be an NP-hard problem. In this paper, we introduce a novel heuristic Particle-Swarm-Optimization based Network Aligner (PSONA), which optimizes a weighted global alignment model considering both protein sequence similarity and interaction conservation. The particle statuses and status updating rules are redefined in discrete form using permutations. A seed-and-extend strategy is employed to guide the search for a superior alignment. The proposed initialization method "seeds" matches with high sequence similarity into the alignment, which guarantees the functional coherence of the mapped nodes. A greedy local search method is designed as the "extension" procedure to iteratively optimize the edge conservation. PSONA is compared with several state-of-the-art methods on ten network pairs combined from five species. The experimental results demonstrate that the proposed aligner can map proteins with high functional coherence and can be used as a booster to effectively refine well-studied aligners.
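
    The permutation-coded discrete PSO hinted at here can be sketched generically: a particle is a permutation (a candidate node mapping), and its "velocity" toward a better permutation is a sequence of transpositions, applied probabilistically. The following is a minimal, generic sketch of that idea (the classic swap-operator formulation), not PSONA's exact update rule; the update probabilities alpha and beta are illustrative parameters.

```python
import random

def swap_sequence(src, dst):
    """Swaps that transform permutation src into dst (one greedy pass)."""
    s, swaps = list(src), []
    for i in range(len(s)):
        if s[i] != dst[i]:
            j = s.index(dst[i])
            s[i], s[j] = s[j], s[i]
            swaps.append((i, j))
    return swaps

def update_particle(position, pbest, gbest, alpha=0.5, beta=0.5):
    """Move a permutation-coded particle toward its personal and global bests
    by probabilistically applying the swaps that lead to each."""
    pos = list(position)
    for i, j in swap_sequence(pos, pbest):
        if random.random() < alpha:
            pos[i], pos[j] = pos[j], pos[i]
    for i, j in swap_sequence(pos, gbest):
        if random.random() < beta:
            pos[i], pos[j] = pos[j], pos[i]
    return pos

print(update_particle([2, 0, 1, 3], pbest=[0, 1, 2, 3], gbest=[3, 2, 1, 0]))
```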

  6. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating...... candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...

  7. IAEA CRP on HTGR Uncertainties in Modeling: Assessment of Phase I Lattice to Core Model Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Rouxelin, Pascal Nicolas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High-temperature gas-cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I 2c and the use of the cross section data in Exercise II 1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best-estimate results obtained for Exercise I 2a (fresh single-fuel block), Exercise I 2b (depleted single-fuel block), and Exercise I 2c (super cell), in addition to the first results of an investigation into cross section generation effects for the super-cell problem. The two-dimensional deterministic code known as the New ESC-based Weighting Transport (NEWT), included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package, was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO VI. The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise

  8. Electromagnetic modeling method for eddy current signal analysis

    International Nuclear Information System (INIS)

    Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.

    2004-10-01

    An electromagnetic modeling method for eddy current signal analysis is necessary before an experiment is performed. Electromagnetic modeling methods consist of analytical methods and numerical methods. The numerical methods can be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM) and the Volume Integral Method (VIM). Each modeling method has its merits and demerits, so a suitable modeling method can be chosen by considering the characteristics of each. This report explains the principle and application of each modeling method and compares the modeling programs.

  9. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1995-01-01

    A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current

  10. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of the best methods for producing "code emulators" for the analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information from previous empirical and/or theoretical analyses, (4) extension of the code emulator methodology to profile data, (5) application of nonlinear least squares estimators to the simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases.

  11. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

    Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of an engineering system. Model form uncertainty, inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions of competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process.
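
    The model-averaging step described above can be sketched in a few lines: weight each competing model by its (approximate) posterior probability and combine the predictions, with the between-model spread serving as a rough measure of model form uncertainty. A minimal sketch, assuming equal prior model probabilities and known log-likelihoods; it is not the paper's exact adjustment-factor formulation.

```python
import numpy as np

def model_average(predictions, log_likelihoods, prior=None):
    """Combine predictions from competing models, weighting each by its
    posterior model probability (a simple Bayesian model averaging sketch)."""
    k = len(predictions)
    prior = np.full(k, 1.0 / k) if prior is None else np.asarray(prior)
    ll = np.asarray(log_likelihoods)
    w = prior * np.exp(ll - ll.max())   # subtract max for numerical stability
    w /= w.sum()                        # posterior model probabilities
    preds = np.asarray(predictions)
    mean = w @ preds
    # Between-model spread contributes to the model-form uncertainty
    var_between = w @ (preds - mean) ** 2
    return mean, var_between, w

# Hypothetical example: three competing creep models predicting one response
mean, var, w = model_average(predictions=[1.10, 1.25, 1.18],
                             log_likelihoods=[-3.2, -2.8, -4.0])
print(mean, var, w)
```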

  12. State recognition of the viscoelastic sandwich structure based on the adaptive redundant second generation wavelet packet transform, permutation entropy and the wavelet support vector machine

    International Nuclear Information System (INIS)

    Qu, Jinxiu; Zhang, Zhousuo; Guo, Ting; Luo, Xue; Sun, Chuang; Li, Bing; Wen, Jinpeng

    2014-01-01

    The viscoelastic sandwich structure is widely used in mechanical equipment, yet the structure always suffers from damage during long-term service. Therefore, state recognition of the viscoelastic sandwich structure is essential for monitoring structural health states and keeping the equipment running with high reliability. Through the analysis of vibration response signals, this paper presents a novel method for this task based on the adaptive redundant second generation wavelet packet transform (ARSGWPT), permutation entropy (PE) and the wavelet support vector machine (WSVM). In order to tackle the non-linearity existing in the structure vibration response, the PE is introduced to reveal the state changes of the structure. In the case of complex non-stationary vibration response signals, in order to obtain more effective information regarding the structural health states, the ARSGWPT, which can adaptively match the characteristics of a given signal, is proposed to process the vibration response signals, and multiple PE features are then extracted from the resultant wavelet packet coefficients. The WSVM, which benefits from the conventional SVM as well as wavelet theory, is applied to classify the various structural states automatically. In this study, to achieve accurate and automated state recognition, the ARSGWPT, PE and WSVM are combined for signal processing, feature extraction and state classification, respectively. To demonstrate the effectiveness of the proposed method, a typical viscoelastic sandwich structure is designed, and different degrees of preload on the structure are used to characterize the various looseness states. The test results show that the proposed method can reliably recognize the different looseness states of the viscoelastic sandwich structure, and the WSVM can achieve a better classification performance than the conventional SVM. Moreover, the superiority of the proposed ARSGWPT in processing the complex vibration response

  13. Partly Duffing Oscillator Stochastic Resonance Method and Its Application on Mechanical Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jian Dang

    2016-01-01

    Because the slight fault signals in the early failure of a mechanical system are usually submerged in heavy background noise, it is unfeasible to extract the weak fault features via traditional vibration analysis. Stochastic resonance (SR), as a method of utilizing noise to amplify weak signals in nonlinear dynamical systems, can detect weak signals overwhelmed by noise. However, an analysis of the impact of noise intensity on the SR effect leads to the conclusion that detection results are dramatically limited by the noise intensity of the measured signals, especially for incipient fault features of mechanical systems in poor working environments. Therefore, this paper proposes a partly Duffing oscillator SR method to extract the fault features of a mechanical system. In this method, to locate the appearance of weak fault features and decrease the noise intensity, a permutation entropy index is constructed to select the measured signals used as the input of the Duffing oscillator system. Then, by regulating the system parameters, a reasonable match between the selected signals and the Duffing oscillator model is achieved to produce an SR phenomenon and realize fault diagnosis of the mechanical system. Experimental results demonstrate that the proposed method achieves a better effect in the fault diagnosis of mechanical systems.

  14. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings.

    Science.gov (United States)

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-05-18

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features' information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components.

  15. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    Directory of Open Access Journals (Sweden)

    Jie Liu

    2017-05-01

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features' information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components.

  16. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric versus nonparametric analysis, especially with non-normal data. Some methodologists question the validity of parametric tests and suggest nonparametric tests; others find nonparametric tests too conservative and less powerful and thus prefer parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitations. We used a pooled resampling method in a nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability for all conditions except the Cauchy and extremely variable lognormal distributions; in such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also performed better than the alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means, and for validating one-way analysis of variance results, for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
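
    The pooled-resampling idea can be sketched directly: draw both bootstrap samples from the pooled data (which embodies the null hypothesis of a common distribution), recompute the t statistic each time, and compare the observed statistic against this null distribution. A minimal sketch, assuming a Welch t statistic and a two-sided p-value; it is an illustration of the general approach, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def pooled_bootstrap_t_test(x, y, n_boot=10_000, seed=0):
    """Two-sample bootstrap t-test with pooled resampling: both bootstrap
    samples are drawn from the pooled data, embodying the null hypothesis
    that the two groups share a common distribution."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    t_obs = stats.ttest_ind(x, y, equal_var=False).statistic
    t_null = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        t_null[b] = stats.ttest_ind(xb, yb, equal_var=False).statistic
    p_two_sided = np.mean(np.abs(t_null) >= abs(t_obs))
    return t_obs, p_two_sided
```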

  17. Color Image Encryption Using Three-Dimensional Sine ICMIC Modulation Map and DNA Sequence Operations

    Science.gov (United States)

    Liu, Wenhao; Sun, Kehui; He, Yi; Yu, Mengyao

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a three-dimensional hyperchaotic Sine ICMIC modulation map (3D-SIMM) is proposed based on a close-loop modulation coupling (CMC) method. Based on this map, a novel color image encryption algorithm is designed, employing a hybrid model of multidirectional circular permutation and deoxyribonucleic acid (DNA) masking. In this scheme, the pixel positions of the image are scrambled by multidirectional circular permutation, and the pixel values are substituted by DNA sequence operations. The simulation results and security analysis show that the algorithm has a good encryption effect and strong key sensitivity, and can resist brute-force, statistical, differential, known-plaintext and chosen-plaintext attacks.
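
    The position-scrambling half of such a scheme can be sketched with key-dependent circular shifts: rotate every row and every column of a channel by offsets drawn from the keyed chaotic sequence, and invert by applying the opposite shifts in reverse order. The sketch below substitutes a seeded pseudorandom generator for the 3D-SIMM sequence, purely for illustration:

```python
import numpy as np

def circular_permute(img, row_shifts, col_shifts, inverse=False):
    """Scramble pixel positions by cyclically rotating each row and column
    by key-dependent offsets (a stand-in for the chaotic 3D-SIMM sequence)."""
    out = img.copy()
    sign = -1 if inverse else 1
    if not inverse:
        for i, s in enumerate(row_shifts):
            out[i] = np.roll(out[i], sign * s)
        for j, s in enumerate(col_shifts):
            out[:, j] = np.roll(out[:, j], sign * s)
    else:  # undo the forward pass in reverse order
        for j, s in enumerate(col_shifts):
            out[:, j] = np.roll(out[:, j], sign * s)
        for i, s in enumerate(row_shifts):
            out[i] = np.roll(out[i], sign * s)
    return out

# Round trip on one 8-bit colour channel
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
rs, cs = rng.integers(0, 64, 64), rng.integers(0, 64, 64)
scrambled = circular_permute(img, rs, cs)
assert np.array_equal(img, circular_permute(scrambled, rs, cs, inverse=True))
```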

  18. Latin Squares

    Indian Academy of Sciences (India)

    Admin

    2012-09-07

    Sep 7, 2012 ... must first talk of permutations and Latin squares. A permutation of a finite set of objects is a linear arrangement of ... with a special element 1 ... Of course, this has .... tion method to disprove Euler's conjecture for infinitely.

  19. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...

  20. Engineering method to build the composite structure ply database

    Directory of Open Access Journals (Sweden)

    Qinghua Shi

    In this paper, a new method to build a composite ply database under engineering design constraints is proposed. The method has two levels: core stacking sequence design and whole stacking sequence design. The core stacking sequences are obtained by a full permutation algorithm that takes into account the ply ratio requirement and a dispersion measure characterizing the spread of ply angles. The whole stacking sequences are combinations of the core stacking sequences. By excluding ply sequences which do not meet the engineering requirements, the final ply database is obtained. One example, with the constraints that the total layer number is 100 and the ply ratio is 30:60:10, is presented to validate the method. This method provides a new way to set up a ply database based on engineering requirements without resorting to intelligent optimization algorithms. Keywords: Composite ply database, VBA program, Structure design, Stacking sequence
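
    The core-level enumeration can be sketched as a full permutation of the ply multiset followed by constraint filtering. The sketch below (in Python rather than the VBA mentioned in the keywords) assumes a small hypothetical core, ply counts implied by the ply ratio, and a single illustrative rule limiting consecutive identical plies; real engineering rule sets would add further filters.

```python
from itertools import permutations

def core_sequences(counts, max_run=2):
    """Enumerate distinct stacking sequences of a ply multiset, keeping only
    those with at most `max_run` consecutive plies of the same angle
    (one common engineering constraint; others can be added as filters)."""
    plies = [angle for angle, n in counts.items() for _ in range(n)]
    seqs = set(permutations(plies))   # full permutation; feasible for small cores only
    def ok(seq):
        run = 1
        for a, b in zip(seq, seq[1:]):
            run = run + 1 if a == b else 1
            if run > max_run:
                return False
        return True
    return [s for s in seqs if ok(s)]

# Hypothetical 10-ply core with a 30:60:10 ratio of 0/45/90 degree plies
core = core_sequences({0: 3, 45: 6, 90: 1})
print(len(core), core[0])
```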

  1. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B; Sun, G D; Zhang, L Y; Wang, J G; Hu, J

    2017-01-01

    For fault classification models based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter: the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine is proposed to determine this tunable parameter. First, the vibration signals are measured and then decomposed into different fault feature modes based on variational mode decomposition. Then, the fault features of each mode are formed into a high-dimensional feature vector set based on permutation entropy. Second, the ELM output function is expressed through the inner product of a Gauss kernel function to adaptively determine the number of hidden layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and the ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)

  2. Derivation of some formulae in combinatrics by heuristic methods

    Science.gov (United States)

    Kobayashi, Yukio

    2015-04-01

    Heuristic methods are more effective for students learning permutations and combinations in mathematics than passive learning such as rote memorization of formulae. Two examples of finding new combinatorial formulae, for n! and C(2n-1, n), are discussed from a pedagogical standpoint. First, the factorial of n can be expressed through the sum ∑_{k=0}^{n-1} k·k! (indeed, n! = 1 + ∑_{k=0}^{n-1} k·k!), which can be found by a heuristic method. This expression is comparable to representations of powers of r using geometric series. Second, the number of possible combinations with repetition of n drawings from n elements is C(2n-1, n), which can be calculated from ∑_{k=0}^{n-1} C(n, k+1)·C(n-1, k). The relation ∑_{k=0}^{n-1} C(n, k+1)·C(n-1, k) = C(2n-1, n) can be found by a heuristic method through a corresponding problem on mappings.
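
    Both identities are easy to confirm numerically, which is itself a useful classroom exercise alongside the heuristic derivations:

```python
from math import comb, factorial

for n in range(1, 8):
    # n! = 1 + sum of k*k! for k = 0..n-1
    assert factorial(n) == 1 + sum(k * factorial(k) for k in range(n))
    # C(2n-1, n) = sum of C(n, k+1)*C(n-1, k) for k = 0..n-1 (Vandermonde)
    assert comb(2*n - 1, n) == sum(comb(n, k + 1) * comb(n - 1, k) for k in range(n))
print("both identities hold for n = 1..7")
```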

  3. Generalization of a result of Matsuo and Cherednik to the Calogero-Sutherland-Moser integrable models with exchange terms

    International Nuclear Information System (INIS)

    Quesne, C.

    1996-01-01

    Matsuo and Cherednik have proved that wave functions for the Calogero integrable system can be obtained from certain solutions of the Knizhnik-Zamolodchikov (KZ) equations, which first appeared in conformal field theory. It is now shown that some solutions of the generalized KZ equations can be employed to construct wave functions, characterized by any given permutational symmetry, for some Calogero-Sutherland-Moser integrable models with exchange terms. Such models include the spin generalizations of the original Calogero and Sutherland models as well as the model with δ-function interaction. 18 refs.

  4. Object Oriented Modeling : A method for combining model and software development

    NARCIS (Netherlands)

    Van Lelyveld, W.

    2010-01-01

    When requirements for a new model cannot be met by available modeling software, new software can be developed for a specific model. Methods for the development of both model and software exist, but a method for combined development has not been found. A compatible way of thinking is required to

  5. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  6. Multivariate analysis: models and method

    International Nuclear Information System (INIS)

    Sanz Perucha, J.

    1990-01-01

    Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. Its final goal is decision making. The paper describes the models and methods of multivariate analysis.

  7. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...

  8. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)

  9. Comparison of the Performance of Two Advanced Spectral Methods for the Analysis of Times Series in Paleoceanography

    Directory of Open Access Journals (Sweden)

    Eulogio Pardo-Igúzquiza

    2015-08-01

    Many studies have revealed the cyclicity of past ocean/atmosphere dynamics at a wide range of time scales (from decadal to millennial), based on the spectral analysis of time series of climate proxies obtained from deep sea sediment cores. Among the many techniques available for spectral analysis, the maximum entropy method and the Thomson multitaper approach have frequently been used because of their good statistical properties and high resolution with short time series. The novelty of the present study is that we compared the two methods according to the performance of the statistical tests used to assess the significance of their power spectrum estimates. The statistical significance of maximum entropy estimates was assessed by a random permutation test (Pardo-Igúzquiza and Rodríguez-Tovar, 2000), while the statistical significance of the Thomson multitaper method was assessed by an F-test (Thomson, 1982). We compared the results obtained in a case study using simulated data, where the spectral content of the time series was known, and in a case study with real data. In both cases the results are similar: while the cycles identified as significant by maximum entropy with the permutation test have a clear physical interpretation, the F-test with the Thomson multitaper estimator tends to deem the low-frequency peaks not significant and tends to flag more spurious peaks as significant at middle and high frequencies. Nevertheless, the best strategy is to use both techniques and to exploit the advantages of each.
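
    The multitaper half of the comparison can be sketched with SciPy's DPSS (Slepian) windows: taper the series with K orthogonal tapers, compute a periodogram for each, and average the eigenspectra. This is a simplified, unweighted sketch (Thomson's adaptive weighting and the F-test for periodic components are omitted); the NW and K values are illustrative choices.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, NW=4, K=7, fs=1.0):
    """Simple multitaper PSD: unweighted average of K tapered periodograms."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    tapers = dpss(N, NW, K)                  # K orthogonal Slepian tapers, shape (K, N)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs          # average eigenspectra
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    return freqs, psd

# A noisy sinusoid should show a clear peak near its frequency
t = np.arange(2048)
freqs, psd = multitaper_psd(np.sin(2 * np.pi * 0.05 * t) + np.random.randn(2048))
print(freqs[np.argmax(psd)])  # ~0.05
```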

  10. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation and two prognostic equations, for the vertical speed w and the nonhydrostatic part of pressure p'. With properly formulated governing equations, at each time step the dynamic part of the model is first integrated as in the original hydrostatic model, and then nonhydrostatic contributions are added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part integration, all physics packages of the original hydrostatic model can be used directly in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the code of the nonhydrostatic model comes from the original hydrostatic model; extra code is only needed for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic part of the dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and in simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from hydrostatic to nonhydrostatic.

  11. Diverse methods for integrable models

    NARCIS (Netherlands)

    Fehér, G.

    2017-01-01

    This thesis is centered around three topics sharing integrability as a common theme, and explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics. The last chapter describes an integrable quantum chain.

  12. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before performing a calculation with Geant4, the calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but most existing modeling programs have problems; in particular, some are not accurate or are tied to a specific CAD format. To convert complex CAD geometry models into GDML geometry models accurately, a Geant4 Computer Aided Design (CAD) based automatic modeling method was developed. The essence of this method is translating between CAD models represented with boundary representation (B-REP) and GDML models represented with constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is assembled with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM), and tested with several models including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately, and can be used for Geant4 automatic modeling. (authors)

  13. Performance Optimization of Tensor Contraction Expressions for Many Body Methods in Quantum Chemistry

    International Nuclear Information System (INIS)

    Hartono, Albert; Lu, Qingda; Henretty, Thomas; Krishnamoorthy, Sriram; Zhang, Huaijian; Baumgartner, Gerald; Bernholdt, David E.; Nooijen, Marcel; Pitzer, Russell M.; Ramanujam, J.; Sadayappan, Ponnuswamy

    2009-01-01

    Complex tensor contraction expressions arise in accurate electronic structure models in quantum chemistry, such as the coupled cluster method. This paper addresses two complementary aspects of performance optimization of such tensor contraction expressions. Transformations using algebraic properties of commutativity and associativity can be used to significantly decrease the number of arithmetic operations required for evaluation of these expressions. The identification of common subexpressions among a set of tensor contraction expressions can result in a reduction of the total number of operations required to evaluate the tensor contractions. The first part of the paper describes an effective algorithm for operation minimization with common subexpression identification and demonstrates its effectiveness on tensor contraction expressions for coupled cluster equations. The second part of the paper highlights the importance of data layout transformation in the optimization of tensor contraction computations on modern processors. A number of considerations such as minimization of cache misses and utilization of multimedia vector instructions are discussed. A library for efficient index permutation of multi-dimensional tensors is described and experimental performance data is provided that demonstrates its effectiveness.
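
    The operation-minimization idea is easy to demonstrate with a toy contraction: factoring out a common intermediate turns one O(n⁴) three-tensor contraction into two O(n³) matrix products. The sketch below uses NumPy's einsum purely as an illustration of the algebraic transformation the paper automates (np.einsum's optimize flag performs a similar factorization internally):

```python
import numpy as np

# C[i,l] = sum_{j,k} A[i,j] * B[j,k] * D[k,l]: evaluated directly this is an
# O(n^4) loop nest, but introducing T[i,k] = sum_j A[i,j] * B[j,k] as a
# reusable intermediate reduces it to two O(n^3) contractions.
n = 64
A, B, D = (np.random.rand(n, n) for _ in range(3))

naive = np.einsum("ij,jk,kl->il", A, B, D, optimize=False)  # single 4-index sum
T = np.einsum("ij,jk->ik", A, B)                            # common subexpression
staged = np.einsum("ik,kl->il", T, D)

assert np.allclose(naive, staged)
```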

  14. Performance Optimization of Tensor Contraction Expressions for Many Body Methods in Quantum Chemistry

    International Nuclear Information System (INIS)

    Krishnamoorthy, Sriram; Bernholdt, David E.; Pitzer, R.M.; Sadayappan, Ponnuswamy

    2009-01-01

    Complex tensor contraction expressions arise in accurate electronic structure models in quantum chemistry, such as the coupled cluster method. This paper addresses two complementary aspects of performance optimization of such tensor contraction expressions. Transformations using algebraic properties of commutativity and associativity can be used to significantly decrease the number of arithmetic operations required for evaluation of these expressions. The identification of common subexpressions among a set of tensor contraction expressions can result in a reduction of the total number of operations required to evaluate the tensor contractions. The first part of the paper describes an effective algorithm for operation minimization with common subexpression identification and demonstrates its effectiveness on tensor contraction expressions for coupled cluster equations. The second part of the paper highlights the importance of data layout transformation in the optimization of tensor contraction computations on modern processors. A number of considerations, such as minimization of cache misses and utilization of multimedia vector instructions, are discussed. A library for efficient index permutation of multidimensional tensors is described, and experimental performance data is provided that demonstrates its effectiveness.

  15. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria predictions by group contribution based property prediction models. The method is applied to lipid systems, where the Original UNIFAC...... model is used. Using the proposed method to estimate the interaction parameters from VLE data only, better phase equilibria predictions were obtained for both VLE and SLE. The results were validated and compared with the original model's performance...

  16. Cipher image damage and decisions in real time

    Science.gov (United States)

    Silva-García, Victor Manuel; Flores-Carapia, Rolando; Rentería-Márquez, Carlos; Luna-Benoso, Benjamín; Jiménez-Vázquez, Cesar Antonio; González-Ramírez, Marlon David

    2015-01-01

    This paper proposes a method for constructing permutations on m-position arrays. Our objective is to encrypt color images using the advanced encryption standard (AES) with variable permutations, meaning a different permutation for each 128-bit block in the first round after the x-or operation is applied. Furthermore, this research offers the possibility of recovering the original image when the encrypted image has been damaged, whether by an attack or not. This is achieved by permuting the original image pixel positions before encrypting with AES variable permutations, which means building a pseudorandom permutation on arrays of 250,000 positions or more. To this end, an algorithm that defines a bijective function between the set of nonnegative integers and the set of permutations is built. From this algorithm, the way to build permutations on the array 0,1,…,m-1, knowing m-1 constants, is presented. Transcendental numbers are used to select these m-1 constants in a pseudorandom way. The quality of the proposed encryption is evaluated according to the following criteria: the correlation coefficient, the entropy, and the discrete Fourier transform. A goodness-of-fit test for each basic color image is proposed to measure the degree of bit randomness of the encrypted image. On the other hand, cipher images are obtained in a lossless way, i.e., no JPEG file formats are used.
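
    A bijection between nonnegative integers and permutations, of the kind the paper builds, is classically realised through the factorial number system (Lehmer code). The sketch below is that textbook construction, shown for orientation only; the paper's own algorithm, based on m-1 constants drawn from transcendental numbers, differs in its details.

```python
import math

def int_to_permutation(n, m):
    """Map a nonnegative integer n < m! to a permutation of 0..m-1
    via the factorial number system (Lehmer code)."""
    elements = list(range(m))
    perm = []
    for i in range(m - 1, -1, -1):
        f = math.factorial(i)
        idx, n = divmod(n, f)       # i-th factoradic digit selects an element
        perm.append(elements.pop(idx))
    return perm

print(int_to_permutation(0, 4))    # [0, 1, 2, 3]  (first permutation)
print(int_to_permutation(23, 4))   # [3, 2, 1, 0]  (last of 4! = 24)
```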

  17. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes, as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the ...... the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions....

  18. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  19. Are molecular haplotypes worth the time and expense? A cost-effective method for applying molecular haplotypes.

    Directory of Open Access Journals (Sweden)

    Mark A Levenstien

    2006-08-01

    Because current molecular haplotyping methods are expensive and not amenable to automation, many researchers rely on statistical methods to infer haplotype pairs from multilocus genotypes, and subsequently treat these inferred haplotype pairs as observations. These procedures are prone to haplotype misclassification. We examine the effect of these misclassification errors on the false-positive rate and power for two association tests: the standard likelihood ratio test (LRTstd) and a likelihood ratio test that employs a double-sampling approach to allow for the misclassification inherent in the haplotype inference procedure (LRTae). We aim to determine the cost-benefit relationship of increasing the proportion of individuals with molecular haplotype measurements in addition to genotypes, in order to raise the power gain of the LRTae over the LRTstd. This analysis should provide a guideline for determining the minimum number of molecular haplotypes required for a desired power. Our simulations under the null hypothesis of equal haplotype frequencies in cases and controls indicate that (1) for each statistic, permutation methods maintain the correct type I error; (2) specific multilocus genotypes that are misclassified as the incorrect haplotype pair are consistently misclassified throughout each entire dataset; and (3) under the alternative hypothesis, our simulations showed a significant power gain for the LRTae over the LRTstd for a subset of the parameter settings. Permutation methods should be used exclusively to determine significance for each statistic. For fixed cost, the power gain of the LRTae over the LRTstd varied depending on the relative costs of genotyping, molecular haplotyping, and phenotyping. The LRTae showed the greatest benefit over the LRTstd when the cost of phenotyping was very high relative to the cost of genotyping. This situation is likely to occur in a replication study as opposed to a whole-genome association study.

  20. GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY

    Directory of Open Access Journals (Sweden)

    F. Biljecki

    2016-09-01

    The production and dissemination of semantic 3D city models is rapidly increasing, benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtaining 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence on GitHub at http://github.com/tudelft3d/Random3Dcity.

  1. TRAC methods and models

    International Nuclear Information System (INIS)

    Mahaffy, J.H.; Liles, D.R.; Bott, T.F.

    1981-01-01

    The numerical methods and physical models used in the Transient Reactor Analysis Code (TRAC) versions PD2 and PF1 are discussed. Particular emphasis is placed on TRAC-PF1, the version specifically designed to analyze small-break loss-of-coolant accidents

  2. A business case method for business models

    OpenAIRE

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on combining them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit...

  3. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of data assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any data assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
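
    As a concrete instance of the machinery such studies build on, the stochastic ensemble Kalman filter analysis step is a standard building block in geophysical data assimilation. A minimal sketch of that textbook update, not the author's method:

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_cov, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator;
    y: (n_obs,) observation vector; obs_cov: (n_obs, n_obs) error covariance."""
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)          # ensemble anomalies
    P = X.T @ X / (n - 1)                         # sample forecast covariance
    S = H @ P @ H.T + obs_cov                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    # Perturb observations so the analysis spread is statistically consistent.
    y_pert = rng.multivariate_normal(y, obs_cov, size=n)
    innovations = y_pert - ensemble @ H.T
    return ensemble + innovations @ K.T
```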

  4. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  5. Accuracy evaluation of dental models manufactured by CAD/CAM milling method and 3D printing method.

    Science.gov (United States)

    Jeong, Yoo-Geum; Lee, Wan-Sun; Lee, Kyu-Bok

    2018-06-01

    To evaluate the accuracy of models made using the computer-aided design/computer-aided manufacturing (CAD/CAM) milling method and the 3D printing method, and to confirm their applicability as working models for dental prosthesis production. First, a natural tooth model (ANA-4, Frasaco, Germany) was scanned using an oral scanner. The obtained scan data were used as a CAD reference model (CRM) to produce 10 models each with the milling method and the 3D printing method. The 20 models were then scanned using a desktop scanner to form the CAD test models. The accuracy of the two groups was compared using dedicated software to calculate the root mean square (RMS) value after superimposing the CRM and the CAD test model (CTM). The RMS value of the models manufactured by the milling method (152±52 µm) was significantly higher than that of the models produced by the 3D printing method (52±9 µm). The accuracy of the 3D printing method is thus superior to that of the milling method, but at present both methods are limited in their application as working models for prosthesis manufacture.
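
    After superimposition, the RMS measure used in such studies reduces to the root mean square of point-wise deviations between reference and test surfaces. A minimal sketch assuming the two scans are already aligned and sampled at corresponding points (the function name is illustrative):

```python
import numpy as np

def rms_deviation(crm_points, ctm_points):
    """RMS of Euclidean deviations between corresponding points of the
    reference model (CRM) and the test model (CTM), both arrays of shape (N, 3)."""
    d = np.linalg.norm(crm_points - ctm_points, axis=1)
    return np.sqrt(np.mean(d ** 2))
```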

  6. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products...... and PVM methods, in a presented Product Requirement Development model some of the individual drawbacks of each method could be overcome. Based on the UML standard, the model enables the representation of complex hierarchical relationships in a generic product model. At the same time it uses matrix....... Updated design requirements have then to be made explicit and mapped against the existing product architecture. In this paper, existing methods are adapted and extended through linking updated requirements to suitable product models. By combining several established modeling techniques, such as the DSM...

  7. Coherence method of identifying signal noise model

    International Nuclear Information System (INIS)

    Vavrin, J.

    1981-01-01

    The noise analysis method for identifying perturbance models and their parameters by stochastic analysis of the noise in variables measured on a reactor is discussed. The correlation analysis is made in the frequency domain using coherence analysis methods. To identify an actual specific perturbance, its model should be determined and recognized within a compound model of the perturbance system using the results of observation. The optimum estimate of the perturbance system model is based on estimates of the related spectral densities, which are determined from the spectral density matrix of the measured variables. Partial and multiple coherence, partial transfers, and the power spectral densities of the input and output variables of the noise model are determined from the related spectral densities. The applicability of the coherence identification methods was tested on a simple simulated stochastic system, and good agreement was found between the initial analytic frequency filters and the identified transfers. (B.S.)
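
    The coherence functions underlying such an analysis can be estimated directly from measured signals with Welch-type spectral estimates. A minimal sketch using SciPy's ordinary (not partial or multiple) coherence, with synthetic signals standing in for reactor measurements:

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                                  # sampling frequency [Hz]
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
# Two noisy signals sharing a common 5 Hz component.
x = np.sin(2 * np.pi * 5 * t) + rng.normal(size=t.size)
y = np.sin(2 * np.pi * 5 * t + 0.5) + rng.normal(size=t.size)

f, Cxy = coherence(x, y, fs=fs, nperseg=1024)
print(f[np.argmax(Cxy)])   # frequency of strongest coherence, ~5 Hz
```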

  8. Iterative method for Amado's model

    International Nuclear Information System (INIS)

    Tomio, L.

    1980-01-01

    A recently proposed iterative method for solving scattering integral equations is applied to spin-doublet and spin-quartet neutron-deuteron scattering in the Amado model. The method is tested numerically in the calculation of scattering lengths and phase shifts, and the results are found to be better than those obtained using the conventional Padé technique. (Author) [pt

  9. Model for particle masses, flavor mixing, and CP violation, based on spontaneously broken discrete chiral symmetry as the origin of families

    International Nuclear Information System (INIS)

    Adler, S.L.

    1999-01-01

    We construct extensions of the standard model based on the hypothesis that Higgs bosons also exhibit a family structure and that the flavor weak eigenstates in the three families are distinguished by a discrete Z_6 chiral symmetry that is spontaneously broken by the Higgs sector. We study in detail at the tree level models with three Higgs doublets and with six Higgs doublets comprising two weakly coupled sets of three. In a leading approximation of S_3 cyclic permutation symmetry the three-Higgs-doublet model gives a "democratic" mass matrix of rank 1, while the six-Higgs-doublet model gives either a rank-1 mass matrix or, in the case when it spontaneously violates CP, a rank-2 mass matrix corresponding to nonzero second family masses. In both models, the CKM matrix is exactly unity in the leading approximation. Allowing small explicit violations of cyclic permutation symmetry generates small first family masses in the six-Higgs-doublet model, and first and second family masses in the three-Higgs-doublet model, and gives a nontrivial CKM matrix in which the mixings of the first and second family quarks are naturally larger than mixings involving the third family. Complete numerical fits are given for both models, flavor-changing neutral current constraints are discussed in detail, and the issues of unification of couplings and neutrino masses are addressed. On a technical level, our analysis uses the theory of circulant and retrocirculant matrices, the relevant parts of which are reviewed.
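
    The "democratic" mass matrix of the leading permutation-symmetric approximation, with all entries equal, indeed has rank 1, so only one family acquires mass at this order. A quick numerical check, illustrative only:

```python
import numpy as np

m = np.ones((3, 3))   # democratic mass matrix (up to an overall scale)
print(np.linalg.matrix_rank(m))             # 1
print(np.round(np.linalg.eigvalsh(m), 10))  # [0, 0, 3]: one massive family
```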

  10. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  11. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an approach to thermal efficiency degradation diagnosis in turbine cycles based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression model can then map a superficial state observed from a power plant back to the associated intrinsic state. The proposed diagnosis method comprises three processes: 1) simulations of degradation conditions to obtain the measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state from the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs covering various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse of the fitted model for the given superficial states to obtain the component degradation modes. The method suggested in this paper was validated using the turbine cycle model of an operating power plant.
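
    The pairing of the what-if and inverse what-if steps amounts to fitting a forward linear map from intrinsic (degradation) states to superficial (measured) states and then inverting it, for instance with a pseudo-inverse. A minimal sketch under those assumptions; the matrix and variable names are hypothetical, not the paper's plant model:

```python
import numpy as np

# What-if step: simulated degradation inputs -> simulated measured outputs.
rng = np.random.default_rng(1)
intrinsic = rng.uniform(0, 1, size=(200, 4))        # degradation states
A_true = np.array([[2.0, 0.3, 0.0, 0.1],
                   [0.0, 1.5, 0.4, 0.0],
                   [0.2, 0.0, 1.1, 0.6],
                   [0.0, 0.2, 0.0, 0.9],
                   [0.5, 0.0, 0.3, 0.0]])
superficial = intrinsic @ A_true.T                   # (200, 5) measurements

# Fit the linear regression model from the what-if simulations.
A_fit, *_ = np.linalg.lstsq(intrinsic, superficial, rcond=None)

# Inverse what-if step: recover the intrinsic state from plant measurements.
measured = np.array([1.2, 0.8, 1.0, 0.5, 0.6])
intrinsic_est = np.linalg.pinv(A_fit.T) @ measured
print(intrinsic_est)
```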

  12. DATA MINING METHODS FOR OMICS AND KNOWLEDGE OF CRUDE MEDICINAL PLANTS TOWARD BIG DATA BIOLOGY

    Directory of Open Access Journals (Sweden)

    Farit M. Afendi

    2013-01-01

    Molecular biological data have rapidly increased with the recent progress of the Omics fields, e.g., genomics, transcriptomics, proteomics, and metabolomics, which necessitates the development of databases and methods for efficient storage, retrieval, integration, and analysis of massive data. The present study reviews the usage of the KNApSAcK Family DB in metabolomics and related areas, discusses several statistical methods for handling multivariate data, and shows their application to Indonesian blended herbal medicines (Jamu) as a case study. Exploration using a biplot reveals that many plants are rarely utilized, while some plants are highly utilized toward specific efficacies. Furthermore, the ingredients of Jamu formulas are modeled using Partial Least Squares Discriminant Analysis (PLS-DA) in order to predict their efficacy. The plants used in each Jamu medicine served as the predictors, whereas the efficacy of each Jamu provided the responses. This model produces 71.6% correct classification in predicting efficacy. A permutation test is then used to determine the plants that serve as main ingredients in a Jamu formula by evaluating the significance of the PLS-DA coefficients. Next, in order to explain the role of plants that serve as main ingredients in Jamu medicines, information on the pharmacological activity of the plants is added to the predictor block. An N-PLS-DA model, the multiway version of PLS-DA, is then utilized to handle the three-dimensional array of the predictor block. The resulting N-PLS-DA model reveals that the effects of some pharmacological activities are specific to certain efficacies while other activities are diverse across many efficacies. The mathematical modeling introduced in the present study can be utilized in the global analysis of big data aimed at revealing the underlying biology.
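
    The permutation test on PLS coefficients can be sketched with scikit-learn: refit the model on permuted efficacy labels many times and flag predictors whose observed coefficients exceed the permutation null. A minimal sketch on hypothetical binary ingredient data, not the authors' Jamu dataset:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(120, 30)).astype(float)   # Jamu-by-plant usage
y = (X[:, 3] + X[:, 17] + rng.normal(0, 0.5, 120) > 1).astype(float)

pls = PLSRegression(n_components=2).fit(X, y)
observed = np.abs(pls.coef_).ravel()   # ravel() copes with either coef_ layout

n_perm = 500
null = np.zeros((n_perm, X.shape[1]))
for i in range(n_perm):
    y_perm = rng.permutation(y)
    null[i] = np.abs(PLSRegression(n_components=2).fit(X, y_perm).coef_).ravel()

pvals = (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)
print(np.where(pvals < 0.05)[0])   # candidate "main ingredient" predictors
```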

  13. Study on Reverse Reconstruction Method of Vehicle Group Situation in Urban Road Network Based on Driver-Vehicle Feature Evolution

    Directory of Open Access Journals (Sweden)

    Xiaoyuan Wang

    2017-01-01

    The vehicle group situation is the state of the dynamic arrangement formed by a target vehicle and its neighboring traffic entities. It is a concept frequently involved in traffic flow theory, especially in active vehicle safety research, and studying it in depth is of great significance for traffic safety. Taking a three-lane road as an example, this paper jointly considers the characteristics of the target vehicle and its neighboring vehicles to reconstruct the vehicle group situation. Gamma distribution theory is used to identify the vehicle group situation when the target vehicle arrives at the end of the study area. From the perspective of driver-vehicle feature evolution, a reverse reconstruction method for the vehicle group situation in an urban road network is proposed. Results of actual driving, virtual driving, and simulation experiments show that the model established in this paper is reasonable and feasible.

  14. Learning molecular energies using localized graph kernels

    Science.gov (United States)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
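
    A random walk kernel of the kind GRAPE builds on compares two adjacency matrices through walks in their Kronecker product graph. A minimal sketch of the generic geometric random-walk kernel from the graph-kernel literature, not the GRAPE code itself:

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random-walk kernel between graphs with adjacency
    matrices A1 (n1 x n1) and A2 (n2 x n2): sum over all entries of
    (I - lam * A_x)^(-1), where A_x is the Kronecker product graph.
    lam must be small enough for the geometric series to converge."""
    Ax = np.kron(A1, A2)
    n = Ax.shape[0]
    resolvent = np.linalg.inv(np.eye(n) - lam * Ax)
    return resolvent.sum()

# Two small local atomic environments as weighted adjacency matrices.
A1 = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
A2 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(random_walk_kernel(A1, A1), random_walk_kernel(A1, A2))
```

    Because the kernel depends on the adjacency matrices only through walk counts, relabeling the atoms of either environment leaves its value unchanged, which is how permutation symmetry enters for free.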

  15. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. ""Advanced Methods of Solid Oxide Fuel Cell Modeling"" proposes the alternative methodology of generalized artificial neural networks (ANN) solid oxide fuel cell (SOFC) modeling. ""Advanced Methods

  16. Gingival Retraction Methods: A Systematic Review.

    Science.gov (United States)

    Tabassum, Sadia; Adnan, Samira; Khan, Farhan Raza

    2017-12-01

    The aim of this systematic review was to assess gingival retraction methods in terms of the amount of gingival retraction achieved and the changes observed in various clinical parameters: gingival index (GI), plaque index (PI), probing depth (PD), and attachment loss (AL). Data sources included three major databases, PubMed, CINAHL Plus (EBSCO), and Cochrane, along with a hand search. The search used the key terms in different permutations of gingival retraction* AND displacement method* OR technique* OR agents OR material* OR medicament*. The initial search yielded 145 articles, which were narrowed down to 10 articles using strict eligibility criteria: clinical trials or experimental studies on gingival retraction methods, with the amount of tooth structure gained and the assessment of clinical parameters as outcomes, conducted on human permanent teeth only. Gingival retraction was measured in 6/10 studies, whereas the clinical parameters were assessed in 5/10 studies. The total number of teeth assessed in the 10 included studies was 400. The most common method used for gingival retraction was chemomechanical. The results were heterogeneous with regard to the outcome variables. No method appeared to be significantly superior to the others in terms of gingival retraction achieved, and the clinical parameters were not significantly affected by the gingival retraction method. © 2016 by the American College of Prosthodontists.

  17. An ensemble method for predicting subnuclear localizations from primary protein structures.

    Directory of Open Access Journals (Sweden)

    Guo Sheng Han

    BACKGROUND: Predicting protein subnuclear localization is a challenging problem. Some previous works based on non-sequence information, including Gene Ontology annotations and kernel fusion, have their respective limitations. The aim of this work is twofold: to propose a novel individual feature extraction method, and to develop an ensemble method that improves prediction performance using the comprehensive information represented as a high-dimensional feature vector obtained by 11 feature extraction methods. METHODOLOGY/PRINCIPAL FINDINGS: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. In order to speed up the system, an automatic search method for the kernel parameter is used. The prediction performance of the method is evaluated on four datasets: the Lei dataset, a multi-localization dataset, the SNL9 dataset, and a new independent dataset. The overall prediction accuracy for 6 localizations on the Lei dataset is 75.2% and that for 9 localizations on the SNL9 dataset is 72.1% in leave-one-out cross validation; accuracies are 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large and small subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of the classification model are further confirmed by permutation analysis. CONCLUSIONS: Our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method.

  18. Problem of unstable pivots in the incomplete LU-conjugate gradient method

    International Nuclear Information System (INIS)

    Kershaw, D.S.

    1978-01-01

    Incomplete LU and incomplete-Cholesky conjugate gradient methods are becoming widely used in both laser and magnetic fusion research. In my original presentation of these methods, the problem of what to do if a pivot (L_ii U_ii) becomes very small or zero was raised and only partially answered by the suggestion that it be arbitrarily set to some non-zero value. In what follows it will be shown precisely how small the pivot can become before it must be fixed and precisely what value it should be set to in order to minimize the error in LU. Numerical examples will be given to show that not only does this prescription improve incomplete LU-conjugate gradient methods, but exact LU decomposition carried out with this prescription for handling small pivots, followed by a few linear or conjugate gradient iterations, can be much faster than the permutations of rows and columns usually employed to circumvent small pivot problems
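
    The small-pivot fix-up can be illustrated on a dense incomplete Cholesky factorization, where a pivot that falls below a threshold is replaced by a safe value rather than being allowed to vanish. A minimal sketch; the threshold and replacement value here are illustrative, not Kershaw's exact prescription:

```python
import numpy as np

def incomplete_cholesky(A, eps=1e-8):
    """Incomplete Cholesky keeping the sparsity pattern of A,
    with small or negative pivots replaced by a safe value."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        pivot = A[j, j] - np.dot(L[j, :j], L[j, :j])
        if pivot <= eps:             # unstable pivot: fix it up
            pivot = eps              # illustrative replacement value
        L[j, j] = np.sqrt(pivot)
        for i in range(j + 1, n):
            if A[i, j] != 0:         # keep only the pattern of A
                L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L
```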

  19. SELECT NUMERICAL METHODS FOR MODELING THE DYNAMICS SYSTEMS

    Directory of Open Access Journals (Sweden)

    Tetiana D. Panchenko

    2016-07-01

    The article deals with the creation of methodological support for the mathematical modeling of dynamic processes in elements of systems and complexes. Ordinary differential equations are used as the mathematical models; the coefficients of the model equations may be nonlinear functions of the process. The projection-grid method is used as the main tool. Algorithms for iterative methods that take the approximate solution prior to the first iteration into account are described, and adaptive control of the computational process is proposed. An original method for estimating the error of the computed solutions is offered, together with a method for setting the parameters of the adaptive solver so as to achieve a given error level. The proposed method can be used for distributed computing.

  20. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of the business and information technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, the process can be complicated and omissions or miscalculations are very likely. This situation has fostered research into automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods may not be applicable to all enterprise models. This paper presents the compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  1. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for extracting the model parameters, and an efficient extraction flow. We extracted the model parameters for 0.15 µm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90 nm, 65 nm, and 55 nm CMOS). Since the interconnect delay variations due to these processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  2. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  3. It's the parameters, stupid! Moving beyond multi-model and multi-physics approaches to characterize and reduce predictive uncertainty in process-based hydrological models

    Science.gov (United States)

    Clark, Martyn; Samaniego, Luis; Freer, Jim

    2014-05-01

    Multi-model and multi-physics approaches are a popular tool in environmental modelling, with many studies focusing on optimally combining output from multiple model simulations to reduce predictive errors and better characterize predictive uncertainty. However, a careful and systematic analysis of different hydrological models reveals that individual models are simply small permutations of a master modeling template, and inter-model differences are overwhelmed by uncertainty in the choice of the parameter values in the model equations. Furthermore, inter-model differences do not explicitly represent the uncertainty in modeling a given process, leading to many situations where different models provide the wrong results for the same reasons. In other cases, the available morphological data does not support the very fine spatial discretization of the landscape that typifies many modern applications of process-based models. To make the uncertainty characterization problem worse, the uncertain parameter values in process-based models are often fixed (hard-coded), and the models lack the agility necessary to represent the tremendous heterogeneity in natural systems. This presentation summarizes results from a systematic analysis of uncertainty in process-based hydrological models, where we explicitly analyze the myriad of subjective decisions made throughout both the model development and parameter estimation process. Results show that much of the uncertainty is aleatory in nature - given a "complete" representation of dominant hydrologic processes, uncertainty in process parameterizations can be represented using an ensemble of model parameters. Epistemic uncertainty associated with process interactions and scaling behavior is still important, and these uncertainties can be represented using an ensemble of different spatial configurations. Finally, uncertainty in forcing data can be represented using ensemble methods for spatial meteorological analysis. Our systematic

  4. Investigating the performance of directional boundary layer model through staged modeling method

    Science.gov (United States)

    Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee

    2011-04-01

    Generally speaking, the models used in optical proximity effect correction (OPC) can be divided into three parts: the mask part, the optic part, and the resist part. For excellent OPC model quality, each part would have to be described from first principles. However, an OPC model cannot incorporate all of these principles, since it must support full-chip calculation during correction; moreover, the calculation has to be repeated iteratively until the cost function to be minimized converges. The optic part of an OPC model is normally described with the sum of coherent systems (SOCS[1]) method, which allows the aerial image to be calculated very quickly without significant loss of accuracy. As for the resist part, the first principles are too complex to implement in detail, so it is normally expressed in a simplified way, such as approximations of the first principles or linear combinations of factors highly correlated with the chemistry of the resist. The quality of this kind of resist model depends on how well the model is trained by fitting it to empirical data. The most popular way of building the mask function is based on Kirchhoff's thin-mask approximation. This method works well when the feature size on the mask is sufficiently large, but as the line width of semiconductor circuits becomes smaller, it causes significant error due to the mask topography effect. To consider the mask topography effect accurately, rigorous methods of calculating the mask function, such as the finite-difference time-domain (FDTD[2]) method and rigorous coupled-wave analysis (RCWA[3]), have to be used, but these are too time-consuming to be used as part of an OPC model. Many alternatives have been suggested as efficient ways of considering the mask topography effect. Among them, we focus on the boundary layer model (BLM) in this paper. We mainly investigated the way of optimizing the parameters for the

  5. Microstructural changes in ischemic cortical gray matter predicted by a model of diffusion-weighted MRI

    DEFF Research Database (Denmark)

    Vestergaard-Poulsen, Peter; Hansen, Brian; Østergaard, Leif

    2007-01-01

    compartment. A global optimum was found from a wide range of parameter permutations using cluster computing. We also present simulations of cell swelling and changes of exchange rate and intracellular diffusion as possible cellular mechanisms in ischemia. RESULTS: Our model estimates an extracellular volume...... compartments and slow water exchange. Our model reproduces the signal changes observed in ischemia via physiologically credible mechanisms. CONCLUSION: Our modeling suggests that transverse relaxation has a profound influence on the diffusion attenuated MR signal. Our simulations indicate cell swelling...... model to the diffusion-weighted MR signal obtained from cortical gray matter in healthy subjects. Our model includes variable volume fractions, intracellular restriction effects, and exchange between compartments in addition to individual diffusion coefficients and transverse relaxation rates for each...

  6. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. With such informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as unreliable system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate the automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes of business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. An experiment has shown that the method reduces development effort by more than 30%.

  7. Modeling time-to-event (survival) data using classification tree analysis.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.

  8. Testing Multimodal Integration Hypotheses with Application to Schizophrenia Data

    DEFF Research Database (Denmark)

    Axelsen, Martin Christian; Bak, Nikolaj; Hansen, Lars Kai

    2015-01-01

    The aim of the present paper is to propose a method for assessing these inter-modality dependencies. The approach is based on two permutations of an analyzed data set, each exploring different dependencies between and within modalities. The method was tested on the Kaggle MLSP 2014 Schizophrenia Classification Challenge...... data set, which is composed of features from functional magnetic resonance imaging (MRI) and structural MRI. The results support the use of a permutation strategy for testing conditional dependencies between modalities in a multimodal classification problem.
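
    One way to mimic such a permutation strategy is to destroy the subject-wise correspondence between modalities (here, within class, so each modality's own relation to the labels is preserved) and check whether joint classification accuracy degrades. A minimal sketch with hypothetical synthetic features, not the MLSP 2014 data or the authors' exact permutation scheme:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=n)
y = (latent > 0).astype(int)
fmri = latent[:, None] + rng.normal(size=(n, 10))   # modality 1 features
smri = latent[:, None] + rng.normal(size=(n, 10))   # modality 2 features

def joint_accuracy(m1, m2, y):
    return cross_val_score(LogisticRegression(max_iter=1000),
                           np.hstack([m1, m2]), y, cv=5).mean()

observed = joint_accuracy(fmri, smri, y)
# Permute modality 2 within each class: inter-modality dependence is broken
# while each modality's marginal relation to the labels is preserved.
null = []
for _ in range(200):
    idx = np.arange(n)
    for c in (0, 1):
        cls = np.where(y == c)[0]
        idx[cls] = rng.permutation(cls)
    null.append(joint_accuracy(fmri, smri[idx], y))
print(observed, np.mean(null))   # a gap suggests the modalities interact
```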

  9. How Qualitative Methods Can be Used to Inform Model Development.

    Science.gov (United States)

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  10. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in the fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computational fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  11. A Comparison of Surface Acoustic Wave Modeling Methods

    Science.gov (United States)

    Wilson, W. C.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power, and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on a comparison of three methods of modeling SAWs. The three models are the impulse response method, a first-order model, and two second-order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented along with measured data from devices.

  12. Automated matching of corresponding seed images of three simulator radiographs to allow 3D triangulation of implanted seeds

    Science.gov (United States)

    Altschuler, Martin D.; Kassaee, Alireza

    1997-02-01

    To match corresponding seed images in different radiographs so that the 3D seed locations can be triangulated automatically and without ambiguity requires (at least) three radiographs taken from different perspectives, and an algorithm that finds the proper permutations of the seed-image indices. Matching corresponding images in only two radiographs introduces inherent ambiguities which can be resolved only with the use of non-positional information obtained with intensive human effort. Matching images in three or more radiographs is an NP (nondeterministic polynomial time)-complete problem. Although the matching problem is fundamental, current methods for three-radiograph seed-image matching use 'local' (seed-by-seed) methods that may lead to incorrect matchings. We describe a permutation-sampling method which not only gives good 'global' (full-permutation) matches for the NP-complete three-radiograph seed-matching problem, but also determines the reliability of the radiographic data themselves, namely, whether the patient moved in the interval between radiographic perspectives.
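
    For very small seed counts the matching can be solved exhaustively, which makes the structure of the NP-complete problem concrete: score each pair of candidate permutations by the total triangulation residual across the three views. A minimal brute-force sketch, assuming the residual tensor has been precomputed from the radiograph geometry; the authors instead sample permutations to scale to realistic seed counts:

```python
import itertools
import numpy as np

def best_matching(residual):
    """Exhaustively match seed images across three radiographs.
    residual[i, j, k] = triangulation error if image i in view 1,
    image j in view 2, and image k in view 3 are the same seed."""
    n = residual.shape[0]
    best_cost, best = np.inf, None
    for p2 in itertools.permutations(range(n)):      # view-2 assignment
        for p3 in itertools.permutations(range(n)):  # view-3 assignment
            cost = sum(residual[i, p2[i], p3[i]] for i in range(n))
            if cost < best_cost:
                best_cost, best = cost, (p2, p3)
    return best, best_cost  # O((n!)^2): why sampling is needed for large n
```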

  13. Direction of Coupling from Phases of Interacting Oscillators: A Permutation Information Approach

    Science.gov (United States)

    Bahraminasab, A.; Ghasemi, F.; Stefanovska, A.; McClintock, P. V. E.; Kantz, H.

    2008-02-01

    We introduce a directionality index for a time series based on a comparison of neighboring values. It can distinguish unidirectional from bidirectional coupling, as well as reveal and quantify asymmetry in bidirectional coupling. It is tested on a numerical model of coupled van der Pol oscillators, and applied to cardiorespiratory data from healthy subjects. There is no need for preprocessing and fine-tuning the parameters, which makes the method very simple, computationally fast and robust.
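
    The comparison of neighboring values that such indices build on is the ordinal-pattern (permutation) encoding of a time series. A minimal sketch of that standard encoding step and the permutation entropy computed from it; the full directionality index of the paper builds coupling statistics on top of patterns like these:

```python
import numpy as np
from itertools import permutations

def ordinal_patterns(x, order=3):
    """Encode each window of `order` consecutive samples by the
    permutation that sorts it (its ordinal pattern index)."""
    lookup = {p: i for i, p in enumerate(permutations(range(order)))}
    return np.array([lookup[tuple(np.argsort(x[i:i + order]))]
                     for i in range(len(x) - order + 1)])

x = np.sin(np.linspace(0, 10, 200)) \
    + 0.1 * np.random.default_rng(0).normal(size=200)
codes = ordinal_patterns(x, order=3)
# Permutation entropy of the series, a common summary of these codes:
p = np.bincount(codes, minlength=6) / codes.size
print(-np.sum(p[p > 0] * np.log(p[p > 0])))
```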

  14. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method that bi-converts between CAD models and primitive solids. • The method improves on a conversion method between CAD models and half-spaces. • The method was tested on the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate and prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was therefore developed, which can bi-convert between CAD models and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of the method were demonstrated.

  15. Modelling a coal subcrop using the impedance method

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.A.; Thiel, D.V.; O' Keefe, S.G. [Griffith University, Nathan, Qld. (Australia). School of Microelectronic Engineering

    2000-07-01

    An impedance model was generated for two coal subcrops in the Biloela and Middlemount areas (Queensland, Australia). The model results were compared with actual surface impedance data. It was concluded that the impedance method satisfactorily modelled the surface response of the coal subcrops in two dimensions. There were some discrepancies between the field data and the model results, due to factors such as the method of discretization of the solution space in the impedance model and the lack of consideration of the three-dimensional nature of the coal outcrops. 10 refs., 8 figs.

  16. Amyloid oligomer structure characterization from simulations: A general method

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Li, Mai Suan [Institute of Physics, Polish Academy of Sciences, Al. Lotnikow 32/46, 02-668 Warsaw (Poland); Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Institut Universitaire de France, 103 Bvd Saint-Germain, 75005 Paris (France)

    2014-03-07

    Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. In this way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ9−40, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  17. User manual for Blossom statistical package for R

    Science.gov (United States)

    Talbert, Marian; Cade, Brian S.

    2005-01-01

    Blossom is an R package with functions for making statistical comparisons with distance-function based permutation tests developed by P.W. Mielke, Jr. and colleagues at Colorado State University (Mielke and Berry, 2001) and for testing parameters estimated in linear models with permutation procedures developed by B. S. Cade and colleagues at the Fort Collins Science Center, U.S. Geological Survey. This manual is intended to provide identical documentation of the statistical methods and interpretations as the manual by Cade and Richards (2005) does for the original Fortran program, but with changes made with respect to command inputs and outputs to reflect the new implementation as a package for R (R Development Core Team, 2012). This implementation in R has allowed for numerous improvements not supported by the Cade and Richards (2005) Fortran implementation, including use of categorical predictor variables in most routines.

  18. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Science.gov (United States)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, and having a good trip production model is then essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be able to represent the population characteristics and must produce an acceptable error at a given confidence level. It seems that these principles are not yet well understood and applied in trip production modelling. Therefore, investigating trip production modelling practice in Indonesia and trying to formulate a better modelling method for ensuring model quality is necessary. This research result is presented as follows. Statistics provides a method for calculating the span of predicted values at a given confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that the sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. These observations lead to three basic ideas for ensuring good model quality, i.e., reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must use random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
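
    The confidence interval of a predicted value proposed as a quality measure has a closed form for ordinary least squares. A minimal sketch computing it directly for the simple one-predictor case; the function name is illustrative:

```python
import numpy as np
from scipy import stats

def confidence_interval_of_prediction(x, y, x_new, alpha=0.05):
    """CI for the mean predicted value of a simple linear regression."""
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                 # residual variance
    XtX_inv = np.linalg.inv(X.T @ X)
    x0 = np.array([1.0, x_new])
    y_hat = x0 @ beta
    se = np.sqrt(s2 * (x0 @ XtX_inv @ x0))       # s.e. of the mean prediction
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return y_hat - t * se, y_hat + t * se
```

    A wide interval exposes exactly the failure mode described above: a model whose R2 looks excellent on a small sample but whose predictions are too uncertain to use.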

  19. Modeling shallow water flows using the discontinuous Galerkin method

    CERN Document Server

    Khan, Abdul A

    2014-01-01

    Replacing the Traditional Physical Model Approach Computational models offer promise in improving the modeling of shallow water flows. As new techniques are considered, the process continues to change and evolve. Modeling Shallow Water Flows Using the Discontinuous Galerkin Method examines a technique that focuses on hyperbolic conservation laws and includes one-dimensional and two-dimensional shallow water flows and pollutant transports. Combines the Advantages of Finite Volume and Finite Element Methods This book explores the discontinuous Galerkin (DG) method, also known as the discontinuous finite element method, in depth. It introduces the DG method and its application to shallow water flows, as well as background information for implementing and applying this method for natural rivers. It considers dam-break problems, shock wave problems, and flows in different regimes (subcritical, supercritical, and transcritical). Readily Adaptable to the Real World While the DG method has been widely used in the fie...

  20. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book covers: Chapter 1, finite element idealization (introduction, summary of the finite element method, equilibrium and compatibility in the finite element solution, degrees of freedom, symmetry and antisymmetry, modeling guidelines, local analysis, example, references); Chapter 2, static analysis (structural geometry, finite element models, analysis procedure, modeling guidelines, references); and Chapter 3, dynamic analysis (models for dynamic analysis, dynamic analysis procedures, modeling guidelines).

  1. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA s key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, from which the method based on Bayesian networks is most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for the systems when it is hard, or even impossible, to find the probability functions of the system. The method starts by a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).

  2. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear, which is the appropriate method to choose. To this end, three approaches to estimation in the theta...... logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  3. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated from the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite-bus system and compared with a method in which both states and parameters are estimated by an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
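
    The alternating structure can be illustrated with a hedged, much-simplified sketch: a scalar linear-Gaussian model stands in for the synchronous machine, a Kalman filter plays the role of the EKF in the E-step, and the M-step refits the dynamic parameter by least squares on the filtered states (a full EM would use smoothed states and cross-covariances). All values are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        a_true, q, r = 0.9, 0.1, 0.5           # dynamics, process/measurement variances
        x, ys = 0.0, []
        for _ in range(500):                   # simulate stand-in "PMU" measurements
            x = a_true * x + rng.normal(0, np.sqrt(q))
            ys.append(x + rng.normal(0, np.sqrt(r)))

        def filtered_means(a):
            """E-step stand-in: Kalman-filtered state means for parameter a."""
            m, P, out = 0.0, 1.0, []
            for y in ys:
                m, P = a * m, a * a * P + q            # predict
                k = P / (P + r)                        # Kalman gain
                m, P = m + k * (y - m), (1 - k) * P    # update
                out.append(m)
            return np.array(out)

        a = 0.5                                        # poor initial guess
        for _ in range(20):                            # EM iterations
            m = filtered_means(a)                      # E-step
            a = (m[1:] @ m[:-1]) / (m[:-1] @ m[:-1])   # M-step (simplified MLE)
        print("calibrated a:", round(float(a), 3), "true a:", a_true)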

  4. Systems and methods for modeling and analyzing networks

    Science.gov (United States)

    Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W

    2013-10-29

    The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.

  5. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

    Full Text Available Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate-model-based model updating method that takes advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to obtain the model updating results. The proposed method shows good accuracy and robustness, as verified by a numerical simulation of a cantilever and experimental test data from a laboratory three-story structure.
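
    A hedged sketch of the surrogate loop: an invented one-dimensional objective stands in for the FDAC-based discrepancy, a Kriging (Gaussian-process) model is refitted as samples accrue, and a lower-confidence-bound infill rule stands in for EGO's expected-improvement criterion. Nothing here reproduces the paper's structures or data.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def objective(theta):                      # stand-in for the FRF/FDAC discrepancy
            return (theta - 0.63) ** 2 + 0.05 * np.sin(25 * theta)

        X = np.linspace(0.0, 1.0, 8)[:, None]      # initial design over the parameter
        y = objective(X).ravel()

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)
        for _ in range(10):                        # sequential surrogate refinement
            gp.fit(X, y)
            cand = np.linspace(0.0, 1.0, 1001)[:, None]
            mu, sd = gp.predict(cand, return_std=True)
            nxt = cand[np.argmin(mu - 1.96 * sd)]  # lower-confidence-bound infill point
            X = np.vstack([X, nxt[None, :]])
            y = np.append(y, objective(nxt[0]))

        print("best updating parameter found:", round(float(X[np.argmin(y), 0]), 3))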

  6. Cache memory modelling method and system

    OpenAIRE

    Posadas Cobo, Héctor; Villar Bonet, Eugenio; Díaz Suárez, Luis

    2011-01-01

    The invention relates to a method for modelling a data cache memory of a destination processor, in order to simulate the behaviour of said data cache memory during the execution of a software code on a platform comprising said destination processor. According to the invention, the simulation is performed on a native platform having a processor different from the destination processor comprising the aforementioned data cache memory to be modelled, said modelling being performed by means of the...

  7. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented, and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  8. On the trajectories of CRL...LR...R orbits, their period-doubling cascades and saddle-node bifurcation cascades

    International Nuclear Information System (INIS)

    Cerrada, Lucia; San Martin, Jesus

    2011-01-01

    In this Letter, it is shown that from a two-region partition of the phase space of a one-dimensional dynamical system, a p-region partition can be obtained for the CRL...LR...R orbits. That is, permutations associated with symbolic sequences are obtained. As a consequence, the trajectory in phase space is directly deduced from the permutation. From this permutation, other permutations associated with period-doubling and saddle-node bifurcation cascades are derived, as well as other composite permutations. Research highlights: symbolic sequences are the usual topological approach to dynamical systems; permutations bear more physical information than symbolic sequences; period-doubling cascade permutations associated with the original sequences are obtained; saddle-node cascade permutations associated with the original sequences are obtained; composite permutations are derived.
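
    To make the orbit-permutation correspondence concrete, here is an illustrative sketch (not taken from the Letter): for a periodic orbit of the logistic map, ranking the orbit points spatially and recording which rank follows which under iteration yields a permutation from which the trajectory can be read off. The parameter value and period are chosen to land in the well-known period-3 window.

        import numpy as np

        def orbit_permutation(r, x0, period, burn=2000):
            x = x0
            for _ in range(burn):                  # settle onto the attractor
                x = r * x * (1 - x)
            pts = []
            for _ in range(period):                # record one periodic cycle
                pts.append(x)
                x = r * x * (1 - x)
            rank = np.argsort(np.argsort(pts))     # spatial rank of each visit
            # permutation: which spatial rank is visited right after rank k
            return {int(rank[k]): int(rank[(k + 1) % period]) for k in range(period)}

        # r = 3.83 lies in the period-3 window of the logistic map
        print(orbit_permutation(3.83, 0.5, 3))     # e.g. {0: 1, 1: 2, 2: 0}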

  9. 3D Face modeling using the multi-deformable method.

    Science.gov (United States)

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method not only shows highly accurate 3D face shape results when compared with the ground truth, but is also robust to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  10. Design of nuclear power generation plants adopting model engineering method

    International Nuclear Information System (INIS)

    Waki, Masato

    1983-01-01

    The utilization of model engineering as a design method began about ten years ago in nuclear power generation plants. With this method, the result of design can be confirmed three-dimensionally before actual production, and it is a quick and reliable way to meet various design needs promptly. The adoption of models aims mainly at improving the quality of design, since high safety is required of nuclear power plants in spite of their complex structure. The layout of nuclear power plants and piping design require model engineering in order to arrange an enormous quantity of items rationally in a limited period. As methods of model engineering, there are the use of check models and of design models; recently, the latter has mainly been adopted. The procedure of model manufacturing and engineering is explained. After model engineering has been completed, the model information must be expressed in drawings, and the automation of this process has been attempted by various methods. The computer processing of design is in progress, and its role is explained (CAD system). (Kako, I.)

  11. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard untimed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; they both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both time and memory efficiency.
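
    The leap idea can be illustrated with a hedged sketch (an invented scenario, not the paper's model-checker encoding): instead of a Tick that advances time one unit per step, the clock jumps directly to the earliest pending deadline, so the number of Tick transitions scales with the number of events rather than with the length of the time period.

        import heapq

        def run(timers, horizon):
            """timers: (deadline, label) pairs; one Tick leaps to the next deadline."""
            heapq.heapify(timers)
            now, ticks = 0, 0
            while timers and now <= horizon:
                deadline, label = heapq.heappop(timers)
                now, ticks = deadline, ticks + 1   # leap several time units at once
                print("t=%d: fire %s" % (now, label))
            return ticks

        # Three events spread over a long period: 3 Tick steps instead of 900.
        print("Tick steps:", run([(300, "poll"), (600, "poll"), (900, "deadline")], 1000))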

  12. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Fermion systems in discrete space-time

    International Nuclear Information System (INIS)

    Finster, Felix

    2007-01-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure

  14. Fermion systems in discrete space-time

    Energy Technology Data Exchange (ETDEWEB)

    Finster, Felix [NWF I - Mathematik, Universitaet Regensburg, 93040 Regensburg (Germany)

    2007-05-15

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  15. Fermion Systems in Discrete Space-Time

    OpenAIRE

    Finster, Felix

    2006-01-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  16. Fermion systems in discrete space-time

    Science.gov (United States)

    Finster, Felix

    2007-05-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  17. Modelling viscoacoustic wave propagation with the lattice Boltzmann method.

    Science.gov (United States)

    Xia, Muming; Wang, Shucheng; Zhou, Hui; Shan, Xiaowen; Chen, Hanming; Li, Qingqing; Zhang, Qingchen

    2017-08-31

    In this paper, the lattice Boltzmann method (LBM) is employed to simulate wave propagation in viscous media. LBM is a microscopic method that models waves by tracking the evolving states of a large number of discrete particles. By choosing different relaxation times in LBM experiments and using the spectrum ratio method, we can reveal the relationship between the quality factor Q and the parameter τ in LBM. A two-dimensional (2D) homogeneous model and a two-layered model are tested in the numerical experiments, and the LBM results are compared against the reference solution of the viscoacoustic equations based on the Kelvin-Voigt model calculated by the finite difference method (FDM). The wavefields and amplitude spectra obtained by LBM coincide with those from FDM, which demonstrates the capability of the LBM with one relaxation time. The new scheme is relatively simple and efficient to implement compared with traditional lattice methods. In addition, through extensive experiments, we find that the relaxation time of LBM has a quantitative relationship with Q. Such a novel scheme offers an alternative forward modelling kernel for seismic inversion and a new model to describe the underground media.
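
    For a flavor of the stream-collide cycle and the relaxation time τ, here is a minimal one-dimensional D1Q3 BGK sketch of an acoustic pulse splitting into two traveling waves. It is an illustrative toy, not the paper's 2D viscoacoustic scheme; the lattice size, pulse, and τ are invented.

        import numpy as np

        N, steps, tau = 400, 300, 0.6              # lattice size, time steps, relaxation time
        e = np.array([-1, 0, 1])                   # D1Q3 discrete velocities
        w = np.array([1/6, 2/3, 1/6])              # weights; lattice sound speed cs^2 = 1/3
        cs2 = 1/3

        x = np.arange(N)
        rho = 1.0 + 0.01 * np.exp(-((x - N // 2) ** 2) / 50.0)   # small density pulse
        f = w[:, None] * rho[None, :]              # start from rest equilibrium

        for _ in range(steps):
            rho = f.sum(axis=0)
            u = (e[:, None] * f).sum(axis=0) / rho
            eu = e[:, None] * u
            feq = w[:, None] * rho * (1 + eu / cs2 + eu ** 2 / (2 * cs2 ** 2)
                                      - u ** 2 / (2 * cs2))
            f += (feq - f) / tau                   # BGK collision with relaxation time tau
            for i, ei in enumerate(e):             # streaming, periodic boundaries
                f[i] = np.roll(f[i], ei)

        rho = f.sum(axis=0)
        print("strongest acoustic peak near site:", int(np.argmax(np.abs(rho - 1.0))))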

  18. New method dynamically models hydrocarbon fractionation

    Energy Technology Data Exchange (ETDEWEB)

    Kesler, M.G.; Weissbrod, J.M.; Sheth, B.V. [Kesler Engineering, East Brunswick, NJ (United States)

    1995-10-01

    A new method for calculating distillation column dynamics can be used to model time-dependent effects of independent disturbances for a range of hydrocarbon fractionation. It can model crude atmospheric and vacuum columns, with relatively few equilibrium stages and a large number of components, to C3 splitters, with few components and up to 300 equilibrium stages. Simulation results are useful for operations analysis, process-control applications and closed-loop control in petroleum, petrochemical and gas processing plants. The method is based on an implicit approach, where the time-dependent variations of inventory, temperatures, liquid and vapor flows and compositions are superimposed at each time step on the steady-state solution. Newton-Raphson (N-R) techniques are then used to simultaneously solve the resulting finite-difference equations of material, equilibrium and enthalpy balances that characterize distillation dynamics. The important innovation is component aggregation and tray aggregation to contract the equations without compromising accuracy. This contraction increases the stability of the N-R calculations. It also significantly increases calculational speed, which is particularly important in dynamic simulations. This method provides a sound basis for closed-loop, supervisory control of distillation, directly or via multivariable controllers, based on a rigorous, phenomenological column model.

  19. Investigating continuous time open pit dynamics

    OpenAIRE

    Askari-Nasab, H.; Frimpong, S.; Szymanski, J.

    2008-01-01

    Current mine production planning, scheduling, and allocation of resources are based on mathematical programming models. In practice, the optimized solution cannot be attained without examining all possible combinations and permutations of the extraction sequence. Operations research methods have limited applications in large-scale surface mining operations because the number of variables becomes too large. The primary objective of this study is to develop and implement a hybrid simulation fra...

  20. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier when solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity. PMID:24883382

  1. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier when solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity.
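
    The paper's GMC algorithm is not reproduced here, but the general ensemble idea it builds on can be sketched in a hedged way: combining heterogeneous classifiers by majority vote so that their individual bias/variance weaknesses partially cancel. A minimal scikit-learn sketch on a public dataset:

        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = load_breast_cancer(return_X_y=True)
        members = [
            ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("nb", GaussianNB()),
        ]
        # Compare each member against the majority-vote ensemble.
        for name, clf in members + [("vote", VotingClassifier(members))]:
            score = cross_val_score(clf, X, y, cv=5).mean()
            print(name, round(float(score), 3))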

  2. Diffuse interface methods for multiphase flow modeling

    International Nuclear Information System (INIS)

    Jamet, D.

    2004-01-01

    Full text of publication follows: Nuclear reactor safety programs need a better description of some stages of identified incident or accident scenarios. For some of them, such as the reflooding of the core or the dryout of fuel rods, the heat, momentum and mass transfers taking place at the scale of droplets or bubbles are among the key physical phenomena for which a better description is needed. Experiments are difficult to perform at these very small scales, and direct numerical simulation is viewed as a promising way to give new insight into these complex two-phase flows. This type of simulation requires numerical methods that are accurate, efficient and easy to run in three space dimensions and on parallel computers. Despite many years of development, direct numerical simulation of two-phase flows is still very challenging, mostly because it requires solving moving boundary problems. To avoid this major difficulty, a new class of numerical methods is arising, called diffuse interface methods. These methods are based on physical theories dating back to van der Waals and are mostly used in materials science. In these methods, interfaces separating two phases are modeled as continuous transition zones instead of surfaces of discontinuity. Since all the physical variables encounter possibly strong but nevertheless always continuous variations across the interfacial zones, these methods virtually eliminate the difficult moving boundary problem. We show that these methods lead to a single-phase-like system of equations, which makes them easier to code in 3D and to parallelize compared with more classical methods. The first method presented is dedicated to liquid-vapor flows with phase change. It is based on van der Waals' theory of capillarity. This method has been used to study nucleate boiling of a pure fluid and of dilute binary mixtures. We discuss the importance of the choice and the meaning of the order parameter, i.e. a scalar which discriminates one

  3. Two updating methods for dissipative models with non symmetric matrices

    International Nuclear Information System (INIS)

    Billet, L.; Moine, P.; Aubry, D.

    1997-01-01

    In this paper, the feasibility of extending two updating methods to rotating machinery models is considered; the particularity of rotating machinery models is that they use non-symmetric stiffness and damping matrices. It is shown that the two methods described here, the inverse eigensensitivity method and the error in constitutive relation method, can be adapted to such models given some modifications. As far as the inverse sensitivity method is concerned, an error function based on the difference between calculated and measured right-hand eigenmode shapes and calculated and measured eigenvalues is used. Concerning the error in constitutive relation method, the equation which defines the error has to be modified because the stiffness matrix is not positive definite. The advantage of this modification is that, in some cases, it is possible to focus the updating process on some specific model parameters. Both methods were validated on a simple test model consisting of a two-bearing and disc rotor system. (author)

  4. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service...... oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...

  5. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, with multiple MCMC runs at different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random-walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated from different conceptualizations of a confining layer. The groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric-mean method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
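
    A hedged sketch of the thermodynamic (power-posterior) idea on a toy conjugate model where the exact marginal likelihood is known: the expected log-likelihood is averaged under a ladder of heated posteriors and integrated over the heating coefficient. The heated posteriors are sampled in closed form here, standing in for the MCMC chains one would run in practice; the model and ladder are invented for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        y = rng.normal(0.5, 1.0, 50)                 # data; likelihood N(theta, 1)
        n, ybar = len(y), y.mean()                   # prior: theta ~ N(0, 1)

        def loglik(theta):
            return stats.norm.logpdf(y[:, None], theta[None, :], 1.0).sum(axis=0)

        betas = np.linspace(0.0, 1.0, 21) ** 3       # heating coefficients, dense near 0
        means = []
        for b in betas:
            s2 = 1.0 / (1.0 + b * n)                 # heated posterior N(mu, s2), closed
            mu = b * n * ybar * s2                   # form for this conjugate toy model
            theta = rng.normal(mu, np.sqrt(s2), 5000)
            means.append(loglik(theta).mean())       # E_beta[log likelihood]

        means = np.array(means)
        log_z = np.sum(np.diff(betas) * (means[1:] + means[:-1]) / 2)  # trapezoid rule
        exact = stats.multivariate_normal.logpdf(y, np.zeros(n),
                                                 np.eye(n) + np.ones((n, n)))
        print("thermodynamic estimate:", round(float(log_z), 3),
              "exact:", round(float(exact), 3))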

  6. Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science

    Science.gov (United States)

    Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)

    2001-01-01

    Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations, including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented, with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as the most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are

  7. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  8. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.

  9. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally bivariate blending method was......, on the other hand, lighter than the single-step method....

  10. Optimizing methods for linking cinematic features to fMRI data.

    Science.gov (United States)

    Kauttonen, Janne; Hlushchuk, Yevhen; Tikka, Pia

    2015-04-15

    One of the challenges of naturalistic neurosciences using movie-viewing experiments is how to interpret observed brain activations in relation to the multiplicity of time-locked stimulus features. As previous studies have shown less inter-subject synchronization across viewers of random video footage than of story-driven films, new methods need to be developed for the analysis of less story-driven contents. To optimize the linkage between our fMRI data, collected during viewing of the deliberately non-narrative silent film 'At Land' by Maya Deren (1944), and its annotated content, we combined the method of elastic-net regularization with model-driven linear regression and the well-established data-driven independent component analysis (ICA) and inter-subject correlation (ISC) methods. In the linear regression analysis, both IC and region-of-interest (ROI) time-series were fitted with the time-series of a total of 36 binary-valued and one real-valued tactile annotation of film features. Elastic-net regularization and cross-validation were applied in the ordinary least-squares linear regression in order to avoid over-fitting due to the multicollinearity of regressors; the results were compared against both partial least-squares (PLS) regression and un-regularized full-model regression. A non-parametric permutation testing scheme was applied to evaluate the statistical significance of the regression. We found statistically significant correlation between the annotation model and 9 out of 40 ICs. The regression analysis was also repeated for a large set of cubic ROIs covering the grey matter. Both IC- and ROI-based regression analyses revealed activations in parietal and occipital regions, with additional smaller clusters in the frontal lobe. Furthermore, we found elastic-net-based regression more sensitive than PLS and un-regularized regression, since it detected a larger number of significant ICs and ROIs. Along with the ISC ranking methods, our regression analysis proved
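
    A hedged sketch of the core regression step: a regularized regression of one component/ROI time series on many collinear binary annotation regressors, with the penalty chosen by cross-validation. The data below are synthetic placeholders, not the film annotations or fMRI time series.

        import numpy as np
        from sklearn.linear_model import ElasticNetCV

        rng = np.random.default_rng(0)
        T, k = 300, 37                          # time points, annotation regressors
        X = rng.binomial(1, 0.3, (T, k)).astype(float)
        X[:, 1] = X[:, 0]                       # deliberate multicollinearity
        beta = np.zeros(k)
        beta[[0, 5, 9]] = [1.0, -0.8, 0.5]      # a few truly linked annotations
        y = X @ beta + rng.normal(0.0, 1.0, T)  # stand-in IC/ROI time series

        model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)
        print("cross-validated alpha:", round(float(model.alpha_), 4))
        print("nonzero regressors:", np.flatnonzero(model.coef_))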

  11. Network modelling methods for FMRI.

    Science.gov (United States)

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
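
    One of the approaches the paper finds successful, partial correlation, can be sketched via the precision (inverse covariance) matrix of the node time series. The toy network below is invented: a direct 0-1 and 1-2 link, but only an indirect 0-2 association, which partial correlation should suppress while raw correlation does not.

        import numpy as np

        rng = np.random.default_rng(0)
        T = 600
        ts = rng.normal(size=(T, 4))           # four node time series
        ts[:, 1] += 0.8 * ts[:, 0]             # direct link 0-1
        ts[:, 2] += 0.8 * ts[:, 1]             # direct link 1-2 (0-2 only indirect)

        prec = np.linalg.inv(np.cov(ts.T))     # precision matrix
        d = np.sqrt(np.diag(prec))
        pcorr = -prec / np.outer(d, d)         # partial correlation matrix
        np.fill_diagonal(pcorr, 1.0)
        print(np.round(pcorr, 2))              # the (0, 2) entry is near zero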

  12. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We show comparison of four different numerical methods for simulating Photonic-Crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR...... to the effective index method. The simulation results elucidate the strength and weaknesses of the analyzed methods; and outline the limits of applicability of the different models....

  13. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi:10.1016/j.jhydrol.2014.05.027.

  14. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied in different disciplines. In climatology, for example, it is associated with the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on the Hidden Markov Model (HMM) and the Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial conditions, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Being aware that this last issue is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) when its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performance, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.

  15. Using 3-color chromosome painting to decide between chromosome aberration models

    International Nuclear Information System (INIS)

    Lucas, J.N.; Sachs, R.K.

    1993-01-01

    Ionizing radiation produces chromosome aberrations when DNA double strand breaks (DSB) interact pairwise. For more than 30 years there have been two main, competing theories of such binary DSB interactions. The classical theory asserts that an unrepaired DSB makes two ends which separate, with each end subsequently able to join any similar (non-telomeric) end. The exchange theory asserts that the two DSB ends remain associated until repair or a reciprocal chromosome exchange involving a second DSB occurs. The authors conducted an experiment to test these models, using 3-color chromosome painting. After in vitro irradiation of resting human lymphocytes, they observed cells with three-color triplets at first metaphase: three derivative chromosomes having permuted colors, as if three broken chromosomes had played musical chairs. On the exchange model in its standard form such 3-color triplets cannot occur. On the classical model the expected frequency can be calculated. They report data and computer calculations which exclude the exchange model and favor the classical model

  16. Modelling methods for milk intake measurements

    International Nuclear Information System (INIS)

    Coward, W.A.

    1999-01-01

    One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles that are involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM, and how it is used

  17. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using information on the underlying distribution, the Sequential statistical modeling (SSM) approach, and the Kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used to demonstrate the performance of the obtained statistical models, which use a pre-determined number of experimental data, in predicting the probability of failure for a target fatigue life.
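
    A hedged sketch of an area-metric convergence check under invented assumptions (a synthetic data stream, a uniform evaluation grid, and an arbitrary tolerance): acquisition stops once the area between the empirical CDFs before and after a new batch of data falls below the tolerance.

        import numpy as np

        rng = np.random.default_rng(0)

        def ecdf(sample, grid):
            """Empirical CDF of sample evaluated on a common grid."""
            return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

        grid = np.linspace(300.0, 900.0, 2001)     # e.g. a fatigue-strength range
        dx = grid[1] - grid[0]
        tol, batch = 1.0, 10                       # convergence tolerance, batch size

        data = rng.normal(600.0, 50.0, batch)      # stand-in experimental stream
        while True:
            prev = ecdf(data, grid)
            data = np.concatenate([data, rng.normal(600.0, 50.0, batch)])
            area = np.sum(np.abs(ecdf(data, grid) - prev)) * dx  # area between CDFs
            if area < tol:
                break
        print("number of data points deemed sufficient:", data.size)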

  18. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimating ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches, simulation scenarios were developed by combining the estimated spatial forest age maps with the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant-physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  19. A meta-heuristic method for solving scheduling problem: crow search algorithm

    Science.gov (United States)

    Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi

    2018-04-01

    Scheduling is one of the most important processes in industry, in both manufacturing and services. Scheduling is the process of assigning resources, such as machines or people, to perform operations on tasks or jobs. Selecting the optimum sequence of jobs among all permutations is the essential issue in scheduling research, since the optimum sequence is the optimum solution of the scheduling problem. The scheduling problem becomes NP-hard once the number of jobs in the sequence grows beyond what exact algorithms can process. In order to obtain optimum results, a method is needed that is capable of solving complex scheduling problems in an acceptable time. Meta-heuristics are the methods usually used to solve scheduling problems. The recently published method called the Crow Search Algorithm (CSA) is adopted in this research to solve the scheduling problem. CSA is an evolutionary meta-heuristic method based on the flocking behavior of crows. The results of CSA on the scheduling problem are compared with those of other algorithms. From the comparison, it is found that CSA performs better than the other algorithms in terms of solution quality and computation time.
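
    A hedged sketch of CSA adapted to a permutation problem via random keys (continuous positions decoded to job orders by argsort). The update rule follows the published CSA form with awareness probability AP and flight length fl, but the flowshop makespan objective, the random-key encoding, and all parameter values are our illustrative choices, not necessarily the paper's setup.

        import numpy as np

        rng = np.random.default_rng(0)
        proc = rng.uniform(1.0, 10.0, (5, 8))      # processing times: machines x jobs

        def makespan(order):
            c = np.zeros(proc.shape[0])            # completion time per machine
            for j in order:                        # permutation flowshop recursion
                c[0] += proc[0, j]
                for m in range(1, proc.shape[0]):
                    c[m] = max(c[m - 1], c[m]) + proc[m, j]
            return c[-1]

        n_crows, dim, iters, AP, fl = 20, proc.shape[1], 300, 0.1, 2.0
        pos = rng.uniform(0, 1, (n_crows, dim))    # random-key positions
        mem = pos.copy()                           # each crow's remembered best
        fit = np.array([makespan(np.argsort(p)) for p in mem])

        for _ in range(iters):
            for i in range(n_crows):
                j = rng.integers(n_crows)          # crow i follows a random crow j
                if rng.random() >= AP:             # j unaware: move toward its cache
                    new = pos[i] + fl * rng.random() * (mem[j] - pos[i])
                else:                              # j aware: fly to a random spot
                    new = rng.uniform(0, 1, dim)
                pos[i] = new
                f = makespan(np.argsort(new))
                if f < fit[i]:                     # update memory on improvement
                    mem[i], fit[i] = new, f

        best = np.argsort(mem[np.argmin(fit)])
        print("best job order:", best, "makespan:", round(float(fit.min()), 2))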

  20. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    Science.gov (United States)

    Wilson, William; Atkinson, Gary

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first order model), and two second order matrix methods; the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.

  1. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work...... The use of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.

  2. Prospective Mathematics Teachers' Opinions about Mathematical Modeling Method and Applicability of This Method

    Science.gov (United States)

    Akgün, Levent

    2015-01-01

    The aim of this study is to identify prospective secondary mathematics teachers' opinions about the mathematical modeling method and the applicability of this method in high schools. The case study design, which is among the qualitative research methods, was used in the study. The study was conducted with six prospective secondary mathematics…

  3. Down in the dumps - cheerfully. [Methods for the recovery of uranium and gold from tailings

    Energy Technology Data Exchange (ETDEWEB)

    Ruhmer, W.T. (Council for Mineral Technology, Randburg (South Africa))

    1984-07-01

    Scavenging gold and uranium from sand dumps and slimes dams is a highly profitable operation. In a few pages, the author summarises some of the permutations and combinations of the various processes employed by South African concerns to achieve economic results.

  4. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...

  5. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  6. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first

  7. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error of each fit was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
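
    A hedged sketch of the two fits the abstract singles out, two-term Gaussian and two-term sine series, using scipy's curve_fit on synthetic daylight irradiance standing in for the UTP measurements; the functional forms follow the usual two-term series, and all numbers and starting values are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2(t, a1, b1, c1, a2, b2, c2):     # two-term Gaussian series
            return (a1 * np.exp(-((t - b1) / c1) ** 2)
                    + a2 * np.exp(-((t - b2) / c2) ** 2))

        def sine2(t, a1, b1, c1, a2, b2, c2):      # two-term sine series
            return a1 * np.sin(b1 * t + c1) + a2 * np.sin(b2 * t + c2)

        t = np.linspace(7.0, 19.0, 60)             # daylight hours
        rng = np.random.default_rng(0)
        y = 900.0 * np.exp(-((t - 13.0) / 3.0) ** 2) + rng.normal(0.0, 30.0, t.size)

        for name, f, p0 in [("gauss2", gauss2, (800, 12, 3, 100, 15, 3)),
                            ("sine2", sine2, (800, 0.3, -2, 100, 0.6, 0))]:
            try:
                p, _ = curve_fit(f, t, y, p0=p0, maxfev=20000)
            except RuntimeError:                   # fit failed to converge
                print(name, "did not converge")
                continue
            rmse = np.sqrt(np.mean((f(t, *p) - y) ** 2))
            print(name, "RMSE:", round(float(rmse), 1))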

  8. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error of each fit was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  9. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R2. The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results than the other fitting methods.

  10. Development of four-component synthesis of tetra- and pentasubstituted polyfunctional dihydropyrroles: free permutation and combination of aromatic and aliphatic amines.

    Science.gov (United States)

    Lv, Longyun; Zheng, Sichao; Cai, Xiaotie; Chen, Zhipeng; Zhu, Qiuhua; Liu, Shuwen

    2013-04-08

    We previously reported novel, efficient proton/heat-promoted four-component reactions (4CRs) of but-2-ynedioates, two same/different primary amines, and aldehydes for the synthesis of tetra- and pentasubstituted polyfunctional dihydropyrroles. If aromatic and aliphatic amines were used as reagents, four different series of products should be obtained via the permutation and combination of aromatic and aliphatic primary amines. However, only three/two rather than four different series of tetra-/pentasubstituted dihydropyrroles could be prepared via the proton/heat-promoted 4CRs. Herein, Cu(OAc)2·H2O, a Lewis acid stable in air and water, was found to be an efficient catalyst for the 4CR synthesis of all four different series of tetra-/pentasubstituted dihydropyrroles. The copper-catalyzed 4CR could produce the target products at room temperature in good to excellent yields. Interestingly, benzaldehyde, in addition to being a useful reactant for the synthesis of pentasubstituted dihydropyrroles, was found to be an excellent additive for preventing the oxidation of aromatic amines by copper(II) and ensuring the smooth conduct of the 4CRs for the synthesis of tetrasubstituted dihydropyrroles with aryl R(3). In addition, salicylic acid was found to be needed to increase the activities and yields of the copper-catalyzed 4CRs for the synthesis of pentasubstituted dihydropyrroles. On the basis of the experimental results, an enamination/amidation/intramolecular cyclization mechanism was proposed, and amidation is expected to be the rate-limiting step in the copper-catalyzed 4CRs.

  11. Annular dispersed flow analysis model by Lagrangian method and liquid film cell method

    International Nuclear Information System (INIS)

    Matsuura, K.; Kuchinishi, M.; Kataoka, I.; Serizawa, A.

    2003-01-01

    A new annular dispersed flow analysis model was developed. In this model, both droplet behavior and liquid film behavior are analyzed simultaneously. Droplet behavior in turbulent flow was analyzed by the Lagrangian method with a refined stochastic model, while liquid film behavior was simulated using a moving rough wall boundary condition and a liquid film cell model, which was used to estimate the liquid film flow rate. The height of the moving rough wall was estimated from a disturbance wave height correlation. In each liquid film cell, the liquid film flow rate was calculated by considering the droplet deposition and entrainment flow rates: the deposition flow rate was calculated by the Lagrangian method and the entrainment flow rate by an entrainment correlation. For verification of the moving rough wall model, turbulent flow analysis results under annular flow conditions were compared with experimental data, and the agreement was fairly good. Furthermore, annular dispersed flow experiments were analyzed in order to verify the droplet behavior model and the liquid film cell model. The experimental radial distributions of droplet mass flux were compared with the analysis results. The agreement was good under low liquid flow rate conditions and poor under high liquid flow rate conditions, but after modifying the entrainment rate correlation the agreement became good even at high liquid flow rates. This indicates that the basic analysis method for droplet and liquid film behavior is sound. In future work, verification calculations should be carried out under different experimental conditions, and the entrainment rate correlation should be refined.
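
    The liquid film cell bookkeeping described above amounts to a marching mass balance along the channel; a minimal sketch with entirely hypothetical deposition and entrainment rates:

    ```python
    import numpy as np

    # Hypothetical per-cell droplet deposition and entrainment rates (kg/s).
    n_cells = 50
    dep = np.full(n_cells, 0.020)
    ent = np.full(n_cells, 0.015)

    film_flow = np.empty(n_cells)
    w = 1.0                       # film mass flow rate entering the first cell (kg/s)

    # March along the channel: each cell gains deposited droplets, loses entrained liquid.
    for i in range(n_cells):
        w = w + dep[i] - ent[i]
        film_flow[i] = w
    ```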

  12. Biclustering via optimal re-ordering of data matrices in systems biology: rigorous methods and comparative studies

    Directory of Open Access Journals (Sweden)

    Feng Xiao-Jiang

    2008-10-01

    Full Text Available Abstract Background The analysis of large-scale data sets via clustering techniques is utilized in a number of applications. Biclustering in particular has emerged as an important problem in the analysis of gene expression data since genes may only jointly respond over a subset of conditions. Biclustering algorithms also have important applications in sample classification where, for instance, tissue samples can be classified as cancerous or normal. Many of the methods for biclustering, and clustering algorithms in general, utilize simplified models or heuristic strategies for identifying the "best" grouping of elements according to some metric and cluster definition and thus result in suboptimal clusters. Results In this article, we present a rigorous approach to biclustering, OREO, which is based on the Optimal RE-Ordering of the rows and columns of a data matrix so as to globally minimize the dissimilarity metric. The physical permutations of the rows and columns of the data matrix can be modeled as either a network flow problem or a traveling salesman problem. Cluster boundaries in one dimension are used to partition and re-order the other dimensions of the corresponding submatrices to generate biclusters. The performance of OREO is tested on (a) metabolite concentration data, (b) an image reconstruction matrix, (c) synthetic data with implanted biclusters, and gene expression data for (d) colon cancer, (e) breast cancer, as well as (f) yeast segregants, to validate the ability of the proposed method and compare it to existing biclustering and clustering methods. Conclusion We demonstrate that this rigorous global optimization method for biclustering produces clusters with more insightful groupings of similar entities, such as genes or metabolites sharing common functions, than other clustering and biclustering algorithms and can reconstruct underlying fundamental patterns in the data for several distinct sets of data matrices arising in systems biology.
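
    As a toy illustration of the re-ordering idea (OREO itself solves exact network-flow/TSP formulations to global optimality), a greedy nearest-neighbour heuristic that orders rows to reduce the total dissimilarity between adjacent rows might look like this:

    ```python
    import numpy as np

    def reorder_rows_greedy(X):
        """Greedy nearest-neighbour ordering of rows: a cheap heuristic stand-in for
        the exact network-flow / TSP formulations used by OREO."""
        n = X.shape[0]
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise row distances
        np.fill_diagonal(d, np.inf)
        order = [0]
        unused = set(range(1, n))
        while unused:
            nxt = min(unused, key=lambda j: d[order[-1], j])        # closest remaining row
            order.append(nxt)
            unused.remove(nxt)
        return X[order], order
    ```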

  13. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  14. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic "switches", regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form a complex structure, computational approaches such as RNAComposer and Rosetta have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  15. Review: Optimization methods for groundwater modeling and management

    Science.gov (United States)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
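
    The conjunctive-use planning objective mentioned above is commonly cast as a linear program; a minimal, entirely hypothetical sketch with SciPy's linprog (costs, capacities, and demand are invented):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Two wells plus surface water must meet demand at minimum cost (all numbers invented).
    cost = np.array([4.0, 5.0, 7.0])     # $ per unit: well 1, well 2, surface water
    demand = 100.0

    # linprog minimizes cost @ x subject to A_ub @ x <= b_ub,
    # so "total supply >= demand" is encoded as "-supply <= -demand".
    res = linprog(c=cost,
                  A_ub=np.array([[-1.0, -1.0, -1.0]]),
                  b_ub=np.array([-demand]),
                  bounds=[(0, 60), (0, 60), (0, 100)])
    print(res.x, res.fun)                # cheapest mix meeting demand within capacities
    ```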

  16. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines methods of incorporating wind generation in two production costing models: one a load duration curve (LDC) based model and the other a chronological model. These two models were used to evaluate the impacts of wind generation on two utility systems, using wind data collected at two locations with high potential for wind generation. The results are sensitive to the selected wind data, and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources that are difficult or impossible to address with LDC models.

  17. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.

  18. Modeling Nanoscale FinFET Performance by a Neural Network Method

    Directory of Open Access Journals (Sweden)

    Jin He

    2017-07-01

    Full Text Available This paper presents a neural network method to model nanometer FinFET performance. The principle of the method is first introduced, and its application in modeling the DC and conductance characteristics of a nanoscale FinFET transistor is demonstrated in detail. It is shown that the method needs no parameter extraction routine, while its prediction of the transistor performance has a small relative error, within 1%, compared with measured data; thus the new method is as accurate as the physics-based surface potential model.
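
    A hedged sketch of the general approach, not the paper's network: a small feed-forward network regressing device current on terminal voltages, fitted in log space because currents span decades; the data and architecture are invented:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Invented training set: (Vgs, Vds) -> drain current, loosely transistor-shaped.
    rng = np.random.default_rng(0)
    X = rng.uniform([0.0, 0.0], [1.0, 1.0], size=(500, 2))
    y = 1e-4 * np.maximum(X[:, 0] - 0.3, 0.0) ** 2 * np.tanh(5 * X[:, 1])

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32),
                                       max_iter=5000, random_state=0))
    model.fit(X, np.log10(y + 1e-12))    # fit in log space: currents span decades
    ```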

  19. Sensitivity analysis of infectious disease models: methods, advances and their application

    Science.gov (United States)

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
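
    To give a flavour of the screening methods reviewed, a bare-bones Morris-style elementary-effects routine (a simplification of the proper randomized-trajectory Morris design) might look like:

    ```python
    import numpy as np

    def elementary_effects(model, bounds, r=20, delta=0.1, seed=0):
        """Mean absolute elementary effect per parameter (mu*-style screening).

        model  : callable mapping a parameter vector to a scalar output
        bounds : (k, 2) array of lower/upper parameter bounds
        """
        rng = np.random.default_rng(seed)
        bounds = np.asarray(bounds, float)
        k = bounds.shape[0]
        effects = np.zeros((r, k))
        for t in range(r):
            x = rng.uniform(bounds[:, 0], bounds[:, 1])
            y0 = model(x)
            for i in range(k):
                xp = x.copy()
                xp[i] += delta * (bounds[i, 1] - bounds[i, 0])   # one-at-a-time step
                effects[t, i] = (model(xp) - y0) / delta
        return np.abs(effects).mean(axis=0)
    ```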

  20. Extension of local front reconstruction method with controlled coalescence model

    Science.gov (United States)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulating such flows with standard fixed grid methods, due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. With the experimental film drainage times, LFRM predicts the droplet collisions better than other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method), especially at high velocity. When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that LFRM coupled with film drainage models predicts the collision dynamics much better than the traditional methods.

  1. Observability-Enhanced PMU Placement Considering Conventional Measurements and Contingencies

    Directory of Open Access Journals (Sweden)

    M. Esmaili

    2014-12-01

    Full Text Available Phasor Measurement Units (PMUs) are attracting growing attention in recent power systems because of their paramount abilities in state estimation. PMUs are placed in power systems where conventional measurements are already installed, and these can be helpful if considered in optimal PMU placement. In this paper, a method is proposed for optimal placement of PMUs that incorporates conventional measurements from zero injection buses and branch flow measurements using a permutation matrix. Furthermore, the effect of a single branch outage or a single PMU failure is included in the proposed method. When a branch with a flow measurement goes out, the network loses one observability path (the branch) and one conventional measurement (the flow measurement). The permutation matrix proposed here is able to model the outage of a branch equipped with a flow measurement or connected to a zero injection bus. Also, measurement redundancy, and consequently measurement reliability, is enhanced without increasing the number of PMUs; this implies a more efficient usage of PMUs than in previous methods. The PMU placement problem is formulated as a mixed-integer linear program that yields the global optimal solution. Results obtained from testing the proposed method on four well-known test systems in diverse situations confirm its efficiency.
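
    Stripped of the conventional measurements and contingencies handled by the paper, the core PMU placement problem is a small set-covering integer program; a toy sketch using SciPy's milp (available in SciPy 1.9+), with an invented 4-bus connectivity matrix:

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Invented 4-bus system: A[i, j] = 1 if i == j or buses i and j are adjacent.
    # A PMU at bus j makes every bus i with A[i, j] = 1 observable.
    A = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 1],
                  [0, 1, 1, 0],
                  [0, 1, 0, 1]], dtype=float)
    n = A.shape[0]

    # Minimize the PMU count subject to every bus being observed at least once.
    res = milp(c=np.ones(n),
               constraints=LinearConstraint(A, lb=np.ones(n)),
               integrality=np.ones(n),
               bounds=Bounds(0, 1))
    print(res.x)   # a single PMU at the hub bus suffices for this toy network
    ```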

  2. Is different better? Models of teaching and their influence on the net financial outcome for general practice teaching posts

    Directory of Open Access Journals (Sweden)

    Cheah Carolyn

    2011-07-01

    Full Text Available Abstract Background In Australia, training for general practice (GP) occurs within private practices, and their involvement in teaching can have significant financial costs. At the same time there are growing demands for clinical places for all disciplines, and for GP there is concern that there are insufficient teaching practices to meet the demand at the medical student, prevocational and vocational training levels. One option to address this may be to change how teaching occurs in the practice. A question that arises in posing such an option is whether different models of teaching change the costs for a teaching practice. The aim of this study is to determine the net financial outcome of teaching models in private GP. Methods We modelled the financial implications of a range of teaching options using a costing framework developed from a survey of teaching practices in South Australia. Each option was compared with the traditional model of teaching, where one GP supervisor is singularly responsible for one learner. The main outcome measure was the net financial outcome per week. Decisions on the model cost parameters were made by the study's Steering Group, which comprised experienced GP supervisors. Four teaching models are presented. Model 1 investigates the gains from teaching multiple same-level learners; Models 2 and 3 the benefits of vertically integrated teaching using different permutations; and Model 4 the concept of a GP teacher who undertakes all the teaching. Results There was a significant increase in net benefits of Aus$547 per week (95% confidence interval $459, $668) to the practice when a GP taught two same-level learners (Model 1), and when a senior registrar participated in teaching a prevocational doctor (Model 3: Aus$263, 95% confidence interval $80, $570). For Model 2, a practice could significantly reduce the loss if a registrar was involved in vertically integrated teaching which included the training of a medical student (Aus…

  3. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  4. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  5. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  6. Inference With Difference-in-Differences With a Small Number of Groups: A Review, Simulation Study, and Empirical Application Using SHARE Data.

    Science.gov (United States)

    Rokicki, Slawa; Cohen, Jessica; Fink, Günther; Salomon, Joshua A; Landrum, Mary Beth

    2018-01-01

    Difference-in-differences (DID) estimation has become increasingly popular as an approach to evaluate the effect of a group-level policy on individual-level outcomes. Several statistical methodologies have been proposed to correct for the within-group correlation of model errors resulting from the clustering of data. Little is known about how well these corrections perform with the often small number of groups observed in health research using longitudinal data. First, we review the most commonly used modeling solutions in DID estimation for panel data, including generalized estimating equations (GEE), permutation tests, clustered standard errors (CSE), wild cluster bootstrapping, and aggregation. Second, we compare the empirical coverage rates and power of these methods using a Monte Carlo simulation study in scenarios in which we vary the degree of error correlation, the group size balance, and the proportion of treated groups. Third, we provide an empirical example using the Survey of Health, Ageing, and Retirement in Europe. When the number of groups is small, CSE are systematically biased downwards in scenarios when data are unbalanced or when there is a low proportion of treated groups. This can result in over-rejection of the null even when data are composed of up to 50 groups. Aggregation, permutation tests, bias-adjusted GEE, and wild cluster bootstrap produce coverage rates close to the nominal rate for almost all scenarios, though GEE may suffer from low power. In DID estimation with a small number of groups, analysis using aggregation, permutation tests, wild cluster bootstrap, or bias-adjusted GEE is recommended.
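
    The group-level permutation test evaluated in the paper can be sketched in a few lines: treatment labels are reassigned across whole groups, respecting the clustering; all inputs here are hypothetical:

    ```python
    import numpy as np

    def cluster_permutation_pvalue(effect_obs, group_means, n_treated,
                                   n_perm=10000, seed=0):
        """Two-sided permutation P-value for a group-level effect.

        group_means : one (residualized) average outcome per group
        n_treated   : number of groups actually assigned to treatment
        Treatment labels are permuted across whole groups, respecting the clustering.
        """
        rng = np.random.default_rng(seed)
        g = np.asarray(group_means, float)
        count = 0
        for _ in range(n_perm):
            treated = rng.choice(g.size, size=n_treated, replace=False)
            mask = np.zeros(g.size, bool)
            mask[treated] = True
            stat = g[mask].mean() - g[~mask].mean()
            count += abs(stat) >= abs(effect_obs)
        return (count + 1) / (n_perm + 1)
    ```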

  7. Modeling Methods

    Science.gov (United States)

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.

  8. A survey of real face modeling methods

    Science.gov (United States)

    Liu, Xiaoyue; Dai, Yugang; He, Xiangzhen; Wan, Fucheng

    2017-09-01

    The face model has always been a research challenge in computer graphics, as it involves the coordination of multiple organs in the face. This article explains two kinds of face modeling methods, one data-driven and one based on parameter control; analyzes their content and background; summarizes their advantages and disadvantages; and concludes that the muscle model, which is based on anatomical principles, has higher veracity and is easier to drive.

  9. Development of a time-trend model for analyzing and predicting case-pattern of Lassa fever epidemics in Liberia, 2013-2017.

    Science.gov (United States)

    Olugasa, Babasola O; Odigie, Eugene A; Lawani, Mike; Ojo, Johnson F

    2015-01-01

    The objective was to develop a case-pattern model for Lassa fever (LF) among humans and derive predictors of the time-trend point distribution of LF cases in Liberia, in view of the prevailing under-reporting and the public health challenge posed by the disease in the country. A retrospective five-year data set of countrywide LF distribution among humans was used to train a time-trend model of the disease in Liberia. A time-trend quadratic model was selected due to its goodness-of-fit (R2 = 0.89, P < 0.05) to the LF case data for Liberia, on which a predictive model was developed. We proposed a computationally feasible two-stage space-time permutation approach to estimate the time-trend parameters and conduct predictive inference on LF in Liberia.
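
    A quadratic time-trend fit of the kind described reduces to polynomial least squares; the sketch below uses synthetic monthly counts, not the Liberian surveillance data:

    ```python
    import numpy as np

    # Synthetic monthly case counts over 2013-2017 (60 months); not the Liberian data.
    rng = np.random.default_rng(1)
    t = np.arange(60)
    cases = 5 + 0.3 * t - 0.004 * t**2 + rng.normal(0, 1.5, t.size)

    coef = np.polyfit(t, cases, deg=2)        # cases ~ b2*t^2 + b1*t + b0
    fit = np.polyval(coef, t)

    r2 = 1 - np.sum((cases - fit) ** 2) / np.sum((cases - cases.mean()) ** 2)
    print("R^2 =", round(r2, 3), "one-step-ahead forecast:", np.polyval(coef, 60))
    ```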

  10. A sediment graph model based on SCS-CN method

    Science.gov (United States)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    Summary This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz. the Nash-model-based instantaneous unit sediment graph (IUSG), the Soil Conservation Service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km²) in India. The sensitivity of the total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
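
    For reference, the SCS-CN runoff relation underlying these sediment graph models can be written as a short function (standard textbook form with the usual initial-abstraction ratio, not the paper's full IUSG coupling):

    ```python
    import numpy as np

    def scs_cn_runoff(p_mm, cn, lam=0.2):
        """SCS-CN direct runoff depth Q (mm) for rainfall P (mm) and curve number CN.

        S = 25400 / CN - 254 (mm), Ia = lam * S, and
        Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0.
        """
        s = 25400.0 / cn - 254.0
        ia = lam * s
        p = np.asarray(p_mm, float)
        return np.where(p > ia, (p - ia) ** 2 / (p - ia + s), 0.0)

    print(scs_cn_runoff([10.0, 40.0, 80.0], cn=75))
    ```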

  11. Generalized framework for context-specific metabolic model extraction methods

    Directory of Open Access Journals (Sweden)

    Semidán eRobaina Estévez

    2014-09-01

    Full Text Available Genome-scale metabolic models are increasingly applied to investigate the physiology not only of simple prokaryotes, but also of eukaryotes, such as plants, characterized by compartmentalized cells of multiple types. While genome-scale models aim at including the entirety of known metabolic reactions, mounting evidence has indicated that only a subset of these reactions is active in a given context, such as a developmental stage, cell type, or environment. As a result, several methods have been proposed to reconstruct context-specific models from existing genome-scale models by integrating various types of high-throughput data. Here we present a mathematical framework that puts all existing methods under one umbrella, provides the means to better understand their functioning, highlights similarities and differences, and helps users select the most suitable method for an application.

  12. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicates improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream
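
    A schematic of the hierarchical idea (regional coefficients partially pooled toward a national mean), sketched with PyMC on invented data; this is not the SPARROW model itself:

    ```python
    import numpy as np
    import pymc as pm

    # Invented data: a slope that varies by region, partially pooled toward a
    # national-level mean. This is a schematic, not the SPARROW model.
    rng = np.random.default_rng(0)
    region = np.repeat(np.arange(3), 40)              # region index per basin
    x = rng.normal(size=120)                          # standardized covariate
    y = (0.5 + 0.1 * region) * x + rng.normal(0, 0.3, 120)

    with pm.Model():
        mu_b = pm.Normal("mu_b", 0.0, 1.0)            # national mean slope
        sd_b = pm.HalfNormal("sd_b", 1.0)             # between-region spread
        b = pm.Normal("b", mu_b, sd_b, shape=3)       # hierarchical regional slopes
        sigma = pm.HalfNormal("sigma", 1.0)           # residual error
        pm.Normal("obs", b[region] * x, sigma, observed=y)
        idata = pm.sample(1000, tune=1000, chains=2)
    ```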

  13. Robust modelling of solubility in supercritical carbon dioxide using Bayesian methods.

    Science.gov (United States)

    Tarasova, Anna; Burden, Frank; Gasteiger, Johann; Winkler, David A

    2010-04-01

    Two sparse Bayesian methods were used to derive predictive models of the solubility of organic dyes and polycyclic aromatic compounds in supercritical carbon dioxide (scCO2), over a wide range of temperatures (285.9-423.2 K) and pressures (60-1400 bar): a multiple linear regression employing an expectation maximization algorithm and a sparse prior (MLREM), and a non-linear Bayesian Regularized Artificial Neural Network with a Laplacian Prior (BRANNLP). A randomly selected test set was used to estimate the predictive ability of the models. The MLREM method resulted in a model of similar predictivity to the less sparse MLR method, while the non-linear BRANNLP method created models of substantially better predictivity than either the MLREM or MLR based models. The BRANNLP method simultaneously generated context-relevant subsets of descriptors and a robust, non-linear quantitative structure-property relationship (QSPR) model for compound solubility in scCO2. The differences between linear and non-linear descriptor selection methods are discussed. (c) 2009 Elsevier Inc. All rights reserved.

  14. Methods improvements incorporated into the SAPHIRE ASP models

    International Nuclear Information System (INIS)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    1995-01-01

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methods, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements

  15. Model-based economic evaluation in Alzheimer's disease: a review of the methods available to model Alzheimer's disease progression.

    Science.gov (United States)

    Green, Colin; Shearer, James; Ritchie, Craig W; Zajicek, John P

    2011-01-01

    To consider the methods available to model Alzheimer's disease (AD) progression over time to inform on the structure and development of model-based evaluations, and the future direction of modelling methods in AD. A systematic search of the health care literature was undertaken to identify methods to model disease progression in AD. Modelling methods are presented in a descriptive review. The literature search identified 42 studies presenting methods or applications of methods to model AD progression over time. The review identified 10 general modelling frameworks available to empirically model the progression of AD as part of a model-based evaluation. Seven of these general models are statistical models predicting progression of AD using a measure of cognitive function. The main concerns with models are on model structure, around the limited characterization of disease progression, and on the use of a limited number of health states to capture events related to disease progression over time. None of the available models have been able to present a comprehensive model of the natural history of AD. Although helpful, there are serious limitations in the methods available to model progression of AD over time. Advances are needed to better model the progression of AD and the effects of the disease on peoples' lives. Recent evidence supports the need for a multivariable approach to the modelling of AD progression, and indicates that a latent variable analytic approach to characterising AD progression is a promising avenue for advances in the statistical development of modelling methods. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  16. Proof of concept and dose estimation with binary responses under model uncertainty.

    Science.gov (United States)

    Klingenberg, B

    2009-01-30

    This article suggests a unified framework for testing Proof of Concept (PoC) and estimating a target dose for the benefit of a more comprehensive, robust and powerful analysis in phase II or similar clinical trials. From a pre-specified set of candidate models, we choose the ones that best describe the observed dose-response. To decide which models, if any, significantly pick up a dose effect, we construct the permutation distribution of the minimum P-value over the candidate set. This allows us to find critical values and multiplicity adjusted P-values that control the familywise error rate of declaring any spurious effect in the candidate set as significant. Model averaging is then used to estimate a target dose. Popular single or multiple contrast tests for PoC, such as the Cochran-Armitage, Dunnett or Williams tests, are only optimal for specific dose-response shapes and do not provide target dose estimates with confidence limits. A thorough evaluation and comparison of our approach to these tests reveal that its power is as good or better in detecting a dose-response under various shapes with many more additional benefits: It incorporates model uncertainty in PoC decisions and target dose estimation, yields confidence intervals for target dose estimates and extends to more complicated data structures. We illustrate our method with the analysis of a Phase II clinical trial. Copyright (c) 2008 John Wiley & Sons, Ltd.
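
    The min-P adjustment described above can be sketched directly: build the permutation distribution of the minimum P-value over the candidate model set, then compare each observed P-value against it (all inputs hypothetical):

    ```python
    import numpy as np

    def minp_adjusted_pvalues(p_obs, p_perm):
        """Min-P multiplicity adjustment over a candidate model set.

        p_obs  : (m,) observed P-values, one per candidate dose-response model
        p_perm : (B, m) P-values recomputed on B dose-label-permuted data sets
        """
        min_null = p_perm.min(axis=1)        # permutation distribution of the minimum P
        return np.array([(np.sum(min_null <= p) + 1) / (min_null.size + 1)
                         for p in p_obs])    # familywise-error adjusted P-values
    ```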

  17. Efficient model learning methods for actor-critic control.

    Science.gov (United States)

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.

  18. Spatial autocorrelation method using AR model; Kukan jiko sokanho eno AR model no tekiyo

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, H; Obuchi, T; Saito, T [Iwate University, Iwate (Japan). Faculty of Engineering

    1996-05-01

    The applicability of the AR model to the spatial autocorrelation (SAC) method, which analyzes the surface wave phase velocity in a microtremor for the estimation of the underground structure, was examined. In this examination, microtremor data recorded in Morioka City, Iwate Prefecture, were used. In the SAC method, a spatial autocorrelation function with frequency as a variable is determined from microtremor data observed by circular arrays. Then, the Bessel function is fitted to the spatial autocorrelation coefficient, with the distance between seismographs as a variable, to determine the phase velocity. The result of the AR model application in this study was compared with the results of the conventional BPF and FFT methods. It was found that the phase velocities obtained by the BPF and FFT methods were more dispersed than those obtained by the AR model. The dispersion in the BPF method is attributed to the bandwidth used in the band-pass filter and, in the FFT method, to the impact of the bandwidth on the smoothing of the cross spectrum. 2 refs., 7 figs.

  19. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current situation and the shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved gradually, level by level downwards. While the relationships of connectors and mapping restraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is produced to support and adapt the top-down rapid modeling practice. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.

  20. A novel color image encryption scheme using fractional-order hyperchaotic system and DNA sequence operations

    International Nuclear Information System (INIS)

    Zhang Li-Min; Sun Ke-Hui; Liu Wen-Hao; He Shao-Bo

    2017-01-01

    In this paper, the Adomian decomposition method (ADM), with high accuracy and fast convergence, is introduced to solve the fractional-order piecewise-linear (PWL) hyperchaotic system. Based on the obtained hyperchaotic sequences, a novel color image encryption algorithm is proposed employing a hybrid model of bidirectional circular permutation and DNA masking. In this scheme, the pixel positions of the image are scrambled by circular permutation, and the pixel values are substituted by DNA sequence operations. In the DNA sequence operations, addition and subtraction operations are performed according to traditional binary addition and subtraction, and two rounds of addition rules are used to encrypt the pixel values. The simulation results and security analysis show that the hyperchaotic map is suitable for image encryption, and the proposed encryption algorithm has a good encryption effect and strong key sensitivity. It can resist brute-force, statistical, differential, known-plaintext, and chosen-plaintext attacks.
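
    The bidirectional circular permutation stage, taken in isolation, is just row and column rolls keyed by the chaotic sequences; a minimal NumPy sketch (the DNA masking stage is omitted):

    ```python
    import numpy as np

    def circular_scramble(img, row_shifts, col_shifts):
        """Bidirectional circular permutation: roll each row, then each column.

        img        : (H, W) array of pixel values (one colour channel)
        row_shifts : length-H shifts, e.g. derived from a chaotic key sequence
        col_shifts : length-W shifts
        """
        out = img.copy()
        for i, s in enumerate(row_shifts):
            out[i, :] = np.roll(out[i, :], int(s))
        for j, s in enumerate(col_shifts):
            out[:, j] = np.roll(out[:, j], int(s))
        return out

    # Decryption reverses the column rolls first, then the row rolls, with negated shifts.
    ```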

  1. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  2. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  3. Arctic curves in path models from the tangent method

    Science.gov (United States)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  4. Application of homotopy-perturbation method to nonlinear population dynamics models

    International Nuclear Information System (INIS)

    Chowdhury, M.S.H.; Hashim, I.; Abdulaziz, O.

    2007-01-01

    In this Letter, the homotopy-perturbation method (HPM) is employed to derive approximate series solutions of nonlinear population dynamics models. The nonlinear models considered are the multispecies Lotka-Volterra equations. The accuracy of this method is examined by comparison with the available exact solutions and with the fourth-order Runge-Kutta (RK4) method.
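
    For comparison purposes of the kind mentioned, a conventional Runge-Kutta reference solution of a two-species Lotka-Volterra system takes a few lines with SciPy (coefficients and initial conditions are illustrative, and the adaptive RK45 integrator stands in for classical RK4):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def lotka_volterra(t, z, a=1.0, b=0.5, c=0.75, d=0.25):
        """Two-species predator-prey system with illustrative coefficients."""
        x, y = z
        return [a * x - b * x * y, -c * y + d * x * y]

    # Runge-Kutta reference solution against which an HPM series could be checked.
    sol = solve_ivp(lotka_volterra, t_span=(0.0, 10.0), y0=[2.0, 1.0])
    print(sol.y[:, -1])
    ```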

  5. A meshless method for modeling convective heat transfer

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David B [Los Alamos National Laboratory

    2010-01-01

    A meshless method is used in a projection-based approach to solve the primitive equations for fluid flow with heat transfer. The method is easy to implement in a MATLAB format. Radial basis functions are used to solve two benchmark test cases: natural convection in a square enclosure and flow with forced convection over a backward facing step. The results are compared with two popular and widely used commercial codes: COMSOL, a finite element model, and FLUENT, a finite volume-based model.

  6. Mathematical methods and models in composites

    CERN Document Server

    Mantic, Vladislav

    2014-01-01

    This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover

  7. RS-SNP: a random-set method for genome-wide association studies

    Directory of Open Access Journals (Sweden)

    Mukherjee Sayan

    2011-03-01

    Full Text Available Abstract Background The typical objective of genome-wide association (GWA) studies is to identify single-nucleotide polymorphisms (SNPs) and corresponding genes with the strongest evidence of association (the 'most-significant SNPs/genes' approach). Borrowing ideas from micro-array data analysis, we propose a new method, named RS-SNP, for detecting sets of genes enriched in SNPs moderately associated to the phenotype. RS-SNP assesses whether the number of significant SNPs, with p-value P ≤ α, belonging to a given SNP set S is statistically significant. The rationale of the proposed method is that two kinds of null hypotheses are taken into account simultaneously. In the first null model the genotype and the phenotype are assumed to be independent random variables, and the null distribution is the probability of the number of significant SNPs in S being greater than that observed by chance. The second null model assumes the number of significant SNPs in S depends on the size of S and not on the identity of the SNPs in S. Statistical significance is assessed using non-parametric permutation tests. Results We applied RS-SNP to the Crohn's disease (CD) data set collected by the Wellcome Trust Case Control Consortium (WTCCC) and compared the results with GENGEN, an approach recently proposed in the literature. The enrichment analysis using RS-SNP and the set of pathways contained in the MSigDB C2 CP pathway collection highlighted 86 pathways rich in SNPs weakly associated to CD. Of these, 47 were also indicated to be significant by GENGEN. Similar results were obtained using the MSigDB C5 pathway collection. Many of the pathways found to be enriched by RS-SNP have a well-known connection to CD, and often to inflammatory diseases. Conclusions The proposed method is a valuable alternative to other techniques for enrichment analysis of SNP sets. It is well founded from a theoretical and statistical perspective. Moreover, the experimental comparison with GENGEN highlights that it is…
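
    The second null model lends itself to a direct permutation sketch: draw random SNP sets of the same size and count how often they contain at least as many significant SNPs as the candidate set (inputs hypothetical):

    ```python
    import numpy as np

    def set_enrichment_pvalue(sig_mask, set_idx, n_perm=10000, seed=0):
        """Permutation P-value under the second RS-SNP null model: is the count of
        significant SNPs in the set larger than for random sets of the same size?

        sig_mask : boolean array over all SNPs, True where P <= alpha
        set_idx  : indices of the SNPs belonging to the candidate set
        """
        rng = np.random.default_rng(seed)
        observed = sig_mask[set_idx].sum()
        k, count = len(set_idx), 0
        for _ in range(n_perm):
            draw = rng.choice(sig_mask.size, size=k, replace=False)
            count += sig_mask[draw].sum() >= observed
        return (count + 1) / (n_perm + 1)
    ```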

  8. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    Science.gov (United States)

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape represents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient and accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is one of the simplest perceptual paradigms, the human model is established on this principle. In the feature space, we design a linear classifier as a human model to obtain user preference knowledge that cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050

  9. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    Science.gov (United States)

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, the need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era, where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.

  10. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    Science.gov (United States)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

    As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of spectral methods. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As a kind of high-order complete orthogonal polynomial, the GLC have the characteristic of exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results, and with increasing SEM order the modeling accuracy improves greatly. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).

  11. Engineering design of systems models and methods

    CERN Document Server

    Buede, Dennis M

    2009-01-01

    The ideal introduction to the engineering design of systems-now in a new edition. The Engineering Design of Systems, Second Edition compiles a wealth of information from diverse sources to provide a unique, one-stop reference to current methods for systems engineering. It takes a model-based approach to key systems engineering design activities and introduces methods and models used in the real world. Features new to this edition include: * The addition of Systems Modeling Language (SysML) to several of the chapters, as well as the introduction of new terminology * Additional material on partitioning functions and components * More descriptive material on usage scenarios based on literature from use case development * Updated homework assignments * The software product CORE (from Vitech Corporation) is used to generate the traditional SE figures and the software product MagicDraw UML with SysML plugins (from No Magic, Inc.) is used for the SysML figures This book is designed to be an introductory reference ...

  12. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understanding the behaviour of complex environmental models and the use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
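
    A minimal sketch of the "distributed local sensitivity" idea follows, assuming a toy two-parameter model, finite-difference local gradients, and prior variances standing in for DELSA's parameter distributions; none of these choices are taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def model(theta):
            # Toy nonlinear reservoir-style response (stand-in for a hydrologic model).
            k, s = theta
            return s * (1.0 - np.exp(-k))

        def delsa_like(theta, prior_var, h=1e-6):
            """First-order local sensitivity indices at one point in parameter space."""
            grads = np.empty(len(theta))
            for j in range(len(theta)):
                tp = theta.copy(); tp[j] += h
                grads[j] = (model(tp) - model(theta)) / h
            contrib = grads**2 * prior_var      # variance contribution per parameter
            return contrib / contrib.sum()      # normalized, sums to 1

        # Evaluate across many points sampled over the prior -- the "distributed" part.
        prior_var = np.array([0.25, 1.0])
        points = rng.uniform([0.1, 0.5], [2.0, 3.0], size=(100, 2))
        indices = np.array([delsa_like(p, prior_var) for p in points])
        print("median first-order indices:", np.median(indices, axis=0))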

  13. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  14. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    Science.gov (United States)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to an elastic-plastic damage model. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed from a design of experiments in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. The application of the presented material model and the proposed parameter identification method to the standard A2017-T4 tensile test proves that the presented elastic-plastic damage model is adequate to describe the material's mechanical behaviour, and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
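
    A minimal sketch of the metamodel-based loop, with an analytic stand-in for the expensive finite element objective and scikit-learn's Gaussian process (Kriging) regressor; the design bounds and "true" parameter values are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(2)

        def simulator(params):
            """Stand-in for an expensive FE simulation: objective vs. material params."""
            E, sigma_y = params
            return (E - 1.8) ** 2 + 2.0 * (sigma_y - 0.6) ** 2

        # Design of experiments: a handful of expensive simulator runs.
        X = rng.uniform([1.0, 0.1], [3.0, 1.0], size=(30, 2))
        y = np.array([simulator(x) for x in X])

        # Kriging metamodel of the objective surface.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0])).fit(X, y)

        # Optimize on the cheap metamodel instead of the simulator.
        res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
                       x0=[2.0, 0.5], bounds=[(1.0, 3.0), (0.1, 1.0)])
        print("identified parameters:", res.x)   # should land near (1.8, 0.6)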

  15. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated with an observed reference, but objectively derived from the simulated climatology. The choice of model dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures reasonably well the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent, which are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  16. Moments Method for Shell-Model Level Density

    International Nuclear Information System (INIS)

    Zelevinsky, V; Horoi, M; Sen'kov, R A

    2016-01-01

    The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained and examples of the method at work are given. The calculated level density agrees practically exactly with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations lead to results different from those obtained by mean-field combinatorics. (paper)

  17. A numerical method for a transient two-fluid model

    International Nuclear Information System (INIS)

    Le Coq, G.; Libmann, M.

    1978-01-01

    The transient boiling two-phase flow is studied. In nuclear reactors, the driving conditions for transient boiling are a pump power decay and/or an increase in heating power. The physical model adopted for the two-phase flow is the two-fluid model with the assumption that the vapor remains at saturation. The numerical method for solving the thermohydraulic problem is a shooting method, which is highly implicit. A particular problem exists at the boiling and condensation front. A computer code using this numerical method allows the calculation of a transient boiling initiated from a steady state for a PWR or for an LMFBR

  18. Evaluation of two updating methods for dissipative models on a real structure

    International Nuclear Information System (INIS)

    Moine, P.; Billet, L.

    1996-01-01

    Finite element models are widely used to predict the dynamic behaviour of structures. Frequently, the model does not represent the structure with all the expected accuracy, i.e. the measurements realised on the structure differ from the data predicted by the model. It is therefore necessary to update the model. Although many modeling errors come from an inadequate representation of the damping phenomena, most model updating techniques have up to now been restricted to conservative models only. In this paper, we present two updating methods for dissipative models using eigenmode shapes and eigenvalues as behavioural information from the structure. The first method - the modal output error method - compares the experimental eigenvectors and eigenvalues directly to the model eigenvectors and eigenvalues, whereas the second method - the error in constitutive relation method - uses an energy error derived from the equilibrium relation. The error function, in both cases, is minimized by a conjugate gradient algorithm and the gradient is calculated analytically. These two methods behave differently, which can be evidenced by updating a real structure consisting of a piece of pipe mounted on two viscoelastic suspensions. The updating of the model validates an updating strategy that consists of performing a preliminary update with the error in constitutive relation method (fast to converge but difficult to control) and then pursuing the update with the modal output error method (slow to converge but reliable and easy to control). Moreover, the problems encountered during the updating process and their corresponding solutions are given. (authors)

  19. Event-based soil loss models for construction sites

    Science.gov (United States)

    Trenouth, William R.; Gharabaghi, Bahram

    2015-05-01

    The elevated rates of soil erosion stemming from land clearing and grading activities during urban development can result in excessive amounts of eroded sediments entering waterways and causing harm to the biota living therein. However, construction site event-based soil loss simulations - required for reliable design of erosion and sediment controls - are one of the most uncertain types of hydrologic models. This study presents models with an improved degree of accuracy to advance the design of erosion and sediment controls for construction sites. The new models are developed using multiple linear regression (MLR) on event-based permutations of the Universal Soil Loss Equation (USLE) and artificial neural networks (ANN). These models were developed using surface runoff monitoring datasets obtained from three sites - Greensborough, Cookstown, and Alcona - in Ontario and datasets mined from the literature for three additional sites - Treynor, Iowa; Coshocton, Ohio; and Cordoba, Spain. The predictive MLR and ANN models can serve as both diagnostic and design tools for the effective sizing of erosion and sediment controls on active construction sites, and can be used for dynamic scenario forecasting when considering rapidly changing land use conditions during the various phases of construction.

  20. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Full Text Available Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many as possible of the desirable properties described by Rasch and Williamson (1990) and Machenhauer et al. (2008). The focus in this study is on accuracy and local mass conservation.

    The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter, by Kaas and Nielsen (2010).

    Semi-Lagrangian (SL interpolation is a classical method for atmospheric modeling, cascade interpolation is more efficient computationally, modified interpolation weights assure mass conservation and the locally mass conserving monotonic filter imposes monotonicity.

    All schemes are tested with advection alone or with advection and chemistry together under both typical rural and urban conditions using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD), see Frohn et al. (2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison, only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some of the test cases, though not for all. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme.

  1. On Optimal Data Split for Generalization Estimation and Model Selection

    DEFF Research Database (Denmark)

    Larsen, Jan; Goutte, Cyril

    1999-01-01

    The paper is concerned with studying the very different behavior of the two data splits using hold-out cross-validation, K-fold cross-validation and randomized permutation cross-validation. First we describe the theoretical basics of various cross-validation techniques with the purpose of reliably...
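
    For orientation, the sketch below contrasts K-fold validation with repeated random (shuffle-split) hold-out validation on synthetic data using scikit-learn; it is a generic illustration of the data splits under discussion, not the estimators analyzed in the paper.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold, ShuffleSplit, cross_val_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(120, 5))
        y = X @ np.array([1.0, -0.5, 0.0, 0.3, 0.0]) + rng.normal(scale=0.5, size=120)

        model = Ridge(alpha=1.0)

        # K-fold: every point is used exactly once for validation.
        kfold = cross_val_score(model, X, y,
                                cv=KFold(n_splits=5, shuffle=True, random_state=0))
        # Randomized (permutation-style) splits: repeated random hold-out sets.
        shuffle = cross_val_score(model, X, y,
                                  cv=ShuffleSplit(n_splits=20, test_size=0.2,
                                                  random_state=0))
        print(f"5-fold R^2: {kfold.mean():.3f}, random-split R^2: {shuffle.mean():.3f}")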

  2. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses the generation of three dimensional (3D) spatial channel models with emphasis on angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing, are proposed and investigated in detail. The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster has a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail and the other two methods should be considered.

  3. A longitudinal multilevel CFA-MTMM model for interchangeable and structurally different methods

    Science.gov (United States)

    Koch, Tobias; Schultze, Martin; Eid, Michael; Geiser, Christian

    2014-01-01

    One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data, including multilevel and/or latent variable modeling approaches, only a few modeling approaches have been developed for studying construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called the Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings of teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher ratings, self-ratings, principal ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only five observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed. PMID:24860515

  4. Combustion Model and Control Parameter Optimization Methods for Single Cylinder Diesel Engine

    Directory of Open Access Journals (Sweden)

    Bambang Wahono

    2014-01-01

    Full Text Available This research presents a method to construct a combustion model and a method to optimize some control parameters of a diesel engine in order to develop a model-based control system. The purpose of constructing the model is to appropriately manage some control parameters to obtain the target values of fuel consumption and emissions as the engine output objectives. A stepwise method considering multicollinearity was applied to construct the combustion model with a polynomial model. Using the experimental data of a single cylinder diesel engine, models of power, BSFC, NOx, and soot for multiple-injection diesel engines were built. The proposed method successfully developed a model that relates the control parameters to the engine outputs. Although many control devices can be mounted on a diesel engine, an optimization technique is required to utilize this method in finding optimal engine operating conditions efficiently, besides the existing development of individual emission control methods. Particle swarm optimization (PSO) was used to calculate control parameters that optimize fuel consumption and emissions based on the model. The proposed method is able to calculate control parameters efficiently to optimize the evaluation items based on the model. Finally, the model combined with PSO was compiled on a microcontroller.
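
    A bare-bones PSO loop of the kind the abstract describes, minimizing an invented two-parameter polynomial surrogate standing in for the fitted fuel-consumption/emission model; the inertia and acceleration weights are conventional textbook values, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(4)

        def surrogate(x):
            """Stand-in polynomial model: predicted BSFC from two control parameters
            (e.g. injection timing and rail pressure, both normalized)."""
            return (x[..., 0] - 0.3) ** 2 + 0.5 * (x[..., 1] - 0.7) ** 2

        n, dims, iters = 30, 2, 100
        w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration weights
        pos = rng.uniform(size=(n, dims))
        vel = np.zeros((n, dims))
        pbest, pbest_val = pos.copy(), surrogate(pos)
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.uniform(size=(2, n, dims))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 1.0)
            val = surrogate(pos)
            better = val < pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[pbest_val.argmin()].copy()

        print("optimal control parameters:", gbest)    # approx (0.3, 0.7)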

  5. Theoretical methods and models for mechanical properties of soft biomaterials

    Directory of Open Access Journals (Sweden)

    Zhonggang Feng

    2017-06-01

    Full Text Available We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.

  6. A matrix approach to the statistics of longevity in heterogeneous frailty models

    Directory of Open Access Journals (Sweden)

    Hal Caswell

    2014-09-01

    Full Text Available Background: The gamma-Gompertz model is a fixed frailty model in which baseline mortality increases exponentially with age, frailty has a proportional effect on mortality, and frailty at birth follows a gamma distribution. Mortality selects against the more frail, so the marginal mortality rate decelerates, eventually reaching an asymptote. The gamma-Gompertz is one of a wider class of frailty models, characterized by the choice of baseline mortality, effects of frailty, distributions of frailty, and assumptions about the dynamics of frailty. Objective: To develop a matrix model to compute all the statistical properties of longevity from the gamma-Gompertz and related models. Methods: I use the vec-permutation matrix formulation to develop a model in which individuals are jointly classified by age and frailty. The matrix is used to project the age and frailty dynamics of a cohort and the fundamental matrix is used to obtain the statistics of longevity. Results: The model permits calculation of the mean, variance, coefficient of variation, skewness and all moments of longevity, the marginal mortality and survivorship functions, the dynamics of the frailty distribution, and other quantities. The matrix formulation extends naturally to other frailty models. I apply the analysis to the gamma-Gompertz model (for humans and laboratory animals), the gamma-Makeham model, and the gamma-Siler model, and to a hypothetical dynamic frailty model characterized by diffusion of frailty with reflecting boundaries. The matrix model permits partitioning the variance in longevity into components due to heterogeneity and to individual stochasticity. In several published human data sets, heterogeneity accounts for less than 10% of the variance in longevity. In laboratory populations of five invertebrate animal species, heterogeneity accounts for 46% to 83% of the total variance in longevity.
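
    The fundamental-matrix calculation at the core of the abstract can be sketched for a single frailty class (the full model blocks such matrices over frailty classes via the vec-permutation construction); the Gompertz parameters and the discretized survival probabilities below are illustrative only.

        import numpy as np

        # Gompertz-style baseline: mortality mu(x) = a * exp(b * x) for one frailty
        # class; a full vec-permutation model would block these over frailty.
        a, b, n_ages = 1e-3, 0.1, 110
        ages = np.arange(n_ages)
        surv = np.exp(-a * np.exp(b * ages))    # hypothetical survival probabilities

        # U: transition matrix moving survivors up one age class.
        U = np.zeros((n_ages, n_ages))
        U[np.arange(1, n_ages), np.arange(n_ages - 1)] = surv[:-1]

        # Fundamental matrix: expected time spent in each age class.
        N = np.linalg.inv(np.eye(n_ages) - U)
        eta1 = N.sum(axis=0)                    # mean remaining longevity by age
        eta2 = eta1 @ (2 * N - np.eye(n_ages))  # second moments of longevity
        var = eta2 - eta1**2
        print(f"mean longevity at birth: {eta1[0]:.1f}, variance: {var[0]:.1f}")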

  7. Extending existing structural identifiability analysis methods to mixed-effects models.

    Science.gov (United States)

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Mathematical Models and Methods for Living Systems

    CERN Document Server

    Chaplain, Mark; Pugliese, Andrea

    2016-01-01

    The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.

  9. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former suffers from the problem that a huge count number is required in the blank scan data. The latter methods have therefore been proposed to obtain normalization coefficients of high statistical accuracy with a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and calculated iteratively in such a way as to minimize the errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of the normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required in the direct method. (author)

  10. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models h...

  11. Acceptance of Addiction Prevention Exiting Methods and Presentation of Appropriate Model

    Directory of Open Access Journals (Sweden)

    Ali Asghar Savad-Kouhi

    2006-10-01

    Full Text Available Objective: The aim of this study is to assess the acceptance of existing addiction prevention methods and to design and present an appropriate model. Materials & Methods: This research was carried out by survey and descriptive methods using a questionnaire. We assessed people's knowledge and beliefs about suggested and existing methods of addiction prevention and their acceptance, and finally designed a new, appropriate model of addiction prevention. For designing the questionnaire, experts and professors were first openly interviewed, and the final questionnaire was planned according to their views. We used a questionnaire with 2 open-ended and 61 close-ended items for gathering data. The subjects of the research were 2500 persons aged 13-35 years, selected by randomized sampling from 15 provinces. Results: The findings showed that the people studied hold positive beliefs about the prevention methods and their effectiveness. According to the findings, a good model is an inclusive model able to act at four levels: knowledge, change of beliefs and attitudes, control, and change of behavior. Conclusion: The people studied believe that the suggested and existing methods of addiction prevention are effective, both directly and indirectly, and that the appropriate model is an inclusive model.

  12. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on the two instruments, named the master and slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications.
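
    A hedged sketch of the LMC idea: keep the slave coefficients close in profile to the master coefficients while fitting a few slave-instrument spectra. The penalized least-squares form below is a stand-in for the paper's constrained optimization, with synthetic data throughout.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)

        # Hypothetical setup: b_master is the master-instrument regression vector;
        # (X_slave, y_slave) are a few calibration spectra measured on the slave.
        p = 50
        b_master = np.sin(np.linspace(0, 3 * np.pi, p))    # stand-in coefficients
        X_slave = rng.normal(size=(8, p))
        y_slave = X_slave @ (1.1 * b_master) + rng.normal(scale=0.01, size=8)

        lam = 1.0   # penalty keeping the slave coefficients near the master profile

        def objective(b):
            fit = np.sum((X_slave @ b - y_slave) ** 2)
            profile = lam * np.sum((b - b_master) ** 2)
            return fit + profile

        res = minimize(objective, x0=b_master, method="L-BFGS-B")
        b_slave = res.x
        print("profile correlation:", np.corrcoef(b_slave, b_master)[0, 1])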

  13. Analysis of elasto-plasticity of a reinforced framework. Report 4. Framework reinforced by a wing wall made of post-placed concrete; Hokyo honegumi no dansosei kaiseki. 4. Atouchi sodekabe ni yoru hokyo honegumi

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, H. [Japan Testing Center for Construction Materials, Tokyo (Japan); Shimizu, Y. [Tokyo Institute of Technology, Tokyo (Japan)

    1995-09-01

    An elasto-plasticity stress analysis was performed on reinforcement using a wing wall made of post-placed concrete as an anti-earthquake reinforcement method for ferro-concrete structures. The analytical values were compared with experimental values, and the reasonability of the analytical method was discussed. Wing walls made of post-placed concrete were inserted into a three-layered, single-span model of a reinforced framework made of reinforced mortar (three kinds of wing walls: a 1/4-span portion in the first layer, and 1/4 portions in the first and second layers). With the columns loaded and held under an axial force of 3 tf, a horizontal force was applied from one direction under the assumption that the force forms a uniform distribution. The analysis model was replaced by a wire element having one nodal point and three degrees of freedom, as reported in the previous paper. The model was divided into the wing wall and the column, and the wing wall portion was replaced by a brace with pins at both ends such that the brace has strength equivalent to that of the wing wall. A tri-linear or bi-linear type was assumed for the restoring force characteristics of each portion. According to the analytical results, the analytical values agreed relatively well with the experimental values in the load-deformation relationship and fracture conditions, verifying the reasonability of the analytical hypothesis. 5 refs., 5 figs., 3 tabs.

  14. Research on Multi - Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper studies a multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, separation of roles, and even real-time remote synchronized modeling.

  15. OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY

    Directory of Open Access Journals (Sweden)

    TĂNĂSESCU ANA

    2014-05-01

    Full Text Available Most economic organizations now use different types of information systems in order to facilitate their activity. There are different methodologies, methods and techniques that can be used to design information systems. In this paper, I present the advantages of using object oriented modelling in the design of an economic organization's information system. Thus, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. For this purpose, I have identified the use cases for the analyzed system and presented the use case diagram. I have also carried out static and dynamic modelling of the system, through the best-known UML diagrams.

  16. Methods of mathematical modelling continuous systems and differential equations

    CERN Document Server

    Witelski, Thomas

    2015-01-01

    This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.

  17. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We construct a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating the complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
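
    A minimal NSFD sketch for the simplest linear PK component, one-compartment elimination dC/dt = -kC, using the classical denominator function phi(h) = (1 - e^(-kh))/k; the parameter values are arbitrary and the scheme shown is a textbook construction, not necessarily the one in the paper.

        import numpy as np

        # One-compartment I.V. bolus model dC/dt = -k C, with elimination rate k.
        k, h, n_steps, C0 = 0.5, 1.0, 20, 10.0

        # Nonstandard denominator function: phi(h) -> h as h -> 0, but chosen so
        # the discrete scheme reproduces the decay dynamics for any step size.
        phi = (1.0 - np.exp(-k * h)) / k

        C_nsfd = np.empty(n_steps + 1); C_nsfd[0] = C0
        C_euler = np.empty(n_steps + 1); C_euler[0] = C0
        for n in range(n_steps):
            C_nsfd[n + 1] = C_nsfd[n] - k * phi * C_nsfd[n]     # NSFD update
            C_euler[n + 1] = C_euler[n] - k * h * C_euler[n]    # standard Euler

        exact = C0 * np.exp(-k * h * n_steps)
        print(f"exact {exact:.4f}  nsfd {C_nsfd[-1]:.4f}  euler {C_euler[-1]:.4f}")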

  18. The Langevin method and Hubbard-like models

    International Nuclear Information System (INIS)

    Gross, M.; Hamber, H.

    1989-01-01

    The authors reexamine the difficulties associated with application of the Langevin method to numerical simulation of models with non-positive definite statistical weights, including the Hubbard model. They show how to avoid the violent crossing of the zeroes of the weight and how to move those nodes away from the real axis. However, it still appears necessary to keep track of the sign (or phase) of the weight

  19. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reasonably well produce a remarkably wide variety of tokamak data

  20. A method to couple HEM and HRM two-phase flow models

    Energy Technology Data Exchange (ETDEWEB)

    Herard, J.M.; Hurisse, O. [Elect France, Div Rech and Dev, Dept Mecan Fluides Energies and Environm, F-78401 Chatou (France); Hurisse, O. [Univ Aix Marseille 1, Ctr Math and Informat, Lab Anal Topol and Probabil, CNRS, UMR 6632, F-13453 Marseille 13 (France); Ambroso, A. [CEA Saclay, DEN, DM2S, SFME, LETR, 91 - Gif sur Yvette (France)

    2009-04-15

    We present a method for the unsteady coupling of two distinct two-phase flow models (namely the Homogeneous Relaxation Model, and the Homogeneous Equilibrium Model) through a thin interface. The basic approach relies on recent works devoted to the interfacial coupling of CFD models, and thus requires to introduce an interface model. Many numerical test cases enable to investigate the stability of the coupling method. (authors)

  1. A method to couple HEM and HRM two-phase flow models

    International Nuclear Information System (INIS)

    Herard, J.M.; Hurisse, O.; Hurisse, O.; Ambroso, A.

    2009-01-01

    We present a method for the unsteady coupling of two distinct two-phase flow models (namely the Homogeneous Relaxation Model, and the Homogeneous Equilibrium Model) through a thin interface. The basic approach relies on recent works devoted to the interfacial coupling of CFD models, and thus requires to introduce an interface model. Many numerical test cases enable to investigate the stability of the coupling method. (authors)

  2. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    Science.gov (United States)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation problem (IGTLP) and an iterated greedy algorithm for the non-permutation problem (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within nearly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research treats the PFSP and non-PFSP together with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
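
    The destruction/construction skeleton of an iterated greedy algorithm for the permutation flow shop is sketched below; it omits the time-lag constraints that are the paper's focus and uses an invented random instance.

        import random

        def makespan(seq, p):
            """Completion time of the last job on the last machine."""
            m = len(p[0])
            c = [0.0] * m
            for j in seq:
                c[0] += p[j][0]
                for k in range(1, m):
                    c[k] = max(c[k], c[k - 1]) + p[j][k]
            return c[-1]

        def iterated_greedy(p, d=3, iters=200, seed=0):
            """Basic destruction/construction IG for the permutation flow shop.
            Time-lag constraints from the paper are omitted in this sketch."""
            rng = random.Random(seed)
            best = list(range(len(p)))
            for _ in range(iters):
                partial = best[:]
                removed = [partial.pop(rng.randrange(len(partial)))
                           for _ in range(d)]                     # destruction
                for j in removed:          # greedy best-position reinsertion
                    cands = [partial[:i] + [j] + partial[i:]
                             for i in range(len(partial) + 1)]
                    partial = min(cands, key=lambda s: makespan(s, p))
                if makespan(partial, p) < makespan(best, p):
                    best = partial
            return best, makespan(best, p)

        # Tiny random instance: 8 jobs x 4 machines.
        rng = random.Random(1)
        proc = [[rng.randint(1, 9) for _ in range(4)] for _ in range(8)]
        print(iterated_greedy(proc))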

  3. Comparative analysis of various methods for modelling permanent magnet machines

    NARCIS (Netherlands)

    Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.

    2017-01-01

    In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air

  4. An alternative method for centrifugal compressor loading factor modelling

    Science.gov (United States)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    In classical design methods, the loading factor at the design point is calculated by one or another empirical formula; performance modelling as a whole is out of consideration. Test data from compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function - the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performances of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of ±1.5%. This alternative model of loading factor performance is included in new versions of the Universal Modelling Method.

  5. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel`son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  6. Method for modeling social care processes for national information exchange.

    Science.gov (United States)

    Miettinen, Aki; Mykkänen, Juha; Laaksonen, Maarit

    2012-01-01

    Finnish social services include 21 service commissions of social welfare including Adoption counselling, Income support, Child welfare, Services for immigrants and Substance abuse care. This paper describes the method used for process modeling in the National project for IT in Social Services in Finland (Tikesos). The process modeling in the project aimed to support common national target state processes from the perspective of national electronic archive, increased interoperability between systems and electronic client documents. The process steps and other aspects of the method are presented. The method was developed, used and refined during the three years of process modeling in the national project.

  7. Solving the nuclear shell model with an algebraic method

    International Nuclear Information System (INIS)

    Feng, D.H.; Pan, X.W.; Guidry, M.

    1997-01-01

    We illustrate algebraic methods in the nuclear shell model through a concrete example, the fermion dynamical symmetry model (FDSM). We use this model to introduce important concepts such as dynamical symmetry, symmetry breaking, effective symmetry, and diagonalization within a higher-symmetry basis. (orig.)

  8. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  9. Numerical Modelling of the Special Light Source with Novel R-FEM Method

    Directory of Open Access Journals (Sweden)

    Pavel Fiala

    2008-01-01

    Full Text Available This paper presents new directions in the modelling of lighting systems and gives an overview of modelling methods for them. The novel R-FEM method is described, which is a combination of the Radiosity method and the Finite Element Method (FEM). The paper contains modelling results and their verification by experimental measurements and by Matlab simulation of the R-FEM method.

  10. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

    Full Text Available In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definitions from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form an executable model of the mission profile. Finally, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides methodological guidance for combat mission profile design.

  11. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  12. Failed rib region prediction in a human body model during crash events with precrash braking.

    Science.gov (United States)

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. The failed region patterns between models are similar; however, there are differences that arise due to stress reduced from element elimination that cause probabilistic failed regions to continue to rise after no deterministic failed region would be

  13. Method for modeling post-mortem biometric 3D fingerprints

    Science.gov (United States)

    Rajeev, Srijith; Shreyas, Kamath K. M.; Agaian, Sos S.

    2016-05-01

    Despite the advancements of fingerprint recognition in 2-D and 3-D domain, authenticating deformed/post-mortem fingerprints continue to be an important challenge. Prior cleansing and reconditioning of the deceased finger is required before acquisition of the fingerprint. The victim's finger needs to be precisely and carefully operated by a medium to record the fingerprint impression. This process may damage the structure of the finger, which subsequently leads to higher false rejection rates. This paper proposes a non-invasive method to perform 3-D deformed/post-mortem finger modeling, which produces a 2-D rolled equivalent fingerprint for automated verification. The presented novel modeling method involves masking, filtering, and unrolling. Computer simulations were conducted on finger models with different depth variations obtained from Flashscan3D LLC. Results illustrate that the modeling scheme provides a viable 2-D fingerprint of deformed models for automated verification. The quality and adaptability of the obtained unrolled 2-D fingerprints were analyzed using NIST fingerprint software. Eventually, the presented method could be extended to other biometric traits such as palm, foot, tongue etc. for security and administrative applications.

  14. Dynamic model based on Bayesian method for energy security assessment

    International Nuclear Information System (INIS)

    Augutis, Juozas; Krikštolaitis, Ričardas; Pečiulytė, Sigita; Žutautaitė, Inga

    2015-01-01

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of the dynamic indicator model to energy system development scenarios. • Expert judgement involvement using the Bayesian method. - Abstract: The methodology for dynamic indicator model construction and forecasting of indicators for the assessment of the energy security level is presented in this article. An indicator is a special index which provides numerical values for factors important to the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of the indicators' variation, taking into account changes in system configuration. Energy system development is usually based on the construction of new facilities. Since the parameters of changes in the new system are not exactly known, information about their influence on indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model of the influence of new factors on the indicators, using the Bayesian method.
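
    The Bayesian adjustment step can be illustrated with the simplest conjugate case: a normal forecast from a historical indicator model combined with a normal expert assessment of the new facility's effect (all numbers invented; the paper's actual model is more elaborate).

        import numpy as np

        # Hypothetical indicator: the historical model gives a prior forecast;
        # expert judgement about the new facility enters as a second noisy estimate.
        prior_mean, prior_var = 75.0, 16.0     # forecast from the historical model
        expert_mean, expert_var = 68.0, 9.0    # expert assessment of the new system

        # Conjugate normal-normal update (precision weighting).
        post_var = 1.0 / (1.0 / prior_var + 1.0 / expert_var)
        post_mean = post_var * (prior_mean / prior_var + expert_mean / expert_var)
        print(f"posterior indicator: {post_mean:.1f} +/- {np.sqrt(post_var):.1f}")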

  15. Fuzzy cross-model cross-mode method and its application to update the finite element model of structures

    International Nuclear Information System (INIS)

    Liu Yang; Xu Dejian; Li Yan; Duan Zhongdong

    2011-01-01

    As a novel updating technique, the cross-model cross-mode (CMCM) method possesses high efficiency and the capability of flexibly selecting updating parameters. However, the success of this method depends on the accuracy of the measured modal shapes. Usually, the measured modal shapes are inaccurate, since many kinds of measurement noise are inevitable. Furthermore, complete test modal shapes are required by the CMCM method, so calculation errors may be introduced into the measured modal shapes by the modal expansion or model reduction technique. Therefore, this algorithm faces challenges in updating the finite element (FE) models of practical complex structures. In this study, the fuzzy CMCM method is proposed in order to weaken the effect of errors in the measured modal shapes on the updated results. Two simulated examples are then used to compare the performance of the fuzzy CMCM method with the CMCM method. The test results show that the proposed method is more promising for updating the FE model of practical structures than the CMCM method.

  16. A discontinuous Galerkin method on kinetic flocking models

    OpenAIRE

    Tan, Changhui

    2014-01-01

    We study kinetic representations of flocking models. They arise from agent-based models for self-organized dynamics, such as Cucker-Smale and Motsch-Tadmor models. We prove flocking behavior for the kinetic descriptions of flocking systems, which indicates a concentration in velocity variable in infinite time. We propose a discontinuous Galerkin method to treat the asymptotic $\\delta$-singularity, and construct high order positive preserving scheme to solve kinetic flocking systems.

  17. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on ... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  18. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    Science.gov (United States)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used for seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of wavefield propagation for large time steps.
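
    A one-dimensional analog of the scheme's structure, symplectic updates in time with Fourier (spectral) derivatives in space, is sketched below for the acoustic wave equation on a periodic domain; it illustrates the idea only and is not the authors' 3D implementation.

        import numpy as np

        # 1-D acoustic wave u_tt = c^2 u_xx on a periodic domain: Fourier
        # derivatives in space, symplectic (Euler pair) updates in time.
        nx, c, dt, n_steps = 256, 1.0, 2e-3, 2000
        x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
        k = np.fft.fftfreq(nx, d=x[1] - x[0]) * 2 * np.pi

        u = np.exp(-40 * (x - np.pi) ** 2)      # initial pressure pulse
        v = np.zeros(nx)                        # velocity field u_t

        def u_xx(u):
            """Spectral second derivative via FFT (the Fourier part of the scheme)."""
            return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

        for _ in range(n_steps):
            # Symplectic Euler pair: update momentum with the current position,
            # then position with the new momentum; preserves the discrete
            # energy structure, keeping long runs stable.
            v += dt * c ** 2 * u_xx(u)
            u += dt * v

        print("max |u| after propagation:", np.abs(u).max())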

  19. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Full Text Available implementations of the DLM are however not very versatile in terms of geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...