WorldWideScience

Sample records for models permutation methods

  1. Permutations

    International Nuclear Information System (INIS)

    Arnold, Vladimir I

    2009-01-01

    Decompositions into cycles for random permutations of a large number of elements are very different (in their statistics) from the same decompositions for algebraic permutations (defined by linear or projective transformations of finite sets). This paper presents tables giving both these and other statistics, as well as a comparison of them with the statistics of involutions or permutations with all their cycles of even length. The inclusions of a point in cycles of various lengths turn out to be equiprobable events for random permutations. The number of permutations of 2N elements with all cycles of even length turns out to be the square of an integer (namely, of (2N-1)!!). The number of cycles of projective permutations (over a field with an odd prime number of elements) is always even. These and other empirically discovered theorems are proved in the paper. Bibliography: 6 titles.
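Two of the abstract's claims are easy to check by brute force for small cases. The sketch below is illustrative and not from the paper: it verifies that the number of permutations of 2N elements with all cycles of even length equals ((2N-1)!!)^2, and that the cycle containing a fixed element takes each length 1..n equally often.

```python
# Brute-force checks of two claims from the abstract (illustrative sketch).
from itertools import permutations
from collections import Counter

def cycle_lengths(p):
    seen, out = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            out.append(length)
    return out

def double_factorial(m):
    r = 1
    while m > 1:
        r *= m
        m -= 2
    return r

def cycle_len_containing(p, i=0):
    j, length = p[i], 1
    while j != i:
        j = p[j]
        length += 1
    return length

# All cycles even: the count over S_{2N} matches ((2N-1)!!)^2.
for N in (1, 2, 3):
    n = 2 * N
    count = sum(all(l % 2 == 0 for l in cycle_lengths(p))
                for p in permutations(range(n)))
    print(n, count, double_factorial(2 * N - 1) ** 2)

# Equiprobable inclusion: over S_5 the cycle containing element 0 has each
# length 1..5 exactly 4! = 24 times.
print(Counter(cycle_len_containing(p) for p in permutations(range(5))))
```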

  2. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...

  3. A chronicle of permutation statistical methods 1920–2000, and beyond

    CERN Document Server

    Berry, Kenneth J; Mielke Jr, Paul W

    2014-01-01

    The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally-intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, ana...

  4. Tensor models, Kronecker coefficients and permutation centralizer algebras

    Science.gov (United States)

    Geloun, Joseph Ben; Ramgoolam, Sanjaye

    2017-11-01

    We show that the counting of observables and correlators for a 3-index tensor model is organized by the structure of a family of permutation centralizer algebras. These algebras are shown to be semi-simple and their Wedderburn-Artin decompositions into matrix blocks are given in terms of Clebsch-Gordan coefficients of symmetric groups. The matrix basis for the algebras also gives an orthogonal basis for the tensor observables which diagonalizes the Gaussian two-point functions. The centres of the algebras are associated with correlators which are expressible in terms of Kronecker coefficients (Clebsch-Gordan multiplicities of symmetric groups). The color-exchange symmetry present in the Gaussian model, as well as a large class of interacting models, is used to refine the description of the permutation centralizer algebras. This discussion is extended to a general number of colors d: it is used to prove the integrality of an infinite family of number sequences related to color-symmetrizations of colored graphs, and expressible in terms of symmetric group representation theory data. Generalizing a connection between matrix models and Belyi maps, correlators in Gaussian tensor models are interpreted in terms of covers of singular 2-complexes. There is an intriguing difference between matrix and higher-rank tensor models in the computational complexity of superficially comparable correlators of observables parametrized by Young diagrams.

  5. Finite state model and compatibility theory - New analysis tools for permutation networks

    Science.gov (United States)

    Huang, S.-T.; Tripathi, S. K.

    1986-01-01

    A simple model describing the fundamental operation theory of shuffle-exchange-type permutation networks, the finite permutation machine (FPM), is presented, and theorems which transform the control matrix result to a continuous compatible vector result are developed. It is shown that 2n-1 shuffle-exchange passes are necessary, and that 3n-3 passes are sufficient, to realize all permutations, reducing the sufficient number of passes by two from previous results. The flexibility of the approach is demonstrated by the description of a stack permutation machine (SPM) which can realize all permutations, and by showing that the FPM corresponding to the Benes (1965) network belongs to the SPM class. The FPM corresponding to the network with two cascaded reverse-exchange networks is found to realize all permutations, and a simple mechanism to verify several equivalence relationships of various permutation networks is discussed.
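The pass counts can be sanity-checked for the smallest nontrivial case, N = 4 (n = 2), where the two bounds coincide: 2n-1 = 3n-3 = 3. The brute-force sketch below assumes one common single-stage convention (a perfect shuffle followed by a column of N/2 independently set 2x2 switches per pass); it is not the paper's FPM formalism.

```python
# Enumerate the permutations realizable by repeated shuffle-exchange passes
# on N = 4 inputs (toy check under one common network convention).
from itertools import product

N = 4  # n = log2(N) = 2, so 2n-1 = 3n-3 = 3 passes

def shuffle(x):
    half = len(x) // 2
    out = []
    for a, b in zip(x[:half], x[half:]):
        out += [a, b]          # perfect shuffle interleaves the two halves
    return out

def exchange(x, ctrl):
    x = x[:]
    for j, swap in enumerate(ctrl):   # switch j covers positions 2j, 2j+1
        if swap:
            x[2 * j], x[2 * j + 1] = x[2 * j + 1], x[2 * j]
    return x

def reachable(passes):
    perms = set()
    for ctrls in product(product((0, 1), repeat=N // 2), repeat=passes):
        x = list(range(N))
        for ctrl in ctrls:
            x = exchange(shuffle(x), ctrl)
        perms.add(tuple(x))
    return perms

# Two passes offer only 16 control settings, so some of the 24 permutations
# must be missing; with 3 passes the bounds above say all 24 should appear.
print(len(reachable(2)), len(reachable(3)))
```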

  6. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.

  7. Encoding sequential information in semantic space models: comparing holographic reduced representation and random permutation.

    Science.gov (United States)

    Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
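As a toy illustration of the two operators (not the authors' corpus experiments), the sketch below binds a paired associate with circular convolution and tags a sequence position with a fixed random permutation; the dimensionality and the three-word lexicon are arbitrary choices for the demo.

```python
# Toy comparison of the two binding operators discussed in the paper.
import numpy as np

rng = np.random.default_rng(0)
d = 2048
lexicon = {w: rng.standard_normal(d) / np.sqrt(d) for w in ("dog", "bites", "man")}

def cconv(a, b):                    # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):                  # approximate inverse for unbinding
    return np.concatenate(([a[0]], a[:0:-1]))

# Circular convolution: store the pair (dog, bites), retrieve "bites".
trace = cconv(lexicon["dog"], lexicon["bites"])
probe = cconv(involution(lexicon["dog"]), trace)
print("convolution retrieves:",
      max(lexicon, key=lambda w: lexicon[w] @ probe))     # expect "bites"

# Random permutation: encode an ordered pair by permuting position 2.
perm = rng.permutation(d)
inv = np.argsort(perm)
seq = lexicon["dog"] + lexicon["bites"][perm]   # position 1 plain, position 2 permuted
probe2 = seq[inv]                               # undo the permutation to read slot 2
print("permutation retrieves:",
      max(lexicon, key=lambda w: lexicon[w] @ probe2))    # expect "bites"
```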

  8. A permutation test for the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias

    2010-01-01

    ...of such experiments is whether the observed redundancy gains can be explained by parallel processing of the two stimuli in a race-like fashion. To test the parallel processing model, Miller derived the well-known race model inequality, which has become a routine test for behavioral data in experiments with redundant signals. Several statistical procedures have been used for testing the race model inequality. However, the commonly employed procedure does not control the Type I error. In this article a permutation test is described that keeps the Type I error at the desired level. Simulations show that the power of the test is reasonable even for small samples. The scripts discussed in this article may be downloaded as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.

  9. Finite state model and compatibility theory: New analysis tools for permutation networks

    Energy Technology Data Exchange (ETDEWEB)

    Huang, S.T.; Tripathi, S.K.

    1986-07-01

    In this paper, the authors present a new model, the finite permutation machine (FPM), to describe permutation networks. A set of theorems is developed to capture the theory of operations for permutation networks. Using this new framework, an interesting problem is attacked: are 2n-1 passes of shuffle exchange necessary and sufficient to realize all permutations, where n = log_2 N and N is the number of inputs and outputs interconnected by the network? They prove that to realize all permutations, 2n-1 passes of shuffle exchange are necessary and that 3n-3 passes are sufficient. This reduces the sufficient number of passes by 2 from the best-known result. The Benes network is the best-known network that can realize all permutations. To show the flexibility of the approach, the authors describe a general class of FPM, the stack permutation machine (SPM), which can realize all permutations, and show that the FPM corresponding to the Benes network belongs to the SPM class. They also show that the FPM corresponding to the network with 2 cascaded reverse-exchange networks can realize all permutations. To show the simplicity of the approach, they also present a very simple mechanism to verify several equivalence relationships of various permutation networks.

  10. A discriminative syntactic model for source permutation via tree transduction

    NARCIS (Netherlands)

    Khalilov, M.; Sima'an, K.; Wu, D.

    2010-01-01

    A major challenge in statistical machine translation is mitigating the word order differences between source and target strings. While reordering and lexical translation choices are often conducted in tandem, source string permutation prior to translation is attractive for studying reordering using

  11. Applying Permutation Tests and Multivariate Modification Indices to Configurally Invariant Models That Need Respecification

    Directory of Open Access Journals (Sweden)

    Terrence D. Jorgensen

    2017-08-01

    The assumption of equivalence between measurement-model configurations across groups is typically investigated by evaluating overall fit of the same model simultaneously to multiple samples. However, the null hypothesis (H0) of configural invariance is distinct from the H0 of overall model fit. Permutation tests of configural invariance yield nominal Type I error rates even when a model does not fit perfectly (Jorgensen et al., 2017, in press). When the configural model requires modification, lack of evidence against configural invariance implies that researchers should reconsider their model's structure simultaneously across all groups. Application of multivariate modification indices is therefore proposed to help decide which parameter(s) to free simultaneously in all groups, and I present Monte Carlo simulation results comparing their Type I error control to traditional 1-df modification indices. I use the Holzinger and Swineford (1939) data set to illustrate these methods.

  12. Linear algebra of the permutation invariant Crow-Kimura model of prebiotic evolution.

    Science.gov (United States)

    Bratus, Alexander S; Novozhilov, Artem S; Semenov, Yuri S

    2014-10-01

    A particular case of the famous quasispecies model - the Crow-Kimura model with a permutation invariant fitness landscape - is investigated. Using the fact that the mutation matrix in the case of a permutation invariant fitness landscape has a special tridiagonal form, a change of the basis is suggested such that in the new coordinates a number of analytical results can be obtained. In particular, using the eigenvectors of the mutation matrix as the new basis, we show that the quasispecies distribution approaches a binomial one and give simple estimates for the speed of convergence. Another consequence of the suggested approach is a parametric solution to the system of equations determining the quasispecies. Using this parametric solution we show that our approach leads to exact asymptotic results in some cases, which are not covered by the existing methods. In particular, we are able to present not only the limit behavior of the leading eigenvalue (mean population fitness), but also the exact formulas for the limit quasispecies eigenvector for special cases. For instance, this eigenvector has a geometric distribution in the case of the classical single peaked fitness landscape. On the biological side, we propose a mathematical definition, based on the closeness of the quasispecies to the binomial distribution, which can be used as an operational definition of the notorious error threshold. Using this definition, we suggest two approximate formulas to estimate the critical mutation rate after which the quasispecies delocalization occurs. Copyright © 2014 Elsevier Inc. All rights reserved.
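A minimal numeric sketch of the reduced eigenproblem follows, assuming the standard continuous-time Crow-Kimura form (the notation is ours, not the paper's): sequences of length N are grouped into Hamming classes k = 0..N, the mutation matrix is tridiagonal, and the leading eigenpair of the fitness-plus-mutation matrix gives the mean population fitness and the quasispecies distribution.

```python
# Numeric sketch of the permutation-invariant Crow-Kimura eigenproblem.
import numpy as np

N, mu = 50, 0.2
F = np.zeros((N + 1, N + 1))
F[0, 0] = 10.0                       # single-peaked fitness landscape

M = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    M[k, k] = -N                     # total per-genome mutation outflow
    if k > 0:
        M[k, k - 1] = N - (k - 1)    # class k-1 -> k: mutate a correct site
    if k < N:
        M[k, k + 1] = k + 1          # class k+1 -> k: back-mutate a mutant site

A = F + mu * M
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
q = np.abs(vecs[:, i].real)
q /= q.sum()
print("mean fitness:", vals.real[i])
print("quasispecies head:", q[:5])   # roughly geometric decay off the peak
```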

  13. On the use of permutation in and the performance of a class of nonparametric methods to detect differential gene expression.

    Science.gov (United States)

    Pan, Wei

    2003-07-22

    Recently a class of nonparametric statistical methods, including the empirical Bayes (EB) method, the significance analysis of microarrays (SAM) method and the mixture model method (MMM), have been proposed to detect differential gene expression for replicated microarray experiments conducted under two conditions. All the methods depend on constructing a test statistic Z and a so-called null statistic z. The null statistic z is used to provide a reference distribution for Z such that statistical inference can be accomplished. A common way of constructing z is to apply Z to randomly permuted data. Here we point out that the distribution of z may not approximate the null distribution of Z well, leading to possibly too conservative inference. This observation may apply to other permutation-based nonparametric methods. We propose a new method of constructing a null statistic that aims to estimate the null distribution of a test statistic directly. Using simulated data and real data, we assess and compare the performance of the existing method and our new method when applied in EB, SAM and MMM. Some interesting findings on the operating characteristics of EB, SAM and MMM are also reported. Finally, by combining the ideas of SAM and MMM, we outline a simple nonparametric method based on the direct use of a test statistic and a null statistic.
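The Z-versus-z construction, and the conservativeness the authors warn about, can be seen in a toy two-condition setup (synthetic data, not the EB/SAM/MMM code): when some genes are truly differential, relabeling their columns inflates the spread of the permutation-based null statistic.

```python
# Toy illustration: a test statistic Z per gene, and a "null statistic" z
# from label-permuted data, whose spread exceeds the true null spread.
import numpy as np

rng = np.random.default_rng(1)
genes, per_group = 2000, 6
x = rng.standard_normal((genes, per_group))          # condition 1
y = rng.standard_normal((genes, per_group))          # condition 2
y[:200] += 2.0                                       # 200 truly differential genes

def tstat(a, b):
    se = np.sqrt(a.var(axis=1, ddof=1) / a.shape[1] +
                 b.var(axis=1, ddof=1) / b.shape[1])
    return (a.mean(axis=1) - b.mean(axis=1)) / se

Z = tstat(x, y)

data = np.hstack([x, y])
idx = rng.permutation(2 * per_group)                 # one random relabeling
z = tstat(data[:, idx[:per_group]], data[:, idx[per_group:]])

# The differential genes inflate the permutation reference distribution,
# which is the source of the overly conservative inference described above.
print("sd of z:", z.std(), " sd of Z over null genes:", Z[200:].std())
```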

  14. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Background: Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often intricately correlated. Accurate Type I error control adjusting for multiple testing requires the joint null distribution of the test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of their computational ease and intuitive interpretation. Results: In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005), along with an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion: Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
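The general mechanism - using the joint permutation distribution of a maximum statistic to control the family-wise error over correlated genes - can be sketched as below, with a plain two-group statistic standing in for the authors' spline goodness-of-fit statistic.

```python
# Minimal sketch of permutation-based multiple testing with a max statistic.
import numpy as np

rng = np.random.default_rng(2)
genes, n1, n2, B = 500, 8, 8, 1000
data = rng.standard_normal((genes, n1 + n2))
data[:10, :n1] += 1.5                      # 10 group-dependent genes

def stat(d, g1):
    g2 = np.setdiff1d(np.arange(n1 + n2), g1)
    return np.abs(d[:, g1].mean(1) - d[:, g2].mean(1))

obs = stat(data, np.arange(n1))
maxnull = np.array([stat(data, rng.choice(n1 + n2, n1, replace=False)).max()
                    for _ in range(B)])

# Family-wise adjusted p-value per gene: fraction of permutation max
# statistics that exceed the observed statistic.
adj_p = (1 + (maxnull[None, :] >= obs[:, None]).sum(1)) / (B + 1)
print("genes significant at FWER 0.05:", np.sum(adj_p < 0.05))
```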

  15. Encoding Sequential Information in Vector Space Models of Semantics: Comparing Holographic Reduced Representation and Random Permutation

    OpenAIRE

    Recchia, Gabriel; Jones, Michael; Sahlgren, Magnus; Kanerva, Pentti

    2010-01-01

    Encoding information about the order in which words typically appear has been shown to improve the performance of high-dimensional semantic space models. This requires an encoding operation capable of binding together vectors in an order-sensitive way, and efficient enough to scale to large text corpora. Although both circular convolution and random permutations have been enlisted for this purpose in semantic models, these operations have never been systematically compared. In Experiment 1 we...

  16. Some remarks on permutation type tests in linear models

    Czech Academy of Sciences Publication Activity Database

    Hušková, Marie; Picek, J.

    2004-01-01

    Vol. 24, No. 1 (2004), pp. 151-181 R&D Projects: GA ČR GA201/03/0945; GA ČR GA201/02/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords: hypotheses testing * linear regression models * L1- and L2-procedures Subject RIV: BB - Applied Statistics, Operational Research

  17. Breaking of the overall permutation symmetry in nonlinear optical susceptibilities of one-dimensional periodic dimerized Huckel model

    OpenAIRE

    Xu, Minzhong; Jiang, Shidong

    2005-01-01

    Based on infinite one-dimensional single-electron periodic models of trans-polyacetylene, we show analytically that the overall permutation symmetry of nonlinear optical susceptibilities is, albeit preserved in the molecular systems with only bound states, no longer generally held for the periodic systems. The overall permutation symmetry breakdown provides a fairly natural explanation to the widely observed large deviations of Kleinman symmetry for periodic systems in off-resonant regions. P...

  18. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and the result of feature selection. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
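A compressed sketch of the same pipeline on synthetic data follows (the 243 load features and the exact improved-backward-search rule are not reproduced here): train a forest, rank features by permutation importance, then drop the least important features while the held-out error does not degrade.

```python
# Permutation-importance-driven backward feature selection, toy version.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=800, n_features=40, n_informative=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
pi = permutation_importance(rf, Xte, yte, n_repeats=10, random_state=0)
order = np.argsort(pi.importances_mean)            # least important first

best_err, best_keep = np.inf, None
for drop in range(0, 32, 4):                       # simple backward search
    keep = order[drop:]
    m = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr[:, keep], ytr)
    err = mean_absolute_error(yte, m.predict(Xte[:, keep]))
    if err < best_err:
        best_err, best_keep = err, keep
print("kept", len(best_keep), "features, MAE", round(best_err, 2))
```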

  19. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    OpenAIRE

    Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N.

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Perform...

  20. A new permutation-based method for assessing agreement between two observers making replicated binary readings.

    Science.gov (United States)

    Pan, Yi; Haber, Michael; Barnhart, Huiman X

    2011-04-15

    A new coefficient for assessing agreement between two observers using a permutation-based method is introduced in this article. When observations are binary, this coefficient compares the observed disagreement (probability of discordance) between the observers with its expected value under the hypothesis of individual equivalence. This hypothesis states that for each subject, the conditional distributions of the readings of the two observers are identical, and therefore from a statistical viewpoint it does not matter which observer makes the reading on this subject. Let K and L denote the numbers of replicated observations that are available from observers X and Y, respectively, on a given subject. Then the expected disagreement under individual equivalence for a subject is based on the C(K + L, K) possible assignments of X's and Y's to the K + L observations made on this subject. Simple methods for the estimation of the new coefficient and its standard error are derived. The new coefficient is compared with kappa and the coefficient of individual agreement, which is based on comparing the inter- and intra-observer disagreements. Simulation studies confirm the validity of the estimated coefficient and its standard error. Data from a study involving the evaluation of mammograms by 10 radiologists are used to illustrate this new approach to the evaluation of observer agreement. Copyright © 2010 John Wiley & Sons, Ltd.
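The expected disagreement under individual equivalence can be computed for one subject by direct enumeration of the C(K + L, K) assignments; the closing ratio below is only an illustrative agreement-type summary, not necessarily the paper's exact coefficient.

```python
# Enumerate all assignments of pooled binary readings to observers X and Y.
from itertools import combinations
import numpy as np

def observed_disagreement(x, y):
    return float(np.mean([abs(a - b) for a in x for b in y]))

def expected_disagreement(x, y):
    pooled = list(x) + list(y)
    K, total = len(x), len(x) + len(y)
    vals = []
    for xs in combinations(range(total), K):       # C(K+L, K) assignments
        xv = [pooled[i] for i in xs]
        yv = [pooled[i] for i in range(total) if i not in xs]
        vals.append(observed_disagreement(xv, yv))
    return float(np.mean(vals))

x, y = [1, 1, 0], [0, 0]             # K = 3 readings from X, L = 2 from Y
d_obs = observed_disagreement(x, y)
d_exp = expected_disagreement(x, y)
print(d_obs, d_exp, 1 - d_obs / d_exp)   # illustrative agreement-type ratio
```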

  1. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.

  2. A method for generating permutation distribution of ranks in a k ...

    African Journals Online (AJOL)

    ... in a combinatorial sense the distribution of the ranks is obtained via its generating function. The formulas are defined recursively to speed up computations using the computer algebra system Mathematica. Key words: Partitions, generating functions, combinatorics, permutation test, exact tests, computer algebra, k-sample, ...

  3. Permutation groups

    CERN Document Server

    Passman, Donald S

    2012-01-01

    This volume by a prominent authority on permutation groups consists of lecture notes that provide a self-contained account of distinct classification theorems. A ready source of frequently quoted but usually inaccessible theorems, it is ideally suited for professional group theorists as well as students with a solid background in modern algebra. The three-part treatment begins with an introductory chapter and advances to an economical development of the tools of basic group theory, including group extensions, transfer theorems, and group representations and characters. The final chapter features ...

  4. Permutation-based variance component test in generalized linear mixed model with application to multilocus genetic association study.

    Science.gov (United States)

    Zeng, Ping; Zhao, Yang; Li, Hongliang; Wang, Ting; Chen, Feng

    2015-04-22

    In many medical studies the likelihood ratio test (LRT) has been widely applied to examine whether the random effects variance component is zero within the mixed effects models framework, whereas little work on likelihood-ratio-based variance component tests has been done in generalized linear mixed models (GLMM), where the response is discrete and the log-likelihood cannot be computed exactly. Before applying the LRT for a variance component in GLMM, several difficulties need to be overcome, including the computation of the log-likelihood, the parameter estimation and the derivation of the null distribution for the LRT statistic. To overcome these problems, in this paper we make use of the penalized quasi-likelihood algorithm and calculate the LRT statistic based on the resulting working response and the quasi-likelihood. The permutation procedure is used to obtain the null distribution of the LRT statistic. We evaluate the permutation-based LRT via simulations and compare it with the score-based variance component test and the tests based on the mixture of chi-square distributions. Finally we apply the permutation-based LRT to multilocus association analysis in a case-control study, where the problem can be investigated under the framework of the logistic mixed effects model. The simulations show that the permutation-based LRT can effectively control the type I error rate, while the score test is sometimes slightly conservative and the tests based on mixtures cannot maintain the type I error rate. Our studies also show that the permutation-based LRT has higher power than these existing tests and still maintains a reasonably high power even when the random effects do not follow a normal distribution. The application to GAW17 data also demonstrates that the proposed LRT has a higher probability to identify the association signals than the score test and the tests based on mixtures. In the present paper the permutation-based LRT was developed for variance ...

  5. Fast bootstrapping and permutation testing for assessing reproducibility and interpretability of multivariate fMRI decoding models.

    Directory of Open Access Journals (Sweden)

    Bryan R Conroy

    Multivariate decoding models are increasingly being applied to functional magnetic resonance imaging (fMRI) data to interpret the distributed neural activity in the human brain. These models are typically formulated to optimize an objective function that maximizes decoding accuracy. For decoding models trained on full-brain data, this can result in multiple models that yield the same classification accuracy, though some may be more reproducible than others, i.e., small changes to the training set may result in very different voxels being selected. This issue of reproducibility can be partially controlled by regularizing the decoding model. Regularization, along with the cross-validation used to estimate decoding accuracy, typically requires retraining many (often on the order of thousands) of related decoding models. In this paper we describe an approach that uses a combination of bootstrapping and permutation testing to construct both a measure of cross-validated prediction accuracy and model reproducibility of the learned brain maps. This requires re-training our classification method on many re-sampled versions of the fMRI data. Given the size of fMRI datasets, this is normally a time-consuming process. Our approach leverages an algorithm called fast simultaneous training of generalized linear models (FaSTGLZ) to create a family of classifiers in the space of accuracy vs. reproducibility. The convex hull of this family of classifiers can be used to identify a subset of Pareto optimal classifiers, with a single-optimal classifier selectable based on the relative cost of accuracy vs. reproducibility. We demonstrate our approach using full-brain analysis of elastic-net classifiers trained to discriminate stimulus type in an auditory and visual oddball event-related fMRI design. Our approach and results argue for a computational approach to fMRI decoding models in which the value of the interpretation of the decoding model ultimately depends upon optimizing a ...

  6. EPC: A Provably Secure Permutation Based Compression Function

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid

    2010-01-01

    The security of permutation-based hash functions in the ideal permutation model has been studied when the input length of the compression function is larger than the input length of the permutation function. In this paper, we consider permutation-based compression functions that have input lengths sh...

  7. Genome Reshuffling for Advanced Intercross Permutation (GRAIP: simulation and permutation for advanced intercross population analysis.

    Directory of Open Access Journals (Sweden)

    Jeremy L Peirce

    2008-04-01

    Advanced intercross lines (AIL) are segregating populations created using a multi-generation breeding protocol for fine mapping complex trait loci (QTL) in mice and other organisms. Applying QTL mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of AIL family structure in which final generations have been expanded and leads to inappropriately low significance thresholds. The critical problem with naïve mapping approaches in AIL populations is that the individual is not an exchangeable unit. The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels, which are corrected using GRAIP. GRAIP also detects an established hippocampus weight locus and a new locus, Hipp9a. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. The effect of family structure has immediate implications for the ...

  8. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): simulation and permutation for advanced intercross population analysis.

    Science.gov (United States)

    Peirce, Jeremy L; Broman, Karl W; Lu, Lu; Chesler, Elissa J; Zhou, Guomin; Airey, David C; Birmingham, Amanda E; Williams, Robert W

    2008-04-23

    Advanced intercross lines (AIL) are segregating populations created using a multi-generation breeding protocol for fine mapping complex trait loci (QTL) in mice and other organisms. Applying QTL mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of AIL family structure in which final generations have been expanded and leads to inappropriately low significance thresholds. The critical problem with naïve mapping approaches in AIL populations is that the individual is not an exchangeable unit. The effect of family structure has immediate implications for the optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels, which are corrected using GRAIP. GRAIP also detects an established hippocampus weight locus and a new locus, Hipp9a. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. The effect of family structure has immediate implications for the optimal AIL ...

  9. The effect of alternative permutation testing strategies on the performance of multifactor dimensionality reduction

    Directory of Open Access Journals (Sweden)

    Motsinger-Reif Alison A

    2008-12-01

    Background: Multifactor Dimensionality Reduction (MDR) is a novel method developed to detect gene-gene interactions in case-control association analysis by exhaustively searching multi-locus combinations. While the end-goal of analysis is hypothesis generation, significance testing is employed to indicate statistical interest in a resulting model. Because the underlying distribution for the null hypothesis of no association is unknown, non-parametric permutation testing is used. Lately, there has been more emphasis on selecting all statistically significant models at the end of MDR analysis in order to avoid missing a true signal. This approach opens up questions about the permutation testing procedure. Traditionally omnibus permutation testing is used, where one permutation distribution is generated for all models. An alternative is n-locus permutation testing, where a separate distribution is created for each n-level of interaction tested. Findings: In this study, we show that the false positive rate for the MDR method is at or below a selected alpha level, and demonstrate the conservative nature of omnibus testing. We compare the power and false positive rates of both permutation approaches and find omnibus permutation testing optimal for preserving power while protecting against false positives. Conclusion: Omnibus permutation testing should be used with the MDR method.

  10. The effect of alternative permutation testing strategies on the performance of multifactor dimensionality reduction

    Science.gov (United States)

    Motsinger-Reif, Alison A

    2008-01-01

    Background Multifactor Dimensionality Reduction (MDR) is a novel method developed to detect gene-gene interactions in case-control association analysis by exhaustively searching multi-locus combinations. While the end-goal of analysis is hypothesis generation, significance testing is employed to indicate statistical interest in a resulting model. Because the underlying distribution for the null hypothesis of no association is unknown, non-parametric permutation testing is used. Lately, there has been more emphasis on selecting all statistically significant models at the end of MDR analysis in order to avoid missing a true signal. This approach opens up questions about the permutation testing procedure. Traditionally omnibus permutation testing is used, where one permutation distribution is generated for all models. An alternative is n-locus permutation testing, where a separate distribution is created for each n-level of interaction tested. Findings In this study, we show that the false positive rate for the MDR method is at or below a selected alpha level, and demonstrate the conservative nature of omnibus testing. We compare the power and false positive rates of both permutation approaches and find omnibus permutation testing optimal for preserving power while protecting against false positives. Conclusion Omnibus permutation testing should be used with the MDR method. PMID:19116021

  11. The effect of alternative permutation testing strategies on the performance of multifactor dimensionality reduction.

    Science.gov (United States)

    Motsinger-Reif, Alison A

    2008-12-30

    Multifactor Dimensionality Reduction (MDR) is a novel method developed to detect gene-gene interactions in case-control association analysis by exhaustively searching multi-locus combinations. While the end-goal of analysis is hypothesis generation, significance testing is employed to indicate statistical interest in a resulting model. Because the underlying distribution for the null hypothesis of no association is unknown, non-parametric permutation testing is used. Lately, there has been more emphasis on selecting all statistically significant models at the end of MDR analysis in order to avoid missing a true signal. This approach opens up questions about the permutation testing procedure. Traditionally omnibus permutation testing is used, where one permutation distribution is generated for all models. An alternative is n-locus permutation testing, where a separate distribution is created for each n-level of interaction tested. In this study, we show that the false positive rate for the MDR method is at or below a selected alpha level, and demonstrate the conservative nature of omnibus testing. We compare the power and false positive rates of both permutation approaches and find omnibus permutation testing optimal for preserving power while protecting against false positives. Omnibus permutation testing should be used with the MDR method.
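The difference between the two permutation strategies can be illustrated schematically (MDR itself is not implemented here; the per-order "best model" statistics are stubbed with random values): omnibus testing builds one null distribution from the best model over all interaction orders, so its threshold is the more conservative of the two.

```python
# Omnibus vs. n-locus permutation thresholds, schematic sketch only.
import numpy as np

rng = np.random.default_rng(3)
B, orders = 1000, (1, 2, 3)

# Stub: best accuracy of the best n-locus model in each of B permutations.
null = {n: 0.5 + 0.05 * n * rng.random(B) for n in orders}

nlocus_thresh = {n: np.quantile(null[n], 0.95) for n in orders}
omnibus = np.max(np.vstack([null[n] for n in orders]), axis=0)
omnibus_thresh = np.quantile(omnibus, 0.95)

print({n: round(t, 3) for n, t in nlocus_thresh.items()})
print("omnibus threshold:", round(omnibus_thresh, 3))  # >= every n-locus one
```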

  12. Determination of Pavement Rehabilitation Activities through a Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Sangyum Lee

    2013-01-01

    This paper presents a mathematical programming model for optimal pavement rehabilitation planning. The model maximized the rehabilitation area through a newly developed permutation algorithm, based on the procedures outlined in the harmony search (HS) algorithm. Additionally, the proposed algorithm was based on an optimal solution method for the problem of multilocation rehabilitation activities on pavement structure, using empirical deterioration and rehabilitation effectiveness models, according to a limited maintenance budget. Thus, nonlinear pavement performance and rehabilitation activity decision models were used to maximize the objective functions of the rehabilitation area within a limited budget, through the permutation algorithm. Our results showed that the heuristic permutation algorithm provided a good optimum in terms of maximizing the rehabilitation area, compared with the worst-first maintenance method currently used in Seoul.

  13. Permutationally invariant state reconstruction

    DEFF Research Database (Denmark)

    Moroder, Tobias; Hyllus, Philipp; Tóth, Géza

    2012-01-01

    Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem employing a particular representation of permutationally invariant states known from spin coupling combined with convex...

  14. Permutation Complexity in Dynamical Systems

    CERN Document Server

    Amigo, Jose

    2010-01-01

    The study of permutation complexity can be envisioned as a new kind of symbolic dynamics whose basic blocks are ordinal patterns, that is, permutations defined by the order relations among points in the orbits of dynamical systems. Since its inception in 2002 the concept of permutation entropy has sparked a new branch of research in particular regarding the time series analysis of dynamical systems that capitalizes on the order structure of the state space. Indeed, on one hand ordinal patterns and periodic points are closely related, yet ordinal patterns are amenable to numerical methods, while periodicity is not. Another interesting feature is that since it can be shown that random (unconstrained) dynamics has no forbidden patterns with probability one, their existence can be used as a fingerprint to identify any deterministic origin of orbit generation. This book is primarily addressed to researchers working in the field of nonlinear dynamics and complex systems, yet will also be suitable for graduate stude...
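A self-contained sketch of the basic quantities described here - ordinal patterns, normalized permutation entropy, and forbidden patterns as a fingerprint of determinism - using a chaotic logistic map against white noise (toy signals, not from the book):

```python
# Permutation entropy from ordinal patterns, with a forbidden-pattern count.
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, order=3):
    patterns = Counter(tuple(np.argsort(x[i:i + order]))
                       for i in range(len(x) - order + 1))
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order)), patterns  # normalized to [0, 1]

rng = np.random.default_rng(4)
logistic = np.empty(5000)
logistic[0] = 0.4
for i in range(1, len(logistic)):                 # chaotic logistic map x -> 4x(1-x)
    logistic[i] = 4 * logistic[i - 1] * (1 - logistic[i - 1])
noise = rng.standard_normal(5000)

h_map, p_map = permutation_entropy(logistic)
h_noise, _ = permutation_entropy(noise)
print("logistic map:", round(h_map, 3), "noise:", round(h_noise, 3))
print("forbidden order-3 patterns in the map:", 6 - len(p_map))
```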

  15. Combinatorics of permutations

    CERN Document Server

    Bona, Miklos

    2012-01-01

    A Unified Account of Permutations in Modern Combinatorics A 2006 CHOICE Outstanding Academic Title, the first edition of this bestseller was lauded for its detailed yet engaging treatment of permutations. Providing more than enough material for a one-semester course, Combinatorics of Permutations, Second Edition continues to clearly show the usefulness of this subject for both students and researchers. Expanded Chapters Much of the book has been significantly revised and extended. This edition includes a new section on alternating permutations and new material on multivariate applications of t

  16. Permutation and parametric tests for effect sizes in voxel-based morphometry of gray matter volume in brain structural MRI.

    Science.gov (United States)

    Dickie, David A; Mikhael, Shadia; Job, Dominic E; Wardlaw, Joanna M; Laidlaw, David H; Bastin, Mark E

    2015-12-01

    Permutation testing has been widely implemented in voxel-based morphometry (VBM) tools. However, this type of non-parametric inference has yet to be thoroughly compared with traditional parametric inference in VBM studies of brain structure. Here we compare both types of inference and investigate what influence the number of permutations in permutation testing has on results in an exemplar study of how gray matter proportion changes with age in a group of working-age adults. High resolution T1-weighted volume scans were acquired from 80 healthy adults aged 25-64 years. Using a validated VBM procedure and voxel-based permutation testing for the Pearson product-moment coefficient, the effect sizes of changes in gray matter proportion with age were assessed using traditional parametric and permutation testing inference with 100, 500, 1000, 5000, 10000 and 20000 permutations. The statistical significance threshold was set at P < 0.05. Permutation testing identified more statistically significant voxels than traditional parametric inference (N = 3221 voxels). Permutation testing with 10000 (N = 6251 voxels) and 20000 (N = 6233 voxels) permutations produced clusters that were generally consistent with each other. However, with 1000 permutations there were approximately 20% more statistically significant voxels (N = 7117 voxels) than with ≥10000 permutations. Permutation testing inference may provide a more sensitive method than traditional parametric inference for identifying age-related differences in gray matter proportion. Based on the results reported here, at least 10000 permutations should be used in future univariate VBM studies investigating age-related changes in gray matter to avoid potential false findings. Additional studies using permutation testing in large imaging databanks are required to address the impact of model complexity, multivariate analysis, number of observations, sampling bias and data quality on the accuracy with which subtle differences in brain structure associated with normal aging can be identified. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    International Nuclear Information System (INIS)

    Li, Rui; Wang, Jun

    2016-01-01

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of fluctuation behaviors of the real stock markets and the proposed price model are mainly explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) with MWPE is also investigated. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which exhibits that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate stock market dynamics. • MWPE is employed to explore the complexity of fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.
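The LZ76 phrase counting underlying this kind of complexity analysis can be sketched in a few lines, applied here to a binarized toy return series rather than the paper's voter-model prices:

```python
# Lempel-Ziv (LZ76) complexity of a binarized return series, toy sketch.
import numpy as np

def lz76_complexity(s):
    """Count the phrases in an LZ76-style parsing of a binary string."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # Extend the current phrase while it already occurs earlier in the
        # string (reproduction may overlap the phrase itself).
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

rng = np.random.default_rng(5)
returns = rng.standard_normal(4000)
binary = "".join("1" if r > 0 else "0" for r in returns)

c, n = lz76_complexity(binary), len(binary)
normalized = c * np.log2(n) / n      # close to 1 for a fully random sequence
print("phrases:", c, "normalized LZ complexity:", round(normalized, 3))
```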

  18. Development of isothermal-isobaric replica-permutation method for molecular dynamics and Monte Carlo simulations and its application to reveal temperature and pressure dependence of folded, misfolded, and unfolded states of chignolin

    Science.gov (United States)

    Yamauchi, Masataka; Okumura, Hisashi

    2017-11-01

    We developed a two-dimensional replica-permutation molecular dynamics method in the isothermal-isobaric ensemble. The replica-permutation method is a better alternative to the replica-exchange method; it was originally developed in the canonical ensemble. This method employs the Suwa-Todo algorithm, instead of the Metropolis algorithm, to perform permutations of temperatures and pressures among more than two replicas so that the rejection ratio can be minimized. We showed that the isothermal-isobaric replica-permutation method achieves better sampling efficiency than the isothermal-isobaric replica-exchange method and the infinite swapping method. We applied this method to a β-hairpin mini protein, chignolin. In this simulation, we observed not only the folded state but also the misfolded state. We calculated the temperature and pressure dependence of the fractions of the folded, misfolded, and unfolded states. Differences in partial molar enthalpy, internal energy, entropy, partial molar volume, and heat capacity were also determined and agreed well with experimental data. We observed a new phenomenon in which misfolded chignolin becomes more stable under high-pressure conditions. We also revealed the mechanism of this stability as follows: the TYR2 and TRP9 side chains cover the hydrogen bonds that form the β-hairpin structure, and the hydrogen bonds are protected from the water molecules that approach the protein as the pressure increases.

  19. A Novel Method of Fault Diagnosis for Rolling Bearing Based on Dual Tree Complex Wavelet Packet Transform and Improved Multiscale Permutation Entropy

    Directory of Open Access Journals (Sweden)

    Guiji Tang

    2016-01-01

    A novel method of fault diagnosis for rolling bearings, which combines the dual tree complex wavelet packet transform (DTCWPT), the improved multiscale permutation entropy (IMPE), and the linear local tangent space alignment (LLTSA) with the extreme learning machine (ELM), is put forward in this paper. In this method, in order to effectively discover the underlying feature information, DTCWPT, which has attractive properties such as near shift-invariance and reduced aliasing, is first utilized to decompose the original signal into a set of subband signals. Then, IMPE, which is designed to reduce the variability of entropy measures, is applied to characterize the properties of each obtained subband signal at different scales. Furthermore, the feature vectors are constructed by combining the IMPE of each subband signal. After the feature vector construction, LLTSA is employed to compress the high-dimensional vectors of the training and testing samples into low-dimensional vectors with better distinguishability. Finally, the ELM classifier is used to automatically accomplish the condition identification with the low-dimensional feature vectors. The experimental data analysis results validate the effectiveness of the presented diagnosis method and demonstrate that this method can be applied to distinguish the different fault types and fault degrees of rolling bearings.

  20. Permutation Tests for Stochastic Ordering and ANOVA

    CERN Document Server

    Basso, Dario; Salmaso, Luigi; Solari, Aldo

    2009-01-01

    Permutation testing for multivariate stochastic ordering and ANOVA designs is a fundamental issue in many scientific fields such as medicine, biology, pharmaceutical studies, engineering, economics, psychology, and social sciences. This book presents advanced methods and related R codes to perform complex multivariate analyses

  1. Visual recognition of permuted words

    Science.gov (United States)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

    In the current study we examine how letter permutation affects visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental-level processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison between the reading behavior of people in these languages is also presented. We present our study in the context of dual-route theories of reading, and it is observed that dual-route theory is consistent with our hypothesized distinction in the underlying cognitive behavior for reading permuted and non-permuted words. We conducted three experiments in lexical decision tasks to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and a t-test to determine the significant differences in response time latencies for the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% in the case of Urdu and 11% in the case of German. We also found a considerable difference in reading behavior for cursive and alphabetic languages, and it is observed that reading of Urdu is comparatively slower than reading of German due to the characteristics of its cursive script.

  2. Markov Chains on Orbits of Permutation Groups

    OpenAIRE

    Niepert, Mathias

    2014-01-01

    We present a novel approach to detecting and utilizing symmetries in probabilistic graphical models with two main contributions. First, we present a scalable approach to computing generating sets of permutation groups representing the symmetries of graphical models. Second, we introduce orbital Markov chains, a novel family of Markov chains leveraging model symmetries to reduce mixing times. We establish an insightful connection between model symmetries and rapid mixing of orbital Markov chai...

  3. A Comparison of Multiscale Permutation Entropy Measures in On-Line Depth of Anesthesia Monitoring.

    Science.gov (United States)

    Su, Cui; Liang, Zhenhu; Li, Xiaoli; Li, Duan; Li, Yongwang; Ursino, Mauro

    2016-01-01

    Multiscale permutation entropy (MSPE) has become an interesting tool to explore neurophysiological mechanisms in recent years. In this study, six MSPE measures were proposed for on-line depth of anesthesia (DoA) monitoring to quantify the anesthetic effect on real-time EEG recordings. The performance of these measures in describing the transient characteristics of simulated neural populations and clinical anesthesia EEG was evaluated and compared. Six MSPE algorithms - derived from Shannon permutation entropy (SPE), Renyi permutation entropy (RPE) and Tsallis permutation entropy (TPE) combined with the decomposition procedures of the coarse-graining (CG) method and moving average (MA) analysis - were studied. A thalamo-cortical neural mass model (TCNMM) was used to generate noise-free EEG under anesthesia to quantitatively assess the robustness of each MSPE measure against noise. Then, the clinical anesthesia EEG recordings from 20 patients were analyzed with these measures. To validate their effectiveness, the abilities of the six measures were compared in terms of tracking the dynamical changes in EEG data and performance in state discrimination. The Pearson correlation coefficient (R) was used to assess the relationship among MSPE measures. CG-based MSPEs failed in on-line DoA monitoring at multiscale analysis. In on-line EEG analysis, the MA-based MSPE measures at 5 decomposed scales could track the transient changes of EEG recordings and statistically distinguish the awake state, unconsciousness and recovery of consciousness (RoC) state significantly. Compared to single-scale SPE and RPE, MSPEs had better anti-noise ability, and MA-RPE at scale 5 performed best in this aspect. MA-TPE outperformed other measures with faster tracking speed of the loss of consciousness. MA-based multiscale permutation entropies have the potential for on-line anesthesia EEG analysis with their simple computation and sensitivity to drug effect changes. CG-based multiscale permutation ...
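The two decomposition procedures being compared are simple to state in code. The sketch below (toy signal, Shannon-type entropy only) also shows one structural difference: at scale s the MA series keeps length n - s + 1 while the CG series shrinks to about n/s, which is one reason MA-based measures remain usable at higher scales on short recordings.

```python
# Coarse-graining (CG) vs. moving average (MA) multiscale permutation entropy.
import math
from collections import Counter
import numpy as np

def perm_entropy(x, order=3):
    c = Counter(tuple(np.argsort(x[i:i + order]))
                for i in range(len(x) - order + 1))
    t = sum(c.values())
    h = -sum(v / t * math.log(v / t) for v in c.values())
    return h / math.log(math.factorial(order))

def coarse_grain(x, scale):
    m = len(x) // scale
    return x[:m * scale].reshape(m, scale).mean(axis=1)   # non-overlapping windows

def moving_average(x, scale):
    return np.convolve(x, np.ones(scale) / scale, mode="valid")  # overlapping windows

rng = np.random.default_rng(6)
signal = np.cumsum(rng.standard_normal(4000))    # toy 1/f-like stand-in for EEG

for scale in (1, 2, 5):
    print(scale,
          round(perm_entropy(coarse_grain(signal, scale)), 3),
          round(perm_entropy(moving_average(signal, scale)), 3))
```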

  4. Secure physical layer using dynamic permutations in cognitive OFDMA systems

    DEFF Research Database (Denmark)

    Meucci, F.; Wardana, Satya Ardhy; Prasad, Neeli R.

    2009-01-01

    This paper proposes a novel lightweight mechanism for a secure Physical (PHY) layer in a Cognitive Radio Network (CRN) using Orthogonal Frequency Division Multiplexing (OFDM). User data symbols are mapped over the physical subcarriers with a permutation formula. The PHY layer is secured with a random and dynamic subcarrier permutation which is based on a single piece of pre-shared information and depends on Dynamic Spectrum Access (DSA). The dynamic subcarrier permutation varies over time, geographical location and environment status, resulting in a very robust protection that ensures ... The properties of the permutations are analyzed for several DSA patterns. Simulations are performed according to the parameters of the IEEE 802.16e system model. The proposed securing mechanism provides intrinsic PHY layer security and can be easily implemented in the current IEEE 802.16 standard with almost negligible ...

  5. Permutation based decision making under fuzzy environment using Tabu search

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-04-01

    Full Text Available One of the techniques used for Multiple Criteria Decision Making (MCDM) is the permutation method. In its classical form, permutation assumes that the weights and decision matrix components are crisp. However, when group decision making is under consideration and decision makers cannot agree on crisp values for the weights and decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of permutation is its long computational time, so a Tabu Search (TS) based algorithm is proposed to reduce it. A numerical example illustrates the proposed approach clearly. Then, some benchmark instances extracted from the literature are solved by the proposed TS. Analysis of the results shows the proper performance of the proposed method.

  6. Bernoulli trials and permutation statistics

    Directory of Open Access Journals (Sweden)

    Don Rawlings

    1992-01-01

    Full Text Available Several coin-tossing games are surveyed which, in a natural way, give rise to “statistically” induced probability measures on the set of permutations of {1,2,…,n} and on sets of multipermutations. The distributions of a general class of random variables known as binary tree statistics are also given.

  7. Inference for Distributions over the Permutation Group

    National Research Council Canada - National Science Library

    Huang, Jonathan; Guestrin, Carlos; Guibas, Leonidas

    2008-01-01

    ..., cannot capture the mutual exclusivity constraints associated with permutations. In this paper, we use the "low-frequency" terms of a Fourier decomposition to represent distributions over permutations compactly...

  8. Permuting sparse rectangular matrices into block-diagonal form

    Energy Technology Data Exchange (ETDEWEB)

    Aykanat, Cevdet; Pinar, Ali; Catalyurek, Umit V.

    2002-12-09

    This work investigates the problem of permuting a sparse rectangular matrix into block-diagonal form. Block-diagonal form of a matrix grants an inherent parallelism for the solution of the underlying problem, as recently investigated in the context of mathematical programming, LU factorization and QR factorization. We propose graph and hypergraph models to represent the nonzero structure of a matrix, which reduce the permutation problem to those of graph partitioning by vertex separator and hypergraph partitioning, respectively. Besides proposing the models to represent sparse matrices and investigating related combinatorial problems, we provide a detailed survey of relevant literature to bridge the gap between different communities, investigate existing techniques for partitioning and propose new ones, and finally present a thorough empirical study of these techniques. Our experiments on a wide range of matrices, using the state-of-the-art graph and hypergraph partitioning tools MeTiS and PaToH, revealed that the proposed methods yield very effective solutions in terms of both solution quality and run time.

  9. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    Science.gov (United States)

    Xu, Kaixuan; Wang, Jun

    2017-02-01

    In this paper, recently introduced permutation entropy and sample entropy are further developed to the fractional cases: weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in the above two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. The effectiveness analysis of the proposed methods on synthetic data and real-world data reveals that tuning the fractional order allows a higher sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the nonlinear complexity behaviors of the returns series of the Potts financial model are compared numerically with those of actual stock markets, and the empirical results confirm the feasibility of the proposed model.

  10. A Permutation Encoding Technique Applied to Genetic Algorithm ...

    African Journals Online (AJOL)

    In this paper, a permutation chromosome encoding scheme is proposed for obtaining solution to resource constrained project scheduling problem. The proposed chromosome coding method is applied to Genetic algorithm procedure and implemented through object oriented programming. The method is applied to a ...

  11. Introduction to Permutation and Resampling-Based Hypothesis Tests

    Science.gov (United States)

    LaFleur, Bonnie J.; Greevy, Robert A.

    2009-01-01

    A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presence of…
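
    The resampling idea is compact in code: pool the two samples, repeatedly relabel them at random, and count how often the relabeled statistic is at least as extreme as the observed one. A minimal Monte Carlo sketch in Python; the difference-of-means statistic, 10,000 permutations and the toy data are illustrative choices, not part of the article.

```python
# Minimal two-sided Monte Carlo permutation test on a difference of means.
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    observed = abs(np.mean(x) - np.mean(y))
    pooled = np.concatenate([x, y])
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random relabelling
        hits += abs(pooled[:n].mean() - pooled[n:].mean()) >= observed
    return (hits + 1) / (n_perm + 1)              # add-one keeps p > 0

x = np.array([4.1, 5.0, 6.2, 5.8])
y = np.array([3.2, 3.9, 4.0, 3.5])
print(permutation_test(x, y))
```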

  12. Tensor Permutation Matrices in Finite Dimensions

    OpenAIRE

    Christian, Rakotonirina

    2005-01-01

    We have generalised the properties, with respect to the tensor product, of a 4x4 permutation matrix that we call a tensor commutation matrix. Tensor commutation matrices can be constructed with or without calculus. A formula that allows us to construct a tensor permutation matrix, a generalisation of the tensor commutation matrix, has been established. The expression of an element of a tensor commutation matrix has been generalised to the case of any element of a tensor permutation ma...

  13. Some equinumerous pattern-avoiding classes of permutations

    Directory of Open Access Journals (Sweden)

    M. D. Atkinson

    2005-12-01

    Full Text Available Suppose that p,q,r,s are non-negative integers with m=p+q+r+s. The class X(p,q,r,s) of permutations that contain no pattern of the form αβγ, where |α|=r, |γ|=s and β is any arrangement of {1,2,…,p}∪{m-q+1, m-q+2, …,m}, is considered. A recurrence relation to enumerate the permutations of X(p,q,r,s) is established. The method of proof also shows that X(p,q,r,s) = X(p,q,1,0) X(1,0,r,s) in the sense of permutational composition. 2000 Mathematics Subject Classification: 05A05.

  14. Image encryption based on permutation-substitution using chaotic map and Latin Square Image Cipher

    Science.gov (United States)

    Panduranga, H. T.; Naveen Kumar, S. K.; Kiran

    2014-06-01

    In this paper we present an image encryption method based on permutation-substitution using a chaotic map and a Latin square image cipher. The proposed method consists of a permutation process and a substitution process. In the permutation process, the plain image is permuted according to a chaotic sequence generated by a chaotic map. In the substitution process, a Latin Square Image Cipher (LSIC) is generated from a 256-bit secret key and used as a key image; an XOR operation is then performed between the permuted image and the key image. The proposed method can be applied to any plain image, including those with unequal width and height, and resists statistical and differential attacks. Experiments were carried out on different images of different sizes. The proposed method possesses a large key space to resist brute-force attack.
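
    The permutation-substitution structure can be sketched compactly: a chaotic sequence orders the pixel positions, and the permuted pixels are XORed with a key image. The Python sketch below is only an illustration of that structure; the logistic-map parameter r = 3.99, the seed x0 and the pseudorandom key image standing in for the paper's 256-bit-keyed LSIC are all assumptions, not the authors' implementation.

```python
# Sketch of a permutation-substitution image cipher: logistic-map ordering
# for the permutation stage, XOR with a key image for the substitution stage.
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    seq, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def encrypt(plain, x0=0.3141592, key_seed=42):
    flat = plain.ravel()                                  # uint8 pixels
    perm = np.argsort(logistic_sequence(x0, flat.size))   # chaotic permutation
    key = np.random.default_rng(key_seed).integers(
        0, 256, flat.size, dtype=np.uint8)                # stand-in key image
    cipher = flat[perm] ^ key                             # substitution by XOR
    return cipher.reshape(plain.shape), perm, key

def decrypt(cipher, perm, key):
    flat = cipher.ravel() ^ key
    plain = np.empty_like(flat)
    plain[perm] = flat                                    # invert permutation
    return plain.reshape(cipher.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
c, perm, key = encrypt(img)
assert (decrypt(c, perm, key) == img).all()
```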

  15. Permutation parity machines for neural cryptography.

    Science.gov (United States)

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-06-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  16. Finite Cycle Gibbs Measures on Permutations of

    Science.gov (United States)

    Armendáriz, Inés; Ferrari, Pablo A.; Groisman, Pablo; Leonardi, Florencia

    2015-03-01

    We consider Gibbs distributions on the set of permutations of associated to the Hamiltonian , where is a permutation and is a strictly convex potential. Call finite-cycle those permutations composed of finite cycles only. We give conditions on ensuring that for large enough temperature there exists a unique infinite volume ergodic Gibbs measure concentrating mass on finite-cycle permutations; this measure is equal to the thermodynamic limit of the specifications with identity boundary conditions. We construct as the unique invariant measure of a Markov process on the set of finite-cycle permutations that can be seen as a loss-network, a continuous-time birth and death process of cycles interacting by exclusion, an approach proposed by Fernández, Ferrari and Garcia. Define as the shift permutation . In the Gaussian case , we show that for each , given by is an ergodic Gibbs measure equal to the thermodynamic limit of the specifications with boundary conditions. For a general potential , we prove the existence of Gibbs measures when is bigger than some -dependent value.

  17. Error-free holographic frames encryption with CA pixel-permutation encoding algorithm

    Science.gov (United States)

    Li, Xiaowei; Xiao, Dan; Wang, Qiong-Hua

    2018-01-01

    The security of video data is necessary in network transmission; cryptography is the technique that makes video data secure and unreadable to unauthorized users. In this paper, we propose a holographic frames encryption technique based on the cellular automata (CA) pixel-permutation encoding algorithm. The concise pixel-permutation algorithm addresses the drawbacks of traditional CA encoding methods. The effectiveness of the proposed video encoding method is demonstrated by simulation examples.

  18. Permutations as a means to encode order in word space

    OpenAIRE

    Sahlgren, Magnus; Holst, Anders; Kanerva, Pentti

    2008-01-01

    We show that sequence information can be encoded into high-dimensional fixed-width vectors using permutations of coordinates. Computational models of language often represent words with high-dimensional semantic vectors compiled from word-use statistics. A word's semantic vector usually encodes the contexts in which the word appears in a large body of text but ignores word order. However, word order often signals a word's grammatical role in a sentence and thus tells of the word's meaning. Jo...

  19. Successful attack on permutation-parity-machine-based neural cryptography.

    Science.gov (United States)

    Seoane, Luís F; Ruttor, Andreas

    2012-02-01

    An algorithm is presented which implements a probabilistic attack on the key-exchange protocol based on permutation parity machines. Instead of imitating the synchronization of the communicating partners, the strategy consists of a Monte Carlo method to sample the space of possible weights during inner rounds and an analytic approach to convey the extracted information from one outer round to the next one. The results show that the protocol under attack fails to synchronize faster than an eavesdropper using this algorithm.

  20. Permutation-based inference for the AUC: A unified approach for continuous and discontinuous data.

    Science.gov (United States)

    Pauly, Markus; Asendorf, Thomas; Konietschke, Frank

    2016-11-01

    We investigate rank-based studentized permutation methods for the nonparametric Behrens-Fisher problem, that is, inference methods for the area under the ROC curve. We hereby prove that the studentized permutation distribution of the Brunner-Munzel rank statistic is asymptotically standard normal, even under the alternative, incidentally providing the hitherto missing theoretical foundation for the Neubert and Brunner studentized permutation test. In particular, we not only show its consistency, but also that confidence intervals for the underlying treatment effects can be computed by inverting this permutation test. In addition, we derive permutation-based range-preserving confidence intervals. Extensive simulation studies show that the permutation-based confidence intervals appear to maintain the preassigned coverage probability quite accurately (even for rather small sample sizes). For a convenient application of the proposed methods, a freely available software package for the statistical software R has been developed. A real data example illustrates the application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    Science.gov (United States)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

    The optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997 when noise is not taken into account. In particular, these quantum gates were chosen to perform the permutation algorithm in hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the problem of permutation parity faster than a classical algorithm, without the necessity of entanglement between particles. The only requirement for achieving the speedup is the use of a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by considering an average over the optimal field centered at different values of the reference detuning, which follows a Gaussian distribution. When the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme can also be used for the practical realization of different quantum algorithms in DQDs.

  2. 1-Colored Archetypal Permutations and Strings of Degree n

    Directory of Open Access Journals (Sweden)

    Gheorghe Eduard Tara

    2012-10-01

    Full Text Available New notions related to permutations are introduced here. We present the string of a 1-colored permutation as a closed planar curve, and the fundamental 1-colored permutation as an equivalence class related to the equivalence of strings of 1-colored permutations. We give formulas for the number of 1-colored archetypal permutations of degree n. We establish an algorithm to identify the 1-colored archetypal permutations of degree n and present the atlas of the 1-colored archetypal strings of degree n, n ≤ 7, based on this algorithm.

  3. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    International Nuclear Information System (INIS)

    Xu, Kaixuan; Wang, Jun

    2017-01-01

    In this paper, recently introduced permutation entropy and sample entropy are further developed to the fractional cases: weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in the above two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. The effectiveness analysis of the proposed methods on synthetic data and real-world data reveals that tuning the fractional order allows a higher sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the nonlinear complexity behaviors of the returns series of the Potts financial model are compared numerically with those of actual stock markets, and the empirical results confirm the feasibility of the proposed model. - Highlights: • Two new entropy approaches for estimation of nonlinear complexity are proposed for the financial market. • Effectiveness analysis of the proposed methods is presented and their respective features are studied. • Empirical research of the proposed analysis on seven world financial market indices. • Numerical simulation of Potts financial dynamics is performed for nonlinear complexity behaviors.

  4. Necklaces, Periodic Points and Permutation Representations

    Indian Academy of Sciences (India)

    Necklaces, Periodic Points and Permutation Representations – Fermat's Little Theorem. Somnath Basu, Anindita Bose, Sumit Kumar Sinha, Pankaj Vishe. General Article, Resonance – Journal of Science Education, Volume 6, Issue 11, November 2001, pp. 18-26.

  5. Defects and permutation branes in the Liouville field theory

    DEFF Research Database (Denmark)

    Sarkissian, Gor

    2009-01-01

    The defects and permutation branes for the Liouville field theory are considered. By exploiting the cluster condition, equations satisfied by the reflection amplitudes of permutation branes and defects are obtained. It is shown that two types of solutions exist, discrete and continuous families.

  6. A SAS/IML algorithm for an exact permutation test

    Directory of Open Access Journals (Sweden)

    Neuhäuser, Markus

    2009-03-01

    Full Text Available An algorithm written in SAS/IML is presented that can perform an exact permutation test for a two-sample comparison. All possible permutations are considered. The Baumgartner-Weiß-Schindler statistic is used as an example test statistic for the permutation test.
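
    The exact approach enumerates every way of splitting the pooled observations into groups of the original sizes, so no resampling error remains. A Python sketch of the same idea, using the difference of means rather than the Baumgartner-Weiß-Schindler statistic purely for brevity; full enumeration is only feasible for small samples, since there are C(n+m, n) splits.

```python
# Exact two-sample permutation test by full enumeration of group splits.
from itertools import combinations
import numpy as np

def exact_permutation_test(x, y):
    pooled = np.concatenate([x, y])
    total, n, m = pooled.sum(), len(x), len(y)
    observed = abs(np.mean(x) - np.mean(y))
    extreme = splits = 0
    for idx in combinations(range(n + m), n):     # every possible group split
        sx = pooled[list(idx)].sum()
        stat = abs(sx / n - (total - sx) / m)
        extreme += stat >= observed - 1e-12       # tolerate float rounding
        splits += 1
    return extreme / splits

print(exact_permutation_test(np.array([1.2, 3.4, 2.2]),
                             np.array([4.5, 5.1, 6.0, 5.5])))  # C(7,3)=35 splits
```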

  7. Permutation Entropy: New Ideas and Challenges

    Directory of Open Access Journals (Sweden)

    Karsten Keller

    2017-03-01

    Full Text Available Over recent years, some new variants of permutation entropy have been introduced and applied to EEG analysis, including a conditional variant and variants using additional metric information or based on entropies other than the Shannon entropy. In some situations, it is not completely clear what kind of information the new measures and their algorithmic implementations provide. We discuss the new developments and illustrate them for EEG data.

  8. Inverse halftoning using binary permutation filters.

    Science.gov (United States)

    Kim, Y T; Arce, G R; Grabowski, N

    1995-01-01

    The problem of reconstructing a continuous-tone image given its ordered dithered halftone or its error-diffused halftone image is considered. We develop a modular class of nonlinear filters that can reconstruct the continuous-tone information preserving image details and edges that provide important visual cues. The proposed nonlinear reconstruction algorithms, denoted as binary permutation filters, are based on the space and rank orderings of the halftone samples provided by the multiset permutation of the "on" pixels in a halftone observation window. For a given window size, we obtain a wide range of filters by varying the amount of space-rank ordering information utilized in the estimate. For image reconstructions from ordered dithered halftones, we develop periodically space-varying filters that can account for the periodical nature of the underlying screening process. A class of suboptimal but simpler space-invariant reconstruction filters are also proposed and tested. Constrained LMS type algorithms are employed for the design of reconstruction filters that minimize the reconstruction mean squared error. We present simulations showing that binary permutation filters are modular, robust to image source characteristics, and that they produce high visual quality image reconstruction.

  9. The number of permutations in permutation tests

    Directory of Open Access Journals (Sweden)

    Louis Laurencelle

    2012-10-01

    Full Text Available In the first part, the concepts and theory of exact randomization tests are reviewed, together with their implementation for a number of customary test situations, including simple ANOVA designs. Approximate (or incomplete) randomization tests are considered in the second part as manageable alternatives to exact tests. We propose a model to calculate the relative power of approximate randomization tests and sketch out some guidelines for the user.

  10. Correction for multiplicity in genetic association studies of triads: the permutational TDT.

    Science.gov (United States)

    Troendle, James F; Mills, James L

    2011-03-01

    New technology for large-scale genotyping has created new challenges for statistical analysis. Correcting for multiple comparisons without discarding true positive results and extending methods to triad studies are among the important problems facing statisticians. We present a one-sample permutation test for testing transmission disequilibrium hypotheses in triad studies, and show how this test can be used for multiple single nucleotide polymorphism (SNP) testing. The resulting multiple comparison procedure is shown, in the case of the transmission disequilibrium test, to control the familywise error. Furthermore, this procedure can handle multiple possible modes of risk inheritance per SNP. The resulting permutational procedure is shown through simulation of SNP data to be more powerful than the Bonferroni procedure when the SNPs are in linkage disequilibrium. Moreover, permutations implicitly avoid any multiple comparison correction penalties when the SNP has a rare allele. The method is illustrated by analyzing a large candidate gene study of neural tube defects and an independent study of oral clefts, where the smallest adjusted p-values using the permutation procedure are approximately half those of the Bonferroni procedure. We conclude that permutation tests are more powerful for identifying disease-associated SNPs in candidate gene studies and are useful for the analysis of triad studies. No claim to original US government works Annals of Human Genetics © 2010 Blackwell Publishing Ltd/University College London.
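
    The general mechanism by which permutation handles multiplicity can be sketched with the max-statistic idea: each observed statistic is compared against the permutation distribution of the maximum statistic over all markers, which controls the familywise error while adapting to the correlation between SNPs. The sketch below illustrates that principle generically and is not the paper's one-sample triad TDT; the sign-flipping null and the mean statistic are assumptions of the example.

```python
# Generic max-statistic (Westfall-Young style) permutation adjustment.
import numpy as np

def maxT_adjusted_pvalues(data, n_perm=5000, seed=1):
    """data: (subjects, markers) array of per-subject scores, mean 0 under H0."""
    rng = np.random.default_rng(seed)
    observed = np.abs(data.mean(axis=0))
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=data.shape[0])[:, None]
        max_null[b] = np.abs((signs * data).mean(axis=0)).max()
    return np.array([(1 + (max_null >= t).sum()) / (n_perm + 1)
                     for t in observed])
```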

  11. Multiscale Permutation Entropy Based Rolling Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jinde Zheng

    2014-01-01

    Full Text Available A new rolling bearing fault diagnosis approach based on multiscale permutation entropy (MPE), Laplacian score (LS), and support vector machines (SVMs) is proposed in this paper. Permutation entropy (PE) was recently proposed as a measure of the randomness and dynamical changes of time series. However, owing to the complexity of mechanical systems, the randomness and dynamic changes of the vibration signal exist at different scales. Thus, the definition of MPE is introduced and employed to extract the nonlinear fault characteristics from the bearing vibration signal at different scales. Besides, the SVM is utilized to accomplish fault feature classification and thereby automate the diagnostic procedure. Meanwhile, in order to avoid a high dimension of features, the Laplacian score (LS) is used to refine the feature vector by ranking the features according to their importance and correlations with the main fault information. Finally, the rolling bearing fault diagnosis method based on MPE, LS, and SVM is proposed and applied to experimental data. The analysis results indicate that the proposed method can identify the fault categories effectively.

  12. Bearing Fault Diagnosis Based on Multiscale Permutation Entropy and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jian-Jiun Ding

    2012-07-01

    Full Text Available Bearing fault diagnosis has attracted significant attention over the past few decades. It consists of two major parts: vibration signal feature extraction and condition classification for the extracted features. In this paper, multiscale permutation entropy (MPE) was introduced for feature extraction from faulty bearing vibration signals. After extracting feature vectors by MPE, the support vector machine (SVM) was applied to automate the fault diagnosis procedure. Simulation results demonstrated that the proposed method is a very powerful algorithm for bearing fault diagnosis and has much better performance than methods based on single-scale permutation entropy (PE) and multiscale entropy (MSE).

  13. Permutation Entropy for Random Binary Sequences

    Directory of Open Access Journals (Sweden)

    Lingfeng Liu

    2015-12-01

    Full Text Available In this paper, we generalize the permutation entropy (PE) measure, which is based on Shannon's entropy, to binary sequences and theoretically analyze this measure for random binary sequences. We deduce the theoretical value of PE for random binary sequences, which can be used to measure the randomness of binary sequences. We also reveal the relationship between this PE measure and other randomness measures, such as Shannon's entropy and Lempel–Ziv complexity. The results show that PE is consistent with these two measures. Furthermore, we use PE as one of the randomness measures to evaluate the randomness of chaotic binary sequences.

  14. Magic informationally complete POVMs with permutations.

    Science.gov (United States)

    Planat, Michel; Gedik, Zafer

    2017-09-01

    Eigenstates of permutation gates are either stabilizer states (for gates in the Pauli group) or magic states, thus allowing universal quantum computation (Planat, Rukhsan-Ul-Haq 2017 Adv. Math. Phys. 2017, 5287862 (doi:10.1155/2017/5287862)). We show in this paper that a subset of such magic states, when acting on the generalized Pauli group, define (asymmetric) informationally complete POVMs. Such informationally complete POVMs, investigated in dimensions 2-12, exhibit simple finite geometries in their projector products and, for dimensions 4, 8 and 9, relate to two-qubit, three-qubit and two-qutrit contextuality.

  15. Weight Distributions for Turbo Codes Using Random and Nonrandom Permutations

    Science.gov (United States)

    Dolinar, S.; Divsalar, D.

    1995-04-01

    This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of the constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as √(2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are "semirandom" permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.

  16. Ordered groups and infinite permutation groups

    CERN Document Server

    1996-01-01

    The subjects of ordered groups and of infinite permutation groups have long enjoyed a symbiotic relationship. Although the two subjects come from very different sources, they have in certain ways come together, and each has derived considerable benefit from the other. My own personal contact with this interaction began in 1961. I had done Ph.D. work on sequence convergence in totally ordered groups under the direction of Paul Conrad. In the process, I had encountered "pseudo-convergent" sequences in an ordered group G, which are like Cauchy sequences, except that the differences between terms of large index approach not 0 but a convex subgroup C of G. If C is normal, then such sequences are conveniently described as Cauchy sequences in the quotient ordered group G/C. If C is not normal, of course G/C has no group structure, though it is still a totally ordered set. The best that can be said is that the elements of G permute G/C in an order-preserving fashion. In independent investigations around that t...

  17. The coupling analysis between stock market indices based on permutation measures

    Science.gov (United States)

    Shi, Wenbin; Shang, Pengjian; Xia, Jianan; Yeh, Chien-Hung

    2016-04-01

    Many information-theoretic methods have been proposed for analyzing the coupling dependence between time series. It is significant to quantify the correlation between financial sequences, since the financial market is a complex, evolving dynamic system. Recently, we developed a new permutation-based entropy, called cross-permutation entropy (CPE), to detect the coupling structures between two synchronous time series. In this paper, we extend the CPE method to weighted cross-permutation entropy (WCPE), to address some of CPE's limitations, mainly its inability to differentiate between distinct patterns of a certain motif and its sensitivity to patterns close to the noise floor. It shows more stable and reliable results than CPE when applied to spiky data and AR(1) processes. Besides, we adapt the CPE method to infer the complexity of short-length time series by freely changing the time delay, and test it with Gaussian random series and random walks. The modified method has the advantage of reducing deviations of entropy estimation compared with the conventional one. Finally, the weighted cross-permutation entropy of eight important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.

  18. Permutation entropy with vector embedding delays

    Science.gov (United States)

    Little, Douglas J.; Kane, Deb M.

    2017-12-01

    Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D−1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface-emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked when the embedding delay is constrained to scalar form.

  19. Minimal degrees of faithful quasi-permutation representations for ...

    Indian Academy of Sciences (India)

    In [2], algorithms for c(G), q(G) and p(G), the minimal degrees of faithful quasi-permutation and permutation representations of a finite group G, are given. The main purpose of this paper is to consider the relationship between these minimal degrees for non-trivial p-groups H and K and for the group H × K.

  20. The Magic of Universal Quantum Computing with Permutations

    Directory of Open Access Journals (Sweden)

    Michel Planat

    2017-01-01

    Full Text Available The role of permutation gates for universal quantum computing is investigated. The "magic" of computation is clarified in the permutation gates, their eigenstates, the Wootters discrete Wigner function, and state-dependent contextuality (following many contributions on this subject). A first classification of a few types of resulting magic states in low dimensions d≤9 is performed.

  1. The undergeneration of permutation invariance as a criterion for logicality

    NARCIS (Netherlands)

    Dutilh Novaes, Catarina

    2014-01-01

    Permutation invariance is often presented as the correct criterion for logicality. The basic idea is that one can demarcate the realm of logic by isolating specific entities—logical notions or constants—and that permutation invariance would provide a philosophically motivated and technically

  2. Codes related to line graphs of triangular graphs and permutation ...

    African Journals Online (AJOL)

    For any prime p, we consider p-ary linear codes obtained from the row span of incidence matrices of line graphs of triangular graphs and adjacency matrices of their line graphs. We determine parameters of the codes, their automorphism groups and exhibit permutation decoding sets (PD-sets) for partial permutation ...

  3. Comparative analysis of automotive paints by laser induced breakdown spectroscopy and nonparametric permutation tests

    International Nuclear Information System (INIS)

    McIntee, Erin; Viglino, Emilie; Rinke, Caitlin; Kumor, Stephanie; Ni Liqiang; Sigman, Michael E.

    2010-01-01

    Laser-induced breakdown spectroscopy (LIBS) has been investigated for the discrimination of automotive paint samples. Paint samples from automobiles of different makes, models, and years were collected and separated into sets based on color, the presence or absence of effect pigments and the number of paint layers. Twelve LIBS spectra were obtained for each paint sample, each an average of five single-shot 'drill down' spectra from consecutive laser ablations at the same spot on the sample. Analyses by a nonparametric permutation test and a parametric Wald test were performed to determine the extent of discrimination within each set of paint samples. The discrimination power and Type I error were assessed for each data analysis method. Conversion of the spectral intensity to a log scale (base 10) resulted in a higher overall discrimination power at the same significance level. Working on the log scale, the nonparametric permutation tests gave an overall discrimination power of 89.83%, with a Type I error rate of 4.44% at the nominal significance level of 5%. White paint samples, as a group, were the most difficult to differentiate, with a discrimination power of only 86.56%, followed by 95.83% for black paint samples. Parametric analysis of the data set produced lower discrimination power (85.17%) with 3.33% Type I errors and is not recommended on both theoretical and practical grounds. The nonparametric testing method is applicable across many analytical comparisons, the specific application described here being the pairwise comparison of automotive paint samples.

  4. EPEPT: A web service for enhanced P-value estimation in permutation tests

    Directory of Open Access Journals (Sweden)

    Knijnenburg Theo A

    2011-10-01

    Full Text Available Abstract Background In computational biology, permutation tests have become a widely used tool to assess the statistical significance of an event under investigation. However, the common way of computing the P-value, which expresses the statistical significance, requires a very large number of permutations when small (and thus interesting) P-values are to be accurately estimated. This is computationally expensive and often infeasible. Recently, we proposed an alternative estimator, which requires far fewer permutations compared to the standard empirical approach while still reliably estimating small P-values [1]. Results The proposed P-value estimator has been enriched with additional functionalities and is made available to the general community through a public website and web service, called EPEPT. This means that the EPEPT routines can be accessed not only via a website, but also programmatically using any programming language that can interact with the web. Examples of web service clients in multiple programming languages can be downloaded. Additionally, EPEPT accepts data of various common experiment types used in computational biology. For these experiment types EPEPT first computes the permutation values and then performs the P-value estimation. Finally, the source code of EPEPT can be downloaded. Conclusions Different types of users, such as biologists, bioinformaticians and software engineers, can use the method in an appropriate and simple way. Availability http://informatics.systemsbiology.net/EPEPT/
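
    The reason small P-values are expensive is that the standard empirical estimator needs on the order of 1/p permutations to resolve a P-value of p. One remedy, in the spirit of the enhanced estimator behind EPEPT, is to fit a generalized Pareto distribution to the tail of the permutation null and read extreme P-values off the fitted tail. The Python sketch below is a rough illustration of that idea, not the EPEPT code; the 250-exceedance threshold is an arbitrary choice.

```python
# Tail-approximation P-value estimate: empirical below the tail cutoff,
# generalized Pareto fit above it.
import numpy as np
from scipy.stats import genpareto

def tail_pvalue(observed_stat, null_stats, n_exceed=250):
    null_stats = np.sort(null_stats)
    threshold = null_stats[-n_exceed]               # tail cutoff
    if observed_stat <= threshold:                  # empirical estimate suffices
        hits = (null_stats >= observed_stat).sum()
        return (hits + 1) / (len(null_stats) + 1)
    exceed = null_stats[-n_exceed:] - threshold     # excesses over threshold
    c, _, scale = genpareto.fit(exceed, floc=0)
    tail_frac = n_exceed / len(null_stats)
    return tail_frac * genpareto.sf(observed_stat - threshold, c, scale=scale)
```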

  5. Multiscale permutation entropy analysis of electrocardiogram

    Science.gov (United States)

    Liu, Tiebing; Yao, Wenpo; Wu, Min; Shi, Zhaorong; Wang, Jun; Ning, Xinbao

    2017-04-01

    To make a comprehensive nonlinear analysis of ECG, multiscale permutation entropy (MPE) was applied to ECG feature extraction. Three kinds of ECG from the PhysioNet database are analyzed in this paper: congestive heart failure (CHF) patients, healthy young subjects and healthy elderly subjects. We set the embedding dimension to 4, adjust the scale factor from 2 to 100 with a step size of 2, and compare MPE with multiscale entropy (MSE). As the scale factor increases, the MPE complexity of the three ECG signals first decreases and then increases. When the scale factor is between 10 and 32, the complexities of the three ECGs show the biggest difference: the entropy of the elderly is on average 0.146 less than that of the CHF patients and 0.025 larger than that of the healthy young, in line with normal physiological characteristics. Test results showed that MPE can be applied effectively in ECG nonlinear analysis and can effectively distinguish different ECG signals.

  6. Permutation representations of the orbits of the automorphism group ...

    Indian Academy of Sciences (India)

    Finite Abelian groups; finite group actions; automorphism orbits; modules over discrete valuation rings; endomorphism algebras; permutation representations.

  7. Minimal degrees of faithful quasi-permutation representations for ...

    Indian Academy of Sciences (India)

    Abstract. In [2], the algorithms of c(G), q(G) and p(G), the minimal degrees of faithful quasi-permutation and permutation representations of a finite group G are given. The main purpose of this paper is to consider the relationship between these minimal degrees of non-trivial p-groups H and K with the group H × K. Keywords.

  8. Rolling Bearing Fault Diagnosis Based on ELCD Permutation Entropy and RVM

    Directory of Open Access Journals (Sweden)

    Jiang Xingmeng

    2016-01-01

    Full Text Available Aiming at the nonstationary characteristics of gear fault vibration signals, a recognition method based on the permutation entropy of ensemble local characteristic-scale decomposition (ELCD) and a relevance vector machine (RVM) is proposed. First, the vibration signal is decomposed by ELCD to obtain a series of intrinsic scale components (ISCs). Second, principal ISCs are selected according to their kurtosis, the permutation entropy of the principal ISCs is calculated, and the values are combined into a feature vector. Finally, the feature vectors are input to the RVM classifier for training and testing to identify the type of rolling bearing fault. Experimental results show that this method can effectively diagnose four kinds of working conditions, and its performance is better than that of the local characteristic-scale decomposition (LCD) method.

  9. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, to be adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that, in its simpler form not using gradient information on the original limit state function, or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...

  10. A permutation testing framework to compare groups of brain networks

    Directory of Open Access Journals (Sweden)

    Sean L Simpson

    2013-11-01

    Full Text Available Brain network analyses have moved to the forefront of neuroimaging research over the last decade. However, methods for statistically comparing groups of networks have lagged behind. These comparisons have great appeal for researchers interested in gaining further insight into complex brain function and how it changes across different mental states and disease conditions. Current comparison approaches generally either rely on a summary metric or on mass-univariate nodal or edge-based comparisons that ignore the inherent topological properties of the network, yielding little power and failing to make network level comparisons. Gleaning deeper insights into normal and abnormal changes in complex brain function demands methods that take advantage of the wealth of data present in an entire brain network. Here we propose a permutation testing framework that allows comparing groups of networks while incorporating topological features inherent in each individual network. We validate our approach using simulated data with known group differences. We then apply the method to functional brain networks derived from fMRI data.

  11. A better alternative to stratified permuted block design for subject randomization in clinical trials.

    Science.gov (United States)

    Zhao, Wenle

    2014-12-30

    Stratified permuted block randomization has been the dominant covariate-adaptive randomization procedure in clinical trials for several decades. Its high probability of deterministic assignment and low capacity for covariate balancing have been well recognized. The popularity of this sub-optimal method is largely due to its simplicity of implementation and the lack of better alternatives. Proposed in this paper is a two-stage covariate-adaptive randomization procedure that uses the block urn design or the big stick design in stage one to restrict the treatment imbalance within each covariate stratum, and uses the biased-coin minimization method in stage two to control imbalances in the distribution of additional covariates that are not included in the stratification algorithm. Analytical and simulation results show that the new randomization procedure significantly reduces the probability of deterministic assignments and improves covariate balancing capacity when compared to traditional stratified permuted block randomization. Copyright © 2014 John Wiley & Sons, Ltd.
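
    For reference, the stage-one baseline being improved upon is simple to generate: within each block the treatments appear equally often in random order, which bounds the running imbalance at the cost of some predictable assignments near block ends. A minimal illustrative sketch; two arms and block size 4 are assumptions of the example.

```python
# Minimal permuted block randomization: balanced blocks, shuffled internally.
import numpy as np

def permuted_blocks(n_subjects, block_size=4, seed=7):
    assert block_size % 2 == 0, "block must hold equal numbers of A and B"
    rng = np.random.default_rng(seed)
    schedule = []
    while len(schedule) < n_subjects:
        block = ["A", "B"] * (block_size // 2)
        rng.shuffle(block)                 # random order within the block
        schedule.extend(block)
    return schedule[:n_subjects]

print(permuted_blocks(10))  # e.g. ['B', 'A', 'A', 'B', ...]
```

    The predictability the paper criticizes is visible here: once half of a block has gone to one arm, the remaining assignments in that block are deterministic.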

  12. Predecessor and permutation existence problems for sequential dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C. L. (Christopher L.); Hunt, H. B. (Harry B.); Marathe, M. V. (Madhav V.); Rosenkrantz, D. J. (Daniel J.); Stearns, R. E. (Richard E.)

    2002-01-01

    A class of finite discrete dynamical systems, called Sequential Dynamical Systems (SDSs), was introduced in [BMR99, BR99] as a formal model for analyzing simulation systems. An SDS S is a triple (G, F, π), where (i) G(V, E) is an undirected graph with n nodes, each node having a state, (ii) F = (f1, f2, ..., fn), with fi denoting a function associated with node vi ∈ V, and (iii) π is a permutation of (or total order on) the nodes in V. A configuration of an SDS is an n-vector (b1, b2, ..., bn), where bi is the value of the state of node vi. A single SDS transition from one configuration to another is obtained by updating the states of the nodes by evaluating the function associated with each of them in the order given by π. Here, we address the complexity of two basic problems and their generalizations for SDSs. Given an SDS S and a configuration C, the PREDECESSOR EXISTENCE (or PRE) problem is to determine whether there is a configuration C' such that S has a transition from C' to C. (If C has no predecessor, C is known as a garden of Eden configuration.) Our results provide separations between efficiently solvable and computationally intractable instances of the PRE problem. For example, we show that the PRE problem can be solved efficiently for SDSs with Boolean state values when the node functions are symmetric and the underlying graph is of bounded treewidth. In contrast, we show that allowing just one non-symmetric node function renders the problem NP-complete even when the underlying graph is a tree (which has a treewidth of 1). We also show that the PRE problem is efficiently solvable for SDSs whose state values are from a field and whose node functions are linear. Some of the polynomial algorithms also extend to the case where we want to find an ancestor configuration that precedes a given configuration by a logarithmic number of steps. Our results extend some of the earlier results by Sutner [Su95] and Green [Gr87] on the complexity of
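
    The SDS transition itself is short to write down: visit the nodes in the order given by the permutation and recompute each state from the current states of the node and its neighbors, so later updates see earlier ones. A minimal Boolean sketch in Python; the triangle graph, the single shared majority rule, and the update order are illustrative only.

```python
# One transition of a sequential dynamical system over Boolean states.
def sds_step(adj, states, pi, local_fn):
    """adj: {node: [neighbors]}; states: {node: 0/1}; pi: update order."""
    states = dict(states)
    for v in pi:                            # sequential, not parallel, updates
        neighborhood = [states[v]] + [states[u] for u in adj[v]]
        states[v] = local_fn(neighborhood)  # sees states already updated
    return states

majority = lambda vals: int(2 * sum(vals) > len(vals))
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}    # triangle graph
print(sds_step(adj, {0: 1, 1: 0, 2: 1}, pi=(2, 0, 1), local_fn=majority))
```

    In these terms, the PRE problem asks whether any configuration C' satisfies sds_step(adj, C', pi, f) == C for the given C.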

  13. Permutation entropy based speckle analysis in metal cutting

    Science.gov (United States)

    Nair, Usha; Krishna, Bindu M.; Namboothiri, V. N. N.; Nampoori, V. P. N.

    2008-08-01

    …for regular, chaotic, noisy or reality-based signals. PE works efficiently even in the presence of dynamical and/or observational noise. Unlike other nonlinear techniques, PE is easier and faster to calculate, since reconstruction of the state space from the time series is not required. An increasing value of PE indicates increasing complexity of the system dynamics. PE of the time series is calculated using a one-sample-shift sliding window technique. PE of order n ≥ 2 is calculated from the Shannon entropy, where the sum runs over all n! permutations of order n. PE gives the information contained in comparing n consecutive values of the time series. The calculation of PE is fast and robust. In situations where the data sets are huge and there is no time for preprocessing and fine-tuning, PE can effectively detect dynamical changes of the system. This makes PE an ideal choice for online detection of chatter, which is not possible with other conventional methods.

  14. Permutation tests in the two-sample problem for functional data

    OpenAIRE

    Cabaña, Alejandra; Estrada, Ana Maria; Peña, Jairo I.; Quiroz, Adolfo J.

    2016-01-01

    Three different permutation test schemes are discussed and compared in the context of the two-sample problem for functional data. One of the procedures was essentially introduced by Lopez-Pintado and Romo (2009), using notions of functional data depth to adapt the ideas originally proposed by Liu and Singh (1993) for multivariate data. Of the new methods introduced here, one is also based on functional data depths, but uses a different way (inspired by Meta-Analysis) to assess the significanc...

  15. Research of Planetary Gear Fault Diagnosis Based on Permutation Entropy of CEEMDAN and ANFIS

    Directory of Open Access Journals (Sweden)

    Moshen Kuai

    2018-03-01

    Full Text Available Because the planetary gear has the characteristics of small volume, light weight and large transmission ratio, it is widely used in high-speed, high-power mechanical systems. Poor working conditions result in frequent failures of planetary gears. In this paper, a method is proposed for diagnosing planetary gear faults based on the permutation entropy of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). The original signal is decomposed into 6 intrinsic mode functions (IMFs) and a residual component by CEEMDAN. Since the IMFs contain the main characteristic information of planetary gear faults, the time complexity of the IMFs is reflected by permutation entropies to quantify the fault features. The permutation entropies of each IMF component are defined as the input of the ANFIS, and its parameters and membership functions are adaptively adjusted according to training samples. Finally, the fuzzy inference rules are determined, and the optimal ANFIS is obtained. The overall recognition rate of the test samples used for the ANFIS is 90%, and the recognition rate for a gear with one missing tooth is relatively high. The recognition rates for different fault gears based on this method also achieve good results. Therefore, the proposed method can be applied effectively to planetary gear fault diagnosis.

  16. Weighted multiscale Rényi permutation entropy of nonlinear time series

    Science.gov (United States)

    Chen, Shijian; Shang, Pengjian; Wu, Yue

    2018-04-01

    In this paper, based on Rényi permutation entropy (RPE), which has recently been suggested as a relative measure of complexity in nonlinear systems, we propose multiscale Rényi permutation entropy (MRPE) and weighted multiscale Rényi permutation entropy (WMRPE) to quantify the complexity of nonlinear time series over multiple time scales. First, we apply MRPE and WMRPE to synthetic data and compare the modified methods with RPE. Meanwhile, the influence of changing the parameters is discussed. Besides, we explain the necessity of considering not only multiple scales but also weights, by taking the amplitude into account. MRPE and WMRPE are then applied to the closing prices of financial stock markets from different areas. By observing the curves of WMRPE and analyzing the common statistics, the stock markets are divided into 4 groups: (1) DJI, S&P500, and HSI, (2) NASDAQ and FTSE100, (3) DAX40 and CAC40, and (4) ShangZheng and ShenCheng. Results show that the standard deviations of the weighted methods are smaller, indicating that WMRPE yields more robust results. Besides, WMRPE can provide abundant dynamical properties of complex systems and reveal their intrinsic mechanism.

  17. Determining the parity of a permutation using an experimental NMR qutrit

    International Nuclear Information System (INIS)

    Dogra, Shruti; Arvind; Dorai, Kavita

    2014-01-01

    We present the NMR implementation of a recently proposed quantum algorithm to find the parity of a permutation. In the usual qubit model of quantum computation, it is widely believed that computational speedup requires the presence of entanglement and thus cannot be achieved by a single qubit. On the other hand, a qutrit is qualitatively more quantum than a qubit because of the existence of quantum contextuality and a single qutrit can be used for computing. We use the deuterium nucleus oriented in a liquid crystal as the experimental qutrit. This is the first experimental exploitation of a single qutrit to carry out a computational task. - Highlights: • NMR implementation of a quantum algorithm to determine the parity of a permutation. • Algorithm implemented on a single qutrit. • Computational speedup achieved without quantum entanglement. • Single qutrit shows quantum contextuality

  18. Information transmission and signal permutation in active flow networks

    Science.gov (United States)

    Woodhouse, Francis G.; Fawcett, Joanna B.; Dunkel, Jörn

    2018-03-01

    Recent experiments show that both natural and artificial microswimmers in narrow channel-like geometries will self-organise to form steady, directed flows. This suggests that networks of flowing active matter could function as novel autonomous microfluidic devices. However, little is known about how information propagates through these far-from-equilibrium systems. Through a mathematical analogy with spin-ice vertex models, we investigate here the input–output characteristics of generic incompressible active flow networks (AFNs). Our analysis shows that information transport through an AFN is inherently different from conventional pressure or voltage driven networks. Active flows on hexagonal arrays preserve input information over longer distances than their passive counterparts and are highly sensitive to bulk topological defects, whose presence can be inferred from marginal input–output distributions alone. This sensitivity further allows controlled permutations on parallel inputs, revealing an unexpected link between active matter and group theory that can guide new microfluidic mixing strategies facilitated by active matter and aid the design of generic autonomous information transport networks.

  19. Permuted tRNA genes of Cyanidioschyzon merolae, the origin of the tRNA molecule and the root of the Eukarya domain.

    Science.gov (United States)

    Di Giulio, Massimo

    2008-08-07

    An evolutionary analysis is conducted on the permuted tRNA genes of Cyanidioschyzon merolae, in which the 5' half of the tRNA molecule is codified at the 3' end of the gene and its 3' half is codified at the 5' end. This analysis has shown that permuted genes cannot be considered as derived traits but seem to possess characteristics that suggest they are ancestral traits, i.e. they originated when tRNA molecule genes originated for the first time. In particular, if the hypothesis that permuted genes are a derived trait were true, then we should not have been able to observe that the most frequent class of permuted genes is that of the anticodon loop type, for the simple reason that this class would derive by random permutation from a class of non-permuted tRNA genes, which instead is the rarest. This would not explain the high frequency with which permuted tRNA genes with perfectly separate 5' and 3' halves were observed. Clearly the mechanism that produced this class of permuted genes would envisage the existence, in an advanced stage of evolution, of minigenes codifying for the 5' and 3' halves of tRNAs which were assembled in a permuted way at the origin of the tRNA molecule, thus producing a high frequency of permuted genes of the class here referred. Therefore, this evidence supports the hypothesis that the genes of the tRNA molecule were assembled by minigenes codifying for hairpin-like RNA molecules, as suggested by one model for the origin of tRNA [Di Giulio, M., 1992. On the origin of the transfer RNA molecule. J. Theor. Biol. 159, 199-214; Di Giulio, M., 1999. The non-monophyletic origin of tRNA molecule. J. Theor. Biol. 197, 403-414]. Moreover, the late assembly of the permuted genes of C. merolae, as well as their ancestrality, strengthens the hypothesis of the polyphyletic origins of these genes. Finally, on the basis of the uniqueness and the ancestrality of these permuted genes, I suggest that the root of the Eukarya domain is in the super

  20. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

    Science.gov (United States)

    Traversaro, Francisco; O. Redelico, Francisco

    2018-04-01

    In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity that characterizes time series. The literature describes some resampling methods for quantities used in nonlinear dynamics - such as the largest Lyapunov exponent - but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes, the 1/f^α noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been used extensively in epilepsy research for detecting dynamical changes in electroencephalogram (EEG) signals, with no consideration of the variability of this complexity measure. An application of the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
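
    As background for records like this one, the Bandt-Pompe permutation entropy is straightforward to estimate. The following Python sketch (the function name and defaults are illustrative, not taken from the paper) counts the ordinal patterns of delay-embedded windows and normalizes the Shannon entropy of their frequencies by log(order!):

        import numpy as np
        from math import factorial

        def permutation_entropy(x, order=3, delay=1, normalize=True):
            """Bandt-Pompe permutation entropy of a 1-D series (minimal sketch)."""
            x = np.asarray(x, dtype=float)
            n = len(x) - (order - 1) * delay
            counts = {}
            for i in range(n):
                window = x[i : i + order * delay : delay]
                pattern = tuple(np.argsort(window))  # ordinal pattern of the window
                counts[pattern] = counts.get(pattern, 0) + 1
            p = np.array(list(counts.values()), dtype=float) / n
            h = -np.sum(p * np.log(p))               # Shannon entropy of the patterns
            return h / np.log(factorial(order)) if normalize else h

    On white noise this estimator approaches 1, while a monotone ramp gives 0, matching the qualitative behaviour such complexity measures are meant to capture.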

  1. APE: Authenticated Permutation-Based Encryption for Lightweight Cryptography

    DEFF Research Database (Denmark)

    Andreeva, Elena; Bilgin, Begül; Bogdanov, Andrey

    2015-01-01

    The domain of lightweight cryptography focuses on cryptographic algorithms for extremely constrained devices. It is very costly to avoid nonce reuse in such environments, because this requires either a hardware source of randomness, or non-volatile memory to store a counter. At the same time, a lot of cryptographic schemes actually require the nonce assumption for their security. In this paper, we propose APE as the first permutation-based authenticated encryption scheme that is resistant against nonce misuse. We formally prove that APE is secure, based on the security of the underlying permutation …, and Spongent. For any of these permutations, an implementation that supports both encryption and decryption requires less than 1.9 kGE and 2.8 kGE for 80-bit and 128-bit security levels, respectively.

  2. Improved triglyceride transesterification by circular permuted Candida antarctica lipase B.

    Science.gov (United States)

    Yu, Ying; Lutz, Stefan

    2010-01-01

    Lipases represent a versatile class of biocatalysts with numerous potential applications in industry, including the production of biodiesel via enzyme-catalyzed transesterification. In this article, we have investigated the performance of cp283, a variant of Candida antarctica lipase B (CALB) engineered by circular permutation, with a series of esters, as well as pure and complex triglycerides. In comparison with wild-type CALB, the permuted enzyme showed consistently higher catalytic activity (2.6- to 9-fold) for trans- and interesterification of the different substrates with 1-butanol and ethyl acetate as acyl acceptors. Differences in the observed rates for wild-type CALB and cp283 are believed to be related to changes in the rate-determining step of the catalytic cycle as a result of circular permutation.

  3. The Descent Set and Connectivity Set of a Permutation

    Science.gov (United States)

    Stanley, Richard P.

    2005-08-01

    The descent set D(w) of a permutation w of 1,2,...,n is a standard and well-studied statistic. We introduce a new statistic, the connectivity set C(w), and show that it is a kind of dual object to D(w). The duality is stated in terms of the inverse of a matrix that records the joint distribution of D(w) and C(w). We also give a variation involving permutations of a multiset and a q-analogue that keeps track of the number of inversions of w.

  4. Permutation entropy of fractional Brownian motion and fractional Gaussian noise

    International Nuclear Information System (INIS)

    Zunino, L.; Perez, D.G.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2008-01-01

    We have worked out theoretical curves for the permutation entropy of fractional Brownian motion and fractional Gaussian noise by using the Bandt and Shiha [C. Bandt, F. Shiha, J. Time Ser. Anal. 28 (2007) 646] theoretical predictions for their corresponding relative frequencies. Comparisons with numerical simulations show an excellent agreement. Furthermore, the entropy gap in the transition between these processes, observed previously via numerical results, has been theoretically validated here. We have also analyzed the behaviour of the permutation entropy of fractional Gaussian noise for different time delays.

  5. Accelerating permutation testing in voxel-wise analysis through subspace tracking: A new plugin for SnPM.

    Science.gov (United States)

    Gutierrez-Barragan, Felipe; Ithapu, Vamsi K; Hinrichs, Chris; Maumet, Camille; Johnson, Sterling C; Nichols, Thomas E; Singh, Vikas

    2017-10-01

    Permutation testing is a non-parametric method for obtaining the max null distribution used to compute corrected p-values that provide strong control of false positives. In neuroimaging, however, the computational burden of running such an algorithm can be significant. We find that by viewing the permutation testing procedure as the construction of a very large permutation testing matrix, T, one can exploit structural properties derived from the data and the test statistics to reduce the runtime under certain conditions. In particular, we see that T is low-rank plus a low-variance residual. This makes T a good candidate for low-rank matrix completion, where only a very small number of entries of T (∼0.35% of all entries in our experiments) have to be computed to obtain a good estimate. Based on this observation, we present RapidPT, an algorithm that efficiently recovers the max null distribution commonly obtained through regular permutation testing in voxel-wise analysis. We present an extensive validation on a synthetic dataset and four varying sized datasets against two baselines: Statistical NonParametric Mapping (SnPM13) and a standard permutation testing implementation (referred to as NaivePT). We find that RapidPT achieves its best runtime performance on medium sized datasets (50≤n≤200), with speedups of 1.5×-38× (vs. SnPM13) and 20×-1000× (vs. NaivePT). For larger datasets (n≥200) RapidPT outperforms NaivePT (6×-200×) on all datasets, and provides large speedups (2×-15×) over SnPM13 when more than 10,000 permutations are needed. The implementation is a standalone toolbox and is also integrated within SnPM13, able to leverage multi-core architectures when available.
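
    The baseline that RapidPT accelerates can be sketched briefly. The following Python fragment is a naive max-null permutation test in the NaivePT style for a subjects-by-voxels data matrix; the mean-difference statistic and all names are illustrative, and RapidPT's low-rank matrix-completion step is deliberately not reproduced here:

        import numpy as np

        def max_null_two_sample(data, labels, n_perm=1000, seed=None):
            """data: (subjects, voxels) array; labels: boolean group indicator.
            Returns observed statistics, the max-null distribution, and
            FWER-corrected p-values (minimal sketch)."""
            rng = np.random.default_rng(seed)
            labels = np.asarray(labels, dtype=bool)

            def stat(lab):
                # Mean difference per voxel; a t-statistic is more typical.
                return data[lab].mean(axis=0) - data[~lab].mean(axis=0)

            observed = stat(labels)
            max_null = np.empty(n_perm)
            for b in range(n_perm):
                max_null[b] = np.abs(stat(rng.permutation(labels))).max()
            # Corrected p-value: fraction of permutations whose maximum exceeds the voxel.
            exceed = (max_null[None, :] >= np.abs(observed)[:, None]).sum(axis=1)
            return observed, max_null, (1 + exceed) / (n_perm + 1)

    Each permutation contributes one entry of the max-null distribution; RapidPT's observation is that the full matrix of permuted statistics is approximately low-rank, so most of those entries never need to be computed explicitly.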

  6. Modulation of frustration in folding by sequence permutation

    Science.gov (United States)

    Nobrega, R. Paul; Arora, Karunesh; Kathuria, Sagar V.; Graceffa, Rita; Barrea, Raul A.; Guo, Liang; Chakravarthy, Srinivas; Bilsel, Osman; Irving, Thomas C.; Brooks, Charles L.; Matthews, C. Robert

    2014-01-01

    Folding of globular proteins can be envisioned as the contraction of a random coil unfolded state toward the native state on an energy surface rough with local minima trapping frustrated species. These substructures impede productive folding and can serve as nucleation sites for aggregation reactions. However, little is known about the relationship between frustration and its underlying sequence determinants. Chemotaxis response regulator Y (CheY), a 129-amino acid bacterial protein, has been shown previously to populate an off-pathway kinetic trap in the microsecond time range. The frustration has been ascribed to premature docking of the N- and C-terminal subdomains or, alternatively, to the formation of an unproductive local-in-sequence cluster of branched aliphatic side chains, isoleucine, leucine, and valine (ILV). The roles of the subdomains and ILV clusters in frustration were tested by altering the sequence connectivity using circular permutations. Surprisingly, the stability and buried surface area of the intermediate could be increased or decreased depending on the location of the termini. Comparison with the results of small-angle X-ray scattering experiments and simulations points to the accelerated formation of a more compact, on-pathway species for the more stable intermediate. The effect of chain connectivity in modulating the structures and stabilities of the early kinetic traps in CheY is better understood in terms of the ILV cluster model. However, the subdomain model captures the requirement for an intact N-terminal domain to access the native conformation. Chain entropy and aliphatic-rich sequences play crucial roles in biasing the early events leading to frustration in the folding of CheY. PMID:25002512

  7. A new permutational behaviour of spin -3/2 states

    International Nuclear Information System (INIS)

    Jayaraman, J.; Nobre, M.A.S.

    1982-01-01

    A new permutational behaviour of spin-3/2 states under the symmetric group S3, defined solely on the spin-3/2 space, is demonstrated. The transposition elements of S3 are expressed succinctly in terms of the squares of the spin-3/2 matrices. (Author)

  8. The Query Complexity of Finding a Hidden Permutation

    DEFF Research Database (Denmark)

    Afshani, Peyman; Agrawal, Manindra; Doerr, Benjamin

    2012-01-01

    We study the query complexity of determining a hidden permutation. More specifically, we study the problem of learning a secret (z,π) consisting of a binary string z of length n and a permutation π of [n]. The secret must be unveiled by asking queries x ∈ {0,1}^n, and for each query asked we are returned the score fz,π(x) defined as fz,π(x) := max { i ∈ [0..n] : z(π(j)) = x(π(j)) for all j ≤ i }, i.e., the length of the longest common prefix of x and z with respect to π. The goal is to minimize the number of queries asked. Our main results are matching upper and lower bounds for this problem, both for deterministic and randomized query schemes, with applications in many other query complexity problems.

  9. A Fast Algorithm for Generating Permutation Distribution of Ranks in ...

    African Journals Online (AJOL)

    The algorithm is based on combinatorial methods for finding the generating function of the distribution of the ranks. This further gives insight into the permutation distribution of a rank statistic. The algorithm is implemented with the aid of the computer algebra system Mathematica. Key words: Combinatorics, generating function, …
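
    The record gives no further details, but the generating-function idea can be illustrated for the Wilcoxon rank-sum statistic: the q^m layer of the coefficients of prod_{r=1}^{m+n} (1 + q x^r) counts the m-subsets of ranks {1,...,m+n} by their rank sum. A minimal Python sketch (all names illustrative, and a different route than the Mathematica implementation the record mentions):

        from math import comb

        def rank_sum_counts(m, n):
            """counts[w] = number of m-subsets of ranks {1..m+n} with rank sum w,
            built by 0/1-knapsack style polynomial multiplication (sketch)."""
            N = m + n
            max_sum = N * (N + 1) // 2
            dp = [[0] * (max_sum + 1) for _ in range(m + 1)]
            dp[0][0] = 1
            for r in range(1, N + 1):             # multiply in the factor (1 + q x^r)
                for k in range(min(r, m), 0, -1):
                    row, prev = dp[k], dp[k - 1]
                    for s in range(max_sum, r - 1, -1):
                        row[s] += prev[s - r]
            assert sum(dp[m]) == comb(N, m)       # all m-subsets accounted for
            return dp[m]

    Dividing the counts by comb(m+n, m) gives the exact permutation distribution of the rank-sum statistic, from which exact p-values follow.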

  10. Testing for changes using permutations of U-statistics

    Czech Academy of Sciences Publication Activity Database

    Horvath, L.; Hušková, Marie

    2005-01-01

    Roč. 2005, č. 128 (2005), s. 351-371 ISSN 0378-3758 R&D Projects: GA ČR GA201/00/0769 Institutional research plan: CEZ:AV0Z10750506 Keywords : U-statistics * permutations * change-point * weighted approximation * Brownian bridge Subject RIV: BD - Theory of Information Impact factor: 0.481, year: 2005

  11. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS

    CERN Document Server

    O'Gorman, Thomas W

    2012-01-01

    Provides the tools needed to successfully perform adaptive tests across a broad range of datasets. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS illustrates the power of adaptive tests and showcases their ability to adjust the testing method to suit a particular set of data. The book utilizes state-of-the-art software to demonstrate the practicality and benefits for data analysis in various fields of study. Beginning with an introduction, the book moves on to explore the underlying concepts of adaptive tests, including: Smoothing methods and normalizing transforma…

  12. Generalized composite multiscale permutation entropy and Laplacian score based rolling bearing fault diagnosis

    Science.gov (United States)

    Zheng, Jinde; Pan, Haiyang; Yang, Shubao; Cheng, Junsheng

    2018-01-01

    Multiscale permutation entropy (MPE) is a recently proposed nonlinear dynamic method for measuring the randomness and detecting nonlinear dynamic changes of time series; it can be used effectively to extract nonlinear dynamic fault features from vibration signals of rolling bearings. To address the drawback of the coarse-graining process in MPE, an improved MPE method called generalized composite multiscale permutation entropy (GCMPE) is proposed in this paper. The influence of parameters on GCMPE and its comparison with MPE are also studied by analyzing simulated data. GCMPE was applied to fault feature extraction from vibration signals of rolling bearings; then, based on GCMPE, Laplacian score based feature selection, and a particle swarm optimization based support vector machine, a new fault diagnosis method for rolling bearings is put forward. Finally, the proposed method was applied to experimental data from rolling bearings. The analysis results show that the proposed method can effectively realize fault diagnosis of rolling bearings and has a higher fault recognition rate than existing methods.
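
    The composite coarse-graining step that distinguishes GCMPE-style methods from plain MPE is easy to sketch: at scale tau there are tau shifted coarse-grained series, one per starting offset, and their entropies are averaged rather than relying on a single coarse-graining. A minimal Python sketch, assuming a permutation_entropy function such as the one sketched earlier in this listing:

        import numpy as np

        def composite_multiscale_pe(x, scale, order=3, delay=1):
            """Average PE over the `scale` offset coarse-grainings (sketch)."""
            x = np.asarray(x, dtype=float)
            entropies = []
            for offset in range(scale):
                n = (len(x) - offset) // scale
                cg = x[offset : offset + n * scale].reshape(n, scale).mean(axis=1)
                entropies.append(permutation_entropy(cg, order=order, delay=delay))
            return float(np.mean(entropies))

    Averaging over all offsets uses every sample at every scale, which is exactly what reduces the variance of the entropy estimate for short records.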

  13. Differential cryptanalysis of round-reduced PRINTcipher: Computing roots of permutations

    DEFF Research Database (Denmark)

    Abdelraheem, Mohamed Ahmed; Leander, Gregor; Zenner, Erik

    2011-01-01

    … permutations, we show in this paper that this is not the case. We present two differential attacks that successfully break about half of the rounds of PRINTcipher, thereby giving the first cryptanalytic result on the cipher. In addition, one of the attacks is of independent interest, since it uses a mechanism to compute roots of permutations. If an attacker knows the many-round permutation πr, the algorithm can be used to compute the underlying single-round permutation π. This technique is thus relevant for all iterative ciphers that deploy key-dependent permutations. In the case of PRINTcipher, it can be used …

  14. A Weak Quantum Blind Signature with Entanglement Permutation

    Science.gov (United States)

    Lou, Xiaoping; Chen, Zhigang; Guo, Ying

    2015-09-01

    Motivated by the permutation encryption algorithm, a weak quantum blind signature (QBS) scheme is proposed. It involves three participants, the sender Alice, the signatory Bob and the trusted entity Charlie, in four phases: an initializing phase, a blinding phase, a signing phase and a verifying phase. In a small-scale quantum computation network, Alice blinds the message based on a quantum entanglement permutation encryption algorithm that embraces a chaotic position string. Bob signs the blinded message with private parameters shared beforehand, while Charlie verifies the signature's validity and recovers the original message. Analysis shows that the proposed scheme achieves secure blindness for the signer and traceability for the message owner, with the aid of the authentic arbitrator, who plays a crucial role when a dispute arises. In addition, the signature can neither be forged nor disavowed by malicious attackers. It has wide application in e-voting and e-payment systems, etc.

  15. Reconstruction algorithm for error diffused halftones using binary permutation filters

    Science.gov (United States)

    Kim, Yeong-Taeg; Arce, Gonzalo R.

    1994-09-01

    This paper describes an inverse halftoning algorithm to reconstruct a continuous-tone image given its error diffused halftone. We develop a modular class of nonlinear filters, denoted as a class of binary permutation filters, which can reconstruct the continuous-tone information while preserving image details and edges that provide important visual cues. The proposed nonlinear reconstruction algorithm is based on the space-rank ordering of the halftone samples, which is provided by the multiset permutation of the 'on' pixels in a halftone observation window. By varying the space-rank order information utilized in the estimate, for a given window size, we obtain a wide range of filters. A constrained LMS type algorithm is employed to design optimal reconstruction filters that minimize the reconstruction mean squared error. We present simulations showing that the proposed class of filters is modular, robust to image source characteristics, and that the results produce high visual quality image reconstruction.

  16. A random-permutations-based approach to fast read alignment.

    Science.gov (United States)

    Lederman, Roy

    2013-01-01

    Read alignment is a computational bottleneck in some sequencing projects. Most of the existing software packages for read alignment are based on two algorithmic approaches: prefix-trees and hash-tables. We propose a new approach to read alignment using random permutations of strings. We present a prototype implementation and experiments performed with simulated and real reads of human DNA. Our experiments indicate that this permutations-based prototype is several times faster than comparable programs for fast read alignment and that it aligns more reads correctly. This approach may lead to improved speed, sensitivity, and accuracy in read alignment. The algorithm can also be used for specialized alignment applications and it can be extended to other related problems, such as assembly. More information: http://alignment.commons.yale.edu.

  17. Symbolic Detection of Permutation and Parity Symmetries of Evolution Equations

    KAUST Repository

    Alghamdi, Moataz

    2017-06-18

    We introduce a symbolic computational approach to detecting all permutation and parity symmetries in any general evolution equation, and to generating associated invariant polynomials, from given monomials, under the action of these symmetries. Traditionally, discrete point symmetries of differential equations are systematically found by solving complicated nonlinear systems of partial differential equations; in the presence of Lie symmetries, the process can be simplified further. Here, we show how to find parity- and permutation-type discrete symmetries purely based on algebraic calculations. Furthermore, we show that such symmetries always form groups, thereby allowing for the generation of new group-invariant conserved quantities from known conserved quantities. This work also contains an implementation of the said results in Mathematica. In addition, it includes, as a motivation for this work, an investigation of the connection between variational symmetries, described by local Lie groups, and conserved quantities in Hamiltonian systems.

  18. Information sets as permutation cycles for quadratic residue codes

    Directory of Open Access Journals (Sweden)

    Richard A. Jenson

    1982-01-01

    The two cases p=7 and p=23 are the only known cases where the automorphism group of the [p+1, (p+1)/2] extended binary quadratic residue code, Q(p), properly contains PSL(2,p). These codes have some of their information sets represented as permutation cycles from Aut(Q(p)). Analysis proves that all information sets of Q(7) are so represented but those of Q(23) are not.

  19. Permutation groups with bounded movement having maximum orbits

    Indian Academy of Sciences (India)

    Let G be a permutation group on a set Ω with no fixed points in Ω, and let m be a positive integer. If no element of G moves any subset of Ω by more than m points (that is, |Γ^g \ Γ| ≤ m for every Γ ⊆ Ω and g ∈ G), and also if each G-orbit has size greater than 2, then the number t of G-orbits in Ω is at most (3m − 1)/2. Moreover, …

  20. Circularly permuted green fluorescent proteins engineered to sense Ca2+

    OpenAIRE

    Nagai, Takeharu; Sawano, Asako; Park, Eun Sun; Miyawaki, Atsushi

    2001-01-01

    To visualize Ca2+-dependent protein–protein interactions in living cells by fluorescence readouts, we used a circularly permuted green fluorescent protein (cpGFP), in which the amino and carboxyl portions had been interchanged and reconnected by a short spacer between the original termini. The cpGFP was fused to calmodulin and its target peptide, M13. The chimeric protein, which we have named “pericam,” was fluorescent and its spectral properties changed reversibly...

  1. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.

  2. A Studentized Permutation Test for the Comparison of Spatial Point Patterns

    DEFF Research Database (Denmark)

    Hahn, Ute

    A new test is proposed for the hypothesis that two (or more) observed point patterns are realizations of the same spatial point process model. To this end, the point patterns are divided into disjoint quadrats, on each of which an estimate of Ripley's K-function is calculated. The two groups of empirical K-functions are compared by a permutation test using a studentized test statistic. The proposed test performs convincingly in terms of empirical level and power in a simulation study, even for point patterns where the K-function estimates on neighboring subsamples are not strictly exchangeable …

  3. Refined composite multiscale weighted-permutation entropy of financial time series

    Science.gov (United States)

    Zhang, Yongping; Shang, Pengjian

    2018-04-01

    For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimates fluctuate strongly under slight variations of the data locations and differ significantly only for time series of different lengths. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing RCMWPE with other methods on both synthetic data and financial time series shows that it not only inherits the advantages of MWPE but is also less sensitive to the data locations, more stable, and much less dependent on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on daily price return series from Asian and European stock markets. There are significant differences between the Asian and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method is supported by simulations on generated and real data. It could be applied in a variety of fields to quantify the complexity of systems over multiple scales more accurately.
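
    The weighting step that separates the WPE family from ordinary permutation entropy can be shown compactly: each ordinal pattern contributes in proportion to the variance of its window, so amplitude information is retained. A minimal Python sketch following the common WPE convention (not the authors' code; the refinement and composite multiscale layers are omitted):

        import numpy as np
        from math import factorial

        def weighted_permutation_entropy(x, order=3, delay=1):
            """Variance-weighted ordinal-pattern entropy of a 1-D series (sketch;
            assumes the series is not constant)."""
            x = np.asarray(x, dtype=float)
            n = len(x) - (order - 1) * delay
            weights, total = {}, 0.0
            for i in range(n):
                w = x[i : i + order * delay : delay]
                wt = w.var()                      # amplitude information of the window
                key = tuple(np.argsort(w))
                weights[key] = weights.get(key, 0.0) + wt
                total += wt
            p = np.array(list(weights.values())) / total
            p = p[p > 0]
            return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))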

  4. A hybrid genetic algorithm for the distributed permutation flowshop scheduling problem

    Directory of Open Access Journals (Sweden)

    Jian Gao

    2011-08-01

    The Distributed Permutation Flowshop Scheduling Problem (DPFSP) is a newly proposed scheduling problem that generalizes the classical permutation flow shop scheduling problem. The DPFSP is NP-hard in general, and studies on algorithms for solving it are still at an early stage. In this paper, we propose a GA-based algorithm, denoted GA_LS, for solving this problem with the objective of minimizing the maximum completion time. In the proposed GA_LS, crossover and mutation operators are designed to suit the representation of DPFSP solutions, where a set of partial job sequences is employed. Furthermore, GA_LS utilizes an efficient local search method to explore neighboring solutions. The local search method uses three proposed rules that move jobs within a factory or between two factories. Intensive experiments on the benchmark instances, extended from the Taillard instances, are carried out. The results indicate that the proposed hybrid genetic algorithm obtains better solutions than all existing algorithms for the DPFSP: it achieves a better relative percentage deviation, and the differences in the results are statistically significant. Best-known solutions for most instances are also updated by our algorithm. Moreover, we show the efficiency of GA_LS by comparing it with similar genetic algorithms using the existing local search methods.

  5. Developing the Business Modelling Method

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, B; Shishkov, Boris

    2011-01-01

    Currently, business modelling is an art, instead of a science, as no scientific method for business modelling exists. This, and the lack of using business models altogether, causes many projects to end after the pilot stage, unable to fulfil their apparent promise. We propose a structured method to …

  6. A computationally efficient hypothesis testing method for epistasis analysis using multifactor dimensionality reduction.

    Science.gov (United States)

    Pattin, Kristine A; White, Bill C; Barney, Nate; Gui, Jiang; Nelson, Heather H; Kelsey, Karl T; Andrew, Angeline S; Karagas, Margaret R; Moore, Jason H

    2009-01-01

    Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free data mining method for detecting, characterizing, and interpreting epistasis in the absence of significant main effects in genetic and epidemiologic studies of complex traits such as disease susceptibility. The goal of MDR is to change the representation of the data using a constructive induction algorithm to make nonadditive interactions easier to detect using any classification method such as naïve Bayes or logistic regression. Traditionally, MDR-constructed variables have been evaluated with a naïve Bayes classifier combined with 10-fold cross validation to obtain an estimate of predictive accuracy or generalizability of epistasis models, and permutation testing has been used to statistically evaluate the significance of models obtained through MDR. The advantage of permutation testing is that it controls for false positives due to multiple testing. The disadvantage is that permutation testing is computationally expensive. This is an important issue that arises in the context of detecting epistasis on a genome-wide scale. The goal of the present study was to develop and evaluate several alternatives to large-scale permutation testing for assessing the statistical significance of MDR models. Using data simulated from 70 different epistasis models, we compared the power and type I error rate of MDR using a 1,000-fold permutation test with hypothesis testing using an extreme value distribution (EVD). We find that this new hypothesis testing method provides a reasonable alternative to the computationally expensive 1,000-fold permutation test and is 50 times faster. We then demonstrate this new method by applying it to a genetic epidemiology study of bladder cancer susceptibility that was previously analyzed using MDR and assessed using a 1,000-fold permutation test.
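
    The EVD shortcut the record describes can be illustrated in a few lines: instead of running the full 1,000-fold permutation test, fit a generalized extreme value distribution to a small number of permuted statistics and read the p-value from its upper tail. A hypothetical SciPy sketch (the function name, and the choice of, say, 20 permutations, are illustrative assumptions):

        import numpy as np
        from scipy import stats

        def evd_p_value(observed_stat, permuted_stats):
            """Fit a GEV to a small sample of permuted test statistics and return
            the tail probability of the observed statistic (sketch)."""
            c, loc, scale = stats.genextreme.fit(np.asarray(permuted_stats))
            return float(stats.genextreme.sf(observed_stat, c, loc=loc, scale=scale))

    Replacing 1,000 permutations with a few dozen is the kind of saving behind the roughly 50-fold speedup reported above.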

  7. Analysis of crude oil markets with improved multiscale weighted permutation entropy

    Science.gov (United States)

    Niu, Hongli; Wang, Jun; Liu, Cheng

    2018-03-01

    Entropy measures have recently been used extensively to study the complexity of nonlinear systems. Weighted permutation entropy (WPE) overcomes PE's neglect of the amplitude information of a time series and shows a distinctive ability to extract complexity information from data with abrupt changes in magnitude. The improved (sometimes called composite) multiscale (MS) method has the advantage of reducing errors and improving accuracy when evaluating multiscale entropy values of time series that are not long enough. In this paper, we combine the merits of WPE and improved MS to propose the improved multiscale weighted permutation entropy (IMWPE) method for investigating the complexity of a time series. The method is then validated on artificial data (white noise and 1/f noise) and on real market data for Brent and Daqing crude oil. Meanwhile, the complexity properties of the crude oil markets are explored for return series, for volatility series with multiple exponents, and for EEMD-produced intrinsic mode functions (IMFs), which represent different frequency components of the return series. Moreover, the instantaneous amplitude and frequency of Brent and Daqing crude oil are analyzed via the Hilbert transform applied to each IMF.

  8. Use of spatial symmetry in atomic--integral calculations: an efficient permutational approach

    International Nuclear Information System (INIS)

    Rouzo, H.L.

    1979-01-01

    The minimal number of independent nonzero atomic integrals that occur over arbitrarily oriented basis orbitals of the form R(r)·Y_lm(Ω) is theoretically derived. The corresponding method can be easily applied to any point group, including the molecular continuous groups C∞v and D∞h. On the basis of this theoretical lower bound, the efficiency of the permutational approach in generating sets of independent integrals is discussed. It is proved that lobe orbitals are always more efficient than the familiar Cartesian Gaussians, in the sense that GLOs provide the shortest integral lists. Moreover, it appears that the new axial GLOs often lead to a number of integrals equal to the theoretical lower bound previously defined. With AGLOs, the numbers of two-electron integrals to be computed, stored, and processed are divided by factors of 2.9 (NH3), 4.2 (C5H5), and 3.6 (C6H6) with reference to the corresponding CGTO calculations. Remembering that in the permutational approach atomic integrals are directly computed without any four-index transformation, it appears that its utilization in connection with AGLOs provides one of the most powerful tools for treating symmetrical species. 34 references

  9. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th…

  10. Simultaneous representations of semilattices by lattices with permutable congruences

    CERN Document Server

    Tuma, J; Tuma, Jiri; Wehrung, Friedrich

    2001-01-01

    The Congruence Lattice Problem (CLP), stated by R. P. Dilworth in the forties, asks whether every distributive {∨, 0}-semilattice S is isomorphic to the semilattice Conc L of compact congruences of a lattice L. While this problem is still open, many partial solutions have been obtained, positive and negative as well. The solution to CLP is known to be positive for all S such that |S| ≤ ℵ1; furthermore, one can then take L with permutable congruences. This contrasts with the case where |S| ≥ ℵ2, where there are counterexamples S for which L cannot be, for example, sectionally complemented. We prove in this paper that the lattices of these counterexamples cannot have permutable congruences as well. We also isolate finite, combinatorial analogues of these results. All the "finite" statements that we obtain are amalgamation properties of the Conc functor. The strongest known positive results, which originate in earlier work by the first author, imply tha…

  11. Application of the Permutation Entropy over the Heart Rate Variability for the Improvement of Electrocardiogram-based Sleep Breathing Pause Detection

    Directory of Open Access Journals (Sweden)

    Antonio G. Ravelo-García

    2015-02-01

    In this paper the permutation entropy (PE) obtained from heart rate variability (HRV) is analyzed in a statistical model. In this model we also integrate other feature extraction techniques: the cepstrum coefficients derived from the same HRV and a set of band powers obtained from the electrocardiogram-derived respiratory (EDR) signal. The aim of the model is to detect obstructive sleep apnea (OSA) events. For this purpose, we apply two statistical classification methods: Logistic Regression (LR) and Quadratic Discriminant Analysis (QDA). For testing the models we use seventy ECG recordings from the Physionet database, divided into equal-size learning and testing sets. Both sets consist of 35 recordings, each containing a single ECG signal. In our experiments we have found that the features extracted from the EDR signal present a sensitivity of 65.6% and specificity of 87.7% (AUC = 85) in the LR classifier, and a sensitivity of 59.4% and specificity of 90.3% (AUC = 83.9) in the QDA classifier. The HRV-based cepstrum coefficients present a sensitivity of 63.8% and specificity of 89.2% (AUC = 86) in the LR classifier, and a sensitivity of 67.2% and specificity of 86.8% (AUC = 86.9) in the QDA. Subsequent tests show that the contribution of the permutation entropy increases the performance of the classifiers, implying that the complexity of RR interval time series plays an important role in breathing-pause detection. In particular, when all features are used jointly, the quantification task reaches a sensitivity of 71.9% and specificity of 92.1% (AUC = 90.3) for LR. Similarly, for QDA the sensitivity is 75.1% and the specificity is 90.5% (AUC = 91.7).

  12. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van 't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  13. Graph modeling systems and methods

    Science.gov (United States)

    Neergaard, Mike

    2015-10-13

    An apparatus and a method for vulnerability and reliability modeling are provided. The method generally includes constructing a graph model of a physical network using a computer, the graph model including a plurality of terminating vertices to represent nodes in the physical network, a plurality of edges to represent transmission paths in the physical network, and a non-terminating vertex to represent a non-nodal vulnerability along a transmission path in the physical network. The method additionally includes evaluating the vulnerability and reliability of the physical network using the constructed graph model, wherein the vulnerability and reliability evaluation includes a determination of whether each terminating and non-terminating vertex represents a critical point of failure. The method can be utilized to evaluate a wide variety of networks, including power grid infrastructures, communication network topologies, and fluid distribution systems.

  14. Novel Approach for Lithium-Ion Battery On-Line Remaining Useful Life Prediction Based on Permutation Entropy

    Directory of Open Access Journals (Sweden)

    Luping Chen

    2018-04-01

    The degradation of lithium-ion batteries often leads to electrical system failure. Battery remaining useful life (RUL) prediction can effectively prevent this failure. Battery capacity is usually utilized as the health indicator (HI) for RUL prediction. However, battery capacity is difficult to obtain on-line from monitored parameters, so there is a great need for a simple, on-line prediction method. In this paper, permutation entropy (PE), extracted from the discharge voltage curve, is proposed as a novel HI for analyzing battery degradation. The similarity between PE and battery capacity is then judged by Pearson and Spearman correlation analyses. Experimental results illustrate the effectiveness and the excellent similarity performance of the novel HI for indicating battery fading. Furthermore, we propose a hybrid approach combining the variational mode decomposition (VMD) denoising technique with autoregressive integrated moving average (ARIMA) and GM(1,1) models for RUL prediction. Experimental results illustrate the accuracy of the proposed approach for lithium-ion battery on-line RUL prediction.

  15. Statistical Significance of the Contribution of Variables to the PCA Solution: An Alternative Permutation Strategy

    Science.gov (United States)

    Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.

    2011-01-01

    In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…

  16. Students' Errors in Solving the Permutation and Combination Problems Based on Problem Solving Steps of Polya

    Science.gov (United States)

    Sukoriyanto; Nusantara, Toto; Subanji; Chandra, Tjang Daniel

    2016-01-01

    This article was written based on the results of a study evaluating students' errors in solving permutation and combination problems in terms of Polya's problem-solving steps. Twenty-five students were asked to do four problems related to permutation and combination. The research results showed that the students still made mistakes in…

  17. Analyzing Permutations for AES-like Ciphers: Understanding ShiftRows

    DEFF Research Database (Denmark)

    Beierle, Christof; Jovanovic, Philipp; Lauridsen, Martin Mehl

    2015-01-01

    … attacks. After formalizing the concept of guaranteed trail weights, we show a range of equivalence results for permutation layers in this context. We prove that the trail weight analysis when using arbitrary word-wise permutations, with rotations as a special case, reduces to a consideration of a specific …

  18. Characterizing permuted block randomization as a big stick procedure

    Directory of Open Access Journals (Sweden)

    Vance W. Berger

    2016-04-01

    There are numerous approaches to randomizing patients to treatment groups in clinical trials. The most popular is permuted block randomization, and a newer and better class, which is gaining in popularity, is the so-called class of MTI procedures, which use a big stick to force the allocation sequence back towards balance when it reaches the MTI (maximally tolerated imbalance). Three prominent members of this class are the aptly named big stick procedure, Chen's procedure, and the maximal procedure. As we shall establish in this article, blocked randomization, though not typically cast as an MTI procedure, does in fact use the big stick as well. We shall argue that its weaknesses, which are well known, arise precisely from its improper use, bordering on outright abuse, of this big stick. Just as rocket-powered golf clubs add power to a golf swing, so too does the big stick used by blocked randomization hit with too much power. In addition, the big stick is invoked when it need not be, thereby resulting in the excessive prediction for which permuted blocks are legendary. We bridge the gap between the MTI procedures and block randomization by identifying a new randomization procedure intermediate between the two, namely one based on an excessively powerful big stick that is used only when needed. We shall then argue that the MTI procedures are all superior to this intermediate procedure by virtue of using a restrained big stick, and that this intermediate procedure is superior to block randomization by virtue of restraint in when the big stick is invoked. The transitivity property then completes our argument.
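
    The big stick rule itself is simple enough to state in code. A minimal Python sketch of an MTI allocation sequence (illustrative, not from the article): assign by fair coin until the imbalance between arms reaches the MTI, then force the next assignment toward balance.

        import random

        def big_stick_sequence(n, mti=3, seed=None):
            """Big stick randomization (sketch): fair coin inside the MTI bounds,
            forced correction at the bounds."""
            rng = random.Random(seed)
            seq, imbalance = [], 0            # imbalance = (#A - #B) so far
            for _ in range(n):
                if imbalance >= mti:
                    arm = 'B'                 # the big stick: force the lagging arm
                elif imbalance <= -mti:
                    arm = 'A'
                else:
                    arm = rng.choice('AB')
                imbalance += 1 if arm == 'A' else -1
                seq.append(arm)
            return seq

    Permuted blocks behave like this sketch with the additional requirement, harmful in the article's view, that the imbalance return to zero at every block boundary, which creates extra forced and therefore predictable assignments.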

  19. Permutation forests for modeling word order in machine translation

    NARCIS (Netherlands)

    Stanojević, M.

    2017-01-01

    In natural language, there is only a limited space for variation in the word order of linguistic productions. From a linguistic perspective, word order is the result of multiple application of syntactic recursive functions. These syntactic operations produce hierarchical syntactic structures, as …

  20. Circular Permutation of a Chaperonin Protein: Biophysics and Application to Nanotechnology

    Science.gov (United States)

    Paavola, Chad; Chan, Suzanne; Li, Yi-Fen; McMillan, R. Andrew; Trent, Jonathan

    2004-01-01

    We have designed five circular permutants of a chaperonin protein derived from the hyperthermophilic organism Sulfolobus shibatae. These permuted proteins were expressed in E. coli and are well-folded. Furthermore, all the permutants assemble into 18-mer double rings of the same form as the wild-type protein. We characterized the thermodynamics of folding for each permutant by both guanidine denaturation and differential scanning calorimetry. We also examined the assembly of chaperonin rings into higher order structures that may be used as nanoscale templates. The results show that circular permutation can be used to tune the thermodynamic properties of a protein template as well as facilitating the fusion of peptides, binding proteins or enzymes onto nanostructured templates.

  1. On permutation polynomials over finite fields: differences and iterations

    DEFF Research Database (Denmark)

    Anbar Meidl, Nurdagül; Odzak, Almasa; Patel, Vandita

    2017-01-01

    The Carlitz rank of a permutation polynomial f over a finite field Fq is a simple concept that was introduced in the last decade. Classifying permutations over Fq with respect to their Carlitz ranks has some advantages; for instance, f with a given Carlitz rank can be approximated by a rational linear transformation. In this note we present our recent results on the permutation behaviour of polynomials f+g, where f is a permutation over Fq of a given Carlitz rank, and g ∈ Fq[x] is of prescribed degree. We describe the relation of this problem to the well-known Chowla-Zassenhaus conjecture. We also study iterations of permutation polynomials by using the approximation property mentioned above.

  2. Permutation entropy analysis based on Gini-Simpson index for financial time series

    Science.gov (United States)

    Jiang, Jun; Shang, Pengjian; Zhang, Zuoquan; Li, Xuemei

    2017-11-01

    In this paper, a new coefficient is proposed with the objective of quantifying the level of complexity of financial time series. Approaching complexity measures from the viewpoint of entropy, we propose a new permutation entropy based on the Gini-Simpson index (GPE). The logistic map is used to generate simulated time series that demonstrate the accuracy and the extreme robustness of the GPE method, and we compare the effect of different GPE orders. We then apply GPE to US, European and Chinese stock markets in order to reveal the inner mechanism hidden in the original financial time series. Comparison of the stock-index results shows clear relationships among the different stock markets. Studying the complexity features and properties of financial time series in this way can provide valuable information for understanding the inner mechanism of financial markets.

  3. Permutation entropy analysis of financial time series based on Hill's diversity number

    Science.gov (United States)

    Zhang, Yali; Shang, Pengjian

    2017-12-01

    In this paper the permutation entropy based on Hill's diversity number (Nn,r) is introduced as a new way to assess the complexity of a complex dynamical system such as a stock market. We test the performance of this method on simulated data. Results show that Nn,r with appropriate parameters is more sensitive to changes in the system and describes the trends of complex systems clearly. In addition, we study stock closing-price series from six indices (three US and three Chinese) over different periods; Nn,r can quantify the changes in complexity of stock market data. Moreover, Nn,r yields richer information and reveals properties of the differences between the US and Chinese stock indices.

  4. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...

  5. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  6. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    Science.gov (United States)

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
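
    For readers outside SPSS, the MRPP statistic itself is easy to sketch: the observed delta is the group-size-weighted mean of within-group average pairwise distances, and its significance is judged against random relabellings. A minimal Python sketch (Euclidean distances, names illustrative, assumes at least two observations per group; not the SPSS macros the article provides):

        import numpy as np

        def mrpp(groups, n_perm=999, seed=None):
            """groups: list of (n_i, p) arrays. Returns observed delta and p-value."""
            rng = np.random.default_rng(seed)
            pooled = np.vstack(groups)
            sizes = [len(g) for g in groups]
            N = sum(sizes)

            def delta(labels):
                d = 0.0
                for k, n_k in enumerate(sizes):
                    pts = pooled[labels == k]
                    diff = pts[:, None, :] - pts[None, :, :]
                    dist = np.sqrt((diff ** 2).sum(-1))
                    # mean over distinct pairs, weighted by group size
                    d += (n_k / N) * dist[np.triu_indices(n_k, 1)].mean()
                return d

            labels = np.repeat(np.arange(len(groups)), sizes)
            obs = delta(labels)
            null = np.array([delta(rng.permutation(labels)) for _ in range(n_perm)])
            return obs, (1 + (null <= obs).sum()) / (n_perm + 1)  # small delta = tight groups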

  7. All ternary permutation constraint satisfaction problems parameterized above average have kernels with quadratic numbers of variables

    DEFF Research Database (Denmark)

    Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias

    2010-01-01

    A ternary Permutation-CSP is specified by a subset Π of the symmetric group S3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes the number of triples whose rearrangement (under α) follows a permutation in Π. We prove that all ternary Permutation-CSPs parameterized above average have kernels with quadratic numbers of variables.

  8. Efficiency and credit ratings: a permutation-information-theory analysis

    Science.gov (United States)

    Fernandez Bariviera, Aurelio; Zunino, Luciano; Belén Guercio, M.; Martinez, Lisana B.; Rosso, Osvaldo A.

    2013-08-01

    The role of credit rating agencies has been under severe scrutiny after the subprime crisis. In this paper we explore the relationship between credit ratings and informational efficiency of a sample of thirty-nine corporate bonds of US oil and energy companies from April 2008 to November 2012. For this purpose we use a powerful statistical tool, relatively new in the financial literature: the complexity-entropy causality plane. This representation space allows us to graphically classify the different bonds according to their degree of informational efficiency. We find that this classification agrees with the credit ratings assigned by Moody's. In particular, we detect the formation of two clusters, which correspond to the global categories of investment and speculative grades. Regarding the latter cluster, two subgroups reflect distinct levels of efficiency. Additionally, we also find an intriguing absence of correlation between informational efficiency and firm characteristics. This allows us to conclude that the proposed permutation-information-theory approach provides an alternative practical way to justify bond classification.

  9. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results from such tests. These results indicate that careful application of presently available transport theories can reproduce a remarkably wide variety of tokamak data reasonably well.

  10. Balancedness of Permutation Games and Envy-Free Allocations in Indivisible Good Economies

    NARCIS (Netherlands)

    Klijn, F.; Tijs, S.H.; Hamers, H.J.M.

    1999-01-01

    We present a simple proof of the balancedness of permutation games. In the proof we use the existence of envy-free allocations in economies with indivisible objects, quasi-linear utility functions, and an amount of money.

  11. A mixed integer linear programming formulation for low discrepancy consecutive k-sums permutation problem

    Directory of Open Access Journals (Sweden)

    Bogdanović Milena

    2017-01-01

    In this paper, the low discrepancy consecutive k-sums permutation problem is considered. A mixed integer linear programming (MILP) formulation with a moderate number of variables and constraints is proposed. The correctness proof shows that the proposed formulation is equivalent to the basic definition of the low discrepancy consecutive k-sums permutation problem. Computational results, obtained with the standard CPLEX solver, give 88 new exact values, which clearly show the usefulness of the proposed MILP formulation.

  12. Circularly permuted green fluorescent proteins engineered to sense Ca2+.

    Science.gov (United States)

    Nagai, T; Sawano, A; Park, E S; Miyawaki, A

    2001-03-13

    To visualize Ca(2+)-dependent protein-protein interactions in living cells by fluorescence readouts, we used a circularly permuted green fluorescent protein (cpGFP), in which the amino and carboxyl portions had been interchanged and reconnected by a short spacer between the original termini. The cpGFP was fused to calmodulin and its target peptide, M13. The chimeric protein, which we have named "pericam," was fluorescent and its spectral properties changed reversibly with the amount of Ca(2+), probably because of the interaction between calmodulin and M13 leading to an alteration of the environment surrounding the chromophore. Three types of pericam were obtained by mutating several amino acids adjacent to the chromophore. Of these, "flash-pericam" became brighter with Ca(2+), whereas "inverse-pericam" dimmed. On the other hand, "ratiometric-pericam" had an excitation wavelength changing in a Ca(2+)-dependent manner. All of the pericams expressed in HeLa cells were able to monitor free Ca(2+) dynamics, such as Ca(2+) oscillations in the cytosol and the nucleus. Ca(2+) imaging using high-speed confocal line-scanning microscopy and a flash-pericam allowed to detect the free propagation of Ca(2+) ions across the nuclear envelope. Then, free Ca(2+) concentrations in the nucleus and mitochondria were simultaneously measured by using ratiometric-pericams having appropriate localization signals, revealing that extra-mitochondrial Ca(2+) transients caused rapid changes in the concentration of mitochondrial Ca(2+). Finally, a "split-pericam" was made by deleting the linker in the flash-pericam. The Ca(2+)-dependent interaction between calmodulin and M13 in HeLa cells was monitored by the association of the two halves of GFP, neither of which was fluorescent by itself.

  13. Transportation Mode Detection Based on Permutation Entropy and Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2015-01-01

    With the increasing prevalence of GPS devices and mobile phones, transportation mode detection based on GPS data has become a hot topic in GPS trajectory data analysis. Transportation modes such as walking, driving, bus, and taxi denote an important characteristic of the mobile user. Longitude, latitude, speed, acceleration, and direction are usually used as features in transportation mode detection. In this paper, we first explore the possibility of using the Permutation Entropy (PE) of speed, a measure of the complexity and uncertainty of a GPS trajectory segment, as a feature for transportation mode detection. Second, we employ an Extreme Learning Machine (ELM) to distinguish GPS trajectory segments of different transportation modes. Finally, to evaluate the performance of the proposed method, we conduct experiments on the GeoLife dataset. Experimental results show that we can get more than 50% accuracy when using PE alone as a feature to characterize a trajectory sequence; PE can indeed be used effectively to detect transportation mode from GPS trajectories. The proposed method has much better accuracy and faster running time than methods based on the other features and an SVM classifier.

  14. A Multipopulation PSO Based Memetic Algorithm for Permutation Flow Shop Scheduling

    Directory of Open Access Journals (Sweden)

    Ruochen Liu

    2013-01-01

    Full Text Available The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves by the standard PSO, and each subpopulation is then updated using different local search schemes, such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.

  15. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    Science.gov (United States)

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves by the standard PSO, and each subpopulation is then updated using different local search schemes, such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.
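
    To make the evaluation and local search building blocks of such memetic algorithms concrete, here is a minimal sketch of PFSSP makespan computation together with a plain swap-neighbourhood descent; this simple search is a stand-in for the paper's VNS and IIS schemes, and the random instance is illustrative.

        import numpy as np

        def makespan(perm, proc):
            """proc[j, m] = processing time of job j on machine m; jobs run in order perm."""
            completion = np.zeros(proc.shape[1])
            for j in perm:
                for m in range(proc.shape[1]):
                    start = completion[m] if m == 0 else max(completion[m], completion[m - 1])
                    completion[m] = start + proc[j, m]
            return completion[-1]

        def swap_local_search(perm, proc):
            perm, best = list(perm), makespan(perm, proc)
            improved = True
            while improved:
                improved = False
                for a in range(len(perm) - 1):
                    for b in range(a + 1, len(perm)):
                        perm[a], perm[b] = perm[b], perm[a]
                        c = makespan(perm, proc)
                        if c < best:
                            best, improved = c, True
                        else:
                            perm[a], perm[b] = perm[b], perm[a]  # undo the swap
            return perm, best

        rng = np.random.default_rng(1)
        proc = rng.integers(1, 20, size=(8, 4)).astype(float)  # 8 jobs, 4 machines
        print(swap_local_search(range(8), proc))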

  16. Application of permutation to lossless compression of multispectral thematic mapper images

    Science.gov (United States)

    Arnavut, Ziya; Narumalani, Sunil

    1996-12-01

    The goal of data compression is to find shorter representations for any given data. In a data storage application, this is done in order to save storage space on an auxiliary device or, in the case of a communication scenario, to increase channel throughput. Because remotely sensed data require tremendous amounts of transmission and storage space, it is essential to find good algorithms that utilize the spatial and spectral characteristics of these data to compress them. A new technique is presented that uses spectral and spatial correlation to create orderly data for the compression of multispectral remote sensing data, such as those acquired by the Landsat Thematic Mapper (TM) sensor system. The method described simply compresses one of the bands using standard Joint Photographic Experts Group (JPEG) compression, and then orders the next band's data with respect to the previous sorting permutation. Then, the move-to-front coding technique is used to lower the source entropy before actually encoding the data. Owing to the correlation between visible bands of TM images, it was observed that this method yields a tremendous gain on these bands (on average 0.3 to 0.5 bits/pixel compared with lossless JPEG) and can be successfully used for multispectral images where the spectral distances between bands are close.
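
    A minimal sketch of the ordering idea follows: derive a sorting permutation from one band, reorder the next (spectrally correlated) band with it, and then apply move-to-front (MTF) coding so that small symbols dominate and the source entropy drops. The synthetic eight-bit "bands" below stand in for TM imagery.

        import numpy as np

        def move_to_front(seq, alphabet_size=256):
            table = list(range(alphabet_size))
            out = []
            for s in seq:
                i = table.index(s)
                out.append(i)
                table.insert(0, table.pop(i))  # move the symbol to the front
            return out

        def entropy_bits(symbols):
            p = np.bincount(symbols, minlength=256) / len(symbols)
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(2)
        band1 = rng.integers(0, 256, 4096)
        band2 = np.clip(band1 + rng.integers(-3, 4, 4096), 0, 255)  # correlated band

        order = np.argsort(band1, kind="stable")  # sorting permutation from band 1
        reordered = band2[order]                  # band 2 becomes nearly sorted
        mtf = np.array(move_to_front(reordered.tolist()))
        print(entropy_bits(band2), "->", entropy_bits(mtf))  # bits/pixel before vs after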

  17. A Scalable Permutation Approach Reveals Replication and Preservation Patterns of Network Modules in Large Datasets.

    Science.gov (United States)

    Ritchie, Scott C; Watts, Stephen; Fearnley, Liam G; Holt, Kathryn E; Abraham, Gad; Inouye, Michael

    2016-07-01

    Network modules, topologically distinct groups of edges and nodes that are preserved across datasets, can reveal common features of organisms, tissues, cell types, and molecules. Many statistics to identify such modules have been developed, but testing their significance requires heuristics. Here, we demonstrate that current methods for assessing module preservation are systematically biased and produce skewed p values. We introduce NetRep, a rapid and computationally efficient method that uses a permutation approach to score module preservation without assuming data are normally distributed. NetRep produces unbiased p values and can distinguish between true and false positives during multiple hypothesis testing. We use NetRep to quantify preservation of gene coexpression modules across murine brain, liver, adipose, and muscle tissues. Complex patterns of multi-tissue preservation were revealed, including a liver-derived housekeeping module that displayed adipose- and muscle-specific association with body weight. Finally, we demonstrate the broader applicability of NetRep by quantifying preservation of bacterial networks in gut microbiota between men and women. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
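
    NetRep's own statistics and interface are not reproduced here, but the underlying permutation logic can be sketched as follows: score a module in the test network, build the null by drawing random node sets of the same size, and report the unbiased permutation p-value (r + 1)/(n + 1). The statistic (mean intramodular edge weight) and the toy data are assumptions for illustration only.

        import numpy as np

        def preservation_pvalue(adj_test, module_nodes, n_perm=10000, seed=0):
            rng = np.random.default_rng(seed)
            idx = np.asarray(module_nodes)
            observed = adj_test[np.ix_(idx, idx)].mean()
            r = 0
            for _ in range(n_perm):  # null: random node sets of the same size
                perm = rng.choice(adj_test.shape[0], size=len(idx), replace=False)
                if adj_test[np.ix_(perm, perm)].mean() >= observed:
                    r += 1
            return (r + 1) / (n_perm + 1)  # unbiased permutation p-value

        rng = np.random.default_rng(3)
        adj = np.abs(rng.standard_normal((100, 100)))
        adj = (adj + adj.T) / 2                  # symmetric toy "test network"
        module = [0, 1, 2, 3, 4]
        adj[np.ix_(module, module)] += 2.0       # plant a strongly preserved module
        print(preservation_pvalue(adj, module))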

  18. Imbalance properties of centre-stratified permuted-block and complete randomisation for several treatments in a clinical trial.

    Science.gov (United States)

    Anisimov, Vladimir V; Yeung, Wai Y; Coad, D Stephen

    2017-04-15

    Randomisation schemes are rules that assign patients to treatments in a clinical trial. Many of these schemes have the common aim of maintaining balance in the numbers of patients across treatment groups. The properties of imbalance that have been investigated in the literature are based on two treatment groups. In this paper, their properties for K > 2 treatments are studied for two randomisation schemes: centre-stratified permuted-block and complete randomisation. For both randomisation schemes, analytical approaches are investigated assuming that the patient recruitment process follows a Poisson-gamma model. When the number of centres involved in a trial is large, the imbalance for both schemes is approximated by a multivariate normal distribution. The accuracy of the approximations is assessed by simulation. A test for treatment differences is also considered for normal responses, and numerical values for its power are presented for centre-stratified permuted-block randomisation. To speed up the calculations, a combined analytical/approximate approach is used. Copyright © 2016 John Wiley & Sons, Ltd.
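
    A minimal sketch of centre-stratified permuted-block randomisation for K treatments, the first of the two schemes studied: each centre allocates patients from its own stream of shuffled blocks, so within-centre imbalance is bounded by the block size. The block factor and arrival pattern below are illustrative.

        import random

        def permuted_block_stream(treatments, block_factor, seed):
            rng = random.Random(seed)
            while True:
                block = list(treatments) * block_factor  # e.g. K=3, factor 2: block of 6
                rng.shuffle(block)
                yield from block

        K = 3
        centres = {c: permuted_block_stream(range(K), 2, seed=c) for c in range(4)}
        arrivals = [(patient, patient % 4) for patient in range(24)]  # (patient, centre)
        allocation = [(p, c, next(centres[c])) for p, c in arrivals]
        counts = [[sum(1 for _, c2, t in allocation if c2 == c and t == t_arm)
                   for t_arm in range(K)] for c in range(4)]
        print(counts)  # per-centre treatment counts stay nearly balanced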

  19. A Symmetric Plaintext-Related Color Image Encryption System Based on Bit Permutation

    Directory of Open Access Journals (Sweden)

    Shuting Cai

    2018-04-01

    Full Text Available Recently, a variety of chaos-based image encryption algorithms adopting the traditional permutation-diffusion structure have been suggested. Most of these algorithms cannot efficiently resist the powerful chosen-plaintext and chosen-ciphertext attacks because of their low sensitivity to the plain image. This paper presents a symmetric color image encryption system based on a plaintext-related random access bit-permutation mechanism (PRRABPM). In the proposed scheme, a new random access bit-permutation mechanism is used to shuffle the 3D bit matrix transformed from an original color image, making the RGB components of the color image interact with each other. Furthermore, the key streams used in the random access bit-permutation operation depend strongly on the plain image in an ingenious way. Therefore, the encryption system is sensitive to tiny differences in the key and in the original image, which means that it can efficiently resist chosen-plaintext and chosen-ciphertext attacks. In the diffusion stage, the previous encrypted pixel is used to encrypt the current pixel. The simulation results show that even though the permutation-diffusion operation in our encryption scheme is performed only once, the proposed algorithm has favorable security performance. Considering real-time applications, the encryption speed can be further improved.
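
    The PRRABPM scheme itself is not reproduced here, but its two ingredients can be sketched in toy form: a key stream seeded jointly by the secret key and a digest of the plain image (making the permutation plaintext-related), and a global bit-level permutation that mixes all colour channels together. In a real plaintext-related scheme the digest travels with the key material so that decryption can regenerate the permutation.

        import hashlib
        import numpy as np

        def encrypt(img, key):
            # seed depends on both the key and the plain image -> plaintext-related
            digest = hashlib.sha256(key + img.tobytes()).digest()
            seed = int.from_bytes(digest[:8], "big")
            rng = np.random.default_rng(seed)
            bits = np.unpackbits(img.reshape(-1))   # flatten the RGB image to a bit stream
            cipher = np.packbits(bits[rng.permutation(bits.size)]).reshape(img.shape)
            return cipher

        rng = np.random.default_rng(4)
        img = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
        img2 = img.copy()
        img2[0, 0, 0] ^= 1                          # flip a single plain-image bit
        c1, c2 = encrypt(img, b"secret-key"), encrypt(img2, b"secret-key")
        print((c1 != c2).mean())                    # almost all cipher bytes change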

  20. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  1. Mass textures and wolfenstein parameters from breaking the flavour permutational symmetry

    Energy Technology Data Exchange (ETDEWEB)

    Mondragon, A; Rivera, T. [Instituto de Fisica, Universidad Nacional Autonoma de Mexico, Mexico D.F. (Mexico); Rodriguez Jauregui, E. [Deutsches Elektronen-Synchrotron, Theory Group, Hamburg (Germany)

    2001-12-01

    We give an overview of recent progress in the phenomenological study of quark mass matrices, quark flavour mixings and CP-violation, with emphasis on the possibility of an underlying discrete, flavour permutational symmetry and its breaking, from which realistic models of mass generation could be built. The quark mixing angles and the CP-violating phase, as well as the Wolfenstein parameters, are given in terms of four quark mass ratios and only two parameters (Z^{1/2}, φ) characterizing the symmetry-breaking pattern. Excellent agreement with all current experimental data is found.

  2. A Permutation-Randomization Approach to Test the Spatial Distribution of Plant Diseases.

    Science.gov (United States)

    Lione, G; Gonthier, P

    2016-01-01

    The analysis of the spatial distribution of plant diseases requires the availability of trustworthy geostatistical methods. The mean distance tests (MDT) are here proposed as a series of permutation and randomization tests to assess the spatial distribution of plant diseases when the variable of phytopathological interest is categorical. User-friendly software to perform the tests is provided. Estimates of power and type I error, obtained with Monte Carlo simulations, showed the reliability of the MDT (power > 0.80; type I error < 0.05). A validation on pathogens causing root rot on conifers was successfully performed by verifying the consistency between the MDT responses and previously published data. An application of the MDT was carried out to analyze the relation between the plantation density and the distribution of the infection of Gnomoniopsis castanea, an emerging fungal pathogen causing nut rot on sweet chestnut. Trees carrying nuts infected by the pathogen were randomly distributed in areas with different plantation densities, suggesting that the distribution of G. castanea was not related to the plantation density. The MDT could be used to analyze the spatial distribution of plant diseases both in agricultural and natural ecosystems.
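
    A minimal sketch in the spirit of the MDT follows, testing whether the mean pairwise distance among diseased plants is smaller than expected under random labelling; the permutation scheme shuffles the categorical labels, and the clustered toy data are illustrative.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def mean_distance_test(coords, diseased, n_perm=9999, seed=0):
            rng = np.random.default_rng(seed)
            dmat = squareform(pdist(coords))
            labels = np.asarray(diseased, dtype=bool)

            def mean_dist(mask):
                sub = dmat[np.ix_(mask, mask)]
                return sub[np.triu_indices_from(sub, k=1)].mean()

            obs = mean_dist(labels)
            r = sum(mean_dist(rng.permutation(labels)) <= obs for _ in range(n_perm))
            return obs, (r + 1) / (n_perm + 1)  # small p: diseased plants are clustered

        rng = np.random.default_rng(5)
        coords = rng.uniform(0, 100, size=(60, 2))
        diseased = np.zeros(60, dtype=bool)
        diseased[np.argsort(coords[:, 0])[:12]] = True  # cluster the cases on one side
        print(mean_distance_test(coords, diseased))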

  3. Increasing power for voxel-wise genome-wide association studies: The random field theory, least square kernel machines and fast permutation procedures

    Science.gov (United States)

    Ge, Tian; Feng, Jianfeng; Hibar, Derrek P.; Thompson, Paul M.; Nichols, Thomas E.

    2013-01-01

    Imaging traits are thought to have more direct links to genetic variation than diagnostic measures based on cognitive or clinical assessments and provide a powerful substrate to examine the influence of genetics on human brains. Although imaging genetics has attracted growing attention and interest, most brain-wide genome-wide association studies focus on voxel-wise single-locus approaches, without taking advantage of the spatial information in images or combining the effect of multiple genetic variants. In this paper we present a fast implementation of voxel- and cluster-wise inferences based on the random field theory to fully use the spatial information in images. The approach is combined with a multi-locus model based on least square kernel machines to associate the joint effect of several single nucleotide polymorphisms (SNPs) with imaging traits. A fast permutation procedure is also proposed which significantly reduces the number of permutations needed relative to the standard empirical method and provides accurate small p-value estimates based on parametric tail approximation. We explored the relation between 448,294 single nucleotide polymorphisms and 18,043 genes in 31,662 voxels of the entire brain across 740 elderly subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Structural MRI scans were analyzed using tensor-based morphometry (TBM) to compute 3D maps of regional brain volume differences compared to an average template image based on healthy elderly subjects. We find the method to be more sensitive than voxel-wise single-locus approaches. A number of genes were identified as having significant associations with volumetric changes. The most associated gene was GRIN2B, which encodes the N-methyl-d-aspartate (NMDA) glutamate receptor NR2B subunit and affects both the parietal and temporal lobes in human brains. Its role in Alzheimer's disease has been widely acknowledged and studied, suggesting the validity of the approach.
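
    The paper's exact procedure is not reproduced here, but the general tail-approximation recipe can be sketched as follows: run comparatively few permutations, fit a generalized Pareto distribution (GPD) to the exceedances over a high threshold, and extrapolate p-values far below the 1/n permutation resolution. The threshold choice and the stand-in null below are illustrative.

        import numpy as np
        from scipy import stats

        def gpd_tail_pvalue(null_stats, observed, tail_fraction=0.1):
            null_stats = np.sort(null_stats)
            k = max(int(tail_fraction * null_stats.size), 30)
            threshold = null_stats[-k]
            exceedances = null_stats[-k:] - threshold
            shape, _, scale = stats.genpareto.fit(exceedances, floc=0)
            p_tail = stats.genpareto.sf(observed - threshold, shape, loc=0, scale=scale)
            # P(X > obs) = P(X > threshold) * P(X > obs | X > threshold)
            return (k / null_stats.size) * p_tail

        rng = np.random.default_rng(6)
        null = rng.standard_normal(2000)            # stand-in permutation distribution
        print(gpd_tail_pvalue(null, observed=4.5))  # far beyond the 1/2000 resolution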

  4. A fault diagnosis scheme for planetary gearboxes using adaptive multi-scale morphology filter and modified hierarchical permutation entropy

    Science.gov (United States)

    Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang

    2018-05-01

    The fault diagnosis of planetary gearboxes is crucial to reducing maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into a binary tree support vector machine (BT-SVM) to accomplish the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.

  5. A rewired green fluorescent protein: folding and function in a nonsequential, noncircular GFP permutant.

    Science.gov (United States)

    Reeder, Philippa J; Huang, Yao-Ming; Dordick, Jonathan S; Bystroff, Christopher

    2010-12-28

    The sequential order of secondary structural elements in proteins affects the folding and activity to an unknown extent. To test the dependence on sequential connectivity, we reconnected secondary structural elements by their solvent-exposed ends, permuting their sequential order, called "rewiring". This new protein design strategy changes the topology of the backbone without changing the core side chain packing arrangement. While circular and noncircular permutations have been observed in protein structures that are not related by sequence homology, to date no one has attempted to rationally design and construct a protein with a sequence that is noncircularly permuted while conserving three-dimensional structure. Herein, we show that green fluorescent protein can be rewired, still functionally fold, and exhibit wild-type fluorescence excitation and emission spectra.

  6. Energy models: methods and trends

    International Nuclear Information System (INIS)

    Reuter, A.; Kuehner, R.; Wohlgemuth, N.

    1996-01-01

    Energy, environmental, and economic systems do not allow for experimentation, since this would be dangerous, too expensive, or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling are pointed out. We distinguish three drivers of new developments: progress in computer technology, methodological progress, and novel tasks of energy system analysis and planning.

  7. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  8. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    The Model Correction Factor Method (MCFM) is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in the case of a limit state defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model ... surface than existing in the idealized model ...

  9. Method for mapping population-based case-control studies: an application using generalized additive models

    Directory of Open Access Journals (Sweden)

    Aschengrau Ann

    2006-06-01

    Full Text Available Abstract. Background: Mapping spatial distributions of disease occurrence and risk can serve as a useful tool for identifying exposures of public health concern. Disease registry data are often mapped by town or county of diagnosis and contain limited data on covariates. These maps often possess poor spatial resolution, the potential for spatial confounding, and the inability to consider latency. Population-based case-control studies can provide detailed information on residential history and covariates. Results: Generalized additive models (GAMs) provide a useful framework for mapping point-based epidemiologic data. Smoothing on location while controlling for covariates produces adjusted maps. We generate maps of odds ratios using the entire study area as a reference. We smooth using a locally weighted regression smoother (loess), a method that combines the advantages of nearest neighbor and kernel methods. We choose an optimal degree of smoothing by minimizing Akaike's Information Criterion. We use a deviance-based test to assess the overall importance of location in the model and pointwise permutation tests to locate regions of significantly increased or decreased risk. The method is illustrated with synthetic data and data from a population-based case-control study, using S-Plus and ArcView software. Conclusion: Our goal is to develop practical methods for mapping population-based case-control and cohort studies. The method described here performs well for our synthetic data, reproducing important features of the data and adequately controlling the covariate. When applied to the population-based case-control data set, the method suggests spatial confounding and identifies statistically significant areas of increased and decreased odds ratios.

  10. PLS-LS-SVM based modeling of ATR-IR as a robust method in detection and qualification of alprazolam.

    Science.gov (United States)

    Parhizkar, Elahehnaz; Ghazali, Mohammad; Ahmadi, Fatemeh; Sakhteman, Amirhossein

    2017-02-15

    According to the United States Pharmacopeia (USP), the gold-standard technique for alprazolam determination in dosage forms is HPLC, an expensive and time-consuming method that is not easy to approach. In this study, chemometrics-assisted ATR-IR was introduced as an alternative method that produces similar results while consuming less time and energy. Fifty-eight samples containing different concentrations of commercial alprazolam were evaluated by the HPLC and ATR-IR methods. A preprocessing approach was applied to convert the raw data obtained from ATR-IR spectra into a normal matrix. Finally, a relationship between alprazolam concentrations achieved by HPLC and ATR-IR data was established using PLS-LS-SVM (partial least squares least squares support vector machines). Consequently, the validity of the method was verified to yield a model with low error values (root mean square error of cross validation equal to 0.98). The model was able to predict about 99% of the samples according to the R² of the prediction set. A response permutation test was also applied to affirm that the model was not assessed by chance correlations. In conclusion, ATR-IR can be a reliable method in the manufacturing process for the detection and qualification of alprazolam content. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Modelling Method of Recursive Entity

    Science.gov (United States)

    Amal, Rifai; Messoussi, Rochdi

    2012-01-01

    With the development of Information and Communication Technologies, great masses of information are published on the Web. In order to reuse, share, and organise them in distance learning and e-learning frameworks, several research projects have been achieved and various standards and modelling languages developed. In our previous…

  12. Computational fitness landscape for all gene-order permutations of an RNA virus.

    Directory of Open Access Journals (Sweden)

    Kwang-il Lim

    2009-02-01

    Full Text Available How does the growth of a virus depend on the linear arrangement of genes in its genome? Answering this question may enhance our basic understanding of virus evolution and advance applications of viruses as live attenuated vaccines, gene-therapy vectors, or anti-tumor therapeutics. We used a mathematical model for vesicular stomatitis virus (VSV), a prototype RNA virus that encodes five genes (N-P-M-G-L), to simulate the intracellular growth of all 120 possible gene-order variants. Simulated yields of virus infection varied by 6,000-fold and were found to be most sensitive to gene-order permutations that increased levels of the L gene transcript or reduced levels of the N gene transcript, the lowest and highest expressed genes of the wild-type virus, respectively. Effects of gene order on virus growth also depended upon the host-cell environment, reflecting different resources for protein synthesis and different cell susceptibilities to infection. Moreover, by computationally deleting intergenic attenuations, which define a key mechanism of transcriptional regulation in VSV, the variation in growth associated with the 120 gene-order variants was drastically narrowed from 6,000- to 20-fold, and many variants produced higher progeny yields than wild-type. These results suggest that regulation by intergenic attenuation preceded or co-evolved with the fixation of the wild-type gene order in the evolution of VSV. In summary, our models have begun to reveal how gene functions, gene regulation, and genomic organization of viruses interact with their host environments to define processes of viral growth and evolution.
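
    The paper's kinetic model is far richer, but the core transcription-gradient logic can be sketched with a toy: if read-through at each gene junction succeeds with probability a, a gene in position i is transcribed in proportion to a**i, so permuting the N-P-M-G-L order reshapes relative transcript levels. The attenuation value and the crude N/L yield proxy below are illustrative assumptions, not the published model.

        from itertools import permutations

        ATTENUATION = 0.75   # illustrative per-junction read-through probability
        GENES = "NPMGL"      # wild-type VSV gene order

        def transcript_levels(order, a=ATTENUATION):
            # a gene in position i receives a**i of the transcription activity
            return {gene: a ** position for position, gene in enumerate(order)}

        def yield_proxy(order):
            # crude proxy: growth favours plenty of N and little L transcript
            levels = transcript_levels(order)
            return levels["N"] / levels["L"]

        ranked = sorted(permutations(GENES), key=yield_proxy, reverse=True)
        print("high-yield example:", "".join(ranked[0]))
        print("low-yield example: ", "".join(ranked[-1]))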

  13. All down the line: Permutation poetry in three South African journals ...

    African Journals Online (AJOL)

    In the late 1960s and early 1970s, the literary journals Wurm, Ophir and Izwi published a significant amount of formally experimental poetry by several local as well as a few European writers. This work included the specialised forms of procedural and permutation poetry, which were popular internationally during this time ...

  14. Permutation entropy based time series analysis: Equalities in the input signal can lead to false conclusions

    Energy Technology Data Exchange (ETDEWEB)

    Zunino, Luciano, E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata – CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina); Olivares, Felipe, E-mail: olivaresfe@gmail.com [Instituto de Física, Pontificia Universidad Católica de Valparaíso (PUCV), 23-40025 Valparaíso (Chile); Scholkmann, Felix, E-mail: Felix.Scholkmann@gmail.com [Research Office for Complex Physical and Biological Systems (ROCoS), Mutschellenstr. 179, 8038 Zurich (Switzerland); Biomedical Optics Research Laboratory, Department of Neonatology, University Hospital Zurich, University of Zurich, 8091 Zurich (Switzerland); Rosso, Osvaldo A., E-mail: oarosso@gmail.com [Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970, Maceió, Alagoas (Brazil); Instituto Tecnológico de Buenos Aires (ITBA) and CONICET, C1106ACD, Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires (Argentina); Complex Systems Group, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Av. Mons. Álvaro del Portillo 12.455, Las Condes, Santiago (Chile)

    2017-06-15

    A symbolic encoding scheme, based on the ordinal relation between the amplitude of neighboring values of a given data sequence, should be implemented before estimating the permutation entropy. Consequently, equalities in the analyzed signal, i.e. repeated equal values, deserve special attention and treatment. In this work, we carefully study the effect that the presence of equalities has on permutation entropy estimated values when these ties are symbolized, as is commonly done, according to their order of appearance. On the one hand, the analysis of computer-generated time series is initially developed to understand the incidence of repeated values on permutation entropy estimations in controlled scenarios. The presence of temporal correlations is erroneously concluded when true pseudorandom time series with low amplitude resolutions are considered. On the other hand, the analysis of real-world data is included to illustrate how the presence of a significant number of equal values can give rise to false conclusions regarding the underlying temporal structures in practical contexts. Highlights:
    • Impact of repeated values in a signal when estimating permutation entropy is studied.
    • Numerical and experimental tests are included for characterizing this limitation.
    • Non-negligible temporal correlations can be spuriously concluded by repeated values.
    • Data digitized with low amplitude resolutions could be especially affected.
    • Analysis with shuffled realizations can help to overcome this limitation.
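
    A minimal numerical check of this caveat, assuming the usual order-of-appearance tie breaking (a stable argsort): quantizing white noise to a few amplitude levels creates many ties and pushes the estimated PE well below its maximum, while comparison against a shuffled realization, as the highlights suggest, shows that the drop reflects ties rather than genuine temporal correlations.

        import numpy as np
        from math import factorial

        def permutation_entropy(x, order=4):
            counts = {}
            for i in range(len(x) - order + 1):
                # ties are broken by order of appearance (stable sort)
                pattern = tuple(np.argsort(x[i:i + order], kind="stable"))
                counts[pattern] = counts.get(pattern, 0) + 1
            p = np.array(list(counts.values()), dtype=float)
            p /= p.sum()
            return float(-(p * np.log(p)).sum() / np.log(factorial(order)))

        rng = np.random.default_rng(7)
        noise = rng.standard_normal(20000)
        coarse = np.round(noise)                  # low amplitude resolution: many ties
        print(permutation_entropy(noise))         # close to 1, as expected
        print(permutation_entropy(coarse))        # spuriously low: apparent "structure"
        print(permutation_entropy(rng.permutation(coarse)))  # similar: ties, not correlations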

  15. A novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism

    Science.gov (United States)

    Ye, Ruisong

    2011-10-01

    This paper proposes a novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism, in which permuting the positions of image pixels is combined with changing the gray values of image pixels to confuse the relationship between cipher-image and plain-image. In the permutation process, a generalized Arnold map is utilized to generate one chaotic orbit used to obtain two index order sequences for the permutation of image pixel positions; in the diffusion process, a generalized Arnold map and a generalized Bernoulli shift map are employed to yield two pseudo-random gray value sequences for a two-way diffusion of gray values. The yielded gray value sequences are not only sensitive to the control parameters and initial conditions of the considered chaotic maps, but also strongly depend on the plain-image processed, therefore the proposed scheme can resist statistical attack, differential attack, and known-plaintext as well as chosen-plaintext attacks. Experimental results with detailed analysis demonstrate that the proposed image encryption scheme also possesses a large key space to resist brute-force attack.
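
    The permutation stage can be sketched in isolation, assuming a generalized Arnold (cat) map acting on the pixel coordinates of an N x N image; the chaotic index sequences and the two-way gray-value diffusion of the actual scheme are omitted here, and the map parameters are illustrative.

        import numpy as np

        def arnold_permute(img, a=1, b=1, rounds=3):
            """Area-preserving map (x, y) -> (x + b*y, a*x + (a*b + 1)*y) mod n."""
            n = img.shape[0]
            out = img.copy()
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            for _ in range(rounds):
                nx = (x + b * y) % n
                ny = (a * x + (a * b + 1) * y) % n
                shuffled = np.empty_like(out)
                shuffled[nx, ny] = out[x, y]
                out = shuffled
            return out

        img = np.arange(64, dtype=np.uint8).reshape(8, 8)
        scrambled = arnold_permute(img, a=3, b=5)
        # a pure permutation: the multiset of pixel values is unchanged
        print(np.array_equal(np.sort(scrambled, axis=None), np.sort(img, axis=None)))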

  16. Overview of NonParametric Combination-based permutation tests for Multivariate multi-sample problems

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2014-09-01

    Full Text Available In this work we present a review of nonparametric combination-based permutation tests, along with SAS macros for dealing with two-sample and one-way MANOVA design problems within the NonParametric Combination methodology framework. Applications to real case studies are also presented.

  17. Developing a TQM quality management method model

    OpenAIRE

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This model describes the primary quality management methods which may be used to assess an organization's present strengths and weaknesses with regard to its use of quality management methods. This model ...

  18. Spectral methods applied to Ising models

    International Nuclear Information System (INIS)

    DeFacio, B.; Hammer, C.L.; Shrauner, J.E.

    1980-01-01

    Several applications of Ising models are reviewed. A 2-d Ising model is studied, and the problem of describing an interface boundary in a 2-d Ising model is addressed. Spectral methods are used to formulate a soluble model for the surface tension of a many-Fermion system

  19. Folding circular permutants of IL-1β: route selection driven by functional frustration.

    Directory of Open Access Journals (Sweden)

    Dominique T Capraro

    Full Text Available Interleukin-1β (IL-1β) is a cytokine crucial to the inflammatory and immune response. Two dominant routes are populated in the folding to the native structure. These distinct routes are a result of the competition between early packing of the functional loops versus closure of the β-barrel to achieve efficient folding, and have been observed both experimentally and computationally. Kinetic experiments on the WT protein established that the dominant route is characterized by early packing of geometrically frustrated functional loops. However, deletion of one of the functional loops, the β-bulge, switches the dominant route to an alternative, yet equally accessible, route, where the termini necessary for barrel closure form first. Here, we explore the effect of circular permutation of the WT sequence on the observed folding landscape with a combination of kinetic and thermodynamic experiments. Our experiments show that while the rate of formation of permutant protein is always slower than that observed for the WT sequence, the region of initial nucleation for all permutants is similar to that observed for the WT protein and occurs within a similar timescale. That is, even permutants with significant sequence rearrangement, in which the functional nucleus is placed at opposing ends of the polypeptide chain, fold by the dominant WT "functional loop-packing route", despite the entropic cost of having to fold the N- and C-termini early. Taken together, our results indicate that the early packing of the functional loops dominates the folding landscape in active proteins, and, despite the entropic penalty of coalescing the termini early, these proteins will populate an entropically unfavorable route in order to conserve function. More generally, circular permutation can elucidate the influence of local energetic stabilization of functional regions within a protein, where topological complexity creates a mismatch between energetics and topology in active proteins.

  20. A business case method for business models

    OpenAIRE

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model alternatives and choose the best one. In this article, we develop a business case method to objectively compare business models. It is an eight-step method, starting with business drivers and ending wit...

  1. Residual-based model diagnosis methods for mixture cure models.

    Science.gov (United States)

    Peng, Yingwei; Taylor, Jeremy M G

    2017-06-01

    Model diagnosis, an important issue in statistical modeling, has not yet been addressed adequately for cure models. We focus on mixture cure models in this work and propose some residual-based methods to examine the fit of the mixture cure model, particularly the fit of the latency part of the mixture cure model. The new methods extend the classical residual-based methods to the mixture cure model. Numerical work shows that the proposed methods are capable of detecting lack-of-fit of a mixture cure model, particularly in the latency part, such as outliers, improper covariate functional form, or nonproportionality in hazards if the proportional hazards assumption is employed in the latency part. The methods are illustrated with two real data sets that were previously analyzed with mixture cure models. © 2016, The International Biometric Society.

  2. Exploring Several Methods of Groundwater Model Selection

    Science.gov (United States)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with Model Muse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) ranking the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculating model probability using the GLUE method, (3) evaluating model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluating model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and a fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow model. These methods selected as the best model one of average complexity (10 parameters) with the best parameter estimation (model 3).

  3. Mechatronic Systems Design Methods, Models, Concepts

    CERN Document Server

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...

  4. Twitter's tweet method modelling and simulation

    Science.gov (United States)

    Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.

    2015-02-01

    This paper proposes the concept of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them. It uses the design science research methodology for the proof of concept of the models and modelling processes. The following models have been developed for a Twitter marketing agent/company and tested in real circumstances and with real numbers. These models were finalized through a number of revisions and iterations of the design, develop, simulate, test, and evaluate cycle. The paper also addresses the methods that best suit organized promotion through targeting on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are validated by the management of the company. It implements system dynamics concepts of Twitter marketing method modelling and produces models of various Twitter marketing situations. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.

  5. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where many opportunities exist to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  6. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes, as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  7. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  8. Level Crossing Methods in Stochastic Models

    CERN Document Server

    Brill, Percy H

    2008-01-01

    Since its inception in 1974, the level crossing approach for analyzing a large class of stochastic models has become increasingly popular among researchers. This volume traces the evolution of level crossing theory for obtaining probability distributions of state variables and demonstrates solution methods in a variety of stochastic models including: queues, inventories, dams, renewal models, counter models, pharmacokinetics, and the natural sciences. Results for both steady-state and transient distributions are given, and numerous examples help the reader apply the method to solve problems fa

  9. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
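
    The supported/unsupported distinction is easy to see on a toy biobjective matrix permutation problem: exhaustively score all permutations of a small matrix, extract the Pareto-efficient set, and mark which of its members a weighted-sum scan can ever recover (the supported ones). The two gradient-weighted objectives below are illustrative, not those of the cited example.

        import numpy as np
        from itertools import permutations

        rng = np.random.default_rng(8)
        n = 6
        A = rng.random((n, n)); A = (A + A.T) / 2
        B = rng.random((n, n)); B = (B + B.T) / 2
        grad = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # |i - j| weights

        def objectives(p):
            p = np.asarray(p)
            return (A[np.ix_(p, p)] * grad).sum(), (B[np.ix_(p, p)] * grad).sum()

        pts = np.array([objectives(p) for p in permutations(range(n))])
        # a point is Pareto-efficient if no other point dominates it (minimization)
        pareto = {i for i, f in enumerate(pts)
                  if not np.any(np.all(pts <= f, axis=1) & np.any(pts < f, axis=1))}
        supported = {int(np.argmin(w * pts[:, 0] + (1 - w) * pts[:, 1]))
                     for w in np.linspace(0, 1, 201)}
        print(len(pareto), "Pareto-efficient permutations,",
              len(supported & pareto), "found by the weighted-sum scan")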

  10. A permutation information theory tour through different interest rate maturities: the Libor case.

    Science.gov (United States)

    Bariviera, Aurelio Fernández; Guercio, María Belén; Martinez, Lisana B; Rosso, Osvaldo A

    2015-12-13

    This paper analyses Libor interest rates for seven different maturities and referred to operations in British pounds, euros, Swiss francs and Japanese yen, during the period 2001-2015. The analysis is performed by means of two quantifiers derived from information theory: the permutation Shannon entropy and the permutation Fisher information measure. An anomalous behaviour in the Libor is detected in all currencies except euros during the years 2006-2012. The stochastic switch is more severe in one, two and three months maturities. Given the special mechanism of Libor setting, we conjecture that the behaviour could have been produced by the manipulation that was uncovered by financial authorities. We argue that our methodology is pertinent as a market overseeing instrument. © 2015 The Author(s).

  11. Genuine three-partite entanglement in coherent states via permutation and parity symmetries

    Science.gov (United States)

    Behzadi, N.

    2013-01-01

    Building on the works of Spiridonov (Phys Rev A 52:1909, 1995) and Wang et al. (J Phys A Math Gen 33:7451, 2000), we propose an approach to obtain genuine three-partite entangled coherent states in which the permutation and parity symmetries play crucial roles. We exploit the permutation and parity symmetries to construct entanglement in the standard coherent states of a system composed of a three-mode bosonic field and three identical atoms. It is shown, by making use of entanglement witnesses (EW) based on GHZ states, that the reduced density matrices of the three-mode bosonic field and the three-atom subsystems, after encoding as three-qubit systems, are genuinely entangled in some range of their respective parameters.

  12. Stochastic spectral-spatial permutation ordering combination for nonlocal morphological processing

    OpenAIRE

    Lézoray, Olivier

    2017-01-01

    The extension of mathematical morphology to multivariate data has been an active research topic in recent years. In this paper we propose an approach that relies on the consensus combination of several stochastic permutation orderings. The latter are obtained by searching for a smooth shortest path on a graph representing an image. The construction of the graph can be based on both spatial and spectral information and naturally enables patch-based nonlocal processing.

  13. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...

  14. Revisiting the European sovereign bonds with a permutation-information-theory approach

    Science.gov (United States)

    Fernández Bariviera, Aurelio; Zunino, Luciano; Guercio, María Belén; Martinez, Lisana B.; Rosso, Osvaldo A.

    2013-12-01

    In this paper we study the evolution of informational efficiency in its weak form for seventeen European sovereign bond time series. We aim to assess the impact of two specific economic situations on the hypothetical random behavior of these time series: the establishment of a common currency and a wide and deep financial crisis. In order to evaluate the informational efficiency we use permutation quantifiers derived from information theory. Specifically, time series are ranked according to two metrics that measure the intrinsic structure of their correlations: permutation entropy and permutation statistical complexity. These measures provide the rectangular coordinates of the complexity-entropy causality plane; the planar location of a time series in this representation space reveals its degree of informational efficiency. According to our results, the currency union contributed to homogenizing the stochastic characteristics of the time series and produced synchronization in their random behavior. Additionally, the 2008 financial crisis uncovered differences within the apparently homogeneous European sovereign markets and revealed country-specific characteristics that were partially hidden during the monetary union heyday.
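
    For reference, here is a minimal sketch of the two quantifiers, assuming the usual Bandt-Pompe ordinal probabilities: the normalized permutation entropy H and the Jensen-Shannon statistical complexity C = Q_J * H, whose pair (H, C) locates a series on the complexity-entropy causality plane. The embedding order and test signals are illustrative.

        import numpy as np
        from itertools import permutations
        from math import factorial

        def ordinal_probs(x, order=4):
            lookup = {pat: k for k, pat in enumerate(permutations(range(order)))}
            counts = np.zeros(factorial(order))
            for i in range(len(x) - order + 1):
                counts[lookup[tuple(np.argsort(x[i:i + order]))]] += 1
            return counts / counts.sum()

        def shannon(p):
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        def entropy_complexity(x, order=4):
            p = ordinal_probs(x, order)
            N = len(p)
            H = shannon(p) / np.log(N)                       # normalized entropy
            uniform = np.full(N, 1.0 / N)
            js = shannon((p + uniform) / 2) - shannon(p) / 2 - shannon(uniform) / 2
            q0 = -2 / (((N + 1) / N) * np.log(N + 1) - 2 * np.log(2 * N) + np.log(N))
            return H, q0 * js * H                            # (H, C) plane coordinates

        rng = np.random.default_rng(9)
        print(entropy_complexity(rng.standard_normal(5000)))       # noise: H ~ 1, C ~ 0
        print(entropy_complexity(np.sin(np.arange(5000) * 0.05)))  # regular: low H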

  15. Probing the functional mechanism of Escherichia coli GroEL using circular permutation.

    Directory of Open Access Journals (Sweden)

    Tomohiro Mizobata

    Full Text Available BACKGROUND: The Escherichia coli chaperonin GroEL subunit consists of three domains linked via two hinge regions, and each domain is responsible for a specific role in the functional mechanism. Here, we have used circular permutation to study the structural and functional characteristics of the GroEL subunit. METHODOLOGY/PRINCIPAL FINDINGS: Three soluble, partially active mutants with polypeptide ends relocated into various positions of the apical domain of GroEL were isolated and studied. The basic functional hallmarks of GroEL (ATPase and chaperoning activities) were retained in all three mutants. Certain functional characteristics, such as basal ATPase activity and ATPase inhibition by the cochaperonin GroES, differed in the mutants while at the same time, the ability to facilitate the refolding of rhodanese was roughly equal. Stopped-flow fluorescence experiments using a fluorescent variant of the circularly permuted GroEL CP376 revealed that a specific kinetic transition that reflects movements of the apical domain was missing in this mutant. This mutant also displayed several characteristics that suggested that the apical domains were behaving in an uncoordinated fashion. CONCLUSIONS/SIGNIFICANCE: The loss of apical domain coordination and a concomitant decrease in functional ability highlights the importance of certain conformational signals that are relayed through domain interlinks in GroEL. We propose that circular permutation is a very versatile tool to probe chaperonin structure and function.
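
    At the sequence level, constructing a circular permutant is a simple operation, sketched below: the native termini are joined by a short linker and new termini are opened at a chosen position. The toy sequence, linker, and cut site are illustrative, not those of the GroEL mutants.

        def circular_permutant(seq, new_start, linker="GGSGG"):
            """Join native N/C termini via the linker, reopen the chain at new_start."""
            joined = seq + linker                 # conceptually circularize the chain
            return joined[new_start:] + joined[:new_start]

        wild_type = "MASKGEELFT"                  # toy sequence, not GroEL's
        print(circular_permutant(wild_type, new_start=4))  # GEELFTGGSGGMASK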

  16. Hippocampal activation during face-name associative memory encoding: blocked versus permuted design

    International Nuclear Information System (INIS)

    De Vogelaere, Frederick; Vingerhoets, Guy; Santens, Patrick; Boon, Paul; Achten, Erik

    2010-01-01

    The contribution of the hippocampal subregions to episodic memory through the formation of new associations between previously unrelated items such as faces and names is established but remains under discussion. Block design studies in this area of research generally tend to show posterior hippocampal activation during encoding of novel associational material while event-related studies emphasize anterior hippocampal involvement. We used functional magnetic resonance imaging to assess the involvement of anterior and posterior hippocampus in the encoding of novel associational material compared to the viewing of previously seen associational material. We used two different experimental designs, a block design and a permuted block design, and applied it to the same associative memory task to perform valid statistical comparisons. Our results indicate that the permuted design was able to capture more anterior hippocampal activation compared to the block design, which emphasized more posterior hippocampal involvement. These differences were further investigated and attributed to a combination of the polymodal stimuli we used and the experimental design. Activation patterns during encoding in both designs occurred along the entire longitudinal axis of the hippocampus, but with different centers of gravity. The maximal activated voxel in the block design was situated in the posterior half of the hippocampus while in the permuted design this was located in the anterior half. (orig.)

  17. Hippocampal activation during face-name associative memory encoding: blocked versus permuted design

    Energy Technology Data Exchange (ETDEWEB)

    De Vogelaere, Frederick; Vingerhoets, Guy [Ghent University, Laboratory for Neuropsychology, Department of Neurology, Ghent (Belgium); Santens, Patrick; Boon, Paul [Ghent University Hospital, Department of Neurology, Ghent (Belgium); Achten, Erik [Ghent University Hospital, Department of Radiology, Ghent (Belgium)

    2010-01-15

    The contribution of the hippocampal subregions to episodic memory through the formation of new associations between previously unrelated items such as faces and names is established but remains under discussion. Block design studies in this area of research generally tend to show posterior hippocampal activation during encoding of novel associational material while event-related studies emphasize anterior hippocampal involvement. We used functional magnetic resonance imaging to assess the involvement of anterior and posterior hippocampus in the encoding of novel associational material compared to the viewing of previously seen associational material. We used two different experimental designs, a block design and a permuted block design, and applied it to the same associative memory task to perform valid statistical comparisons. Our results indicate that the permuted design was able to capture more anterior hippocampal activation compared to the block design, which emphasized more posterior hippocampal involvement. These differences were further investigated and attributed to a combination of the polymodal stimuli we used and the experimental design. Activation patterns during encoding in both designs occurred along the entire longitudinal axis of the hippocampus, but with different centers of gravity. The maximal activated voxel in the block design was situated in the posterior half of the hippocampus while in the permuted design this was located in the anterior half. (orig.)

  18. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  19. The housing market: modeling and assessment methods

    Directory of Open Access Journals (Sweden)

    Zapadnjuk Evgenij Aleksandrovich

    2016-10-01

    Full Text Available This paper analyzes the theoretical foundations of an econometric simulation model that can be used to study the housing sector. It shows methods for the practical use of correlation and regression models in the analysis of the status and prospects of development of the housing market.

  20. A Re-wired Green Fluorescent Protein: Folding and Function in a Non-sequential, Non-circular GFP Permutant

    OpenAIRE

    Reeder, Philippa J.; Huang, Yao-Ming; Dordick, Jonathan S.; Bystroff, Christopher

    2010-01-01

    The sequential order of secondary structural elements in proteins affects the folding and activity to an unknown extent. To test the dependence on sequential connectivity, secondary structural elements were reconnected by their solvent-exposed ends, permuting their sequential order, called “re-wiring.” This new protein design strategy changes the topology of the backbone without changing the core sidechain packing arrangement. While circular and non-circular permutations have been observed in...

  1. Sampling solution traces for the problem of sorting permutations by signed reversals

    Directory of Open Access Journals (Sweden)

    Baudet Christian

    2012-06-01

    Full Text Available Abstract Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we therefore can represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of big permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is however conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT consists in a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and

  2. Measurement error models, methods, and applications

    CERN Document Server

    Buonaccorsi, John P

    2010-01-01

    Over the last 20 years, comprehensive strategies for treating measurement error in complex models and accounting for the use of extra data to estimate measurement error parameters have emerged. Focusing on both established and novel approaches, "Measurement Error: Models, Methods, and Applications" provides an overview of the main techniques and illustrates their application in various models. It describes the impacts of measurement errors on naive analyses that ignore them and presents ways to correct for them across a variety of statistical models, from simple one-sample problems to regres

  3. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier in a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. Nor is there any generalized improvement method that can increase the accuracy of an arbitrary classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity.

  4. Modelling methods for milk intake measurements

    International Nuclear Information System (INIS)

    Coward, W.A.

    1999-01-01

    One component of the first Research Coordination Programme was a tutorial session on modelling in in-vivo tracer kinetic methods. This section describes the principles involved and how these can be translated into spreadsheets using Microsoft Excel and the SOLVER function to fit the model to the data. The purpose of this section is to describe the system developed within the RCM and how it is used.
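
    As an illustration of the kind of model fitting such spreadsheets perform, the sketch below fits a mono-exponential tracer elimination curve by least squares in Python rather than Excel/SOLVER. The model form, data values, and parameter names are illustrative assumptions, not the RCM system itself.

        # Hedged analogue of a spreadsheet/SOLVER tracer fit: a
        # one-compartment mono-exponential elimination model. The
        # data and model form here are assumptions for demonstration.
        import numpy as np
        from scipy.optimize import curve_fit

        def tracer_model(t, c0, k):
            """Enrichment at time t for a one-compartment model."""
            return c0 * np.exp(-k * t)

        t = np.array([0.5, 1, 2, 4, 7, 10, 14])     # days post-dose
        y = np.array([95, 90, 82, 68, 50, 38, 25])  # enrichment (ppm)

        (c0, k), _ = curve_fit(tracer_model, t, y, p0=(100.0, 0.1))
        print(f"c0 = {c0:.1f} ppm, rate constant k = {k:.3f} per day")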

  5. Modelling asteroid brightness variations. I - Numerical methods

    Science.gov (United States)

    Karttunen, H.

    1989-01-01

    A method for generating lightcurves of asteroid models is presented. The effects of the shape of the asteroid and the scattering law of a surface element are distinctly separable, being described by chosen functions that can easily be changed. The shape is specified by means of two functions that yield the length of the radius vector and the normal vector of the surface at a given point. The general shape must be convex, but spherical concavities producing macroscopic shadowing can also be modeled.

  6. Modeling Storm Surges Using Discontinuous Galerkin Methods

    Science.gov (United States)

    2016-06-01

    Approved for public release; distribution is unlimited. Storm surges have a ... model. One of the governing systems of equations used to model storm surges' effects is the Shallow Water Equations (SWE). In this thesis, we solve the ... fundamental truth, we found the error norm of the implicit method to be minimal. This study focuses on the impacts of a simulated storm surge in La Push

  7. Models and Methods for Free Material Optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot

    conditions for physical attainability, in the sense that it has to be symmetric and positive semidefinite. FMO problems have been studied for the last two decades in many articles that led to the development of a wide range of models, methods, and theories. As the design variables in FMO are the local...... programs. The method has successfully obtained solutions to large-scale classical FMO problems of simultaneous analysis and design, nested and dual formulations. The second goal is to extend the method and the FMO problem formulations to general laminated shell structures. The thesis additionally addresses......

  8. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
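
    A minimal sketch of the ARR idea the record builds on: residuals between redundant measurements should be near zero when sensors are healthy, and the pattern of violated relations lets faulty sensors be logically inferred. The sensor names, tolerance, and voting rule below are hypothetical illustrations, not NASA's algorithm.

        # Toy analytical-redundancy check for sensors that redundantly
        # measure the same quantity: the pairwise ARRs r_ij = y_i - y_j
        # should be ~0 for healthy sensors; a sensor inconsistent with
        # every other one is logically inferable as faulty.
        from itertools import combinations

        THRESHOLD = 0.5  # assumed consistency tolerance

        def isolate_faulty(readings):
            names = list(readings)
            disagree = {n: 0 for n in names}
            for a, b in combinations(names, 2):
                if abs(readings[a] - readings[b]) > THRESHOLD:
                    disagree[a] += 1
                    disagree[b] += 1
            return [n for n in names if disagree[n] == len(names) - 1]

        print(isolate_faulty({"s1": 10.1, "s2": 10.0, "s3": 12.7}))  # ['s3']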

  9. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    The geomagnetic field varies on a variety of time- and length scales, which are only rudimentarily considered in most present field models. The part of the observed field that cannot be explained by a given model, the model residuals, is often considered as an estimate of the data uncertainty (which consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based on 5 years of Ørsted and CHAMP data, and includes secular variation and acceleration, as well as low-degree external (magnetospheric) and induced fields. The analysis is done in order to find the statistical behaviour of the space-time structure of the residuals, as a proxy for the data covariances...

  10. Developing a TQM quality management method model

    NARCIS (Netherlands)

    Zhang, Zhihai

    1997-01-01

    From an extensive review of total quality management literature, the external and internal environment affecting an organization's quality performance and the eleven primary elements of TQM are identified. Based on the primary TQM elements, a TQM quality management method model is developed. This

  11. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2011-01-01

    Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on ...

  12. Railway Track Allocation: Models and Methods

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    Efficiently coordinating the movement of trains on a railway network is a central part of the planning process for a railway company. This paper reviews models and methods that have been proposed in the literature to assist planners in finding train routes. Since the problem of routing trains on a rai...

  13. Structure-based Design of Cyclically Permuted HIV-1 gp120 Trimers That Elicit Neutralizing Antibodies.

    Science.gov (United States)

    Kesavardhana, Sannula; Das, Raksha; Citron, Michael; Datta, Rohini; Ecto, Linda; Srilatha, Nonavinakere Seetharam; DiStefano, Daniel; Swoyer, Ryan; Joyce, Joseph G; Dutta, Somnath; LaBranche, Celia C; Montefiori, David C; Flynn, Jessica A; Varadarajan, Raghavan

    2017-01-06

    A major goal for HIV-1 vaccine development is an ability to elicit strong and durable broadly neutralizing antibody (bNAb) responses. The trimeric envelope glycoprotein (Env) spikes on HIV-1 are known to contain multiple epitopes that are susceptible to bNAbs isolated from infected individuals. Nonetheless, all trimeric and monomeric Env immunogens designed to date have failed to elicit such antibodies. We report the structure-guided design of HIV-1 cyclically permuted gp120 that forms homogeneous, stable trimers, and displays enhanced binding to multiple bNAbs, including VRC01, VRC03, VRC-PG04, PGT128, and the quaternary epitope-specific bNAbs PGT145 and PGDM1400. Constructs that were cyclically permuted in the V1 loop region and contained an N-terminal trimerization domain to stabilize V1V2-mediated quaternary interactions, showed the highest homogeneity and the best antigenic characteristics. In guinea pigs, a DNA prime-protein boost regimen with these new gp120 trimer immunogens elicited potent neutralizing antibody responses against highly sensitive Tier 1A isolates and weaker neutralizing antibody responses with an average titer of about 115 against a panel of heterologous Tier 2 isolates. A modest fraction of the Tier 2 virus neutralizing activity appeared to target the CD4 binding site on gp120. These results suggest that cyclically permuted HIV-1 gp120 trimers represent a viable platform in which further modifications may be made to eventually achieve protective bNAb responses.

  14. Remark on Hopf images in quantum permutation groups $S_n^+$

    OpenAIRE

    Józiak, Paweł

    2016-01-01

    Motivated by a question of A. Skalski and P. M. Sołtan about the inner faithfulness of S. Curran's map, we revisit the results and techniques of T. Banica and J. Bichon's Crelle paper and study some group-theoretic properties of the quantum permutation group on $4$ points. This enables us not only to answer the aforementioned question in the positive in the case $n=4, k=2$, but also to classify the automorphisms of $S_4^+$, describe all the embeddings $O_{-1}(2)\subset S_4^+$ and show that all the ...

  15. A PSO-based hybrid metaheuristic for permutation flowshop scheduling problems.

    Science.gov (United States)

    Zhang, Le; Wu, Jinnan

    2014-01-01

    This paper investigates the permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime, and proposes a hybrid metaheuristic based on particle swarm optimization (PSO). To enhance the exploration ability of the hybrid metaheuristic, simulated annealing hybridized with a stochastic variable neighborhood search is incorporated. To improve the search diversification of the hybrid metaheuristic, a solution replacement strategy based on path relinking is presented to replace particles that have been trapped in a local optimum. Computational results on benchmark instances show that the proposed PSO-based hybrid metaheuristic is competitive with other powerful metaheuristics in the literature.
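
    For readers unfamiliar with the objective such metaheuristics minimize, the sketch below evaluates the makespan of a job permutation in a flowshop; the processing-time data are made up, and this is not the paper's PSO algorithm itself.

        # Makespan of a permutation flowshop schedule: job j's operation
        # on machine m can start only after the job finishes on machine
        # m-1 and machine m finishes its previous job.
        def makespan(perm, p):
            n_machines = len(p[0])
            finish = [0.0] * n_machines   # last completion time per machine
            for job in perm:
                for m in range(n_machines):
                    prev = finish[m - 1] if m > 0 else 0.0
                    finish[m] = max(finish[m], prev) + p[job][m]
            return finish[-1]

        p = [[3, 2, 4], [1, 4, 2], [2, 3, 1]]  # 3 jobs x 3 machines (assumed)
        print(makespan([0, 1, 2], p), makespan([1, 0, 2], p))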

  16. Acceleration methods and models in Sn calculations

    International Nuclear Information System (INIS)

    Sbaffoni, M.M.; Abbate, M.J.

    1984-01-01

    In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe certain peculiarities such as the generation of negative fluxes, slow and unreliable convergence, and solution instabilities. The models commonly used for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed with regard to their use in problems characterized by a strong upscattering effect. Some conclusions derived from this analysis are presented, as well as a new method to perform the upscattering scaling that solves the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider applicability. (Author) [es

  17. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    Science.gov (United States)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to the R'G'B' color space by rotating the color cube with a random angle matrix. The RPMPFRFT is then employed to change the pixel values of the color image; the three components of the scrambled RGB color space are converted by the RPMPFRFT with three different transform pairs, respectively. Compared with transforms producing complex output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposing sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of sections determined by two coupled chaotic logistic maps. The parameters of the Color Blend, Chaos Permutation, and RPMPFRFT transform serve as the key of the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by transforming them into the three RGB color components of a specially constructed color image. Numerical simulations demonstrate that the proposed algorithm is feasible, secure, sensitive to keys, and robust to noise attack and data loss.

  18. Randomized Ensemble Methods for Classification Trees

    National Research Council Canada - National Science Library

    Kobayashi, Izumi

    2002-01-01

    ... proportional to the measure of goodness for a split. We combine this method with a stopping rule which uses permutation of the outputs. The other method perturbs the output and constructs a classifier...

  19. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems, using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data, and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models.

  20. Mathematical methods and models in composites

    CERN Document Server

    Mantic, Vladislav

    2014-01-01

    This book provides a representative selection of the most relevant, innovative, and useful mathematical methods and models applied to the analysis and characterization of composites and their behaviour on micro-, meso-, and macroscale. It establishes the fundamentals for meaningful and accurate theoretical and computer modelling of these materials in the future. Although the book is primarily concerned with fibre-reinforced composites, which have ever-increasing applications in fields such as aerospace, many of the results presented can be applied to other kinds of composites. The topics cover

  1. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

    Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. Then, a design process model of ISO is put forward in which each design sub-process model is discussed. Finally, the design methods of ISO are presented

  2. Mathematical Models and Methods for Living Systems

    CERN Document Server

    Chaplain, Mark; Pugliese, Andrea

    2016-01-01

    The aim of these lecture notes is to give an introduction to several mathematical models and methods that can be used to describe the behaviour of living systems. This emerging field of application intrinsically requires the handling of phenomena occurring at different spatial scales and hence the use of multiscale methods. Modelling and simulating the mechanisms that cells use to move, self-organise and develop in tissues is not only fundamental to an understanding of embryonic development, but is also relevant in tissue engineering and in other environmental and industrial processes involving the growth and homeostasis of biological systems. Growth and organization processes are also important in many tissue degeneration and regeneration processes, such as tumour growth, tissue vascularization, heart and muscle functionality, and cardio-vascular diseases.

  3. Random walk generated by random permutations of {1, 2, 3, ..., n + 1}

    International Nuclear Information System (INIS)

    Oshanin, G; Voituriez, R

    2004-01-01

    We study properties of a non-Markovian random walk X_l^{(n)}, l = 0, 1, 2, ..., n, evolving in discrete time l on a one-dimensional lattice of integers, whose moves to the right or to the left are prescribed by the rise-and-descent sequences characterizing random permutations π of [n + 1] = {1, 2, 3, ..., n + 1}. We determine exactly the probability of finding the end-point X_n = X_n^{(n)} of the trajectory of such a permutation-generated random walk (PGRW) at site X, and show that in the limit n → ∞ it converges to a normal distribution with a smaller diffusion coefficient than that of the conventional Pólya random walk. We formulate, as well, an auxiliary stochastic process whose distribution is identical to the distribution of the intermediate points X_l^{(n)}, l < n, which enables us to obtain the probability measure of different excursions and to define the asymptotic distribution of the number of 'turns' of the PGRW trajectories.
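
    The walk is simple to simulate: draw a random permutation and step up on each rise, down on each descent. The sketch below, with assumed sample sizes, estimates the endpoint variance per step, which should come out below the value of 1 that the conventional simple random walk would give (the specific diffusion coefficient reported in the paper is not assumed here).

        # Permutation-generated random walk: steps follow the
        # rise-and-descent sequence of a random permutation of {1,...,n+1}.
        import random

        def pgrw_endpoint(n):
            pi = list(range(1, n + 2))
            random.shuffle(pi)
            x = 0
            for l in range(n):
                x += 1 if pi[l + 1] > pi[l] else -1
            return x

        samples = [pgrw_endpoint(1000) for _ in range(2000)]
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        print(mean, var / 1000)   # variance per step; simple walk gives 1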

  4. Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.

    Science.gov (United States)

    Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang

    2015-01-01

    Most epileptic EEG classification algorithms are supervised and require large training datasets, which hinders their use in real-time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm in efficiency. In addition, three classifiers, the K-means, MSK-means and support vector machine (SVM), are used to identify seizures and localize the epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizures with the MSK-means algorithm and delay permutation entropy achieves 4.7% higher accuracy than the K-means, and 0.7% higher accuracy than the SVM.

  5. Chaotic Image Encryption Algorithm Based on Bit Permutation and Dynamic DNA Encoding

    Directory of Open Access Journals (Sweden)

    Xuncai Zhang

    2017-01-01

    Full Text Available Exploiting the sensitivity of chaos to initial conditions and its pseudorandomness, combined with the inherent and unique information-processing ability of the DNA molecule's spatial configurations, a novel image encryption algorithm based on bit permutation and dynamic DNA encoding is proposed here. The algorithm first uses Keccak to calculate the hash value of a given DNA sequence as the initial value of a chaotic map; second, it uses a chaotic sequence to scramble the image pixel locations, and a butterfly network is used to implement the bit permutation. Then, the image is dynamically coded into a DNA matrix, and an algebraic operation is performed with the DNA sequence to realize the substitution of the pixels, which further improves the security of the encryption. Finally, the confusion and diffusion properties of the algorithm are further enhanced by operations on the DNA sequence and ciphertext feedback. The results of the experiment and security analysis show that the algorithm not only has a large key space and strong sensitivity to the key but can also effectively resist attack operations such as statistical analysis and exhaustive analysis.
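
    The chaos-permutation building block of such schemes can be sketched compactly: a logistic-map trajectory is ranked to produce a position permutation, and the inverse permutation restores the data. The parameters and data below are illustrative; the Keccak hashing, DNA coding, and butterfly network of the full algorithm are omitted.

        # Logistic-map-driven permutation: chaotic values rank positions,
        # argsort turns the ranking into a scrambling permutation.
        import numpy as np

        def chaotic_permutation(length, x0=0.3141, r=3.9999):
            x = np.empty(length)
            for i in range(length):
                x0 = r * x0 * (1.0 - x0)   # logistic map iteration
                x[i] = x0
            return np.argsort(x)           # positions ordered by chaos

        data = np.arange(16, dtype=np.uint8)   # stand-in for pixels/bits
        perm = chaotic_permutation(data.size)
        scrambled = data[perm]
        restored = np.empty_like(data)
        restored[perm] = scrambled             # inverse permutation
        assert np.array_equal(restored, data)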

  6. Multiscale permutation entropy analysis of laser beam wandering in isotropic turbulence

    Science.gov (United States)

    Olivares, Felipe; Zunino, Luciano; Gulich, Damián; Pérez, Darío G.; Rosso, Osvaldo A.

    2017-10-01

    We have experimentally quantified the temporal structural diversity from the coordinate fluctuations of a laser beam propagating through isotropic optical turbulence. The main focus here is on the characterization of the long-range correlations in the wandering of a thin Gaussian laser beam over a screen after propagating through a turbulent medium. To fulfill this goal, a laboratory-controlled experiment was conducted in which coordinate fluctuations of the laser beam were recorded at a sufficiently high sampling rate for a wide range of turbulent conditions. Horizontal and vertical displacements of the laser beam centroid were subsequently analyzed by implementing the symbolic technique based on ordinal patterns to estimate the well-known permutation entropy. We show that the permutation entropy estimations at multiple time scales evidence an interplay between different dynamical behaviors. More specifically, a crossover between two different scaling regimes is observed. We confirm a transition from an integrated stochastic process contaminated with electronic noise to a fractional Brownian motion with a Hurst exponent H = 5/6 as the sampling time increases. Besides, we are able to quantify, from the estimated entropy, the amount of electronic noise as a function of the turbulence strength. We have also demonstrated that these experimental observations are in very good agreement with numerical simulations of noisy fractional Brownian motions with a well-defined crossover between two different scaling regimes.
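
    A compact sketch of the multiscale permutation entropy estimation used in such analyses: coarse-grain the series at scale s, count ordinal patterns of order d, and normalize the Shannon entropy of the pattern distribution by log d!. The normalization and the check on Gaussian noise are common conventions assumed here, not details taken from the paper.

        # Multiscale permutation entropy: coarse-grain, map windows of
        # length d to ordinal patterns, take normalized Shannon entropy.
        import math
        import random
        from collections import Counter

        def permutation_entropy(x, d=3, s=1):
            # non-overlapping averages of length s (coarse-graining)
            cg = [sum(x[i:i + s]) / s for i in range(0, len(x) - s + 1, s)]
            patterns = Counter(
                tuple(sorted(range(d), key=lambda k: cg[i + k]))
                for i in range(len(cg) - d + 1)
            )
            total = sum(patterns.values())
            h = -sum((c / total) * math.log(c / total)
                     for c in patterns.values())
            return h / math.log(math.factorial(d))   # normalized to [0, 1]

        noise = [random.gauss(0, 1) for _ in range(5000)]
        # white noise stays near 1 at all scales
        print([round(permutation_entropy(noise, d=3, s=s), 3)
               for s in (1, 2, 4)])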

  7. A simplified formalism of the algebra of partially transposed permutation operators with applications

    Science.gov (United States)

    Mozrzymas, Marek; Studziński, Michał; Horodecki, Michał

    2018-03-01

    Herein we continue the study of the representation theory of the algebra of permutation operators acting on the n -fold tensor product space, partially transposed on the last subsystem. We develop the concept of partially reduced irreducible representations, which allows us to significantly simplify previously proved theorems and, most importantly, derive new results for irreducible representations of the mentioned algebra. In our analysis we are able to reduce the complexity of the central expressions by getting rid of sums over all permutations from the symmetric group, obtaining equations which are much more handy in practical applications. We also find relatively simple matrix representations for the generators of the underlying algebra. The obtained simplifications and developments are applied to derive the characteristics of a deterministic port-based teleportation scheme written purely in terms of irreducible representations of the studied algebra. We solve an eigenproblem for the generators of the algebra, which is the first step towards a hybrid port-based teleportation scheme and gives us new proofs of the asymptotic behaviour of teleportation fidelity. We also show a connection between the density operator characterising port-based teleportation and a particular matrix composed of an irreducible representation of the symmetric group, which encodes properties of the investigated algebra.

  8. The Schwarzschild Method for Building Galaxy Models

    Science.gov (United States)

    de Zeeuw, P. T.

    1998-09-01

    Martin Schwarzschild is most widely known as one of the towering figures of the theory of stellar evolution. However, from the early fifties onward he displayed a strong interest in dynamical astronomy, and in particular in its application to the structure of star clusters and galaxies. This resulted in a string of remarkable investigations, including the discovery of what became known as the Spitzer-Schwarzschild mechanism, the invention of the strip count method for mass determinations, the demonstration of the existence of dark matter on large scales, and the study of the nucleus of M31, based on his own Stratoscope II balloon observations. With his retirement approaching he decided to leave the field of stellar evolution, to make his life-long hobby of stellar dynamics a full-time occupation, and to tackle the problem of self-consistent equilibria for elliptical galaxies, which by then were suspected to have a triaxial shape. Rather than following classical methods, which had trouble already in dealing with axisymmetric systems, he invented a simple numerical technique, which seeks to populate individual stellar orbits in the galaxy potential so as to reproduce the associated model density. This is now known as Schwarzschild's method. He showed by numerical calculation that most stellar orbits in a triaxial potential relevant for elliptical galaxies have two effective integrals of motion in addition to the classical energy integral, and then constructed the first ever self-consistent equilibrium model for a realistic triaxial galaxy. This provided a very strong stimulus to research in the dynamics of flattened galaxies. This talk will review how Schwarzschild's Method is used today, in problems ranging from the existence of equilibrium models as a function of shape, central cusp slope, tumbling rate, and presence of a central point mass, to modeling of individual galaxies to find stellar dynamical evidence for dark matter in extended halos, and/or massive
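
    The orbit-superposition step at the heart of Schwarzschild's method reduces to a non-negative least-squares problem: find orbit weights w >= 0 such that the orbit library's cell occupancies reproduce the model density. The sketch below uses random stand-in data for the orbit library rather than orbits integrated in a real potential.

        # Orbit superposition as non-negative least squares: column j of A
        # holds orbit j's time-averaged mass fraction in each spatial cell,
        # and we seek weights w >= 0 with A @ w ~ rho.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        n_cells, n_orbits = 50, 200
        A = rng.random((n_cells, n_orbits))   # stand-in orbit library
        rho = rng.random(n_cells)             # target density per cell

        w, residual = nnls(A, rho)            # least squares with w >= 0
        print(f"orbits used: {(w > 0).sum()}, residual: {residual:.3e}")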

  9. An integrated modeling method for wind turbines

    Science.gov (United States)

    Fadaeinedjad, Roohollah

    Simulink environment to study the flicker contribution of the wind turbine in the wind-diesel system. By using a new wind power plant representation method, a large wind farm (consisting of 96 fixed speed wind turbines) is modelled to study the power quality of wind power system. The flicker contribution of wind farm is also studied with different wind turbine numbers, using the flickermeter model. Keywords. Simulink, FAST, TurbSim, AreoDyn, wind energy, doubly-fed induction generator, variable speed wind turbine, voltage sag, tower vibration, power quality, flicker, fixed speed wind turbine, wind shear, tower shadow, and yaw error.

  10. Gene set analysis: limitations in popular existing methods and proposed improvements.

    Science.gov (United States)

    Mishra, Pashupati; Törönen, Petri; Leino, Yrjö; Holm, Liisa

    2014-10-01

    Gene set analysis is the analysis of a set of genes that collectively contribute to a biological process. Most popular gene set analysis methods are based on empirical P-values, which require a large number of permutations. Despite the numerous gene set analysis methods developed in the past decade, the most popular methods still suffer from serious limitations. We present a gene set analysis method (mGSZ) based on the Gene Set Z-scoring function (GSZ) and asymptotic P-values. Asymptotic P-value calculation requires fewer permutations and thus speeds up the gene set analysis process. We compare the GSZ scoring function with seven popular gene set scoring functions and show that GSZ stands out as the best scoring function. In addition, we show improved performance of the GSA method when the max-mean statistic is replaced by the GSZ scoring function. We demonstrate the importance of both gene and sample permutations by showing the consequences in the absence of one or the other. A comparison of asymptotic and empirical methods of P-value estimation demonstrates a clear advantage of asymptotic P-values over empirical P-values. We show that mGSZ outperforms the state-of-the-art methods based on two different evaluations. We compared mGSZ results with permutation and rotation tests and show that rotation does not improve our asymptotic P-values. We also propose well-known asymptotic distribution models for three of the compared methods. mGSZ is available as an R package from cran.r-project.org.
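
    The empirical-versus-asymptotic trade-off discussed above can be seen in a toy two-group example: an empirical P-value needs many label permutations, while a parametric null fitted to the permutation distribution extrapolates from fewer. The normal null used below is an assumption for illustration; it is not the mGSZ statistic.

        # Empirical vs. asymptotic P-values for a mean-difference statistic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        a, b = rng.normal(0.4, 1, 30), rng.normal(0.0, 1, 30)
        obs = a.mean() - b.mean()
        pooled = np.concatenate([a, b])

        null = np.array([
            (lambda z: z[:30].mean() - z[30:].mean())(rng.permutation(pooled))
            for _ in range(1000)
        ])
        p_emp = (np.sum(np.abs(null) >= abs(obs)) + 1) / (len(null) + 1)
        # assumed normal approximation fitted to the permutation null
        p_asym = 2 * stats.norm.sf(abs(obs), loc=null.mean(), scale=null.std())
        print(f"empirical p = {p_emp:.4f}, asymptotic p = {p_asym:.4f}")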

  11. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  12. ACTIVE AND PARTICIPATORY METHODS IN BIOLOGY: MODELING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Antonela SBÎRCEA

    2011-01-01

    Full Text Available By using active and participatory methods it is hoped that pupils will not only come to a deeper understanding of the issues involved, but also that their motivation will be heightened. Pupil involvement in their learning is essential. Moreover, by using a variety of teaching techniques, we can help students make sense of the world in different ways, increasing the likelihood that they will develop a conceptual understanding. The teacher must be a good facilitator, monitoring and supporting group dynamics. Modeling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and pupils learn by observing. In the teaching of biology the didactic materials are fundamental tools in the teaching-learning process. Reading about scientific concepts or having a teacher explain them is not enough. Research has shown that modeling can be used across disciplines and in all grade and ability level classrooms. Using this type of instruction, teachers encourage learning.

  13. Surface physics theoretical models and experimental methods

    CERN Document Server

    Mamonova, Marina V; Prudnikova, I A

    2016-01-01

    The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids-termed adhesion-depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

  14. Wind turbine noise modeling : a comparison of modeling methods

    International Nuclear Information System (INIS)

    Wang, L.; Strasser, A.

    2009-01-01

    All wind turbine arrays must undergo a noise impact assessment. DataKustik GmbH developed the Computer Aided Noise Abatement (Cadna/A) modeling software for calculating noise propagation to meet accepted protocols and international standards such as CONCAWE and ISO 9613. The developer of Cadna/A recommended the following 3 models for simulating wind turbine noise: a disk of point sources; a ring of point sources located at the tip of each blade; and a point source located at the top of the wind turbine tower hub. This paper presented an analytical comparison of the 3 models used for a typical wind turbine with a hub tower containing 3 propeller blades, a drive-train and top-mounted generator, as well as a representative wind farm, using Cadna/A. AUC, ISO and IEC criteria requirements for the meteorological input with Cadna/A for wind farm noise were also discussed. The noise prediction modelling approach was as follows: the simplest model, positioning a single point source at the top of the hub, can be used to predict sound levels for a typical wind turbine if receptors are located 250 m from the hub; A-weighted sound power levels of a wind turbine at cut-in and cut-off wind speed should be used in the models; 20 by 20 or 50 by 50 meter terrain parameters are suitable for large wind farm modeling; and ISO 9613-2 methods are recommended to predict wind farm noise with various meteorological inputs based on local conditions. The study showed that the predicted sound level differences of the 3 wind turbine models using Cadna/A are less than 0.2 dB at receptors located greater than 250 m from the wind turbine hub, which falls within the accuracy range of the calculation method. All 3 models of wind turbine noise meet ISO 9613-2 standards for noise prediction using Cadna/A. However, the single point source model was found to be the most efficient in terms of modeling run-time among the 3 models. 7 refs., 3 tabs., 15 figs.

  15. Statistical Models and Methods for Lifetime Data

    CERN Document Server

    Lawless, Jerald F

    2011-01-01

    Praise for the First Edition: "An indispensable addition to any serious collection on lifetime data analysis and ... a valuable contribution to the statistical literature. Highly recommended ..." (Choice); "This is an important book, which will appeal to statisticians working on survival analysis problems." (Biometrics); "A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data ... this is a highly competent and agreeable statistical textbook." (Statistics in Medicine). The statistical analysis of lifetime or response time data is a key tool in engineering,

  16. Mechanics, Models and Methods in Civil Engineering

    CERN Document Server

    Maceri, Franco

    2012-01-01

    „Mechanics, Models and Methods in Civil Engineering” collects leading papers dealing with current Civil Engineering problems. The approach is in line with the Italian-French school and therefore deeply couples mechanics and mathematics, creating new predictive theories, enhancing clarity in understanding, and improving effectiveness in applications. The authors of the contributions collected here belong to the Lagrange Laboratory, a European Research Network that has been active for many years. This book will be of major interest to readers engaged with modern Civil Engineering.

  17. The forward tracking, an optical model method

    CERN Document Server

    Benayoun, M

    2002-01-01

    This Note describes the so-called Forward Tracking, and the underlying optical model, developed in the context of LHCb-Light studies. Starting from Velo tracks, cheated or found by real pattern recognition, the tracks are found in the ST1-3 chambers after the magnet. The main ingredient to the method is a parameterisation of the track in the ST1-3 region, based on the Velo track parameters and an X seed in one ST station. Performance with the LHCb-Minus and LHCb-Light setups is given.

  18. Experimental modeling methods in Industrial Engineering

    Directory of Open Access Journals (Sweden)

    Peter Trebuňa

    2009-03-01

    Full Text Available Dynamic approaches to management systems in present-day industrial practice force businesses to address management issues through continuous in-house improvement of production and non-production processes. Experience has repeatedly demonstrated the need for a systems approach, not only in analysis but also in the planning and actual implementation of these processes. This contribution therefore focuses on describing modeling in industrial practice from a systems perspective, in order to avoid carrying erroneous decisions into the implementation phase and thus to prevent the continued application of trial-and-error methods.

  19. Finite element modeling methods for photonics

    CERN Document Server

    Rahman, B M Azizur

    2013-01-01

    The term photonics can be used loosely to refer to a vast array of components, devices, and technologies that in some way involve manipulation of light. One of the most powerful numerical approaches available to engineers developing photonic components and devices is the Finite Element Method (FEM), which can be used to model and simulate such components/devices and analyze how they will behave in response to various outside influences. This resource provides a comprehensive description of the formulation and applications of FEM in photonics applications ranging from telecommunications, astron

  20. Brain computation is organized via power-of-two-based permutation logic

    Directory of Open Access Journals (Sweden)

    Kun Xie

    2016-11-01

    Full Text Available There is considerable scientific interest in understanding how cell assemblies, the long-presumed computational motif, are organized so that the brain can generate cognitive behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i - 1), giving rise to the specific-to-general cell-assembly organization capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based computational logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social cognitions. However, modulatory neurons, such as dopaminergic neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact even when the NMDA receptors, the synaptic switch for learning and memory, were deleted throughout adulthood, suggesting that it is likely developmentally pre-configured. Moreover, this logic is implemented in the cortex vertically by combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. The randomness of layers 2/3 cliques, which preferentially encode specific and low-combinatorial features and project inter-cortically, is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination, and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the non-randomness in layers 5/6, which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems, is ideal for robust feedback-control of motivation, emotion, consciousness, and behaviors. These observations suggest that the brain's basic
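
    The combinatorics behind the N = 2^i - 1 count is simply the number of non-empty subsets of i distinct inputs, enumerated below for i = 4; the input labels are arbitrary.

        # All non-empty input combinations for i inputs: N = 2**i - 1.
        from itertools import combinations

        def input_combinations(inputs):
            return [c for r in range(1, len(inputs) + 1)
                    for c in combinations(inputs, r)]

        inputs = ["A", "B", "C", "D"]
        combos = input_combinations(inputs)
        assert len(combos) == 2 ** len(inputs) - 1   # 15 for i = 4
        print(len(combos), combos[:5])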

  1. A two-dimensional iterative panel method and boundary layer model for bio-inspired multi-body wings

    Science.gov (United States)

    Blower, Christopher J.; Dhruv, Akash; Wickenheiser, Adam M.

    2014-03-01

    The increased use of Unmanned Aerial Vehicles (UAVs) has created a continuous demand for improved flight capabilities and range of use. During the last decade, engineers have turned to bio-inspiration for new and innovative flow control methods for gust alleviation, maneuverability, and stability improvement using morphing aircraft wings. The bio-inspired wing design considered in this study mimics the flow manipulation techniques performed by birds to extend the operating envelope of UAVs through the installation of an array of feather-like panels across the airfoil's upper and lower surfaces while replacing the trailing edge flap. Each flap has the ability to deflect into both the airfoil and the inbound airflow using hinge points with a single degree-of-freedom, situated at 20%, 40%, 60% and 80% of the chord. The installation of the surface flaps offers configurations that enable advantageous maneuvers while alleviating gust disturbances. Due to the number of possible permutations available for the flap configurations, an iterative constant-strength doublet/source panel method has been developed with an integrated boundary layer model to calculate the pressure distribution and viscous drag over the wing's surface. As a result, the lift, drag and moment coefficients for each airfoil configuration can be calculated. The flight coefficients of this numerical method are validated using experimental data from a low speed suction wind tunnel operating at a Reynolds Number 300,000. This method enables the aerodynamic assessment of a morphing wing profile to be performed accurately and efficiently in comparison to Computational Fluid Dynamics methods and experiments as discussed herein.

  2. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
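
    As a simplified, frequentist analogue of explicitly specifying a non-normal error distribution, the sketch below fits a linear growth model with Student-t errors by maximum likelihood; the simulated data, starting values, and optimizer choice are assumptions, and the paper's own approach uses Bayesian MCMC in SAS.

        # Linear growth with heavy-tailed (Student-t) errors via MLE.
        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(2)
        t = np.tile(np.arange(5), 40).astype(float)   # 40 subjects x 5 waves
        y = 20 + 3.5 * t + stats.t.rvs(df=3, size=t.size, random_state=rng)

        def neg_loglik(theta):
            b0, b1, log_sigma, log_df = theta
            resid = y - (b0 + b1 * t)
            return -np.sum(stats.t.logpdf(resid, df=np.exp(log_df),
                                          scale=np.exp(log_sigma)))

        fit = optimize.minimize(neg_loglik, x0=[0.0, 1.0, 0.0, 1.0],
                                method="Nelder-Mead")
        b0, b1, log_sigma, log_df = fit.x
        print(f"intercept {b0:.2f}, slope {b1:.2f}, df {np.exp(log_df):.1f}")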

  3. Optimization of a parallel permutation testing function for the SPRINT R package.

    Science.gov (United States)

    Petrou, Savvas; Sloan, Terence M; Mewissen, Muriel; Forster, Thorsten; Piotrowski, Michal; Dobrzelecki, Bartosz; Ghazal, Peter; Trew, Arthur; Hill, Jon

    2011-12-10

    The statistical language R and its Bioconductor package are favoured by many biostatisticians for processing microarray data. The amount of data produced by some analyses has reached the limits of many common bioinformatics computing infrastructures. High Performance Computing systems offer a solution to this issue. The Simple Parallel R Interface (SPRINT) is a package that provides biostatisticians with easy access to High Performance Computing systems and allows the addition of parallelized functions to R. Previous work has established that the SPRINT implementation of an R permutation testing function has close to optimal scaling on up to 512 processors on a supercomputer. Access to supercomputers, however, is not always possible, and so the work presented here compares the performance of the SPRINT implementation on a supercomputer with benchmarks on a range of platforms including cloud resources and a common desktop machine with multiprocessing capabilities.
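
    The pattern that makes permutation testing parallelize so well is that shuffles are independent, so they can be split across workers and the null-exceedance counts summed. The sketch below shows that pattern with Python multiprocessing on a two-sample mean difference; it illustrates the idea only and is not the SPRINT implementation.

        # Parallel permutation test: split shuffles across worker processes.
        import numpy as np
        from multiprocessing import Pool

        A, B = None, None  # set in each worker by the initializer

        def _init(a, b):
            global A, B
            A, B = a, b

        def null_count(args):
            n_perm, seed = args
            rng = np.random.default_rng(seed)
            obs = A.mean() - B.mean()
            pooled = np.concatenate([A, B])
            hits = 0
            for _ in range(n_perm):
                z = rng.permutation(pooled)
                hits += abs(z[:A.size].mean() - z[A.size:].mean()) >= abs(obs)
            return hits

        if __name__ == "__main__":
            a = np.random.normal(0.3, 1, 50)
            b = np.random.normal(0.0, 1, 50)
            chunks = [(2500, s) for s in range(4)]   # 10000 shuffles, 4 workers
            with Pool(4, initializer=_init, initargs=(a, b)) as pool:
                total = sum(pool.map(null_count, chunks))
            print("empirical p =", (total + 1) / (10000 + 1))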

  4. Permutation entropy analysis of density fluctuations in the torsatron TJ-K

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dong; Ramisch, Mirko [Institut fuer Grenzflaechenverfahrenstechnik und Plasmatechnologie, Universitaet Stuttgart (Germany)

    2014-07-01

    In order to explore the nature of density fluctuations in the edge of magnetically confined fusion plasmas, the technique of permutation entropy and statistical complexity is used. The location of fluctuations on the entropy versus complexity plane classifies the dynamical behaviour of the system, which can be differentiated between stochastic and chaotic. The latter is supposed to be connected to a specific temporal form of intermittent density events, i.e. blobs, in the scrape-off layer (SOL). In this contribution, density fluctuations measured with a Langmuir probe in the torsatron TJ-K are analyzed with respect to their dynamical nature. Radial scans are performed across the separatrix to distinguish the dynamics in the inner edge and the SOL. Comparisons with well-known test systems indeed point to a qualitative change in the dynamics across the separatrix. In the region of maximum density gradient, the fluctuations are characterized by minimum entropy. The results will be discussed on separate scales.

  5. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  6. Functional methods in the generalized Dicke model

    International Nuclear Information System (INIS)

    Alcalde, M. Aparicio; Lemos, A.L.L. de; Svaiter, N.F.

    2007-01-01

    The Dicke model describes an ensemble of N identical two-level atoms (qubits) coupled to a single quantized mode of a bosonic field. The fermion Dicke model is obtained by replacing the atomic pseudo-spin operators with a linear combination of Fermi operators. The generalized fermion Dicke model is defined by introducing different coupling constants between the single mode of the bosonic field and the reservoir: g_1 for the rotating terms and g_2 for the counter-rotating terms. In the limit N → ∞, the thermodynamics of the fermion Dicke model can be analyzed using the path integral approach with functional methods. The system exhibits a second-order phase transition from the normal to the superradiant phase at some critical temperature, with the presence of a condensate. We evaluate the critical transition temperature and present the spectrum of the collective bosonic excitations for the general case (g_1 ≠ 0 and g_2 ≠ 0). There is quantum critical behavior when the coupling constants g_1 and g_2 satisfy g_1 + g_2 = (ω_0 Ω)^{1/2}, where ω_0 is the frequency of the mode of the field and Ω is the energy gap between the energy eigenstates of the qubits. Two particular situations are analyzed. First, we present the spectrum of the collective bosonic excitations in the case g_1 ≠ 0 and g_2 = 0, recovering the well-known results. Second, the case g_1 = 0 and g_2 ≠ 0 is studied. In this last case, it is possible to have a superradiant phase even when only virtual processes are introduced in the interaction Hamiltonian. Here a quantum phase transition also appears at the critical coupling g_2 = (ω_0 Ω)^{1/2}, and for larger values of the coupling the system enters the superradiant phase with a Goldstone mode. (author)

  7. Every ternary permutation constraint satisfaction problem parameterized above average has a kernel with a quadratic number of variables

    DEFF Research Database (Denmark)

    Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias

    2012-01-01

    A ternary Permutation-CSP is specified by a subset Π of the symmetric group S3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes...

  8. Key-Alternating Ciphers in a Provable Setting: Encryption Using a Small Number of Public Permutations (Extended Abstract)

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Knudsen, L.R.; Leander, Gregor

    2012-01-01

    at least 2^{2n/3} queries to the underlying permutations to be able to distinguish the construction from random. We argue further that the bound is tight for t = 2 but there is a gap in the bounds for t > 2, which is left as an open and interesting problem. Additionally, in terms of statistical attacks, we...

  9. Mathematical models and methods for planet Earth

    CERN Document Server

    Locatelli, Ugo; Ruggeri, Tommaso; Strickland, Elisabetta

    2014-01-01

    In 2013 several scientific activities have been devoted to mathematical researches for the study of planet Earth. The current volume presents a selection of the highly topical issues presented at the workshop “Mathematical Models and Methods for Planet Earth”, held in Roma (Italy), in May 2013. The fields of interest span from impacts of dangerous asteroids to the safeguard from space debris, from climatic changes to monitoring geological events, from the study of tumor growth to sociological problems. In all these fields the mathematical studies play a relevant role as a tool for the analysis of specific topics and as an ingredient of multidisciplinary problems. To investigate these problems we will see many different mathematical tools at work: just to mention some, stochastic processes, PDE, normal forms, chaos theory.

  10. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease, as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  11. FDTD method and models in optical education

    Science.gov (United States)

    Lin, Xiaogang; Wan, Nan; Weng, Lingdong; Zhu, Hao; Du, Jihe

    2017-08-01

    In this paper, the finite-difference time-domain (FDTD) method is proposed as a pedagogical tool in optical education. Meanwhile, FDTD Solutions, a simulation software package based on the FDTD algorithm, is presented as a new tool that helps beginners build optical models and analyze optical problems. The core of the FDTD algorithm is that the time-dependent Maxwell's equations are discretized in their space and time partial derivatives and then used to simulate the response of the interaction between an electronic pulse and an ideal conductor or semiconductor. Because the electromagnetic field is solved in the time domain, memory usage is reduced and broadband results can be obtained from a single simulation. Thus, promoting the FDTD algorithm in optical education is both practical and efficient. FDTD enables us to design, analyze and test modern passive and nonlinear photonic components (such as bio-particles and nanoparticles) for wave propagation, scattering, reflection, diffraction, polarization and nonlinear phenomena. The different FDTD models can help teachers and students solve almost all of the optical problems encountered in optical education. Additionally, the GUI of FDTD Solutions is friendly enough that beginners can master it quickly.
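
    To give a flavour of what such a tool computes, here is a minimal one-dimensional FDTD (Yee) update loop for vacuum. It is a generic textbook sketch with normalized units, not the FDTD Solutions package:

```python
import numpy as np

# Minimal 1-D FDTD (Yee) sketch in vacuum, illustrating the leapfrog update
# of the discretized Maxwell's equations.
nx, nt = 200, 400
c = 1.0                  # normalized speed of light
dx = 1.0
dt = 0.5 * dx / c        # satisfies the Courant stability condition
ez = np.zeros(nx)        # electric field on the integer grid
hy = np.zeros(nx - 1)    # magnetic field on the staggered half-step grid

for n in range(nt):
    hy += (dt / dx) * (ez[1:] - ez[:-1])             # update H from curl E
    ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])       # update E from curl H
    ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian pulse source
```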

  12. Forest Disturbance Mapping Using Dense Synthetic Landsat/MODIS Time-Series and Permutation-Based Disturbance Index Detection

    Directory of Open Access Journals (Sweden)

    David Frantz

    2016-03-01

    Full Text Available Spatio-temporal information on process-based forest loss is essential for a wide range of applications. Despite remote sensing being the only feasible means of monitoring forest change at regional or greater scales, there is no retrospectively available remote sensor that meets the demand of monitoring forests with the required spatial detail and guaranteed high temporal frequency. As an alternative, we employed the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to produce a dense synthetic time series by fusing Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) nadir Bidirectional Reflectance Distribution Function (BRDF) adjusted reflectance. Forest loss was detected by applying a multi-temporal disturbance detection approach implementing a Disturbance Index-based detection strategy. The detection thresholds were permuted with random numbers drawn from the normal distribution in order to generate a multi-dimensional threshold confidence area. As a result, a more robust parameterization and a spatially more coherent detection could be achieved. (i) The original Landsat time series; (ii) the synthetic time series; and (iii) a combined hybrid approach were used to identify the timing and extent of disturbances. The identified clearings in the Landsat detection were verified using an annual woodland clearing dataset from Queensland's Statewide Landcover and Trees Study. Disturbances caused by stand-replacing events were successfully identified. The increased temporal resolution of the synthetic time series indicated promising additional information on disturbance timing. The results of the hybrid detection unified the benefits of both approaches, i.e., the spatial quality and general accuracy of the Landsat detection and the increased temporal information of the synthetic time series. Results indicated that a temporal improvement in the detection of the disturbance date could be achieved relative to the irregularly spaced Landsat
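
    A minimal sketch of the threshold-permutation idea as we read it: the detection threshold is drawn repeatedly from a normal distribution, and a disturbance is flagged only when the Disturbance Index exceeds the threshold in most draws. The DI series, distribution parameters and majority rule below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def permuted_threshold_detection(di_series, mu=2.0, sigma=0.25, n_draws=100):
    """Flag a disturbance when the Disturbance Index exceeds a threshold
    drawn from N(mu, sigma) in a majority of random draws (illustrative)."""
    thresholds = rng.normal(mu, sigma, n_draws)
    # Fraction of draws in which each observation exceeds the threshold.
    exceed = (di_series[:, None] > thresholds[None, :]).mean(axis=1)
    return exceed > 0.5   # boolean disturbance flags per time step

di = np.array([0.5, 0.8, 1.1, 2.6, 3.0, 1.0])  # hypothetical DI time series
print(permuted_threshold_detection(di))         # flags at indices 3 and 4
```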

  13. Free wake models for vortex methods

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, K. [Technical Univ. Berlin, Aerospace Inst. (Germany)

    1997-08-01

    The blade element method is fast and gives good results. For some problems (rotor shapes or flow conditions) it can be better to use vortex methods. Different methods for calculating a wake geometry will be presented. (au)

  14. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    To illustrate these concepts a number of examples are used. These include models of polymer membranes, distillation and catalyst behaviour. Some detailed considerations within these models are stated and discussed. Model generation concepts are introduced and ideas of a reference model are given that shows...

  15. GREENSCOPE: A Method for Modeling Chemical Process ...

    Science.gov (United States)

    Current work within the U.S. Environmental Protection Agency's National Risk Management Research Laboratory is focused on the development of a method for modeling chemical process sustainability. The GREENSCOPE methodology, defined for the four bases of Environment, Economics, Efficiency, and Energy, can evaluate processes with over a hundred different indicators. These indicators provide a means for realizing the principles of green chemistry and green engineering in the context of sustainability. Development of the methodology has centered on three focal points. One is a taxonomy of impacts that describes the indicators and provides absolute scales for their evaluation. The setting of best and worst limits for the indicators allows the user to know the status of the process under study in relation to understood values. Thus, existing or imagined processes can be evaluated according to their relative indicator scores, and process modifications can strive towards realizable targets. A second area of focus is in advancing definitions of data needs for the many indicators of the taxonomy. Each of the indicators requires specific data for its calculation. Values needed and data sources have been identified. These needs can be mapped according to the information source (e.g., input stream, output stream, external data, etc.) for each of the bases. The user can visualize data-indicator relationships on the way to choosing selected ones for evaluation.

  16. Model reduction methods for vector autoregressive processes

    CERN Document Server

    Brüggemann, Ralf

    2004-01-01

    1.1 Objective of the Study Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in large scale macroeconometric models which were popular at that time. Therefore, he advocated largely unrestricted reduced form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i...

  17. A business case method for business models

    NARCIS (Netherlands)

    Meertens, Lucas Onno; Starreveld, E.; Iacob, Maria Eugenia; Nieuwenhuis, Lambertus Johannes Maria; Shishkov, Boris

    2013-01-01

    Intuitively, business cases and business models are closely connected. However, a thorough literature review revealed no research on the combination of them. Besides that, little is written on the evaluation of business models at all. This makes it difficult to compare different business model

  18. Efficiently folding and circularly permuted variants of the Sapphire mutant of GFP

    Directory of Open Access Journals (Sweden)

    Griesbeck Oliver

    2003-05-01

    Full Text Available Abstract Background The green fluorescent protein (GFP) has been widely used in cell biology as a marker of gene expression, label of cellular structures, fusion tag or as a crucial constituent of genetically encoded biosensors. Mutagenesis of the wildtype gene has yielded a number of improved variants such as EGFP or colour variants suitable for fluorescence resonance energy transfer (FRET). However, folding of some of these mutants is still a problem when targeted to certain organelles or fused to other proteins. Results By directed rational mutagenesis, we have produced a new variant of the Sapphire mutant of GFP with improved folding properties that turns out to be especially beneficial when expressed within organelles or as a fusion tag. Its absorption spectrum is pH-stable and the pKa of its emission is 4.9, making it very resistant to pH perturbation inside cells. Conclusion "T-Sapphire" and its circular permutations can be used as labels of proteins or cellular structures and as FRET donors in combination with red-fluorescent acceptor proteins such as DsRed, making it possible to completely separate donor and acceptor excitation and emission in intensity-based FRET experiments.

  19. Tunneling and Speedup in Quantum Optimization for Permutation-Symmetric Problems

    Directory of Open Access Journals (Sweden)

    Siddharth Muthukrishnan

    2016-07-01

    Full Text Available Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather the semiclassical potential arising from the spin-coherent path-integral formalism. We then provide an example where the shape of the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided-level crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide another example where it yields an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin-vector dynamics, is at least as efficient as diabatic QA. Finally, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin-vector dynamics.

  20. Monitoring the informational efficiency of European corporate bond markets with dynamical permutation min-entropy

    Science.gov (United States)

    Zunino, Luciano; Bariviera, Aurelio F.; Guercio, M. Belén; Martinez, Lisana B.; Rosso, Osvaldo A.

    2016-08-01

    In this paper the permutation min-entropy has been implemented to unveil the presence of temporal structures in the daily values of European corporate bond indices from April 2001 to August 2015. More precisely, the evolution of the informational efficiency of the prices of fifteen sectorial indices has been carefully studied by estimating this information-theory-derived symbolic tool over a sliding time window. Such a dynamical analysis makes it possible to draw relevant conclusions about the effect that the 2008 credit crisis has had on the different European corporate bond sectors. It is found that the informational efficiency of some sectors, namely banks, financial services, insurance, and basic resources, has been strongly reduced by the financial crisis, whereas another set of sectors, comprising chemicals, automobiles, media, energy, construction, industrial goods & services, technology, and telecommunications, has only suffered a transitory loss of efficiency. Last but not least, the food & beverage, healthcare, and utilities sectors show behavior close to a random walk over practically the entire period of analysis, confirming a remarkable immunity to the 2008 financial crisis.
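
    For readers unfamiliar with the tool, a compact sketch of permutation min-entropy estimated over a sliding window follows. The window length, step and embedding dimension are illustrative choices, not those of the paper:

```python
import math
import numpy as np

def permutation_min_entropy(x, D=4):
    """Normalized permutation min-entropy: -log of the most probable ordinal
    pattern, divided by log(D!) so that a fully random series gives ~1."""
    counts = {}
    for i in range(len(x) - D + 1):
        pat = tuple(np.argsort(x[i:i + D]))   # ordinal pattern of the window
        counts[pat] = counts.get(pat, 0) + 1
    p_max = max(counts.values()) / sum(counts.values())
    return -math.log(p_max) / math.log(math.factorial(D))

def sliding_efficiency(x, window=250, step=5, D=4):
    """Entropy over a sliding window, tracking efficiency through time."""
    return [permutation_min_entropy(x[i:i + window], D)
            for i in range(0, len(x) - window + 1, step)]

rng = np.random.default_rng(0)
prices = np.cumsum(rng.standard_normal(1000))   # hypothetical index level
print(sliding_efficiency(np.diff(prices))[:5])  # values near 1 = efficient
```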

  1. Multiple Memory Structure Bit Reversal Algorithm Based on Recursive Patterns of Bit Reversal Permutation

    Directory of Open Access Journals (Sweden)

    K. K. L. B. Adikaram

    2014-01-01

    Full Text Available With the increasing demand for online/inline data processing, efficient Fourier analysis becomes more and more relevant. Because the bit reversal process requires considerable processing time within the Fast Fourier Transform (FFT) algorithm, it is vital to optimize the bit reversal algorithm (BRA). This paper introduces an efficient BRA with multiple memory structures. In 2009, Elster showed the relation between the first and second halves of the bit reversal permutation (BRP) and stated that it may seriously impact the cache performance of the computer if implemented directly. We found exceptions, especially when the said index mapping was implemented with multiple one-dimensional memory structures instead of a multidimensional or a single one-dimensional memory structure. We also found a new index mapping, even after the recursive splitting of the BRP into equal-sized slots. The four-array and four-vector versions of the BRA with the new index mapping showed 34% and 16% performance improvements, respectively, relative to similar versions of Elster's linear BRA, which uses a single one-dimensional memory structure.
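
    For reference, the mapping that all BRA variants compute can be written in a few lines. This plain recurrence is a textbook version, not the paper's optimized multi-array algorithm:

```python
def bit_reversal_permutation(n_bits):
    """Return the bit-reversal permutation of 0..2**n_bits - 1.
    Plain reference implementation; the paper's multi-array variants
    optimize the memory layout of exactly this mapping."""
    n = 1 << n_bits
    rev = [0] * n
    for i in range(1, n):
        # Shift the previously computed result and bring in the lowest bit of i
        # as the new highest bit.
        rev[i] = (rev[i >> 1] >> 1) | ((i & 1) << (n_bits - 1))
    return rev

print(bit_reversal_permutation(3))  # [0, 4, 2, 6, 1, 5, 3, 7]
```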

  2. Permutation entropy of finite-length white-noise time series.

    Science.gov (United States)

    Little, Douglas J; Kane, Deb M

    2016-08-01

    Permutation entropy (PE) is commonly used to discriminate complex structure from white noise in a time series. While the PE of white noise is well understood in the long time-series limit, analysis in the general case is currently lacking. Here the expectation value and variance of white-noise PE are derived as functions of the number of ordinal pattern trials, N, and the embedding dimension, D. It is demonstrated that the probability distribution of the white-noise PE converges to a χ^{2} distribution with D!-1 degrees of freedom as N becomes large. It is further demonstrated that the PE variance for an arbitrary time series can be estimated as the variance of a related metric, the Kullback-Leibler entropy (KLE), allowing the qualitative N≫D! condition to be recast as a quantitative estimate of the N required to achieve a desired PE calculation precision. Application of this theory to statistical inference is demonstrated in the case of an experimentally obtained noise series, where the probability of obtaining the observed PE value was calculated assuming a white-noise time series. Standard statistical inference can be used to draw conclusions whether the white-noise null hypothesis can be accepted or rejected. This methodology can be applied to other null hypotheses, such as discriminating whether two time series are generated from different complex system states.
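
    A short sketch of the statistic under discussion: the plug-in permutation entropy of a white-noise series compared against the expectation implied by the χ² limit, whose bias term is (D!-1)/(2N) in nats. Variable names and parameter choices are ours:

```python
import math
import numpy as np

def permutation_entropy(x, D=3):
    """Plug-in permutation entropy (in nats) from ordinal pattern counts."""
    counts = {}
    for i in range(len(x) - D + 1):
        pat = tuple(np.argsort(x[i:i + D]))
        counts[pat] = counts.get(pat, 0) + 1
    p = np.array(list(counts.values()), float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

D, N = 3, 10_000
x = np.random.default_rng(0).standard_normal(N + D - 1)  # N ordinal patterns
pe = permutation_entropy(x, D)
# White-noise expectation implied by the chi^2 limit described above:
# E[PE] ~ ln(D!) - (D! - 1) / (2N).
expected = math.log(math.factorial(D)) - (math.factorial(D) - 1) / (2 * N)
print(f"PE = {pe:.5f}, white-noise expectation ~ {expected:.5f}")
```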

  3. Cervical spine motion in manual versus Jackson table turning methods in a cadaveric global instability model.

    Science.gov (United States)

    DiPaola, Matthew J; DiPaola, Christian P; Conrad, Bryan P; Horodyski, MaryBeth; Del Rossi, Gianluca; Sawers, Andrew; Bloch, David; Rechtine, Glenn R

    2008-06-01

    A study of spine biomechanics in a cadaver model. To quantify motion in multiple axes created by transfer methods from stretcher to operating table in the prone position in a cervical global instability model. Patients with an unstable cervical spine remain at high risk for further secondary injury until their spine is adequately surgically stabilized. Previous studies have revealed that collars have significant but limited benefit in preventing cervical motion when manually transferring patients. The literature proposes multiple methods of patient transfer, although no one method has been universally adopted. To date, no study has effectively evaluated the relationship between spine motion and various methods of patient transfer to an operating room table for prone positioning. A global instability was surgically created at C5-6 in 4 fresh cadavers with no history of spine pathology. All cadavers were tested both with and without a rigid cervical collar in the intact and unstable states. Three headrest permutations were evaluated: Mayfield (SM USA Inc), Prone View (Dupaco, Oceanside, CA), and Foam Pillow (OSI, Union City, CA). A trained group of medical staff performed each of 2 transfer methods: the "manual" and the "Jackson table" transfer. The manual technique entailed performing a standard rotation of the supine patient on a stretcher to the prone position on the operating room table with in-line manual cervical stabilization. The "Jackson" technique involved sliding the supine patient to the Jackson table (OSI, Union City, CA) with manual in-line cervical stabilization, securing them to the table, then initiating the table's lock and turn mechanism and rotating them into a prone position. An electromagnetic tracking device captured angular motion between the C5 and C6 vertebral segments. Repeated measures statistical analysis was performed to evaluate the following conditions: collar use (2 levels), headrest (3 levels), and turning technique (2 levels). For all

  4. Numerical methods in Markov chain modeling

    Science.gov (United States)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
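
    Of the approaches surveyed, the simplest to sketch is power iteration on a (here dense) transition matrix. The example below is generic and not tied to the Krylov-subspace methods the record highlights:

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=100_000):
    """Power iteration for the stationary distribution pi with pi P = pi,
    i.e. the left eigenvector of P for the known eigenvalue 1. P is a
    row-stochastic transition matrix; a dense stand-in for the sparse case."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
print(stationary_distribution(P))
```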

  5. Dynamic spatial panels : models, methods, and inferences

    NARCIS (Netherlands)

    Elhorst, J. Paul

    This paper provides a survey of the existing literature on the specification and estimation of dynamic spatial panel data models, a collection of models for spatial panels extended to include one or more of the following variables and/or error terms: a dependent variable lagged in time, a dependent

  6. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the ...

  7. Combining static and dynamic modelling methods: a comparison of four methods

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1995-01-01

    A conceptual model of a system is an explicit description of the behaviour required of the system. Methods for conceptual modelling include entity-relationship (ER) modelling, data flow modelling, Jackson System Development (JSD) and several object-oriented analysis methods. Given the current

  8. A Pattern-Oriented Approach to a Methodical Evaluation of Modeling Methods

    Directory of Open Access Journals (Sweden)

    Michael Amberg

    1996-11-01

    Full Text Available The paper describes a pattern-oriented approach to evaluating modeling methods and comparing various methods with each other from a methodical viewpoint. A specific set of principles (the patterns) is defined by investigating the notations and the documentation of comparable modeling methods. Each principle helps to examine some parts of the methods from a specific point of view. All principles together lead to an overall picture of the method under examination. First, the core ("method-neutral") meaning of each principle is described. Then the methods are examined with regard to the principle. Afterwards, the method-specific interpretations are compared with each other and with the core meaning of the principle. By this procedure, the strengths and weaknesses of modeling methods regarding methodical aspects are identified. The principles are described uniformly using a principle description template, following the descriptions of object-oriented design patterns. The approach is demonstrated by evaluating a business process modeling method.

  9. Characterization of the block permutations that represent reversible one-dimensional cellular automata; Caracterizacion de las permutaciones en bloque que representan automatas celulares unidimensionales reversibles

    Energy Technology Data Exchange (ETDEWEB)

    Seck Tuoh Mora, J. C. [Instituto Politecnico Nacional, Mexico, D. F. (Mexico)

    2001-06-01

    We present a review of reversible one-dimensional cellular automata and their representation by block permutations. We analyze in detail the behavior of such block permutations to obtain their characterization. [Spanish] This paper reviews the representation and behavior of reversible one-dimensional cellular automata by means of block permutations. We make a detailed analysis of the behavior of these permutations to obtain their characterization.

  10. Resampling methods for evaluating classification accuracy of wildlife habitat models

    Science.gov (United States)

    Verbyla, David L.; Litvaitis, John A.

    1989-11-01

    Predictive models of wildlife-habitat relationships often have been developed without being tested. The apparent classification accuracy of such models can be optimistically biased and misleading. Data resampling methods exist that yield a more realistic estimate of model classification accuracy. These methods are simple and require no new sample data. We illustrate these methods (cross-validation, jackknife resampling, and bootstrap resampling) with computer simulation to demonstrate the increase in precision of the estimate. The bootstrap method is then applied to field data as a technique for model comparison. We recommend that biologists use some resampling procedure to evaluate wildlife habitat models prior to field evaluation.
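
    A minimal sketch of one of the named resampling procedures: bootstrap estimation of classification accuracy with out-of-bag testing. The toy classifier and data are hypothetical stand-ins for a wildlife-habitat model:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_accuracy(X, y, fit, predict, n_boot=200):
    """Bootstrap estimate of classification accuracy: train on a bootstrap
    sample, test on the observations left out of that sample (the '0.632'
    family of estimators refines this further)."""
    n, accs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # sample with replacement
        oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag test set
        if oob.size == 0:
            continue
        model = fit(X[idx], y[idx])
        accs.append(np.mean(predict(model, X[oob]) == y[oob]))
    return float(np.mean(accs))

# Toy habitat-style data and a nearest-class-mean classifier (illustrative).
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.repeat([0, 1], 50)
fit = lambda X, y: [X[y == k].mean(axis=0) for k in (0, 1)]
predict = lambda m, X: np.argmin(
    [np.linalg.norm(X - mu, axis=1) for mu in m], axis=0)
print(bootstrap_accuracy(X, y, fit, predict))
```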

  11. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated, with omissions or miscalculations very likely. This situation has fostered research into automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels, so some analysis methods might not be applicable to all enterprise models. This paper presents the compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  12. A Comparison of Two Balance Calibration Model Building Methods

    Science.gov (United States)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  13. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group contribution based property prediction models. The method is applied to lipid systems where the Original UNIFAC...

  14. Data mining concepts models methods and algorithms

    CERN Document Server

    Kantardzic, Mehmed

    2011-01-01

    This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces to extract new information for decision making. The goal of this book is to provide a single introductory source, organized in a systematic way, that directs readers in the analysis of large data sets through the explanation of basic concepts, models and methodologies developed in recent decades.

  15. Ensemble Learning Method for Hidden Markov Models

    Science.gov (United States)

    2014-12-01

    [Only fragments of this report's text are available.] The outputs of the individual hidden Markov models are combined using a decision-level fusion method such as an artificial neural network or a hierarchical mixture of experts (HME); simpler techniques such as algebraic fusion can also be used. The approach was evaluated on...

  16. A Versatile Nonlinear Method for Predictive Modeling

    Science.gov (United States)

    Liou, Meng-Sing; Yao, Weigang

    2015-01-01

    As computational fluid dynamics techniques and tools become widely accepted for real-world practice today, it is intriguing to ask: in what areas can they be utilized to their full potential in the future? Some promising areas include design optimization and the exploration of fluid dynamics phenomena (the concept of a numerical wind tunnel), both of which share the common feature that some parameters are varied repeatedly and the computation can be costly. We are especially interested in the need for an accurate and efficient approach for handling these applications: (1) capturing complex nonlinear dynamics inherent in a system under consideration and (2) versatility (robustness) to encompass a range of parametric variations. In our previous paper, we proposed to use first-order Taylor expansions collected at numerous sampling points along a trajectory and assembled together via nonlinear weighting functions. The validity and performance of this approach was demonstrated for a number of problems with vastly different input functions. In this study, we are especially interested in enhancing the method's accuracy; we extend it to include the second-order Taylor expansion, which, however, requires a complicated evaluation of Hessian matrices for a system of equations, as in fluid dynamics. We propose a method to avoid these Hessian matrices while maintaining the accuracy. Results based on the method are presented to confirm its validity.
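
    A bare-bones sketch of the first-order version described above: local Taylor expansions blended by nonlinear (here Gaussian) weighting functions. The weighting choice and all names are our assumptions, not the authors' formulation:

```python
import numpy as np

def taylor_surrogate(x, xs, fs, grads, width=1.0):
    """First-order Taylor surrogate: blend local expansions f_i + g_i.(x - x_i)
    with normalized Gaussian weights centered on the sampling points."""
    d2 = np.sum((xs - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * width ** 2))
    w /= w.sum()
    local = fs + np.einsum('ij,ij->i', grads, x - xs)  # f_i + grad_i . (x - x_i)
    return np.dot(w, local)

# Sample f(x) = sin(x0) + x1^2 at a few points, then predict elsewhere.
xs = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
fs = np.sin(xs[:, 0]) + xs[:, 1] ** 2
grads = np.stack([np.cos(xs[:, 0]), 2 * xs[:, 1]], axis=1)
print(taylor_surrogate(np.array([0.5, 0.5]), xs, fs, grads))  # exact ~0.729
```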

  17. Diffusion in condensed matter methods, materials, models

    CERN Document Server

    Kärger, Jörg

    2005-01-01

    Diffusion as the process of particle transport due to stochastic movement is a phenomenon of crucial relevance for a large variety of processes and materials. This comprehensive, handbook- style survey of diffusion in condensed matter gives detailed insight into diffusion as the process of particle transport due to stochastic movement. Leading experts in the field describe in 23 chapters the different aspects of diffusion, covering microscopic and macroscopic experimental techniques and exemplary results for various classes of solids, liquids and interfaces as well as several theoretical concepts and models. Students and scientists in physics, chemistry, materials science, and biology will benefit from this detailed compilation.

  18. Extrudate Expansion Modelling through Dimensional Analysis Method

    DEFF Research Database (Denmark)

    to describe the extrudate expansion. From the three dimensionless groups, an equation with three experimentally determined parameters is derived to express the extrudate expansion. The model is evaluated with whole wheat flour and aquatic feed extrusion experimental data. The average deviations...... of the correlation are respectively 5.9% and 9% for the whole wheat flour and the aquatic feed extrusion. An alternative 4-coefficient equation is also suggested from the 3 dimensionless groups. The average deviations of the alternative equation are respectively 5.8% and 2.5% in correlation with the same set...

  19. Reverberation Modelling Using a Parabolic Equation Method

    Science.gov (United States)

    2012-10-01

    results obtained by other authors and methods. Abstract: DRDC Atlantic has developed an acoustic echo clutter model based on... modes (PE, for parabolic equation), to determine the feasibility of computing the acoustic field and the reverberation of target echoes in different... 2012. Introduction or background: DRDC Atlantic has developed an acoustic echo clutter model based on adiabatic normal modes for

  20. Current status of uncertainty analysis methods for computer models

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu

    1989-11-01

    This report surveys several existing uncertainty analysis methods for estimating computer output uncertainty caused by input uncertainties, illustrating application examples of those methods to three computer models, MARCH/CORRAL II, TERFOC and SPARC. Merits and limitations of the methods are assessed in the applications, and recommendations for selecting uncertainty analysis methods are provided. (author)
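
    As background, the generic Monte Carlo scheme underlying such uncertainty analyses can be sketched in a few lines. The model and input distributions below are hypothetical, not taken from MARCH/CORRAL II, TERFOC or SPARC:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_uncertainty(model, input_dists, n=10_000):
    """Monte Carlo propagation: sample uncertain inputs, run the model on
    every sample, and summarize the output uncertainty. Generic sketch, not
    one of the specific methods surveyed in the report."""
    samples = {name: draw(n) for name, draw in input_dists.items()}
    out = model(**samples)
    return out.mean(), out.std(), np.percentile(out, [5, 95])

# Hypothetical model with two uncertain inputs.
model = lambda a, b: a * np.exp(-b)
dists = {'a': lambda n: rng.normal(1.0, 0.1, n),
         'b': lambda n: rng.uniform(0.5, 1.5, n)}
print(mc_uncertainty(model, dists))   # mean, std, and 90% interval
```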

  1. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  2. Laser filamentation mathematical methods and models

    CERN Document Server

    Lorin, Emmanuel; Moloney, Jerome

    2016-01-01

    This book is focused on the nonlinear theoretical and mathematical problems associated with ultrafast intense laser pulse propagation in gases and, in particular, in air. With the aim of understanding the physics of filamentation in gases, solids, the atmosphere, and even biological tissue, specialists in nonlinear optics and filamentation from both physics and mathematics attempt to rigorously derive and analyze relevant non-perturbative models. Modern laser technology allows the generation of ultrafast (few-cycle) laser pulses, with intensities exceeding the internal electric field in atoms and molecules (E = 5×10^9 V/cm, or intensity I = 3.5×10^16 W/cm^2). The interaction of such pulses with atoms and molecules leads to new, highly nonlinear nonperturbative regimes, where new physical phenomena, such as High Harmonic Generation (HHG), occur, and from which the shortest (attosecond, the natural time scale of the electron) pulses have been created. One of the major experimental discoveries in this nonlinear...

  3. A compositional method to model dependent failure behavior based on PoF models

    Directory of Open Access Journals (Sweden)

    Zhiguo ZENG

    2017-10-01

    Full Text Available In this paper, a new method is developed to model dependent failure behavior among failure mechanisms. Unlike existing methods, the developed method models the root cause of the dependency explicitly, so that a deterministic model, rather than a probabilistic one, can be established. The developed method comprises three steps. First, physics-of-failure (PoF) models are utilized to model each failure mechanism. Then, interactions among failure mechanisms are modeled as a combination of three basic relations: competition, superposition and coupling. This is why the method is referred to as the "compositional method". Finally, the PoF models and the interaction model are combined to develop a deterministic model of the dependent failure behavior. As a demonstration, the method is applied to an actual spool and the developed failure behavior model is validated by a wear test. The result demonstrates that the compositional method is an effective way to model dependent failure behavior.

  4. METHODICAL MODEL FOR TEACHING BASIC SKI TURN

    Directory of Open Access Journals (Sweden)

    Danijela Kuna

    2013-07-01

    Full Text Available With the aim of forming an expert model of the most important operators for teaching the basic ski turn in ski schools, an experiment was conducted on a sample of 20 ski experts from different countries (Croatia, Bosnia and Herzegovina, and Slovenia). From the group of the most commonly used operators for teaching the basic ski turn, the experts picked the 6 most important: uphill turn and jumping into snowplough, basic turn with hand sideways, basic turn with clapping, ski poles in front, ski poles on neck, and uphill turn with active ski guiding. Afterwards, ranking and selection of the most efficient operators was carried out. In line with the aim of the research, a chi-square test was used to assess the differences between the frequencies of the chosen operators, the differences between the values of the most important operators, and the differences between experts of different nationalities. Statistically significant differences were found between the frequencies of the chosen operators (χ² = 24.61; p = 0.01), while differences between the values of the most important operators were not evident (χ² = 1.94; p = 0.91). Differences between experts of different nationalities were only noticeable in the expert evaluation of the "ski poles on neck" operator (χ² = 7.83; p = 0.02). The results of the current research provide useful information about the methodological principles of organizing basic ski turn instruction in ski schools.

  5. Drexel University Shell Model (DUSM) algorithm

    Science.gov (United States)

    Valliéres, Michel; Novoselsky, Akiva

    1994-03-01

    This lecture is devoted to the Drexel University Shell Model (DUSM) code; this is a new shell-model code based on a separation of the various subspaces in which the single particle wavefunctions are defined. This is achieved via extensive use of permutation group concepts and a redefinition of the Coefficients of Fractional Parentage (CFP) to include permutation labels. This leads to a modern and efficient approach to the nuclear shell model.

  6. Drexel University Shell Model (DUSM) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Vallieres, M. (Drexel Univ., Philadelphia, PA (United States). Dept. of Physics and Atmospheric Science); Novoselsky, A. (Hebrew Univ., Jerusalem (Israel). Dept. of Physics)

    1994-03-28

    This lecture is devoted to the Drexel University Shell Model (DUSM) code; this is a new shell-model code based on a separation of the various subspaces in which the single particle wavefunctions are defined. This is achieved via extensive use of permutation group concepts and a redefinition of the Coefficients of Fractional Parentage (CFP) to include permutation labels. This leads to a modern and efficient approach to the nuclear shell model. (orig.)

  7. A Parametric Modelling Method for Dexterous Finger Reachable Workspaces

    Directory of Open Access Journals (Sweden)

    Wenzhen Yang

    2016-03-01

    Full Text Available Well-known algorithms such as the graphic, analytical and numerical methods have some defects when modelling the dexterous-finger workspace, which is a significant kinematical feature of dexterous hands and is valuable for grasp planning, motion control and mechanical design. A novel modelling method with convenient and parametric performance is introduced to generate the dexterous-finger reachable workspace. This method constructs the geometric topology of the dexterous-finger reachable workspace and uses a joint feature recognition algorithm to extract the kinematical parameters of the dexterous finger. Compared with the graphic, analytical and numerical methods, this parametric modelling method can automatically and conveniently construct more vivid forms and contours of the dexterous finger's workspace. The main contribution of this paper is that a workspace-modelling tool with high interactive efficiency is developed for designers to precisely visualize the dexterous-finger reachable workspace, which is valuable for analysing the flexibility of the dexterous finger.

  8. Modeling shallow water flows using the discontinuous Galerkin method

    CERN Document Server

    Khan, Abdul A

    2014-01-01

    Replacing the Traditional Physical Model Approach Computational models offer promise in improving the modeling of shallow water flows. As new techniques are considered, the process continues to change and evolve. Modeling Shallow Water Flows Using the Discontinuous Galerkin Method examines a technique that focuses on hyperbolic conservation laws and includes one-dimensional and two-dimensional shallow water flows and pollutant transports. Combines the Advantages of Finite Volume and Finite Element Methods This book explores the discontinuous Galerkin (DG) method, also known as the discontinuous finite element method, in depth. It introduces the DG method and its application to shallow water flows, as well as background information for implementing and applying this method for natural rivers. It considers dam-break problems, shock wave problems, and flows in different regimes (subcritical, supercritical, and transcritical). Readily Adaptable to the Real World While the DG method has been widely used in the fie...

  10. "Method, system and storage medium for generating virtual brick models"

    DEFF Research Database (Denmark)

    2009-01-01

    An exemplary embodiment is a method for generating a virtual brick model. The virtual brick models are generated by users and uploaded to a centralized host system. Users can build virtual models themselves or download and edit another user's virtual brick models while retaining the identity...... of the original virtual brick model. Routines are provided for both storing user created building steps in and generating automated building instructions for virtual brick models, generating a bill of materials for a virtual brick model and ordering physical bricks corresponding to a virtual brick model....

  11. Semi-Supervised Generation with Cluster-aware Generative Models

    DEFF Research Database (Denmark)

    Maaløe, Lars; Fraccaro, Marco; Winther, Ole

    2017-01-01

    Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning. Many real life data sets contain a small amount of labelled data points, that are typically disregarded when training generative models. We propose the Cluster...... a log-likelihood of −79.38 nats on permutation invariant MNIST, while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised, and still improve the log-likelihood performance with respect to related methods....

  12. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  13. IDEF method-based simulation model design and development framework

    Directory of Open Access Journals (Sweden)

    Ki-Young Jeong

    2009-09-01

    Full Text Available The purpose of this study is to provide an IDEF method-based integrated framework for a business process simulation model to reduce the model development time by increasing communication and knowledge reusability during a simulation project. In this framework, simulation requirements are collected by a function modeling method (IDEF0) and a process modeling method (IDEF3). Based on these requirements, a common data model is constructed using the IDEF1X method. From this reusable data model, multiple simulation models are automatically generated using a database-driven simulation model development approach. The framework is claimed to help both the requirement collection and experimentation phases during a simulation project by improving system knowledge, model reusability, and maintainability through the systematic use of three descriptive IDEF methods and the features of relational database technologies. A complex semiconductor fabrication case study was used as a testbed to evaluate and illustrate the concepts and the framework. Two different simulation software products were used to develop and control the semiconductor model from the same knowledge base. The case study empirically showed that this framework could help improve simulation project processes by using IDEF-based descriptive models and relational database technology. The authors also concluded that this framework could be easily applied to other analytical model generation by separating the logic from the data.

  14. Simulation of arc models with the block modelling method

    NARCIS (Netherlands)

    Thomas, R.; Lahaye, D.J.P.; Vuik, C.; Van der Sluis, L.

    2015-01-01

    Simulation of current interruption is currently performed with non-ideal switching devices for large power systems. Nevertheless, for small networks, non-ideal switching devices can be substituted by arc models. However, this substitution has a negative impact on the computation time. At the same

  15. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural network (ANN) modeling of solid oxide fuel cells (SOFC). "Advanced Methods

  16. Fuzzy Clustering Methods and their Application to Fuzzy Modeling

    DEFF Research Database (Denmark)

    Kroszynski, Uri; Zhou, Jianjun

    1999-01-01

    Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow prediction of system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate... A method to obtain an optimized number of clusters is outlined. Based upon the cluster's characteristics, a behavioural model is formulated in terms of a rule-base and an inference engine. The article reviews several variants for the model formulation. Some limitations of the methods are listed...

  17. [Model transfer method based on support vector machine].

    Science.gov (United States)

    Xiong, Yu-hong; Wen, Zhi-yu; Liang, Yu-qian; Chen, Qin; Zhang, Bo; Liu, Yu; Xiang, Xian-yi

    2007-01-01

    Model transfer is a basic method for achieving universal and comparable performance of spectrometer data by seeking a mathematical transformation relation among different spectrometers. Because nonlinear effects and small calibration sample sets are common in practice, it is important to solve the problem of model transfer under the conditions of evident nonlinear effects and small sample sets. This paper summarizes support vector machine theory, puts forward a model transfer method based on support vector machines and piecewise direct standardization, and uses computer simulation to give an example explaining the method and comparing it with an artificial neural network.
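
    A rough sketch of the support-vector half of the idea: regressing slave-instrument spectra onto a master instrument, channel by channel, with scikit-learn's SVR. The data, drift model and hyperparameters are illustrative, and the paper's piecewise direct standardization step is omitted:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical transfer set: the same samples measured on a master and a
# slave spectrometer (n samples x m wavelength channels). One SVR per
# channel maps slave spectra to master spectra.
n, m = 30, 20
master = rng.normal(size=(n, m))
slave = master * 1.1 + 0.05 * rng.normal(size=(n, m))   # simulated drift

models = [SVR(kernel='rbf', C=10.0).fit(slave, master[:, j]) for j in range(m)]

def transfer(spectrum):
    """Map a slave-instrument spectrum onto the master instrument's scale."""
    return np.array([mdl.predict(spectrum[None, :])[0] for mdl in models])

print(transfer(slave[0])[:5])   # should approximate master[0, :5]
```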

  18. Systematic Methods and Tools for Computer Aided Modelling

    DEFF Research Database (Denmark)

    Fedorova, Marina

    Models play important roles in the design and analysis of chemicals/bio-chemicals based products and the processes that manufacture them. Model-based methods and tools have the potential to decrease the number of experiments, which can be expensive and time consuming, and point to candidates......, where the experimental effort could be focused. In this project a general modelling framework for systematic model building through modelling templates, which supports the reuse of existing models via its new model import and export capabilities, has been developed. The new feature for model transfer...... has been developed by establishing a connection with an external modelling environment for code generation. The main contribution of this thesis is the creation of modelling templates and their connection with other modelling tools within a modelling framework. The goal was to create a user...

  19. Systems and methods for modeling and analyzing networks

    Science.gov (United States)

    Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W

    2013-10-29

    The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models, from data and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites to phenotypic readouts of efficacy and toxicity.

  20. Model-Based Methods in the Biopharmaceutical Process Lifecycle.

    Science.gov (United States)

    Kroll, Paul; Hofer, Alexandra; Ulonska, Sophia; Kager, Julian; Herwig, Christoph

    2017-12-01

    Model-based methods are increasingly used in all areas of biopharmaceutical process technology. They can be applied in the field of experimental design, process characterization, process design, monitoring and control. Benefits of these methods are lower experimental effort, process transparency, clear rationality behind decisions and increased process robustness. The possibility of applying methods adopted from different scientific domains accelerates this trend further. In addition, model-based methods can help to implement regulatory requirements as suggested by recent Quality by Design and validation initiatives. The aim of this review is to give an overview of the state of the art of model-based methods, their applications, further challenges and possible solutions in the biopharmaceutical process life cycle. Today, despite these advantages, the potential of model-based methods is still not fully exhausted in bioprocess technology. This is due to a lack of (i) acceptance of the users, (ii) user-friendly tools provided by existing methods, (iii) implementation in existing process control systems and (iv) clear workflows to set up specific process models. We propose that model-based methods be applied throughout the lifecycle of a biopharmaceutical process, starting with the set-up of a process model, which is used for monitoring and control of process parameters, and ending with continuous and iterative process improvement via data mining techniques.

  1. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    and kriging methods were compared for building surrogate models of a multiphase flow simulation model in a simplified... Keywords: surrogate modelling; simulation optimization; groundwater remediation; polynomial regression; radial basis... silty clay with a thickness of 1–2 m, while the lower part is made up of...

  2. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  3. Comparison of surrogate models with different methods in ...

    Indian Academy of Sciences (India)

    Surrogate modelling is an effective tool for reducing computational burden of simulation optimization. In this article, polynomial regression (PR), radial basis function artificial neural network (RBFANN), and kriging methods were compared for building surrogate models of a multiphase flow simulation model in a simplified ...
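
    Of the three compared techniques, the radial-basis-function surrogate is the quickest to sketch. The interpolation below is generic, with a stand-in objective in place of the multiphase flow simulator:

```python
import numpy as np

def rbf_surrogate(xs, fs, eps=1.0):
    """Fit a Gaussian radial-basis-function surrogate to expensive model runs.
    Generic sketch of one of the three compared approaches (RBF); kriging
    additionally models the spatial correlation of the residuals."""
    r2 = np.sum((xs[:, None, :] - xs[None, :, :]) ** 2, axis=-1)
    A = np.exp(-eps * r2)                      # Gaussian kernel matrix
    w = np.linalg.solve(A, fs)                 # interpolation weights
    return lambda x: np.exp(-eps * np.sum((xs - x) ** 2, axis=1)) @ w

# Five runs of a stand-in 'simulation' f(x) = x0^2 + x1, then one prediction.
xs = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
fs = xs[:, 0] ** 2 + xs[:, 1]
print(rbf_surrogate(xs, fs)(np.array([0.4, 0.6])))  # exact value is 0.76
```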

  4. Two Undergraduate Process Modeling Courses Taught Using Inductive Learning Methods

    Science.gov (United States)

    Soroush, Masoud; Weinberger, Charles B.

    2010-01-01

    This manuscript presents a successful application of inductive learning in process modeling. It describes two process modeling courses that use inductive learning methods such as inquiry learning and problem-based learning, among others. The courses include a novel collection of multi-disciplinary complementary process modeling examples. They were…

  5. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  7. Method of moments estimation of GO-GARCH models

    NARCIS (Netherlands)

    Boswijk, H.P.; van der Weide, R.

    2009-01-01

    We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to
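
    To illustrate the flavor of such moment-based estimation, here is a hedged toy sketch (not the authors' exact estimator): it forms a symmetrized sample autocorrelation matrix of squares and cross-products of a pre-whitened series and takes its eigenvectors as candidate loading directions. The data and lag choice below are invented for the example.

    ```python
    import numpy as np

    def loading_eigenvectors(y, lag=1):
        """Eigenvectors of a symmetrized sample autocorrelation matrix of
        squares and cross-products of y (T x n). Toy sketch of the idea only,
        not the exact estimator of the paper."""
        T, n = y.shape
        sigma = y.T @ y / T                           # sample covariance
        s = np.einsum("ti,tj->tij", y, y) - sigma     # centered outer products s_t
        gamma = sum(s[t] @ s[t - lag] for t in range(lag, T)) / (T - lag)
        phi = (gamma + gamma.T) / 2                   # symmetrize before eigendecomposition
        _, vecs = np.linalg.eigh(phi)
        return vecs                                   # candidate loading directions

    rng = np.random.default_rng(0)
    y = rng.standard_normal((1000, 3))                # placeholder for de-garched returns
    print(loading_eigenvectors(y).shape)              # (3, 3)
    ```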

  8. Design of nuclear power generation plants adopting model engineering method

    International Nuclear Information System (INIS)

    Waki, Masato

    1983-01-01

    The use of model engineering as a design method began about ten years ago in nuclear power generation plants. With this method, the result of a design can be confirmed three-dimensionally before actual production, and it is a quick and sure way to meet various design needs promptly. The adoption of models aims mainly at improving the quality of design, since high safety is required of nuclear power plants in spite of their complex structure. The layout of nuclear power plants and piping design require model engineering in order to arrange an enormous number of components rationally within a limited period. As methods of model engineering, there are the use of check models and of design models; recently, the latter method has mainly been adopted. The procedure of manufacturing models and engineering with them is explained. After model engineering has been completed, the model information must be expressed in drawings, and the automation of this process has been attempted by various methods. The computer processing of design is in progress, and its role is explained (CAD system). (Kako, I.)

  9. Method of modeling the cognitive radio using Opnet Modeler

    OpenAIRE

    Yakovenko, I. V.; Poshtarenko, V. M.; Kostenko, R. V.

    2012-01-01

    This article is a review of the first wireless standard based on cognitive radio networks. It discusses the need for wireless networks based on cognitive radio technology. As an example, the use of the IEEE 802.22 standard in a WiMAX network was implemented in the Opnet Modeler simulation environment. Plots verify the performance of the HTTP and FTP protocols in the CR network. The simulation results justify the use of the IEEE 802.22 standard in wireless networks.

  10. Structural Analysis of Papain-Like NlpC/P60 Superfamily Enzymes with a Circularly Permuted Topology Reveals Potential Lipid Binding Sites

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Qingping; Rawlings, Neil D.; Chiu, Hsiu-Ju; Jaroszewski, Lukasz; Klock, Heath E.; Knuth, Mark W.; Miller, Mitchell D.; Elsliger, Marc-Andre; Deacon, Ashley M.; Godzik, Adam; Lesley, Scott A.; Wilson, Ian A. (SG); (Wellcome)

    2012-07-11

    NlpC/P60 superfamily papain-like enzymes play important roles in all kingdoms of life. Two members of this superfamily, the LRAT-like and YaeF/YiiX-like families, were predicted to contain a catalytic domain that is circularly permuted such that the catalytic cysteine is located near the C-terminus, instead of at the N-terminus. These permuted enzymes are widespread in viruses, pathogenic bacteria, and eukaryotes. We determined the crystal structure of a member of the YaeF/YiiX-like family from Bacillus cereus in complex with lysine. The structure, which adopts a ligand-induced, 'closed' conformation, confirms the circular permutation of catalytic residues. A comparative analysis of other related protein structures within the NlpC/P60 superfamily is presented. Permuted NlpC/P60 enzymes contain a similar conserved core and arrangement of catalytic residues, including a Cys/His-containing triad and an additional conserved tyrosine. More surprisingly, permuted enzymes have a hydrophobic S1 binding pocket that is distinct from previously characterized enzymes in the family, indicative of novel substrate specificity. Further analysis of a structural homolog, YiiX (PDB 2if6), identified a fatty acid in the conserved hydrophobic pocket, thus providing additional insights into the possible function of these novel enzymes.

  11. Non-monotonic modelling from initial requirements: a proposal and comparison with monotonic modelling methods

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wupper, H.; Wieringa, Roelf J.

    2008-01-01

    Researchers make a significant effort to develop new modelling languages and tools. However, they spend less effort developing methods for constructing models using these languages and tools. We are developing a method for building an embedded system model for formal verification. Our method

  12. Domain decomposition methods in FVM approach to gravity field modelling.

    Science.gov (United States)

    Macák, Marek

    2017-04-01

    The finite volume method (FVM) can be straightforwardly implemented for global or local gravity field modelling. This discretization method solves the geodetic boundary value problems in a space domain. In order to obtain precise numerical solutions, it usually requires a very refined discretization, leading to large-scale parallel computations. To optimize such computations, we present a special class of numerical techniques that are based on a physical decomposition of the global solution domain. Domain decomposition (DD) methods such as the multiplicative and additive Schwarz methods are very efficient for solving partial differential equations. We briefly present their mathematical formulations and test their efficiency. The numerical experiments presented deal with gravity field modelling. Since there is no need to solve special interface problems between neighbouring subdomains, in our applications we use the overlapping DD methods.
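
    As an illustration of the overlapping additive Schwarz idea described here, the following is a minimal sketch (not the authors' gravity-field solver) for the 1D Poisson problem -u'' = 1 on (0, 1) with two overlapping subdomains; grid size, overlap and damping factor are arbitrary choices.

    ```python
    import numpy as np

    # Toy overlapping additive Schwarz iteration for -u'' = 1 on (0, 1),
    # u(0) = u(1) = 0, discretized with the standard 3-point stencil.
    n = 41
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)                    # interior grid points
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    f = np.ones(n)

    i1 = slice(0, 26)                               # subdomain 1 (points 0..25)
    i2 = slice(15, n)                               # subdomain 2 (overlap on 15..25)
    u = np.zeros(n)
    for _ in range(50):
        r = f - A @ u                               # global residual
        du = np.zeros(n)
        du[i1] += np.linalg.solve(A[i1, i1], r[i1]) # local solve on subdomain 1
        du[i2] += np.linalg.solve(A[i2, i2], r[i2]) # local solve on subdomain 2
        u += 0.5 * du                               # damped additive update
    print(np.abs(u - 0.5 * x * (1 - x)).max())      # error vs. exact u = x(1-x)/2
    ```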

  13. A RECREATION OPTIMIZATION MODEL BASED ON THE TRAVEL COST METHOD

    OpenAIRE

    Hof, John G.; Loomis, John B.

    1983-01-01

    A recreation allocation model is developed which efficiently selects recreation areas and degree of development from an array of proposed and existing sites. The model does this by maximizing the difference between gross recreation benefits and travel, investment, management, and site-opportunity costs. The model presented uses the Travel Cost Method for estimating recreation benefits within an operations research framework. The model is applied to selection of potential wilderness areas in C...

  14. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We show a comparison of four different numerical methods for simulating Photonic-Crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods, and outline the limits of applicability of the different models...

  15. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. With informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as a lack of reliability in system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes in business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. The experiment has shown that the development effort is reduced by more than 30%.

  16. Extension of local front reconstruction method with controlled coalescence model

    Science.gov (United States)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulating such flows with standard fixed grid methods, due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by the (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, the LFRM predicts the droplet collisions better than other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method), especially at high velocity. When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.

  17. Prospective Mathematics Teachers' Opinions about Mathematical Modeling Method and Applicability of This Method

    Science.gov (United States)

    Akgün, Levent

    2015-01-01

    The aim of this study is to identify prospective secondary mathematics teachers' opinions about the mathematical modeling method and the applicability of this method in high schools. The case study design, which is among the qualitative research methods, was used in the study. The study was conducted with six prospective secondary mathematics…

  18. cpRAS: a novel circularly permuted RAS-like GTPase domain with a highly scattered phylogenetic distribution

    Directory of Open Access Journals (Sweden)

    Novotny Marian

    2008-05-01

    Full Text Available Abstract A recent systematic survey suggested that the YRG (or YawG/YlqF) family, with the G4-G5-G1-G2-G3 order of the conserved GTPase motifs, represents the only possible circularly permuted variation of the canonical GTPase structure. Here we show that a different circularly permuted GTPase domain actually does exist, conforming to the pattern G3-G4-G5-G1-G2. The domain, dubbed cpRAS, is a variant of RAS family GTPases and occurs in two types of larger proteins: either inserted into a region homologous to a bacterial group of proteins classified as COG2373 and potentially related to the alpha-2-macroglobulin family (so far a single protein in Dictyostelium), or in combination with a von Willebrand factor type A (VWA) domain. For the latter protein type, which was found in a few metazoans and several distantly related protists, existence in the common ancestor of opisthokonts, Amoebozoa and excavates followed by at least eight independent losses may be inferred. Our findings thus bring further evidence for the importance of parallel reduction of ancestral complexity in eukaryotic evolution. Reviewers: This article was reviewed by Lakshminarayan Iyer and Fyodor Kondrashov. For the full reviews, please go to the Reviewers' comments section.

  19. Stencil method: a Markov model for transport in porous media

    Science.gov (United States)

    Delgoshaie, A. H.; Tchelepi, H.; Jenny, P.

    2016-12-01

    In porous media, the transport of fluid is dominated by flow-field heterogeneity resulting from the underlying transmissibility field. Since the transmissibility is highly uncertain, many realizations of a geological model are used to describe the statistics of the transport phenomena in a Monte Carlo framework. One possible way to avoid the high computational cost of physics-based Monte Carlo simulations is to model the velocity field as a Markov process and use Markov chain Monte Carlo. In previous works, multiple Markov models for discrete velocity processes have been proposed. These models can be divided into two general classes: Markov models in time and Markov models in space. Both of these choices have been shown to be effective to some extent. However, some studies have suggested that the Markov property cannot be confirmed for a temporal Markov process; therefore, there is no consensus about the validity and value of Markov models in time. Moreover, previous spatial Markov models have only been used for modeling transport on structured networks and cannot be readily applied to model transport in unstructured networks. In this work we propose a novel approach for constructing a Markov model in time (the stencil method) for a discrete velocity process. The results from the stencil method are compared to previously proposed spatial Markov models for structured networks. The stencil method is also applied to unstructured networks and can successfully describe the dispersion of particles in this setting. Our conclusion is that both temporal and spatial Markov models for discrete velocity processes can be valid for a range of model parameters. Moreover, we show that the stencil model can be more efficient in many practical settings and is suited to model dispersion both on structured and unstructured networks.
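
    To make the underlying idea concrete, here is a hedged toy sketch (not the authors' stencil method) of a discrete-velocity Markov model in time: each particle's velocity jumps between a few states according to a transition matrix, and the ensemble yields dispersion statistics. All states and probabilities below are invented for illustration.

    ```python
    import numpy as np

    # Particles advect with a velocity that jumps between discrete states
    # according to a Markov transition matrix; ensemble statistics give the
    # dispersion.
    rng = np.random.default_rng(1)
    v_states = np.array([0.5, 1.0, 2.0])            # velocity classes
    P = np.array([[0.80, 0.15, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.05, 0.15, 0.80]])              # row-stochastic transitions
    n_particles, n_steps, dt = 5000, 200, 1.0
    state = rng.integers(0, 3, n_particles)
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += v_states[state] * dt                   # advect with current velocity
        u = rng.random(n_particles)                 # inverse-CDF sampling of next state
        state = (u[:, None] > np.cumsum(P[state], axis=1)).sum(axis=1)
    print(x.mean(), x.std())                        # mean displacement and spread
    ```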

  20. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  1. A verification system survival probability assessment model test methods

    International Nuclear Information System (INIS)

    Jia Rui; Wu Qiang; Fu Jiwei; Cao Leituan; Zhang Junnan

    2014-01-01

    Owing to limitations of funding and test conditions, large complex systems are often tested with only a few sub-samples. Under such single-sample conditions, it is important for the reinforcement of complex systems to evaluate performance accurately. The technical maturity of an assessment model can be improved significantly if the model can be validated experimentally. This paper presents a test method for verifying a system survival probability assessment model: using the test results of system samples, the method verifies the correctness of the assessment model and of the a priori information. (authors)

  2. Models and estimation methods for clinical HIV-1 data

    Science.gov (United States)

    Verotta, Davide

    2005-12-01

    Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability with respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results, (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum posterior approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods, or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple drugs regimens.
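
    As a concrete illustration of the two-stage idea in (i), the following hedged sketch fits an assumed single-exponential viral-decay model patient by patient and then summarizes the individual estimates; the model, data and parameter values are invented and much simpler than the paper's HIV-1 dynamics models.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, v0, delta):
        """Assumed single-exponential viral decay after start of therapy."""
        return v0 * np.exp(-delta * t)

    rng = np.random.default_rng(2)
    t = np.linspace(0, 14, 8)                       # sampling days (toy design)
    deltas = []
    for _ in range(20):                             # stage 1: fit each patient separately
        v0, delta = rng.lognormal(11, 0.3), rng.lognormal(-1, 0.2)
        y = decay(t, v0, delta) * rng.lognormal(0, 0.1, t.size)
        (v0_hat, delta_hat), _ = curve_fit(decay, t, y, p0=(1e5, 0.5))
        deltas.append(delta_hat)
    print(np.mean(deltas), np.std(deltas))          # stage 2: summary statistics
    ```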

  3. Compositions and methods for modeling Saccharomyces cerevisiae metabolism

    DEFF Research Database (Denmark)

    2012-01-01

    The invention provides an in silico model for determining a S. cerevisiae physiological function. The model includes a data structure relating a plurality of S. cerevisiae reactants to a plurality of S. cerevisiae reactions, a constraint set for the plurality of S. cerevisiae reactions......, and commands for determining a distribution of flux through the reactions that is predictive of a S. cerevisiae physiological function. A model of the invention can further include a gene database containing information characterizing the associated gene or genes. The invention further provides methods...... for making an in silico S. cerevisiae model and methods for determining a S. cerevisiae physiological function using a model of the invention. The invention provides an in silico model for determining a S. cerevisiae physiological function. The model includes a data structure relating a plurality of S...
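
    The flux distribution described here is, in general, computed with flux balance analysis: a linear program that maximizes an objective reaction subject to the steady-state mass balance S v = 0 and flux bounds. Below is a minimal sketch on an invented three-reaction toy network, not the patented S. cerevisiae reconstruction.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: v0 makes metabolite A; v1 converts A to B; v2 consumes A and B.
    S = np.array([[1, -1, -1],                      # mass balance for A
                  [0,  1, -1]])                     # mass balance for B
    c = np.array([0.0, 0.0, -1.0])                  # maximize v2 (linprog minimizes)
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=[(0, 10)] * 3)
    print(res.x)                                    # optimal flux distribution: [10, 5, 5]
    ```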

  4. Models and methods for hot spot safety work

    DEFF Research Database (Denmark)

    Vistisen, Dorte

    2002-01-01

    and statistical methods less developed. The purpose of this thesis is to contribute to improving the state of the art in Denmark. The basis for systematic hot spot safety work is the models describing the variation in accident counts on the road network. In the thesis, hierarchical models disaggregated on time...... is the task of improving road safety through alterations of the geometrical and environmental characteristics of the existing road network. The presently applied models and methods in hot spot safety work on the Danish road network were developed about two decades ago, when data was more limited and software...... are derived. The proposed models are shown to describe variation in accident counts better than the models currently in use in Denmark. The parameters of the models are estimated for the national and regional road network using data from the Road Sector Information system, VIS. No specific accident models...

  5. Mean photon number dependent variational method to the Rabi model

    International Nuclear Information System (INIS)

    Liu, Maoxin; Ying, Zu-Jian; Luo, Hong-Gang; An, Jun-Hong

    2015-01-01

    We present a mean photon number dependent variational method, which works well in the whole coupling regime if the photon energy is dominant over the spin-flipping, to evaluate the properties of the Rabi model for both the ground state and excited states. For the ground state, it is shown that the previous approximate methods, the generalized rotating-wave approximation (only working well in the strong coupling limit) and the generalized variational method (only working well in the weak coupling limit), can be recovered in the corresponding coupling limits. The key point of our method is to tailor the merits of these two existing methods by introducing a mean photon number dependent variational parameter. For the excited states, our method yields considerable improvements over the generalized rotating-wave approximation. The variational method proposed could be readily applied to more complex models, for which it is difficult to formulate an analytic formula. (paper)

  6. Physical Model Method for Seismic Study of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Bogdan Roşca

    2008-01-01

    Full Text Available The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. First, a study model must be designed through a physical modeling process using dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After the construction and instrumentation of the scaled physical model, a structural analysis based on experimental means is performed, and the experimental results are gathered and made available for analysis. Depending on the aim of the research, either an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to fulfil than those for a failure model, but the results obtained provide limited information. In order to study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of physical models of large or medium dimensions, as well as their instrumentation, offers great advantages, but this operation involves a large amount of financial, logistic and time resources.

  7. PerMallows: An R Package for Mallows and Generalized Mallows Models

    Directory of Open Access Journals (Sweden)

    Ekhine Irurozki

    2016-08-01

    Full Text Available In this paper we present the R package PerMallows, a complete toolbox for working with permutations, distances and some of the most popular probability models for permutations: the Mallows and Generalized Mallows models. The Mallows model is an exponential location model, considered analogous to the Gaussian distribution. It is based on the definition of a distance between permutations. The Generalized Mallows model is its best-known extension. The package includes functions for inference, sampling and learning of such distributions. The distances considered in PerMallows are Kendall's τ, Cayley, Hamming and Ulam.
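
    Since PerMallows itself is an R package, the following Python sketch only illustrates the underlying model: the Mallows distribution assigns each permutation a probability proportional to exp(-theta * d(pi, sigma0)), here with Kendall's τ distance and exact enumeration for small n.

    ```python
    import itertools
    import numpy as np

    def kendall_tau_distance(pi, sigma):
        """Number of discordant pairs between permutations pi and sigma."""
        pos = {v: i for i, v in enumerate(sigma)}
        s = [pos[v] for v in pi]                    # pi in sigma's coordinates
        n = len(s)
        return sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])

    def mallows_pmf(theta, sigma0):
        """Exact Mallows pmf by enumeration (small n only)."""
        perms = list(itertools.permutations(sigma0))
        w = np.array([np.exp(-theta * kendall_tau_distance(p, sigma0)) for p in perms])
        return perms, w / w.sum()

    perms, p = mallows_pmf(theta=1.0, sigma0=(0, 1, 2, 3))
    print(perms[np.argmax(p)])                      # the mode is the centre sigma0
    ```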

  8. Extending product modeling methods for integrated product development

    DEFF Research Database (Denmark)

    Bonev, Martin; Wörösch, Michael; Hauksdóttir, Dagný

    2013-01-01

    Despite great efforts within the modeling domain, the majority of methods often address the uncommon design situation of an original product development. However, studies illustrate that development tasks are predominantly related to redesigning, improving, and extending already existing products...

  9. Gas/Aerosol partitioning: a simplified method for global modeling

    NARCIS (Netherlands)

    Metzger, S.M.

    2000-01-01

    The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures,

  10. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream
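
    The benefit of hierarchical pooling can be illustrated with the simplest normal-normal case: regional estimates are shrunk toward a common mean in proportion to their uncertainty. The sketch below is a toy illustration of this principle, not the SPARROW model or its estimator; all numbers, including the assumed hyperparameters mu and tau, are invented.

    ```python
    import numpy as np

    # Normal-normal partial pooling: noisy regional estimates y_r with known
    # standard errors se_r are shrunk toward the overall mean mu with weights
    # tau^2 / (tau^2 + se_r^2).
    rng = np.random.default_rng(3)
    theta_true = rng.normal(1.0, 0.3, 18)           # 18 "regional" coefficients
    se = rng.uniform(0.1, 0.6, 18)                  # per-region standard errors
    y = rng.normal(theta_true, se)                  # raw (unpooled) estimates

    mu, tau = y.mean(), 0.3                         # assumed hyperparameters
    w = tau**2 / (tau**2 + se**2)                   # shrinkage weights
    theta_post = w * y + (1 - w) * mu               # partially pooled estimates
    # pooled estimates are typically closer to the truth than raw ones:
    print(np.abs(theta_post - theta_true).mean(), np.abs(y - theta_true).mean())
    ```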

  11. Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science

    Science.gov (United States)

    Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)

    2001-01-01

    Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are

  12. Turbulence modeling methods for the compressible Navier-Stokes equations

    Science.gov (United States)

    Coakley, T. J.

    1983-01-01

    Turbulence modeling methods for the compressible Navier-Stokes equations, including several zero- and two-equation eddy-viscosity models, are described and applied. Advantages and disadvantages of the models are discussed with respect to mathematical simplicity, conformity with physical theory, and numerical compatibility with methods. A new two-equation model is introduced which shows advantages over other two-equation models with regard to numerical compatibility and the ability to predict low-Reynolds-number transitional phenomena. Calculations of various transonic airfoil flows are compared with experimental results. A new implicit upwind-differencing method is used which enhances numerical stability and accuracy, and leads to rapidly convergent steady-state solutions.

  13. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic–pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusion) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
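
    For the simplest linear PK building block, one-compartment elimination dC/dt = -kC, a nonstandard scheme replaces the step size h in the denominator with a function phi(h); choosing phi(h) = (1 - e^(-kh))/k makes the discrete update exact. The sketch below contrasts it with standard forward Euler; parameters are illustrative, and the paper's actual PK-PD system is more elaborate.

    ```python
    import numpy as np

    k, h, n = 0.8, 0.5, 20                          # elimination rate, step, steps
    phi = (1 - np.exp(-k * h)) / k                  # nonstandard denominator function
    C_fd = np.empty(n + 1); C_nsfd = np.empty(n + 1)
    C_fd[0] = C_nsfd[0] = 100.0                     # initial concentration
    for i in range(n):
        C_fd[i + 1] = C_fd[i] - k * h * C_fd[i]          # standard forward Euler
        C_nsfd[i + 1] = C_nsfd[i] - k * phi * C_nsfd[i]  # nonstandard scheme
    t = h * np.arange(n + 1)
    print(np.abs(C_nsfd - 100 * np.exp(-k * t)).max())   # exact to machine precision
    print(np.abs(C_fd - 100 * np.exp(-k * t)).max())     # visible discretization error
    ```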

  14. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    Full Text Available In this paper, the authors describe selection methods for moments and the application of the generalized method of moments (GMM) to heteroskedastic models. The utility of GMM estimators is found in the study of financial market models. The selection criteria for moments are applied for efficient GMM estimation of univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.
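
    As a reminder of the GMM mechanics (a toy sketch, not the paper's heteroskedastic time-series setting): stack sample moment conditions g(θ) and minimize the quadratic form g(θ)' W g(θ). Below, an exponential rate is estimated from two moment conditions with the identity weighting matrix; all data are simulated.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    x = rng.exponential(scale=1 / 2.5, size=5000)   # true rate lam = 2.5

    def gbar(lam):
        # two moment conditions: E[x] = 1/lam and E[x^2] = 2/lam^2
        return np.array([x.mean() - 1 / lam, (x**2).mean() - 2 / lam**2])

    def objective(lam):
        g = gbar(lam)
        return g @ g                                # g' W g with identity weighting

    res = minimize_scalar(objective, bounds=(0.1, 10), method="bounded")
    print(res.x)                                    # close to 2.5
    ```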

  15. PRESTO: Rapid calculation of order statistic distributions and multiple-testing adjusted P-values via permutation for one and two-stage genetic association studies

    Directory of Open Access Journals (Sweden)

    Browning Brian L

    2008-07-01

    Full Text Available Abstract Background Large-scale genetic association studies can test hundreds of thousands of genetic markers for association with a trait. Since the genetic markers may be correlated, a Bonferroni correction is typically too stringent a correction for multiple testing. Permutation testing is a standard statistical technique for determining statistical significance when performing multiple correlated tests for genetic association. However, permutation testing for large-scale genetic association studies is computationally demanding and calls for optimized algorithms and software. PRESTO is a new software package for genetic association studies that performs fast computation of multiple-testing adjusted P-values via permutation of the trait. Results PRESTO is an order of magnitude faster than other existing permutation testing software, and can analyze a large genome-wide association study (500 K markers, 5 K individuals, 1 K permutations) in approximately one hour of computing time. PRESTO has several unique features that are useful in a wide range of studies: it reports empirical null distributions for the top-ranked statistics (i.e. order statistics), it performs user-specified combinations of allelic and genotypic tests, it performs stratified analysis when sampled individuals are from multiple populations and each individual's population of origin is specified, and it determines significance levels for one- and two-stage genotyping designs. PRESTO is designed for case-control studies, but can also be applied to trio data (parents and affected offspring) if transmitted parental alleles are coded as case alleles and untransmitted parental alleles are coded as control alleles. Conclusion PRESTO is a platform-independent software package that performs fast and flexible permutation testing for genetic association studies. The PRESTO executable file, Java source code, example data, and documentation are freely available at http://www.stat.auckland.ac.nz/~browning/presto/presto.html.
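
    The core permutation procedure is easy to sketch. The following hedged toy example (not PRESTO's optimized algorithm, and with a deliberately simplified per-marker statistic) computes family-wise adjusted P-values with the max-statistic method over permutations of the case/control labels.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, m, n_perm = 200, 1000, 1000                  # individuals, markers, permutations
    geno = rng.integers(0, 3, (n, m))               # genotypes coded 0/1/2
    pheno = rng.integers(0, 2, n)                   # case/control labels
    g = (geno - geno.mean(0)) / (geno.std(0) + 1e-12)   # standardized genotypes

    def stat(y):
        # simplified per-marker allelic statistic: |case mean - control mean|
        return np.abs(g[y == 1].mean(0) - g[y == 0].mean(0))

    obs = stat(pheno)
    max_null = np.array([stat(rng.permutation(pheno)).max() for _ in range(n_perm)])
    # family-wise adjusted P-value: how often the null maximum beats each marker
    adj_p = (1 + (max_null[:, None] >= obs[None, :]).sum(0)) / (1 + n_perm)
    print(adj_p.min())
    ```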

  16. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can inversely recover the intrinsic state associated with a superficial state observed from a power plant. The proposed diagnosis method comprises three processes: 1) simulations of degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method computes, from the given superficial states, the corresponding component degradation modes via the inverse of the linear model.
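
    Under a linear model, the what-if / inverse what-if pair reduces to fitting a matrix from simulated (intrinsic, superficial) pairs and then solving a least-squares inverse problem for a plant observation. The sketch below illustrates this with invented dimensions and data; it is not the paper's turbine-cycle model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    A_true = rng.normal(size=(5, 3))                # 5 measurements, 3 degradation modes
    X = rng.normal(size=(100, 3))                   # 1) simulated degradation conditions
    Y = X @ A_true.T + 0.01 * rng.normal(size=(100, 5))  # ... and their simulated outputs

    A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T  # 2) fit the linear model Y = X A'
    x_hidden = np.array([0.2, -0.5, 1.0])           # unknown plant degradation state
    y_plant = A_true @ x_hidden                     # observed superficial states
    x_hat = np.linalg.lstsq(A_hat, y_plant, rcond=None)[0]  # 3) inverse what-if
    print(x_hat)                                    # ~ [0.2, -0.5, 1.0]
    ```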

  17. 3D Face modeling using the multi-deformable method.

    Science.gov (United States)

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning that is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: accuracy test and robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D face for individuals at the end of the paper.

  18. Characterisation of the Effects of Sleep Deprivation on the Electroencephalogram Using Permutation Lempel–Ziv Complexity, a Non-Linear Analysis Tool

    Directory of Open Access Journals (Sweden)

    Pinar Deniz Tosun

    2017-12-01

    Full Text Available Specific patterns of brain activity during sleep and waking are recorded in the electroencephalogram (EEG). Time-frequency analysis methods have been widely used to analyse the EEG and have identified characteristic oscillations for each vigilance state (VS), i.e., wakefulness, rapid-eye movement (REM) and non-rapid-eye movement (NREM) sleep. However, other aspects, such as changes of patterns associated with brain dynamics, may not be captured unless a non-linear analysis method is used. In this pilot study, Permutation Lempel–Ziv complexity (PLZC), a novel symbolic dynamics analysis method, was used to characterise the changes in the EEG in sleep and wakefulness during baseline and recovery from sleep deprivation (SD). The results obtained with PLZC were contrasted with a related non-linear method, Lempel–Ziv complexity (LZC). Both measure the emergence of new patterns. However, LZC is dependent on the absolute amplitude of the EEG, while PLZC is only dependent on the relative amplitude due to the symbolisation procedure and is thus more resistant to noise. We showed that PLZC discriminates activated brain states associated with wakefulness and REM sleep, which both displayed higher complexity, compared to NREM sleep. Additionally, significantly lower PLZC values were measured in NREM sleep during the recovery period following SD compared to baseline, suggesting a reduced emergence of new activity patterns in the EEG. These findings were validated using PLZC on surrogate data. By contrast, LZC was merely reflecting changes in the spectral composition of the EEG. Overall, this study implies that PLZC is a robust non-linear complexity measure, which is not dependent on amplitude variations in the signal, and which may be useful to further assess EEG alterations induced by environmental or pharmacological manipulations.
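
    A minimal sketch of the PLZC computation chain, assuming an ordinal-pattern order and a simple dictionary-style phrase parser of our own choosing (the paper's exact implementation may differ): symbolize the signal by ordinal patterns, then count the number of distinct phrases.

    ```python
    import numpy as np
    from itertools import permutations

    def ordinal_symbols(x, d=3):
        """Map a signal to a sequence of ordinal-pattern symbols of order d."""
        pats = {p: i for i, p in enumerate(permutations(range(d)))}
        return [pats[tuple(np.argsort(x[i:i + d]))] for i in range(len(x) - d + 1)]

    def phrase_count(s):
        """Count new phrases with a simple dictionary parser (LZ-style sketch)."""
        phrases, i = set(), 0
        while i < len(s):
            j = i + 1
            while tuple(s[i:j]) in phrases and j <= len(s):
                j += 1
            phrases.add(tuple(s[i:j]))
            i = j
        return len(phrases)

    rng = np.random.default_rng(7)
    noise = rng.standard_normal(2000)               # irregular signal
    tone = np.sin(0.2 * np.arange(2000))            # regular signal
    print(phrase_count(ordinal_symbols(noise)),     # higher complexity
          phrase_count(ordinal_symbols(tone)))      # lower complexity
    ```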

  19. Approximating methods for intractable probabilistic models: Applications in neuroscience

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro

    2002-01-01

    This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...

  20. Automated Model Fit Method for Diesel Engine Control Development

    NARCIS (Netherlands)

    Seykens, X.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.

    2014-01-01

    This paper presents an automated fit for a control-oriented physics-based diesel engine combustion model. This method is based on the combination of a dedicated measurement procedure and structured approach to fit the required combustion model parameters. Only a data set is required that is

  1. Attitude Research in Science Education: Contemporary Models and Methods.

    Science.gov (United States)

    Crawley, Frank E.; Kobala, Thomas R., Jr.

    1994-01-01

    Presents a summary of models and methods of attitude research which are embedded in the theoretical tenets of social psychology and in the broader framework of constructivism. Focuses on the construction of social reality rather than the construction of physical reality. Models include theory of reasoned action, theory of planned behavior, and…

  2. Introduction to Discrete Element Methods: Basics of Contact Force Models

    NARCIS (Netherlands)

    Luding, Stefan

    2008-01-01

    One challenge of today's research is the realistic simulation of granular materials, like sand or powders, consisting of millions of particles. In this article, the discrete element method (DEM), as based on molecular dynamics methods, is introduced. Contact models are at the physical basis of DEM.
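
    The simplest instance of such a contact model is a linear spring-dashpot normal force, f = k*delta + c*(relative approach speed), active while two spheres overlap by delta. The sketch below integrates a single binary collision with explicit time stepping; all parameter values are invented for illustration, not taken from the article.

    ```python
    import numpy as np

    k, c = 1e4, 2.0                                 # spring stiffness, damping
    m, r = 0.01, 0.005                              # particle mass [kg], radius [m]
    x1, x2 = 0.0, 0.0105                            # centre positions [m]
    v1, v2 = 0.5, -0.5                              # velocities [m/s]
    dt = 1e-6
    for _ in range(4000):
        delta = 2 * r - (x2 - x1)                   # overlap between the spheres
        f = k * delta + c * (v1 - v2) if delta > 0 else 0.0  # normal contact force
        v1 -= f / m * dt                            # Newton's second law
        v2 += f / m * dt
        x1 += v1 * dt
        x2 += v2 * dt
    print(v1, v2)                                   # rebound speeds reduced by damping
    ```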

  3. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  4. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  5. Vortex Tube Modeling Using the System Identification Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jaeyoung; Jeong, Jiwoong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Im, Seokyeon [Tongmyong Univ., Busan (Korea, Republic of)

    2017-05-15

    In this study, a vortex tube system model is developed to predict the temperature of the hot and the cold sides. The vortex tube model is developed based on the system identification method; the model utilized in this work to design the vortex tube is of the ARX type (Auto-Regressive with eXtra inputs). The derived polynomial model is validated against experimental data to verify the overall model accuracy. It is also shown that the derived model passes the stability test. It is confirmed that the derived model closely mimics the physical behavior of the vortex tube in both static and dynamic numerical experiments obtained by changing the angles of the low-temperature side throttle valve, clearly showing temperature separation. These results imply that system identification based modeling can be a promising approach for the prediction of complex physical systems, including the vortex tube.
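
    The ARX structure itself is simple to reproduce: the output is regressed on lagged outputs and inputs, and the coefficients follow from ordinary least squares. The sketch below identifies a first-order ARX model from synthetic data; the system, orders and signals are invented, not the vortex-tube data of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    T = 500
    u = rng.standard_normal(T)                      # input signal (e.g. valve angle)
    y = np.zeros(T)
    for t in range(1, T):                           # "true" first-order system
        y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.standard_normal()

    Phi = np.column_stack([y[:-1], u[:-1]])         # regressors [y(t-1), u(t-1)]
    theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    print(theta)                                    # ~ [0.9, 0.5]
    ```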

  6. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    OpenAIRE

    Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte

    2015-01-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and momentums acting on different kinds of aircraft during flight. It is characteristic of the PEM that, in principle, it provides an acceptable regression model of aerodynamic forces and momentums which exhibits reasonable and plausible behaviour from a...

  7. A discontinuous Galerkin method on kinetic flocking models

    OpenAIRE

    Tan, Changhui

    2014-01-01

    We study kinetic representations of flocking models. They arise from agent-based models for self-organized dynamics, such as the Cucker-Smale and Motsch-Tadmor models. We prove flocking behavior for the kinetic descriptions of flocking systems, which indicates a concentration in the velocity variable in infinite time. We propose a discontinuous Galerkin method to treat the asymptotic $\delta$-singularity, and construct high-order positivity-preserving schemes to solve kinetic flocking systems.

  8. Projection methods for the numerical solution of Markov chain models

    Science.gov (United States)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^(m-1)v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
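
    As a concrete sketch of this idea (a minimal Arnoldi projection with an arbitrary chain size and subspace dimension, not tuned as a production solver): build an orthonormal basis of the Krylov subspace for A = P^T, then take the Ritz vector whose Ritz value is closest to 1 as the approximate stationary distribution.

    ```python
    import numpy as np

    def arnoldi(A, v, m):
        """Orthonormal basis V of span{v, Av, ..., A^(m-1)v} and Hessenberg H."""
        n = len(v)
        V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):                  # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        return V, H

    rng = np.random.default_rng(9)
    P = rng.random((50, 50)); P /= P.sum(1, keepdims=True)   # random Markov chain
    m = 10
    V, H = arnoldi(P.T, np.ones(50), m)
    vals, vecs = np.linalg.eig(H[:m, :m])
    k = np.argmin(np.abs(vals - 1))                 # Ritz value closest to 1
    pi = np.real(V[:, :m] @ vecs[:, k]); pi /= pi.sum()
    print(np.linalg.norm(pi @ P - pi))              # small stationarity residual
    ```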

  9. A method for model identification and parameter estimation

    International Nuclear Information System (INIS)

    Bambach, M; Heinkenschloss, M; Herty, M

    2013-01-01

    We propose and analyze a new method for the identification of a parameter-dependent model that best describes a given system. This problem arises, for example, in the mathematical modeling of material behavior where several competing constitutive equations are available to describe a given material. In this case, the models are differential equations that arise from the different constitutive equations, and the unknown parameters are coefficients in the constitutive equations. One has to determine the best-suited constitutive equations for a given material and application from experiments. We assume that the true model is one of the N possible parameter-dependent models. To identify the correct model and the corresponding parameters, we can perform experiments, where for each experiment we prescribe an input to the system and observe a part of the system state. Our approach consists of two stages. In the first stage, for each pair of models we determine the experiment, i.e. system input and observation, that best differentiates between the two models, and measure the distance between the two models. Then we conduct N(N − 1) or, depending on the approach taken, N(N − 1)/2 experiments and use the result of the experiments as well as the previously computed model distances to determine the true model. We provide sufficient conditions on the model distances and measurement errors which guarantee that our approach identifies the correct model. Given the model, we identify the corresponding model parameters in the second stage. The problem in the second stage is a standard parameter estimation problem and we use a method suitable for the given application. We illustrate our approach on three examples, including one where the models are elliptic partial differential equations with different parameterized right-hand sides and an example where we identify the constitutive equation in a problem from computational viscoplasticity. (paper)

  10. An image segmentation method based on network clustering model

    Science.gov (United States)

    Jiao, Yang; Wu, Jianshe; Jiao, Licheng

    2018-01-01

    Network clustering phenomena are ubiquitous in nature and human society. In this paper, a method involving a network clustering model is proposed for mass segmentation in mammograms. First, the watershed transform is used to divide an image into regions, and features of the image are computed. Then a graph is constructed from the obtained regions and features. The network clustering model is applied to realize clustering of nodes in the graph. Compared with two classic methods, the algorithm based on the network clustering model performs more effectively in experiments.

  11. Alternative methods to model frictional contact surfaces using NASTRAN

    Science.gov (United States)

    Hoang, Joseph

    1992-01-01

    Elongated (slotted) holes have been used extensively for the integration of equipment into Spacelab racks. In the past, this type of interface has been modeled assuming that there is not slippage between contact surfaces, or that there is no load transfer in the direction of the slot. Since the contact surfaces are bolted together, the contact friction provides a load path determined by the normal applied force (bolt preload) and the coefficient of friction. Three alternate methods that utilize spring elements, externally applied couples, and stress dependent elements are examined to model the contacted surfaces. Results of these methods are compared with results obtained from methods that use GAP elements and rigid elements.

  12. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  13. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    Science.gov (United States)

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  14. Quantitative sociodynamics stochastic methods and models of social interaction processes

    CERN Document Server

    Helbing, Dirk

    1995-01-01

    Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioural changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics but they have very often proved their explanatory power in chemistry, biology, economics and the social sciences. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces the most important concepts from nonlinear dynamics (synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches a very fundamental dynamic model is obtained which seems to open new perspectives in the social sciences. It includes many established models as special cases, e.g. the log...

  15. Coarse Analysis of Microscopic Models using Equation-Free Methods

    DEFF Research Database (Denmark)

    Marschler, Christian

    -dimensional models. The goal of this thesis is to investigate such high-dimensional multiscale models and extract relevant low-dimensional information from them. Recently developed mathematical tools allow one to reach this goal: a combination of so-called equation-free methods with numerical bifurcation analysis...... using short simulation bursts of computationally-expensive complex models. This information is subsequently used to construct bifurcation diagrams that show the parameter dependence of solutions of the system. The methods developed for this thesis have been applied to a wide range of relevant problems.... Applications include the learning behavior in the barn owl's auditory system, traffic jam formation in an optimal velocity model for circular car traffic and oscillating behavior of pedestrian groups in a counter-flow through a corridor with a narrow door. The methods not only quantify interesting properties...

  16. Quantitative Sociodynamics Stochastic Methods and Models of Social Interaction Processes

    CERN Document Server

    Helbing, Dirk

    2010-01-01

    This new edition of Quantitative Sociodynamics presents a general strategy for interdisciplinary model building and its application to a quantitative description of behavioral changes based on social interaction processes. Originally, the crucial methods for the modeling of complex systems (stochastic methods and nonlinear dynamics) were developed in physics and mathematics, but they have very often proven their explanatory power in chemistry, biology, economics and the social sciences as well. Quantitative Sociodynamics provides a unified and comprehensive overview of the different stochastic methods, their interrelations and properties. In addition, it introduces important concepts from nonlinear dynamics (e.g. synergetics, chaos theory). The applicability of these fascinating concepts to social phenomena is carefully discussed. By incorporating decision-theoretical approaches, a fundamental dynamic model is obtained, which opens new perspectives in the social sciences. It includes many established models a...

  17. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools...... need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work...... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted....

  18. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  19. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  20. Dynamic systems models new methods of parameter and state estimation

    CERN Document Server

    2016-01-01

    This monograph is an exposition of a novel method for solving inverse problems, a method of parameter estimation for time series data collected from simulations of real experiments. These time series might be generated by measuring the dynamics of aircraft in flight, by the function of a hidden Markov model used in bioinformatics or speech recognition or when analyzing the dynamics of asset pricing provided by the nonlinear models of financial mathematics. Dynamic Systems Models demonstrates the use of algorithms based on polynomial approximation which have weaker requirements than already-popular iterative methods. Specifically, they do not require a first approximation of a root vector and they allow non-differentiable elements in the vector functions being approximated. The text covers all the points necessary for the understanding and use of polynomial approximation from the mathematical fundamentals, through algorithm development to the application of the method in, for instance, aeroplane flight dynamic...

  1. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
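
    To make the idea concrete, the sketch below implements one plausible model-based estimator of this kind (the QP-curve approach; the curve data, speeds and function names are illustrative assumptions, not values or methods quoted from the paper): the pump's power-versus-flow curve at nominal speed is scaled to the converter-estimated speed with the affinity laws, then inverted to obtain a flow-rate estimate from the estimated shaft power, with no external flow or pressure measurement.

```python
import numpy as np

# Characteristic curve of the pump at nominal speed (illustrative values):
q_nom = np.array([0.0, 10.0, 20.0, 30.0, 40.0])  # flow rate (l/s)
p_nom = np.array([2.0, 3.1, 4.0, 4.6, 5.0])      # shaft power (kW)
n_nom = 1450.0                                   # nominal speed (rpm)

def estimate_flow(n_est, p_est):
    """Estimate pump flow rate from converter-estimated speed and power."""
    ratio = n_est / n_nom
    p_curve = p_nom * ratio ** 3   # affinity laws: P scales with n^3 ...
    q_curve = q_nom * ratio        # ... and Q scales with n
    return np.interp(p_est, p_curve, q_curve)

# Motor operational-state estimates from the frequency converter (illustrative):
print(f"estimated flow: {estimate_flow(1300.0, 3.0):.1f} l/s")
```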

  2. State recognition of the viscoelastic sandwich structure based on the adaptive redundant second generation wavelet packet transform, permutation entropy and the wavelet support vector machine

    International Nuclear Information System (INIS)

    Qu, Jinxiu; Zhang, Zhousuo; Guo, Ting; Luo, Xue; Sun, Chuang; Li, Bing; Wen, Jinpeng

    2014-01-01

    The viscoelastic sandwich structure is widely used in mechanical equipment, yet the structure always suffers from damage during long-term service. Therefore, state recognition of the viscoelastic sandwich structure is very necessary for monitoring structural health states and keeping the equipment running with high reliability. Through the analysis of vibration response signals, this paper presents a novel method for this task based on the adaptive redundant second generation wavelet packet transform (ARSGWPT), permutation entropy (PE) and the wavelet support vector machine (WSVM). In order to tackle the non-linearity existing in the structure vibration response, the PE is introduced to reveal the state changes of the structure. In the case of complex non-stationary vibration response signals, in order to obtain more effective information regarding the structural health states, the ARSGWPT, which can adaptively match the characteristics of a given signal, is proposed to process the vibration response signals, and then multiple PE features are extracted from the resultant wavelet packet coefficients. The WSVM, which can benefit from the conventional SVM as well as wavelet theory, is applied to classify the various structural states automatically. In this study, to achieve accurate and automated state recognition, the ARSGWPT, PE and WSVM are combined for signal processing, feature extraction and state classification, respectively. To demonstrate the effectiveness of the proposed method, a typical viscoelastic sandwich structure is designed, and the different degrees of preload on the structure are used to characterize the various looseness states. The test results show that the proposed method can reliably recognize the different looseness states of the viscoelastic sandwich structure, and the WSVM can achieve a better classification performance than the conventional SVM. Moreover, the superiority of the proposed ARSGWPT in processing the complex vibration response
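
    As a concrete illustration of the permutation entropy feature used in this record (the ARSGWPT and WSVM stages are not reproduced here), the following Python sketch computes the normalized permutation entropy of a signal by counting the ordinal patterns of its delay-embedded windows; all parameter values are illustrative.

```python
import math

import numpy as np

def permutation_entropy(signal, order=3, delay=1, normalize=True):
    """Normalized permutation entropy of a 1-D signal (Bandt-Pompe scheme)."""
    x = np.asarray(signal, dtype=float)
    n_patterns = len(x) - (order - 1) * delay
    if n_patterns < 1:
        raise ValueError("signal too short for the chosen order/delay")
    counts = {}
    for i in range(n_patterns):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))    # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n_patterns
    h = -np.sum(p * np.log2(p))                # Shannon entropy of the patterns
    if normalize:
        h /= math.log2(math.factorial(order))  # scale into [0, 1]
    return h

rng = np.random.default_rng(0)
print("PE(noise) =", round(permutation_entropy(rng.standard_normal(2000)), 3))
print("PE(ramp)  =", round(permutation_entropy(np.linspace(0, 1, 2000)), 3))
```

    An irregular vibration response yields an entropy near 1, while a regular, monotone signal yields an entropy near 0, which is why the feature tracks state changes such as increasing looseness.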

  3. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  4. Modeling Music Emotion Judgments Using Machine Learning Methods.

    Science.gov (United States)

    Vempala, Naresh N; Russo, Frank A

    2017-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.

  5. Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models

    Science.gov (United States)

    Marquette, Michele L.; Sognier, Marguerite A.

    2013-01-01

    An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.

  6. Modeling of piezoelectric devices with the finite volume method.

    Science.gov (United States)

    Bolborici, Valentin; Dawson, Francis; Pugh, Mary

    2010-07-01

    A partial differential equation (PDE) model for the dynamics of a thin piezoelectric plate in an electric field is presented. This PDE model is discretized via the finite volume method (FVM), resulting in a system of coupled ordinary differential equations. A static analysis and an eigenfrequency analysis are done with results compared with those provided by a commercial finite element (FEM) package. We find that fewer degrees of freedom are needed with the FVM model to reach a specified degree of accuracy. This suggests that the FVM model, which also has the advantage of an intuitive interpretation in terms of electrical circuits, may be a better choice in control situations.

  7. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and  branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  8. Theory of model Hamiltonians and method of functional integration

    International Nuclear Information System (INIS)

    Popov, V.N.

    1990-01-01

    Results on the application of the functional integration method to statistical physics systems with the Dicke and Bardeen-Cooper-Schrieffer (BCS) model Hamiltonians are presented. Functional-integral representations of the statistical sums (partition functions) of these models are obtained. Asymptotic formulae (in the thermodynamic limit N → ∞) for the statistical sums of various modifications of the Dicke model, as well as for the Green functions and the collective spectrum of Bose excitations, are proved exactly. Analogous results, without exact substantiation, are obtained for the statistical sums and the Bose-excitation spectrum of the BCS model. 21 refs

  9. Revisiting the NEH algorithm- the power of job insertion technique for optimizing the makespan in permutation flow shop scheduling

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2016-04-01

    Full Text Available Permutation flow shop scheduling problems have been an interesting area of research for over six decades. Among the various objectives, minimization of the makespan has been studied the most over the years. The problems are widely regarded as NP-complete if the number of machines is more than three. As the computation time grows exponentially with respect to the problem size, heuristics and meta-heuristics that give reasonably accurate and acceptable results have been proposed by many authors. The NEH algorithm proposed in 1983 is still considered one of the best simple, constructive heuristics for the minimization of makespan. This paper analyses the powerful job insertion technique used by the NEH algorithm and proposes seven new variants while the complexity level remains the same. The 120 problem instances proposed by Taillard have been used to validate the algorithms. Out of the seven variants, three produce better results than the original NEH algorithm.
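
    The job insertion technique at the heart of NEH is compact enough to sketch in full. In the following Python sketch (the processing-time data and helper names are illustrative assumptions), jobs are sorted by decreasing total processing time, and each job is inserted, in turn, at the position of the partial sequence that minimizes the partial makespan.

```python
import numpy as np

def makespan(seq, p):
    """Makespan of a permutation flow shop; p[j, m] = time of job j on machine m."""
    c = np.zeros(p.shape[1])           # completion times per machine
    for j in seq:
        c[0] += p[j, 0]
        for m in range(1, p.shape[1]):
            c[m] = max(c[m], c[m - 1]) + p[j, m]
    return c[-1]

def neh(p):
    """NEH heuristic: sort jobs by decreasing total work, insert greedily."""
    order = np.argsort(-p.sum(axis=1)) # decreasing sum of processing times
    seq = []
    for j in order:
        candidates = [seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)

rng = np.random.default_rng(1)
p = rng.integers(1, 20, size=(8, 4))   # 8 jobs on 4 machines (random instance)
seq, cmax = neh(p)
print("sequence:", seq, "makespan:", cmax)
```

    Variants of the kind studied in the paper typically change the initial ordering or the tie-breaking inside this insertion loop while leaving its structure intact.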

  10. Interactive Modelling of Shapes Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose a technique for intuitive, interactive modelling of 3D shapes. The technique is based on the Level-Set Method (LSM), which has the virtue of easily handling changes to the topology of the represented solid. Furthermore, this method leads naturally to sculpting operations suitable for shape modelling. Normally, however, these operations would affect the entire model. To facilitate local changes, we introduce a windowing scheme which constrains the LSM to affect only a small part of the model. The LSM-based sculpting tools have been incorporated in our sculpting system, which also includes facilities for volumetric CSG and several techniques for visualization.

  11. A statistical method for discriminating between alternative radiobiological models

    International Nuclear Information System (INIS)

    Kinsella, I.A.; Malone, J.F.

    1977-01-01

    Radiobiological models assist understanding of the development of radiation damage, and may provide a basis for extrapolating dose-effect curves from high to low dose regions. Many models have been proposed, such as multitarget models and their modifications, enzymatic models, and those with a quadratic dose-response relationship (i.e. αD + βD² forms). It is difficult to distinguish between these because the statistical techniques used are almost always limited, in that one method can rarely be applied to the whole range of models. A general statistical procedure for parameter estimation (the maximum likelihood method) has been found applicable to a wide range of radiobiological models. The curve parameters are estimated using a computerised search that continues until the set of values most likely to fit the data is obtained. When the search is complete, two procedures are carried out. First, a goodness-of-fit test is applied which examines the applicability of an individual model to the data. Secondly, an index is derived which provides an indication of the adequacy of any model compared with alternative models. Thus the models may be ranked according to how well they fit the data. For example, with one set of data, multitarget types were found to be more suitable than quadratic types (αD + βD²). This method should be of assistance in evaluating various models. It may also be profitably applied to the selection of the most appropriate model when it is necessary to extrapolate from high to low doses
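
    A minimal sketch of this kind of maximum likelihood comparison is given below, under assumptions that are ours rather than the paper's: surviving colony counts are taken as Poisson distributed around the model prediction, two candidate survival models (linear-quadratic and single-hit exponential) are fitted by numerically maximizing the likelihood, and AIC stands in as an illustrative adequacy index for ranking the models. The data are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Illustrative survival data: cells plated and colonies counted per dose (Gy).
dose     = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
plated   = np.array([100, 200, 400, 2000, 10000, 40000])
colonies = np.array([95, 160, 230, 500, 600, 500])

def neg_log_lik(params, surv):
    """Negative log-likelihood assuming Poisson-distributed colony counts."""
    expected = plated * surv(dose, params)
    return -poisson.logpmf(colonies, expected).sum()

lq  = lambda d, p: np.exp(-(p[0] * d + p[1] * d ** 2))  # alpha*D + beta*D^2 form
hit = lambda d, p: np.exp(-p[0] * d)                    # single-hit exponential

for name, surv, x0 in [("linear-quadratic", lq, (0.1, 0.01)),
                       ("single-hit", hit, (0.3,))]:
    fit = minimize(neg_log_lik, x0, args=(surv,), method="Nelder-Mead")
    aic = 2 * len(x0) + 2 * fit.fun   # adequacy index for ranking the models
    print(f"{name}: parameters = {np.round(fit.x, 4)}, AIC = {aic:.1f}")
```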

  12. The Multimorbidity Cluster Analysis Tool: Identifying Combinations and Permutations of Multiple Chronic Diseases Using a Record-Level Computational Analysis

    Directory of Open Access Journals (Sweden)

    Kathryn Nicholson

    2017-12-01

    Full Text Available Introduction: Multimorbidity, or the co-occurrence of multiple chronic health conditions within an individual, is an increasingly dominant presence and burden in modern health care systems. To fully capture its complexity, further research is needed to uncover the patterns and consequences of these co-occurring health states. As such, the Multimorbidity Cluster Analysis Tool and the accompanying Multimorbidity Cluster Analysis Toolkit have been created to allow researchers to identify distinct clusters that exist within a sample of participants or patients living with multimorbidity. Development: The Tool and Toolkit were developed at Western University in London, Ontario, Canada. This open-access computational program (JAVA code and executable file) was developed and tested to support an analysis of thousands of individual records and up to 100 disease diagnoses or categories. Application: The computational program can be adapted to the methodological elements of a research project, including type of data, type of chronic disease reporting, measurement of multimorbidity, sample size and research setting. The computational program will identify all existing, and mutually exclusive, combinations and permutations within the dataset. An application of this computational program is provided as an example, in which more than 75,000 individual records and 20 chronic disease categories resulted in the detection of 10,411 unique combinations and 24,647 unique permutations among female and male patients. Discussion: The Tool and Toolkit are now available for use by researchers interested in exploring the complexities of multimorbidity. Their careful use, and the comparison between results, will be valuable additions to the nuanced understanding of multimorbidity.
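
    The record-level distinction between combinations (order of conditions ignored) and permutations (order, e.g. sequence of diagnosis, preserved) can be sketched in a few lines of Python; the disease labels below are invented for illustration, and the sketch is not the JAVA Tool itself.

```python
from collections import Counter

# Each record lists one patient's chronic conditions, in order of diagnosis.
records = [
    ("diabetes", "hypertension"),
    ("hypertension", "diabetes"),
    ("diabetes", "hypertension", "depression"),
    ("copd",),
]

combos = Counter(frozenset(r) for r in records)  # order ignored
perms  = Counter(tuple(r) for r in records)      # order preserved

print(len(combos), "unique combinations,", len(perms), "unique permutations")
for combo, count in combos.most_common():
    print(sorted(combo), "->", count, "patient(s)")
```

    The first two records above form one combination but two permutations, which is why the tool reports more unique permutations than combinations.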

  13. Modeling Water Quality Parameters Using Data-driven Methods

    Directory of Open Access Journals (Sweden)

    Shima Soleimani

    2017-02-01

    Full Text Available Introduction: Surface water bodies are the most easily available water resources. Increased withdrawal of surface water and discharge of wastewater cause drastic changes in surface water quality, and the importance of water quality, as one of the most vulnerable and important aspects of water supply resources, is clear. Unfortunately, in recent years, because of growing city populations, economic development and increasing industrial production, the entry of pollutants into water bodies has increased. Water quality parameters express the physical, chemical and biological features of water, so water quality monitoring is more necessary than ever before. Each use of water, such as agriculture, drinking, industry and aquaculture, requires water of a particular quality; exact estimation of the concentrations of water quality parameters is therefore significant. Material and Methods: In this research, two input-variable selection methods (namely, the correlation coefficient and principal component analysis) were first applied to select the model inputs. Data processing consists of three steps: (1) examining the data, (2) identifying the input data that affect the output data, and (3) selecting the training and testing data. A Genetic Algorithm-Least Square Support Vector Regression (GA-LSSVR) algorithm was developed to model the water quality parameters. The LSSVR method assumes that the relationship between input and output variables is nonlinear, but that a nonlinear mapping can create a space, named the feature space, in which the relationship between input and output variables is linear. The developed algorithm is able to maximize the accuracy of the LSSVR method by tuning the LSSVR parameters automatically. The genetic algorithm (GA) is an evolutionary algorithm which can automatically find the optimum coefficients of the Least Square Support Vector Regression (LSSVR). The GA-LSSVR algorithm was employed to
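
    A minimal sketch of the GA-LSSVR idea is given below, with all data, parameter ranges and GA settings invented for illustration: the LSSVR dual system is solved in closed form for given hyperparameters (the Suykens formulation), and a small elitist genetic algorithm searches over (log γ, log σ) to minimize validation RMSE. It is not the authors' exact encoding or operator set.

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(X, Z, sigma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    """Solve the LSSVR linear system in closed form."""
    n = len(y)
    top = np.concatenate([[0.0], np.ones(n)])
    body = np.hstack([np.ones((n, 1)), rbf(X, X, sigma) + np.eye(n) / gamma])
    sol = np.linalg.solve(np.vstack([top, body]), np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b and dual weights alpha

def lssvr_predict(Xnew, X, b, alpha, sigma):
    return rbf(Xnew, X, sigma) @ alpha + b

# Toy data standing in for water quality measurements.
X = rng.uniform(0.0, 10.0, (80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
Xtr, ytr, Xva, yva = X[:60], y[:60], X[60:], y[60:]

def rmse(chrom):
    gamma, sigma = np.exp(chrom)    # chromosome encodes (log gamma, log sigma)
    b, a = lssvr_fit(Xtr, ytr, gamma, sigma)
    err = lssvr_predict(Xva, Xtr, b, a, sigma) - yva
    return float(np.sqrt(np.mean(err ** 2)))

pop = rng.uniform([-2.0, -2.0], [5.0, 2.0], size=(20, 2))
for _ in range(30):                 # simple elitist GA: select, cross, mutate
    elite = pop[np.argsort([rmse(c) for c in pop])[:10]]
    kids = (elite[rng.integers(0, 10, 10)] + elite[rng.integers(0, 10, 10)]) / 2
    pop = np.vstack([elite, kids + 0.2 * rng.standard_normal(kids.shape)])

best = min(pop, key=rmse)
print("best (gamma, sigma):", np.exp(best), " validation RMSE:", rmse(best))
```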

  14. Statistical learning modeling method for space debris photometric measurement

    Science.gov (United States)

    Sun, Wenjing; Sun, Jinqiu; Zhang, Yanning; Li, Haisen

    2016-03-01

    Photometric measurement is an important way to identify space debris, but present photometric measurement methods impose many constraints on the star image and require complex image processing. To address these problems, a statistical learning modeling method for space debris photometric measurement is proposed based on the global consistency of the star image, and the statistical information of star images is used to eliminate the measurement noise. First, the known stars on the star image are divided into training stars and testing stars. Then, the training stars are used in a least-squares fit to construct the photometric measurement model, and the testing stars are used to calculate the measurement accuracy of the model. Experimental results show that the accuracy of the proposed photometric measurement model is about 0.1 magnitudes.
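
    In its simplest form, the train/test split over known stars reduces to an ordinary least-squares calibration. The sketch below (with simulated magnitudes, since no data accompany the abstract) fits a linear zero-point/scale model on the training stars and reports the RMS accuracy on the held-out testing stars.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated known stars: instrumental magnitudes and catalog magnitudes.
inst = rng.uniform(8.0, 14.0, 30)
catalog = 1.02 * inst + 3.5 + rng.normal(0.0, 0.1, inst.size)

train, test = np.arange(20), np.arange(20, 30)             # split the known stars
A = np.column_stack([inst[train], np.ones(train.size)])
coef, *_ = np.linalg.lstsq(A, catalog[train], rcond=None)  # least-squares fit

pred = coef[0] * inst[test] + coef[1]
rmse = np.sqrt(np.mean((pred - catalog[test]) ** 2))
print(f"accuracy on testing stars: {rmse:.2f} magnitudes")  # about 0.1 mag
```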

  15. Numerical modeling of spray combustion with an advanced VOF method

    Science.gov (United States)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  16. Methods of mathematical modelling continuous systems and differential equations

    CERN Document Server

    Witelski, Thomas

    2015-01-01

    This book presents mathematical modelling and the integrated process of formulating sets of equations to describe real-world problems. It describes methods for obtaining solutions of challenging differential equations stemming from problems in areas such as chemical reactions, population dynamics, mechanical systems, and fluid mechanics. Chapters 1 to 4 cover essential topics in ordinary differential equations, transport equations and the calculus of variations that are important for formulating models. Chapters 5 to 11 then develop more advanced techniques including similarity solutions, matched asymptotic expansions, multiple scale analysis, long-wave models, and fast/slow dynamical systems. Methods of Mathematical Modelling will be useful for advanced undergraduate or beginning graduate students in applied mathematics, engineering and other applied sciences.

  17. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error of each fit was measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
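
    The two-term Gaussian and two-term sine models referred to above can be fitted with standard nonlinear least squares; the sketch below uses scipy.optimize.curve_fit on synthetic daily irradiance data (the data and starting values are illustrative assumptions, not the UTP measurements) and reports the RMSE and R² for each model.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(t, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model."""
    return (a1 * np.exp(-((t - b1) / c1) ** 2)
            + a2 * np.exp(-((t - b2) / c2) ** 2))

def sin2(t, a1, b1, c1, a2, b2, c2):
    """Two-term sine model."""
    return a1 * np.sin(b1 * t + c1) + a2 * np.sin(b2 * t + c2)

# Synthetic daily global solar radiation: hour of day vs. irradiance (W/m^2).
t = np.linspace(6.0, 19.0, 40)
y = (900.0 * np.exp(-((t - 13.0) / 3.5) ** 2)
     + 25.0 * np.random.default_rng(0).standard_normal(t.size))

for name, model, p0 in [("gauss2", gauss2, (800, 12, 3, 100, 15, 3)),
                        ("sin2", sin2, (900, 0.24, -1.55, 100, 0.5, 0.0))]:
    popt, _ = curve_fit(model, t, y, p0=p0, maxfev=20000)
    resid = y - model(t, *popt)
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"{name}: RMSE = {rmse:.1f}, R^2 = {r2:.3f}")
```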

  18. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error of each fit was measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  19. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted, a mathematical model of global solar radiation is developed. The error of each fit was measured with goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  20. Index to Nuclear Safety. A technical progress review by chronology, permuted title, and author. Vol. 11, No. 1 through Vol. 15, No. 6

    International Nuclear Information System (INIS)

    Cottrell, W.B.; Klein, A.

    1975-04-01

    This issue of the Index to Nuclear Safety covers only articles included in Nuclear Safety, Vol. 11, No. 1, through Vol. 15, No. 6. This index is presented in three sections as follows: Chronological List of Articles by Volume; Permuted Title (KWIC) Index; and Author Index. (U.S.)

  1. Modeling Enzymatic Transition States by Force Field Methods

    DEFF Research Database (Denmark)

    Hansen, Mikkel Bo; Jensen, Hans Jørgen Aagaard; Jensen, Frank

    2009-01-01

    The SEAM method, which models a transition structure as a minimum on the seam of two diabatic surfaces represented by force field functions, has been used to generate 20 transition structures for the decarboxylation of orotidine by the orotidine-5'-monophosphate decarboxylase enzyme. The dependence...... by various electronic structure methods, where part of the enzyme is represented by a force field description and the effects of the solvent are represented by a continuum model. The relative energies vary by several hundreds of kJ/mol between the transition structures, and tests showed that a large part...

  2. Evaluation of radiological processes in the Ternopil region by the box model method

    Directory of Open Access Journals (Sweden)

    І.В. Матвєєва

    2006-02-01

    Full Text Available Results of analyses of Sr-90 radionuclide flows in the ecosystem of Kotsubinchiky village, Ternopil oblast, are presented. A block scheme of the ecosystem and its mathematical model were constructed using the box model method. This allowed us to evaluate how internal irradiation dose loads form for various population groups – workers, retirees, children – and to forecast the dynamics of these loads over the years following the Chernobyl accident.

  3. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  4. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  5. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, used to estimate whether an approximation over- or under-fits the original model, to invalidate an approximation, and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability on real (artificial) data of algorithms that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  7. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon a set of assumptions incompletely formulated or of value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; decision theory in presence of uncertainty and multiple objectives. Purpose and prospect of comparative studies are assessed in view of probable diminishing returns for large generic comparisons [fr]

  8. A delay financial model with stochastic volatility; martingale method

    Science.gov (United States)

    Lee, Min-Ku; Kim, Jeong-Hoon; Kim, Joocheol

    2011-08-01

    In this paper, we extend a delayed geometric Brownian model by adding a stochastic volatility term, which is driven by a hidden process of fast mean reverting diffusion, to the delayed model. Combining a martingale approach and an asymptotic method, we develop a theory for option pricing under this hybrid model. The core result obtained by our work is a proof that a discounted approximate option price can be decomposed as a martingale part plus a small term. Subsequently, a correction effect on the European option price is demonstrated both theoretically and numerically for a good agreement with practical results.

  9. Finite analytic method for modeling variably saturated flows.

    Science.gov (United States)

    Zhang, Zaiyong; Wang, Wenke; Gong, Chengcheng; Yeh, Tian-Chyi Jim; Wang, Zhoufeng; Wang, Yu-Li; Chen, Li

    2018-04-15

    This paper develops a finite analytic method (FAM) for solving the two-dimensional Richards' equation. The FAM incorporates the analytic solution in local elements to formulate the algebraic representation of the partial differential equation of unsaturated flow so as to effectively control both numerical oscillation and dispersion. The FAM model is then verified using four examples, in which the numerical solutions are compared with analytical solutions, solutions from VSAFT2, and observational data from a field experiment. These numerical experiments show that the method is not only accurate but also efficient, when compared with other numerical methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Finite element method for incompressible two-fluid model using a fractional step method

    International Nuclear Information System (INIS)

    Uchiyama, Tomomi

    1997-01-01

    This paper presents a finite element method for an incompressible two-fluid model. The solution algorithm is based on the fractional step method, which is frequently used in the finite element calculation for single-phase flows. The calculating domain is divided into quadrilateral elements with four nodes. The Galerkin method is applied to derive the finite element equations. Air-water two-phase flows around a square cylinder are calculated by the finite element method. The calculation demonstrates the close relation between the volumetric fraction of the gas-phase and the vortices shed from the cylinder, which is favorably compared with the existing data. It is also confirmed that the present method allows the calculation with less CPU time than the SMAC finite element method proposed in my previous paper. (author)

  11. Comparison of methods for modelling geomagnetically induced currents

    Science.gov (United States)

    Boteler, D. H.; Pirjola, R. J.

    2014-09-01

    Assessing the geomagnetic hazard to power systems requires reliable modelling of the geomagnetically induced currents (GIC) produced in the power network. This paper compares the Nodal Admittance Matrix method with the Lehtinen-Pirjola method and shows them to be mathematically equivalent. GIC calculation using the Nodal Admittance Matrix method involves three steps: (1) using the voltage sources in the lines representing the induced geoelectric field to calculate equivalent current sources and summing these to obtain the nodal current sources, (2) performing the inversion of the admittance matrix and multiplying by the nodal current sources to obtain the nodal voltages, (3) using the nodal voltages to determine the currents in the lines and in the ground connections. In the Lehtinen-Pirjola method, steps 2 and 3 of the Nodal Admittance Matrix calculation are combined into one matrix expression. This involves inversion of a more complicated matrix but yields the currents to ground directly from the nodal current sources. To calculate GIC in multiple voltage levels of a power system, it is necessary to model the connections between voltage levels, not just the transmission lines and ground connections considered in traditional GIC modelling. Where GIC flow to ground through both the high-voltage and low-voltage windings of a transformer, they share a common path through the substation grounding resistance. This has been modelled previously by including non-zero, off-diagonal elements in the earthing impedance matrix of the Lehtinen-Pirjola method. However, this situation is more easily handled in both the Nodal Admittance Matrix method and the Lehtinen-Pirjola method by introducing a node at the neutral point.
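
    The three steps can be written out directly in numpy for a toy network; everything below (topology, resistances, induced emfs) is an invented example, not data from the paper. Line emfs are converted to Norton-equivalent nodal current injections (step 1), the admittance matrix, whose diagonal also carries the grounding admittances, is solved for the nodal voltages (step 2), and the line and ground currents follow (step 3).

```python
import numpy as np

# Toy 3-node network: line k runs from node a[k] to b[k] with resistance R[k]
# and induced emf E[k] (volts, driving current from a[k] toward b[k]);
# Rg[i] is the grounding resistance at node i.
a, b = np.array([0, 1]), np.array([1, 2])
R = np.array([3.0, 4.0])
E = np.array([120.0, -60.0])   # geoelectric field integrated along each line
Rg = np.array([1.0, 0.5, 1.0])

n = Rg.size
Y = np.diag(1.0 / Rg)          # grounding admittances on the diagonal
J = np.zeros(n)
for k in range(R.size):
    y = 1.0 / R[k]
    J[a[k]] -= E[k] * y        # step 1: emf -> equivalent nodal current sources
    J[b[k]] += E[k] * y
    Y[a[k], a[k]] += y         # line admittance entries of the nodal matrix
    Y[b[k], b[k]] += y
    Y[a[k], b[k]] -= y
    Y[b[k], a[k]] -= y

V = np.linalg.solve(Y, J)      # step 2: nodal voltages
I_ground = V / Rg              # step 3: GIC to ground at each node ...
I_line = (V[a] - V[b] + E) / R # ... and GIC along each line
print("GIC to ground (A):", np.round(I_ground, 2))
```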

  12. Comparison of methods for modelling geomagnetically induced currents

    Directory of Open Access Journals (Sweden)

    D. H. Boteler

    2014-09-01

    Full Text Available Assessing the geomagnetic hazard to power systems requires reliable modelling of the geomagnetically induced currents (GIC) produced in the power network. This paper compares the Nodal Admittance Matrix method with the Lehtinen–Pirjola method and shows them to be mathematically equivalent. GIC calculation using the Nodal Admittance Matrix method involves three steps: (1) using the voltage sources in the lines representing the induced geoelectric field to calculate equivalent current sources and summing these to obtain the nodal current sources, (2) performing the inversion of the admittance matrix and multiplying by the nodal current sources to obtain the nodal voltages, (3) using the nodal voltages to determine the currents in the lines and in the ground connections. In the Lehtinen–Pirjola method, steps 2 and 3 of the Nodal Admittance Matrix calculation are combined into one matrix expression. This involves inversion of a more complicated matrix but yields the currents to ground directly from the nodal current sources. To calculate GIC in multiple voltage levels of a power system, it is necessary to model the connections between voltage levels, not just the transmission lines and ground connections considered in traditional GIC modelling. Where GIC flow to ground through both the high-voltage and low-voltage windings of a transformer, they share a common path through the substation grounding resistance. This has been modelled previously by including non-zero, off-diagonal elements in the earthing impedance matrix of the Lehtinen–Pirjola method. However, this situation is more easily handled in both the Nodal Admittance Matrix method and the Lehtinen–Pirjola method by introducing a node at the neutral point.

  13. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders...... on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame...... the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective on knowledge, which is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks an objective truth. The methodology...

  14. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among...... other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...

  15. Modeling of Information Security Strategic Planning Methods and Expert Assessments

    Directory of Open Access Journals (Sweden)

    Alexander Panteleevich Batsula

    2014-09-01

    Full Text Available The paper addresses the problem of increasing the level of information security. As a result, a method for increasing the level of information security is developed by modeling strategic planning with SWOT analysis and expert assessments.

  16. Ethnographic Decision Tree Modeling: A Research Method for Counseling Psychology.

    Science.gov (United States)

    Beck, Kirk A.

    2005-01-01

    This article describes ethnographic decision tree modeling (EDTM; C. H. Gladwin, 1989) as a mixed method design appropriate for counseling psychology research. EDTM is introduced and located within a postpositivist research paradigm. Decision theory that informs EDTM is reviewed, and the 2 phases of EDTM are highlighted. The 1st phase, model…

  17. Site Structure and User Navigation: Models, Measures and Methods

    NARCIS (Netherlands)

    Herder, E.; van Dijk, Elisabeth M.A.G.; Chen, S.Y; Magoulas, G.D.

    2004-01-01

    The analysis of the structure of Web sites and patterns of user navigation through these sites is gaining attention from different disciplines, as it enables unobtrusive discovery of user needs. In this chapter we give an overview of models, measures, and methods that can be used for analysis

  18. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Full Text Available The calculation of unsteady air loads is an essential step in any aeroelastic analysis. The subsonic doublet lattice method (DLM) is used extensively for this purpose due to its simplicity and reliability. The body models available with the popular...

  19. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It equally elucidated the effect that variation in the optimal result obtained from a linear programming model will have on any given firm. It was demonstrated ...
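
    A small profit-maximization LP of the kind described can be solved with scipy's linprog (whose HiGHS backend includes a simplex-family solver); the product mix, unit profits and resource limits below are invented for illustration, not Saclux Paint Company figures.

```python
from scipy.optimize import linprog

# Maximize profit 5*x1 + 4*x2 for two paint lines, subject to resource limits.
# linprog minimizes, so the objective is negated.
c = [-5.0, -4.0]                   # profit per batch of each paint line
A_ub = [[6.0, 4.0],                # raw material used per batch
        [1.0, 2.0]]                # labour hours used per batch
b_ub = [24.0, 6.0]                 # material and labour available
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("optimal batches:", res.x)   # -> [3.0, 1.5]
print("maximum profit:", -res.fun) # -> 21.0
```

    Re-solving after perturbing the profit coefficients or resource limits shows directly how variation in the data shifts the optimal result, which is the sensitivity question the paper raises.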

  20. A comparison of two methods for fitting the INDCLUS model

    NARCIS (Netherlands)

    Ten Berge, Jos M.F.; Kiers, Henk A.L.

    2005-01-01

    Chaturvedi and Carroll have proposed the SINDCLUS method for fitting the INDCLUS model. It is based on splitting the two appearances of the cluster matrix in the least squares fit function and relying on convergence to a solution where both cluster matrices coincide. Kiers has proposed an

  1. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict the structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches, such as RNAComposer and Rosetta, have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide a new tool to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  2. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    Directory of Open Access Journals (Sweden)

    Frantisek Jelenciak

    2015-12-01

    Full Text Available This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. With the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a dynamics viewpoint. The principle of this method is based on applying Newton's mechanics, combined with a specific form of the finite element method to cover additional effects. The main advantage of the PEM is that it is not necessary to carry out measurements in a wind tunnel to identify the model's parameters. The plausible dynamical behaviour of the model can be achieved by specific correction parameters, which can be determined on the basis of experimental data obtained during the flight of the aircraft. In this article, we present the PEM as applied to an airship, as well as a comparison of the data calculated by the PEM with experimental flight data.

  3. Acoustic 3D modeling by the method of integral equations

    Science.gov (United States)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations, and also for parallelizing across multiple sources. Practical examples and efficiency tests are presented as well.
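
    The key ingredient, an FFT-accelerated matrix-vector product for a translation-invariant kernel, can be sketched in one dimension: the dense Toeplitz IE matrix is embedded in a circulant matrix, whose action is a cyclic convolution evaluated with FFTs, and the resulting operator is handed to an iterative Krylov solver. The kernel and the second-kind operator I + 0.1 K below are illustrative assumptions, not the paper's acoustic kernel.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 256
lags = np.arange(-(n - 1), n)
k = 1.0 / (1.0 + np.abs(lags))        # Toeplitz kernel: A[i, j] = k[i - j]

# Circulant embedding: first column [k_0 .. k_{n-1}, k_{-(n-1)} .. k_{-1}].
c = np.concatenate([k[n - 1:], k[:n - 1]])
cf = np.fft.rfft(c)
m = 2 * n - 1

def toeplitz_matvec(x):
    """A @ x in O(n log n) via cyclic convolution with the circulant."""
    y = np.fft.irfft(cf * np.fft.rfft(x, m), m)
    return y[:n]

# Second-kind IE operator (identity plus compact part) stays well conditioned.
A = LinearOperator((n, n), matvec=lambda x: x + 0.1 * toeplitz_matvec(x))
rhs = np.zeros(n)
rhs[n // 2] = 1.0                     # point source
sol, info = gmres(A, rhs, atol=1e-10)
print("GMRES converged:", info == 0)

# Verify the fast matvec against the explicit dense matrix.
A_dense = np.eye(n) + 0.1 * k[(np.arange(n)[:, None] - np.arange(n)) + n - 1]
print("max residual:", np.abs(A_dense @ sol - rhs).max())
```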

  4. Cognitive psychology and self-reports: models and methods.

    Science.gov (United States)

    Jobe, Jared B

    2003-05-01

    This article describes the models and methods that cognitive psychologists and survey researchers use to evaluate and experimentally test cognitive issues in questionnaire design and subsequently improve self-report instruments. These models and methods assess the cognitive processes underlying how respondents comprehend and generate answers to self-report questions. Cognitive processing models are briefly described. Non-experimental methods--expert cognitive review, cognitive task analysis, focus groups, and cognitive interviews--are described. Examples are provided of how these methods were effectively used to identify cognitive self-report issues. Experimental methods--cognitive laboratory experiments, field tests, and experiments embedded in field surveys--are described. Examples are provided of: (a) how laboratory experiments were designed to test the capability and accuracy of respondents in performing the cognitive tasks required to answer self-report questions, (b) how a field experiment was conducted in which a cognitively designed questionnaire was effectively tested against the original questionnaire, and (c) how a cognitive experiment embedded in a field survey was conducted to test cognitive predictions.

  5. A new method to determine the number of experimental data using statistical modeling methods

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)

    2017-06-15

    For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the Sequential statistical modeling (SSM) approach, and the Kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used to demonstrate the performance of the obtained statistical models, which use a pre-determined number of experimental data, in predicting the probability of failure for a target fatigue life.
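
    The area metric itself is simply the area between two cumulative distribution functions. The sketch below (with simulated fatigue-strength values, not the SAE 950X data) computes it exactly for a pair of empirical CDFs and uses it to watch the estimated distribution stabilize as more data are added, which is the convergence check described above.

```python
import numpy as np

def area_metric(sample_a, sample_b):
    """Exact area between the empirical CDFs of two samples."""
    a, b = np.sort(sample_a), np.sort(sample_b)
    grid = np.union1d(a, b)
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    # Empirical CDFs are step functions, constant between grid points.
    return np.sum(np.abs(cdf_a - cdf_b)[:-1] * np.diff(grid))

rng = np.random.default_rng(0)
data = rng.normal(1000.0, 50.0, 200)   # simulated fatigue strength coefficients
prev = None
for n in (5, 10, 20, 40, 80, 160):
    cur = data[:n]
    if prev is not None:
        print(f"n = {n:3d}: area metric vs. previous sample "
              f"= {area_metric(prev, cur):.2f}")
    prev = cur   # stop collecting data once the metric falls below a threshold
```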

  6. A constructive model potential method for atomic interactions

    Science.gov (United States)

    Bottcher, C.; Dalgarno, A.

    1974-01-01

    A model potential method is presented that can be applied to many-electron single-centre and two-centre systems. The development leads to a Hamiltonian with terms arising from core polarization that depend parametrically upon the positions of the valence electrons. Some of the terms have been introduced empirically in previous studies. Their significance is clarified by an analysis of a similar model in classical electrostatics. The explicit forms of the expectation values of operators at large separations of two atoms given by the model potential method are shown to be equivalent to the exact forms when the assumption is made that the energy level differences of one atom are negligible compared to those of the other.

  7. Unicriterion Model: A Qualitative Decision Making Method That Promotes Ethics

    Directory of Open Access Journals (Sweden)

    Fernando Guilherme Silvano Lobo Pimentel

    2011-06-01

    Full Text Available Management decision making methods frequently adopt quantitative models of several criteria that bypass the question of why some criteria are considered more important than others, which makes more difficult the task of delivering a transparent view of preference structure priorities that might promote ethics and learning and serve as a basis for future decisions. To tackle this particular shortcoming of usual methods, an alternative qualitative methodology of aggregating preferences based on the ranking of criteria is proposed. Such an approach delivers a simple and transparent model for the solution of each preference conflict faced during the management decision making process. The method proceeds by breaking the decision problem into ‘two criteria – two alternatives’ scenarios, and translating the problem of choice between alternatives to a problem of choice between criteria whenever appropriate. The unicriterion model method is illustrated by its application in a car purchase and a house purchase decision problem.

  8. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences the fidelity of the system. However, current IR smoke models cannot provide high fidelity, because certain physical characteristics are frequently ignored in fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on the enhanced DPM is built and a dynamic computing fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher fidelity.

  9. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the use of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach; additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  10. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but most existing modeling programs have problems: some are not accurate, or are adapted only to a specific CAD format. To convert complex CAD geometry models into GDML geometry models accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed. The essence of this method is converting between a CAD model represented with boundary representation (B-REP) and a GDML model represented with constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is assembled with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  11. Model parameterization as method for data analysis in dendroecology

    Science.gov (United States)

    Tychkov, Ivan; Shishov, Vladimir; Popkova, Margarita

    2017-04-01

    The usefulness of process-based models in ecological studies is not in question; the only limitations are how well a model's algorithm is developed and how it is applied in research. Simulation of tree-ring growth based on climate provides valuable information on the tree-ring growth response to different environmental conditions, and also sheds light on species-specific features of the tree-ring growth process. Visual parameterization of the Vaganov-Shashkin model allows estimation of the non-linear response of tree-ring growth from daily climate data: daily temperature, estimated day light, and soil moisture. Previous use of the VS-Oscilloscope (a software tool for visual parameterization) has shown a good ability to recreate unique patterns of tree-ring growth for coniferous species in Siberian Russia, the USA, China, Mediterranean Spain, and Tunisia. However, such models are mostly used one-sidedly, to better understand tree growth processes, in contrast to statistical methods of analysis (e.g., generalized linear models, mixed models, structural equations), which can also be used for reconstruction and forecasting. Usually the models are used either for checking new hypotheses or for quantitative assessment of physiological tree growth data to reveal growth process mechanisms, while statistical methods are used for data mining assessment and as a study tool in themselves. The high sensitivity of the model's VS-parameters reflects the ability of the model to simulate tree-ring growth and to evaluate the climate factors limiting growth. Precise parameterization of the VS-Oscilloscope provides valuable information about the growth processes of trees and the conditions under which these processes occur (e.g., day of growing season onset, length of the season, minimal/maximum temperatures for tree-ring growth, formation of wide or narrow rings, etc.). The work was supported by the Russian Science Foundation (RSF # 14-14-00219).

  12. Modeling of radionuclide migration through porous material with meshless method

    International Nuclear Information System (INIS)

    Vrankar, L.; Turk, G.; Runovc, F.

    2005-01-01

    To assess the long term safety of a radioactive waste disposal system, mathematical models are used to describe groundwater flow, chemistry and potential radionuclide migration through geological formations. A number of processes need to be considered when predicting the movement of radionuclides through the geosphere. The most important input data are obtained from field measurements, which are not completely available for all regions of interest. For example, the hydraulic conductivity as an input parameter varies from place to place. In such cases geostatistical science offers a variety of spatial estimation procedures. Methods for solving the solute transport equation can also be classified as Eulerian, Lagrangian and mixed. The numerical solution of partial differential equations (PDE) is usually obtained by finite difference methods (FDM), finite element methods (FEM), or finite volume methods (FVM). Kansa introduced the concept of solving partial differential equations using radial basis functions (RBF) for hyperbolic, parabolic and elliptic PDEs. Our goal was to present a relatively new approach to the modelling of radionuclide migration through the geosphere using radial basis function methods in Eulerian and Lagrangian coordinates. Radionuclide concentrations will also be calculated in heterogeneous and partly heterogeneous 2D porous media. We compared the meshless method with the traditional finite difference scheme. (author)
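
    A minimal sketch of the RBF collocation idea in the spirit of Kansa's method, applied to a toy 1D Poisson problem with a multiquadric basis; the shape parameter, grid, and manufactured solution are illustrative assumptions, not the paper's transport model.

        import numpy as np

        # manufactured problem: -u'' = f on [0,1], u(0)=u(1)=0, exact u = sin(pi x)
        N = 30
        x = np.linspace(0.0, 1.0, N)
        f = np.pi**2 * np.sin(np.pi * x)
        c = 0.2                                         # multiquadric shape parameter (assumed)

        r2 = (x[:, None] - x[None, :])**2
        phi = np.sqrt(r2 + c**2)                        # multiquadric RBF
        d2phi = c**2 / (r2 + c**2)**1.5                 # its second derivative

        A = -d2phi                                      # PDE rows: -u''(x_i)
        A[0, :], A[-1, :] = phi[0, :], phi[-1, :]       # Dirichlet boundary rows
        b = f.copy(); b[0] = b[-1] = 0.0

        lam = np.linalg.solve(A, b)                     # expansion coefficients
        u = phi @ lam
        print("max error:", float(np.max(np.abs(u - np.sin(np.pi * x)))))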

  13. A review of distributed parameter groundwater management modeling methods

    Science.gov (United States)

    Gorelick, Steven M.

    1983-01-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall into two general categories: hydraulics or policy evaluation, and water allocation. Groundwater hydraulic management models enable the determination of optimal locations and pumping rates of numerous wells under a variety of restrictions placed upon local drawdown, hydraulic gradients, and water production targets. Groundwater policy evaluation and allocation models can be used to study the influence upon regional groundwater use of institutional policies such as taxes and quotas. Furthermore, fairly complex groundwater-surface water allocation problems can be handled using system decomposition and multilevel optimization. Experience from the few real-world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. Classes of research missing from the literature are groundwater quality management models involving nonlinear constraints, models which join groundwater hydraulic and quality simulations with political-economic management considerations, and management models that include parameter uncertainty.
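
    A minimal sketch of the hydraulic-management formulation described above, assuming a linear response-matrix aquifer model: total pumping is maximized subject to drawdown limits at control points. All coefficients are invented for illustration.

        import numpy as np
        from scipy.optimize import linprog

        # invented unit-response matrix: drawdown at control point i per unit pumping at well j
        R = np.array([[0.8, 0.3, 0.1],
                      [0.3, 0.9, 0.2],
                      [0.1, 0.2, 0.7]])
        s_max = np.array([2.0, 2.5, 2.0])               # allowable drawdowns [m]

        # maximize total pumping subject to drawdown limits and well capacities
        res = linprog(c=-np.ones(3), A_ub=R, b_ub=s_max, bounds=[(0.0, 3.0)] * 3)
        print("optimal pumping rates:", res.x.round(3), "| total:", round(-res.fun, 3))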

  14. Semantic Model Driven Architecture Based Method for Enterprise Application Development

    Science.gov (United States)

    Wu, Minghui; Ying, Jing; Yan, Hui

    Enterprise applications must meet the demands of dynamic business processes and adopt the latest technologies flexibly, while solving the problems caused by their heterogeneous nature. Service-Oriented Architecture (SOA) is becoming a leading paradigm for business process integration. This research work focuses on business process modeling and proposes a semantic model-driven development method named SMDA, combining Ontology and Model-Driven Architecture (MDA) technologies. The architecture of SMDA is presented in three orthogonal perspectives. (1) The vertical axis comprises the four MDA layers; the focus is on UML profiles in M2 (the meta-model layer) for ontology modeling, and on three abstraction levels: CIM, PIM, and PSM modeling, respectively. (2) The horizontal axis covers the different concerns involved in the development: process, application, information, organization, and technology. (3) The traversal axis refers to aspects that influence the other models across the cutting axis: architecture, semantics, aspect, and pattern. The paper also introduces the modeling and transformation process in SMDA and briefly describes support for dynamic service composition.

  15. Modeling of uncertainty in atmospheric transport system using hybrid method

    International Nuclear Information System (INIS)

    Pandey, M.; Ranade, Ashok; Brij Kumar; Datta, D.

    2012-01-01

    Atmospheric dispersion models are routinely used at nuclear and chemical plants to estimate the exposure of members of the public and occupational workers due to releases of hazardous contaminants into the atmosphere. Atmospheric dispersion is a stochastic phenomenon, and in general the concentration of a contaminant estimated at a given time and at a predetermined location downwind of a source cannot be predicted precisely. Uncertainty in atmospheric dispersion model predictions is associated with: 'data' or 'parameter' uncertainty, resulting from errors in the data used to execute and evaluate the model, uncertainties in empirical model parameters, and initial and boundary conditions; 'model' or 'structural' uncertainty, arising from inaccurate treatment of dynamical and chemical processes, approximate numerical solutions, and internal model errors; and 'stochastic' uncertainty, which results from the turbulent nature of the atmosphere as well as from the unpredictability of human activities related to emissions. The possibility theory based on fuzzy measures has been proposed in recent years as an alternative approach for addressing the knowledge uncertainty of a model in situations where the available information is too vague to represent the parameters statistically. This paper presents a novel approach (called the hybrid method) to model knowledge uncertainty in a physical system by a combination of probabilistic and possibilistic representations of parametric uncertainties. As a case study, the proposed approach is applied to estimating the ground-level concentration of a hazardous contaminant in air due to atmospheric releases through the stack (chimney) of a nuclear plant. The application illustrates the potential of the proposed approach. (author)
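
    A minimal sketch of such a hybrid propagation, assuming a simplified ground-level Gaussian plume formula with a probabilistic wind speed (Monte Carlo) and a possibilistic dispersion parameter treated by alpha-cuts. All distributions and numbers are illustrative, and the cut extremes are evaluated only at the interval end points.

        import numpy as np

        rng = np.random.default_rng(1)
        Q, H = 1.0, 50.0                               # release rate, stack height (assumed)
        u_samples = rng.lognormal(mean=1.2, sigma=0.3, size=2000)   # probabilistic wind speed
        sigma_y = 100.0                                # fixed lateral dispersion (assumed)

        def p95(sigma_z):
            """95th-percentile ground-level centreline concentration over the wind ensemble."""
            c = Q / (np.pi * u_samples * sigma_y * sigma_z) * np.exp(-H**2 / (2 * sigma_z**2))
            return np.percentile(c, 95)

        # possibilistic sigma_z: triangular fuzzy number (40, 60, 90) m, propagated by alpha-cuts
        for a in np.linspace(0.0, 1.0, 6):
            lo, hi = 40 + 20 * a, 90 - 30 * a          # alpha-cut interval
            ends = [p95(lo), p95(hi)]                  # extremes assumed at the end points
            print(f"alpha={a:.1f}: P95 concentration in [{min(ends):.2e}, {max(ends):.2e}]")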

  16. Evaluation of internal noise methods for Hotelling observer models

    International Nuclear Information System (INIS)

    Zhang Yani; Pham, Binh T.; Eckstein, Miguel P.

    2007-01-01

    The inclusion of internal noise in model observers is a common method to allow for quantitative comparisons between human and model observer performance in visual detection tasks. In this article, we studied two different strategies for inserting internal noise into Hotelling model observers. In the first strategy, internal noise was added to the output of individual channels: (a) independent nonuniform channel noise, or (b) independent uniform channel noise. In the second strategy, internal noise was added to the decision variable arising from the combination of channel responses; the standard deviation of the zero-mean internal noise was either constant or proportional to (a) the decision variable's standard deviation due to the external noise, (b) the decision variable's variance caused by the external noise, or (c) the decision variable's magnitude on a trial-to-trial basis. We tested three model observers: the square window Hotelling observer (HO), the channelized Hotelling observer (CHO), and the Laguerre-Gauss Hotelling observer (LGHO), using a four-alternative forced choice (4AFC) signal known exactly but variable task with a simulated signal embedded in real x-ray coronary angiogram backgrounds. The results showed that the internal noise method that led to the best prediction of human performance differed across the studied model observers. The CHO model best predicted human observer performance with channel internal noise, whereas the HO and LGHO best predicted human observer performance with decision variable internal noise. The present results may guide researchers in choosing methods to include internal noise in Hotelling model observers when evaluating and optimizing medical image quality.
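
    A minimal sketch of the decision-variable internal-noise strategy in a 4AFC task, assuming Gaussian decision variables and an internal noise standard deviation proportional to the external one; all values are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        n_trials, d = 20000, 1.0                        # trials and template signal response
        sigma_ext, k = 1.0, 0.5                         # external noise std, internal/external ratio

        # 4AFC: one signal-present and three noise-only decision variables per trial
        r_sig = d + rng.normal(0, sigma_ext, n_trials)
        r_noise = rng.normal(0, sigma_ext, (n_trials, 3))
        # decision-variable internal noise with std proportional to the external std
        r_sig += rng.normal(0, k * sigma_ext, n_trials)
        r_noise += rng.normal(0, k * sigma_ext, (n_trials, 3))

        pc = np.mean(r_sig > r_noise.max(axis=1))       # observer picks the largest response
        print(f"proportion correct = {pc:.3f}")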

  17. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    Full Text Available By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution in order to improve the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variational model was developed and validated through data assimilation tests on an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. The actual storm surge induced by Typhoon 0515 was then forecast by the developed model, and the results demonstrate its efficiency in practical application.
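
    A minimal sketch of the parameter-identification idea, assuming a toy surge response driven by quadratic wind stress and a least-squares cost minimized over the drag coefficient. The model, numbers, and optimizer choice are illustrative; the paper uses a full variational approach on an unstructured-grid model.

        import numpy as np
        from scipy.optimize import minimize_scalar

        dt, n = 600.0, 144                              # 10-minute steps over 24 h
        wind = 15.0 + 10.0 * np.sin(np.linspace(0, np.pi, n))   # synthetic wind [m/s]

        def surge(cd, damping=5e-4):
            """Toy surge response: d(eta)/dt = cd*W|W| - damping*eta."""
            eta = np.zeros(n)
            for i in range(1, n):
                eta[i] = eta[i - 1] + dt * (cd * wind[i] * abs(wind[i])
                                            - damping * eta[i - 1])
            return eta

        cd_true = 1.5e-6
        obs = surge(cd_true) + np.random.default_rng(3).normal(0, 0.02, n)

        J = lambda cd: np.sum((surge(cd) - obs)**2)     # model-observation misfit
        fit = minimize_scalar(J, bounds=(1e-7, 1e-5), method="bounded")
        print(f"identified drag coefficient: {fit.x:.2e} (true {cd_true:.2e})")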

  18. Modeling crime events by d-separation method

    Science.gov (United States)

    Aarthee, R.; Ezhilmaran, D.

    2017-11-01

    Problematic legal cases have recently called for a scientifically founded method of dealing with the qualitative and quantitative roles of evidence in a case [1]. To deal with the quantitative role, we propose a d-separation method for modeling crime events. D-separation is a graphical criterion for identifying independence in a directed acyclic graph. By developing a d-separation method, we aim to lay the foundations for the development of a software support tool that can deal with evidential reasoning in legal cases. Such a tool is meant to be used by a judge or juror, in alliance with various experts who can provide information about the details; this will hopefully improve the communication between judges or jurors and experts. The proposed method can uncover more valid independencies than other graphical criteria.
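
    A minimal sketch of a d-separation test via the moralized ancestral graph criterion, using networkx (assumed available) and an invented toy crime-event DAG; node names and structure are purely illustrative.

        import networkx as nx

        def d_separated(G, X, Y, Z):
            """d-separation of node sets X and Y given Z in DAG G,
            via the moralized ancestral graph criterion."""
            anc = set(X) | set(Y) | set(Z)
            for v in list(anc):
                anc |= nx.ancestors(G, v)              # ancestral subgraph nodes
            H = G.subgraph(anc)
            M = nx.Graph(H.edges())                    # drop edge directions
            M.add_nodes_from(H.nodes())
            for v in H.nodes():                        # "marry" parents of a common child
                parents = list(H.predecessors(v))
                M.add_edges_from((p, q) for i, p in enumerate(parents)
                                 for q in parents[i + 1:])
            M.remove_nodes_from(Z)                     # condition on Z by deleting it
            return all(not nx.has_path(M, x, y) for x in X for y in Y)

        # toy crime-event DAG: Motive -> Crime <- Opportunity, Crime -> Evidence
        G = nx.DiGraph([("Motive", "Crime"), ("Opportunity", "Crime"),
                        ("Crime", "Evidence")])
        print(d_separated(G, {"Motive"}, {"Opportunity"}, set()))         # True: collider blocks
        print(d_separated(G, {"Motive"}, {"Opportunity"}, {"Evidence"}))  # False: conditioning opens it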

  19. Numerical methods for the Lévy LIBOR model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    2010-01-01

    The aim of this work is to provide fast and accurate approximation schemes for the Monte-Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but the methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike the standard method, evolved independently of the other rates in the term structure. This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.
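
    A minimal sketch of the Picard idea on a stylized system of coupled rates: the coupling term in the drift is frozen at the previous iterate, so each rate can be evolved independently. The drift form and all numbers are illustrative stand-ins, not the Lévy LIBOR dynamics of the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        n_rates, n_steps, dt, sigma = 5, 100, 0.01, 0.2
        dW = rng.normal(0.0, np.sqrt(dt), (n_steps, n_rates))
        L = np.tile(np.full(n_rates, 0.05), (n_steps + 1, 1))   # iterate 0: constant paths

        def drift(rates):
            # stylized coupling: each rate's drift depends on the sum of later rates
            return -sigma**2 * np.cumsum(rates[::-1])[::-1] * 0.1

        # Picard sweeps: the coupled drift is evaluated on the frozen previous iterate,
        # so each rate's path can be advanced independently (and in parallel)
        for _ in range(4):
            prev = L.copy()
            for t in range(n_steps):
                L[t + 1] = L[t] + drift(prev[t]) * L[t] * dt + sigma * L[t] * dW[t]
        print("terminal rates:", L[-1].round(4))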

  1. Soybean yield modeling using bootstrap methods for small samples

    Energy Technology Data Exchange (ETDEWEB)

    Dalposso, G.A.; Uribe-Opazo, M.A.; Johann, J.A.

    2016-11-01

    One of the problems that occurs when working with regression models concerns the sample size: since the statistical methods used in inferential analyses are asymptotic, if the sample is small the analysis may be compromised because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not require knowledge of the probability distribution that generated the original sample. In this work we used a small set of soybean yield data together with physical and chemical soil properties to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points, and determination of confidence intervals for the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties which were significant in the construction of the soybean yield regression model, to construct the confidence intervals of the parameters, and to identify the points that had great influence on the estimated parameters. (Author)
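
    A minimal sketch of non-parametric (case-resampling) bootstrap confidence intervals for regression coefficients on a small synthetic sample; the data, sample size, and replicate count are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 29                                          # small sample, as in the study
        X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
        y = X @ np.array([2.0, 1.5, 0.0]) + rng.normal(0, 1.0, n)   # synthetic yield data

        def ols(X, y):
            return np.linalg.lstsq(X, y, rcond=None)[0]

        B = 2000
        boot = np.empty((B, X.shape[1]))
        for b in range(B):
            idx = rng.integers(0, n, n)                 # resample cases with replacement
            boot[b] = ols(X[idx], y[idx])

        ci = np.percentile(boot, [2.5, 97.5], axis=0)   # percentile confidence intervals
        print("estimates:", ols(X, y).round(3))
        print("95% CIs:", np.round(ci.T, 3))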

  2. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge of accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account the tunnel structures, tunnel defects, potential failures, and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented for mining the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers the actual defects and the possible risks of defects obtained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and the critical risks of defects. This paper further develops accident causation network modeling methods, which can provide guidance for specific maintenance measures.
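
    A minimal sketch of the Apriori pruning idea for mining structure-defect correlations, using invented toy inspection records and an assumed support threshold; the paper's improved algorithm and RRT construction are not reproduced.

        from itertools import combinations
        from collections import Counter

        # toy records: attributes observed together in one tunnel inspection
        records = [
            {"lining:concrete", "defect:crack", "defect:leakage"},
            {"lining:concrete", "defect:crack"},
            {"lining:masonry", "defect:leakage"},
            {"lining:concrete", "defect:crack", "defect:spalling"},
        ]
        min_support = 0.5

        # level 1: frequent single items
        counts = Counter(item for r in records for item in r)
        L1 = {i for i, c in counts.items() if c / len(records) >= min_support}

        # level 2: candidate pairs built only from frequent items (Apriori pruning)
        pairs = Counter(frozenset(p) for r in records
                        for p in combinations(sorted(r & L1), 2))
        for pair, c in pairs.items():
            if c / len(records) >= min_support:
                print(sorted(pair), "support =", c / len(records))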

  3. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

    Full Text Available Model updating is an effective means of damage identification, and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE) model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF) is rarely used, as it usually changes dramatically with the updating parameters. This paper presents a new surrogate-model-based model updating method taking advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC) is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO) algorithm is introduced to obtain the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and by experimental test data from a laboratory three-story structure.
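
    A minimal sketch of an FDAC-style objective, assuming FRFs stacked by frequency line; the single-DOF FRFs and damping value are illustrative stand-ins for measured and numerical data.

        import numpy as np

        def fdac(H_exp, H_num):
            """FDAC matrix between measured and numerical FRFs.
            Rows are frequency lines, columns are response DOFs (complex)."""
            num = np.abs(H_exp.conj() @ H_num.T)**2
            den = (np.sum(np.abs(H_exp)**2, axis=1)[:, None] *
                   np.sum(np.abs(H_num)**2, axis=1)[None, :])
            return num / den

        w = np.linspace(1.0, 20.0, 50)
        frf = lambda wn: (1.0 / (wn**2 - w**2 + 0.1j * w))[:, None]   # 1-DOF stand-in
        F = fdac(frf(10.0), frf(10.5))                 # "measured" vs "numerical"
        # updating objective: drive the diagonal correlation toward 1
        print("mean diagonal FDAC:", float(F.diagonal().mean()))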

  4. Review of Methods for Buildings Energy Performance Modelling

    Science.gov (United States)

    Krstić, Hrvoje; Teni, Mihaela

    2017-10-01

    The research presented in this paper gives a brief review of the methods used for modelling the energy performance of buildings, together with a comprehensive review of the advantages and disadvantages of the available methods and of the input parameters used. The European EPBD directive obliges the implementation of an energy certification procedure which gives insight into building energy performance via existing energy certificate databases. Some of the methods for modelling building energy performance mentioned in this paper were developed using data sets of buildings which have already undergone the energy certification procedure. Such a database is used in this paper; the majority of buildings in the database have already undergone some form of partial retrofitting - replacement of windows or installation of thermal insulation - but still have poor energy performance. The case study presented in this paper utilizes an energy certificate database obtained from residential units in Croatia (over 400 buildings) in order to determine the dependence between building energy performance and the variables in the database by using statistical dependency tests. Building energy performance in the database is expressed as a building energy efficiency rating (from A+ to G), which is based on the specific annual energy need for heating under reference climatic data [kWh/(m2a)]. The independent variables in the database are the surfaces and volume of the conditioned part of the building, the building shape factor, the energy used for heating, CO2 emission, building age, and year of reconstruction. The results give an insight into the possibilities of the methods used for modelling building energy performance, together with an analysis of the dependencies between building energy performance as the dependent variable and the independent variables from the database. The presented results could be used for the development of new building energy performance models.

  5. A MODEL AND CONTROLLER REDUCTION METHOD FOR ROBUST CONTROL DESIGN.

    Energy Technology Data Exchange (ETDEWEB)

    YUE,M.; SCHLUETER,R.

    2003-10-20

    A bifurcation subsystem based model and controller reduction approach is presented. Using this approach, a robust µ-synthesis SVC control is designed for interarea oscillation and voltage control based on a small reduced-order bifurcation subsystem model of the full system. The control synthesis problem is posed by structured uncertainty modeling and control configuration formulation using the bifurcation subsystem knowledge of the nature of the interarea oscillation caused by a specific uncertainty parameter. The bifurcation subsystem method plays a key role in this paper because it provides (1) a bifurcation parameter for uncertainty modeling; (2) a criterion to reduce the order of the resulting MSVC control; and (3) a low-order model for a bifurcation subsystem based SVC (BMSVC) design. The use of the model of the bifurcation subsystem to produce a low-order controller simplifies the control design and reduces the computational effort so significantly that the robust µ-synthesis control can be applied to large systems where the computation would otherwise make robust control design impractical. The RGA analysis and time simulation show that the reduced BMSVC control design captures the center manifold dynamics and uncertainty structure of the full system model and is capable of stabilizing the full system and achieving satisfactory control performance.

  6. A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series

    Directory of Open Access Journals (Sweden)

    Fernando Luiz Cyrino Oliveira

    2014-01-01

    Full Text Available The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics; as such, it can be considered unique in the world. It is a high-dimension hydrothermal system with a huge share of hydro plants. Such strong dependency on hydrological regimes implies uncertainties in energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of periodic autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models in the current system, particularly the identification of the autoregressive order "p" and the corresponding parameter estimation. This is followed by a proposal of a new approach to setting both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, led to a better use of water resources in energy operation planning.

  7. Modeling Music Emotion Judgments Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Naresh N. Vempala

    2018-01-01

    Full Text Available Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments, including neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings; input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
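
    A minimal sketch of the modeling setup, assuming scikit-learn and synthetic stand-ins for the audio features and the valence/arousal judgments; the study's actual features and committee-machine ensembles are not reproduced.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(6)
        X = rng.normal(size=(60, 20))                  # stand-in audio features
        W = np.array([[0.8, 0.1], [0.2, 0.9]])
        y = np.tanh(X[:, :2] @ W + 0.1 * rng.normal(size=(60, 2)))   # valence/arousal stand-ins

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        scaler = StandardScaler().fit(X_tr)
        mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        mlp.fit(scaler.transform(X_tr), y_tr)
        print("held-out R^2:", round(mlp.score(scaler.transform(X_te), y_te), 3))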

  8. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak-scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal-model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and in annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac …

  9. Impacts modeling using the SPH particulate method. Case study

    International Nuclear Information System (INIS)

    Debord, R.

    1999-01-01

    The aim of this study is the modeling of the impact of molten metal on the reactor vessel head in the case of a core-meltdown accident. Modeling using the classical finite-element method alone is not sufficient; it requires coupling with particulate methods in order to take into account the behaviour of the corium. After a general introduction to particulate methods, the Nabor and SPH (smoothed particle hydrodynamics) methods are described. Then, the theoretical and numerical reliability of the SPH method is assessed using simple cases. In particular, the number of neighbours significantly influences the precision of the calculations, and the mesh of the structure must be adapted to the mesh of the fluid in order to reduce edge effects. Finally, this study has shown that the values of the artificial viscosity coefficients used in the simulation of the BERDA test performed by FZK Karlsruhe (Germany) are not correct; the domain of use of these coefficients was made more precise for a low-speed impact. (J.S.)
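
    A minimal sketch of the SPH building block such simulations rest on: kernel summation of density with a cubic-spline kernel in 1D. The particle mass and smoothing length are illustrative assumptions.

        import numpy as np

        def w_cubic(r, h):
            """Standard 1D cubic-spline SPH kernel."""
            q = np.abs(r) / h
            sigma = 2.0 / (3.0 * h)                    # 1D normalization constant
            return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

        x = np.linspace(0.0, 1.0, 51)                  # particles on a line
        m, h = 1.0 / 51, 0.04                          # particle mass, smoothing length
        rho = np.array([np.sum(m * w_cubic(xi - x, h)) for xi in x])
        print(rho[25], rho[0])                         # ~1.0 inside, lower at the free ends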

  10. Seamless Method- and Model-based Software and Systems Engineering

    Science.gov (United States)

    Broy, Manfred

    Today, engineering software-intensive systems is still more or less handicraft or, at most, at the level of manufacturing. Many steps are done ad hoc and not in a fully systematic way; applied methods, if any, are not scientifically justified or justified by empirical data, and as a result carrying out large software projects is still an adventure. However, there is no reason why the development of software-intensive systems cannot in the future be done with the same precision and scientific rigor as in established engineering disciplines. To do that, however, a number of scientific and engineering challenges have to be mastered. The first aims at a deep understanding of the essentials of carrying out such projects, which includes appropriate models and effective management methods. What is needed is a portfolio of models and methods, coming together with comprehensive tool support, as well as deep insight into the obstacles of developing software-intensive systems, and a portfolio of established and proven techniques and methods with clear profiles and rules indicating when each method is ready for application. In the following we argue that there is scientific evidence and there are enough research results so far to be confident that solid engineering of software-intensive systems can be achieved in the future. However, quite a number of scientific research problems still have to be solved.

  11. Finite-element method modeling of hyper-frequency structures

    International Nuclear Information System (INIS)

    Zhang, Min

    1990-01-01

    The modeling of microwave propagation problems, including eigenvalue problems and scattering problems, is accomplished by the finite element method with vector and scalar functionals. For the eigenvalue problem, propagation modes in waveguides and resonant modes in cavities can be calculated for an arbitrarily shaped structure with inhomogeneous material. Several microwave structures are solved in order to verify the program. One drawback associated with the vector functional is the appearance of spurious, or non-physical, solutions; a penalty function method has been introduced to reduce the spurious solutions. The adaptive charge method is originally proposed in this thesis to solve the waveguide scattering problem. This method, similar to the VSWR measuring technique, is more efficient than the matrix method for obtaining the reflection coefficient. Two waveguide discontinuity structures are calculated by the two methods and their results are compared. The adaptive charge method is also applied to a microwave plasma excitor, allowing us to understand the role of the different physical parameters of the excitor in the coupling of microwave energy to the plasma mode and to the mode without plasma. (author) [fr

  12. Modeling of Methods to Control Heat-Consumption Efficiency

    Science.gov (United States)

    Tsynaeva, E. A.; Tsynaeva, A. A.

    2016-11-01

    In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.

  13. A Method for Modeling of Floating Vertical Axis Wind Turbine

    DEFF Research Database (Denmark)

    Wang, Kai; Hansen, Martin Otto Laver; Moan, Torgeir

    2013-01-01

    It is of interest to investigate the potential advantages of the floating vertical axis wind turbine (FVAWT) due to its economical installation and maintenance. A novel 5 MW vertical axis wind turbine concept with a Darrieus rotor mounted on a semi-submersible support structure is proposed in this paper. In order to assess the technical and economic feasibility of this novel concept, a comprehensive simulation tool for modeling the floating vertical axis wind turbine is needed. This work presents the development of a coupled method for modeling the dynamics of a floating vertical axis wind turbine.

  14. (Environmental and geophysical modeling, fracture mechanics, and boundary element methods)

    Energy Technology Data Exchange (ETDEWEB)

    Gray, L.J.

    1990-11-09

    Technical discussions at the various sites visited centered on the application of boundary integral methods for environmental modeling, seismic analysis, and computational fracture mechanics in composite and 'smart' materials. The traveler also attended the International Association for Boundary Element Methods Conference in Rome, Italy. While many aspects of boundary element theory and applications were discussed in the papers, the dominant topic was the analysis and application of hypersingular equations. This has been the focus of recent work by the author, and thus the conference was highly relevant to research at ORNL.

  15. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    … have primarily been based on a Bayesian paradigm, i.e. prior information on the parameters is a prerequisite, but questions about undesirable side effects from the priors have been raised. We present a method, based on MCMC methods, that approximates profile log-likelihood functions in directed graphical models. The method is applied to data on whether a tendency to foetal loss is heritable. The data possess a complicated dependence structure due to replicate pregnancies for the same woman and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both …

  16. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, and system control datasets. The analysis of such data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians.

  17. Feature selection for MLP neural network: the use of random permutation of probabilistic outputs.

    Science.gov (United States)

    Yang, Jian-Bo; Shen, Kai-Quan; Ong, Chong-Jin; Li, Xiao-Ping

    2009-12-01

    This paper presents a new wrapper-based feature selection method for multilayer perceptron (MLP) neural networks. It uses a feature ranking criterion to measure the importance of a feature by computing the aggregate difference, over the feature space, of the probabilistic outputs of the MLP with and without the feature. Thus, a score of importance with respect to every feature can be provided using this criterion. Based on the numerical experiments on several artificial and real-world data sets, the proposed method performs, in general, better than several selected feature selection methods for MLP, particularly when the data set is sparse or has many redundant features. In addition, as a wrapper-based approach, the computational cost for the proposed method is modest.
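
    A minimal sketch in the spirit of the criterion, approximating "with and without the feature" by randomly permuting one input at a time and measuring the aggregate shift in the MLP's probabilistic outputs. The data, network size, and the permutation shortcut are assumptions, not the paper's exact procedure.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(7)
        X = rng.normal(size=(300, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only features 0 and 1 matter

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000,
                            random_state=0).fit(X, y)
        p_ref = clf.predict_proba(X)

        scores = []
        for j in range(X.shape[1]):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])        # destroy feature j's information
            scores.append(np.mean(np.abs(clf.predict_proba(Xp) - p_ref)))
        print("features ranked by importance:", np.argsort(scores)[::-1])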

  18. RF tunable devices and subsystems: methods of modeling, analysis, and applications

    CERN Document Server

    Gu, Qizheng

    2015-01-01

    This book serves as a hands-on guide to RF tunable devices, circuits, and subsystems. An innovative method of modeling for tunable devices and networks is described, along with a new tuning algorithm, an adaptive matching network control approach, and a novel filter frequency automatic control loop. The author provides readers with the necessary background and methods for designing and developing tunable RF networks/circuits and tunable RF front-ends, with an emphasis on applications to cellular communications. The book discusses the methods of characterizing, modeling, analyzing, and applying RF tunable devices and subsystems; explains the necessary methods of utilizing RF tunable devices and subsystems, rather than discussing the RF tunable devices themselves; presents and applies methods for MEMS tunable capacitors, which can be used for any RF tunable device; and uses analytic methods wherever possible, providing numerous closed-form solutions.

  19. Testing genotypes-phenotype relationships using permutation tests on association rules.

    Science.gov (United States)

    Shaikh, Mateen; Beyene, Joseph

    2015-02-01

    Association rule mining is a knowledge discovery technique which informs researchers about relationships between variables in data. These relationships can be focused to a specific set of response variables. We propose an augmented version of this method to discover groups of genotypes which relate to specific outcomes. We derive the methodology to find these candidate groups of genotypes and illustrate how the method works on data regarding neuroinvasive complications of West Nile virus and through simulation.
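
    A minimal sketch of a permutation test on a single genotype-phenotype association rule, assuming the rule's confidence as the test statistic and fully synthetic data; the paper's rule-mining stage is not reproduced.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 500
        genotype = rng.integers(0, 2, n)                # carrier of a candidate variant group
        phenotype = (rng.random(n) < np.where(genotype == 1, 0.35, 0.20)).astype(int)

        def confidence(g, p):
            """Confidence of the rule genotype=1 -> phenotype=1."""
            return p[g == 1].mean()

        obs = confidence(genotype, phenotype)
        # null distribution: permute phenotypes, breaking any genotype link
        null = np.array([confidence(genotype, rng.permutation(phenotype))
                         for _ in range(5000)])
        p_val = (1 + np.sum(null >= obs)) / (1 + len(null))
        print(f"rule confidence = {obs:.3f}, permutation p = {p_val:.4f}")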

  20. Alternative wind power modeling methods using chronological and load duration curve production cost models

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M R

    1996-04-01

    As an intermittent resource, capturing the temporal variation in wind power is an important issue in the context of utility production cost modeling. Many production cost models use a method that creates a cumulative probability distribution outside the time domain. The purpose of this report is to examine two production cost models that represent the two major model types: chronological and load duration curve models. This report is part of the ongoing research undertaken by the Wind Technology Division of the National Renewable Energy Laboratory in utility modeling and wind system integration.
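
    A minimal sketch contrasting the two treatments on synthetic data: a chronological model subtracts wind from load hour by hour, while a non-chronological (load duration curve) treatment sorts first and loses the load-wind correlation. All series are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(9)
        hours = 8760
        load = 800 + 200 * np.sin(np.arange(hours) * 2 * np.pi / 24) \
               + rng.normal(0, 50, hours)
        wind = np.clip(rng.weibull(2.0, hours) * 120, 0, 300)   # synthetic wind output [MW]

        net_chrono = np.sort(load - wind)[::-1]         # chronological: subtract, then sort
        net_naive = np.sort(load)[::-1] - np.sort(wind)[::-1]   # sorted outside the time domain

        print("peak net load, chronological:", round(net_chrono[0], 1))
        print("peak net load, non-chronological:", round(net_naive[0], 1))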

  1. Procedures and Methods of Digital Modeling in Representation Didactics

    Science.gov (United States)

    La Mantia, M.

    2011-09-01

    At the Bachelor degree course in Engineering/Architecture of the University "La Sapienza" of Rome, the courses in Design and Survey, in addition to covering the methods of representation and the application of descriptive geometry and survey in order to expand the student's vision and spatial conception, pay particular attention to the use of information technology for the preparation of design and survey drawings, achieving their goals through an educational path of "learning techniques, procedures and methods of modeling architectural structures". The fields of application involved two different educational areas, analysis and survey, ranging from the acquisition of the metric data (design or survey) to the development of the three-dimensional virtual model.

  2. Optimization Method of Fusing Model Tree into Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Yu Fang

    2017-01-01

    Full Text Available Partial Least Squares (PLS) cannot adapt to the characteristics of the data of many fields due to its own features: multiple independent variables, multiple dependent variables, and non-linearity. However, the Model Tree (MT) adapts well to nonlinear functions, being made up of many piecewise linear segments. Based on this, a new method combining PLS and MT to analyze and predict data is proposed: it builds a model tree from the principal components and explanatory variables extracted by PLS, and repeatedly extracts residual information to build further model trees until a satisfactory accuracy condition is met. Using data on the maxingshigan decoction of the monarch drug to treat asthma or cough, and two sample sets from the UCI Machine Learning Repository, the experimental results show that the new method improves both explanatory and predictive ability.
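
    A minimal sketch of the residual-fitting idea, assuming scikit-learn and a depth-limited regression tree as a stand-in for a true model tree (scikit-learn has no M5-style model tree); the data and the number of stages are illustrative.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(10)
        X = rng.normal(size=(200, 8))
        y = X[:, 0] * X[:, 1] + 2.0 * X[:, 2] + 0.1 * rng.normal(size=200)

        pls = PLSRegression(n_components=3).fit(X, y)   # stage 1: linear PLS part
        resid = y - pls.predict(X).ravel()

        trees = []
        for _ in range(3):                              # iterate on the residuals
            t = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, resid)
            resid -= t.predict(X)
            trees.append(t)

        pred = pls.predict(X).ravel() + sum(t.predict(X) for t in trees)
        print("train RMSE:", round(float(np.sqrt(np.mean((y - pred)**2))), 3))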

  3. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    Full Text Available As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation, together with two prognostic equations, for the vertical speed w and the nonhydrostatic part of the pressure p'. With properly formulated governing equations, at each time step the dynamic part of the model is first integrated as in the original hydrostatic model, and the nonhydrostatic contributions are then added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part integration, all physics packages of the original hydrostatic model can be used directly in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the code of the nonhydrostatic model comes from the original hydrostatic model; extra code is only needed for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic part of the dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and of density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and in simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from a hydrostatic to a nonhydrostatic model.

  4. Multigrid Methods for A Mixed Finite Element Method of The Darcy-Forchheimer Model.

    Science.gov (United States)

    Huang, Jian; Chen, Long; Rui, Hongxing

    2018-01-01

    An efficient nonlinear multigrid method for a mixed finite element method of the Darcy-Forchheimer model is constructed in this paper. A Peaceman-Rachford type iteration is used as a smoother to decouple the nonlinearity from the divergence constraint. The nonlinear equation can be solved element-wise with a closed formula. The linear saddle point system for the constraint is reduced into a symmetric positive definite system of Poisson type. Furthermore, an empirical choice of the parameter used in the splitting is proposed, and the resulting multigrid method is robust to the so-called Forchheimer number, which controls the strength of the nonlinearity. By comparing the number of iterations and the CPU time of different solvers in several numerical experiments, our multigrid method is shown to converge at a rate independent of the mesh size and the Forchheimer number, and with a nearly linear computational cost.

  5. HyPEP FY06 Report: Models and Methods

    Energy Technology Data Exchange (ETDEWEB)

    DOE report

    2006-09-01

    The Department of Energy envisions the next-generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected the VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex, and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast-running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts, and its cost models will make HyPEP well suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. This FY-06 report includes a description of reference designs, the methods used in this study, and the models and computational strategies developed for the first-year effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory respectively, will be benchmarked against HyPEP results …

  6. Chebyshev super spectral viscosity method for a fluidized bed model

    International Nuclear Information System (INIS)

    Sarra, Scott A.

    2003-01-01

    A Chebyshev super spectral viscosity method and operator splitting are used to solve a hyperbolic system of conservation laws with a source term modeling a fluidized bed. The fluidized bed displays a slugging behavior which corresponds to shocks in the solution. A modified Gegenbauer postprocessing procedure is used to obtain a solution which is free of oscillations caused by the Gibbs-Wilbraham phenomenon in the spectral viscosity solution. Conservation is maintained by working with unphysical negative particle concentrations

  7. A Model Based Security Testing Method for Protocol Implementation

    Directory of Open Access Journals (Sweden)

    Yu Long Fu

    2014-01-01

    Full Text Available The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of a protocol implementation.

  9. Methods for landslide susceptibility modelling in Lower Austria

    Science.gov (United States)

    Bell, Rainer; Petschko, Helene; Glade, Thomas; Leopold, Philip; Heiss, Gerhard; Proske, Herwig; Granica, Klaus; Schweigl, Joachim; Pomaroli, Gilbert

    2010-05-01

    Landslide susceptibility modelling and the implementation of the resulting maps are still a challenge for geoscientists and spatial and infrastructure planners. Particularly on a regional scale, landslide processes and their dynamics are poorly understood. Furthermore, the availability of appropriate spatial data in high resolution is often a limiting factor for modelling high-quality landslide susceptibility maps for large study areas. However, these maps form an important basis for preventive spatial planning measures, so new methods have to be developed, especially focussing on the implementation of the final maps in spatial planning processes. The main objective of the project "MoNOE" (Method development for landslide susceptibility modelling in Lower Austria) is to design a method for landslide susceptibility modelling for a large study area (about 10,200 km²) and to produce landslide susceptibility maps which are finally implemented in the spatial planning strategies of the federal state of Lower Austria. The project focuses primarily on the landslide types fall and slide. To enable susceptibility modelling, landslide inventories for the respective landslide types must be compiled, and the relevant data has to be gathered, prepared, and homogenized. Based on these data, new methods must be developed to meet the needs of the spatial planning strategies. Considerable effort will also be spent on the validation of the resulting maps for each landslide type. A great challenge will be the combination of the susceptibility maps for slides and falls into one single susceptibility map (which is requested by the government) and the definition of the final visualisation. Since numerous landslides have been favoured or even triggered by human impact, the human influence on landslides will also have to be investigated, and possibilities to integrate the respective findings in regional susceptibility modelling will be explored. According to these objectives, the project is …

  10. Semi-Lagrangian methods in air pollution models

    Directory of Open Access Journals (Sweden)

    A. B. Hansen

    2011-06-01

    Full Text Available Various semi-Lagrangian methods are tested with respect to advection in air pollution modeling. The aim is to find a method fulfilling as many of the desirable properties by Rasch andWilliamson (1990 and Machenhauer et al. (2008 as possible. The focus in this study is on accuracy and local mass conservation.

    The methods tested are, first, classical semi-Lagrangian cubic interpolation, see e.g. Durran (1999); second, semi-Lagrangian cubic cascade interpolation, by Nair et al. (2002); third, semi-Lagrangian cubic interpolation with the modified interpolation weights, Locally Mass Conserving Semi-Lagrangian (LMCSL), by Kaas (2008); and last, semi-Lagrangian cubic interpolation with a locally mass conserving monotonic filter by Kaas and Nielsen (2010).

    Semi-Lagrangian (SL) interpolation is a classical method for atmospheric modeling, cascade interpolation is more efficient computationally, modified interpolation weights assure mass conservation, and the locally mass conserving monotonic filter imposes monotonicity.

    All schemes are tested with advection alone or with advection and chemistry together, under both typical rural and urban conditions, using different temporal and spatial resolutions. The methods are compared with a current state-of-the-art scheme, Accurate Space Derivatives (ASD; see Frohn et al., 2002), presently used at the National Environmental Research Institute (NERI) in Denmark. To enable a consistent comparison, only non-divergent flow configurations are tested.

    The test cases are based either on the traditional slotted cylinder or on the rotating cone, where the schemes' ability to model both steep gradients and slopes is challenged.

    The tests showed that the locally mass conserving monotonic filter improved the results significantly for some, but not all, of the test cases. It was found that the semi-Lagrangian schemes, in almost every case, were not able to outperform the current ASD scheme
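
    As a point of reference for the schemes compared above, the following is a minimal sketch of the classical semi-Lagrangian step with cubic Lagrange interpolation on a 1-D periodic grid. It is not the LMCSL or filtered variant, and the grid size, velocity, and slotted-pulse initial condition are invented for illustration; the final print shows that the classical scheme is not exactly mass conserving.

```python
import numpy as np

def semi_lagrangian_step(c, u, dt, dx):
    """One classical semi-Lagrangian step: trace departure points back
    along the flow and evaluate a cubic Lagrange interpolant there."""
    n = c.size
    x = np.arange(n) * dx
    xd = (x - u * dt) % (n * dx)            # departure points (periodic)
    j = np.floor(xd / dx).astype(int)       # grid index left of departure point
    a = xd / dx - j                         # fractional offset in [0, 1)
    jm1, jp1, jp2 = (j - 1) % n, (j + 1) % n, (j + 2) % n
    # Cubic Lagrange weights for nodes at offsets -1, 0, 1, 2.
    w0 = -a * (a - 1) * (a - 2) / 6
    w1 = (a + 1) * (a - 1) * (a - 2) / 2
    w2 = -(a + 1) * a * (a - 2) / 2
    w3 = (a + 1) * a * (a - 1) / 6
    return w0 * c[jm1] + w1 * c[j] + w2 * c[jp1] + w3 * c[jp2]

n, dx, u, dt = 200, 1.0, 1.5, 0.5           # Courant number 0.75
c = np.where((np.arange(n) > 40) & (np.arange(n) < 60), 1.0, 0.0)  # slotted pulse
mass0 = c.sum()
for _ in range(300):
    c = semi_lagrangian_step(c, u, dt, dx)
print("relative mass change:", (c.sum() - mass0) / mass0)
```

    Tracing departure points backwards and interpolating there is the whole of the classical scheme; the LMCSL modification would rescale the interpolation weights so that mass is conserved locally.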

  11. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Science.gov (United States)

    Krishnamoorthi, Shankarjee; Perotti, Luigi E; Borgstrom, Nils P; Ajijola, Olujimi A; Frid, Anna; Ponnaluri, Aditya V; Weiss, James N; Qu, Zhilin; Klug, William S; Ennis, Daniel B; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  12. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Directory of Open Access Journals (Sweden)

    Shankarjee Krishnamoorthi

    Full Text Available We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  13. Sparse aerosol models beyond the quadrature method of moments

    Science.gov (United States)

    McGraw, Robert

    2013-05-01

    This study examines a class of sparse aerosol models (SAMs) derived from linear programming (LP). The widely used quadrature method of moments (QMOM) is shown to fall into this class. Here it is shown how other sparse aerosol models can be constructed, which are not based on moments of the particle size distribution. The new methods enable one to bound atmospheric aerosol physical and optical properties using arbitrary combinations of model parameters and measurements. Rigorous upper and lower bounds, e.g. on the number of aerosol particles that can activate to form cloud droplets, can be obtained this way from measurement constraints that may include total particle number concentration and size distribution moments. The new LP-based methods also allow a much wider range of aerosol properties, such as light backscatter or extinction coefficient, which are not easily connected to particle size moments, to be assimilated into the list of constraints. Finally, it is shown that many of these more general aerosol properties can be tracked directly in an aerosol dynamics simulation, using SAMs, in much the same way that moments are tracked directly in the QMOM.
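
    The LP idea can be sketched in a few lines: discretize the size axis, treat the (nonnegative) number density at each size as an unknown, impose the measured moments as equality constraints, and then minimize and maximize the property of interest. The toy Python example below, with an invented size grid, a lognormal-like "truth", and an assumed activation radius of 0.2 um, bounds the activated particle number with scipy.optimize.linprog; it illustrates the mechanism only and is not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Invented setup: bound the number of particles larger than a critical
# radius, given only a few measured radial moments m_k = sum_i w_i r_i**k.
r = np.linspace(0.01, 1.0, 200)                  # discretized radii (um)
true_w = np.exp(-0.5 * ((np.log(r) - np.log(0.1)) / 0.6) ** 2)  # toy "truth"
moments = np.array([(true_w * r**k).sum() for k in range(4)])   # "measurements"

A_eq = np.vstack([r**k for k in range(4)])       # moment equality constraints
c_obj = (r >= 0.2).astype(float)                 # count beyond activation radius

results = []
for sign in (+1, -1):                            # +1: minimize, -1: maximize
    res = linprog(sign * c_obj, A_eq=A_eq, b_eq=moments, bounds=(0, None))
    results.append(sign * res.fun)
lo_b, hi_b = results
print(f"activated number lies in [{lo_b:.3f}, {hi_b:.3f}] "
      f"(true value {(true_w * c_obj).sum():.3f})")
```

    Any linear functional of the size distribution (an optical coefficient with a known kernel, for example) can be bounded the same way by changing the objective vector.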

  14. The Quadrotor Dynamic Modeling and Indoor Target Tracking Control Method

    Directory of Open Access Journals (Sweden)

    Dewei Zhang

    2014-01-01

    Full Text Available A reliable nonlinear dynamic model of the quadrotor is presented. The nonlinear dynamic model includes actuator dynamics and aerodynamic effects. Since the rotors run near a constant hovering speed, the dynamic model is simplified at the hovering operating point. Based on the simplified nonlinear dynamic model, PID controllers with feedback linearization and feedforward control are proposed using the backstepping method. These controllers are used to control both the attitude and position of the quadrotor. A fully custom quadrotor is developed to verify the correctness of the dynamic model and control algorithms. The attitude of the quadrotor is measured by an inertial measurement unit (IMU). The position of the quadrotor in a GPS-denied environment, especially an indoor environment, is estimated from downward camera and ultrasonic sensor measurements. The validity and effectiveness of the proposed dynamic model and control algorithms are demonstrated by experimental results. It is shown that the vehicle achieves robust vision-based hovering and moving target tracking control.
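
    To show the control structure at its simplest, here is a PID loop closed around linearized roll dynamics near hover (Ixx * phi_ddot = tau). The inertia, gains, and initial tilt are invented, and the sketch omits the feedback-linearization and backstepping layers of the actual design; it only shows why a PID loop stabilizes the hover-linearized attitude.

```python
import numpy as np

class PID:
    """Simple PID controller with an optional feedforward term (illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, err, feedforward=0.0):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return feedforward + self.kp * err + self.ki * self.integral + self.kd * deriv

# Linearized roll dynamics near hover: Ixx * phi_ddot = tau (invented numbers).
Ixx, dt = 8.0e-3, 0.002
phi, phi_dot, target = 0.3, 0.0, 0.0        # start 0.3 rad off level
ctrl = PID(kp=0.8, ki=0.1, kd=0.12, dt=dt)
for _ in range(2000):                       # 4 seconds of simulated flight
    tau = ctrl.update(target - phi)
    phi_dot += (tau / Ixx) * dt
    phi += phi_dot * dt
print(f"roll after 4 s: {phi:.4f} rad")
```

    In the real cascade, an outer position loop would supply the attitude targets that this inner loop tracks.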

  15. Permutation invariant potential energy surfaces for polyatomic reactions using atomistic neural networks

    International Nuclear Information System (INIS)

    Kolb, Brian; Zhao, Bin; Guo, Hua; Li, Jun; Jiang, Bin

    2016-01-01

    The applicability and accuracy of the Behler-Parrinello atomistic neural network method for fitting reactive potential energy surfaces is critically examined in three systems, H + H2 → H2 + H, H + H2O → H2 + OH, and H + CH4 → H2 + CH3. A pragmatic Monte Carlo method is proposed to make an efficient choice of the atom-centered mapping functions. The accuracy of the potential energy surfaces is not only tested by fitting errors but also validated by direct comparison in dynamically important regions and by quantum scattering calculations. Our results suggest this method is both accurate and efficient in representing multidimensional potential energy surfaces even when dissociation continua are involved.
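
    The atom-centered mapping functions referred to here are, in the Behler-Parrinello construction, symmetry functions whose sums over neighbours make the descriptors invariant under permutation of like atoms. A minimal sketch of the radial type, with invented eta values and cutoff, is shown below; the per-atom networks that consume these descriptors are omitted.

```python
import numpy as np

def radial_symmetry_functions(coords, eta_values, r_c=6.0):
    """Atom-centered radial symmetry functions G = sum_j exp(-eta r_ij^2) f_c(r_ij).
    Summing over neighbours makes each G invariant to permutations of like atoms,
    the key property exploited by Behler-Parrinello networks."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    fc = np.where(d < r_c, 0.5 * (np.cos(np.pi * d / r_c) + 1.0), 0.0)
    np.fill_diagonal(fc, 0.0)                          # exclude self-interaction
    return np.stack([(np.exp(-eta * d**2) * fc).sum(axis=1)
                     for eta in eta_values], axis=1)   # shape (n_atoms, n_eta)

# Three atoms, all treated as the same species for this toy check.
coords = np.array([[0.0, 0.0, 0.0], [0.74, 0.0, 0.0], [3.0, 1.0, 0.0]])
g = radial_symmetry_functions(coords, eta_values=[0.5, 1.0, 2.0])
g_swapped = radial_symmetry_functions(coords[[1, 0, 2]], eta_values=[0.5, 1.0, 2.0])
print(np.allclose(np.sort(g, axis=0), np.sort(g_swapped, axis=0)))  # True
```

    The final print confirms that swapping two like atoms merely permutes the descriptor rows, leaving the descriptor set, and hence any summed atomic-energy model, unchanged.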

  16. Thermal Modeling Method Improvements for SAGE III on ISS

    Science.gov (United States)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial development of efficient methods for SAGE III; the current paper describes additional improvements made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, both TVAC chambers at Langley used to test the payload, along with their ground support equipment, were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, eliminating the need for separate TVAC models. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any potential new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was

  17. Modelling of packet traffic with matrix analytic methods

    DEFF Research Database (Denmark)

    Andersen, Allan T.

    1995-01-01

    ... did not reveal any adverse behaviour. In fact the observed traffic seemed very close to what would be expected from Poisson traffic. The Changeover/Changeback procedure in SS7, which is used to redirect traffic in case of link failure, has been analyzed. The transient behaviour during a Changeover...... scenario was modelled using Markovian models. The Ordinary Differential Equations arising from these models were solved numerically. The results obtained seemed very similar to those obtained using a different method in previous work by Akinpelu & Skoog (1985). Recent measurement studies of packet traffic...... is found by noting the close relationship with the expressions for the corresponding infinite queue. For the special case of a batch Poisson arrival process this observation makes it possible to express the queue length at an arbitrary epoch in terms of the corresponding queue lengths for the infinite case....

  18. Methods to model-check parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O. S.; McCune, W.; Lusk, E.

    2003-01-01

    We report on an effort to develop methodologies for formal verification of parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of communicating processes. While the individual components of the collection execute simple algorithms, their interaction leads to unexpected errors that are difficult to uncover by conventional means. Two verification approaches are discussed here: the standard model checking approach using the software model checker SPIN and the nonstandard use of a general-purpose first-order resolution-style theorem prover OTTER to conduct the traditional state space exploration. We compare modeling methodology and analyze performance and scalability of the two methods with respect to verification of MPD

  19. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  20. Quantum Monte Carlo method for models of molecular nanodevices

    Science.gov (United States)

    Arrachea, Liliana; Rozenberg, Marcelo J.

    2005-07-01

    We introduce a quantum Monte Carlo technique to calculate exactly at finite temperatures the Green function of a fermionic quantum impurity coupled to a bosonic field. While the algorithm is general, we focus on the single impurity Anderson model coupled to a Holstein phonon as a schematic model for a molecular transistor. We compute the density of states at the impurity in a large range of parameters, to demonstrate the accuracy and efficiency of the method. We also obtain the conductance of the impurity model and analyze different regimes. The results show that even in the case when the effective attractive phonon interaction is larger than the Coulomb repulsion, a Kondo-like conductance behavior might be observed.

  1. Image to Point Cloud Method of 3D-MODELING

    Science.gov (United States)

    Chibunichev, A. G.; Galakhov, V. P.

    2012-07-01

    This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before the search for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by an operator interactively on a single image, and the spatial coordinates of the model are calculated automatically from the point cloud. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
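
    The SIFT correspondence step can be sketched with OpenCV. In the example below the "photo" and the quasi-image are stand-ins (a smoothed random texture and a shifted copy), since rendering a real quasi-image from scanner points is outside the scope of a few lines; the resection step that turns matches into exterior orientation parameters is likewise only indicated.

```python
import cv2
import numpy as np

# Stand-in data: a random texture as the "photo" and a shifted copy as the
# rendered quasi-image (real inputs would come from the camera and from
# splatting scanner points into an image plane).
rng = np.random.default_rng(0)
img = (rng.random((240, 320)) * 255).astype(np.uint8)
img = cv2.GaussianBlur(img, (0, 0), 2)
M = np.float32([[1, 0, 15], [0, 1, 7]])          # known shift for the toy test
quasi = cv2.warpAffine(img, M, (320, 240))

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img, None)
kp2, des2 = sift.detectAndCompute(quasi, None)

# Lowe ratio-test matching; the matched 2-D/3-D pairs would then feed a
# resection step that recovers the exterior orientation of the image.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [p[0] for p in matches
        if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
print(len(good), "correspondences after the ratio test")
```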

  2. A novel duplicate images detection method based on PLSA model

    Science.gov (United States)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2012-01-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results; clustering the duplicates together facilitates users' viewing. A novel method is presented to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing affine-invariant visual features. A statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is then utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images will be close to each other in the derived semantic space. Based on this, a simple clustering process can successfully detect duplicate images and cluster them together. Compared with methods based on comparing hash values of visual words, this method is more robust to alterations at the visual-feature level. Experiments demonstrate the effectiveness of the method.
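
    PLSA itself reduces to a small EM iteration. The sketch below implements it directly in numpy on a random toy image-by-visual-word count matrix (no tempering or smoothing, invented sizes); duplicate detection would then cluster the rows of p(z|d), e.g. by cosine similarity.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Bare-bones PLSA via EM: factorize the image-by-visual-word count
    matrix into p(w|z) and p(z|d). Illustrative only."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_wz = rng.random((n_topics, n_words)); p_wz /= p_wz.sum(1, keepdims=True)
    p_zd = rng.random((n_docs, n_topics)); p_zd /= p_zd.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities p(z|d,w), shape (docs, topics, words).
        joint = p_zd[:, :, None] * p_wz[None, :, :]
        p_zdw = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate the two factor matrices.
        weighted = counts[:, None, :] * p_zdw
        p_wz = weighted.sum(axis=0); p_wz /= p_wz.sum(1, keepdims=True) + 1e-12
        p_zd = weighted.sum(axis=2); p_zd /= p_zd.sum(1, keepdims=True) + 1e-12
    return p_zd, p_wz

# Duplicate images end up with nearly identical topic vectors p(z|d).
counts = np.random.default_rng(1).integers(0, 5, size=(6, 40))
p_zd, _ = plsa(counts, n_topics=3)
print(np.round(p_zd, 2))
```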

  3. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The survey chooses, according to pre-established sampling criteria, a certain number of dwellings across the nation and records the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model based methods, which tend to

  4. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Science.gov (United States)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, for which a good trip production model is essential. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are having a sample capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and applied in trip production modelling. Therefore, it is necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The research results are presented as follows. Statistics provides a method to calculate the span of prediction values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that the sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
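
    The confidence interval of the predicted value advocated here is available directly in standard tooling. A small sketch with statsmodels on invented household-size/trip data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
household_size = rng.uniform(1, 6, 40)                 # invented survey sample
trips = 0.8 + 1.9 * household_size + rng.normal(0, 1.0, 40)

X = sm.add_constant(household_size)
model = sm.OLS(trips, X).fit()
print(f"R2 = {model.rsquared:.3f}")

# Confidence interval of the predicted value at new household sizes:
x_new = sm.add_constant(np.array([2.0, 4.0, 5.5]))
pred = model.get_prediction(x_new)
print(pred.summary_frame(alpha=0.05)[["mean", "mean_ci_lower", "mean_ci_upper"]])
```

    summary_frame distinguishes the interval for the mean response (mean_ci_*) from the wider interval for an individual prediction (obs_ci_*); an R2-only assessment sees neither.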

  5. Multicomponent gas mixture air bearing modeling via lattice Boltzmann method

    Science.gov (United States)

    Tae Kim, Woo; Kim, Dehee; Hari Vemuri, Sesha; Kang, Soo-Choon; Seung Chung, Pil; Jhon, Myung S.

    2011-04-01

    As the demand for ultrahigh recording density increases, the development of an integrated head disk interface (HDI) modeling tool, which considers the air bearing and the lubricant film morphology simultaneously, is of paramount importance. To overcome the shortcomings of existing models based on the modified Reynolds equation (MRE), the lattice Boltzmann method (LBM) is a natural choice for modeling high Knudsen number (Kn) flows owing to its advantages over conventional methods, and its transient and parallel nature makes it an attractive tool for next generation air bearing design. Although the LBM has been successfully applied to single-component systems, multicomponent analysis has been thwarted by the complexity of coupling the terms for each component. Previous studies have shown good results in modeling immiscible component mixtures by use of an interparticle potential. In this paper, we extend our LBM model to predict the flow rate of high Kn pressure-driven flows in multicomponent gas mixture air bearings, such as the air-helium system. For accurate modeling of slip conditions near the wall, we adopt an LBM scheme with spatially dependent relaxation times for air bearings in HDIs. To verify the accuracy of our code, we tested our scheme on simple two-dimensional benchmark flows. In the pressure-driven flow of an air-helium mixture, we found that a simple linear combination of pure helium and pure air flow rates, based on the helium and air mole fractions, gives considerable error when compared to our LBM calculation. Hybridization with the existing MRE database can be adopted with the procedure reported here to develop state-of-the-art slider design software.

  6. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model the final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a response surface equation (RSE) by multiple regression. Data obtained from a major airline's operations of a passenger transport aircraft type into Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on nonlinear regression is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state-of-the-art.

  7. Setting at point on critical assembly of modelling methods for fast neutron power reactors

    International Nuclear Information System (INIS)

    Zhukov, A.V.; Kazanskij, Y.A.; Kochetkov, A.L.; Matveev, V.I.; Mironovich, Y.N.

    1986-01-01

    In this report the authors examine two modelling methods. In the first method the model faithfully reproduces the flux distribution. In the second method the reactor model consists of a central mixed-oxide fuel zone surrounded by uranium [fr]

  8. IMPROVED NUMERICAL METHODS FOR MODELING RIVER-AQUIFER INTERACTION.

    Energy Technology Data Exchange (ETDEWEB)

    Tidwell, Vincent Carroll; Sue Tillery; Phillip King

    2008-09-01

    A new option for Local Time-Stepping (LTS) was developed for use in conjunction with the multiple-refined-area grid capability of the U.S. Geological Survey's (USGS) groundwater modeling program, MODFLOW-LGR (MF-LGR). The LTS option allows each local, refined-area grid to simulate multiple stress periods within each stress period of the coarser, regional grid. This option is an alternative to the current method of MF-LGR, whereby the refined grids are required to have the same stress period and time-step structure as the coarse grid. The MF-LGR method for simulating multiple refined grids essentially defines each grid as a complete model, then, for each coarse-grid time-step, iteratively runs each model until the head and flux changes at the interfacing boundaries of the models are less than specified tolerances. Use of the LTS option is illustrated in two hypothetical test cases, consisting of a dual-well pumping system and a hydraulically connected stream-aquifer system, and one field application. Each of the hypothetical test cases was simulated with multiple scenarios, including an LTS scenario that combined a monthly stress period for the coarse-grid model with a daily stress period for the refined-grid model. The other scenarios simulated various combinations of grid spacing and temporal refinement using standard MODFLOW model constructs. The field application simulated an irrigated corridor along the Lower Rio Grande River in New Mexico, with refinement of a small agricultural area within the corridor. The results from the LTS scenarios for the hypothetical test cases closely replicated the results from the true scenarios in the refined areas of interest. The head errors of the LTS scenarios were much smaller than those of the other scenarios in relation to the true solution, and the run times for the LTS models were three to six times faster than the true models for the dual-well and stream-aquifer test cases, respectively. The results of the field

  9. Improved biocatalysts from a synthetic circular permutation library of the flavin-dependent oxidoreductase old yellow enzyme.

    Science.gov (United States)

    Daugherty, Ashley B; Govindarajan, Sridhar; Lutz, Stefan

    2013-09-25

    Members of the old yellow enzyme (OYE) family are widely used, effective biocatalysts for the stereoselective trans-hydrogenation of activated alkenes. To further expand their substrate scope and improve catalytic performance, we have applied a protein engineering strategy called circular permutation (CP) to enhance the function of OYE1 from Saccharomyces pastorianus. CP can influence a biocatalyst's function by altering protein backbone flexibility and active site accessibility, both critical performance features because the catalytic cycle of OYE1 is thought to involve rate-limiting conformational changes. To explore the impact of CP throughout the OYE1 protein sequence, we implemented a highly efficient approach for cell-free cpOYE library preparation by combining whole-gene synthesis with in vitro transcription/translation. The versatility of such an ex vivo system was further demonstrated by the rapid and reliable functional evaluation of library members under variable environmental conditions with three reference substrates: ketoisophorone, cinnamaldehyde, and (S)-carvone. Library analysis identified over 70 functional OYE1 variants, with several biocatalysts exhibiting over an order of magnitude improved catalytic activity. Although the catalytic gains of individual cpOYE library members vary by substrate, the locations of the new protein termini in functional variants for all tested substrates fall within the same four distinct loop/lid regions near the active site. Our findings demonstrate the importance of these structural elements in enzyme function and support the hypothesis of conformational flexibility as a limiting factor for catalysis in wild-type OYE.
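
    At the sequence level, building a circular permutation library is a simple rotation: fuse the native termini through a linker and reopen the backbone at each position. The toy sketch below uses an invented eight-residue sequence and an assumed GSG linker; a real cpOYE library would of course start from the OYE1 sequence and add the elements needed for expression.

```python
def circular_permutants(seq, linker="GSG"):
    """Enumerate circular permutants of a protein sequence: join the native
    termini with a linker, then reopen the backbone at every position.
    Illustrative sketch only."""
    closed = seq + linker          # circularized sequence
    n = len(closed)
    return [closed[i:] + closed[:i] for i in range(n)]

toy = "MAFSNKVA"                   # stand-in sequence, not real OYE1
library = circular_permutants(toy)
print(len(library), "variants; example:", library[3])
```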

  10. Modeling of Unsteady Flow through the Canals by Semiexact Method

    Directory of Open Access Journals (Sweden)

    Farshad Ehsani

    2014-01-01

    Full Text Available The study of free-surface and pressurized water flows in channels has many interesting applications, one of the most important being the modeling of phenomena in natural water systems (rivers, estuaries) as well as in man-made systems (canals, pipes). For the development of major river engineering projects, such as flood prevention and flood control, there is an essential need for an instrument able to model and predict the consequences of any possible phenomenon on the environment and, in particular, the new hydraulic characteristics of the system. The basic equations expressing hydraulic principles were formulated in the 19th century by Barre de Saint Venant and Valentin Joseph Boussinesq. The original hydraulic model of the Saint Venant equations is written as a system of two partial differential equations and is derived under the assumptions that the flow is one-dimensional, the cross-sectional velocity is uniform, the streamline curvature is small, and the pressure distribution is hydrostatic. The Saint Venant equations must be solved together with the continuity equation. Until now, no analytical solution of the Saint Venant equations has been presented. In this paper the Saint Venant equations and the continuity equation are solved with the homotopy perturbation method (HPM) and compared with an explicit forward finite difference method (FDM). To decrease the error between HPM and FDM, the Saint Venant equations and the continuity equation are also solved by the homotopy analysis method (HAM). The HAM contains the auxiliary parameter ħ, which allows us to adjust and control the convergence region of the solution series. The study highlights the efficiency and capability of HAM in solving the Saint Venant equations and modeling unsteady flow through a rectangular canal, which is the goal of this paper, as well as through other kinds of canals.

  11. Permutation Test Approach for Ordered Alternatives in Randomized Complete Block Design: A Comparative Study

    OpenAIRE

    GOKPINAR, Esra; GUL, Hasan; GOKPINAR, Fikri; BAYRAK, Hülya; OZONUR, Deniz

    2013-01-01

    Randomized complete block design is one of the most widely used experimental designs in statistical analysis. For testing ordered alternatives in a randomized complete block design, parametric tests are used if the random sample is drawn from a Normal distribution; if the normality assumption does not hold, nonparametric methods are used. In this study we are interested in nonparametric tests, and we briefly introduce tests such as the Page, Modified Page, and Hollander tests. We also give Permutat...
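
    As a concrete example of the permutation approach in this setting, the sketch below computes Page's L statistic on a toy blocks-by-treatments array (invented data with a mild increasing trend) and obtains a one-sided p-value by permuting observations within each block, which respects the blocking structure under the null.

```python
import numpy as np
from scipy.stats import rankdata

def page_L(data):
    """Page's L statistic: blocks x treatments, treatments in hypothesized order."""
    ranks = np.apply_along_axis(rankdata, 1, data)   # rank within each block
    col_rank_sums = ranks.sum(axis=0)
    weights = np.arange(1, data.shape[1] + 1)
    return (weights * col_rank_sums).sum()

def page_permutation_test(data, n_perm=10000, seed=0):
    """One-sided permutation p-value for an increasing trend, permuting
    observations within each block."""
    rng = np.random.default_rng(seed)
    observed = page_L(data)
    count = 0
    for _ in range(n_perm):
        shuffled = np.array([rng.permutation(row) for row in data])
        count += page_L(shuffled) >= observed
    return (count + 1) / (n_perm + 1)

# Toy RCBD: 8 blocks, 4 treatments with a mild increasing effect.
rng = np.random.default_rng(1)
data = rng.normal(0, 1, (8, 4)) + np.arange(4) * 0.7
print("p =", page_permutation_test(data))
```

    The same within-block permutation scheme serves any trend statistic; only page_L needs replacing.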

  12. Modeling methods for merging computational and experimental aerodynamic pressure data

    Science.gov (United States)

    Haderlie, Jacob C.

    This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of exploiting the advantages of each data source while overcoming the limitations of both, providing a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods and then makes a critical comparison of them. Surrogate models represent the pressure data for both data sets: cubic B-spline surrogate models represent the computational simulation results, while machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential, a.k.a. online, Gaussian processes; batch Gaussian processes; and a multi-fidelity additive corrector) on the merits of accuracy and computational cost. The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT
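
    The additive-corrector idea among the compared methods can be sketched compactly: model the discrepancy between the sparse experimental taps and the dense low-fidelity curve with a GP, then add the fitted discrepancy back. The example below uses invented Cp curves and tap locations and scikit-learn's GaussianProcessRegressor; it is an illustration of the pattern, not the dissertation's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Invented stand-ins: a smooth "CFD" pressure curve along a chord station
# and sparse, noisy "wind tunnel" taps that disagree with it systematically.
def cp_cfd(x):
    return -1.2 * np.exp(-8 * (x - 0.3) ** 2) + 0.4 * x

x_wt = np.array([0.05, 0.2, 0.35, 0.55, 0.8])          # tap locations
cp_wt = cp_cfd(x_wt) + 0.15 * np.sin(6 * x_wt) + 0.02  # "experimental" values

# Additive corrector: fit a GP to the discrepancy WT - CFD, then
# merged(x) = CFD(x) + GP(x).
gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(x_wt[:, None], cp_wt - cp_cfd(x_wt))

x = np.linspace(0, 1, 11)
merged = cp_cfd(x) + gp.predict(x[:, None])
print(np.round(merged, 3))
```

    Away from the taps the GP mean returns to zero, so the merged curve falls back on the CFD trend, which is the behaviour one wants from a corrector.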

  13. Methods for the development of in silico GPCR models

    Science.gov (United States)

    Morales, Paula; Hurst, Dow P.; Reggio, Patricia H.

    2018-01-01

    The Reggio group has constructed computer models of the inactive and G-protein activated states of the cannabinoid CB1 and CB2 receptors, as well as several orphan receptors that recognize a sub-set of cannabinoid compounds, including GPR55 and GPR18. These models have been used to design ligands, mutations and covalent labeling studies. The resultant second generation models have been used to design ligands with improved affinity, efficacy and sub-type selectivity. Herein, we provide a guide for the development of GPCR models using the most recent orphan receptor studied in our lab, GPR3. GPR3 is an orphan receptor that belongs to the Class A family of G-Protein Coupled Receptors. It shares high sequence similarity with GPR6, GPR12, the lysophospholipid receptors, and the cannabinoid receptors. GPR3 is predominantly expressed in mammalian brain and oocytes, and it is known as a Gαs-coupled receptor that is constitutively activated in cells. GPR3 represents a possible target for the treatment of different pathological conditions such as Alzheimer’s disease, oocyte maturation or neuropathic pain. However, the lack of potent and selective GPR3 ligands is delaying the exploitation of this promising therapeutic target. In this context, we aim to develop a homology model that helps us to elucidate the structural determinants governing ligand-receptor interactions at GPR3. In this chapter, we detail the methods and rationale behind the construction of the GPR3 active and inactive state models. These homology models will enable the rational design of novel ligands, which may serve as research tools for further understanding of the biological role of GPR3. PMID:28750813

  14. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  15. Pursuing the method of multiple working hypotheses for hydrological modeling

    Science.gov (United States)

    Clark, M. P.; Kavetski, D.; Fenicia, F.

    2012-12-01

    Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding of environmental dynamics at the catchment scale, which can be attributed to difficulties in measuring and representing the heterogeneity encountered in natural systems. This presentation advocates using the method of multiple working hypotheses for systematic and stringent testing of model alternatives in hydrology. We discuss how the multiple hypothesis approach provides the flexibility to formulate alternative representations (hypotheses) describing both individual processes and the overall system. When combined with incisive diagnostics to scrutinize multiple model representations against observed data, this provides hydrologists with a powerful and systematic approach for model development and improvement. Multiple hypothesis frameworks also support a broader coverage of the model hypothesis space and hence improve the quantification of predictive uncertainty arising from system and component non-identifiabilities. As part of discussing the advantages and limitations of multiple hypothesis frameworks, we critically review major contemporary challenges in hydrological hypothesis-testing, including exploiting different types of data to investigate the fidelity of alternative process representations, accounting for model structure ambiguities arising from major uncertainties in environmental data, quantifying regional differences in dominant hydrological processes, and the grander challenge of understanding the self-organization and optimality principles that may functionally explain and describe the heterogeneities evident in most environmental systems. We assess recent progress in these research directions, and how new advances are possible using multiple hypothesis

  16. Investigating the performance of directional boundary layer model through staged modeling method

    Science.gov (United States)

    Jeong, Moon-Gyu; Lee, Won-Chan; Yang, Seung-Hune; Jang, Sung-Hoon; Shim, Seong-Bo; Kim, Young-Chang; Suh, Chun-Suk; Choi, Seong-Woon; Kim, Young-Hee

    2011-04-01

    Generally speaking, the models used in optical proximity effect correction (OPC) can be divided into three parts: the mask part, the optic part, and the resist part. For an OPC model of excellent quality, each part would have to be described from first principles. However, an OPC model cannot incorporate all of the first principles, since it must cover full-chip-level calculation during the correction. Moreover, the calculation has to be done iteratively during the correction until the cost function we want to minimize converges. Normally the optic part of an OPC model is described with the sum of coherent systems (SOCS [1]) method. Thanks to this method we can calculate the aerial image very quickly without significant loss of accuracy. As for the resist part, the first principles are too complex to implement in detail, so the resist is normally expressed in a simplified way, such as approximations of the first principles or linear combinations of factors that are highly correlated with the chemistries in the resist. The quality of this kind of resist model depends on how well the model is trained by fitting to empirical data. The most popular way of making the mask function is based on Kirchhoff's thin-mask approximation. This method works well when the feature size on the mask is sufficiently large, but as the line width of the semiconductor circuit becomes smaller, it causes significant error due to the mask topography effect. To consider the mask topography effect accurately, we have to use rigorous methods of calculating the mask function, such as the finite difference time domain (FDTD [2]) method and rigorous coupled-wave analysis (RCWA [3]). But these methods are too time-consuming to be used as part of an OPC model. Many alternatives have been suggested as efficient ways of considering the mask topography effect; among them, we focused on the boundary layer model (BLM) in this paper. We mainly investigated the way of optimization of the parameters for the

  17. Comparative examination of two methods for modeling autoimmune uveitis

    Directory of Open Access Journals (Sweden)

    Svetlana V. Aksenova

    2017-09-01

    Full Text Available Introduction: Uveitis is a disease of the uveal tract, characterized by a variety of causes and clinical manifestations. Internal antigens often predominate in the pathogenesis of the disease, giving rise to so-called autoimmune reactions. The treatment of uveitis has important medico-social significance because of its high prevalence, the significant rate of the disease in young people, and the high level of resulting disability. This article compares the efficiency of two methods for modeling autoimmune uveitis. Materials and Methods: The research was conducted on 6 rabbits of the Chinchilla breed (12 eyes). Two models of experimental uveitis were reproduced in the rabbits using normal horse serum. The course of the inflammatory process in the eyes was examined clinically by biomicroscopy using a slit lamp and by direct ophthalmoscopy. Histological and immunological examinations were conducted by the authors of the article. Results: A faster-developing and more vivid clinical picture of the disease was observed in the second group. Obvious changes in the immunological status of the animals were also noted: increases in the numbers of leukocytes, neutrophils, and HCT-active neutrophils, and activation of phagocytosis. Discussion and Conclusions: The research showed that the second model of uveitis is the more convenient working variant, characterized by high activity and long duration of the inflammatory process in the eye.

  18. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. These vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Aircraft counts are then computed for each cell and for each air route. By weighting the vertices and the edges with these aircraft counts, a weighted graph model is obtained, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is solved by a graph partitioning algorithm that is a mixture of a general weighted graph-cuts algorithm, an optimal dynamic load balancing algorithm, and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load balancing algorithm, together with the heuristic algorithm, transfers aircraft counts to balance the workload among the sub-graphs. Lastly, the airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload balancing condition but also constraints such as convexity, connectivity, and minimum distance.
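
    The core weighted-graph step can be illustrated with networkx. The toy example below invents a handful of waypoints and route traffic counts, bisects the graph with Kernighan-Lin while keeping the cut route traffic low, and reports the cut weight; the paper's full method layers load balancing, convexity and minimum-distance handling on top of such a cut.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Toy airspace: vertices are waypoints, edges are air routes weighted by
# route traffic (all numbers invented).
G = nx.Graph()
routes = [("A", "B", 12), ("B", "C", 7), ("C", "D", 15), ("D", "E", 4),
          ("E", "F", 9), ("F", "A", 5), ("B", "E", 3), ("C", "F", 2)]
G.add_weighted_edges_from(routes)

# Bisect while cutting as little route traffic as possible; a production
# method would also enforce balance, convexity, and minimum-distance rules.
sector1, sector2 = kernighan_lin_bisection(G, weight="weight", seed=0)
cut = nx.cut_size(G, sector1, sector2, weight="weight")
print(sorted(sector1), sorted(sector2), "cut traffic:", cut)
```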

  19. Optimization methods and silicon solar cell numerical models

    Science.gov (United States)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.

  20. Dimensionality reduction method based on a tensor model

    Science.gov (United States)

    Yan, Ronghua; Peng, Jinye; Ma, Dongmei; Wen, Desheng

    2017-04-01

    Dimensionality reduction is a preprocessing step for hyperspectral image (HSI) classification. Principal component analysis reduces the spectral dimension but does not utilize the spatial information of an HSI. Both spatial and spectral information are used when an HSI is modeled as a tensor; that is, the noise in the spatial dimensions is decreased and the spectral dimension is reduced simultaneously. However, this model does not consider the factors affecting the spectral signatures of ground objects, which makes further improvement of the classification very difficult. The authors propose that the spectral signatures of ground objects are the composite result of multiple factors, such as illumination, mixture, and atmospheric scattering and radiation, and that these factors are very difficult to distinguish. Therefore, these factors are synthesized as within-class factors. Within-class factors, class factors, and pixels are selected to model a third-order tensor. Experimental results indicate that the classification accuracy of the new method is higher than that of previous methods.
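
    The mechanics of tensor-based reduction can be shown with a Tucker decomposition, here via the tensorly library on a random stand-in cube. The paper's tensor is organized by within-class factors, class factors, and pixels rather than by raw spatial axes, so this sketch shows only the decomposition step.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Toy HSI cube: 20 x 20 pixels x 50 bands, rank-reduced along every mode.
rng = np.random.default_rng(0)
cube = tl.tensor(rng.random((20, 20, 50)))

core, factors = tucker(cube, rank=[10, 10, 8])
reduced_bands = factors[2]                      # 50 -> 8 spectral dimension
approx = tl.tucker_to_tensor((core, factors))
err = tl.norm(cube - approx) / tl.norm(cube)
print(reduced_bands.shape, f"relative reconstruction error {err:.3f}")
```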

  1. Outcome modelling strategies in epidemiology: traditional methods and basic alternatives.

    Science.gov (United States)

    Greenland, Sander; Daniel, Rhian; Pearce, Neil

    2016-04-01

    Controlling for too many potential confounders can lead to or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large in relation to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the 'change-in-estimate' (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure. We discuss their shortcomings, and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Allowing that most users must stay within standard software packages, this goal can be roughly approximated using basic methods to assess, and thereby minimize, mean squared error (MSE). © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.
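
    The change-in-estimate strategy reviewed here is easy to state as code: drop a covariate only if its removal barely moves the exposure coefficient. Below is a minimal backward-deletion sketch with statsmodels on synthetic data; the linear outcome model and the 10% threshold are assumptions for illustration, whereas epidemiological practice often uses logistic or other outcome regressions.

```python
import numpy as np
import statsmodels.api as sm

def change_in_estimate(y, exposure, covars, threshold=0.10):
    """Backward CIE: drop a covariate only if removing it shifts the
    exposure coefficient by less than `threshold` (relative change)."""
    keep = list(covars.keys())
    def exposure_coef(cols):
        X = sm.add_constant(np.column_stack([exposure] + [covars[c] for c in cols]))
        return sm.OLS(y, X).fit().params[1]      # coefficient on exposure
    while keep:
        full = exposure_coef(keep)
        changes = {c: abs(exposure_coef([k for k in keep if k != c]) - full)
                      / abs(full) for c in keep}
        weakest, delta = min(changes.items(), key=lambda kv: kv[1])
        if delta >= threshold:
            break
        keep.remove(weakest)
    return keep

rng = np.random.default_rng(3)
n = 500
conf = rng.normal(size=n)                        # true confounder
noise_cov = rng.normal(size=n)                   # irrelevant covariate
exposure = 0.8 * conf + rng.normal(size=n)
y = 1.0 * exposure + 1.5 * conf + rng.normal(size=n)
print(change_in_estimate(y, exposure, {"conf": conf, "noise": noise_cov}))
```

    On this synthetic example the true confounder survives while the irrelevant covariate is dropped; the review's point is that such rules can still misbehave under the sparsity and multicollinearity problems described above.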

  2. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; students at an advanced undergraduate, master’s level in information systems, business administration, or the application of computer science.  

  3. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
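
    The polynomial scaling comes from the structure of the expansion itself. The sketch below implements plain first-order cut-HDMR for a generic function (the anchor point, grids, and test function are invented, and the paper's fuzzy α-cut machinery is not reproduced), tabulating n one-dimensional component functions instead of an n-dimensional grid.

```python
import numpy as np

def cut_hdmr_first_order(f, cut_point, grids):
    """First-order cut-HDMR: tabulate 1-D component functions
    f_i(x_i) = f(c_1,...,x_i,...,c_n) - f(c); the surrogate is
    f(c) + sum_i f_i(x_i), accurate when variable interactions are weak."""
    c = np.asarray(cut_point, float)
    f0 = f(c)
    tables = []
    for i, g in enumerate(grids):
        vals = []
        for xi in g:
            p = c.copy(); p[i] = xi            # vary one coordinate at a time
            vals.append(f(p) - f0)
        tables.append((np.asarray(g), np.asarray(vals)))
    def surrogate(x):
        return f0 + sum(np.interp(x[i], g, v) for i, (g, v) in enumerate(tables))
    return surrogate

# Weakly interacting test function in 5 variables.
f = lambda x: np.sum(np.sin(x)) + 0.01 * x[0] * x[1]
grids = [np.linspace(-1, 1, 21)] * 5
s = cut_hdmr_first_order(f, cut_point=np.zeros(5), grids=grids)
x = np.array([0.3, -0.7, 0.5, 0.1, -0.2])
print(f(x), s(x))   # close, at a cost linear (not exponential) in dimension
```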

  4. Revisiting a model-independent dark energy reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

    In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow us to regard them on the same quality basis as SNeIa. We find there is considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the current redshift range over which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this method is modest, in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)

  5. 'Dem DEMs: Comparing Methods of Digital Elevation Model Creation

    Science.gov (United States)

    Rezza, C.; Phillips, C. B.; Cable, M. L.

    2017-12-01

    Topographic details of Europa's surface yield implications for large-scale processes that occur on the moon, including surface strength, modification, composition, and formation mechanisms for geologic features. In addition, the small-scale details provided by these data are imperative for future exploration of Europa's surface, such as by a potential Europa Lander mission. A comparison of different methods of Digital Elevation Model (DEM) creation and of the variations between them can help us quantify the relative accuracy of each model and improve our understanding of Europa's surface. In this work, we used data provided by Phillips et al. (2013, AGU Fall meeting, abs. P34A-1846) and Schenk and Nimmo (2017, in prep.) to compare DEMs that were created using Ames Stereo Pipeline (ASP), SOCET SET, and Paul Schenk's own method. We began by locating areas of the surface with multiple overlapping DEMs; our initial comparisons were performed near the craters Manannan, Pwyll, and Cilix. For each region, we used ArcGIS to draw profile lines across matching features to determine elevation. Some of the DEMs had vertical or skewed offsets and thus had to be corrected. The vertical corrections were applied by adding or subtracting the global minimum of the data set to create a common zero point. The skewed data sets were corrected by rotating the plot so that it had a global slope of zero and then subtracting for a zero-point vertical offset. Once the corrections were made, we plotted the three methods on one graph for each profile of each region. Upon analysis, we found relatively good feature correlation between the three methods. The smoothness of a DEM depends on both the input set of images and the stereo processing methods used. In our comparison, the DEMs produced by SOCET SET were less smoothed than those from ASP or Schenk. Height comparisons show that ASP and Schenk's model appear similar, alternating in maximum height. SOCET SET has more topographic variability due to its

  6. HIDING TEXT IN DIGITAL IMAGES USING PERMUTATION ORDERING AND COMPACT KEY BASED DICTIONARY

    Directory of Open Access Journals (Sweden)

    Nagalinga Rajan

    2017-05-01

    Full Text Available Digital image steganography is an emerging technique for secure communication in the modern connected world. It protects the content of the message without arousing suspicion in a passive observer. A novel steganography method is presented for hiding text in digital images. A compact dictionary is designed to efficiently communicate all types of secret messages. The sorting order of pixels within image blocks is chosen as the carrier of the embedded information: the high correlation between image pixel values means that reordering within image blocks does not cause high distortion. The image is divided into blocks, which are perturbed to create non-repeating sequences of intensity values; these values are then sorted according to the message. At the receiver end, the message is read from the sorting order of the pixels in the image blocks. Only those image blocks with a standard deviation less than a given threshold are chosen for embedding, to alleviate visual distortion. Information security is provided by shuffling the dictionary according to a shared key. Experimental results and analysis show that the method is capable of hiding text of more than 4000 words in a 512×512 grayscale image with a peak signal-to-noise ratio above 40 decibels.
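
    Mapping a message chunk to a pixel ordering is the classic factorial-number-system (Lehmer code) construction, which the sketch below implements for a single block of already non-repeating, sorted intensities. The block values and message chunk are invented, and the dictionary and key-shuffling layers of the paper are omitted.

```python
from math import factorial

def int_to_permutation(m, items):
    """Map an integer m < len(items)! to a unique ordering of `items`
    (factorial number system / Lehmer code). `items` must be sorted."""
    items = list(items)
    perm = []
    for i in range(len(items), 0, -1):
        f = factorial(i - 1)
        perm.append(items.pop(m // f))
        m %= f
    return perm

def permutation_to_int(perm):
    """Inverse mapping: recover the embedded integer from the ordering."""
    items, m = sorted(perm), 0
    for i, p in enumerate(perm):
        j = items.index(p)
        m += j * factorial(len(perm) - 1 - i)
        items.pop(j)
    return m

# A block of 8 distinct pixel intensities carries log2(8!) ~ 15.3 bits.
block = [17, 23, 42, 55, 61, 70, 88, 95]   # invented, already non-repeating
secret = 31415
stego_order = int_to_permutation(secret, block)
print(stego_order, permutation_to_int(stego_order) == secret)
```

    A block of n distinct values carries floor(log2(n!)) bits, which matches the capacity argument behind embedding in sorting order.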

  7. Numerical simulations of multicomponent ecological models with adaptive methods.

    Science.gov (United States)

    Owolabi, Kolade M; Patidar, Kailash C

    2016-01-08

    The study of the dynamic relationships within multi-species models has gained a huge amount of scientific interest over the years and will continue to hold a dominant place in both ecology and mathematical ecology in the years to come, due to its practical relevance and universal existence. Emergent phenomena include spatiotemporal patterns, oscillating solutions, multiple steady states, and spatial pattern formation. Many time-dependent partial differential equations combine low-order nonlinear terms with higher-order linear terms. In an attempt to obtain reliable results for such problems, it is desirable to use higher-order methods in both space and time. Most computations heretofore have been restricted to second order in time due to difficulties introduced by the combination of stiffness and nonlinearity. The dynamics of the reaction-diffusion models considered in this paper permit the use of two classic mathematical ideas: we introduce higher-order finite difference approximations for the spatial discretization and advance the resulting system of ODEs with a family of exponential time differencing schemes. We present the stability properties of these methods along with extensive numerical simulations for a number of multi-species models. When the diffusivity is small, many of the models considered in this paper are found to exhibit a form of localized spatiotemporal patterns; such patterns are correctly captured in the local analysis of the model equations. Extended 2D results that are in agreement with typical Turing patterns, such as stripes and spots, as well as irregular snakelike structures, are presented. We finally show that the designed schemes are dynamically consistent. The dynamic complexities of some ecological models are studied by considering their linear stability analysis. Based on the choices of parameters used in transforming the system into a dimensionless form, we were able to obtain a well-balanced system that
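
    The splitting that exponential time differencing exploits (integrate the stiff linear diffusion exactly, approximate the nonlinear reaction) can be shown with its simplest member, ETD1 (exponential Euler), applied here to a one-species Fisher-KPP equation with a Fourier spectral Laplacian. The equation choice, grid, and time step are invented for illustration; the paper combines higher-order finite differences with higher-order ETD schemes.

```python
import numpy as np

# ETD1 (exponential Euler) for the Fisher-KPP equation u_t = D u_xx + u(1 - u)
# on a periodic domain: the stiff linear diffusion is integrated exactly.
n, L, D, dt, steps = 256, 50.0, 1.0, 0.1, 100
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
lam = -D * k**2                                  # diagonal linear operator (Fourier)

E = np.exp(lam * dt)                             # exact propagator of the linear part
lam_safe = np.where(lam == 0, 1.0, lam)
phi1 = np.where(lam == 0, 1.0, (E - 1) / (lam_safe * dt))   # (e^z - 1)/z weight

u = np.exp(-((x - 10.0) ** 2))                   # initial bump
for _ in range(steps):
    N_hat = np.fft.fft(u * (1 - u))              # nonlinear reaction term
    u = np.real(np.fft.ifft(E * np.fft.fft(u) + dt * phi1 * N_hat))
print(f"u stays in [{u.min():.3f}, {u.max():.3f}] as the front spreads")
```

    Higher-order members of the family (ETD2, ETDRK4) replace the single phi1 weight with additional phi functions but keep exactly this structure.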

  8. Towards tricking a pathogen's protease into fighting infection: the 3D structure of a stable circularly permuted onconase variant cleaved by HIV-1 protease.

    Directory of Open Access Journals (Sweden)

    Mariona Callís

    Full Text Available Onconase® is a highly cytotoxic amphibian homolog of Ribonuclease A. Here, we describe the construction of circularly permuted Onconase® variants by connecting the N- and C-termini of this enzyme with amino acid residues that are recognized and cleaved by the human immunodeficiency virus protease. Uncleaved circularly permuted Onconase® variants are unusually stable, non-cytotoxic and can internalize in human T-lymphocyte Jurkat cells. The structure, stability and dynamics of an intact and a cleaved circularly permuted Onconase® variant were determined by Nuclear Magnetic Resonance spectroscopy and provide valuable insight into the changes in catalytic efficiency caused by the cleavage. The understanding of the structural environment and the dynamics of the activation process represents a first step toward the development of more effective drugs for the treatment of diseases related to pathogens expressing a specific protease. By taking advantage of the protease's activity to initiate a cytotoxic cascade, this approach is thought to be less susceptible to known resistance mechanisms.

  9. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    Science.gov (United States)

    Kolokolova, L.; Petrov, D.

    2017-12-01

    Cometary dust is dominated by particles of complex shape and structure, which are often considered to be fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very demanding of computer time and memory. We present a new approach to modeling cometary dust based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method builds on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it was found that the shape-dependent factors can be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive-index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique for simulating light scattering by particles of complex shape and surface structure. In this paper, we represent cometary dust as an ensemble of Gaussian random particles whose shape is described by a log-normal distribution of radius as a function of direction (Muinonen, EMP, 72, 1996). By changing one parameter of this distribution, the correlation angle, from 0 to 90 degrees, we can model a variety of particles ranging from spheres to particles of random complex shape. We survey the angular and spectral dependences of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) in order to find the best fit to cometary observations.
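
    As a rough illustration of the Gaussian random particle idea, the sketch below generates a 2-D cross-section whose log-radius is a correlated Gaussian process, with Fourier-mode variances damped on the scale of a correlation angle. The parameterization is a simplified stand-in of our own for Muinonen's full spherical formulation.

        import numpy as np

        rng = np.random.default_rng(1)

        def gaussian_random_shape(sigma=0.3, corr_angle_deg=30.0, nmax=30, npts=360):
            # 2-D cross-section of a Gaussian random particle: log-radius is a
            # zero-mean Gaussian process on the circle, with mode variances
            # damped on the angular scale set by the correlation angle.
            gamma = np.radians(corr_angle_deg)
            c = np.exp(-0.5 * (np.arange(1, nmax + 1) * gamma) ** 2)
            c *= sigma**2 / c.sum()            # total log-radius variance sigma^2
            theta = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
            s = np.zeros(npts)
            for nmode, cn in enumerate(c, start=1):
                a, b = rng.normal(0.0, np.sqrt(cn), 2)
                s += a * np.cos(nmode * theta) + b * np.sin(nmode * theta)
            return theta, np.exp(s - 0.5 * sigma**2)   # mean radius ~ 1

        # Small correlation angle -> many active modes -> irregular outline;
        # near 90 degrees the outline collapses toward a circle (sphere in 3-D).
        theta, r = gaussian_random_shape(corr_angle_deg=10.0)
        print(r.min(), r.max())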

  10. Lie Markov models.

    Science.gov (United States)

    Sumner, J G; Fernández-Sánchez, J; Jarvis, P D

    2012-04-07

    Recent work has discussed the importance of multiplicative closure for the Markov models used in phylogenetics. For continuous-time Markov chains, a sufficient condition for multiplicative closure of a model class is that the set of rate matrices belonging to the model class forms a Lie algebra. Some well-known Markov models do form Lie algebras, and we refer to such models as "Lie Markov models". However, it is also the case that some other well-known Markov models unequivocally do not form Lie algebras (GTR being the most conspicuous example). In this paper, we discuss how to generate Lie Markov models by demanding that the models have certain symmetries under nucleotide permutations. We show that the Lie Markov models include, and hence provide a unifying concept for, "group-based" and "equivariant" models. For each of two and four character states, the full list of Lie Markov models with maximal symmetry is presented and shown to include interesting examples that are neither group-based nor equivariant. We also argue that our scheme is pleasing in the context of applied phylogenetics, as, for a given symmetry of nucleotide substitution, it provides a natural hierarchy of models with an increasing number of parameters. Our methods are applicable to any application of continuous-time Markov chains beyond our initial motivation from phylogenetics. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
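
    The closure condition is easy to test numerically: a model class is a Lie algebra exactly when every commutator of basis rate matrices stays in their linear span. The sketch below is a generic check of our own, demonstrated on the Kimura 3ST basis (a group-based, hence Lie Markov, model).

        import numpy as np
        from itertools import combinations

        def in_span(M, basis, tol=1e-10):
            # Least-squares test of whether matrix M lies in span(basis).
            A = np.stack([B.ravel() for B in basis], axis=1)
            coef, *_ = np.linalg.lstsq(A, M.ravel(), rcond=None)
            return np.linalg.norm(A @ coef - M.ravel()) < tol

        def is_lie_closed(basis):
            # The span is a Lie algebra iff it contains all commutators.
            return all(in_span(X @ Y - Y @ X, basis)
                       for X, Y in combinations(basis, 2))

        def k3st_generator(cls):
            # Kimura 3ST rate-matrix basis (order A, C, G, T): one generator
            # per substitution class of the underlying Z2 x Z2 group structure.
            pairs = {1: [(0, 2), (1, 3)],   # transitions A<->G, C<->T
                     2: [(0, 3), (1, 2)],   # transversions A<->T, C<->G
                     3: [(0, 1), (2, 3)]}   # transversions A<->C, G<->T
            Q = np.zeros((4, 4))
            for i, j in pairs[cls]:
                Q[i, j] = Q[j, i] = 1.0
            np.fill_diagonal(Q, -Q.sum(axis=1))
            return Q

        basis = [k3st_generator(c) for c in (1, 2, 3)]
        print(is_lie_closed(basis))   # True: K3ST is a Lie Markov model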

  11. Statistical Models and Methods for Network Meta-Analysis.

    Science.gov (United States)

    Madden, L V; Piepho, H-P; Paul, P A

    2016-08-01

    Meta-analysis, the methodology for analyzing the results from multiple independent studies, has grown tremendously in popularity over the last four decades. Although most meta-analyses involve a single effect size (summary result, such as a treatment difference) from each study, there are often multiple treatments of interest across the network of studies in the analysis. Multi-treatment (or network) meta-analysis can be used for simultaneously analyzing the results from all the treatments. However, the methodology is considerably more complicated than for the analysis of a single effect size, and there have not been adequate explanations of the approach for agricultural investigations. We review the methods and models for conducting a network meta-analysis based on frequentist statistical principles, and demonstrate the procedures using a published multi-treatment plant pathology data set. A major advantage of network meta-analysis is that correlations of estimated treatment effects are automatically taken into account when an appropriate model is used. Moreover, treatment comparisons may be possible in a network meta-analysis that are not possible in a single study because all treatments of interest may not be included in any given study. We review several models that consider the study effect as either fixed or random, and show how to interpret model-fitting output. We further show how to model the effect of moderator variables (study-level characteristics) on treatment effects, and present one approach to test for the consistency of treatment effects across the network. Online supplemental files give explanations on fitting the network meta-analytical models using SAS.
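
    The backbone of a frequentist, contrast-based network meta-analysis is a weighted least-squares fit that maps every within-study comparison onto a small set of basic treatment effects. The sketch below uses made-up effect estimates and omits study effects, moderators and consistency testing; it is a structural illustration, not the models of the paper.

        import numpy as np

        # Fixed-effect, contrast-based network meta-analysis by weighted
        # least squares. Basic parameters d_k = effect of treatment k vs 0.
        # Each row: (treatment i, treatment j, estimated effect j vs i, variance).
        rows = [(0, 1, 0.50, 0.04),
                (0, 1, 0.42, 0.09),
                (0, 2, 0.80, 0.05),
                (1, 2, 0.35, 0.06)]   # indirect link closing the network loop

        n_treat = 3
        X, y, w = [], [], []
        for i, j, eff, var in rows:
            row = np.zeros(n_treat - 1)
            if j > 0:
                row[j - 1] += 1.0     # + d_j
            if i > 0:
                row[i - 1] -= 1.0     # - d_i
            X.append(row); y.append(eff); w.append(1.0 / var)

        X, y, W = np.array(X), np.array(y), np.diag(w)
        d = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # estimated d_1, d_2
        cov = np.linalg.inv(X.T @ W @ X)
        print(d, np.sqrt(np.diag(cov)))   # effects and standard errors
        print(d[1] - d[0])                # derived contrast: 2 versus 1

    The derived contrast illustrates the stated advantage of the approach: treatments 2 and 1 are compared through the network even if few (or no) studies compare them directly, with correlations of the estimates carried by the covariance matrix.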

  12. Tail modeling in a stretched magnetosphere. I - Methods and transformations

    Science.gov (United States)

    Stern, David P.

    1987-01-01

    A new method is developed for representing the magnetospheric field B as a distorted dipole field. Because ∇·B = 0 must be maintained, such a distortion may be viewed as a transformation of the vector potential A. The simplest form is a one-dimensional 'stretch transformation' along the x axis, concisely represented by the 'stretch function' f(x), which is also a convenient tool for representing features of the substorm cycle. One-dimensional stretch transformations are extended to spherical, cylindrical, and parabolic coordinates and then to arbitrary coordinates. It is shown that distortion transformations can be viewed as mappings of field lines from one pattern to another; the final result requires knowledge only of the field and not of the potentials. General transformations in Cartesian and arbitrary coordinates are derived, and applications to field modeling, field-line motion, MHD modeling, and incompressible fluid dynamics are considered.
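
    A one-dimensional stretch can be written directly in terms of the field: replacing x by f(x) in the x-component and multiplying the transverse components by f'(x) keeps ∇·B = 0, which the sketch below verifies numerically for a dipole field. The stretch function and test point are arbitrary choices of ours, and the component convention is one consistent reading of such transformations rather than the paper's exact formulation.

        import numpy as np

        def dipole(r, m=np.array([0.0, 0.0, 1.0])):
            # Point-dipole field (constant factors dropped).
            rn = np.linalg.norm(r)
            rhat = r / rn
            return (3.0 * np.dot(m, rhat) * rhat - m) / rn**3

        f  = lambda x: x + 2.0 * np.tanh(x / 4.0)        # stretch function f(x)
        fp = lambda x: 1.0 + 0.5 / np.cosh(x / 4.0)**2   # its derivative f'(x)

        def stretched(r):
            # 1-D stretch along x that preserves div B = 0:
            # Bx*(x,y,z) = Bx(f(x),y,z); By*, Bz* pick up a factor f'(x).
            x, y, z = r
            B = dipole(np.array([f(x), y, z]))
            return np.array([B[0], fp(x) * B[1], fp(x) * B[2]])

        def divergence(field, r, h=1e-5):
            # Central-difference estimate of div B at the point r.
            d = 0.0
            for i in range(3):
                e = np.zeros(3); e[i] = h
                d += (field(r + e)[i] - field(r - e)[i]) / (2.0 * h)
            return d

        r = np.array([3.0, 1.5, -2.0])
        print(divergence(dipole, r), divergence(stretched, r))   # both ~ 0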

  13. Engineering models and methods for industrial cell control

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk; Alting, Leo

    1997-01-01

    This paper is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer-based control and monitoring systems for production cells. The project participants are The Danish Academy of Technical Sciences, the Institute of Manufacturing Engineering at the Technical University of Denmark and ODENSE STEEL SHIPYARD Ltd. The manufacturing environment and the current practice for engineering of cell control systems have been analysed, as well as automation software enablers. A number of problems related to these issues are identified. In order to support the engineering of cell control systems by the use of enablers, a generic cell control data model and an architecture have been defined...

  14. Computational methods of the Advanced Fluid Dynamics Model

    International Nuclear Information System (INIS)

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.

    1987-01-01

    To treat severe accidents in fast reactors more accurately, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual-mass inertial terms, the use of three velocity fields, higher-order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.
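
    Of the features listed, the fractional-step idea is the easiest to isolate: the time-evolution operator is split and each piece is advanced in its own sub-step. The sketch below shows first-order (Lie) splitting on a toy linear system, where the exact answer is available for comparison; it is a generic illustration of our own, not AFDM's algorithm.

        import numpy as np
        from scipy.linalg import expm

        # Fractional-step (operator-splitting) integration of du/dt = (A + B)u:
        # advance with A, then with B, in separate sub-steps of size h.
        A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # a "convection-like" part
        B = np.array([[-0.5, 0.0], [0.0, -0.2]])   # a "diffusion-like" part
        u0 = np.array([1.0, 0.0])
        T, n = 1.0, 200
        h = T / n

        u = u0.copy()
        eAh, eBh = expm(A * h), expm(B * h)        # exact sub-step propagators
        for _ in range(n):
            u = eBh @ (eAh @ u)                    # Lie splitting: O(h) error

        exact = expm((A + B) * T) @ u0
        print(np.linalg.norm(u - exact))           # small; shrinks as h -> 0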

  15. Scattering of surface waves modelled by the integral equation method

    Science.gov (United States)

    Lu, Laiyu; Maupin, Valerie; Zeng, Rongsheng; Ding, Zhifeng

    2008-09-01

    The integral equation method is used to model the propagation of surface waves in 3-D structures. The wavefield is represented by a Fredholm integral equation, and the scattered surface waves are calculated by solving the integral equation numerically. The integration of the Green's function elements is given analytically by treating the singularity of the Hankel function at R = 0, based on the proper expression of the Green's function and the addition theorem of the Hankel function. No far-field or Born approximations are made. We investigate the scattering of surface waves propagating in layered reference models embedding a heterogeneity with density as well as Lamé-constant contrasts, in both the frequency and time domains, for incident plane waves and point sources.
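
    The numerical core, discretizing a Fredholm integral equation of the second kind and solving the resulting linear system, can be shown on a scalar toy problem with a smooth kernel (the Nyström method). The kernel and right-hand side below are placeholders of ours; the actual elastic-wave kernel involves the singular Green's function treated analytically as described.

        import numpy as np

        # Nystrom solution of a Fredholm equation of the second kind:
        #   u(x) - lam * \int_0^1 K(x, t) u(t) dt = f(x)
        n = 40
        t, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre on [-1, 1]
        t = 0.5 * (t + 1.0); w = 0.5 * w            # map nodes/weights to [0, 1]

        lam = 0.5
        K = np.exp(-np.abs(t[:, None] - t[None, :]))   # placeholder kernel
        f = np.sin(np.pi * t)                          # placeholder right side

        u = np.linalg.solve(np.eye(n) - lam * K * w[None, :], f)

        # Residual check: plug u back into the quadrature form of the equation.
        print(np.max(np.abs(u - lam * (K * w) @ u - f)))   # ~ machine precision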

  16. A model for explaining fusion suppression using classical trajectory method

    Directory of Open Access Journals (Sweden)

    Phookan C. K.

    2015-01-01

    Full Text Available We adopt a semi-classical approach to explain projectile breakup and above-barrier fusion suppression for the reactions 6Li+152Sm and 6Li+144Sm. The cut-off impact parameter for fusion is determined by employing quantum mechanical ideas. Within this cut-off impact parameter, the fraction of projectiles undergoing breakup is determined using the classical trajectory method in two dimensions. To obtain the initial conditions for the equations of motion, a simplified model of the 6Li nucleus is proposed. We introduce a simple formula for explaining fusion suppression and find excellent agreement between the experimental and calculated fusion cross sections. A slight modification of the formula for fusion suppression is also proposed for a three-dimensional model.
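
    A minimal version of the classical trajectory ingredient is integrating the planar equations of motion of a point projectile in the target's Coulomb field for a range of impact parameters. The parameters below are illustrative, and the sketch omits the nuclear potential and the breakup degrees of freedom of the paper's 6Li model.

        import numpy as np
        from scipy.integrate import solve_ivp

        e2 = 1.44                      # MeV fm
        Z1, Z2 = 3, 62                 # 6Li on Sm
        m = 6.0 * 931.5                # projectile mass, MeV/c^2
        E_lab = 30.0                   # bombarding energy, MeV

        def rhs(t, s):
            # s = (x, y, px, py); target fixed at the origin
            # (units: fm, MeV/c, time in fm/c).
            x, y, px, py = s
            r = np.hypot(x, y)
            F = Z1 * Z2 * e2 / r**2    # repulsive Coulomb force, MeV/fm
            return [px / m, py / m, F * x / r, F * y / r]

        def closest_approach(b):
            # Launch from far upstream with impact parameter b (fm).
            p0 = np.sqrt(2.0 * m * E_lab)
            sol = solve_ivp(rhs, [0.0, 4000.0], [-200.0, b, p0, 0.0],
                            max_step=1.0, rtol=1e-8)
            return np.min(np.hypot(sol.y[0], sol.y[1]))

        for b in (0.0, 2.0, 5.0, 10.0):
            print(b, closest_approach(b))   # head-on: Z1*Z2*e2/E_lab ~ 8.9 fm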

  17. Modeling of Cracked Beams by the Experimental Design Method

    Directory of Open Access Journals (Sweden)

    M. Serier

    Full Text Available The understanding of phenomena, whatever their nature, rests on experimental results. In most cases this requires a large number of tests before reliable and useful observations can be applied to solving subsequent technical problems. This paper combines the independent variables obtained from experimentation into a mathematical formulation. Mathematical modeling offers the advantage of optimizing and predicting the right choices without running an experiment for every case. In this work we apply the experimental design method to the experimental results found by Deokar, A. (2011), concerning the effect of the size and position of a crack on the measured frequency of a cantilever beam, and validate the mathematical model by predicting other frequencies.
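
    The experimental design method amounts to fitting a low-order response surface to a factorial grid of measurements, after which untested combinations can be predicted without further experiments. The numbers below are made-up placeholders, not Deokar's measurements.

        import numpy as np

        # Quadratic response surface freq = f(crack size a, crack position p)
        # fitted to a 3x3 factorial design; placeholder data only.
        a = np.array([0.1, 0.1, 0.1, 0.3, 0.3, 0.3, 0.5, 0.5, 0.5])  # depth ratio
        p = np.array([0.2, 0.5, 0.8, 0.2, 0.5, 0.8, 0.2, 0.5, 0.8])  # position/length
        f = np.array([49.8, 49.9, 50.0, 48.5, 49.2, 49.8, 45.9, 47.8, 49.4])  # Hz

        # Design matrix for freq ~ 1 + a + p + a*p + a^2 + p^2
        X = np.column_stack([np.ones_like(a), a, p, a * p, a**2, p**2])
        beta, *_ = np.linalg.lstsq(X, f, rcond=None)

        def predict(a_new, p_new):
            return np.array([1.0, a_new, p_new, a_new * p_new,
                             a_new**2, p_new**2]) @ beta

        print(predict(0.4, 0.35))   # frequency at an untested combination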

  18. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.
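
    In its simplest form, genomic-enabled prediction is a shrinkage regression of phenotypes on genome-wide markers, assessed by random cross-validation as in the review. The sketch below simulates data and uses ridge regression (an rrBLUP-style model); real GP analyses add pedigree, G×E and environmental structure.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m_markers, n_qtl = 300, 1000, 50
        X = rng.integers(0, 3, size=(n, m_markers)).astype(float)  # 0/1/2 genotypes
        beta_true = np.zeros(m_markers)
        beta_true[rng.choice(m_markers, n_qtl, replace=False)] = \
            rng.normal(0.0, 0.3, n_qtl)
        y = X @ beta_true + rng.normal(0.0, 1.0, n)                # phenotype

        lam = 100.0                        # ridge shrinkage parameter
        acc = []
        for _ in range(5):                 # five random 80/20 splits
            idx = rng.permutation(n)
            tr, te = idx[:240], idx[240:]
            Xc = X[tr] - X[tr].mean(axis=0)          # center on training set
            b = np.linalg.solve(Xc.T @ Xc + lam * np.eye(m_markers),
                                Xc.T @ (y[tr] - y[tr].mean()))
            pred = (X[te] - X[tr].mean(axis=0)) @ b + y[tr].mean()
            acc.append(np.corrcoef(pred, y[te])[0, 1])
        print(np.mean(acc))                # prediction accuracy (correlation)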

  19. A mathematical model and numerical method for thermoelectric DNA sequencing

    Science.gov (United States)

    Shi, Liwei; Guilbeau, Eric J.; Nestorova, Gergana; Dai, Weizhong

    2014-05-01

    Single nucleotide polymorphisms (SNPs) are single base pair variations within the genome that are important indicators of genetic predisposition towards specific diseases. This study explores the feasibility of SNP detection using a thermoelectric sequencing method that measures the heat released when DNA polymerase inserts a deoxyribonucleoside triphosphate into a DNA strand. We propose a three-dimensional mathematical model that governs the DNA sequencing device, with a reaction zone containing the DNA template/primer complex immobilized on the surface of the lower channel wall. The model is then solved numerically, yielding the concentrations of the reactants and the temperature distribution. Results indicate that when the nucleoside is complementary to the next base in the DNA template, polymerization occurs, lengthening the complementary strand and releasing thermal energy with a measurable temperature change. This implies that the proposed thermoelectric device for sequencing DNA may be feasible for identifying specific genes in individuals.
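
    A one-dimensional caricature of the thermal part of the model, heat diffusing along the channel from a localized polymerization source, conveys the coupling the paper solves in 3-D. All parameters below are illustrative, and convection and reactant transport are omitted.

        import numpy as np

        # 1-D heat diffusion with a localized source over the reaction zone.
        nx, L = 200, 1e-3              # grid points, channel length (m)
        dx = L / (nx - 1)
        alpha = 1.4e-7                 # thermal diffusivity of water (m^2/s)
        dt = 0.4 * dx**2 / alpha       # explicit (FTCS) stability limit
        q = np.zeros(nx)
        q[90:110] = 50.0               # heat release in the reaction zone (K/s)

        T = np.zeros(nx)               # temperature rise above ambient
        for _ in range(2000):
            lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
            T += dt * (alpha * lap + q)
            T[0] = T[-1] = 0.0         # channel ends held at ambient
        print(T.max())                 # peak rise over the reaction zone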

  20. Modeling of electromigration salt removal methods in building materials

    DEFF Research Database (Denmark)

    Johannesson, Björn; Ottosen, Lisbeth M.

    2008-01-01

    and the effect of the composition of the ionic constituents on the overall behavior of the salt removal process. The model is obtained by assigning a Fick's-law type of assumption for each ionic species considered, and by assuming that all ions are affected by the applied external electrical field in accordance with their ionic mobility properties. It is further assumed that Gauss's law can be used to calculate the internal electrical field induced by the diffusion itself. In this manner, the external electrical field applied can be modeled simply by assigning proper boundary conditions for the equation calculating the electrical field. A tailor-made finite element code is written, capable of solving the transient non-linear coupled set of differential equations numerically. A truly implicit time integration scheme is used together with a modified Newton-Raphson method to tackle the non-linearities.
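
    The transport law in the model combines Fickian diffusion with migration in the electric field. The sketch below advances a single cation species in 1-D with an explicit scheme under a fixed applied field; in the paper the field is instead coupled to the ion distribution through Gauss's law and the system is solved with finite elements.

        import numpy as np

        # Explicit 1-D diffusion + electromigration for one ionic species:
        #   J = -D * ( dc/dx + z * c * (F/RT) * dphi/dx )
        F_RT = 96485.0 / (8.314 * 298.0)   # F/RT at 25 C, 1/V
        nx, Lx = 100, 0.05                 # grid points, specimen thickness (m)
        dx = Lx / (nx - 1)
        D, z = 1.3e-9, 1                   # diffusivity (m^2/s), valence (e.g. Na+)
        dphi_dx = -1.0 / Lx                # 1 V applied across the specimen
        dt = 0.2 * dx**2 / D               # explicit stability limit

        c = np.full(nx, 100.0)             # initial salt content, mol/m^3
        for _ in range(2000):
            grad = (c[1:] - c[:-1]) / dx               # dc/dx at cell faces
            cf = 0.5 * (c[1:] + c[:-1])                # face concentrations
            J = -D * (grad + z * cf * F_RT * dphi_dx)  # Nernst-Planck flux
            c[1:-1] -= dt * (J[1:] - J[:-1]) / dx      # interior update
            c[-1] = 0.0                                # ions removed at electrode
        # c[0] is left untouched: a fixed-concentration (reservoir) boundary.
        print(c[::10])                                 # depletion front moves inward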