WorldWideScience

Sample records for model minimal permutations

  1. A novel particle swarm optimization algorithm for permutation flow-shop scheduling to minimize makespan

    International Nuclear Information System (INIS)

    Lian Zhigang; Gu Xingsheng; Jiao Bin

    2008-01-01

    It is well known that the flow-shop scheduling problem (FSSP) is a branch of production scheduling and is NP-hard; although many approaches have been applied to permutation flow-shop scheduling to minimize makespan, current algorithms cannot guarantee optimality even for moderately sized problems. Several studies have applied PSO to continuous optimization problems, but few have applied it to discrete scheduling problems. In this paper, according to the discrete characteristics of the FSSP, a novel particle swarm optimization (NPSO) algorithm is presented and successfully applied to permutation flow-shop scheduling to minimize makespan. Computational experiments on seven representative instances (Taillard) based on practical data were performed; comparing the NPSO with a standard GA, we find that the NPSO is clearly more effective than the standard GA for the FSSP with makespan minimization.
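The makespan criterion minimized above can be computed for any job permutation with the standard flow-shop recursion; the sketch below (illustrative data layout and function names, not the authors' NPSO) also shows why exhaustive search is hopeless beyond tiny instances.

```python
from itertools import permutations

def makespan(order, proc_times):
    """Completion time of the last job on the last machine for a given
    job permutation, via the standard flow-shop recursion."""
    n_machines = len(proc_times[0])
    finish = [0] * n_machines          # finish time on each machine so far
    for job in order:
        for m in range(n_machines):
            start = max(finish[m], finish[m - 1] if m > 0 else 0)
            finish[m] = start + proc_times[job][m]
    return finish[-1]

def best_order(proc_times):
    """Exhaustive search over all n! permutations; only viable for tiny
    instances, which is why the FSSP calls for heuristics such as PSO."""
    jobs = range(len(proc_times))
    return min(permutations(jobs), key=lambda o: makespan(o, proc_times))
```

For the toy instance `[[3, 2], [1, 4]]` (two jobs, two machines), scheduling job 1 first yields makespan 7 versus 9 for the opposite order.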

  2. A discrete firefly meta-heuristic with local search for makespan minimization in permutation flow shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Nader Ghaffari-Nasab

    2010-07-01

    During the past two decades, there has been increasing interest in the permutation flow shop with different types of objective functions, such as minimizing the makespan, the weighted mean flow-time, etc. The permutation flow shop is formulated as a mixed integer program and is classified as NP-hard. Therefore, a direct solution is not available, and meta-heuristic approaches must be used to find near-optimal solutions. In this paper, we present a new discrete firefly meta-heuristic to minimize the makespan of the permutation flow shop scheduling problem. The results of the proposed method are compared with those of an existing ant colony optimization technique. The preliminary results indicate that the new method performs better than the ant colony approach on some well-known benchmark problems.

  3. The Structure of a Thermophilic Kinase Shapes Fitness upon Random Circular Permutation.

    Science.gov (United States)

    Jones, Alicia M; Mehta, Manan M; Thomas, Emily E; Atkinson, Joshua T; Segall-Shapiro, Thomas H; Liu, Shirley; Silberg, Jonathan J

    2016-05-20

    Proteins can be engineered for synthetic biology through circular permutation, a sequence rearrangement in which native protein termini become linked and new termini are created elsewhere through backbone fission. However, it remains challenging to anticipate a protein's functional tolerance to circular permutation. Here, we describe new transposons for creating libraries of randomly circularly permuted proteins that minimize peptide additions at their termini, and we use transposase mutagenesis to study the tolerance of a thermophilic adenylate kinase (AK) to circular permutation. We find that libraries expressing permuted AKs with either short or long peptides amended to their N-terminus yield distinct sets of active variants and present evidence that this trend arises because permuted protein expression varies across libraries. Mapping all sites that tolerate backbone cleavage onto AK structure reveals that the largest contiguous regions of sequence that lack cleavage sites are proximal to the phosphotransfer site. A comparison of our results with a range of structure-derived parameters further showed that retention of function correlates to the strongest extent with the distance to the phosphotransfer site, amino acid variability in an AK family sequence alignment, and residue-level deviations in superimposed AK structures. Our work illustrates how permuted protein libraries can be created with minimal peptide additions using transposase mutagenesis, and it reveals a challenge of maintaining consistent expression across permuted variants in a library that minimizes peptide additions. Furthermore, these findings provide a basis for interpreting responses of thermophilic phosphotransferases to circular permutation by calibrating how different structure-derived parameters relate to retention of function in a cellular selection.

  4. Permutations

    International Nuclear Information System (INIS)

    Arnold, Vladimir I

    2009-01-01

    Decompositions into cycles for random permutations of a large number of elements are very different (in their statistics) from the same decompositions for algebraic permutations (defined by linear or projective transformations of finite sets). This paper presents tables giving both these and other statistics, as well as a comparison of them with the statistics of involutions or permutations with all their cycles of even length. The inclusions of a point in cycles of various lengths turn out to be equiprobable events for random permutations. The number of permutations of 2N elements with all cycles of even length turns out to be the square of an integer (namely, of (2N-1)!!). The number of cycles of projective permutations (over a field with an odd prime number of elements) is always even. These and other empirically discovered theorems are proved in the paper. Bibliography: 6 titles.
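One of the empirically discovered theorems proved in the paper, that the number of permutations of 2N elements with all cycles of even length is the square of (2N-1)!!, is easy to check by brute force for small N (a sketch; helper names are mine):

```python
from itertools import permutations

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a tuple of images of 0..n-1."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        x, length = start, 0
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        lengths.append(length)
    return lengths

def count_all_even_cycles(n):
    """Permutations of n elements all of whose cycles have even length."""
    return sum(1 for p in permutations(range(n))
               if all(c % 2 == 0 for c in cycle_lengths(p)))

def double_factorial(k):
    """k!! = k * (k - 2) * (k - 4) * ..."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result
```

For 2N = 4 this gives 9 = (3!!)^2, and for 2N = 6 it gives 225 = (5!!)^2, matching the theorem.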

  5. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
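A minimal sketch of the idea as I read it from the abstract (an assumption, not the authors' exact procedure): for the LASSO, the smallest penalty at which every coefficient is zero is lambda_max = max_j |x_j . y|, so the distribution of that statistic over random permutations of the response calibrates a penalty at which a null (permuted) response selects nothing.

```python
import random

def lambda_max(X, y):
    """Smallest LASSO penalty (up to scaling) with an all-zero solution:
    the largest absolute inner product between a column of X and y."""
    n_obs, n_feat = len(X), len(X[0])
    return max(abs(sum(X[i][j] * y[i] for i in range(n_obs)))
               for j in range(n_feat))

def permutation_penalty(X, y, n_perm=200, seed=0):
    """Median of lambda_max over random permutations of the response
    (an illustrative summary statistic; the choice of quantile is mine)."""
    rng = random.Random(seed)
    y_perm = list(y)
    stats = []
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        stats.append(lambda_max(X, y_perm))
    stats.sort()
    return stats[len(stats) // 2]
```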

  6. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations, in which units are mapped to other units arbitrarily (with no one-to-one mapping), perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
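The sequential-encoding role of random permutations can be sketched with permutation powers: the word at position k is stored under k applications of a fixed random permutation, and decoded by undoing them and comparing against the vocabulary. The dimensions, vocabulary, and cleanup rule below are illustrative assumptions, not the paper's exact model.

```python
import random

DIM = 2048
rng = random.Random(42)

def rand_vec():
    """Random bipolar item vector."""
    return [rng.choice((-1, 1)) for _ in range(DIM)]

perm = list(range(DIM))
rng.shuffle(perm)
inv_perm = [0] * DIM
for i, j in enumerate(perm):
    inv_perm[j] = i                      # inverse permutation

def permute(vec, p, times):
    for _ in range(times):
        vec = [vec[p[i]] for i in range(DIM)]
    return vec

vocab = {w: rand_vec() for w in ("the", "cat", "sat", "down")}
sentence = ["the", "cat", "sat", "down"]

# single memory trace: position k is encoded by k applications of perm
trace = [0] * DIM
for k, word in enumerate(sentence):
    trace = [t + v for t, v in zip(trace, permute(vocab[word], perm, k))]

def recall(position):
    """Undo the positional permutation, then pick the most similar item;
    the other positions' entries become near-orthogonal noise."""
    decoded = permute(trace, inv_perm, position)
    return max(vocab, key=lambda w: sum(a * b for a, b in zip(decoded, vocab[w])))
```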

  7. A permutation test for the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias

    2010-01-01

    signals. Several statistical procedures have been used for testing the race model inequality. However, the commonly employed procedure does not control the Type I error. In this article a permutation test is described that keeps the Type I error at the desired level. Simulations show that the power...

  8. EPC: A Provably Secure Permutation Based Compression Function

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid

    2010-01-01

    The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of compression function is larger than the input-length of the permutation function. In this paper, we consider permutation based compression functions that have input lengths sh...

  9. Tensor models, Kronecker coefficients and permutation centralizer algebras

    Science.gov (United States)

    Geloun, Joseph Ben; Ramgoolam, Sanjaye

    2017-11-01

    We show that the counting of observables and correlators for a 3-index tensor model is organized by the structure of a family of permutation centralizer algebras. These algebras are shown to be semi-simple and their Wedderburn-Artin decompositions into matrix blocks are given in terms of Clebsch-Gordan coefficients of symmetric groups. The matrix basis for the algebras also gives an orthogonal basis for the tensor observables which diagonalizes the Gaussian two-point functions. The centres of the algebras are associated with correlators which are expressible in terms of Kronecker coefficients (Clebsch-Gordan multiplicities of symmetric groups). The color-exchange symmetry present in the Gaussian model, as well as in a large class of interacting models, is used to refine the description of the permutation centralizer algebras. This discussion is extended to a general number of colors d: it is used to prove the integrality of an infinite family of number sequences related to color-symmetrizations of colored graphs, and expressible in terms of symmetric group representation theory data. Generalizing a connection between matrix models and Belyi maps, correlators in Gaussian tensor models are interpreted in terms of covers of singular 2-complexes. There is an intriguing difference between matrix and higher-rank tensor models in the computational complexity of superficially comparable correlators of observables parametrized by Young diagrams.

  10. PERMutation Using Transposase Engineering (PERMUTE): A Simple Approach for Constructing Circularly Permuted Protein Libraries.

    Science.gov (United States)

    Jones, Alicia M; Atkinson, Joshua T; Silberg, Jonathan J

    2017-01-01

    Rearrangements that alter the order of a protein's sequence are used in the lab to study protein folding, improve activity, and build molecular switches. One of the simplest ways to rearrange a protein sequence is through random circular permutation, where native protein termini are linked together and new termini are created elsewhere through random backbone fission. Transposase mutagenesis has emerged as a simple way to generate libraries encoding different circularly permuted variants of proteins. With this approach, a synthetic transposon (called a permuteposon) is randomly inserted throughout a circularized gene to generate vectors that express different permuted variants of a protein. In this chapter, we outline the protocol for constructing combinatorial libraries of circularly permuted proteins using transposase mutagenesis, and we describe the different permuteposons that have been developed to facilitate library construction.
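The rearrangement itself is just a rotation of the sequence: each of the n possible backbone fission points yields one circular permutant. A sketch (the function name is mine):

```python
def circular_permutants(seq):
    """All circular permutants of a sequence: variant i joins the native
    termini and opens new termini before position i."""
    return [seq[i:] + seq[:i] for i in range(len(seq))]
```

For a length-n protein this enumerates exactly n variants, which is the space a permuteposon insertion samples from at random.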

  11. Permutation-invariant distance between atomic configurations

    Science.gov (United States)

    Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel

    2015-09-01

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant under permutations of atoms, avoiding the time-consuming minimization over atom matchings required by other common criteria (like the root-mean-square distance). Finally, the invariance under global rotations is accounted for by a minimization procedure in the space of rotations solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose satisfies the properties of a metric on the space of atomic configurations. Two examples of applications are proposed. The first consists in evaluating the faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity.
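For contrast, the naive permutation-dependent alternative that the proposed distance avoids looks like this: minimizing RMSD over all relabelings of the atoms, which is O(n!) and only feasible for a handful of particles. A sketch of that brute-force baseline (not the authors' functional representation):

```python
from itertools import permutations
from math import dist, sqrt

def min_rmsd_over_relabelings(conf_a, conf_b):
    """Root-mean-square deviation between two point configurations,
    minimized by brute force over all permutations of the atom labels."""
    n = len(conf_a)
    best = float("inf")
    for p in permutations(range(n)):
        ss = sum(dist(conf_a[i], conf_b[p[i]]) ** 2 for i in range(n))
        best = min(best, sqrt(ss / n))
    return best
```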

  13. Invalid Permutation Tests

    Directory of Open Access Journals (Sweden)

    Mikel Aickin

    2010-01-01

    Permutation tests are often presented in a rather casual manner, in both introductory and advanced statistics textbooks. The appeal of the cleverness of the procedure seems to replace the need for a rigorous argument that it produces valid hypothesis tests. The consequence of this educational failing has been a widespread belief in a "permutation principle", which is supposed invariably to give tests that are valid by construction, under an absolute minimum of statistical assumptions. Several lines of argument are presented here to show that the permutation principle itself can be invalid, concentrating on the Fisher-Pitman permutation test for two means. A simple counterfactual example illustrates the general problem, and a slightly more elaborate counterfactual argument is used to explain why the main mathematical proof of the validity of permutation tests is mistaken. Two modifications of the permutation test are suggested and appear to be valid in a very modest simulation. In instances where simulation software is readily available, investigating the validity of a specific permutation test can be done easily, requiring only a minimal understanding of statistical technicalities.
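For reference, the Fisher-Pitman test scrutinised here, in its textbook exact form: pool the observations, re-split them in every possible way, and compare the observed mean difference with the resulting distribution (a sketch; variable names are mine):

```python
from itertools import combinations
from statistics import mean

def fisher_pitman_p(sample_x, sample_y):
    """Exact two-sided p-value: fraction of label reassignments whose
    absolute mean difference is at least the observed one."""
    pooled = list(sample_x) + list(sample_y)
    n = len(sample_x)
    observed = abs(mean(sample_x) - mean(sample_y))
    total = extreme = 0
    for idx in combinations(range(len(pooled)), n):
        group_x = [pooled[i] for i in idx]
        group_y = [pooled[i] for i in range(len(pooled)) if i not in idx]
        total += 1
        if abs(mean(group_x) - mean(group_y)) >= observed - 1e-12:
            extreme += 1
    return extreme / total
```

For samples [1, 2] and [10, 11], only 2 of the 6 reassignments reach the observed difference, giving p = 1/3.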

  14. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    Science.gov (United States)

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms for computing the minimal reversal distance have been proposed, culminating in the currently best-known approximation ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first uses the technique of opposition-based learning, leading to an opposition-based memetic algorithm (OBMA); the second improves the previous algorithm by applying the heuristic of two-breakpoint elimination, leading to a hybrid approach (Hybrid-OBMA). Several experiments were performed with one hundred randomly generated permutations, single benchmark permutations, and biological permutations. The results showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Hybrid-OBMA also improved on the results of OBMA for permutations of length greater than or equal to 60. The applicability of our proposed algorithms was checked by processing permutations based on biological data, in which case OBMA gave the best average results for all instances.
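The breakpoint notion underlying heuristics of this kind can be made concrete: frame the permutation with 0 and n+1 and count adjacent entries that are not consecutive values; since one reversal removes at most two breakpoints, b(pi)/2 lower-bounds the reversal distance. A sketch:

```python
def breakpoints(perm):
    """Number of adjacent pairs (with 0 and n+1 framing the permutation)
    whose values differ by more than 1."""
    framed = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(framed, framed[1:]) if abs(a - b) != 1)

def reverse_segment(perm, i, j):
    """Apply the reversal that flips positions i..j (0-based, inclusive)."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]
```

For [3, 2, 1] there are two breakpoints (at both ends of the frame), and the single reversal of the whole segment removes both.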

  15. Visual recognition of permuted words

    Science.gov (United States)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

    In the current study, we examine how letter permutation affects visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison between the reading behavior of people in these languages is also presented. We place our study in the context of dual-route theories of reading and observe that dual-route theory is consistent with our hypothesis of a distinction in the underlying cognitive behavior for reading permuted and non-permuted words. We conducted three lexical decision experiments to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and t-tests to determine significant differences in response-time latencies between the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% in the case of Urdu and 11% in the case of German. We also found a considerable difference in reading behavior between the cursive and alphabetic scripts, and observed that reading Urdu is comparatively slower than reading German owing to the characteristics of its cursive script.

  16. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    Science.gov (United States)

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.

  17. Determination of Pavement Rehabilitation Activities through a Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Sangyum Lee

    2013-01-01

    This paper presents a mathematical programming model for optimal pavement rehabilitation planning. The model maximizes the rehabilitation area through a newly developed permutation algorithm based on the procedures outlined in the harmony search (HS) algorithm. Additionally, the proposed algorithm provides an optimal solution method for the problem of multilocation rehabilitation activities on pavement structures, using empirical deterioration and rehabilitation-effectiveness models under a limited maintenance budget. Thus, nonlinear pavement performance and rehabilitation-activity decision models were used to maximize the rehabilitation-area objective function within a limited budget through the permutation algorithm. Our results showed that the heuristic permutation algorithm provided a good optimum in terms of maximizing the rehabilitation area, compared with the worst-first maintenance method currently used in Seoul.

  18. Infinite permutations vs. infinite words

    Directory of Open Access Journals (Sweden)

    Anna E. Frid

    2011-08-01

    I am going to compare well-known properties of infinite words with those of infinite permutations, a new object studied since the mid-2000s. Basically, it was Sergey Avgustinovich who invented this notion, although in an early study by Davis et al. permutations appear in a very similar framework as early as 1977. I will discuss the periodicity of permutations, their complexity according to several definitions, and their automatic properties, that is, the usual parameters of words, now extended to permutations and behaving sometimes similarly to those for words and sometimes not. Another series of results concerns permutations generated by infinite words and their properties. Although this direction of research is young, many people, including two other speakers of this meeting, have participated in it, and I believe that several more topics for further study are really promising.

  19. A complete classification of minimal non-PS-groups

    Indian Academy of Sciences (India)

    Abstract. Let G be a finite group. A subgroup H of G is called s-permutable in G if it permutes with every Sylow subgroup of G, and G is called a PS-group if all minimal subgroups and cyclic subgroups with order 4 of G are s-permutable in G. In this paper, we give a complete classification of finite groups which are not ...

  20. A non-permutation flowshop scheduling problem with lot streaming: A Mathematical model

    Directory of Open Access Journals (Sweden)

    Daniel Rossit

    2016-06-01

    In this paper, we investigate the use of lot streaming in non-permutation flowshop scheduling problems. The objective is to minimize the makespan subject to the standard flowshop constraints, but where it is now permitted to reorder jobs between machines. In addition, the jobs can be divided into manageable sublots, a strategy known as lot streaming. Computational experiments show that lot streaming reduces the makespan by up to 43% for a wide range of instances when compared to the case in which no job splitting is applied. The benefits grow as the number of stages in the production process increases, but only up to a limit: beyond a certain point, dividing jobs into additional sublots does not improve the solution.

  1. Permutation orbifolds and chaos

    NARCIS (Netherlands)

    Belin, A.

    2017-01-01

    We study out-of-time-ordered correlation functions in permutation orbifolds at large central charge. We show that they do not decay at late times for arbitrary choices of low-dimension operators, indicating that permutation orbifolds are non-chaotic theories. This is in agreement with the fact they

  2. Gray Code for Cayley Permutations

    Directory of Open Access Journals (Sweden)

    J.-L. Baril

    2003-10-01

    A length-n Cayley permutation p of a totally ordered set S is a length-n sequence of elements from S, subject to the condition that if an element x appears in p, then all elements y < x also appear in p. In this paper, we give a Gray code list for the set of length-n Cayley permutations. Two successive permutations in this list differ in at most two positions.
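Independently of any Gray code ordering, Cayley permutations are easy to enumerate straight from the definition above; their count is the ordered Bell (Fubini) number 1, 3, 13, 75, ... A brute-force sketch:

```python
from itertools import product

def cayley_permutations(n):
    """All length-n Cayley permutations over 1..n: sequences in which
    every value below any value used also appears somewhere."""
    return [seq for seq in product(range(1, n + 1), repeat=n)
            if set(seq) == set(range(1, max(seq) + 1))]
```

For n = 2 this yields (1, 1), (1, 2), and (2, 1); the sequence (2, 2) is excluded because 1 never appears.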

  3. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    Science.gov (United States)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model assigns a sequence to each job for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule that optimizes a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem with the criterion of minimizing total weighted quadratic completion time. Under a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst-case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, in which a new crossover method with multiple-point insertion improves the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.

  4. Sorting permutations by prefix and suffix rearrangements.

    Science.gov (United States)

    Lintzmayer, Carla Negri; Fertin, Guillaume; Dias, Zanoni

    2017-02-01

    Some interesting combinatorial problems have been motivated by genome rearrangements, which are mutations that affect large portions of a genome. When we represent genomes as permutations, the goal is to transform a given permutation into the identity permutation with the minimum number of rearrangements. When they affect segments from the beginning (respectively end) of the permutation, they are called prefix (respectively suffix) rearrangements. This paper presents results for rearrangement problems that involve prefix and suffix versions of reversals and transpositions considering unsigned and signed permutations. We give 2-approximation and ([Formula: see text])-approximation algorithms for these problems, where [Formula: see text] is a constant divided by the number of breakpoints (pairs of consecutive elements that should not be consecutive in the identity permutation) in the input permutation. We also give bounds for the diameters concerning these problems and provide ways of improving the practical results of our algorithms.
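Prefix reversals are the flips of the classical pancake-sorting problem. The textbook algorithm below (an illustration of the operation, not the authors' approximation algorithms) sorts any permutation with at most 2n - 3 flips by repeatedly bringing the largest unsorted element to the front and then flipping it into place:

```python
def prefix_reverse(perm, k):
    """Reverse the first k elements (one prefix reversal, or 'flip')."""
    return perm[:k][::-1] + perm[k:]

def pancake_sort(perm):
    """Sort by prefix reversals; returns the sorted list and the flip sizes."""
    perm = list(perm)
    flips = []
    for size in range(len(perm), 1, -1):
        pos = perm.index(max(perm[:size]))
        if pos == size - 1:
            continue                      # already in place
        if pos != 0:                      # bring the maximum to the front...
            flips.append(pos + 1)
            perm = prefix_reverse(perm, pos + 1)
        flips.append(size)                # ...then flip it into place
        perm = prefix_reverse(perm, size)
    return perm, flips
```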

  5. On non-permutation solutions to some two machine flow shop scheduling problems

    NARCIS (Netherlands)

    V. Strusevich (Vitaly); P.J. Zwaneveld (Peter)

    1994-01-01

    In this paper, we study two versions of the two machine flow shop scheduling problem, where schedule length is to be minimized. First, we consider the two machine flow shop with setup, processing, and removal times separated. It is shown that an optimal solution need not be a permutation

  6. An Enhanced Discrete Artificial Bee Colony Algorithm to Minimize the Total Flow Time in Permutation Flow Shop Scheduling with Limited Buffers

    Directory of Open Access Journals (Sweden)

    Guanlong Deng

    2016-01-01

    This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with buffer capacity. First, solutions are represented as discrete job permutations that convert directly to active schedules. Then, we present a simple and effective scheme called best insertion for the employed and onlooker bees and introduce a combined local search exploring both the insertion and swap neighborhoods. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances; the comparisons show that the proposed algorithm not only solves the benchmark set better than the existing discrete differential evolution and iterated greedy algorithms but also outperforms two recently proposed discrete artificial bee colony algorithms.

  7. 1-Colored Archetypal Permutations and Strings of Degree n

    Directory of Open Access Journals (Sweden)

    Gheorghe Eduard Tara

    2012-10-01

    New notions related to permutations are introduced here. We present the string of a 1-colored permutation as a closed planar curve and the fundamental 1-colored permutation as an equivalence class with respect to the equivalence of strings of 1-colored permutations. We give formulas for the number of 1-colored archetypal permutations of degree n. We establish an algorithm to identify the 1-colored archetypal permutations of degree n and present the atlas of the 1-colored archetypal strings of degree n, n ≤ 7, based on this algorithm.

  8. Finite Cycle Gibbs Measures on Permutations of ℤ

    Science.gov (United States)

    Armendáriz, Inés; Ferrari, Pablo A.; Groisman, Pablo; Leonardi, Florencia

    2015-03-01

    We consider Gibbs distributions on the set of permutations of ℤ associated to the Hamiltonian H(σ) = Σ_x V(σ(x) − x), where σ is a permutation and V is a strictly convex potential. Call finite-cycle those permutations composed of finite cycles only. We give conditions on V ensuring that for large enough temperature there exists a unique infinite-volume ergodic Gibbs measure concentrating mass on finite-cycle permutations; this measure is equal to the thermodynamic limit of the specifications with identity boundary conditions. We construct this measure as the unique invariant measure of a Markov process on the set of finite-cycle permutations that can be seen as a loss network, a continuous-time birth and death process of cycles interacting by exclusion, an approach proposed by Fernández, Ferrari and Garcia. Define the shift permutation σ_k by σ_k(x) = x + k. In the Gaussian case V(x) = x², we show that for each k there is an ergodic Gibbs measure equal to the thermodynamic limit of the specifications with σ_k boundary conditions. For a general potential V, we prove the existence of Gibbs measures when the temperature is larger than some V-dependent value.

  9. Tensor Permutation Matrices in Finite Dimensions

    OpenAIRE

    Christian, Rakotonirina

    2005-01-01

    We have generalised the properties, with respect to the tensor product, of the 4x4 permutation matrix that we call a tensor commutation matrix. Tensor commutation matrices can be constructed with or without calculation. A formula that allows us to construct a tensor permutation matrix, which is a generalisation of the tensor commutation matrix, has been established. The expression for an element of a tensor commutation matrix has been generalised to the case of any element of a tensor permutation ma...
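The defining property of the tensor commutation matrix, K (A ⊗ B) K = B ⊗ A, can be checked numerically; a sketch with plain nested lists, with K written out for the 2 ⊗ 2 case:

```python
def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# K maps e_i (x) e_j to e_j (x) e_i; here for the 2 (x) 2 case it is the
# identity with the two middle basis vectors swapped.
K = [[1, 0, 0, 0],
     [0, 0, 1, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]
```

Since K is its own inverse, conjugating any Kronecker product A ⊗ B by K swaps the factors.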

  10. Sorting signed permutations by short operations.

    Science.gov (United States)

    Galvão, Gustavo Rodrigues; Lee, Orlando; Dias, Zanoni

    2015-01-01

    During evolution, global mutations may alter the order and the orientation of the genes in a genome. Such mutations are referred to as rearrangement events, or simply operations. In unichromosomal genomes, the most common operations are reversals, which are responsible for reversing the order and orientation of a sequence of genes, and transpositions, which are responsible for switching the location of two contiguous portions of a genome. The problem of computing the minimum sequence of operations that transforms one genome into another - which is equivalent to the problem of sorting a permutation into the identity permutation - is a well-studied problem that finds application in comparative genomics. There are a number of works concerning this problem in the literature, but they generally do not take into account the length of the operations (i.e. the number of genes affected by the operations). Since it has been observed that short operations are prevalent in the evolution of some species, algorithms that efficiently solve this problem in the special case of short operations are of interest. In this paper, we investigate the problem of sorting a signed permutation by short operations. More precisely, we study four flavors of this problem: (i) the problem of sorting a signed permutation by reversals of length at most 2; (ii) the problem of sorting a signed permutation by reversals of length at most 3; (iii) the problem of sorting a signed permutation by reversals and transpositions of length at most 2; and (iv) the problem of sorting a signed permutation by reversals and transpositions of length at most 3. We present polynomial-time solutions for problems (i) and (iii), a 5-approximation for problem (ii), and a 3-approximation for problem (iv). Moreover, we show that the expected approximation ratio of the 5-approximation algorithm is not greater than 3 for random signed permutations with more than 12 elements. Finally, we present experimental results that show
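As a hedged illustration of the operations studied above (not the paper's approximation algorithms): a length-1 reversal flips the sign of one element, and a length-2 reversal swaps two adjacent elements and flips both signs. The sketch below sorts a signed permutation using only such short reversals via a bubble-sort strategy, which is valid but makes no claim of minimality.

```python
def rev(pi, i, j):
    """Reverse segment pi[i..j] in place, flipping signs (a signed reversal)."""
    pi[i:j + 1] = [-x for x in reversed(pi[i:j + 1])]

def sort_by_short_reversals(pi):
    """Sort a signed permutation to +1..+n using reversals of length <= 2.
    Bubble-sort with length-2 reversals, then fix remaining signs with
    length-1 reversals. Valid, but NOT guaranteed to be minimal."""
    pi = list(pi)
    ops = []
    n = len(pi)
    for _ in range(n):
        for i in range(n - 1):
            if abs(pi[i]) > abs(pi[i + 1]):
                rev(pi, i, i + 1)      # swaps neighbours, flips both signs
                ops.append(('rev', i, i + 1))
    for i in range(n):
        if pi[i] < 0:
            rev(pi, i, i)              # length-1 reversal flips one sign
            ops.append(('rev', i, i))
    return pi, ops
```

For example, `sort_by_short_reversals([3, -1, -2])` sorts the permutation in three short reversals; a minimal-length sequence would require the paper's algorithms.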

  11. Permutation parity machines for neural synchronization

    International Nuclear Information System (INIS)

    Reyes, O M; Kopitzke, I; Zimmermann, K-H

    2009-01-01

Synchronization of neural networks has been studied in recent years as an alternative approach to cryptographic applications such as the realization of symmetric key-exchange protocols. This paper presents a first view of the so-called permutation parity machine, an artificial neural network proposed as a binary variant of the tree parity machine. The dynamics of the synchronization process by mutual learning between permutation parity machines is analytically studied and the results are compared with those of tree parity machines. It turns out that, for neural synchronization, permutation parity machines form a viable alternative to tree parity machines.

  12. Complete permutation Gray code implemented by finite state machine

    Directory of Open Access Journals (Sweden)

    Li Peng

    2014-09-01

An enumerating method for the complete permutation array is proposed. The list of n! permutations based on a Gray code defined over the finite symbol set Z(n) = {1, 2, …, n} is implemented by a finite state machine, named n-RPGCF. An RPGCF can be used to search permutation codes and provides improved lower bounds on the maximum cardinality of a permutation code in some cases.
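The n-RPGCF construction itself is not given in the abstract; as a hedged sketch of the underlying idea — listing all n! permutations as a Gray code in which consecutive permutations differ by a single adjacent transposition — here is the classic Steinhaus–Johnson–Trotter algorithm.

```python
def sjt_permutations(n):
    """Yield all n! permutations of 1..n so that consecutive permutations
    differ by exactly one adjacent transposition (Steinhaus-Johnson-Trotter)."""
    perm = list(range(1, n + 1))
    dirs = [-1] * n                      # every element initially points left
    yield tuple(perm)
    while True:
        # find the largest "mobile" element: its neighbour in its
        # direction of travel is smaller than itself
        mobile, mi = -1, -1
        for i, x in enumerate(perm):
            j = i + dirs[i]
            if 0 <= j < n and perm[j] < x and x > mobile:
                mobile, mi = x, i
        if mobile == -1:
            return                       # no mobile element: enumeration done
        j = mi + dirs[mi]
        perm[mi], perm[j] = perm[j], perm[mi]
        dirs[mi], dirs[j] = dirs[j], dirs[mi]
        # all elements larger than the moved one reverse direction
        for i, x in enumerate(perm):
            if x > mobile:
                dirs[i] = -dirs[i]
        yield tuple(perm)
```

`list(sjt_permutations(4))` produces all 24 permutations, each reachable from its predecessor by one adjacent swap — the Gray-code property the record exploits.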

  13. A discriminative syntactic model for source permutation via tree transduction

    NARCIS (Netherlands)

    Khalilov, M.; Sima'an, K.; Wu, D.

    2010-01-01

    A major challenge in statistical machine translation is mitigating the word order differences between source and target strings. While reordering and lexical translation choices are often conducted in tandem, source string permutation prior to translation is attractive for studying reordering using

  14. A transposase strategy for creating libraries of circularly permuted proteins.

    Science.gov (United States)

    Mehta, Manan M; Liu, Shirley; Silberg, Jonathan J

    2012-05-01

    A simple approach for creating libraries of circularly permuted proteins is described that is called PERMutation Using Transposase Engineering (PERMUTE). In PERMUTE, the transposase MuA is used to randomly insert a minitransposon that can function as a protein expression vector into a plasmid that contains the open reading frame (ORF) being permuted. A library of vectors that express different permuted variants of the ORF-encoded protein is created by: (i) using bacteria to select for target vectors that acquire an integrated minitransposon; (ii) excising the ensemble of ORFs that contain an integrated minitransposon from the selected vectors; and (iii) circularizing the ensemble of ORFs containing integrated minitransposons using intramolecular ligation. Construction of a Thermotoga neapolitana adenylate kinase (AK) library using PERMUTE revealed that this approach produces vectors that express circularly permuted proteins with distinct sequence diversity from existing methods. In addition, selection of this library for variants that complement the growth of Escherichia coli with a temperature-sensitive AK identified functional proteins with novel architectures, suggesting that PERMUTE will be useful for the directed evolution of proteins with new functions.
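At the sequence level, a circular permutant corresponds to joining the original termini and reopening the chain at a new position; for an ORF or protein string this is simply a rotation. A minimal sketch of that correspondence (the transposase selection steps above are, of course, wet-lab work):

```python
def circular_permutants(seq):
    """Return all circular permutants of a sequence: the chain is notionally
    closed into a loop and re-opened at each position. Index 0 gives the
    original sequence back."""
    return [seq[i:] + seq[:i] for i in range(len(seq))]
```

For a hypothetical 5-residue sequence "MKVLT", `circular_permutants("MKVLT")` yields the five rotations, each with the same composition but different termini.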

  15. Defects and permutation branes in the Liouville field theory

    DEFF Research Database (Denmark)

    Sarkissian, Gor

    2009-01-01

The defects and permutation branes for the Liouville field theory are considered. By exploiting the cluster condition, equations satisfied by permutation branes and defect reflection amplitudes are obtained. It is shown that two types of solutions exist, discrete and continuous families.

  16. Linear algebra of the permutation invariant Crow-Kimura model of prebiotic evolution.

    Science.gov (United States)

    Bratus, Alexander S; Novozhilov, Artem S; Semenov, Yuri S

    2014-10-01

    A particular case of the famous quasispecies model - the Crow-Kimura model with a permutation invariant fitness landscape - is investigated. Using the fact that the mutation matrix in the case of a permutation invariant fitness landscape has a special tridiagonal form, a change of the basis is suggested such that in the new coordinates a number of analytical results can be obtained. In particular, using the eigenvectors of the mutation matrix as the new basis, we show that the quasispecies distribution approaches a binomial one and give simple estimates for the speed of convergence. Another consequence of the suggested approach is a parametric solution to the system of equations determining the quasispecies. Using this parametric solution we show that our approach leads to exact asymptotic results in some cases, which are not covered by the existing methods. In particular, we are able to present not only the limit behavior of the leading eigenvalue (mean population fitness), but also the exact formulas for the limit quasispecies eigenvector for special cases. For instance, this eigenvector has a geometric distribution in the case of the classical single peaked fitness landscape. On the biological side, we propose a mathematical definition, based on the closeness of the quasispecies to the binomial distribution, which can be used as an operational definition of the notorious error threshold. Using this definition, we suggest two approximate formulas to estimate the critical mutation rate after which the quasispecies delocalization occurs. Copyright © 2014 Elsevier Inc. All rights reserved.
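A hedged numerical sketch of the setting (the conventions below are one common choice, not necessarily the paper's): for a permutation-invariant fitness landscape the dynamics collapse onto Hamming classes k = 0..N, the mutation matrix is tridiagonal, and the mean population fitness is the leading eigenvalue of diag(f) + μM, found here by power iteration.

```python
def mean_fitness(f, mu):
    """Leading eigenvalue of A = diag(f) + mu*M for a permutation-invariant
    Crow-Kimura model on Hamming classes k = 0..N, where M is the tridiagonal
    mutation generator (rate mu*k down, mu*(N-k) up, -mu*N on the diagonal).
    Computed by power iteration on a shifted nonnegative matrix."""
    N = len(f) - 1
    A = [[0.0] * (N + 1) for _ in range(N + 1)]
    for k in range(N + 1):
        A[k][k] = f[k] - mu * N
        if k > 0:
            A[k][k - 1] = mu * k
        if k < N:
            A[k][k + 1] = mu * (N - k)
    shift = mu * N + max(abs(x) for x in f) + 1.0  # makes the matrix nonnegative
    v = [1.0] * (N + 1)
    lam = 0.0
    for _ in range(10000):
        w = [sum((A[i][j] + (shift if i == j else 0.0)) * v[j]
                 for j in range(N + 1)) for i in range(N + 1)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam - shift
```

For a single-peaked landscape the eigenvalue approaches the peak fitness as μ → 0 and is always bracketed by the minimal and maximal class fitnesses.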

  17. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  18. Permutation parity machines for neural cryptography.

    Science.gov (United States)

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-06-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  19. Permutation parity machines for neural cryptography

    International Nuclear Information System (INIS)

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-01-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  20. FORMULASI MODEL PERMUTASI SIKLIS DENGAN OBJEK MULTINOMIAL

    Directory of Open Access Journals (Sweden)

    Sukma Adi Perdana

    2016-10-01

This study aims at constructing a mathematical model to count the number of arrangements of objects in a cyclical permutation that has multinomial objects. The model constructed is limited to cyclical permutations with multinomial objects in which at least one kind of object has single cardinality. Modelling is undertaken based on the mathematical structure of cyclical permutations and multinomial permutations. The cyclical permutation model with multinomial objects has been formulated as . The proof of the model has been undertaken by validating the structure and validating the outcome, which was conducted by comparing the counting results of the model with the results of manual enumeration. A theorem on cyclical permutations with multinomial objects has also been developed. Keywords: modelling, cyclical permutation, multinomial permutation
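The model's closed form is elided in the abstract; under the stated restriction (at least one object type of single cardinality, so no arrangement is rotationally periodic) the standard count is (n−1)!/(n₁!·n₂!⋯), which can be checked against brute-force enumeration of rotation classes. A hedged sketch:

```python
from itertools import permutations
from math import factorial

def circular_multiset_count(counts):
    """Circular arrangements of a multiset with at least one singleton type:
    (n-1)! / prod(n_i!). The singleton guarantees no arrangement is
    rotationally periodic, so every rotation class has exactly n members."""
    assert 1 in counts, "formula assumes a singleton object type"
    n = sum(counts)
    total = factorial(n - 1)
    for c in counts:
        total //= factorial(c)
    return total

def circular_brute_force(items):
    """Count rotation-equivalence classes of all distinct orderings."""
    seen = set()
    classes = 0
    for p in set(permutations(items)):
        if p in seen:
            continue
        classes += 1
        for i in range(len(p)):
            seen.add(p[i:] + p[:i])
    return classes
```

For counts (1, 2, 3) — one A, two Bs, three Cs — the formula gives 5!/(1!·2!·3!) = 10, matching the brute-force class count.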

  1. Permutations of massive vacua

    Energy Technology Data Exchange (ETDEWEB)

    Bourget, Antoine [Department of Physics, Universidad de Oviedo, Avenida Calvo Sotelo 18, 33007 Oviedo (Spain); Troost, Jan [Laboratoire de Physique Théorique de l’É cole Normale Supérieure, CNRS,PSL Research University, Sorbonne Universités, 75005 Paris (France)

    2017-05-09

    We discuss the permutation group G of massive vacua of four-dimensional gauge theories with N=1 supersymmetry that arises upon tracing loops in the space of couplings. We concentrate on superconformal N=4 and N=2 theories with N=1 supersymmetry preserving mass deformations. The permutation group G of massive vacua is the Galois group of characteristic polynomials for the vacuum expectation values of chiral observables. We provide various techniques to effectively compute characteristic polynomials in given theories, and we deduce the existence of varying symmetry breaking patterns of the duality group depending on the gauge algebra and matter content of the theory. Our examples give rise to interesting field extensions of spaces of modular forms.

  2. A chronicle of permutation statistical methods 1920–2000, and beyond

    CERN Document Server

    Berry, Kenneth J; Mielke Jr , Paul W

    2014-01-01

    The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally-intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, ana...

  3. Permutations avoiding an increasing number of length-increasing forbidden subsequences

    Directory of Open Access Journals (Sweden)

    Elena Barcucci

    2000-12-01

A permutation π is said to be τ-avoiding if it does not contain any subsequence having all the same pairwise comparisons as τ. This paper concerns the characterization and enumeration of permutations which avoid a set F_j of subsequences increasing both in number and in length at the same time. Let F_j be the set of subsequences of the form σ(j+1)(j+2), σ being any permutation on {1,...,j}. For j=1 the only subsequence in F_1 is 123, and the 123-avoiding permutations are enumerated by the Catalan numbers; for j=2 the subsequences in F_2 are 1234 and 2134, and the (1234, 2134)-avoiding permutations are enumerated by the Schröder numbers; for each value of j greater than 2, F_j contains j! subsequences, each of length j+2, and the permutations avoiding them are enumerated by a number sequence {a_n} such that C_n ≤ a_n ≤ n!, C_n being the n-th Catalan number. For each j we determine the generating function of permutations avoiding the subsequences in F_j according to the length, to the number of left minima and of non-inversions.
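The j = 1 and j = 2 base cases are easy to verify computationally. A hedged brute-force sketch (exponential, so for small n only):

```python
from itertools import permutations, combinations

def avoids(perm, pattern):
    """True iff perm contains no subsequence order-isomorphic to pattern.
    Two sequences of distinct values are order-isomorphic iff their
    argsorts coincide."""
    k = len(pattern)
    rank = sorted(range(k), key=lambda i: pattern[i])
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        if sorted(range(k), key=lambda i: sub[i]) == rank:
            return False
    return True

def count_avoiders(n, pattern):
    """Number of permutations of 1..n avoiding the given pattern."""
    return sum(avoids(p, pattern) for p in permutations(range(1, n + 1)))
```

Counting 123-avoiders for n = 1..5 reproduces the Catalan numbers 1, 2, 5, 14, 42, and the permutations of length 4 avoiding both 1234 and 2134 number 22, the corresponding Schröder number.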

  4. Patterns in Permutations and Words

    CERN Document Server

    Kitaev, Sergey

    2011-01-01

    There has been considerable interest recently in the subject of patterns in permutations and words, a new branch of combinatorics with its roots in the works of Rotem, Rogers, and Knuth in the 1970s. Consideration of the patterns in question has been extremely interesting from the combinatorial point of view, and it has proved to be a useful language in a variety of seemingly unrelated problems, including the theory of Kazhdan--Lusztig polynomials, singularities of Schubert varieties, interval orders, Chebyshev polynomials, models in statistical mechanics, and various sorting algorithms, inclu

  5. An AUC-based permutation variable importance measure for random forests.

    Science.gov (United States)

    Janitza, Silke; Strobl, Carolin; Boulesteix, Anne-Laure

    2013-04-05

    The random forest (RF) method is a commonly used tool for classification with high dimensional data as well as for ranking candidate predictors based on the so-called random forest variable importance measures (VIMs). However the classification performance of RF is known to be suboptimal in case of strongly unbalanced data, i.e. data where response class sizes differ considerably. Suggestions were made to obtain better classification performance based either on sampling procedures or on cost sensitivity analyses. However to our knowledge the performance of the VIMs has not yet been examined in the case of unbalanced response classes. In this paper we explore the performance of the permutation VIM for unbalanced data settings and introduce an alternative permutation VIM based on the area under the curve (AUC) that is expected to be more robust towards class imbalance. We investigated the performance of the standard permutation VIM and of our novel AUC-based permutation VIM for different class imbalance levels using simulated data and real data. The results suggest that the new AUC-based permutation VIM outperforms the standard permutation VIM for unbalanced data settings while both permutation VIMs have equal performance for balanced data settings. The standard permutation VIM loses its ability to discriminate between associated predictors and predictors not associated with the response for increasing class imbalance. It is outperformed by our new AUC-based permutation VIM for unbalanced data settings, while the performance of both VIMs is very similar in the case of balanced classes. The new AUC-based VIM is implemented in the R package party for the unbiased RF variant based on conditional inference trees. The codes implementing our study are available from the companion website: http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/070_drittmittel/janitza/index.html.
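A minimal, dependency-free sketch of the idea (not the party-package implementation): the importance of a feature is the mean drop in AUC when that feature's column is permuted, with the model held fixed.

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via pairwise comparisons (Mann-Whitney),
    counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            wins += 1.0 if p > q else (0.5 if p == q else 0.0)
    return wins / (len(pos) * len(neg))

def auc_permutation_vim(predict, X, y, n_repeats=20, seed=0):
    """AUC-based permutation variable importance: for each feature j, the
    mean decrease in AUC after shuffling column j. `predict` maps a row to
    a score; X is a list of rows."""
    rng = random.Random(seed)
    base = auc([predict(row) for row in X], y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [c] + row[j + 1:] for row, c in zip(X, col)]
            drops.append(base - auc([predict(r) for r in Xp], y))
        importances.append(sum(drops) / n_repeats)
    return importances
```

On synthetic data where only the first feature carries signal, the AUC drop for that feature is large while a noise feature's importance stays at zero; the AUC criterion makes the measure insensitive to class imbalance, which is the paper's point.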

  6. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...

  7. Some topics on permutable subgroups in infinite groups

    OpenAIRE

    Ialenti, Roberto

    2017-01-01

    The aim of this thesis is to study permutability in different aspects of the theory of infinite groups. In particular, it will be studied the structure of groups in which all the members of a relevant system of subgroups satisfy a suitable generalized condition of permutability.

  8. Permuting sparse rectangular matrices into block-diagonal form

    Energy Technology Data Exchange (ETDEWEB)

    Aykanat, Cevdet; Pinar, Ali; Catalyurek, Umit V.

    2002-12-09

This work investigates the problem of permuting a sparse rectangular matrix into block diagonal form. Block diagonal form of a matrix grants an inherent parallelism for the solution of the deriving problem, as recently investigated in the context of mathematical programming, LU factorization and QR factorization. We propose graph and hypergraph models to represent the nonzero structure of a matrix, which reduce the permutation problem to those of graph partitioning by vertex separator and hypergraph partitioning, respectively. Besides proposing the models to represent sparse matrices and investigating related combinatorial problems, we provide a detailed survey of relevant literature to bridge the gap between different societies, investigate existing techniques for partitioning and propose new ones, and finally present a thorough empirical study of these techniques. Our experiments on a wide range of matrices, using state-of-the-art graph and hypergraph partitioning tools MeTiS and PaToH, revealed that the proposed methods yield very effective solutions both in terms of solution quality and run time.
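When the matrix is exactly block-diagonalizable (no coupling rows or columns), the permutation can be recovered without partitioning heuristics: connected components of the bipartite row-column graph give it directly. A hedged sketch of that special case — the paper's separator and partitioning machinery handles the general, approximately-decomposable one:

```python
from collections import defaultdict, deque

def block_diagonal_permutation(rows, cols, nonzeros):
    """Row/column orderings grouping a sparse matrix into its connected
    blocks. `nonzeros` is an iterable of (i, j) index pairs. Each connected
    component of the bipartite row-column graph becomes one diagonal block."""
    adj = defaultdict(list)
    for i, j in nonzeros:
        adj[('r', i)].append(('c', j))
        adj[('c', j)].append(('r', i))
    seen = set()
    row_order, col_order = [], []
    starts = [('r', i) for i in range(rows)] + [('c', j) for j in range(cols)]
    for start in starts:
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:                      # BFS over one component
            kind, k = queue.popleft()
            (row_order if kind == 'r' else col_order).append(k)
            for nb in adj[(kind, k)]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
    return row_order, col_order
```

Applied to a scrambled two-block matrix, the BFS groups rows {0, 2} with columns {1, 2} and rows {1, 3} with columns {0, 3}, i.e. the permutation that restores block diagonal form.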

  9. Circular Permutation of a Chaperonin Protein: Biophysics and Application to Nanotechnology

    Science.gov (United States)

    Paavola, Chad; Chan, Suzanne; Li, Yi-Fen; McMillan, R. Andrew; Trent, Jonathan

    2004-01-01

We have designed five circular permutants of a chaperonin protein derived from the hyperthermophilic organism Sulfolobus shibatae. These permuted proteins were expressed in E. coli and are well-folded. Furthermore, all the permutants assemble into 18-mer double rings of the same form as the wild-type protein. We characterized the thermodynamics of folding for each permutant by both guanidine denaturation and differential scanning calorimetry. We also examined the assembly of chaperonin rings into higher-order structures that may be used as nanoscale templates. The results show that circular permutation can be used to tune the thermodynamic properties of a protein template as well as to facilitate the fusion of peptides, binding proteins, or enzymes onto nanostructured templates.

  10. The magic of universal quantum computing with permutations

    OpenAIRE

    Planat, Michel; Rukhsan-Ul-Haq

    2017-01-01

    The role of permutation gates for universal quantum computing is investigated. The \\lq magic' of computation is clarified in the permutation gates, their eigenstates, the Wootters discrete Wigner function and state-dependent contextuality (following many contributions on this subject). A first classification of main types of resulting magic states in low dimensions $d \\le 9$ is performed.

  11. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    Science.gov (United States)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

The optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997, when noise is not taken into account. Particularly, these quantum gates were chosen to perform the permutation algorithm in hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the problem of the permutation parity faster than a classical algorithm, without the necessity of entanglement between particles. The only requirement for achieving the speedup is the use of a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by considering an average over the optimal field centered at different values of the reference detuning, which follows a Gaussian distribution. When the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme can also be used for the practical realization of different quantum algorithms in DQDs.

  12. Permutation orbifolds

    International Nuclear Information System (INIS)

    Bantay, P.

    2002-01-01

A general theory of permutation orbifolds is developed for arbitrary twist groups. Explicit expressions for the number of primaries, the partition function, the genus one characters, the matrix elements of modular transformations and for fusion rule coefficients are presented, together with the relevant mathematical concepts, such as Λ-matrices and twisted dimensions. The arithmetic restrictions implied by the theory for the allowed modular representations in CFT are discussed. The simplest nonabelian example with twist group S_3 is described to illustrate the general theory.

  13. A permutations representation that knows what " Eulerian" means

    Directory of Open Access Journals (Sweden)

    Roberto Mantaci

    2001-12-01

Eulerian numbers (and "alternate Eulerian numbers") are often interpreted as distributions of statistics defined over the symmetric group. The main purpose of this paper is to define a way to represent permutations that provides some other combinatorial interpretations of these numbers. This representation uses a one-to-one correspondence between permutations and the so-called subexceedant functions.
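Concretely, the Eulerian number A(n, k) counts permutations of {1, …, n} with exactly k descents. A brute-force sketch of that statistic, checkable against the known triangle:

```python
from itertools import permutations

def eulerian_row(n):
    """Row n of the Eulerian triangle: row[k] = number of permutations of
    1..n with exactly k descents (positions i where p[i] > p[i+1])."""
    row = [0] * n
    for p in permutations(range(1, n + 1)):
        d = sum(p[i] > p[i + 1] for i in range(n - 1))
        row[d] += 1
    return row
```

For n = 3 this yields the row 1, 4, 1, and the rows always sum to n!.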

  14. The Magic of Universal Quantum Computing with Permutations

    Directory of Open Access Journals (Sweden)

    Michel Planat

    2017-01-01

The role of permutation gates for universal quantum computing is investigated. The “magic” of computation is clarified in the permutation gates, their eigenstates, the Wootters discrete Wigner function, and state-dependent contextuality (following many contributions on this subject). A first classification of a few types of resulting magic states in low dimensions d ≤ 9 is performed.

  15. Young module multiplicities and classifying the indecomposable Young permutation modules

    OpenAIRE

    Gill, Christopher C.

    2012-01-01

    We study the multiplicities of Young modules as direct summands of permutation modules on cosets of Young subgroups. Such multiplicities have become known as the p-Kostka numbers. We classify the indecomposable Young permutation modules, and, applying the Brauer construction for p-permutation modules, we give some new reductions for p-Kostka numbers. In particular we prove that p-Kostka numbers are preserved under multiplying partitions by p, and strengthen a known reduction given by Henke, c...

  16. Permutational distribution of the log-rank statistic under random censorship with applications to carcinogenicity assays.

    Science.gov (United States)

    Heimann, G; Neuhaus, G

    1998-03-01

    In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.

  17. Permutation groups

    CERN Document Server

    Passman, Donald S

    2012-01-01

    This volume by a prominent authority on permutation groups consists of lecture notes that provide a self-contained account of distinct classification theorems. A ready source of frequently quoted but usually inaccessible theorems, it is ideally suited for professional group theorists as well as students with a solid background in modern algebra.The three-part treatment begins with an introductory chapter and advances to an economical development of the tools of basic group theory, including group extensions, transfer theorems, and group representations and characters. The final chapter feature

  18. Permutation importance: a corrected feature importance measure.

    Science.gov (United States)

    Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas

    2010-05-15

In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. Contact: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
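The core loop of the PIMP heuristic can be sketched without R or random forests: permute the outcome vector many times, recompute each feature's importance to obtain its null distribution, and report the p-value of the observed importance. Here a toy correlation-based importance stands in for the RF measure; only the permutation-of-the-outcome scheme mirrors the paper.

```python
import random

def correlation_importance(X, y):
    """Toy per-feature importance: absolute Pearson correlation with y
    (a stand-in for an RF importance measure)."""
    n = len(y)
    my = sum(y) / n
    imps = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        vx = sum((a - mx) ** 2 for a in col) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        imps.append(abs(cov / (vx * vy)) if vx > 0 and vy > 0 else 0.0)
    return imps

def pimp_p_values(X, y, n_perm=200, seed=0):
    """PIMP-style correction: p-value of each observed importance against a
    null distribution obtained by permuting the outcome vector."""
    rng = random.Random(seed)
    observed = correlation_importance(X, y)
    exceed = [0] * len(observed)
    yp = list(y)
    for _ in range(n_perm):
        rng.shuffle(yp)
        null = correlation_importance(X, yp)
        for j, v in enumerate(null):
            if v >= observed[j]:
                exceed[j] += 1
    # add-one correction keeps p-values strictly positive
    return [(c + 1) / (n_perm + 1) for c in exceed]
```

On synthetic data, an informative feature gets a small p-value while a pure-noise feature does not, which is exactly the separation property (i)–(ii) claimed above.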

  19. Permutation based decision making under fuzzy environment using Tabu search

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-04-01

One of the techniques used for Multiple Criteria Decision Making (MCDM) is the permutation method. In its classical form, it is assumed that the weights and decision matrix components are crisp. However, when group decision making is under consideration and the decision makers cannot agree on crisp values for the weights and decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of the permutation method is its large computational time, so a Tabu Search (TS) based algorithm is proposed to reduce it. A numerical example illustrates the proposed approach clearly. Then, some benchmark instances extracted from the literature are solved by the proposed TS. The analyses of the results show the proper performance of the proposed method.
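A crisp, brute-force sketch of the classical permutation method (fuzzy weights and the Tabu Search of the article address precisely the factorial enumeration this sketch performs): each candidate ranking of the alternatives is scored by net concordance, and the best-scoring ranking wins.

```python
from itertools import permutations

def best_ranking(decision_matrix, weights):
    """Classical (crisp) permutation method for MCDM. decision_matrix[a][c]
    is the value of alternative a on benefit criterion c; weights[c] is the
    criterion weight. Returns the ranking with the highest net concordance."""
    m = len(decision_matrix)

    def score(order):
        s = 0.0
        # for each pair ranked (better, worse), add the weights of criteria
        # that agree with the judgement and subtract those that contradict it
        for i in range(m):
            for j in range(i + 1, m):
                a, b = order[i], order[j]
                for c, w in enumerate(weights):
                    if decision_matrix[a][c] > decision_matrix[b][c]:
                        s += w
                    elif decision_matrix[a][c] < decision_matrix[b][c]:
                        s -= w
        return s

    return max(permutations(range(m)), key=score)
```

With three alternatives and a dominant first one, the method recovers the dominance order; for realistic problem sizes the m! enumeration is exactly the cost the TS heuristic avoids.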

  20. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  1. A Fast Algorithm for Generating Permutation Distribution of Ranks in ...

    African Journals Online (AJOL)

    ... function of the distribution of the ranks. This further gives insight into the permutation distribution of a rank statistics. The algorithm is implemented with the aid of the computer algebra system Mathematica. Key words: Combinatorics, generating function, permutation distribution, rank statistics, partitions, computer algebra.

  2. Permutation groups and transformation semigroups : results and problems

    OpenAIRE

    Araujo, Joao; Cameron, Peter Jephson

    2015-01-01

    J.M. Howie, the influential St Andrews semigroupist, claimed that we value an area of pure mathematics to the extent that (a) it gives rise to arguments that are deep and elegant, and (b) it has interesting interconnections with other parts of pure mathematics. This paper surveys some recent results on the transformation semigroup generated by a permutation group $G$ and a single non-permutation $a$. Our particular concern is the influence that properties of $G$ (related to homogeneity, trans...

  3. Use of spatial symmetry in atomic--integral calculations: an efficient permutational approach

    International Nuclear Information System (INIS)

    Rouzo, H.L.

    1979-01-01

    The minimal number of independent nonzero atomic integrals that occur over arbitrarily oriented basis orbitals of the form R(r).Y/sub lm/(Ω) is theoretically derived. The corresponding method can be easily applied to any point group, including the molecular continuous groups C/sub infinity v/ and D/sub infinity h/. On the basis of this (theoretical) lower bound, the efficiency of the permutational approach in generating sets of independent integrals is discussed. It is proved that lobe orbitals are always more efficient than the familiar Cartesian Gaussians, in the sense that GLOS provide the shortest integral lists. Moreover, it appears that the new axial GLOS often lead to a number of integrals which is the theoretical lower bound previously defined. With AGLOS, the numbers of two-electron integrals to be computed, stored, and processed are divided by factors of 2.9 (NH 3 ), 4.2 (C 5 H 5 ), and 3.6 (C 6 H 6 ) with reference to the corresponding CGTOS calculations. Remembering that in the permutational approach atomic integrals are directly computed without any four-index transformation, it appears that its utilization in connection with AGLOS provides one of the most powerful tools for treating symmetrical species. 34 references

  4. A hybrid genetic algorithm for the distributed permutation flowshop scheduling problem

    Directory of Open Access Journals (Sweden)

    Jian Gao

    2011-08-01

    Full Text Available The Distributed Permutation Flowshop Scheduling Problem (DPFSP) is a newly proposed scheduling problem that generalizes the classical permutation flow shop scheduling problem. The DPFSP is NP-hard in general, and research on algorithms for solving it is still at an early stage. In this paper, we propose a GA-based algorithm, denoted GA_LS, for solving this problem with the objective of minimizing the maximum completion time. In the proposed GA_LS, crossover and mutation operators are designed to suit the representation of DPFSP solutions, in which a set of partial job sequences is employed. Furthermore, GA_LS utilizes an efficient local search method to explore neighboring solutions; the local search uses three proposed rules that move jobs within a factory or between two factories. Intensive experiments on the benchmark instances, extended from the Taillard instances, are carried out. The results indicate that the proposed hybrid genetic algorithm obtains better solutions than all existing algorithms for the DPFSP, achieving better relative percentage deviations, with differences that are statistically significant. Best-known solutions for most instances are also updated by our algorithm. Moreover, we show the efficiency of GA_LS by comparing it with similar genetic algorithms using the existing local search methods.
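Within a single factory, the objective above reduces to the classical permutation flow shop makespan, which can be computed with the standard completion-time recurrence. The sketch below is illustrative only and is not the GA_LS code:

```python
def makespan(perm, proc):
    """Completion time of the last job on the last machine for a
    permutation flow shop. `proc[j][m]` is the processing time of
    job j on machine m; jobs visit machines 0..M-1 in order."""
    n_machines = len(proc[0])
    finish = [0] * n_machines            # completion time per machine, rolling
    for j in perm:
        for m in range(n_machines):
            # a job starts on machine m when both the machine is free
            # and the job has finished on the previous machine
            start = max(finish[m], finish[m - 1] if m else 0)
            finish[m] = start + proc[j][m]
    return finish[-1]
```

In the distributed variant, each factory runs this recurrence on its own job sequence and the objective is the maximum makespan over all factories.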

  5. The minimal extension of the Standard Model with S3 symmetry

    International Nuclear Information System (INIS)

    Lee, C.E.; Lin, C.; Yang, Y.W.

    1991-01-01

    In this paper the two-Higgs-doublet extension of the standard electroweak model with S3 symmetry is presented. Flavour-changing neutral Higgs interactions are automatically absent. A permutation symmetry breaking scheme is discussed. The correction to Bjorken's approximation and the CP-violation factor J are given within this scheme

  6. A Flexible Computational Framework Using R and Map-Reduce for Permutation Tests of Massive Genetic Analysis of Complex Traits.

    Science.gov (United States)

    Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker

    2017-01-01

    In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense; 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple QTL scans with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, performing 2×10^5 permutations for a 2D QTL problem in 15 hours, using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.

  7. Permutation-based inference for the AUC: A unified approach for continuous and discontinuous data.

    Science.gov (United States)

    Pauly, Markus; Asendorf, Thomas; Konietschke, Frank

    2016-11-01

    We investigate rank-based studentized permutation methods for the nonparametric Behrens-Fisher problem, that is, inference methods for the area under the ROC curve. We prove that the studentized permutation distribution of the Brunner-Munzel rank statistic is asymptotically standard normal, even under the alternative, thus incidentally providing the hitherto missing theoretical foundation for the Neubert and Brunner studentized permutation test. In particular, we not only show its consistency, but also that confidence intervals for the underlying treatment effects can be computed by inverting this permutation test. In addition, we derive permutation-based range-preserving confidence intervals. Extensive simulation studies show that the permutation-based confidence intervals appear to maintain the preassigned coverage probability quite accurately (even for rather small sample sizes). For a convenient application of the proposed methods, a freely available software package for the statistical software R has been developed. A real data example illustrates the application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
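The studentization idea can be illustrated on a plain two-sample location problem, using a Welch-type statistic as a simplified stand-in for the Brunner-Munzel rank statistic: re-randomize the group labels and compare the studentized statistic of each rearrangement with the observed one.

```python
import math
import random
import statistics

def welch_t(x, y):
    """Studentized difference of means (Welch-type statistic)."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(vx / nx + vy / ny)

def studentized_perm_test(x, y, n_perm=2000, seed=0):
    """Two-sided permutation P-value: shuffle the pooled sample and
    count how often the studentized statistic of a rearrangement
    reaches the observed one."""
    rng = random.Random(seed)
    pooled = x + y
    t_obs = abs(welch_t(x, y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(welch_t(pooled[:len(x)], pooled[len(x):])) >= t_obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Studentizing the statistic is what makes the permutation distribution asymptotically valid even when the two groups have unequal variances.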

  8. Image encryption based on permutation-substitution using chaotic map and Latin Square Image Cipher

    Science.gov (United States)

    Panduranga, H. T.; Naveen Kumar, S. K.; Kiran

    2014-06-01

    In this paper we present an image encryption scheme based on permutation-substitution using a chaotic map and the Latin square image cipher. The proposed method consists of permutation and substitution processes. In the permutation process, the plain image is permuted according to a chaotic sequence generated by a chaotic map. In the substitution process, a Latin Square Image Cipher (LSIC) is generated from a 256-bit secret key and used as a key image, and an XOR operation is performed between the permuted image and the key image. The proposed method can be applied to any plain image, including images of unequal width and height, and resists statistical and differential attacks. Experiments were carried out for different images of different sizes. The proposed method possesses a large key space to resist brute-force attacks.
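A minimal sketch of the permutation-substitution pattern, with a logistic map standing in for the paper's chaotic map and a simple key stream standing in for the Latin square key image (both are assumptions for illustration, not the authors' scheme):

```python
def chaotic_sequence(n, x0=0.3567, r=3.99):
    """Logistic-map orbit used as a pseudo-random driver (an assumed
    stand-in for the paper's chaotic map)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def encrypt(pixels, key_stream, x0=0.3567):
    """Permutation stage: reorder pixels by the ranking of a chaotic orbit.
    Substitution stage: XOR with a key stream (stand-in for the LSIC)."""
    chaos = chaotic_sequence(len(pixels), x0)
    order = sorted(range(len(pixels)), key=chaos.__getitem__)
    permuted = [pixels[i] for i in order]
    return [p ^ k for p, k in zip(permuted, key_stream)]

def decrypt(cipher, key_stream, x0=0.3567):
    """Invert the substitution, then scatter pixels back to their places."""
    chaos = chaotic_sequence(len(cipher), x0)
    order = sorted(range(len(cipher)), key=chaos.__getitem__)
    permuted = [c ^ k for c, k in zip(cipher, key_stream)]
    plain = [0] * len(cipher)
    for dst, src in enumerate(order):
        plain[src] = permuted[dst]
    return plain
```

Because the permutation is derived by ranking a keyed chaotic orbit, a receiver who knows the initial condition can regenerate the same ordering and invert both stages exactly.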

  9. Determining the parity of a permutation using an experimental NMR qutrit

    International Nuclear Information System (INIS)

    Dogra, Shruti; Arvind,; Dorai, Kavita

    2014-01-01

    We present the NMR implementation of a recently proposed quantum algorithm to find the parity of a permutation. In the usual qubit model of quantum computation, it is widely believed that computational speedup requires the presence of entanglement and thus cannot be achieved by a single qubit. On the other hand, a qutrit is qualitatively more quantum than a qubit because of the existence of quantum contextuality and a single qutrit can be used for computing. We use the deuterium nucleus oriented in a liquid crystal as the experimental qutrit. This is the first experimental exploitation of a single qutrit to carry out a computational task. - Highlights: • NMR implementation of a quantum algorithm to determine the parity of a permutation. • Algorithm implemented on a single qutrit. • Computational speedup achieved without quantum entanglement. • Single qutrit shows quantum contextuality

  10. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Directory of Open Access Journals (Sweden)

    Hossein Karimi

    2011-04-01

    Full Text Available The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The proposed method is examined using some numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in reasonable amounts of time, while TS performs better in solving the APM.

  11. A Comparison of Multiscale Permutation Entropy Measures in On-Line Depth of Anesthesia Monitoring.

    Science.gov (United States)

    Su, Cui; Liang, Zhenhu; Li, Xiaoli; Li, Duan; Li, Yongwang; Ursino, Mauro

    2016-01-01

    Multiscale permutation entropy (MSPE) has become an interesting tool for exploring neurophysiological mechanisms in recent years. In this study, six MSPE measures were proposed for on-line depth of anesthesia (DoA) monitoring to quantify the anesthetic effect on real-time EEG recordings. The performance of these measures in describing the transient characteristics of simulated neural populations and clinical anesthesia EEG was evaluated and compared. Six MSPE algorithms, derived from Shannon permutation entropy (SPE), Renyi permutation entropy (RPE) and Tsallis permutation entropy (TPE) combined with the decomposition procedures of the coarse-graining (CG) method and moving average (MA) analysis, were studied. A thalamo-cortical neural mass model (TCNMM) was used to generate noise-free EEG under anesthesia to quantitatively assess the robustness of each MSPE measure against noise. Then, the clinical anesthesia EEG recordings from 20 patients were analyzed with these measures. To validate their effectiveness, the six measures were compared in terms of tracking the dynamical changes in EEG data and their performance in state discrimination. The Pearson correlation coefficient (R) was used to assess the relationship among MSPE measures. CG-based MSPEs failed in on-line DoA monitoring at multiscale analysis. In on-line EEG analysis, the MA-based MSPE measures at 5 decomposed scales could track the transient changes of EEG recordings and statistically distinguish the awake state, unconsciousness and recovery of consciousness (RoC) state significantly. Compared to single-scale SPE and RPE, MSPEs had better anti-noise ability, and MA-RPE at scale 5 performed best in this aspect. MA-TPE outperformed other measures with a faster tracking speed of the loss of unconsciousness. MA-based multiscale permutation entropies have the potential for on-line anesthesia EEG analysis with their simple computation and sensitivity to drug effect changes. CG-based multiscale permutation
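The two building blocks, Bandt-Pompe permutation entropy and the moving-average (MA) decomposition, can be sketched as follows (an illustrative sketch, not the authors' implementation):

```python
import math
from collections import Counter

def permutation_entropy(series, order=3, delay=1):
    """Shannon permutation entropy (Bandt-Pompe): histogram the ordinal
    patterns of length `order` and normalize by log(order!), so the
    result lies in [0, 1]."""
    counts = Counter()
    for i in range(len(series) - (order - 1) * delay):
        window = series[i:i + order * delay:delay]
        # ordinal pattern = ranks of the values inside the window
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

def moving_average(series, scale):
    """MA decomposition: sliding-window mean at the given scale; the
    multiscale variants apply the entropy to these smoothed series."""
    return [sum(series[i:i + scale]) / scale
            for i in range(len(series) - scale + 1)]
```

A multiscale measure at scale s is then simply `permutation_entropy(moving_average(x, s))`; a monotone series scores 0 while an irregular one scores close to 1.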

  12. Secure physical layer using dynamic permutations in cognitive OFDMA systems

    DEFF Research Database (Denmark)

    Meucci, F.; Wardana, Satya Ardhy; Prasad, Neeli R.

    2009-01-01

    This paper proposes a novel lightweight mechanism for a secure Physical (PHY) layer in Cognitive Radio Network (CRN) using Orthogonal Frequency Division Multiplexing (OFDM). User's data symbols are mapped over the physical subcarriers with a permutation formula. The PHY layer is secured with a random and dynamic subcarrier permutation which is based on a single pre-shared information and depends on Dynamic Spectrum Access (DSA). The dynamic subcarrier permutation is varying over time, geographical location and environment status, resulting in a very robust protection that ensures confidentiality. The method is shown to be effective also for existing non-cognitive systems. The proposed mechanism is effective against eavesdropping even if the eavesdropper adopts a long-time patterns analysis, thus protecting cryptography techniques of higher layers. The correlation properties

  13. NDPA: A generalized efficient parallel in-place N-Dimensional Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Muhammad Elsayed Ali

    2015-09-01

    Full Text Available N-dimensional transpose/permutation is a very important operation in many large-scale data-intensive and scientific applications. These applications include, but are not limited to, the oil industry (e.g., seismic data processing), nuclear medicine, media production, digital signal processing and business intelligence. This paper proposes an efficient in-place N-dimensional permutation algorithm. The algorithm is based on a novel 3D transpose algorithm that was published recently. The proposed algorithm has been tested on 3D, 4D, 5D, 6D and 7D data sets as a proof of concept. Breaking the dimensions' limitation of the base algorithm is the first contribution. The suggested algorithm exploits the idea of mixing logical and physical permutations together. In the logical permutation, the address map is transposed for each data unit access. In the physical permutation, actual data elements are swapped. Both permutation levels exploit the fast on-chip memory bandwidth by transferring large amounts of data and allowing for fine-grain SIMD (Single Instruction, Multiple Data) operations. Thus, the performance is improved, as evident from the experimental results section. The algorithm is implemented on an NVidia GeForce GTS 250 GPU (Graphics Processing Unit) containing 128 cores. The rapid increase in GPU performance coupled with the recent and continuous improvements in its programmability proved that GPUs are the right choice for computationally demanding tasks; their use is the second contribution, reflecting how well they fit high-performance tasks. The third contribution is improving the proposed algorithm's performance to its peak, as discussed in the results section.
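The "logical permutation" idea, transposing the address map rather than moving data, can be sketched in a few lines. This is a simplified, single-threaded illustration of the concept under row-major layout, not the GPU implementation:

```python
from itertools import product

def strides(shape):
    """Row-major strides for a given shape."""
    s, acc = [], 1
    for dim in reversed(shape):
        s.append(acc)
        acc *= dim
    return list(reversed(s))

def logical_permute(flat, shape, axes):
    """'Logical' permutation: leave the flat buffer in place and read it
    through a permuted address map (index arithmetic only; no element
    is swapped until the final materialization into `out`)."""
    new_shape = [shape[a] for a in axes]
    old_strides = strides(shape)
    perm_strides = [old_strides[a] for a in axes]
    out = []
    for idx in product(*(range(d) for d in new_shape)):
        out.append(flat[sum(i * s for i, s in zip(idx, perm_strides))])
    return out, new_shape
```

The physical permutation stage would then swap elements so the buffer matches the new address map, which is where the in-place and SIMD considerations of the paper come in.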

  14. Permutational symmetries for coincidence rates in multimode multiphotonic interferometry

    Science.gov (United States)

    Khalid, Abdullah; Spivak, Dylan; Sanders, Barry C.; de Guise, Hubert

    2018-06-01

    We obtain coincidence rates for passive optical interferometry by exploiting the permutational symmetries of partially distinguishable input photons, and our approach elucidates qualitative features of multiphoton coincidence landscapes. We treat the interferometer input as a product state of any number of photons in each input mode with photons distinguished by their arrival time. Detectors at the output of the interferometer count photons from each output mode over a long integration time. We generalize and prove the claim of Tillmann et al. [Phys. Rev. X 5, 041015 (2015), 10.1103/PhysRevX.5.041015] that coincidence rates can be elegantly expressed in terms of immanants. Immanants are functions of matrices that exhibit permutational symmetries and the immanants appearing in our coincidence-rate expressions share permutational symmetries with the input state. Our results are obtained by employing representation theory of the symmetric group to analyze systems of an arbitrary number of photons in arbitrarily sized interferometers.

  15. Permuted tRNA genes of Cyanidioschyzon merolae, the origin of the tRNA molecule and the root of the Eukarya domain.

    Science.gov (United States)

    Di Giulio, Massimo

    2008-08-07

    An evolutionary analysis is conducted on the permuted tRNA genes of Cyanidioschyzon merolae, in which the 5' half of the tRNA molecule is codified at the 3' end of the gene and its 3' half is codified at the 5' end. This analysis has shown that permuted genes cannot be considered as derived traits but seem to possess characteristics that suggest they are ancestral traits, i.e. they originated when tRNA molecule genes originated for the first time. In particular, if the hypothesis that permuted genes are a derived trait were true, then we should not have been able to observe that the most frequent class of permuted genes is that of the anticodon loop type, for the simple reason that this class would derive by random permutation from a class of non-permuted tRNA genes, which instead is the rarest. This would not explain the high frequency with which permuted tRNA genes with perfectly separate 5' and 3' halves were observed. Clearly the mechanism that produced this class of permuted genes would envisage the existence, in an advanced stage of evolution, of minigenes codifying for the 5' and 3' halves of tRNAs which were assembled in a permuted way at the origin of the tRNA molecule, thus producing a high frequency of permuted genes of the class here referred. Therefore, this evidence supports the hypothesis that the genes of the tRNA molecule were assembled by minigenes codifying for hairpin-like RNA molecules, as suggested by one model for the origin of tRNA [Di Giulio, M., 1992. On the origin of the transfer RNA molecule. J. Theor. Biol. 159, 199-214; Di Giulio, M., 1999. The non-monophyletic origin of tRNA molecule. J. Theor. Biol. 197, 403-414]. Moreover, the late assembly of the permuted genes of C. merolae, as well as their ancestrality, strengthens the hypothesis of the polyphyletic origins of these genes. Finally, on the basis of the uniqueness and the ancestrality of these permuted genes, I suggest that the root of the Eukarya domain is in the super

  16. A studentized permutation test for three-arm trials in the 'gold standard' design.

    Science.gov (United States)

    Mütze, Tobias; Konietschke, Frank; Munk, Axel; Friede, Tim

    2017-03-15

    The 'gold standard' design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This trial design is recommended when it is ethically justifiable, as it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied extensively over the past years. However, these methods often tend to be liberal or conservative when distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the 'gold standard' design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations. Emphasis is put on whether the studentized permutation test meets the target significance level. For comparison purposes, commonly used Wald-type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that, for count data, the presented studentized permutation test for assessing non-inferiority in three-arm trials in the 'gold standard' design outperforms its competitors, for instance the test based on a quasi-Poisson model. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN). Copyright © 2016 John Wiley & Sons, Ltd.

  17. Computing the Jones index of quadratic permutation endomorphisms of O2

    DEFF Research Database (Denmark)

    Szymanski, Wojciech; Conti, Roberto

    2009-01-01

    We compute the index of the type III_{1/2} factors arising from endomorphisms of the Cuntz algebra O2 associated to the rank-two permutation matrices. Udgivelsesdato: January

  18. Sorting signed permutations by inversions in O(nlogn) time.

    Science.gov (United States)

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating in an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question by providing two new sorting algorithms: a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
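To make the operation concrete, here is a brute-force breadth-first search over signed reversals for tiny permutations. It is exponential and for illustration only; the algorithms discussed in the article compute this distance in near-linear time:

```python
from collections import deque

def signed_reversal(perm, i, j):
    """Reverse the segment perm[i..j] and flip the signs of its entries."""
    seg = tuple(-x for x in reversed(perm[i:j + 1]))
    return perm[:i] + seg + perm[j + 1:]

def inversion_distance_bfs(perm):
    """Minimum number of signed reversals turning `perm` into the
    identity (+1, +2, ..., +n), found by BFS over all states."""
    n = len(perm)
    target = tuple(range(1, n + 1))
    start = tuple(perm)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == target:
            return dist[cur]
        for i in range(n):
            for j in range(i, n):
                nxt = signed_reversal(cur, i, j)
                if nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)
    return None
```

Even the two-element case shows why signs matter: sorting (+2, +1) needs three signed reversals, not one, because a reversal always negates the elements it moves.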

  19. The coupling analysis between stock market indices based on permutation measures

    Science.gov (United States)

    Shi, Wenbin; Shang, Pengjian; Xia, Jianan; Yeh, Chien-Hung

    2016-04-01

    Many information-theoretic methods have been proposed for analyzing the coupling dependence between time series. It is significant to quantify the correlation between financial sequences, since the financial market is a complex evolving dynamic system. Recently, we developed a new permutation-based entropy, called cross-permutation entropy (CPE), to detect the coupling structures between two synchronous time series. In this paper, we extend the CPE method to weighted cross-permutation entropy (WCPE), to address some of CPE's limitations, mainly its inability to differentiate between distinct patterns of a certain motif and the sensitivity of patterns close to the noise floor. It shows more stable and reliable results than CPE does when applied to spiky data and AR(1) processes. Besides, we adapt the CPE method to infer the complexity of short-length time series by freely changing the time delay, and test it with Gaussian random series and random walks. The modified method shows advantages in reducing deviations of entropy estimation compared with the conventional one. Finally, the weighted cross-permutation entropy of eight important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.

  20. On permutation polynomials over finite fields: differences and iterations

    DEFF Research Database (Denmark)

    Anbar Meidl, Nurdagül; Odzak, Almasa; Patel, Vandita

    2017-01-01

    The Carlitz rank of a permutation polynomial f over a finite field Fq is a simple concept that was introduced in the last decade. Classifying permutations over Fq with respect to their Carlitz ranks has some advantages, for instance f with a given Carlitz rank can be approximated by a rational li...

  1. Ordered groups and infinite permutation groups

    CERN Document Server

    1996-01-01

    The subjects of ordered groups and of infinite permutation groups have long enjoyed a symbiotic relationship. Although the two subjects come from very different sources, they have in certain ways come together, and each has derived considerable benefit from the other. My own personal contact with this interaction began in 1961. I had done Ph.D. work on sequence convergence in totally ordered groups under the direction of Paul Conrad. In the process, I had encountered "pseudo-convergent" sequences in an ordered group G, which are like Cauchy sequences, except that the differences between terms of large index approach not 0 but a convex subgroup C of G. If C is normal, then such sequences are conveniently described as Cauchy sequences in the quotient ordered group G/C. If C is not normal, of course G/C has no group structure, though it is still a totally ordered set. The best that can be said is that the elements of G permute G/C in an order-preserving fashion. In independent investigations around that t...

  2. Permutation Tests for Stochastic Ordering and ANOVA

    CERN Document Server

    Basso, Dario; Salmaso, Luigi; Solari, Aldo

    2009-01-01

    Permutation testing for multivariate stochastic ordering and ANOVA designs is a fundamental issue in many scientific fields such as medicine, biology, pharmaceutical studies, engineering, economics, psychology, and social sciences. This book presents advanced methods and related R codes to perform complex multivariate analyses

  3. Necklaces, Periodic Points and Permutation Representations

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 6; Issue 11. Necklaces, Periodic Points and Permutation Representations - Fermat's Little Theorem. Somnath Basu Anindita Bose Sumit Kumar Sinha Pankaj Vishe. General Article Volume 6 Issue 11 November 2001 pp 18-26 ...

  4. Error-free holographic frames encryption with CA pixel-permutation encoding algorithm

    Science.gov (United States)

    Li, Xiaowei; Xiao, Dan; Wang, Qiong-Hua

    2018-01-01

    The security of video data is essential in network transmission; hence, cryptography is a technique for making video data secure and unreadable to unauthorized users. In this paper, we propose a holographic frames encryption technique based on the cellular automata (CA) pixel-permutation encoding algorithm. The concise pixel-permutation algorithm is used to address the drawbacks of the traditional CA encoding methods. The effectiveness of the proposed video encryption method is demonstrated by simulation examples.

  5. Permutation entropy of fractional Brownian motion and fractional Gaussian noise

    International Nuclear Information System (INIS)

    Zunino, L.; Perez, D.G.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2008-01-01

    We have worked out theoretical curves for the permutation entropy of fractional Brownian motion and fractional Gaussian noise by using the Bandt and Shiha [C. Bandt, F. Shiha, J. Time Ser. Anal. 28 (2007) 646] theoretical predictions for their corresponding relative frequencies. Comparisons with numerical simulations show excellent agreement. Furthermore, the entropy gap in the transition between these processes, observed previously via numerical results, has been theoretically validated here. Also, we have analyzed the behaviour of the permutation entropy of fractional Gaussian noise for different time delays

  6. Permutation symmetry and the origin of fermion mass hierarchy

    International Nuclear Information System (INIS)

    Babu, K.S.; Mohapatra, R.N.

    1990-01-01

    A realization of the "flavor-democracy" approach to quark and lepton masses is provided in the context of the standard model with a horizontal S3 permutation symmetry. In this model, the t and b quarks pick up mass at the tree level; c-, s-quark and τ-lepton masses arise at the one-loop level; u, d, and μ masses at the two-loop level; and the electron mass at the three-loop level, thus reproducing the observed hierarchical structure without fine-tuning of the Yukawa couplings. The pattern of quark mixing angles also emerges naturally, with V_us, V_cb ∼ O(ε), V_ub ∼ O(ε^2), where ε is a loop expansion parameter

  7. On Permuting Cut with Contraction

    OpenAIRE

    Borisavljevic, Mirjana; Dosen, Kosta; Petric, Zoran

    1999-01-01

    The paper presents a cut-elimination procedure for intuitionistic propositional logic in which cut is eliminated directly, without introducing the multiple-cut rule mix, and in which pushing cut above contraction is one of the reduction steps. The presentation of this procedure is preceded by an analysis of Gentzen's mix-elimination procedure, made in the perspective of permuting cut with contraction. It is also shown that in the absence of implication, pushing cut above contraction doesn't p...

  8. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the Lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z^0 partial widths, are discussed in detail. Finally, the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies, and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  9. A Symmetric Chaos-Based Image Cipher with an Improved Bit-Level Permutation Strategy

    Directory of Open Access Journals (Sweden)

    Chong Fu

    2014-02-01

    Full Text Available Very recently, several chaos-based image ciphers using a bit-level permutation have been suggested and shown promising results. Due to the diffusion effect introduced in the permutation stage, the workload of the time-consuming diffusion stage is reduced, and hence the performance of the cryptosystem is improved. In this paper, a symmetric chaos-based image cipher with a 3D cat map-based spatial bit-level permutation strategy is proposed. Compared with those recently proposed bit-level permutation methods, the diffusion effect of the new method is superior as the bits are shuffled among different bit-planes rather than within the same bit-plane. Moreover, the diffusion key stream extracted from hyperchaotic system is related to both the secret key and the plain image, which enhances the security against known/chosen plaintext attack. Extensive security analysis has been performed on the proposed scheme, including the most important ones like key space analysis, key sensitivity analysis, plaintext sensitivity analysis and various statistical analyses, which has demonstrated the satisfactory security of the proposed scheme

  10. A novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism

    Science.gov (United States)

    Ye, Ruisong

    2011-10-01

    This paper proposes a novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism, in which permuting the positions of image pixels is combined with changing the gray values of image pixels to confuse the relationship between the cipher-image and the plain-image. In the permutation process, a generalized Arnold map is utilized to generate one chaotic orbit used to obtain two index order sequences for the permutation of image pixel positions; in the diffusion process, a generalized Arnold map and a generalized Bernoulli shift map are employed to yield two pseudo-random gray value sequences for a two-way diffusion of gray values. The yielded gray value sequences are not only sensitive to the control parameters and initial conditions of the considered chaotic maps, but also strongly depend on the plain-image processed; therefore the proposed scheme can resist statistical attack, differential attack, and known-plaintext as well as chosen-plaintext attacks. Experimental results are presented with detailed analysis to demonstrate that the proposed image encryption scheme possesses a key space large enough to resist brute-force attack as well.
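The permutation stage described in this abstract can be sketched in a few lines. The following is a minimal illustration under assumed conventions (the chaotic orbit of a generalized Arnold map is sorted to obtain the two index order sequences, here applied to rows and columns), not the authors' cipher; parameter values are illustrative:

```python
import numpy as np

def arnold_orbit(x0, y0, a, b, n):
    # Iterate the generalized Arnold map on the unit square:
    #   x_{k+1} = (x_k + a*y_k) mod 1,  y_{k+1} = (b*x_k + (a*b + 1)*y_k) mod 1
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for k in range(n):
        x, y = (x + a * y) % 1.0, (b * x + (a * b + 1) * y) % 1.0
        xs[k], ys[k] = x, y
    return xs, ys

def permute_image(img, x0=0.3, y0=0.7, a=1.0, b=1.0):
    # Sort the chaotic orbit to obtain two index order sequences,
    # then shuffle rows and columns of the image with them.
    h, w = img.shape
    xs, ys = arnold_orbit(x0, y0, a, b, max(h, w))
    row_order = np.argsort(xs[:h])
    col_order = np.argsort(ys[:w])
    return img[row_order][:, col_order], (row_order, col_order)

def unpermute_image(scrambled, orders):
    # Invert the permutation: scrambled[i, j] == img[row_order[i], col_order[j]]
    row_order, col_order = orders
    out = np.empty_like(scrambled)
    out[np.ix_(row_order, col_order)] = scrambled
    return out
```

Because the receiver can regenerate the same orbit from the secret parameters, the permutation is exactly invertible, as the round trip shows.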

  11. Magma Proof of Strict Inequalities for Minimal Degrees of Finite Groups

    OpenAIRE

    Murray, Scott H.; Saunders, Neil

    2009-01-01

    The minimal faithful permutation degree of a finite group $G$, denoted by $\\mu(G)$, is the least non-negative integer $n$ such that $G$ embeds inside the symmetric group $\\Sym(n)$. In this paper, we outline a Magma proof that 10 is the smallest degree for which there are groups $G$ and $H$ such that $\\mu(G \\times H) < \\mu(G) + \\mu(H)$.

  12. EXPLICIT SYMPLECTIC-LIKE INTEGRATORS WITH MIDPOINT PERMUTATIONS FOR SPINNING COMPACT BINARIES

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Junjie; Wu, Xin; Huang, Guoqing [Department of Physics and Institute of Astronomy, Nanchang University, Nanchang 330031 (China); Liu, Fuyao, E-mail: xwu@ncu.edu.cn [School of Fundamental Studies, Shanghai University of Engineering Science, Shanghai 201620 (China)

    2017-01-01

    We refine the recently developed fourth-order extended phase space explicit symplectic-like methods for inseparable Hamiltonians using Yoshida’s triple product combined with a midpoint permuted map. The midpoint between the original variables and their corresponding extended variables at every integration step is readjusted as the initial values of the original variables and their corresponding extended ones at the next integration step. The triple-product construction is apparently superior to the composition of two triple products in computational efficiency. Above all, the new midpoint permutations are more effective in maintaining the equality of the original variables and their corresponding extended ones at each integration step than the existing sequent permutations of momenta and coordinates. As a result, our new construction shares the benefit of implicit symplectic integrators in the conservation of the second post-Newtonian Hamiltonian of spinning compact binaries. Especially for the chaotic case, it can work well, while the existing sequent-permutation algorithm cannot. When dissipative effects from the gravitational radiation reaction are included, the new symplectic-like method has a secular drift in the energy error of the dissipative system for orbits that are regular in the absence of radiation, as an implicit symplectic integrator does. In spite of this, it is superior to the same-order implicit symplectic integrator in accuracy and efficiency. The new method is particularly useful in discussing the long-term evolution of inseparable Hamiltonian problems.

  13. Weighted multiscale Rényi permutation entropy of nonlinear time series

    Science.gov (United States)

    Chen, Shijian; Shang, Pengjian; Wu, Yue

    2018-04-01

    In this paper, based on Rényi permutation entropy (RPE), which has recently been suggested as a relative measure of complexity in nonlinear systems, we propose multiscale Rényi permutation entropy (MRPE) and weighted multiscale Rényi permutation entropy (WMRPE) to quantify the complexity of nonlinear time series over multiple time scales. First, we apply MRPE and WMRPE to synthetic data and compare the modified methods with RPE. Meanwhile, the influence of parameter changes is discussed. Besides, we explain the necessity of considering not only multiple scales but also weighting, by taking the amplitude into account. Then the MRPE and WMRPE methods are applied to the closing prices of financial stock markets from different areas. By observing the curves of WMRPE and analyzing the common statistics, stock markets are divided into 4 groups: (1) DJI, S&P500, and HSI; (2) NASDAQ and FTSE100; (3) DAX40 and CAC40; and (4) ShangZheng and ShenCheng. Results show that the standard deviations of the weighted methods are smaller, showing that WMRPE makes the results more robust. Besides, WMRPE can provide abundant dynamical properties of complex systems and demonstrate the intrinsic mechanism.
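The core RPE computation that the multiscale variants build on can be sketched as follows; `coarse_grain` shows the usual non-overlapping averaging used to obtain coarser scales. This is a minimal sketch under common definitions (ordinal patterns with positional tie-breaking, Rényi entropy normalized by log m!), not the authors' implementation:

```python
import math
from collections import Counter

def ordinal_pattern(window):
    # Rank-based ordinal pattern; ties broken by position, a common convention.
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def renyi_permutation_entropy(series, m=3, alpha=2.0, delay=1):
    """Renyi permutation entropy of embedding dimension m, normalized to [0, 1]."""
    n_windows = len(series) - (m - 1) * delay
    counts = Counter(
        ordinal_pattern(series[i:i + (m - 1) * delay + 1:delay])
        for i in range(n_windows)
    )
    probs = [c / n_windows for c in counts.values()]
    if abs(alpha - 1.0) < 1e-12:              # alpha -> 1 recovers Shannon PE
        h = -sum(p * math.log(p) for p in probs)
    else:
        h = math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)
    return h / math.log(math.factorial(m))

def coarse_grain(series, scale):
    # Non-overlapping window averages, the usual multiscale coarse-graining.
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale for i in range(n)]
```

A multiscale curve is then obtained by evaluating the entropy on `coarse_grain(series, s)` for successive scales s; a perfectly monotone series yields entropy 0, an irregular one a value between 0 and 1.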

  14. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Kaixuan, E-mail: kaixuanxubjtu@yeah.net; Wang, Jun

    2017-02-26

    In this paper, the recently introduced permutation entropy and sample entropy are further developed to the fractional cases: weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in the above two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. The effectiveness analysis of the proposed methods on synthetic data and real-world data reveals that tuning the fractional order allows a higher sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the numerical research on nonlinear complexity behaviors compares the returns series of the Potts financial model with the actual stock markets, and the empirical results confirm the feasibility of the proposed model. - Highlights: • Two new entropy approaches for estimation of nonlinear complexity are proposed for the financial market. • Effectiveness analysis of the proposed methods is presented and their respective features are studied. • Empirical research of the proposed analysis on seven world financial market indices. • Numerical simulation of Potts financial dynamics is performed for nonlinear complexity behaviors.

  15. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    International Nuclear Information System (INIS)

    Xu, Kaixuan; Wang, Jun

    2017-01-01

    In this paper, the recently introduced permutation entropy and sample entropy are further developed to the fractional cases: weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in the above two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. The effectiveness analysis of the proposed methods on synthetic data and real-world data reveals that tuning the fractional order allows a higher sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the numerical research on nonlinear complexity behaviors compares the returns series of the Potts financial model with the actual stock markets, and the empirical results confirm the feasibility of the proposed model. - Highlights: • Two new entropy approaches for estimation of nonlinear complexity are proposed for the financial market. • Effectiveness analysis of the proposed methods is presented and their respective features are studied. • Empirical research of the proposed analysis on seven world financial market indices. • Numerical simulation of Potts financial dynamics is performed for nonlinear complexity behaviors.

  16. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    Science.gov (United States)

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
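The MRPP idea can be sketched for the univariate case: the statistic is a group-size-weighted average of within-group mean pairwise distances, and the P-value comes from shuffling group labels. Exact weighting conventions vary across MRPP variants, so treat this as an assumed minimal form rather than the SPSS macros described in the article:

```python
import itertools
import random

def mean_within_distance(points):
    pairs = list(itertools.combinations(points, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def mrpp_statistic(data, labels):
    # delta: group-size-weighted average of within-group mean pairwise distances.
    groups = {}
    for x, g in zip(data, labels):
        groups.setdefault(g, []).append(x)
    n = len(data)
    return sum(len(pts) / n * mean_within_distance(pts) for pts in groups.values())

def mrpp_test(data, labels, n_perm=2000, seed=0):
    # Small delta means tight groups, so the P-value counts shuffles
    # that do at least as well as the observed labeling.
    rng = random.Random(seed)
    observed = mrpp_statistic(data, labels)
    lab, hits = list(labels), 0
    for _ in range(n_perm):
        rng.shuffle(lab)
        if mrpp_statistic(data, lab) <= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)
```

For two clearly separated groups, almost no relabeling achieves a delta as small as the observed one, so the estimated P-value is small.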

  17. PERMUTATION-BASED POLYMORPHIC STEGO-WATERMARKS FOR PROGRAM CODES

    Directory of Open Access Journals (Sweden)

    Denys Samoilenko

    2016-06-01

    Full Text Available Purpose: One of the most current trends in program code protection is code marking. The problem consists in creating digital “watermarks” which allow distinguishing different copies of the same program code. Such marks could be useful for authorship protection, for numbering code copies, for monitoring program propagation, and for information security purposes in client-server communication processes. Methods: We used the methods of digital steganography adapted for program codes as text objects. The same-shape-symbols method was transformed into a same-semantic-element method owing to features of codes that make them different from ordinary texts. We use a dynamic principle of mark forming, making the codes polymorphic. Results: We examined the combinatorial capacity of permutations possible in program codes. It was shown that a set of 5-7 polymorphic variables is sufficient for most modern network applications. Mark creation and restoration algorithms were proposed and discussed. The main algorithm is based on full and partial permutations of variable names and their declaration order. The algorithm for partial permutation enumeration was optimized for computational complexity. PHP code fragments which realize the algorithms are listed. Discussion: The method proposed in this work allows each client-server connection to be distinguished. If a clone of some network resource is found, the method can reveal the included marks and thereby the IP address, date and time, and authentication information of the client who copied the resource. Usage of polymorphic stego-watermarks should improve information security in network communications.
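The full-permutation part of such marking can be illustrated with the factorial number system (Lehmer code): a mark index is encoded as a declaration order of variable names and recovered from the order observed in a code copy. This is a hypothetical Python sketch (the article lists PHP fragments); the variable names are illustrative:

```python
def mark_to_permutation(mark, names):
    """Encode an integer mark (0 <= mark < n!) as an ordering of variable names."""
    pool = sorted(names)
    # Factorial number system (Lehmer code) digits of the mark.
    digits, m = [], mark
    for k in range(1, len(pool) + 1):
        digits.append(m % k)
        m //= k
    if m:
        raise ValueError("mark does not fit into n! permutations")
    # Most significant digit first: pick the d-th remaining name each time.
    return [pool.pop(d) for d in reversed(digits)]

def permutation_to_mark(perm):
    """Recover the integer mark from an observed declaration order."""
    pool, mark = sorted(perm), 0
    for i, name in enumerate(perm):
        d = pool.index(name)
        pool.pop(d)
        mark = mark * (len(perm) - i) + d
    return mark
```

With 5-7 variables this gives 120 to 5040 distinguishable copies, consistent with the capacity estimate in the abstract.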

  18. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Science.gov (United States)

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure, and problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  19. A new Nawaz-Enscore-Ham-based heuristic for permutation flow-shop problems with bicriteria of makespan and machine idle time

    Science.gov (United States)

    Liu, Weibo; Jin, Yan; Price, Mark

    2016-10-01

    A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving a permutation flow-shop scheduling problem. A new priority rule is proposed by accounting for the average, mean absolute deviation, skewness and kurtosis, in order to fully describe the distribution style of processing times. A new tie-breaking rule is also introduced for achieving effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate better solution quality of the proposed algorithm compared to existing benchmark heuristics.
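The underlying Nawaz-Enscore-Ham procedure that this heuristic extends works by ranking jobs with a priority rule and inserting each job at the position of the partial sequence that minimizes makespan. A minimal sketch with the classical total-processing-time priority (the article's new priority and tie-breaking rules are not reproduced here):

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine.
    p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    c = [0] * m                      # running completion time per machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    # Order jobs by decreasing total processing time, then insert each job
    # at the position of the partial sequence that minimizes makespan.
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [jobs[0]]
    for j in jobs[1:]:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)
```

The tie-breaking rule the article proposes would decide among insertion positions with equal makespan, where this sketch simply keeps the first one found.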

  20. Permutation Entropy: New Ideas and Challenges

    Directory of Open Access Journals (Sweden)

    Karsten Keller

    2017-03-01

    Full Text Available Over recent years, some new variants of permutation entropy have been introduced and applied to EEG analysis, including a conditional variant and variants using some additional metric information or based on entropies different from the Shannon entropy. In some situations, it is not completely clear what kind of information the new measures and their algorithmic implementations provide. We discuss the new developments and illustrate them for EEG data.

  1. EPEPT: A web service for enhanced P-value estimation in permutation tests

    Directory of Open Access Journals (Sweden)

    Knijnenburg Theo A

    2011-10-01

    Full Text Available Abstract Background In computational biology, permutation tests have become a widely used tool to assess the statistical significance of an event under investigation. However, the common way of computing the P-value, which expresses the statistical significance, requires a very large number of permutations when small (and thus interesting) P-values are to be accurately estimated. This is computationally expensive and often infeasible. Recently, we proposed an alternative estimator, which requires far fewer permutations compared to the standard empirical approach while still reliably estimating small P-values [1]. Results The proposed P-value estimator has been enriched with additional functionalities and is made available to the general community through a public website and web service, called EPEPT. This means that the EPEPT routines can be accessed not only via a website, but also programmatically using any programming language that can interact with the web. Examples of web service clients in multiple programming languages can be downloaded. Additionally, EPEPT accepts data of various common experiment types used in computational biology. For these experiment types EPEPT first computes the permutation values and then performs the P-value estimation. Finally, the source code of EPEPT can be downloaded. Conclusions Different types of users, such as biologists, bioinformaticians and software engineers, can use the method in an appropriate and simple way. Availability http://informatics.systemsbiology.net/EPEPT/
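The standard empirical estimator that EPEPT improves upon can be sketched as follows, here for a difference in means; the (b + 1)/(n + 1) form avoids ever reporting an exact zero. EPEPT's enhanced tail estimation itself is not reproduced in this sketch:

```python
import random

def perm_pvalue(x, y, n_perm=10000, seed=1):
    """Empirical permutation P-value for a difference in means,
    using the standard (b + 1) / (n + 1) estimator (never exactly zero)."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled, b = list(x) + list(y), 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random relabeling of the pool
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            b += 1
    return (b + 1) / (n_perm + 1)
```

The abstract's point is visible here: the smallest P-value this estimator can report is 1/(n_perm + 1), so accurately estimating very small P-values requires very many permutations.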

  2. A Weak Quantum Blind Signature with Entanglement Permutation

    Science.gov (United States)

    Lou, Xiaoping; Chen, Zhigang; Guo, Ying

    2015-09-01

    Motivated by the permutation encryption algorithm, a weak quantum blind signature (QBS) scheme is proposed. It involves three participants, the sender Alice, the signatory Bob and the trusted entity Charlie, in four phases: the initializing, blinding, signing and verifying phases. In a small-scale quantum computation network, Alice blinds the message based on a quantum entanglement permutation encryption algorithm that embraces the chaotic position string. Bob signs the blinded message with private parameters shared beforehand, while Charlie verifies the signature's validity and recovers the original message. Analysis shows that the proposed scheme achieves secure blindness for the signer and traceability for the message owner with the aid of the authentic arbitrator, who plays a crucial role when a dispute arises. In addition, the signature can neither be forged nor disavowed by malicious attackers. It has wide applications to E-voting and E-payment systems, etc.

  3. MINIMIZING THE PREPARATION TIME OF A TUBES MACHINE: EXACT SOLUTION AND HEURISTICS

    Directory of Open Access Journals (Sweden)

    Robinson S.V. Hoto

    Full Text Available ABSTRACT In this paper we optimize the preparation time of a tubes machine. Tubes are hard tubes made by gluing strips of paper that are supplied on paper reels, and some reels may be reused between the production of one tube and another. We present a mathematical model for the minimization of reel changes and movements, as well as implementations of the Nearest Neighbor heuristic, an improvement of it (Best Nearest Neighbor), refinements of the Best Nearest Neighbor heuristic, and a permutation heuristic called Best Configuration, using the WxDev C++ IDE (integrated development environment). The results obtained by simulation improve on the procedure used by the company.

  4. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
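The largest order value rule mentioned in the abstract amounts to a descending argsort of the continuous vector: the job with the largest component is scheduled first. A minimal sketch of this decoding step (the rest of the EDA is not reproduced):

```python
def largest_order_value(vec):
    """Largest order value rule: the index of the largest component becomes
    the first job, the second largest the second job, and so on."""
    return sorted(range(len(vec)), key=lambda i: -vec[i])
```

Any continuous sample from the probabilistic model can thus be mapped to a job permutation before its makespan is evaluated.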

  5. All ternary permutation constraint satisfaction problems parameterized above average have kernels with quadratic numbers of variables

    DEFF Research Database (Denmark)

    Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias

    2010-01-01

    A ternary Permutation-CSP is specified by a subset Π of the symmetric group S3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes the number of triples whose rearrangement (under α) follows a permutation in Π. We prove that all ternary Permutation-CSPs parameterized above average have kernels with quadratic numbers of variables.
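Evaluating an ordering against a ternary Permutation-CSP instance can be sketched as follows, with Betweenness (Π = {(0,1,2), (2,1,0)}) as the example; the brute-force `best_ordering` is an illustrative helper, feasible only for tiny instances:

```python
from itertools import permutations

BETWEENNESS = {(0, 1, 2), (2, 1, 0)}   # "b lies between a and c"

def satisfied(ordering, constraints, pi):
    """Count constraint triples whose rearrangement under the ordering is in pi."""
    pos = {v: i for i, v in enumerate(ordering)}
    hits = 0
    for a, b, c in constraints:
        by_order = sorted((a, b, c), key=pos.__getitem__)
        # Permutation describing the triple's relative order under the ordering.
        perm = tuple(by_order.index(v) for v in (a, b, c))
        hits += perm in pi
    return hits

def best_ordering(variables, constraints, pi):
    # Exhaustive search over all linear orderings; tiny instances only.
    return max(permutations(variables), key=lambda o: satisfied(o, constraints, pi))
```

The kernelization result in the abstract concerns instances parameterized above the average number of satisfied triples, which this sketch does not address.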

  6. SCOPES: steganography with compression using permutation search

    Science.gov (United States)

    Boorboor, Sahar; Zolfaghari, Behrouz; Mozafari, Saadat Pour

    2011-10-01

    LSB (Least Significant Bit) is a widely used method for image steganography, which hides the secret message as a bit stream in the LSBs of pixel bytes in the cover image. This paper proposes a variant of LSB named SCOPES that encodes and compresses the secret message while it is hidden, by storing addresses instead of message bytes. Reducing the length of the stored message improves the storage capacity and makes the stego image visually less suspicious to a third party. The main idea behind the SCOPES approach is dividing the message into 3-character segments, seeking each segment in the cover image and storing the address of the position containing the segment instead of the segment itself. In this approach, every permutation of the 3 bytes (if found) can be stored along with some extra bits indicating the permutation. In some rare cases a segment may not be found in the image, and this can cause the message to be expanded by some overhead bits instead of being compressed. But experimental results show that SCOPES performs overall better than traditional LSB even in the worst cases.
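The segment-search idea can be sketched as follows. This minimal version stores plain addresses and falls back to literal storage for unfound segments (the rare expansion case mentioned in the abstract); it omits the matching of 3-byte permutations and the extra permutation-indicator bits, so it is an assumed simplification rather than the SCOPES scheme itself:

```python
def scopes_encode(cover, message, seg=3):
    """Split the message into seg-byte segments; store the cover address of each
    segment that occurs in the cover, otherwise store the segment literally."""
    encoded = []
    for i in range(0, len(message), seg):
        chunk = message[i:i + seg]
        pos = cover.find(chunk)
        if pos >= 0:
            encoded.append(("addr", pos, len(chunk)))   # compressed case
        else:
            encoded.append(("lit", chunk, None))        # expansion/overhead case
    return encoded

def scopes_decode(cover, encoded):
    # Reassemble the message from cover addresses and literal segments.
    return b"".join(cover[p:p + n] if kind == "addr" else p
                    for kind, p, n in encoded)
```

Allowing permuted matches, as the paper does, raises the hit rate and hence the compression, at the cost of a few indicator bits per segment.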

  7. Statistical Significance of the Contribution of Variables to the PCA Solution: An Alternative Permutation Strategy

    Science.gov (United States)

    Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.

    2011-01-01

    In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…

  8. Permutationally invariant state reconstruction

    DEFF Research Database (Denmark)

    Moroder, Tobias; Hyllus, Philipp; Tóth, Géza

    2012-01-01

    Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood estimation, and relies on a convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.

  9. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    Science.gov (United States)

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many sub-scheduling problems, and the NEH heuristic is then introduced to solve each subproblem. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220

  10. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    Science.gov (United States)

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many sub-scheduling problems, and the NEH heuristic is then introduced to solve each subproblem. Secondly, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem.

  11. Testing for changes using permutations of U-statistics

    Czech Academy of Sciences Publication Activity Database

    Horvath, L.; Hušková, Marie

    2005-01-01

    Roč. 2005, č. 128 (2005), s. 351-371 ISSN 0378-3758 R&D Projects: GA ČR GA201/00/0769 Institutional research plan: CEZ:AV0Z10750506 Keywords : U-statistics * permutations * change-point * weighted approximation * Brownian bridge Subject RIV: BD - Theory of Information Impact factor: 0.481, year: 2005

  12. A Hybrid ACO Approach to the Matrix Bandwidth Minimization Problem

    Science.gov (United States)

    Pintea, Camelia-M.; Crişan, Gloria-Cerasela; Chira, Camelia

    The evolution of human society raises ever more difficult endeavors, and for some real-life problems, computing-time restrictions add to the complexity. The Matrix Bandwidth Minimization Problem (MBMP) seeks a simultaneous permutation of the rows and the columns of a square matrix in order to keep its nonzero entries close to the main diagonal. The MBMP is a highly investigated NP-complete problem, as it has broad applications in industry, logistics, artificial intelligence and information recovery. This paper describes a new attempt to use the Ant Colony Optimization framework in tackling the MBMP. The introduced model is based on the hybridization of the Ant Colony System technique with new local search mechanisms. Computational experiments confirm a good performance of the proposed algorithm on the considered set of MBMP instances.
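The quantity being minimized can be computed directly: the bandwidth of a matrix, and the bandwidth after a candidate simultaneous row/column permutation. A minimal sketch of this evaluation step (the ant colony search itself is not reproduced):

```python
def bandwidth(matrix):
    """Largest |i - j| over nonzero entries (0 for an all-zero matrix)."""
    return max((abs(i - j)
                for i, row in enumerate(matrix)
                for j, v in enumerate(row) if v), default=0)

def permuted_bandwidth(matrix, perm):
    """Bandwidth after applying the same permutation to rows and columns."""
    n = len(matrix)
    return bandwidth([[matrix[perm[i]][perm[j]] for j in range(n)]
                      for i in range(n)])
```

A good permutation pulls off-diagonal nonzeros toward the diagonal, as the 4×4 example in the test shows, where the bandwidth drops from 3 to 1.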

  13. Development of isothermal-isobaric replica-permutation method for molecular dynamics and Monte Carlo simulations and its application to reveal temperature and pressure dependence of folded, misfolded, and unfolded states of chignolin

    Science.gov (United States)

    Yamauchi, Masataka; Okumura, Hisashi

    2017-11-01

    We developed a two-dimensional replica-permutation molecular dynamics method in the isothermal-isobaric ensemble. The replica-permutation method is a better alternative to the replica-exchange method; it was originally developed in the canonical ensemble. This method employs the Suwa-Todo algorithm, instead of the Metropolis algorithm, to perform permutations of temperatures and pressures among more than two replicas so that the rejection ratio can be minimized. We showed that the isothermal-isobaric replica-permutation method achieves better sampling efficiency than the isothermal-isobaric replica-exchange method and the infinite swapping method. We applied this method to a β-hairpin mini-protein, chignolin. In this simulation, we observed not only the folded state but also the misfolded state. We calculated the temperature and pressure dependence of the fractions of the folded, misfolded, and unfolded states. Differences in partial molar enthalpy, internal energy, entropy, partial molar volume, and heat capacity were also determined and agreed well with experimental data. We observed a new phenomenon: misfolded chignolin becomes more stable under high-pressure conditions. We also revealed the mechanism of this stability as follows: the TYR2 and TRP9 side chains cover the hydrogen bonds that form the β-hairpin structure, and the hydrogen bonds are thereby protected from the water molecules that approach the protein as the pressure increases.

  14. Minimal model holography

    International Nuclear Information System (INIS)

    Gaberdiel, Matthias R; Gopakumar, Rajesh

    2013-01-01

    We review the duality relating 2D W_N minimal model conformal field theories, in a large-N ’t Hooft-like limit, to higher spin gravitational theories on AdS_3. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Higher spin theories and holography’. (review)

  15. Inflationary models with non-minimally derivative coupling

    International Nuclear Information System (INIS)

    Yang, Nan; Fei, Qin; Gong, Yungui; Gao, Qing

    2016-01-01

    We derive the general formulae for the scalar and tensor spectral tilts to the second order for the inflationary models with non-minimally derivative coupling without taking the high friction limit. The non-minimally kinetic coupling to Einstein tensor brings the energy scale in the inflationary models down to be sub-Planckian. In the high friction limit, the Lyth bound is modified with an extra suppression factor, so that the field excursion of the inflaton is sub-Planckian. The inflationary models with non-minimally derivative coupling are more consistent with observations in the high friction limit. In particular, with the help of the non-minimally derivative coupling, the quartic power law potential is consistent with the observational constraint at 95% CL. (paper)

  16. Analyzing Permutations for AES-like Ciphers: Understanding ShiftRows

    DEFF Research Database (Denmark)

    Beierle, Christof; Jovanovic, Philipp; Lauridsen, Martin Mehl

    2015-01-01

    Designing block ciphers and hash functions in a manner that resembles the AES in many aspects has been very popular since Rijndael was adopted as the Advanced Encryption Standard. However, in sharp contrast to the MixColumns operation, the security implications of the way the state is permuted by the operation resembling ShiftRows have never been studied in depth. Here, we provide the first structured study of the influence of ShiftRows-like operations, or more generally, word-wise permutations, in AES-like ciphers with respect to diffusion properties and resistance towards differential and linear attacks, showing that the analysis of arbitrary word-wise permutations reduces to a normal form. Using a mixed-integer linear programming approach, we obtain optimal parameters for a wide range of AES-like ciphers, and show improvements on parameters for Rijndael-192, Rijndael-256, PRIMATEs-80 and Prøst-128. As a separate result, we show for specific cases of the state geometry

  17. Multiscale Permutation Entropy Based Rolling Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jinde Zheng

    2014-01-01

    Full Text Available A new rolling bearing fault diagnosis approach based on multiscale permutation entropy (MPE, Laplacian score (LS, and support vector machines (SVMs is proposed in this paper. Permutation entropy (PE was recently proposed and defined to measure the randomicity and detect dynamical changes of time series. However, for the complexity of mechanical systems, the randomicity and dynamic changes of the vibration signal will exist in different scales. Thus, the definition of MPE is introduced and employed to extract the nonlinear fault characteristics from the bearing vibration signal in different scales. Besides, the SVM is utilized to accomplish the fault feature classification to fulfill diagnostic procedure automatically. Meanwhile, in order to avoid a high dimension of features, the Laplacian score (LS is used to refine the feature vector by ranking the features according to their importance and correlations with the main fault information. Finally, the rolling bearing fault diagnosis method based on MPE, LS, and SVM is proposed and applied to the experimental data. The experimental data analysis results indicate that the proposed method could identify the fault categories effectively.

  18. Infinity-Norm Permutation Covering Codes from Cyclic Groups

    OpenAIRE

    Karni, Ronen; Schwartz, Moshe

    2017-01-01

    We study covering codes of permutations with the $\ell_\infty$-metric. We provide a general code construction, which uses smaller building-block codes. We study cyclic transitive groups as building blocks, determining their exact covering radius and showing linear-time algorithms for finding a covering codeword. We also bound the covering radius of relabeled cyclic transitive groups under conjugation.

  19. Search for Minimal Standard Model and Minimal Supersymmetric Model Higgs Bosons in e+ e- Collisions with the OPAL detector at LEP

    International Nuclear Information System (INIS)

    Ganel, Ofer

    1993-06-01

    When the LEP machine was turned on in August 1989, a new era opened: for the first time, direct, model-independent searches for the Higgs boson could be carried out. The Minimal Standard Model Higgs boson is expected to be produced in e⁺e⁻ collisions via the e⁺e⁻ → H⁰Z⁰ process. The Minimal Supersymmetric Model Higgs bosons are expected to be produced in the analogous process e⁺e⁻ → h⁰Z⁰ or in pairs via the process e⁺e⁻ → h⁰A⁰. In this thesis we describe the search for Higgs bosons within the framework of the Minimal Standard Model and the Minimal Supersymmetric Model, using the data accumulated by the OPAL detector at LEP in the 1989, 1990, 1991 and part of the 1992 running periods at and around the Z⁰ pole. A Minimal Supersymmetric Model Higgs boson generator is described, as well as its use in several different searches. As a result of this work, the Minimal Standard Model Higgs boson mass is bounded from below by 54.2 GeV/c² at 95% C.L., at present the highest such bound. A novel method of overcoming the m_t and m_s dependence of Minimal Supersymmetric Higgs boson production and decay introduced by one-loop radiative corrections is used to obtain model-independent exclusions. The thesis also describes an algorithm for off-line identification of calorimeter noise in the OPAL detector. (author)

  20. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Rui, E-mail: lirui1401@bjtu.edu.cn; Wang, Jun

    2016-01-08

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) with MWPE is also investigated. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is first applied to investigate the stock market dynamical system. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.

  2. Optimization and experimental realization of the quantum permutation algorithm

    Science.gov (United States)

    Yalçınkaya, I.; Gedik, Z.

    2017-12-01

    The quantum permutation algorithm provides a computational speed-up over classical algorithms for determining the parity of a given cyclic permutation. For its n-qubit implementations, the number of required quantum gates scales quadratically with n due to the quantum Fourier transforms included. We show here for the n-qubit case that the algorithm can be simplified so that it requires only O(n) quantum gates, which theoretically reduces the complexity of the implementation. To test our results experimentally, we utilize IBM's 5-qubit quantum processor to realize the algorithm by using the original and simplified recipes for the 2-qubit case. It turns out that the latter results in a significantly higher success probability, which allows us to verify the algorithm more precisely than in previous experimental realizations. We also verify the algorithm for the first time for the 3-qubit case, with a considerable success probability, by taking advantage of our simplified scheme.

  3. Sampling solution traces for the problem of sorting permutations by signed reversals

    Science.gov (United States)

    2012-01-01

    Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution, while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation, following a partial ordering. By using traces, we can therefore represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions have been developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of large permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions; this, however, is conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that can instead be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists of a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding-window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results

  4. Permutation entropy and statistical complexity in characterising low-aspect-ratio reversed-field pinch plasma

    International Nuclear Information System (INIS)

    Onchi, T; Fujisawa, A; Sanpei, A; Himura, H; Masamune, S

    2017-01-01

    Permutation entropy and statistical complexity are measures for complex time series. The Bandt–Pompe methodology evaluates the probability distribution using permutations; the method is robust and effective for quantifying the information content of time-series data. Statistical complexity is the product of the Jensen–Shannon divergence and the permutation entropy. These parameters are introduced to analyse time series of emission and magnetic fluctuations in low-aspect-ratio reversed-field pinch (RFP) plasma. The observed time-series data aggregate in a region of the plane, the so-called C–H plane, determined by entropy versus complexity. The C–H plane is a representation space used for distinguishing periodic, chaotic, stochastic and noisy processes in time-series data. The characteristics of the emissions and magnetic fluctuations change under different RFP-plasma conditions. The statistical complexities of soft x-ray emissions and magnetic fluctuations depend on the relationships between the reversal and pinch parameters. (paper)

  5. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
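
    For readers unfamiliar with the general machinery, the following is a stripped-down sketch of the permutation-test idea underlying such procedures: a two-sample test for a systematic location difference (bias) between two devices. The method in the paper operates on boosted GAMLSS models and tests scale effects as well; the function below is only the generic backbone, and all names are illustrative.

```python
import random

def permutation_test(x, y, n_perm=10000, seed=1):
    """Two-sided permutation test for a difference in means between two
    sets of measurements, e.g. the same quantity read by two devices."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n_x = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign measurements to devices at random
        d = abs(sum(pooled[:n_x]) / n_x
                - sum(pooled[n_x:]) / (len(pooled) - n_x))
        if d >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # smoothed p-value, never exactly zero
```

    Under the null hypothesis of no device effect, the labels are exchangeable, so shuffling them generates the reference distribution directly from the data without parametric assumptions.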

  6. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States)

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.

  7. Students' Errors in Solving the Permutation and Combination Problems Based on Problem Solving Steps of Polya

    Science.gov (United States)

    Sukoriyanto; Nusantara, Toto; Subanji; Chandra, Tjang Daniel

    2016-01-01

    This article was written based on the results of a study evaluating students' errors in solving permutation and combination problems in terms of the problem-solving steps according to Polya. Twenty-five students were asked to do four problems related to permutation and combination. The research results showed that the students still made mistakes in…

  8. Discrete Chebyshev nets and a universal permutability theorem

    International Nuclear Information System (INIS)

    Schief, W K

    2007-01-01

    The Pohlmeyer-Lund-Regge system which was set down independently in the contexts of Lagrangian field theories and the relativistic motion of a string and which played a key role in the development of a geometric interpretation of soliton theory is known to appear in a variety of important guises such as the vectorial Lund-Regge equation, the O(4) nonlinear σ-model and the SU(2) chiral model. Here, it is demonstrated that these avatars may be discretized in such a manner that both integrability and equivalence are preserved. The corresponding discretization procedure is geometric and algebraic in nature and based on discrete Chebyshev nets and generalized discrete Lelieuvre formulae. In connection with the derivation of associated Baecklund transformations, it is shown that a generalized discrete Lund-Regge equation may be interpreted as a universal permutability theorem for integrable equations which admit commuting matrix Darboux transformations acting on su(2) linear representations. Three-dimensional coordinate systems and lattices of 'Lund-Regge' type related to particular continuous and discrete Zakharov-Manakov systems are obtained as a by-product of this analysis

  9. Revisiting the NEH algorithm- the power of job insertion technique for optimizing the makespan in permutation flow shop scheduling

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2016-04-01

    Full Text Available Permutation flow shop scheduling problems have been an interesting area of research for over six decades. Among the various objectives, minimization of makespan has been studied the most over the years. The problems are widely regarded as NP-complete if the number of machines is more than three. As the computation time grows exponentially with the problem size, heuristics and meta-heuristics that give reasonably accurate and acceptable results have been proposed by many authors. The NEH algorithm proposed in 1983 is still considered one of the best simple, constructive heuristics for the minimization of makespan. This paper analyses the powerful job insertion technique used by the NEH algorithm and proposes seven new variants whose complexity level remains the same. The 120 benchmark problem instances proposed by Taillard are used to validate the algorithms. Out of the seven, three produce better results than the original NEH algorithm.
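
    To make the job insertion idea concrete, here is a minimal sketch of the classical NEH procedure (not the paper's seven variants): jobs are sorted by decreasing total processing time, and each job is then inserted at the position of the growing partial sequence that yields the smallest partial makespan.

```python
def makespan(seq, p):
    """Completion time of the last job on the last machine for a permutation
    flow shop; p[j][m] is the processing time of job j on machine m."""
    m = len(p[0])
    finish = [0] * m  # finish[k]: completion time of latest operation on machine k
    for j in seq:
        for k in range(m):
            # A job starts on machine k when both the machine is free and
            # the job has finished on machine k - 1.
            start = max(finish[k], finish[k - 1] if k else 0)
            finish[k] = start + p[j][k]
    return finish[-1]

def neh(p):
    """NEH heuristic: order jobs by decreasing total work, then insert each
    job at the position minimizing the makespan of the partial sequence."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in jobs:
        best = None
        for pos in range(len(seq) + 1):
            cand = seq[:pos] + [j] + seq[pos:]
            c = makespan(cand, p)
            if best is None or c < best[0]:
                best = (c, cand)
        seq = best[1]
    return seq, best[0]
```

    The insertion step is what distinguishes NEH from simple priority dispatching: every already-scheduled job keeps its relative order, but each new job gets to probe all positions.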

  10. Symbolic Detection of Permutation and Parity Symmetries of Evolution Equations

    KAUST Repository

    Alghamdi, Moataz

    2017-06-18

    We introduce a symbolic computational approach to detecting all permutation and parity symmetries in any general evolution equation, and to generating associated invariant polynomials, from given monomials, under the action of these symmetries. Traditionally, discrete point symmetries of differential equations are systemically found by solving complicated nonlinear systems of partial differential equations; in the presence of Lie symmetries, the process can be simplified further. Here, we show how to find parity- and permutation-type discrete symmetries purely based on algebraic calculations. Furthermore, we show that such symmetries always form groups, thereby allowing for the generation of new group-invariant conserved quantities from known conserved quantities. This work also contains an implementation of the said results in Mathematica. In addition, it includes, as a motivation for this work, an investigation of the connection between variational symmetries, described by local Lie groups, and conserved quantities in Hamiltonian systems.

  11. Magic informationally complete POVMs with permutations

    Science.gov (United States)

    Planat, Michel; Gedik, Zafer

    2017-09-01

    Eigenstates of permutation gates are either stabilizer states (for gates in the Pauli group) or magic states, thus allowing universal quantum computation (Planat, Rukhsan-Ul-Haq 2017 Adv. Math. Phys. 2017, 5287862 (doi:10.1155/2017/5287862)). We show in this paper that a subset of such magic states, when acting on the generalized Pauli group, defines (asymmetric) informationally complete POVMs. Such informationally complete POVMs, investigated in dimensions 2–12, exhibit simple finite geometries in their projector products and, for dimensions 4, 8 and 9, relate to two-qubit, three-qubit and two-qutrit contextuality.

  12. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic

    Science.gov (United States)

    Xie, Kun; Fox, Grace E.; Liu, Jun; Lyu, Cheng; Lee, Jason C.; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z.

    2016-01-01

    There is considerable scientific interest in understanding how cell assemblies—the long-presumed computational motif—are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i - 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors—the synaptic switch for learning and memory—were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques—which preferentially encode specific and low-combinatorial features and project inter-cortically—is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6—which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems—is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain’s basic computational
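
    The N = 2^i - 1 count in the abstract is simply the number of non-empty subsets of i distinct inputs. A toy enumeration (purely illustrative, not taken from the paper) makes the specific-to-general clique ladder explicit:

```python
from itertools import combinations

def assembly_patterns(inputs):
    """All non-empty input combinations for i inputs: 2**i - 1 patterns,
    ordered from specific (single-input) to general (all-input) cliques."""
    patterns = []
    for r in range(1, len(inputs) + 1):
        patterns.extend(combinations(inputs, r))
    return patterns

cliques = assembly_patterns(["A", "B", "C", "D"])
print(len(cliques))  # 2**4 - 1 = 15 distinct cell-assembly patterns
```

    Single-element tuples correspond to the "specific" cliques responding to one input, while the full set corresponds to the most "general" clique responding to any combination.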

  13. Quantile-based permutation thresholds for quantitative trait loci hotspots.

    Science.gov (United States)

    Neto, Elias Chaibub; Keller, Mark P; Broman, Andrew F; Attie, Alan D; Jansen, Ritsert C; Broman, Karl W; Yandell, Brian S

    2012-08-01

    Quantitative trait loci (QTL) hotspots (genomic locations affecting many traits) are a common feature in genetical genomics studies and are biologically interesting since they may harbor critical regulators. Therefore, statistical procedures to assess the significance of hotspots are of key importance. One approach, randomly allocating observed QTL across the genomic locations separately by trait, implicitly assumes all traits are uncorrelated. Recently, an empirical test for QTL hotspots was proposed on the basis of the number of traits that exceed a predetermined LOD value, such as the standard permutation LOD threshold. The permutation null distribution of the maximum number of traits across all genomic locations preserves the correlation structure among the phenotypes, avoiding the detection of spurious hotspots due to nongenetic correlation induced by uncontrolled environmental factors and unmeasured variables. However, by considering only the number of traits above a threshold, without accounting for the magnitude of the LOD scores, relevant information is lost. In particular, biologically interesting hotspots composed of a moderate to small number of traits with strong LOD scores may be neglected as nonsignificant. In this article we propose a quantile-based permutation approach that simultaneously accounts for the number and the LOD scores of traits within the hotspots. By considering a sliding scale of mapping thresholds, our method can assess the statistical significance of both small and large hotspots. Although the proposed approach can be applied to any type of heritable high-volume "omic" data set, we restrict our attention to expression (e)QTL analysis. We assess and compare the performances of these three methods in simulations and we illustrate how our approach can effectively assess the significance of moderate and small hotspots with strong LOD scores in a yeast expression data set.
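
    The "standard permutation LOD threshold" mentioned above is, in essence, the (1 - alpha) quantile of the maximum test statistic over all genomic positions under permutation of the trait values. Below is a generic sketch of that idea; the function and variable names are illustrative, and real eQTL pipelines use LOD scores from interval mapping rather than the toy mean-difference statistic used in the test.

```python
import random

def max_stat_threshold(stats_for, y, n_perm=500, alpha=0.05, seed=7):
    """Family-wise permutation threshold: the (1 - alpha) quantile of the
    maximum per-position statistic across permutations of the trait y.
    stats_for(y) must return one statistic per genomic position."""
    rng = random.Random(seed)
    y = list(y)
    maxima = []
    for _ in range(n_perm):
        rng.shuffle(y)  # break any genotype-trait association
        maxima.append(max(stats_for(y)))
    maxima.sort()
    return maxima[min(int((1 - alpha) * n_perm), n_perm - 1)]
```

    Taking the maximum over positions before computing the quantile is what controls the genome-wide (family-wise) error rate while preserving the correlation structure among positions.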

  14. Rank-based permutation approaches for non-parametric factorial designs.

    Science.gov (United States)

    Umlauft, Maria; Konietschke, Frank; Pauly, Markus

    2017-11-01

    Inference methods for null hypotheses formulated in terms of distribution functions in general non-parametric factorial designs are studied. The methods can be applied to continuous, ordinal or even ordered categorical data in a unified way, and are based only on ranks. In this set-up Wald-type statistics and ANOVA-type statistics are the current state of the art. The first method is asymptotically exact but a rather liberal statistical testing procedure for small to moderate sample size, while the latter is only an approximation which does not possess the correct asymptotic α level under the null. To bridge these gaps, a novel permutation approach is proposed which can be seen as a flexible generalization of the Kruskal-Wallis test to all kinds of factorial designs with independent observations. It is proven that the permutation principle is asymptotically correct while keeping its finite exactness property when data are exchangeable. The results of extensive simulation studies foster these theoretical findings. A real data set exemplifies its applicability. © 2017 The British Psychological Society.

  15. Rolling Bearing Fault Diagnosis Based on ELCD Permutation Entropy and RVM

    Directory of Open Access Journals (Sweden)

    Jiang Xingmeng

    2016-01-01

    Full Text Available Aiming at the nonstationary characteristics of a gear fault vibration signal, a recognition method based on the permutation entropy of ensemble local characteristic-scale decomposition (ELCD) and a relevance vector machine (RVM) is proposed. First, the vibration signal is decomposed by ELCD to obtain a series of intrinsic scale components (ISCs). Second, the principal ISCs are selected according to their kurtosis; the permutation entropy of the principal ISCs is then calculated, and the values are combined into a feature vector. Finally, the feature vectors are input to the RVM classifier for training and testing to identify the type of rolling bearing fault. Experimental results show that this method can effectively diagnose four kinds of working conditions, and the effect is better than that of the local characteristic-scale decomposition (LCD) method.

  16. Tolerance of a knotted near infrared fluorescent protein to random circular permutation

    Science.gov (United States)

    Pandey, Naresh; Kuypers, Brianna E.; Nassif, Barbara; Thomas, Emily E.; Alnahhas, Razan N.; Segatori, Laura; Silberg, Jonathan J.

    2016-01-01

    Bacteriophytochrome photoreceptors (BphP) are knotted proteins that have been developed as near-infrared fluorescent protein (iRFP) reporters of gene expression. To explore how rearrangements in the peptides that interlace into the knot within the BphP photosensory core affect folding, we subjected iRFP to random circular permutation using an improved transposase mutagenesis strategy and screened for variants that fluoresce. We identified twenty-seven circularly permuted iRFPs that display biliverdin-dependent fluorescence in Escherichia coli. The variants with the brightest whole-cell fluorescence initiated translation at residues near the domain linker and knot tails, although fluorescent variants were discovered that initiated translation within the PAS and GAF domains. Circularly permuted iRFPs retained sufficient cofactor affinity to fluoresce in tissue culture without the addition of biliverdin, and one variant displayed enhanced fluorescence when expressed in bacteria and tissue culture. This variant displayed a similar quantum yield to iRFP, but exhibited increased resistance to chemical denaturation, suggesting that the observed signal increase arose from more efficient protein maturation. These results show how the contact order of a knotted BphP can be altered without disrupting chromophore binding and fluorescence, an important step towards the creation of near-infrared biosensors with expanded chemical-sensing functions for in vivo imaging. PMID:27304983

  17. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Minimal conformal model

    Energy Technology Data Exchange (ETDEWEB)

    Helmboldt, Alexander; Humbert, Pascal; Lindner, Manfred; Smirnov, Juri [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2016-07-01

    The gauge hierarchy problem is one of the crucial drawbacks of the standard model of particle physics (SM) and thus has triggered model building over the last decades. Its most famous solution is the introduction of low-scale supersymmetry. However, without any significant signs of supersymmetric particles at the LHC to date, it makes sense to devise alternative mechanisms to remedy the hierarchy problem. One such mechanism is based on classically scale-invariant extensions of the SM, in which both the electroweak symmetry and the (anomalous) scale symmetry are broken radiatively via the Coleman-Weinberg mechanism. Apart from giving an introduction to classically scale-invariant models, the talk presents our results on obtaining a theoretically consistent minimal extension of the SM, which reproduces the correct low-scale phenomenology.

  19. Perturbed Yukawa textures in the minimal seesaw model

    Energy Technology Data Exchange (ETDEWEB)

    Rink, Thomas; Schmitz, Kai [Max Planck Institute for Nuclear Physics (MPIK),69117 Heidelberg (Germany)

    2017-03-29

    We revisit the minimal seesaw model, i.e., the type-I seesaw mechanism involving only two right-handed neutrinos. This model represents an important minimal benchmark scenario for future experimental updates on neutrino oscillations. It features four real parameters that cannot be fixed by the current data: two CP-violating phases, δ and σ, as well as one complex parameter, z, that is experimentally inaccessible at low energies. The parameter z controls the structure of the neutrino Yukawa matrix at high energies, which is why it may be regarded as a label or index for all UV completions of the minimal seesaw model. The fact that z encompasses only two real degrees of freedom allows us to systematically scan the minimal seesaw model over all of its possible UV completions. In doing so, we address the following question: suppose δ and σ should be measured at particular values in the future — to what extent is one then still able to realize approximate textures in the neutrino Yukawa matrix? Our analysis, thus, generalizes previous studies of the minimal seesaw model based on the assumption of exact texture zeros. In particular, our study allows us to assess the theoretical uncertainty inherent to the common texture ansatz. One of our main results is that a normal light-neutrino mass hierarchy is, in fact, still consistent with a two-zero Yukawa texture, provided that the two texture zeros receive corrections at the level of O(10 %). While our numerical results pertain to the minimal seesaw model only, our general procedure appears to be applicable to other neutrino mass models as well.

  20. Permutation Entropy for Random Binary Sequences

    Directory of Open Access Journals (Sweden)

    Lingfeng Liu

    2015-12-01

    Full Text Available In this paper, we generalize the permutation entropy (PE) measure, which is based on Shannon's entropy, to binary sequences, and theoretically analyze this measure for random binary sequences. We deduce the theoretical value of PE for random binary sequences, which can be used to measure the randomness of binary sequences. We also reveal the relationship between this PE measure and other randomness measures, such as Shannon's entropy and Lempel–Ziv complexity. The results show that PE is consistent with these two measures. Furthermore, we use PE as one of the randomness measures to evaluate the randomness of chaotic binary sequences.

  1. Permutation 2-groups I: structure and splitness

    OpenAIRE

    Elgueta, Josep

    2013-01-01

    By a 2-group we mean a groupoid equipped with a weakened group structure. It is called split when it is equivalent to the semidirect product of a discrete 2-group and a one-object 2-group. By a permutation 2-group we mean the 2-group $\mathbb{S}ym(\mathcal{G})$ of self-equivalences of a groupoid $\mathcal{G}$ and natural isomorphisms between them, with the product given by composition of self-equivalences. These generalize the symmetric groups $\mathsf{S}_n$, $n\geq 1$, obtained when $\mathca...

  2. Minimal Self-Models and the Free Energy Principle

    Directory of Open Access Journals (Sweden)

    Jakub Limanowski

    2013-09-01

    The term "minimal phenomenal selfhood" describes the basic, pre-reflective experience of being a self (Blanke & Metzinger, 2009). Theoretical accounts of the minimal self have long recognized the importance and the ambivalence of the body as both part of the physical world, and the enabling condition for being in this world (Gallagher, 2005; Grafton, 2009). A recent account of minimal phenomenal selfhood (MPS; Metzinger, 2004a) centers on the consideration that minimal selfhood emerges as the result of basic self-modeling mechanisms, thereby being founded on pre-reflective bodily processes. The free energy principle (FEP; Friston, 2010) is a novel unified theory of cortical function that builds upon the imperative that self-organizing systems entail hierarchical generative models of the causes of their sensory input, which are optimized by minimizing free energy as an approximation of the log-likelihood of the model. The implementation of the FEP via predictive coding mechanisms and in particular the active inference principle emphasizes the role of embodiment for predictive self-modeling, which has been appreciated in recent publications. In this review, we provide an overview of these conceptions and illustrate thereby the potential power of the FEP in explaining the mechanisms underlying minimal selfhood and its key constituents, multisensory integration, interoception, agency, perspective, and the experience of mineness. We conclude that the conceptualization of MPS can be well mapped onto a hierarchical generative model furnished by the free energy principle and may constitute the basis for higher-level, cognitive forms of self-referral, as well as the understanding of other minds.

  3. The dispersionless Lax equations and topological minimal models

    International Nuclear Information System (INIS)

    Krichever, I.

    1992-01-01

    It is shown that perturbed rings of the primary chiral fields of the topological minimal models coincide with some particular solutions of the dispersionless Lax equations. The exact formulae for the tree-level partition functions of A_n topological minimal models are found. The Virasoro constraints for the analogue of the τ-function of the dispersionless Lax equation corresponding to these models are proved. (orig.)

  4. A Generalized Random Regret Minimization Model

    NARCIS (Netherlands)

    Chorus, C.G.

    2013-01-01

    This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM

  5. Hippocampal activation during face-name associative memory encoding: blocked versus permuted design

    International Nuclear Information System (INIS)

    De Vogelaere, Frederick; Vingerhoets, Guy; Santens, Patrick; Boon, Paul; Achten, Erik

    2010-01-01

    The contribution of the hippocampal subregions to episodic memory through the formation of new associations between previously unrelated items such as faces and names is established but remains under discussion. Block design studies in this area of research generally tend to show posterior hippocampal activation during encoding of novel associational material while event-related studies emphasize anterior hippocampal involvement. We used functional magnetic resonance imaging to assess the involvement of anterior and posterior hippocampus in the encoding of novel associational material compared to the viewing of previously seen associational material. We used two different experimental designs, a block design and a permuted block design, and applied each to the same associative memory task to perform valid statistical comparisons. Our results indicate that the permuted design was able to capture more anterior hippocampal activation compared to the block design, which emphasized more posterior hippocampal involvement. These differences were further investigated and attributed to a combination of the polymodal stimuli we used and the experimental design. Activation patterns during encoding in both designs occurred along the entire longitudinal axis of the hippocampus, but with different centers of gravity. The maximal activated voxel in the block design was situated in the posterior half of the hippocampus while in the permuted design this was located in the anterior half. (orig.)

  6. Hippocampal activation during face-name associative memory encoding: blocked versus permuted design

    Energy Technology Data Exchange (ETDEWEB)

    De Vogelaere, Frederick; Vingerhoets, Guy [Ghent University, Laboratory for Neuropsychology, Department of Neurology, Ghent (Belgium); Santens, Patrick; Boon, Paul [Ghent University Hospital, Department of Neurology, Ghent (Belgium); Achten, Erik [Ghent University Hospital, Department of Radiology, Ghent (Belgium)

    2010-01-15

    The contribution of the hippocampal subregions to episodic memory through the formation of new associations between previously unrelated items such as faces and names is established but remains under discussion. Block design studies in this area of research generally tend to show posterior hippocampal activation during encoding of novel associational material while event-related studies emphasize anterior hippocampal involvement. We used functional magnetic resonance imaging to assess the involvement of anterior and posterior hippocampus in the encoding of novel associational material compared to the viewing of previously seen associational material. We used two different experimental designs, a block design and a permuted block design, and applied each to the same associative memory task to perform valid statistical comparisons. Our results indicate that the permuted design was able to capture more anterior hippocampal activation compared to the block design, which emphasized more posterior hippocampal involvement. These differences were further investigated and attributed to a combination of the polymodal stimuli we used and the experimental design. Activation patterns during encoding in both designs occurred along the entire longitudinal axis of the hippocampus, but with different centers of gravity. The maximal activated voxel in the block design was situated in the posterior half of the hippocampus while in the permuted design this was located in the anterior half. (orig.)

  7. Tolerance of a Knotted Near-Infrared Fluorescent Protein to Random Circular Permutation.

    Science.gov (United States)

    Pandey, Naresh; Kuypers, Brianna E; Nassif, Barbara; Thomas, Emily E; Alnahhas, Razan N; Segatori, Laura; Silberg, Jonathan J

    2016-07-12

    Bacteriophytochrome photoreceptors (BphP) are knotted proteins that have been developed as near-infrared fluorescent protein (iRFP) reporters of gene expression. To explore how rearrangements in the peptides that interlace into the knot within the BphP photosensory core affect folding, we subjected iRFPs to random circular permutation using an improved transposase mutagenesis strategy and screened for variants that fluoresce. We identified 27 circularly permuted iRFPs that display biliverdin-dependent fluorescence in Escherichia coli. The variants with the brightest whole cell fluorescence initiated translation at residues near the domain linker and knot tails, although fluorescent variants that initiated translation within the PAS and GAF domains were discovered. Circularly permuted iRFPs retained sufficient cofactor affinity to fluoresce in tissue culture without the addition of biliverdin, and one variant displayed enhanced fluorescence when expressed in bacteria and tissue culture. This variant displayed a quantum yield similar to that of iRFPs but exhibited increased resistance to chemical denaturation, suggesting that the observed increase in the magnitude of the signal arose from more efficient protein maturation. These results show how the contact order of a knotted BphP can be altered without disrupting chromophore binding and fluorescence, an important step toward the creation of near-infrared biosensors with expanded chemical sensing functions for in vivo imaging.

  8. Automated economic analysis model for hazardous waste minimization

    International Nuclear Information System (INIS)

    Dharmavaram, S.; Mount, J.B.; Donahue, B.A.

    1990-01-01

    The US Army has established a policy of achieving a 50 percent reduction in hazardous waste generation by the end of 1992. To assist the Army in reaching this goal, the Environmental Division of the US Army Construction Engineering Research Laboratory (USACERL) designed the Economic Analysis Model for Hazardous Waste Minimization (EAHWM). The EAHWM was designed to allow the user to evaluate the life cycle costs for various techniques used in hazardous waste minimization and to compare them to the life cycle costs of current operating practices. The program was developed in C language on an IBM compatible PC and is consistent with other pertinent models for performing economic analyses. The potential hierarchical minimization categories used in EAHWM include source reduction, recovery and/or reuse, and treatment. Although treatment is no longer an acceptable minimization option, its use is widespread and has therefore been addressed in the model. The model allows for economic analysis for minimization of the Army's six most important hazardous waste streams. These include solvents, paint stripping wastes, metal plating wastes, industrial waste-sludges, used oils, and batteries and battery electrolytes. The EAHWM also includes a general application which can be used to calculate and compare the life cycle costs for minimization alternatives of any waste stream, hazardous or non-hazardous. The EAHWM has been fully tested and implemented in more than 60 Army installations in the United States.

  9. Diversification of Protein Cage Structure Using Circularly Permuted Subunits.

    Science.gov (United States)

    Azuma, Yusuke; Herger, Michael; Hilvert, Donald

    2018-01-17

    Self-assembling protein cages are useful as nanoscale molecular containers for diverse applications in biotechnology and medicine. To expand the utility of such systems, there is considerable interest in customizing the structures of natural cage-forming proteins and designing new ones. Here we report that a circularly permuted variant of lumazine synthase, a cage-forming enzyme from Aquifex aeolicus (AaLS), affords versatile building blocks for the construction of nanocompartments that can be easily produced, tailored, and diversified. The topologically altered protein, cpAaLS, self-assembles into spherical and tubular cage structures with morphologies that can be controlled by the length of the linker connecting the native termini. Moreover, cpAaLS proteins integrate into wild-type and other engineered AaLS assemblies by coproduction in Escherichia coli to form patchwork cages. This coassembly strategy enables encapsulation of guest proteins in the lumen, modification of the exterior through genetic fusion, and tuning of the size and electrostatics of the compartments. This addition to the family of AaLS cages broadens the scope of this system for further applications and highlights the utility of circular permutation as a potentially general strategy for tailoring the properties of cage-forming proteins.

  10. Transformative decision rules, permutability, and non-sequential framing of decision problems

    NARCIS (Netherlands)

    Peterson, M.B.

    2004-01-01

    The concept of transformative decision rules provides a useful tool for analyzing what is often referred to as the `framing', or `problem specification', or `editing' phase of decision making. In the present study we analyze a fundamental aspect of transformative decision rules, viz. permutability. A

  11. Multiple comparisons permutation test for image based data mining in radiotherapy

    NARCIS (Netherlands)

    Chen, Chun; Witte, Marnix; Heemsbergen, Wilma; van Herk, Marcel

    2013-01-01

    Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling

  12. The minimal curvaton-Higgs model

    International Nuclear Information System (INIS)

    Enqvist, Kari; Lerner, Rose N.; Helsinki Univ. and Helsinki Institute of Physics; Takahashi, Tomo

    2013-10-01

    We present the first full study of the minimal curvaton-Higgs (MCH) model, which is a minimal interpretation of the curvaton scenario with one real scalar coupled to the standard model Higgs boson. The standard model coupling allows the dynamics of the model to be determined in detail, including effects from the thermal background and from radiative corrections to the potential. The relevant mechanisms for curvaton decay are incomplete non-perturbative decay (delayed by thermal blocking), followed by decay via a dimension-5 non-renormalisable operator. To avoid spoiling the predictions of big bang nucleosynthesis, we find the ''bare'' curvaton mass to be m_σ ≥ 8 x 10^4 GeV. To match observational data from Planck there is an upper limit on the curvaton-Higgs coupling g, between 10^-3 and 10^-2, depending on the mass. This is due to interactions with the thermal background. We find that typically non-Gaussianities are small but that if f_NL is observed in the near future then m_σ 9 GeV, depending on the Hubble scale during inflation. In a thermal dark matter model, the lower bound on m_σ can increase substantially. The parameter space may also be affected once the baryogenesis mechanism is specified.

  13. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Background: Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness-of-fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often correlated in complicated ways. Accurate type I error control adjusting for multiple testing requires the joint null distribution of test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of computational ease and their intuitive interpretation. Results: In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion: Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
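
    The key idea, permuting labels jointly across all genes so the test statistics share a joint null distribution, can be sketched with a generic Westfall-Young max-statistic procedure. This is not the authors' implementation (they use Storey et al.'s goodness-of-fit statistic for spline fits); the function names and the mean-difference statistic below are illustrative assumptions for a simple two-group comparison.

```python
import random
import statistics

def group_diff(values, labels):
    """Absolute difference of group means for a 0/1 label vector."""
    g1 = [v for v, l in zip(values, labels) if l]
    g0 = [v for v, l in zip(values, labels) if not l]
    return abs(statistics.fmean(g1) - statistics.fmean(g0))

def maxT_adjusted_pvalues(X, labels, n_perm=500, seed=0):
    """Westfall-Young max-T adjusted p-values for a two-group comparison.
    X: list of per-gene sample lists; labels: 0/1 group indicator."""
    rng = random.Random(seed)
    observed = [group_diff(row, labels) for row in X]
    exceed = [0] * len(X)
    lab = list(labels)
    for _ in range(n_perm):
        rng.shuffle(lab)  # permute group labels jointly for all genes
        # The max over genes captures the joint null, controlling FWER.
        m = max(group_diff(row, lab) for row in X)
        for i, obs in enumerate(observed):
            if m >= obs:
                exceed[i] += 1
    return [(e + 1) / (n_perm + 1) for e in exceed]
```

    Because every gene is compared against the same permuted maxima, the correlation structure across genes is preserved without having to model it.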

  14. A method for generating permutation distribution of ranks in a k ...

    African Journals Online (AJOL)

    ... in a combinatorial sense the distribution of the ranks is obtained via its generating function. The formulas are defined recursively to speed up computations using the computer algebra system Mathematica. Key words: Partitions, generating functions, combinatorics, permutation test, exact tests, computer algebra, k-sample, ...
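
    The recursive generating-function idea can be illustrated on the classical two-sample special case: the number of m-subsets of {1, ..., n} with a given rank sum obeys a simple recursion (the coefficients of a Gaussian-binomial generating function). The k-sample setting of the paper is more general, and this sketch is not the authors' Mathematica code; all names are illustrative.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def ways(n, m, w):
    """Number of m-subsets of {1, ..., n} whose elements sum to w,
    via the recursion: rank n is either in the subset or not."""
    if m == 0:
        return 1 if w == 0 else 0
    if n < m or w < 0:
        return 0
    return ways(n - 1, m - 1, w - n) + ways(n - 1, m, w)

def rank_sum_distribution(n, m):
    """Exact permutation distribution of the two-sample rank-sum
    statistic W for a sample of size m in n pooled, untied observations."""
    total = comb(n, m)
    lo = m * (m + 1) // 2          # smallest possible rank sum: 1+...+m
    hi = m * (2 * n - m + 1) // 2  # largest: (n-m+1)+...+n
    return {w: ways(n, m, w) / total for w in range(lo, hi + 1)}
```

    For n = 4, m = 2 the six equally likely subsets give rank sums 3, 4, 5, 5, 6, 7, so W = 5 has probability 2/6 while the others have 1/6 each.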

  15. Permutation entropy based time series analysis: Equalities in the input signal can lead to false conclusions

    Energy Technology Data Exchange (ETDEWEB)

    Zunino, Luciano, E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata – CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina); Olivares, Felipe, E-mail: olivaresfe@gmail.com [Instituto de Física, Pontificia Universidad Católica de Valparaíso (PUCV), 23-40025 Valparaíso (Chile); Scholkmann, Felix, E-mail: Felix.Scholkmann@gmail.com [Research Office for Complex Physical and Biological Systems (ROCoS), Mutschellenstr. 179, 8038 Zurich (Switzerland); Biomedical Optics Research Laboratory, Department of Neonatology, University Hospital Zurich, University of Zurich, 8091 Zurich (Switzerland); Rosso, Osvaldo A., E-mail: oarosso@gmail.com [Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970, Maceió, Alagoas (Brazil); Instituto Tecnológico de Buenos Aires (ITBA) and CONICET, C1106ACD, Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires (Argentina); Complex Systems Group, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Av. Mons. Álvaro del Portillo 12.455, Las Condes, Santiago (Chile)

    2017-06-15

    A symbolic encoding scheme, based on the ordinal relation between the amplitude of neighboring values of a given data sequence, should be implemented before estimating the permutation entropy. Consequently, equalities in the analyzed signal, i.e. repeated equal values, deserve special attention and treatment. In this work, we carefully study the effect that the presence of equalities has on permutation entropy estimated values when these ties are symbolized, as is commonly done, according to their order of appearance. On the one hand, the analysis of computer-generated time series is initially developed to understand the incidence of repeated values on permutation entropy estimations in controlled scenarios. The presence of temporal correlations is erroneously concluded when true pseudorandom time series with low amplitude resolutions are considered. On the other hand, the analysis of real-world data is included to illustrate how the presence of a significant number of equal values can give rise to false conclusions regarding the underlying temporal structures in practical contexts.

    Highlights:
    • Impact of repeated values in a signal when estimating permutation entropy is studied.
    • Numerical and experimental tests are included for characterizing this limitation.
    • Non-negligible temporal correlations can be spuriously concluded by repeated values.
    • Data digitized with low amplitude resolutions could be especially affected.
    • Analysis with shuffled realizations can help to overcome this limitation.
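
    The effect described here is easy to reproduce: quantizing an i.i.d. series to a few amplitude levels introduces ties, and symbolizing the ties by order of appearance deflates the estimated permutation entropy even though the series has no temporal structure. A minimal sketch (the 4-level quantization and sample size are arbitrary choices, not taken from the paper):

```python
import math
import random

def pe(series, order=3):
    """Normalized permutation entropy; ties are ranked by order of
    appearance (Python's sort is stable), as is commonly done."""
    counts = {}
    for i in range(len(series) - order + 1):
        w = series[i:i + order]
        pat = tuple(sorted(range(order), key=lambda k: w[k]))
        counts[pat] = counts.get(pat, 0) + 1
    n = sum(counts.values())
    h = -sum(c / n * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))

rng = random.Random(1)
x = [rng.random() for _ in range(20000)]  # i.i.d. noise, ties essentially absent
q = [round(v * 3) for v in x]             # the same noise at a 4-level amplitude resolution
# The quantized series yields a visibly lower PE despite having no
# temporal correlations, exactly the spurious effect discussed above.
print(pe(x), pe(q))
```
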

  16. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

    Science.gov (United States)

    Traversaro, Francisco; O. Redelico, Francisco

    2018-04-01

    In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity that characterize time series. The literature describes some resampling methods for quantities used in nonlinear dynamics - such as the largest Lyapunov exponent - but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes: the 1/f^α noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been extensively used in epilepsy research for detecting dynamical changes in the electroencephalogram (EEG) signal with no consideration of the variability of this complexity measure. An application of the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.

  17. Fusion algebras of logarithmic minimal models

    International Nuclear Information System (INIS)

    Rasmussen, Joergen; Pearce, Paul A

    2007-01-01

    We present explicit conjectures for the chiral fusion algebras of the logarithmic minimal models LM(p,p') considering Virasoro representations with no enlarged or extended symmetry algebra. The generators of fusion are countably infinite in number but the ensuing fusion rules are quasi-rational in the sense that the fusion of a finite number of representations decomposes into a finite direct sum of representations. The fusion rules are commutative, associative and exhibit an sl(2) structure but require so-called Kac representations which are typically reducible yet indecomposable representations of rank 1. In particular, the identity of the fundamental fusion algebra for p ≠ 1 is a reducible yet indecomposable Kac representation of rank 1. We make detailed comparisons of our fusion rules with the results of Gaberdiel and Kausch for p = 1 and with Eberle and Flohr for (p, p') = (2, 5) corresponding to the logarithmic Yang-Lee model. In the latter case, we confirm the appearance of indecomposable representations of rank 3. We also find that closure of a fundamental fusion algebra is achieved without the introduction of indecomposable representations of rank higher than 3. The conjectured fusion rules are supported, within our lattice approach, by extensive numerical studies of the associated integrable lattice models. Details of our lattice findings and numerical results will be presented elsewhere. The agreement of our fusion rules with the previous fusion rules lends considerable support for the identification of the logarithmic minimal models LM(p,p') with the augmented c_{p,p'} (minimal) models defined algebraically.

  18. The minimal curvaton-Higgs model

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Kari [Helsinki Univ. and Helsinki Institute of Physics (Finland). Physics Dept.; Lerner, Rose N. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Helsinki Univ. and Helsinki Institute of Physics (Finland). Physics Dept.; Takahashi, Tomo [Saga Univ. (Japan). Dept. of Physics

    2013-10-15

    We present the first full study of the minimal curvaton-Higgs (MCH) model, which is a minimal interpretation of the curvaton scenario with one real scalar coupled to the standard model Higgs boson. The standard model coupling allows the dynamics of the model to be determined in detail, including effects from the thermal background and from radiative corrections to the potential. The relevant mechanisms for curvaton decay are incomplete non-perturbative decay (delayed by thermal blocking), followed by decay via a dimension-5 non-renormalisable operator. To avoid spoiling the predictions of big bang nucleosynthesis, we find the ''bare'' curvaton mass to be m_σ ≥ 8 x 10^4 GeV. To match observational data from Planck there is an upper limit on the curvaton-Higgs coupling g, between 10^-3 and 10^-2, depending on the mass. This is due to interactions with the thermal background. We find that typically non-Gaussianities are small but that if f_NL is observed in the near future then m_σ 9 GeV, depending on the Hubble scale during inflation. In a thermal dark matter model, the lower bound on m_σ can increase substantially. The parameter space may also be affected once the baryogenesis mechanism is specified.

  19. Null-polygonal minimal surfaces in AdS4 from perturbed W minimal models

    International Nuclear Information System (INIS)

    Hatsuda, Yasuyuki; Ito, Katsushi; Satoh, Yuji

    2012-11-01

    We study the null-polygonal minimal surfaces in AdS_4, which correspond to the gluon scattering amplitudes/Wilson loops in N=4 super Yang-Mills theory at strong coupling. The area of the minimal surfaces with n cusps is characterized by the thermodynamic Bethe ansatz (TBA) integral equations or the Y-system of the homogeneous sine-Gordon model, which is regarded as the SU(n-4)_4/U(1)^(n-5) generalized parafermion theory perturbed by the weight-zero adjoint operators. Based on the relation to the TBA systems of the perturbed W minimal models, we solve the TBA equations by using the conformal perturbation theory, and obtain the analytic expansion of the remainder function around the UV/regular-polygonal limit for n = 6 and 7. We compare the rescaled remainder function for n = 6 with the two-loop one, to observe that they are close to each other similarly to the AdS_3 case.

  20. Strong Sector in non-minimal SUSY model

    Directory of Open Access Journals (Sweden)

    Costantini Antonio

    2016-01-01

    We investigate the squark sector of a supersymmetric theory with an extended Higgs sector. We give the mass matrices of the stop and sbottom, comparing the Minimal Supersymmetric Standard Model (MSSM) case and the non-minimal case. We discuss the impact of the extra superfields on the decay channels of the stop searched for at the LHC.

  1. A permutation information theory tour through different interest rate maturities: the Libor case.

    Science.gov (United States)

    Bariviera, Aurelio Fernández; Guercio, María Belén; Martinez, Lisana B; Rosso, Osvaldo A

    2015-12-13

    This paper analyses Libor interest rates for seven different maturities and referred to operations in British pounds, euros, Swiss francs and Japanese yen, during the period 2001-2015. The analysis is performed by means of two quantifiers derived from information theory: the permutation Shannon entropy and the permutation Fisher information measure. An anomalous behaviour in the Libor is detected in all currencies except euros during the years 2006-2012. The stochastic switch is more severe in one, two and three months maturities. Given the special mechanism of Libor setting, we conjecture that the behaviour could have been produced by the manipulation that was uncovered by financial authorities. We argue that our methodology is pertinent as a market overseeing instrument. © 2015 The Author(s).

  2. Information transmission and signal permutation in active flow networks

    Science.gov (United States)

    Woodhouse, Francis G.; Fawcett, Joanna B.; Dunkel, Jörn

    2018-03-01

    Recent experiments show that both natural and artificial microswimmers in narrow channel-like geometries will self-organise to form steady, directed flows. This suggests that networks of flowing active matter could function as novel autonomous microfluidic devices. However, little is known about how information propagates through these far-from-equilibrium systems. Through a mathematical analogy with spin-ice vertex models, we investigate here the input–output characteristics of generic incompressible active flow networks (AFNs). Our analysis shows that information transport through an AFN is inherently different from conventional pressure or voltage driven networks. Active flows on hexagonal arrays preserve input information over longer distances than their passive counterparts and are highly sensitive to bulk topological defects, whose presence can be inferred from marginal input–output distributions alone. This sensitivity further allows controlled permutations on parallel inputs, revealing an unexpected link between active matter and group theory that can guide new microfluidic mixing strategies facilitated by active matter and aid the design of generic autonomous information transport networks.

  3. Minimal dilaton model

    Directory of Open Access Journals (Sweden)

    Oda Kin-ya

    2013-05-01

    Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for the logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model, in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that slightly mixes with an SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavor such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.

  4. Research of Planetary Gear Fault Diagnosis Based on Permutation Entropy of CEEMDAN and ANFIS

    Directory of Open Access Journals (Sweden)

    Moshen Kuai

    2018-03-01

    Because the planetary gear has the characteristics of small volume, light weight and large transmission ratio, it is widely used in high-speed, high-power mechanical systems. Poor working conditions result in frequent failures of planetary gears. A method is proposed in this paper for diagnosing faults in planetary gears based on the permutation entropy of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and an Adaptive Neuro-fuzzy Inference System (ANFIS). The original signal is decomposed into 6 intrinsic mode functions (IMFs) and residual components by CEEMDAN. Since the IMFs contain the main characteristic information of planetary gear faults, the time complexity of the IMFs is reflected by permutation entropies to quantify the fault features. The permutation entropies of each IMF component are defined as the input of the ANFIS, and its parameters and membership functions are adaptively adjusted according to training samples. Finally, the fuzzy inference rules are determined, and the optimal ANFIS is obtained. The overall recognition rate of the test sample used for the ANFIS is 90%, and the recognition rate for a gear with one missing tooth is relatively high. The recognition rates for other fault gears based on the method can also achieve good results. Therefore, the proposed method can be applied to planetary gear fault diagnosis effectively.

  5. Research of Planetary Gear Fault Diagnosis Based on Permutation Entropy of CEEMDAN and ANFIS.

    Science.gov (United States)

    Kuai, Moshen; Cheng, Gang; Pang, Yusong; Li, Yong

    2018-03-05

    Because the planetary gear has the characteristics of small volume, light weight and large transmission ratio, it is widely used in high-speed, high-power mechanical systems. Poor working conditions result in frequent failures of planetary gears. A method is proposed in this paper for diagnosing faults in planetary gears based on the permutation entropy of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and an Adaptive Neuro-fuzzy Inference System (ANFIS). The original signal is decomposed into 6 intrinsic mode functions (IMFs) and residual components by CEEMDAN. Since the IMFs contain the main characteristic information of planetary gear faults, the time complexity of the IMFs is reflected by permutation entropies to quantify the fault features. The permutation entropies of each IMF component are defined as the input of the ANFIS, and its parameters and membership functions are adaptively adjusted according to training samples. Finally, the fuzzy inference rules are determined, and the optimal ANFIS is obtained. The overall recognition rate of the test sample used for the ANFIS is 90%, and the recognition rate for a gear with one missing tooth is relatively high. The recognition rates for other fault gears based on the method can also achieve good results. Therefore, the proposed method can be applied to planetary gear fault diagnosis effectively.

  6. Matrix factorizations, minimal models and Massey products

    International Nuclear Information System (INIS)

    Knapp, Johanna; Omer, Harun

    2006-01-01

    We present a method to compute the full non-linear deformations of matrix factorizations for ADE minimal models. This method is based on the calculation of higher products in the cohomology, called Massey products. The algorithm yields a polynomial ring whose vanishing relations encode the obstructions of the deformations of the D-branes characterized by these matrix factorizations. This coincides with the critical locus of the effective superpotential which can be computed by integrating these relations. Our results for the effective superpotential are in agreement with those obtained from solving the A-infinity relations. We point out a relation to the superpotentials of Kazama-Suzuki models. We will illustrate our findings by various examples, putting emphasis on the E 6 minimal model

  7. Generalized permutation symmetry and the flavour problem in SU(2)sub(L)xU(1)

    International Nuclear Information System (INIS)

    Ecker, G.

    1984-01-01

    A generalized permutation group is introduced as a possible horizontal symmetry for SU(2)sub(L)xU(1) gauge theories. It leads to the unique two-generation quark mass matrices with a correct prediction for the Cabibbo angle. For three generations the model exhibits spontaneous CP violation, correlates the Kobayashi-Maskawa mixing parameters s1 and s3 and predicts an upper bound for the running top quark mass of approximately 45 GeV. The hierarchy of generations is due to a hierarchy of vacuum expectation values rather than of Yukawa coupling constants. (orig.)

  8. Information sets as permutation cycles for quadratic residue codes

    Directory of Open Access Journals (Sweden)

    Richard A. Jenson

    1982-01-01

    Full Text Available The two cases p=7 and p=23 are the only known cases where the automorphism group of the [p+1, (p+1)/2] extended binary quadratic residue code, Q(p), properly contains PSL(2,p). These codes have some of their information sets represented as permutation cycles from Aut(Q(p)). Analysis proves that all information sets of Q(7) are so represented but those of Q(23) are not.

  9. Structural analysis of papain-like NlpC/P60 superfamily enzymes with a circularly permuted topology reveals potential lipid binding sites.

    Directory of Open Access Journals (Sweden)

    Qingping Xu

    Full Text Available NlpC/P60 superfamily papain-like enzymes play important roles in all kingdoms of life. Two members of this superfamily, the LRAT-like and YaeF/YiiX-like families, were predicted to contain a catalytic domain that is circularly permuted such that the catalytic cysteine is located near the C-terminus instead of at the N-terminus. These permuted enzymes are widespread in viruses, pathogenic bacteria, and eukaryotes. We determined the crystal structure of a member of the YaeF/YiiX-like family from Bacillus cereus in complex with lysine. The structure, which adopts a ligand-induced, "closed" conformation, confirms the circular permutation of the catalytic residues. A comparative analysis of other related protein structures within the NlpC/P60 superfamily is presented. Permuted NlpC/P60 enzymes contain a similar conserved core and arrangement of catalytic residues, including a Cys/His-containing triad and an additional conserved tyrosine. More surprisingly, the permuted enzymes have a hydrophobic S1 binding pocket that is distinct from previously characterized enzymes in the family, indicative of novel substrate specificity. Further analysis of a structural homolog, YiiX (PDB 2if6), identified a fatty acid in the conserved hydrophobic pocket, thus providing additional insights into the possible function of these novel enzymes.

  10. Predecessor and permutation existence problems for sequential dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C. L. (Christopher L.); Hunt, H. B. (Harry B.); Marathe, M. V. (Madhav V.); Rosenkrantz, D. J. (Daniel J.); Stearns, R. E. (Richard E.)

    2002-01-01

    A class of finite discrete dynamical systems, called Sequential Dynamical Systems (SDSs), was introduced in [BMR99, BR99] as a formal model for analyzing simulation systems. An SDS S is a triple (G, F, π), where (i) G(V, E) is an undirected graph with n nodes, each node having a state, (ii) F = (f1, f2, ..., fn), with fi denoting a function associated with node vi ∈ V, and (iii) π is a permutation of (or total order on) the nodes in V. A configuration of an SDS is an n-vector (b1, b2, ..., bn), where bi is the value of the state of node vi. A single SDS transition from one configuration to another is obtained by updating the states of the nodes by evaluating the function associated with each of them in the order given by π. Here, we address the complexity of two basic problems and their generalizations for SDSs. Given an SDS S and a configuration C, the PREDECESSOR EXISTENCE (or PRE) problem is to determine whether there is a configuration C' such that S has a transition from C' to C. (If C has no predecessor, C is known as a garden of Eden configuration.) Our results provide separations between efficiently solvable and computationally intractable instances of the PRE problem. For example, we show that the PRE problem can be solved efficiently for SDSs with Boolean state values when the node functions are symmetric and the underlying graph is of bounded treewidth. In contrast, we show that allowing just one non-symmetric node function renders the problem NP-complete even when the underlying graph is a tree (which has a treewidth of 1). We also show that the PRE problem is efficiently solvable for SDSs whose state values are from a field and whose node functions are linear. Some of the polynomial algorithms also extend to the case where we want to find an ancestor configuration that precedes a given configuration by a logarithmic number of steps. Our results extend some of the earlier results by Sutner [Su95] and Green [Gr87] on the complexity of

  11. Null-polygonal minimal surfaces in AdS{sub 4} from perturbed W minimal models

    Energy Technology Data Exchange (ETDEWEB)

    Hatsuda, Yasuyuki [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Ito, Katsushi [Tokyo Institute of Technology (Japan). Dept. of Physics; Satoh, Yuji [Tsukuba Univ., Sakura, Ibaraki (Japan). Inst. of Physics

    2012-11-15

    We study the null-polygonal minimal surfaces in AdS{sub 4}, which correspond to the gluon scattering amplitudes/Wilson loops in N=4 super Yang-Mills theory at strong coupling. The area of the minimal surfaces with n cusps is characterized by the thermodynamic Bethe ansatz (TBA) integral equations or the Y-system of the homogeneous sine-Gordon model, which is regarded as the SU(n-4){sub 4}/U(1){sup n-5} generalized parafermion theory perturbed by the weight-zero adjoint operators. Based on the relation to the TBA systems of the perturbed W minimal models, we solve the TBA equations by using the conformal perturbation theory, and obtain the analytic expansion of the remainder function around the UV/regular-polygonal limit for n = 6 and 7. We compare the rescaled remainder function for n=6 with the two-loop one, to observe that they are close to each other similarly to the AdS{sub 3} case.

  12. Random walk generated by random permutations of {1, 2, 3, ..., n + 1}

    International Nuclear Information System (INIS)

    Oshanin, G; Voituriez, R

    2004-01-01

    We study properties of a non-Markovian random walk X{sub l}{sup (n)}, l = 0, 1, 2, ..., n, evolving in discrete time l on a one-dimensional lattice of integers, whose moves to the right or to the left are prescribed by the rise-and-descent sequences characterizing random permutations π of [n + 1] = {1, 2, 3, ..., n + 1}. We determine exactly the probability of finding the end-point X{sub n} = X{sub n}{sup (n)} of the trajectory of such a permutation-generated random walk (PGRW) at site X, and show that in the limit n → ∞ it converges to a normal distribution with a smaller diffusion coefficient than that of the conventional Polya random walk. We formulate, as well, an auxiliary stochastic process whose distribution is identical to the distribution of the intermediate points X{sub l}{sup (n)}, l < n, which enables us to obtain the probability measure of different excursions and to define the asymptotic distribution of the number of 'turns' of the PGRW trajectories
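The walk described above is easy to simulate: draw a random permutation of {1, ..., n+1} and step right on each rise, left on each descent. A minimal sketch (not from the paper; the function name and trial count are illustrative):

```python
import random

def pgrw_endpoint(n, rng=random):
    """Endpoint X_n of the walk driven by the rise-and-descent
    sequence of a random permutation of {1, ..., n+1}."""
    perm = list(range(1, n + 2))
    rng.shuffle(perm)
    x = 0
    for l in range(n):
        # rise (perm[l] < perm[l+1]) -> step right; descent -> step left
        x += 1 if perm[l] < perm[l + 1] else -1
    return x

# Sample many endpoints; the spread should be narrower than the
# Polya walk's (whose endpoint variance equals n)
samples = [pgrw_endpoint(50) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With n = 50 steps the empirical variance comes out well below 50, consistent with the smaller diffusion coefficient reported above.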

  13. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.; Spoto, F.; Scollo, Giuseppe; Nijholt, Antinus

    2003-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\\geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\\{G_n\\}_{n\\geq 1}$, satisfying $L(G_n)=L_n$ for $n\\geq 1$, with

  14. Generating all permutations by context-free grammars in Chomsky normal form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2006-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\\geq1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\\{G_n\\}_{n\\geq1}$, satisfying $L(G_n)=L_n$ for $n\\geq1$, with

  15. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2004-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\\geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\\{G_n\\}_{n\\geq1}$, satisfying $L(G_n)=L_n$ for $n\\geq 1$, with
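As a concrete illustration of the three records above, here is one hand-built Chomsky-normal-form grammar generating $L_3$ (all 3! permutations of {a, b, c}), together with a brute-force derivation check. It is a sketch, not necessarily one of the grammar families studied in these papers.

```python
# CNF grammar for L_3: S -> A Sbc | B Sac | C Sab; Sbc -> B C | C B; etc.
# Binary rules are (Y, Z) tuples; terminal rules are single characters.
grammar = {
    "S":   [("A", "Sbc"), ("B", "Sac"), ("C", "Sab")],
    "Sbc": [("B", "C"), ("C", "B")],
    "Sac": [("A", "C"), ("C", "A")],
    "Sab": [("A", "B"), ("B", "A")],
    "A":   ["a"], "B": ["b"], "C": ["c"],
}

def derive(symbol):
    """All terminal strings derivable from a nonterminal."""
    out = set()
    for rule in grammar[symbol]:
        if isinstance(rule, str):      # terminal rule, e.g. A -> a
            out.add(rule)
        else:                          # binary rule X -> Y Z
            out |= {u + v for u in derive(rule[0]) for v in derive(rule[1])}
    return out

language = derive("S")  # should be the 3! = 6 permutations of "abc"
```

The grammar has one nonterminal per symbol plus one per "remaining pair", so its size grows quickly with n; the papers above study exactly how small such CNF grammars can be.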

  16. Successful attack on permutation-parity-machine-based neural cryptography.

    Science.gov (United States)

    Seoane, Luís F; Ruttor, Andreas

    2012-02-01

    An algorithm is presented which implements a probabilistic attack on the key-exchange protocol based on permutation parity machines. Instead of imitating the synchronization of the communicating partners, the strategy consists of a Monte Carlo method to sample the space of possible weights during inner rounds and an analytic approach to convey the extracted information from one outer round to the next one. The results show that the protocol under attack fails to synchronize faster than an eavesdropper using this algorithm.

  17. Refined composite multiscale weighted-permutation entropy of financial time series

    Science.gov (United States)

    Zhang, Yongping; Shang, Pengjian

    2018-04-01

    For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimated values show large fluctuations for slight variations of the data locations, and a significant distinction only for different lengths of time series. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing the RCMWPE results with those of other methods on both synthetic data and financial time series, the RCMWPE method shows not only the advantages inherited from MWPE but also lower sensitivity to the data locations, greater stability, and much less dependence on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on the daily price return series from Asian and European stock markets. There are significant differences between Asian markets and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method has been supported by simulations on generated and real data. It could be applied to a variety of fields to quantify the complexity of systems over multiple scales more accurately.
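For reference, the weighted-permutation-entropy building block that MWPE and RCMWPE share can be sketched as below, with each ordinal pattern weighted by its window's variance so that large-amplitude structures count more than in plain PE. This is a generic illustration (function name and defaults are ours), not the authors' refined composite implementation.

```python
import math
import random
from collections import defaultdict

def weighted_permutation_entropy(x, order=3, delay=1):
    """Normalized WPE: ordinal patterns weighted by window variance."""
    weights = defaultdict(float)
    for i in range(len(x) - (order - 1) * delay):
        w = [x[i + j * delay] for j in range(order)]
        pattern = tuple(sorted(range(order), key=w.__getitem__))
        mean = sum(w) / order
        weights[pattern] += sum((v - mean) ** 2 for v in w) / order
    total = sum(weights.values())
    if total == 0:  # constant series carries no amplitude information
        return 0.0
    h = -sum((v / total) * math.log(v / total) for v in weights.values())
    return h / math.log(math.factorial(order))

random.seed(0)
noise = [random.random() for _ in range(1000)]
wpe_noise = weighted_permutation_entropy(noise)  # close to 1 for iid noise
```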

  18. Two-Higgs-doublet models with Minimal Flavour Violation

    International Nuclear Information System (INIS)

    Carlucci, Maria Valentina

    2010-01-01

    The tree-level flavour-changing neutral currents in the two-Higgs-doublet models can be suppressed by protecting the breaking of either flavour or flavour-blind symmetries, but only the first choice, implemented by the application of the Minimal Flavour Violation hypothesis, is stable under quantum corrections. Moreover, a two-Higgs-doublet model with Minimal Flavour Violation enriched with flavour-blind phases can explain the anomalies recently found in the ΔF = 2 transitions, namely the large CP-violating phase in B s mixing and the tension between ε K and S ψKS .

  19. Multivariate Multi-Scale Permutation Entropy for Complexity Analysis of Alzheimer’s Disease EEG

    Directory of Open Access Journals (Sweden)

    Isabella Palamara

    2012-07-01

    Full Text Available An original multivariate multi-scale methodology for assessing the complexity of physiological signals is proposed. The technique is able to incorporate the simultaneous analysis of multi-channel data as a unique block within a multi-scale framework. The basic complexity measure uses Permutation Entropy, a methodology for time series processing based on ordinal analysis. Permutation Entropy is conceptually simple, structurally robust to noise and artifacts, and computationally very fast, which is relevant for designing portable diagnostics. Since time series derived from biological systems show structure on multiple spatial-temporal scales, the proposed technique can be useful for other types of biomedical signal analysis. In this work, the possibility of distinguishing the brain states of Alzheimer's disease patients and Mild Cognitive Impairment subjects from those of normal healthy elderly subjects is checked on a real, although quite limited, experimental database.

  20. Analysis of crude oil markets with improved multiscale weighted permutation entropy

    Science.gov (United States)

    Niu, Hongli; Wang, Jun; Liu, Cheng

    2018-03-01

    Entropy measures have recently been used extensively to study the complexity properties of nonlinear systems. Weighted permutation entropy (WPE) can overcome the neglect of amplitude information in time series compared with PE and shows a distinctive ability to extract complexity information from data having abrupt changes in magnitude. The improved (sometimes called composite) multi-scale (MS) method has the advantage of reducing errors and improving accuracy when applied to evaluate multiscale entropy values of time series that are not long enough. In this paper, we combine the merits of WPE and the improved MS method to propose the improved multiscale weighted permutation entropy (IMWPE) method for the complexity investigation of a time series. It is then validated on artificial data, white noise and 1/f noise, and on real market data of Brent and Daqing crude oil. Meanwhile, the complexity properties of crude oil markets are explored for the return series, the volatility series with multiple exponents, and the EEMD-produced intrinsic mode functions (IMFs), which represent different frequency components of the return series. Moreover, the instantaneous amplitude and frequency of Brent and Daqing crude oil are analyzed by applying the Hilbert transform to each IMF.

  1. Brain computation is organized via power-of-two-based permutation logic

    Directory of Open Access Journals (Sweden)

    Kun Xie

    2016-11-01

    Full Text Available There is considerable scientific interest in understanding how cell assemblies - the long-presumed computational motif - are organized so that the brain can generate cognitive behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i - 1), giving rise to the specific-to-general cell-assembly organization capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based computational logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social cognitions. However, modulatory neurons, such as dopaminergic neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact even though the NMDA receptors - the synaptic switch for learning and memory - were deleted throughout adulthood, suggesting that it is likely developmentally pre-configured. Moreover, this logic is implemented in the cortex vertically via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques - which preferentially encode specific and low-combinatorial features and project inter-cortically - is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination, and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the non-randomness in layers 5/6 - which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems - is ideal for robust feedback-control of motivation, emotion, consciousness, and behaviors. These observations suggest that the brain's basic
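The power-of-two logic is combinatorially simple: i distinct inputs give N = 2^i - 1 non-empty input combinations, each a candidate cell assembly. A tiny illustrative sketch (the input labels are hypothetical):

```python
from itertools import combinations

def assemblies(inputs):
    """All non-empty combinations of inputs; their count is
    N = 2**i - 1, the power-of-two permutation logic above."""
    combos = []
    for k in range(1, len(inputs) + 1):
        combos.extend(combinations(inputs, k))
    return combos

# Hypothetical modalities: 3 inputs -> 7 cliques, from specific
# (single-input) to general (all-input) assemblies
cliques = assemblies(["odor", "taste", "touch"])
```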

  2. Toda theories, W-algebras, and minimal models

    International Nuclear Information System (INIS)

    Mansfield, P.; Spence, B.

    1991-01-01

    We discuss the classical W-algebra symmetries of Toda field theories in terms of the pseudo-differential Lax operator associated with the Toda Lax pair. We then show how the W-algebra transformations can be understood as the non-abelian gauge transformations which preserve the form of the Lax pair. This provides a new understanding of the W-algebras, and we discuss their closure and co-cycle structure using this approach. The quantum Lax operator is investigated, and we show that this operator, which generates the quantum W-algebra currents, is conserved in the conformally extended Toda theories. The W-algebra minimal model primary fields are shown to arise naturally in these theories, leading to the conjecture that the conformally extended Toda theories provide a lagrangian formulation of the W-algebra minimal models. (orig.)

  3. Index of French nuclear literature: IBM 360 programmes for preparing the permuted index of French titles

    International Nuclear Information System (INIS)

    Chonez, Nicole

    1968-12-01

    This report contains the assembly list, the flow chart and some comments about each of the IBM 360 assembler language programmes used for preparing one of the subject indexes contained in the bibliographical bulletin entitled: 'Index de la Litterature nucleaire francaise'; this bulletin has been produced by the French C.E.A. since 1968. Only the processing phases from the magnetic tape file of the bibliographical references, assumed correct, to the printing out of the permuted index obtained with the French titles of the documents on the tape are considered here. This permuted index has the peculiarity of automatically regrouping synonyms and certain grammatical variations of the words. (author) [fr

  4. Model Arrhenius untuk Pendugaan Laju Respirasi Brokoli Terolah Minimal [Arrhenius Model for Estimating the Respiration Rate of Minimally Processed Broccoli]

    Directory of Open Access Journals (Sweden)

    Nurul Imamah

    2016-04-01

    Full Text Available Minimally processed broccoli is a perishable product because metabolic processes, including respiration, continue during the storage period. The respiration rate varies with the commodity and the storage temperature. The purposes of this research are: to examine the respiration pattern of minimally processed broccoli during the storage period, to study the effect of storage temperature on the respiration rate, and to examine the correlation between respiration rate and temperature based on the Arrhenius model. Broccoli from the farming organization "Agro Segar" was minimally processed and its respiration rate was measured. A closed-system method was used to measure the O2 and CO2 concentrations. The minimally processed broccoli was stored at temperatures of 0°C, 5°C, 10°C and 15°C. A completely randomized experimental design was used to analyze the respiration rate. The results show that broccoli is a climacteric vegetable, indicated by the increase in O2 consumption and CO2 production during the senescence phase. The respiration rate increases with storage temperature. The Arrhenius model describes the correlation between respiration rate and temperature with R2 = 0.947-0.953. The activation energy (Eai) and pre-exponential factor (Roi) constants of the Arrhenius model can be used to predict the respiration rate of minimally processed broccoli at any storage temperature.
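The prediction step described above follows the Arrhenius equation R = R0 * exp(-Ea / (Rgas * T)). A minimal sketch, where the R0 and Ea values are hypothetical placeholders, not the paper's fitted Roi and Eai:

```python
import math

def arrhenius_rate(T_celsius, R0, Ea):
    """Respiration rate from the Arrhenius model
    R = R0 * exp(-Ea / (R_gas * T)), with T in kelvin."""
    R_gas = 8.314  # universal gas constant, J mol^-1 K^-1
    T = T_celsius + 273.15
    return R0 * math.exp(-Ea / (R_gas * T))

# Hypothetical parameters: with Ea = 45 kJ/mol the predicted rate
# roughly doubles between 0 C and 10 C storage
r0 = arrhenius_rate(0, R0=1e9, Ea=45_000)
r10 = arrhenius_rate(10, R0=1e9, Ea=45_000)
```

In practice R0 and Ea are obtained by fitting ln(R) against 1/T from the measured rates at each storage temperature.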

  5. Modular invariance of N=2 minimal models

    International Nuclear Information System (INIS)

    Sidenius, J.

    1991-01-01

    We prove modular covariance of one-point functions at one loop in the diagonal N=2 minimal superconformal models. We use the recently derived general formalism for computing arbitrary conformal blocks in these models. Our result should be sufficient to guarantee modular covariance at arbitrary genus. It is thus an important check on the general formalism which is not manifestly modular covariant. (orig.)

  6. A random regret minimization model of travel choice

    NARCIS (Netherlands)

    Chorus, C.G.; Arentze, T.A.; Timmermans, H.J.P.

    2008-01-01

    Abstract This paper presents an alternative to Random Utility-Maximization models of travel choice. Our Random Regret-Minimization model is rooted in Regret Theory and provides several useful features for travel demand analysis. Firstly, it allows for the possibility that choices between travel
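A minimal sketch of how such a regret-minimizing choice can be computed, assuming the common logarithmic RRM regret form R_i = sum over competitors j and attributes m of ln(1 + exp(beta_m * (x_jm - x_im))); the attribute values and betas below are made up for illustration, not the paper's specification:

```python
import math

def regret(i, alternatives, betas):
    """Anticipated regret of alternative i: pairwise attribute
    comparisons against every competing alternative."""
    r = 0.0
    for j, x_j in enumerate(alternatives):
        if j == i:
            continue
        for m, beta in enumerate(betas):
            r += math.log(1.0 + math.exp(beta * (x_j[m] - alternatives[i][m])))
    return r

# Hypothetical attributes: (negative travel time, comfort); higher is better
alts = [(-30.0, 2.0), (-45.0, 5.0), (-60.0, 1.0)]
betas = (0.1, 0.5)
choice = min(range(len(alts)), key=lambda i: regret(i, alts, betas))
```

The decision rule is the mirror image of utility maximization: the chosen alternative is the one whose regret relative to the others is smallest.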

  7. Permutation entropy with vector embedding delays

    Science.gov (United States)

    Little, Douglas J.; Kane, Deb M.

    2017-12-01

    Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D -1 ) -dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked, when the embedding delay is constrained to scalar form.
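The vector-delay scheme can be sketched by sampling window points at cumulative offsets (tau_1, tau_2, ...) instead of at multiples of a single scalar delay, then building a PE map over a delay grid. A generic illustration (not the authors' code), using a period-8 integer sawtooth so that matching delays reveal the structure:

```python
import math
from collections import Counter

def pe_vector_delay(x, taus):
    """Normalized PE with a vector of embedding delays: the D window
    points sit at cumulative offsets 0, tau_1, tau_1+tau_2, ..."""
    offsets = [0]
    for t in taus:
        offsets.append(offsets[-1] + t)
    order = len(offsets)
    counts = Counter()
    for i in range(len(x) - offsets[-1]):
        w = tuple(x[i + o] for o in offsets)
        counts[tuple(sorted(range(order), key=w.__getitem__))] += 1
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))

# PE map over a grid of delay pairs; minima mark characteristic timescales
series = [i % 8 for i in range(400)]  # period-8 sawtooth
pe_map = {(t1, t2): pe_vector_delay(series, (t1, t2))
          for t1 in range(1, 9) for t2 in range(1, 9)}
```

At delays matching the period, (8, 8), every window repeats the same values and the PE drops to zero, exactly the kind of structure the PE maps above are designed to expose.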

  8. Multiscale permutation entropy analysis of electrocardiogram

    Science.gov (United States)

    Liu, Tiebing; Yao, Wenpo; Wu, Min; Shi, Zhaorong; Wang, Jun; Ning, Xinbao

    2017-04-01

    Multiscale permutation entropy (MPE) was applied to ECG feature extraction to provide a comprehensive nonlinear analysis of the ECG. Three kinds of ECG signals from the PhysioNet database are used in this paper: congestive heart failure (CHF) patients, and healthy young and elderly subjects. We set the embedding dimension to 4, adjust the scale factor from 2 to 100 with a step size of 2, and compare MPE with multiscale entropy (MSE). As the scale factor increases, the MPE complexity of the three ECG signals first decreases and then increases. When the scale factor is between 10 and 32, the complexities of the three ECG signals differ most: the entropy of the elderly subjects is on average 0.146 less than that of the CHF patients and 0.025 larger than that of the healthy young, in line with normal physiological characteristics. The test results show that MPE can be applied effectively in nonlinear ECG analysis and can effectively distinguish different ECG signals.
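The multiscale step is just coarse-graining (non-overlapping window averages) followed by PE at each scale. A self-contained sketch with illustrative parameters, not the paper's implementation:

```python
import math
import random
from collections import Counter

def coarse_grain(x, scale):
    """Non-overlapping window averages: step 1 of any multiscale entropy."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

def perm_entropy(x, order=4):
    """Normalized permutation entropy at unit delay."""
    counts = Counter(tuple(sorted(range(order), key=lambda j: x[i + j]))
                     for i in range(len(x) - order + 1))
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))

def mpe(x, scales):
    """Multiscale permutation entropy curve over the given scale factors."""
    return [perm_entropy(coarse_grain(x, s)) for s in scales]

random.seed(1)
x = [random.gauss(0, 1) for _ in range(2000)]
curve = mpe(x, range(1, 11))  # one normalized entropy value per scale
```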

  9. MCPerm: a Monte Carlo permutation method for accurately correcting the multiple testing in a meta-analysis of genetic association studies.

    Directory of Open Access Journals (Sweden)

    Yongshuai Jiang

    Full Text Available Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for the meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci, as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) the calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html.
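The core MCPerm idea, randomly splitting pooled genotype counts between cases and controls, can be sketched without individual-level data. Drawing n_cases labels from the pooled counts without replacement is equivalent to the nested hypergeometric draws described above (function name and counts are illustrative):

```python
import random

def sample_case_counts(genotype_totals, n_cases, rng=random):
    """Randomly split pooled genotype counts into a case group,
    equivalent to two nested hypergeometric draws; uses summary
    counts only, never individual-level genotypes."""
    pool = [g for g, n in enumerate(genotype_totals) for _ in range(n)]
    cases = rng.sample(pool, n_cases)
    return [cases.count(g) for g in range(len(genotype_totals))]

# Pooled totals for genotypes AA, Aa, aa over cases + controls combined
counts = sample_case_counts([50, 30, 20], n_cases=40)
```

Repeating this draw N times and recomputing the meta-analysis statistic each time yields the Monte Carlo permutation P-value.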

  10. Comparative analysis of automotive paints by laser induced breakdown spectroscopy and nonparametric permutation tests

    International Nuclear Information System (INIS)

    McIntee, Erin; Viglino, Emilie; Rinke, Caitlin; Kumor, Stephanie; Ni Liqiang; Sigman, Michael E.

    2010-01-01

    Laser-induced breakdown spectroscopy (LIBS) has been investigated for the discrimination of automobile paint samples. Paint samples from automobiles of different makes, models, and years were collected and separated into sets based on the color, the presence or absence of effect pigments and the number of paint layers. Twelve LIBS spectra were obtained for each paint sample, each an average of five single-shot 'drill down' spectra from consecutive laser ablations in the same spot on the sample. Analyses by a nonparametric permutation test and a parametric Wald test were performed to determine the extent of discrimination within each set of paint samples. The discrimination power and Type I error were assessed for each data analysis method. Conversion of the spectral intensity to a log-scale (base 10) resulted in a higher overall discrimination power while observing the same significance level. Working on the log-scale, the nonparametric permutation tests gave an overall 89.83% discrimination power with a Type I error rate of 4.44% at the nominal significance level of 5%. White paint samples, as a group, were the most difficult to differentiate, with a power of only 86.56%, followed by 95.83% for black paint samples. Parametric analysis of the data set produced lower discrimination (85.17%) with 3.33% Type I errors and is not recommended on both theoretical and practical grounds. The nonparametric testing method is applicable across many analytical comparisons, with the specific application described here being the pairwise comparison of automotive paint samples.
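A generic two-sample permutation test on the difference of means, of the kind applied above to pairwise spectral comparisons (though the paper's statistic and data are its own), can be sketched as:

```python
import random

def perm_test_pvalue(a, b, n_perm=5000, rng=random):
    """Nonparametric two-sample permutation test: shuffle the pooled
    data and count how often the permuted |mean difference| meets or
    exceeds the observed one."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Illustrative data: similar groups vs clearly shifted groups
p_same = perm_test_pvalue([1.0, 1.2, 0.9, 1.1] * 5, [1.05, 0.95, 1.15, 0.85] * 5)
p_diff = perm_test_pvalue([1.0, 1.2, 0.9, 1.1] * 5, [2.0, 2.2, 1.9, 2.1] * 5)
```

Because it makes no distributional assumption, the permutation test avoids the theoretical objections raised against the parametric Wald analysis in the record above.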

  11. Minimal quantization of two-dimensional models with chiral anomalies

    International Nuclear Information System (INIS)

    Ilieva, N.

    1987-01-01

    Two-dimensional gauge models with chiral anomalies - ''left-handed'' QED and the chiral Schwinger model, are quantized consistently in the frames of the minimal quantization method. The choice of the cone time as a physical time for system of quantization is motivated. The well-known mass spectrum is found but with a fixed value of the regularization parameter a=2. Such a unique solution is obtained due to the strong requirement of consistency of the minimal quantization that reflects in the physically motivated choice of the time axis

  12. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    Science.gov (United States)

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves itself by the standard PSO and then updates each subpopulation by using different local search schemes such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA) and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.

  13. A Multipopulation PSO Based Memetic Algorithm for Permutation Flow Shop Scheduling

    Directory of Open Access Journals (Sweden)

    Ruochen Liu

    2013-01-01

    Full Text Available The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves itself by the standard PSO and then updates each subpopulation by using different local search schemes such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA) and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.

  14. Generalized composite multiscale permutation entropy and Laplacian score based rolling bearing fault diagnosis

    Science.gov (United States)

    Zheng, Jinde; Pan, Haiyang; Yang, Shubao; Cheng, Junsheng

    2018-01-01

    Multiscale permutation entropy (MPE) is a recently proposed nonlinear dynamic method for measuring the randomness and detecting nonlinear dynamic changes of time series, and it can be used effectively to extract nonlinear dynamic fault features from vibration signals of rolling bearings. To overcome the drawback of the coarse-graining process in MPE, an improved MPE method called generalized composite multiscale permutation entropy (GCMPE) is proposed in this paper. The influence of parameters on GCMPE and its comparison with MPE are also studied by analyzing simulation data. GCMPE is applied to fault feature extraction from the vibration signal of a rolling bearing, and then, based on GCMPE, the Laplacian score for feature selection, and a particle swarm optimization based support vector machine, a new fault diagnosis method for rolling bearings is put forward. Finally, the proposed method is applied to experimental data of rolling bearings. The analysis results show that the proposed method can effectively realize fault diagnosis of rolling bearings and has a higher fault recognition rate than existing methods.
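As a rough illustration of the quantities involved, the following fragment computes Bandt-Pompe permutation entropy and the mean coarse-graining step that multiscale variants build on; the exact GCMPE averaging over all shifted coarse-grained series is not reproduced here, and the data are synthetic:

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=3):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series."""
    patterns = Counter(tuple(np.argsort(x[i:i + order]))
                       for i in range(len(x) - order + 1))
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))  # scaled to [0, 1]

def coarse_grain(x, scale, offset=0):
    """Non-overlapping mean coarse-graining; composite multiscale variants
    average entropies over the offset-shifted versions of this series."""
    n = (len(x) - offset) // scale
    return np.array([np.mean(x[offset + i * scale: offset + (i + 1) * scale])
                     for i in range(n)])

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
print(permutation_entropy(noise))                   # near 1 for white noise
print(permutation_entropy(coarse_grain(noise, 5)))  # stays high for white noise
```

A monotone series, by contrast, yields a single ordinal pattern and entropy exactly 0.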

  15. Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.

    Science.gov (United States)

    Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang

    2015-01-01

    Most epileptic EEG classification algorithms are supervised and require large training datasets, which hinders their use in real-time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm in efficiency. In addition, three classifiers, K-means, MSK-means and support vector machine (SVM), are used to identify seizures and localize the epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizures with the MSK-means algorithm and delay permutation entropy achieves 4.7% higher accuracy than K-means, and 0.7% higher accuracy than the SVM.

  16. Remark on Hopf images in quantum permutation groups $S_n^+$

    OpenAIRE

    Józiak, Paweł

    2016-01-01

    Motivated by a question of A. Skalski and P. M. Sołtan about inner faithfulness of S. Curran's map, we revisit the results and techniques of T. Banica and J. Bichon's Crelle paper and study some group-theoretic properties of the quantum permutation group on 4 points. This enables us not only to answer the aforementioned question in the positive in the case n=4, k=2, but also to classify the automorphisms of S_4^+, describe all the embeddings O_{-1}(2) ⊂ S_4^+ and show that all the ...

  17. Random defect lines in conformal minimal models

    International Nuclear Information System (INIS)

    Jeng, M.; Ludwig, A.W.W.

    2001-01-01

    We analyze the effect of adding quenched disorder along a defect line in the 2D conformal minimal models using replicas. The disorder is realized by a random applied magnetic field in the Ising model, by fluctuations in the ferromagnetic bond coupling in the tricritical Ising model and tricritical three-state Potts model (the φ₁₂ operator), etc. We find that for the Ising model, the defect renormalizes to two decoupled half-planes without disorder, but that for all other models, the defect renormalizes to a disorder-dominated fixed point. Its critical properties are studied with an expansion in ε ∝ 1/m for the m-th Virasoro minimal model. The decay exponents X_N = (N/2)[1 − 9(3N−4)/(4(m+1)²)] + O((3/(m+1))³) of the N-th moment of the two-point function of φ₁₂ along the defect are obtained to 2-loop order, exhibiting multifractal behavior. This leads to a typical decay exponent X_typ = (1/2)[1 + 9/(m+1)²] + O((3/(m+1))³). One-point functions are seen to have a non-self-averaging amplitude. The boundary entropy is larger than that of the pure system by order 1/m³. As a byproduct of our calculations, we also obtain to 2-loop order the exponent X̃_N = N[1 − (2/(9π²))(3N−4)(q−2)²] + O((q−2)³) of the N-th moment of the energy operator in the q-state Potts model with bulk bond disorder

  18. Minimal models for axion and neutrino

    Directory of Open Access Journals (Sweden)

    Y.H. Ahn

    2016-01-01

    Full Text Available The PQ mechanism resolving the strong CP problem and the seesaw mechanism explaining the smallness of neutrino masses may be related in such a way that the PQ symmetry breaking scale and the seesaw scale arise from a common origin. Depending on how the PQ symmetry and the seesaw mechanism are realized, one has different predictions for the color and electromagnetic anomalies, which could be tested in future axion dark matter search experiments. Motivated by this, we construct various PQ seesaw models which are minimally extended from the (non-)supersymmetric Standard Model and thus set up different benchmark points for the axion–photon–photon coupling in comparison with the standard KSVZ and DFSZ models.

  19. Periodical cicadas: A minimal automaton model

    Science.gov (United States)

    de O. Cardozo, Giovano; de A. M. M. Silvestre, Daniel; Colato, Alexandre

    2007-08-01

    The Magicicada spp. life cycles, with their prime periods and highly synchronized emergence, have defied reasonable scientific explanation since their discovery. During the last decade several models and explanations for this phenomenon have appeared in the literature, along with a great deal of discussion. Despite this considerable effort, there is no final conclusion about this long-standing biological problem. Here, we construct a minimal automaton model without predation/parasitism which reproduces some of these aspects. Our results point towards competition between different strains with a limited dispersal threshold as the main factor leading to the emergence of prime-numbered life cycles.

  20. Likelihood analysis of the next-to-minimal supergravity motivated model

    International Nuclear Information System (INIS)

    Balazs, Csaba; Carter, Daniel

    2009-01-01

    In anticipation of data from the Large Hadron Collider (LHC) and the potential discovery of supersymmetry, we calculate the odds of the next-to-minimal version of the popular supergravity motivated model (NmSuGra) being discovered at the LHC to be 4:3 (57%). We also demonstrate that viable regions of the NmSuGra parameter space outside the LHC reach can be covered by upgraded versions of dark matter direct detection experiments, such as super-CDMS, at 99% confidence level. Due to the similarities of the models, we expect very similar results for the constrained minimal supersymmetric standard model (CMSSM).

  1. Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.

    Science.gov (United States)

    Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo

    2017-12-01

    The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgments of DMs in the typical analytic hierarchy process would be consistent. However, since uncertainty in articulating the opinions of DMs is unavoidable, interval number judgments are associated with limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that interval number judgments are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives, and a vector of interval weights is determined. A new algorithm for solving decision-making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.

  2. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    Full Text Available The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and the result of feature selection. In this paper, a novel random forest (RF) based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
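The permutation importance step can be sketched with scikit-learn's `permutation_importance` utility; the toy regression below stands in for the paper's 243 load features and is purely illustrative:

```python
# Sketch of permutation importance (PI) for feature ranking: permute one
# feature at a time on held-out data and measure the drop in model score.
# Dataset and coefficients are synthetic, not the paper's load features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.standard_normal((500, 4))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500)  # features 2, 3 irrelevant

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

pi = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(pi.importances_mean)[::-1]
print(ranking)  # the dominant feature 0 should rank first
```

A backward search as in the paper would then drop the lowest-PI features and retrain, keeping the subset with the smallest validation error.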

  3. Design of a magnetic-tunnel-junction-oriented nonvolatile lookup table circuit with write-operation-minimized data shifting

    Science.gov (United States)

    Suzuki, Daisuke; Hanyu, Takahiro

    2018-04-01

    A magnetic-tunnel-junction (MTJ) oriented nonvolatile lookup table (LUT) circuit is proposed, in which a low-power data-shift function is realized by minimizing the number of write operations on the MTJ devices. Instead of conventional direct data shifting, the configuration memory cells are permuted for read/write access, which minimizes the number of write operations and results in significant write-energy savings in the data-shift function. Moreover, the hardware cost of the proposed LUT circuit is small, since the selector is shared between read access and write access. In fact, the power consumption of the data-shift function and the transistor count are reduced by 82% and 52%, respectively, compared with those of a conventional static random-access memory based implementation using a 90 nm CMOS technology.

  4. Simultaneous and Sequential MS/MS Scan Combinations and Permutations in a Linear Quadrupole Ion Trap.

    Science.gov (United States)

    Snyder, Dalton T; Szalwinski, Lucas J; Cooks, R Graham

    2017-10-17

    Methods of performing precursor ion scans as well as neutral loss scans in a single linear quadrupole ion trap have recently been described. In this paper we report methodology for performing permutations of MS/MS scan modes, that is, ordered combinations of precursor, product, and neutral loss scans following a single ion injection event. Only particular permutations are allowed; the sequences demonstrated here are (1) multiple precursor ion scans, (2) precursor ion scans followed by a single neutral loss scan, (3) precursor ion scans followed by product ion scans, and (4) segmented neutral loss scans; in addition, (5) the common product ion scan can be performed earlier in these sequences under certain conditions. Simultaneous scans can also be performed. These include multiple precursor ion scans, precursor ion scans with an accompanying neutral loss scan, and multiple neutral loss scans. We argue that the new capability to perform complex simultaneous and sequential MS^n operations on single ion populations represents a significant step in increasing the selectivity of mass spectrometry.

  5. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
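A tiny brute-force illustration of the setting (not the authors' heuristic): for a 4 × 4 proximity matrix we can enumerate all 24 simultaneous row/column permutations, score each under two hypothetical objectives, and keep the Pareto-efficient set directly.

```python
# Toy sketch of multiobjective matrix permutation: exhaustive Pareto
# enumeration on a small symmetric proximity matrix. Both objectives are
# illustrative stand-ins, not the criteria used in the paper.
from itertools import permutations

D = [[0, 3, 1, 2],
     [3, 0, 4, 1],
     [1, 4, 0, 5],
     [2, 1, 5, 0]]

def reorder(M, p):
    """Apply permutation p simultaneously to rows and columns."""
    return [[M[i][j] for j in p] for i in p]

def bandwidth_cost(M):
    # objective 1: total proximity weighted by distance from the diagonal
    n = len(M)
    return sum(M[i][j] * abs(i - j) for i in range(n) for j in range(n))

def gradient_violations(M):
    # objective 2: below-diagonal entries that decrease while moving
    # toward the diagonal (a Robinson-style ordering criterion)
    n = len(M)
    return sum(1 for i in range(n) for j in range(1, i) if M[i][j] < M[i][j - 1])

objs = {p: (bandwidth_cost(reorder(D, p)), gradient_violations(reorder(D, p)))
        for p in permutations(range(4))}

def dominated(f, others):
    return any(g[0] <= f[0] and g[1] <= f[1] and g != f for g in others)

pareto = sorted(p for p, f in objs.items() if not dominated(f, objs.values()))
print(len(pareto), "Pareto-efficient permutations")
```

For realistic n this enumeration is infeasible, which is why the paper resorts to a heuristic; a weighted-sum scalarization would recover only the supported subset of this Pareto set.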

  6. Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    DEFF Research Database (Denmark)

    Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm

    2014-01-01

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate...... saturations when this is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with the 1-norm minimization. The first method is based on constraining the roots of the predictor to lie within the unit circle by reducing the numerical range...... based linear prediction for modeling and coding of speech....

  7. A Colour Image Encryption Scheme Using Permutation-Substitution Based on Chaos

    Directory of Open Access Journals (Sweden)

    Xing-Yuan Wang

    2015-06-01

    Full Text Available An encryption scheme for colour images using a spatiotemporal chaotic system is proposed. Initially, we use the R, G and B components of a colour plain-image to form a matrix. Then the matrix is permutated by using zigzag path scrambling. The resultant matrix is then passed through a substitution process. Finally, the ciphered colour image is obtained from the confused matrix. Theoretical analysis and experimental results indicate that the proposed scheme is both secure and practical, which makes it suitable for encrypting colour images of any size.

  8. A Studentized Permutation Test for the Comparison of Spatial Point Patterns

    DEFF Research Database (Denmark)

    Hahn, Ute

    of empirical K-functions are compared by a permutation test using a studentized test statistic. The proposed test performs convincingly in terms of empirical level and power in a simulation study, even for point patterns where the K-function estimates on neighboring subsamples are not strictly exchangeable....... It also shows improved behavior compared to a test suggested by Diggle et al. (1991, 2000) for the comparison of groups of independently replicated point patterns. In an application to two point patterns from pathology that represent capillary positions in sections of healthy and tumorous tissue, our...
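The general recipe, reduced to a plain two-sample comparison of means rather than of K-functions, looks as follows; studentizing the statistic makes the test less sensitive to unequal variances. All data here are synthetic.

```python
# Generic sketch of a studentized permutation test for a two-sample problem.
# The paper's test compares empirical K-functions; here we compare means.
import numpy as np

def studentized_stat(a, b):
    # Welch-type statistic: mean difference scaled by its estimated std error
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) +
                                           b.var(ddof=1) / len(b))

def permutation_test(a, b, n_perm=2000, seed=1):
    rng = np.random.default_rng(seed)
    obs = abs(studentized_stat(a, b))
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled sample
        if abs(studentized_stat(pooled[:len(a)], pooled[len(a):])) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)  # permutation p-value

rng = np.random.default_rng(0)
p = permutation_test(rng.normal(0, 1, 40), rng.normal(1.5, 2, 40))
print(p)  # expected to be small: the groups differ in mean and variance
```

Note that, as in the paper's setting, exchangeability holds only approximately when the two groups have unequal variances, which is precisely where studentization helps.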

  9. PerMallows: An R Package for Mallows and Generalized Mallows Models

    Directory of Open Access Journals (Sweden)

    Ekhine Irurozki

    2016-08-01

    Full Text Available In this paper we present the R package PerMallows, a complete toolbox for working with permutations, distances and some of the most popular probability models for permutations: the Mallows and Generalized Mallows models. The Mallows model is an exponential location model, considered analogous to the Gaussian distribution. It is based on the definition of a distance between permutations. The Generalized Mallows model is its best-known extension. The package includes functions for inference, sampling and learning of such distributions. The distances considered in PerMallows are Kendall's τ, Cayley, Hamming and Ulam.
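To make the model concrete (in Python rather than R, and by exhaustive enumeration over a toy 3-element case rather than the package's samplers), the Mallows distribution assigns probability proportional to exp(−θ·d(σ, σ₀)), with d here taken as Kendall's τ distance:

```python
# Illustrative Mallows model on S_3: P(sigma) ∝ exp(-theta * d(sigma, sigma0)),
# where d is Kendall's tau distance to the central permutation sigma0.
import math
from itertools import combinations, permutations

def kendall_tau(p, q):
    """Number of discordant pairs between permutations p and q."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]  # p expressed in q's coordinates
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])

def mallows_pmf(center, theta):
    sigmas = list(permutations(range(len(center))))
    w = [math.exp(-theta * kendall_tau(list(s), center)) for s in sigmas]
    z = sum(w)  # normalizing constant
    return {s: wi / z for s, wi in zip(sigmas, w)}

pmf = mallows_pmf([0, 1, 2], theta=1.0)
best = max(pmf, key=pmf.get)
print(best)  # → (0, 1, 2): the central permutation is the mode
```

Larger θ concentrates the mass on the central permutation, mirroring the role of the variance in a Gaussian.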

  10. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well-defined, economical......, between unparticle physics and Minimal Walking Technicolor. We also consider other N=1 extensions of the Minimal Walking Technicolor model. The new models allow all the Standard Model matter fields to acquire a mass.

  11. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  13. The electroweak phase transition in minimal supergravity models

    CERN Document Server

    Nanopoulos, Dimitri V

    1994-01-01

    We have explored the electroweak phase transition in minimal supergravity models by extending previous analysis of the one-loop Higgs potential to include finite temperature effects. Minimal supergravity is characterized by two Higgs doublets at the electroweak scale, gauge coupling unification, and universal soft-SUSY breaking at the unification scale. We have searched for the allowed parameter space that avoids washout of baryon number via unsuppressed anomalous electroweak sphaleron processes after the phase transition. This requirement imposes strong constraints on the Higgs sector. With respect to weak-scale baryogenesis, we find that the generic MSSM is not phenomenologically acceptable, and show that the additional experimental and consistency constraints of minimal supergravity restrict the mass of the lightest CP-even Higgs even further to m_h ≲ 32 GeV (at one loop), also in conflict with experiment. Thus, if supergravity is to allow for baryogenesis via any other mechanism above the weak...

  14. Fermion systems in discrete space-time

    International Nuclear Information System (INIS)

    Finster, Felix

    2007-01-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure

  15. Fermion systems in discrete space-time

    Energy Technology Data Exchange (ETDEWEB)

    Finster, Felix [NWF I - Mathematik, Universitaet Regensburg, 93040 Regensburg (Germany)

    2007-05-15

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  16. Fermion Systems in Discrete Space-Time

    OpenAIRE

    Finster, Felix

    2006-01-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  17. Fermion systems in discrete space-time

    Science.gov (United States)

    Finster, Felix

    2007-05-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  18. Tag-KEM from Set Partial Domain One-Way Permutations

    Science.gov (United States)

    Abe, Masayuki; Cui, Yang; Imai, Hideki; Kurosawa, Kaoru

    Recently a framework called Tag-KEM/DEM was introduced to construct efficient hybrid encryption schemes. Although it is known that the generic encode-then-encrypt construction of chosen-ciphertext-secure public-key encryption also applies to secure Tag-KEM construction, and some known encoding methods like OAEP can be used for this purpose, it is worth pursuing a more efficient encoding method dedicated to Tag-KEM construction. This paper proposes an encoding method that yields efficient Tag-KEM schemes when combined with set partial domain one-way permutations such as RSA and Rabin's encryption scheme. To our knowledge, this leads to the most practical hybrid encryption scheme of this type. We also present an efficient Tag-KEM which is CCA-secure under the general factoring assumption rather than the Blum factoring assumption.

  19. A simplified formalism of the algebra of partially transposed permutation operators with applications

    Science.gov (United States)

    Mozrzymas, Marek; Studziński, Michał; Horodecki, Michał

    2018-03-01

    Herein we continue the study of the representation theory of the algebra of permutation operators acting on the n -fold tensor product space, partially transposed on the last subsystem. We develop the concept of partially reduced irreducible representations, which allows us to significantly simplify previously proved theorems and, most importantly, derive new results for irreducible representations of the mentioned algebra. In our analysis we are able to reduce the complexity of the central expressions by getting rid of sums over all permutations from the symmetric group, obtaining equations which are much more handy in practical applications. We also find relatively simple matrix representations for the generators of the underlying algebra. The obtained simplifications and developments are applied to derive the characteristics of a deterministic port-based teleportation scheme written purely in terms of irreducible representations of the studied algebra. We solve an eigenproblem for the generators of the algebra, which is the first step towards a hybrid port-based teleportation scheme and gives us new proofs of the asymptotic behaviour of teleportation fidelity. We also show a connection between the density operator characterising port-based teleportation and a particular matrix composed of an irreducible representation of the symmetric group, which encodes properties of the investigated algebra.

  20. Charged and neutral minimal supersymmetric standard model Higgs ...

    Indian Academy of Sciences (India)

    pp. 759–763. Charged and neutral minimal supersymmetric standard model Higgs boson decays and measurement of tan β at the compact linear collider. E. Coniavitis and A. Ferrari, Department of Nuclear and Particle Physics, Uppsala University, 75121 Uppsala, Sweden. E-mail: ferrari@tsl.uu.se. Abstract.

  1. Implementation and automated validation of the minimal Z' model in FeynRules

    International Nuclear Information System (INIS)

    Basso, L.; Christensen, N.D.; Duhr, C.; Fuks, B.; Speckner, C.

    2012-01-01

    We describe the implementation of a well-known class of U(1) gauge models, the 'minimal' Z' models, in FeynRules. We also describe a new automated validation tool for FeynRules models, which is controlled by a web interface and allows the user to run a complete set of 2 → 2 processes on different matrix element generators, in different gauges, and to compare the results among them all. Where an independent implementation exists, comparison with it is also possible. This tool has been used to validate our implementation of the 'minimal' Z' models. (authors)

  2. Minimal Adequate Model of Unemployment Duration in the Post-Crisis Czech Republic

    Directory of Open Access Journals (Sweden)

    Adam Čabla

    2016-03-01

    Full Text Available Unemployment is one of the leading economic problems of the developed world. The aim of this paper is to identify differences in unemployment duration across different strata in the post-crisis Czech Republic by building a minimal adequate model, and to quantify those differences. Data from Labour Force Surveys are used, and since they are interval-censored in nature, proper methodology must be applied. The minimal adequate model is built through accelerated failure time modelling, maximum likelihood estimates and likelihood ratio tests. The initial variables are sex, marital status, age, education, municipality size and number of persons in a household, amounting to 29 model parameters altogether. The minimal adequate model contains 5 parameters, and differences are found between men and women, between the youngest category and the rest, and between the university-educated and the rest. The estimated expected values, variances, medians, modes and 90th percentiles are provided for all subgroups.

  3. From topological strings to minimal models

    International Nuclear Information System (INIS)

    Foda, Omar; Wu, Jian-Feng

    2015-01-01

    We glue four refined topological vertices to obtain the building block of 5D U(2) quiver instanton partition functions. We take the 4D limit of the result to obtain the building block of 4D instanton partition functions which, using the AGT correspondence, are identified with Virasoro conformal blocks. We show that there is a choice of the parameters of the topological vertices that we start with, as well as the parameters and the intermediate states involved in the gluing procedure, such that we obtain Virasoro minimal model conformal blocks.

  4. From topological strings to minimal models

    Energy Technology Data Exchange (ETDEWEB)

    Foda, Omar [School of Mathematics and Statistics, University of Melbourne,Royal Parade, Parkville, VIC 3010 (Australia); Wu, Jian-Feng [Department of Mathematics and Statistics, Henan University,Minglun Street, Kaifeng city, Henan (China); Beijing Institute of Theoretical Physics and Mathematics,3rd Shangdi Street, Beijing (China)

    2015-07-24

    We glue four refined topological vertices to obtain the building block of 5D U(2) quiver instanton partition functions. We take the 4D limit of the result to obtain the building block of 4D instanton partition functions which, using the AGT correspondence, are identified with Virasoro conformal blocks. We show that there is a choice of the parameters of the topological vertices that we start with, as well as the parameters and the intermediate states involved in the gluing procedure, such that we obtain Virasoro minimal model conformal blocks.

  5. Towards tricking a pathogen's protease into fighting infection: the 3D structure of a stable circularly permuted onconase variant cleaved by HIV-1 protease.

    Directory of Open Access Journals (Sweden)

    Mariona Callís

    Full Text Available Onconase® is a highly cytotoxic amphibian homolog of Ribonuclease A. Here, we describe the construction of circularly permuted Onconase® variants by connecting the N- and C-termini of this enzyme with amino acid residues that are recognized and cleaved by the human immunodeficiency virus protease. Uncleaved circularly permuted Onconase® variants are unusually stable, non-cytotoxic and can internalize in human T-lymphocyte Jurkat cells. The structure, stability and dynamics of an intact and a cleaved circularly permuted Onconase® variant were determined by Nuclear Magnetic Resonance spectroscopy and provide valuable insight into the changes in catalytic efficiency caused by the cleavage. The understanding of the structural environment and the dynamics of the activation process represents a first step toward the development of more effective drugs for the treatment of diseases related to pathogens expressing a specific protease. By taking advantage of the protease's activity to initiate a cytotoxic cascade, this approach is thought to be less susceptible to known resistance mechanisms.

  6. Characterization of the block permutations that represent reversible one-dimensional cellular automata; Caracterizacion de las permutaciones en bloque que representan automatas celulares unidimensionales reversibles

    Energy Technology Data Exchange (ETDEWEB)

    Seck Tuoh Mora, J. C. [Instituto Politecnico Nacional, Mexico, D. F. (Mexico)

    2001-06-01

    We present a review of reversible one-dimensional cellular automata and their representation by block permutations. We analyze in detail the behavior of such block permutations to obtain their characterization.
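As a minimal illustration of how a block permutation acts as a reversible one-dimensional cellular automaton, here is a hedged Python sketch (the pairing scheme and all names are our own, not taken from the paper): a ring of cells is partitioned into adjacent pairs and each pair is replaced through a bijection on 2-cell blocks, so applying the inverse bijection on the same partition undoes the step.

```python
def block_step(state, perm, offset=0):
    """One step of a block cellular automaton on a binary ring:
    partition the cells into adjacent pairs starting at `offset` and
    replace each pair via the block permutation `perm` (a bijection
    on 2-cell blocks)."""
    n = len(state)               # assumed even, so the pairing is exact
    out = list(state)
    for i in range(offset, offset + n, 2):
        a, b = state[i % n], state[(i + 1) % n]
        out[i % n], out[(i + 1) % n] = perm[(a, b)]
    return out

# A simple block permutation: swap the two cells of every block.
swap = {(a, b): (b, a) for a in (0, 1) for b in (0, 1)}
inverse = {v: k for k, v in swap.items()}   # invert the bijection

state = [1, 0, 1, 1, 0, 0]
stepped = block_step(state, swap)
# Reversibility: the inverse block permutation restores the original state.
assert block_step(stepped, inverse) == state
```

Alternating the `offset` between steps lets information cross block boundaries while each individual step remains invertible.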

  7. Discriminating chaotic and stochastic dynamics through the permutation spectrum test

    Energy Technology Data Exchange (ETDEWEB)

    Kulp, C. W., E-mail: Kulp@lycoming.edu [Department of Astronomy and Physics, Lycoming College, Williamsport, Pennsylvania 17701 (United States); Zunino, L., E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata—CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina)

    2014-09-01

    In this paper, we propose a new heuristic symbolic tool for unveiling chaotic and stochastic dynamics: the permutation spectrum test. Several numerical examples allow us to confirm the usefulness of the introduced methodology. Indeed, we show that it is robust in situations in which other techniques fail (intermittent chaos, hyperchaotic dynamics, stochastic linear and nonlinear correlated dynamics, and deterministic non-chaotic noise-driven dynamics). We illustrate the applicability and reliability of this pragmatic method by examining real complex time series from diverse scientific fields. Taking into account that the proposed test has the advantages of being conceptually simple and computationally fast, we think that it can be of practical utility as an alternative test for determinism.
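The test builds on the distribution of ordinal (permutation) patterns in a time series. A minimal Python sketch of that ingredient, under our own naming and without the paper's decision rule:

```python
from itertools import permutations
from collections import Counter

def permutation_spectrum(series, d=4):
    """Relative frequency of every ordinal pattern of length d.
    Patterns with zero frequency ("forbidden patterns") are the kind
    of structure a spectrum-based determinism test inspects."""
    counts = Counter()
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        # rank-order (ordinal) pattern of the window, e.g. (2, 0, 1)
        pattern = tuple(sorted(range(d), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    return {p: counts.get(p, 0) / total for p in permutations(range(d))}

# A strictly increasing series realises only the identity pattern;
# all other ordinal patterns are forbidden.
spectrum = permutation_spectrum(list(range(100)), d=3)
```

Stochastic noise tends to populate all d! patterns, while deterministic dynamics leave some patterns empty; the published test works with the variability of such spectra across segments.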

  8. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    Science.gov (United States)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    Flow shop scheduling with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received concentrated attention, while the non-permutation problem (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan while satisfying the time lag constraints, efficient algorithms are proposed for both problems: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified on well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP reaches a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG reaches nearly the same solution within less than 1% of the computational time of a traditional GA approach. The proposed research treats the PFSP and non-PFSP together with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
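The makespan that both iterated greedy variants minimize can be computed with the standard permutation flow-shop recursion. A hedged Python sketch (ignoring the time-lag constraints, which the paper's algorithms additionally enforce):

```python
def makespan(perm, proc):
    """Completion time of the last job on the last machine for a
    permutation flow shop; proc[j][k] is the processing time of job j
    on machine k. Standard recursion: a job starts on machine k when
    both the machine and the job's previous operation are free."""
    m = len(proc[0])
    finish = [0] * m   # finish[k]: completion time of the last operation on machine k
    for j in perm:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k else 0)
            finish[k] = start + proc[j][k]
    return finish[-1]

proc = [[3, 2], [1, 4]]              # 2 jobs x 2 machines (invented data)
assert makespan([0, 1], proc) == 9
assert makespan([1, 0], proc) == 7   # the job order matters
```

An iterated greedy algorithm repeatedly removes a few jobs from the incumbent sequence and reinserts them in the positions that minimize this value.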

  9. ATLAS Z Excess in Minimal Supersymmetric Standard Model

    International Nuclear Information System (INIS)

    Lu, Xiaochuan; Terada, Takahiro

    2015-06-01

    Recently the ATLAS collaboration reported a 3 sigma excess in the search for events containing a dilepton pair from a Z boson and large missing transverse energy. Although the excess is not yet sufficiently significant, it is quite tempting to explain it by a well-motivated model beyond the standard model. In this paper we study the possibility that the minimal supersymmetric standard model (MSSM) accounts for this excess. In particular, we focus on the MSSM spectrum where the sfermions are heavier than the gauginos and Higgsinos. We show that the excess can be explained by a reasonable MSSM mass spectrum.

  10. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS

    CERN Document Server

    O'Gorman, Thomas W

    2012-01-01

    Provides the tools needed to successfully perform adaptive tests across a broad range of datasets. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS illustrates the power of adaptive tests and showcases their ability to adjust the testing method to suit a particular set of data. The book utilizes state-of-the-art software to demonstrate the practicality and benefits for data analysis in various fields of study. Beginning with an introduction, the book moves on to explore the underlying concepts of adaptive tests, including smoothing methods and normalizing transformations.

  11. Minimal model for spoof acoustoelastic surface states

    Directory of Open Access Journals (Sweden)

    J. Christensen

    2014-12-01

    Full Text Available Similar to textured perfect electric conductors for electromagnetic waves sustaining artificial or spoof surface plasmons, we present an equivalent phenomenon for the case of sound. Aided by a minimal model that is able to capture the complex wave interaction of elastic cavity modes and airborne sound radiation in perfectly rigid panels, we construct designer acoustoelastic surface waves that are entirely controlled by the geometrical environment. Comparisons to results obtained by full-wave simulations confirm the feasibility of the model, and we demonstrate illustrative examples such as resonant transmissions and waveguiding, a few of the many settings where spoof elastic surface waves are useful.

  12. A golden A5 model of leptons with a minimal NLO correction

    International Nuclear Information System (INIS)

    Cooper, Iain K.; King, Stephen F.; Stuart, Alexander J.

    2013-01-01

    We propose a new A5 model of leptons which corrects the LO predictions of Golden Ratio mixing via a minimal NLO Majorana mass correction which completely breaks the original Klein symmetry of the neutrino mass matrix. The minimal nature of the NLO correction leads to a restricted and correlated range of the mixing angles, allowing agreement within the one sigma range of recent global fits following the reactor angle measurement by Daya Bay and RENO. The minimal NLO correction also preserves the LO inverse neutrino mass sum rule, leading to a neutrino mass spectrum that extends into the quasi-degenerate region, allowing the model to be accessible to current and future neutrinoless double beta decay experiments.

  13. Non-minimal supersymmetric models. LHC phenomenology and model discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Krauss, Manuel Ernst

    2015-12-18

    It is generally agreed upon the fact that the Standard Model of particle physics can only be viewed as an effective theory that needs to be extended as it leaves some essential questions unanswered. The exact realization of the necessary extension is subject to discussion. Supersymmetry is among the most promising approaches to physics beyond the Standard Model as it can simultaneously solve the hierarchy problem and provide an explanation for the dark matter abundance in the universe. Despite further virtues like gauge coupling unification and radiative electroweak symmetry breaking, minimal supersymmetric models cannot be the ultimate answer to the open questions of the Standard Model as they still do not incorporate neutrino masses and are besides heavily constrained by LHC data. This does, however, not derogate the beauty of the concept of supersymmetry. It is therefore time to explore non-minimal supersymmetric models which are able to close these gaps, review their consistency, test them against experimental data and provide prospects for future experiments. The goal of this thesis is to contribute to this process by exploring an extraordinarily well motivated class of models which bases upon a left-right symmetric gauge group. While relaxing the tension with LHC data, those models automatically include the ingredients for neutrino masses. We start with a left-right supersymmetric model at the TeV scale in which scalar SU(2){sub R} triplets are responsible for the breaking of left-right symmetry as well as for the generation of neutrino masses. Although a tachyonic doubly-charged scalar is present at tree-level in this kind of models, we show by performing the first complete one-loop evaluation that it gains a real mass at the loop level. 
The constraints on the predicted additional charged gauge bosons are then evaluated using LHC data, and we find that we can explain small excesses in the data; the current LHC run will reveal whether they are actual new physics.

  14. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.

  15. On the use of permutation in and the performance of a class of nonparametric methods to detect differential gene expression.

    Science.gov (United States)

    Pan, Wei

    2003-07-22

    Recently a class of nonparametric statistical methods, including the empirical Bayes (EB) method, the significance analysis of microarray (SAM) method and the mixture model method (MMM), have been proposed to detect differential gene expression for replicated microarray experiments conducted under two conditions. All the methods depend on constructing a test statistic Z and a so-called null statistic z. The null statistic z is used to provide some reference distribution for Z such that statistical inference can be accomplished. A common way of constructing z is to apply Z to randomly permuted data. Here we point out that the distribution of z may not approximate the null distribution of Z well, leading to possibly too conservative inference. This observation may apply to other permutation-based nonparametric methods. We propose a new method of constructing a null statistic that aims to estimate the null distribution of a test statistic directly. Using simulated data and real data, we assess and compare the performance of the existing method and our new method when applied in EB, SAM and MMM. Some interesting findings on operating characteristics of EB, SAM and MMM are also reported. Finally, by combining the idea of SAM and MMM, we outline a simple nonparametric method based on the direct use of a test statistic and a null statistic.
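The permutation construction of a null statistic described above can be sketched in Python as follows (a generic two-sample version with our own names; the EB, SAM and MMM statistics differ in detail):

```python
import random
import statistics as st

def tstat(a, b):
    """Welch-type two-sample statistic, playing the role of Z."""
    va, vb = st.variance(a), st.variance(b)
    return (st.mean(a) - st.mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

def permutation_pvalue(a, b, n_perm=999, seed=0):
    """Two-sided p-value from the permutation null statistic z:
    recompute the statistic on data with randomly permuted labels."""
    rng = random.Random(seed)
    observed = abs(tstat(a, b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(tstat(pooled[:len(a)], pooled[len(a):])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction keeps p > 0

p = permutation_pvalue([0.0, 0.1, 0.2, 0.1], [5.0, 5.1, 4.9, 5.2])
```

The article's caveat applies here too: the distribution of the permuted statistic need not match the true null distribution of Z, which can make such inference conservative.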

  16. Lorentz Invariant Spectrum of Minimal Chiral Schwinger Model

    Science.gov (United States)

    Kim, Yong-Wan; Kim, Seung-Kook; Kim, Won-Tae; Park, Young-Jai; Kim, Kee Yong; Kim, Yongduk

    We study the Lorentz transformation of the minimal chiral Schwinger model in terms of the alternative action. We automatically obtain a chiral constraint, which is equivalent to the frame constraint introduced by McCabe, in order to solve the frame problem in phase space. As a result we obtain the Lorentz invariant spectrum in any moving frame by choosing a frame parameter.

  17. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

    Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
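For binary outputs, the first-order maximum noise entropy model is a logistic function of the stimulus. A self-contained Python sketch fitting such a model by gradient ascent on synthetic data (illustrative only: the data and parameters are invented, and the paper's models also include second-order terms):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, steps=2000, lr=0.1):
    """Fit P(y=1|x) = sigmoid(a + b*x) by gradient ascent on the
    log-likelihood; this is the first-order (mean-constrained)
    maximum-noise-entropy response model for a binary output."""
    a = b = 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            err = y - sigmoid(a + b * x)   # gradient of the log-likelihood
            ga += err
            gb += err * x
        a += lr * ga / len(xs)
        b += lr * gb / len(xs)
    return a, b

def log_lik(a, b, xs, ys):
    return sum(math.log(sigmoid(a + b * x) if y else 1 - sigmoid(a + b * x))
               for x, y in zip(xs, ys))

# Synthetic stimulus/response pairs drawn from a known logistic model.
rng = random.Random(1)
xs = [rng.gauss(0, 1) for _ in range(400)]
ys = [1 if rng.random() < sigmoid(0.5 + 2.0 * x) else 0 for x in xs]
a, b = fit_logistic(xs, ys)
```

A second-order version, as used for the retina and thalamus data, simply adds quadratic stimulus terms to the argument of the sigmoid.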

  18. A Minimal Cognitive Model for Translating and Post-editing

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Carl, Michael

    2017-01-01

    This study investigates the coordination of reading (input) and writing (output) activities in from-scratch translation and post-editing. We segment logged eye movements and keylogging data into minimal units of reading and writing activity and model the process of post-editing and from-scratch translation ...

  19. N=2 Minimal Conformal Field Theories and Matrix Bifactorisations of x^d

    Science.gov (United States)

    Davydov, Alexei; Camacho, Ana Ros; Runkel, Ingo

    2018-01-01

    We establish an action of the representations of N = 2 superconformal symmetry on the category of matrix factorisations of the potentials x^d and x^d - y^d, for d odd. More precisely we prove a tensor equivalence between (a) the category of Neveu-Schwarz-type representations of the N = 2 minimal super vertex operator algebra at central charge 3 - 6/d, and (b) a full subcategory of graded matrix factorisations of the potential x^d - y^d. The subcategory in (b) is given by permutation-type matrix factorisations with consecutive index sets. The physical motivation for this result is the Landau-Ginzburg/conformal field theory correspondence, where it amounts to the equivalence of a subset of defects on both sides of the correspondence. Our work builds on results by Brunner and Roggenkamp [BR], where an isomorphism of fusion rules was established.

  20. A Minimal Model to Explore the Influence of Distant Modes on Mode-Coupling Instabilities

    Science.gov (United States)

    Kruse, Sebastian; Hoffmann, Norbert

    2010-09-01

    The phenomenon of mode-coupling instability is one of the most frequently explored mechanisms to explain self-excited oscillation in sliding systems with friction. A mode-coupling instability is usually due to the coupling of two modes; however, further modes can have an important influence on this coupling. This work extends a well-known minimal model of mode-coupling instabilities in order to explore the influence of a distant mode on the classical mode-coupling pattern, suggesting a new minimal model. The model is explored, and it is shown that a third mode can have a significant influence on the classical mode-coupling instabilities in which two modes couple. Different phenomena are analysed, and it is pointed out that distant modes can be ignored only in very special cases and that the onset of friction-induced oscillations can be very sensitive to minimal variation of a distant mode. Because an academic minimal model is chosen instead of a complex finite-element model, the insight remains rather phenomenological, but a better understanding of the mode-coupling mechanism can be gained.

  1. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian and alternating methods as special cases, where our rates apply.

  2. Minimal extension of the standard model scalar sector

    International Nuclear Information System (INIS)

    O'Connell, Donal; Wise, Mark B.; Ramsey-Musolf, Michael J.

    2007-01-01

    The minimal extension of the scalar sector of the standard model contains an additional real scalar field with no gauge quantum numbers. Such a field does not couple to the quarks and leptons directly but rather through its mixing with the standard model Higgs field. We examine the phenomenology of this model focusing on the region of parameter space where the new scalar particle is significantly lighter than the usual Higgs scalar and has small mixing with it. In this region of parameter space most of the properties of the additional scalar particle are independent of the details of the scalar potential. Furthermore the properties of the scalar that is mostly the standard model Higgs can be drastically modified since its dominant branching ratio may be to a pair of the new lighter scalars

  3. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  4. Neutral Higgs bosons in the standard model and in the minimal ...

    Indian Academy of Sciences (India)

    assumed to be CP invariant. Finally, we discuss an alternative MSSM scenario including CP violation in the Higgs sector. Keywords: Higgs bosons; standard model; minimal supersymmetric model; searches at LEP.

  5. A minimal supersymmetric model of particle physics and the early universe

    International Nuclear Information System (INIS)

    Buchmueller, W.; Domcke, V.; Kamada, K.; Schmitz, K.

    2013-11-01

    We consider a minimal supersymmetric extension of the Standard Model, with right-handed neutrinos and local B-L, the difference between baryon and lepton number, a symmetry which is spontaneously broken at the scale of grand unification. To a large extent, the parameters of the model are determined by gauge and Yukawa couplings of quarks and leptons. We show that this minimal model can successfully account for the earliest phases of the cosmological evolution: Inflation is driven by the energy density of a false vacuum of unbroken B-L symmetry, which ends in tachyonic preheating, i.e. the decay of the false vacuum, followed by a matter dominated phase with heavy B-L Higgs bosons. Nonthermal and thermal processes produce an abundance of heavy neutrinos whose decays generate primordial entropy, baryon asymmetry via leptogenesis and dark matter consisting of gravitinos or nonthermal WIMPs. The model predicts relations between neutrino and superparticle masses and a characteristic spectrum of gravitational waves.

  6. A minimal supersymmetric model of particle physics and the early universe

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, W.; Domcke, V.; Kamada, K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Schmitz, K. [Tokyo Univ., Kashiwa (Japan). Kavli IPMU, TODIAS

    2013-11-15

    We consider a minimal supersymmetric extension of the Standard Model, with right-handed neutrinos and local B-L, the difference between baryon and lepton number, a symmetry which is spontaneously broken at the scale of grand unification. To a large extent, the parameters of the model are determined by gauge and Yukawa couplings of quarks and leptons. We show that this minimal model can successfully account for the earliest phases of the cosmological evolution: Inflation is driven by the energy density of a false vacuum of unbroken B-L symmetry, which ends in tachyonic preheating, i.e. the decay of the false vacuum, followed by a matter dominated phase with heavy B-L Higgs bosons. Nonthermal and thermal processes produce an abundance of heavy neutrinos whose decays generate primordial entropy, baryon asymmetry via leptogenesis and dark matter consisting of gravitinos or nonthermal WIMPs. The model predicts relations between neutrino and superparticle masses and a characteristic spectrum of gravitational waves.

  7. A Minimal Supersymmetric Model of Particle Physics and the Early Universe

    CERN Document Server

    Buchmüller, W; Kamada, K; Schmitz, K

    2014-01-01

    We consider a minimal supersymmetric extension of the Standard Model, with right-handed neutrinos and local B-L, the difference between baryon and lepton number, a symmetry which is spontaneously broken at the scale of grand unification. To a large extent, the parameters of the model are determined by gauge and Yukawa couplings of quarks and leptons. We show that this minimal model can successfully account for the earliest phases of the cosmological evolution: Inflation is driven by the energy density of a false vacuum of unbroken B-L symmetry, which ends in tachyonic preheating, i.e. the decay of the false vacuum, followed by a matter dominated phase with heavy B-L Higgs bosons. Nonthermal and thermal processes produce an abundance of heavy neutrinos whose decays generate primordial entropy, baryon asymmetry via leptogenesis and dark matter consisting of gravitinos or nonthermal WIMPs. The model predicts relations between neutrino and superparticle masses and a characteristic spectrum of gravitational waves.

  8. Aplicación de un algoritmo ACO al problema de taller de flujo de permutación con tiempos de preparación dependientes de la secuencia y minimización de makespan An ant colony algorithm for the permutation flowshop with sequence dependent setup times and makespan minimization

    Directory of Open Access Journals (Sweden)

    Eduardo Salazar Hornig

    2011-08-01

    Full Text Available This paper studied the permutation flow shop scheduling problem with sequence-dependent setup times and makespan minimization. An ant colony optimization (ACO) algorithm that maps the original problem onto an asymmetric TSP (Traveling Salesman Problem) structure is presented, applied to problems proposed in the literature, and compared with an adaptation of the NEH (Nawaz-Enscore-Ham) heuristic. Subsequently, a neighborhood search is applied to the solutions obtained by both the ACO algorithm and the NEH heuristic.
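The NEH heuristic used as the comparison baseline can be sketched in Python as follows (a plain NEH without setup times; the paper's adaptation additionally handles sequence-dependent setups, and the instance data here is invented):

```python
def makespan(perm, proc):
    """Permutation flow-shop makespan; proc[j][k] = time of job j on machine k."""
    m = len(proc[0])
    finish = [0] * m
    for j in perm:
        for k in range(m):
            finish[k] = max(finish[k], finish[k - 1] if k else 0) + proc[j][k]
    return finish[-1]

def neh(proc):
    """NEH (Nawaz-Enscore-Ham): take jobs in decreasing order of total
    processing time and insert each one at the position of the partial
    sequence that minimizes the makespan."""
    jobs = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))
    seq = []
    for j in jobs:
        candidates = (seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1))
        seq = min(candidates, key=lambda s: makespan(s, proc))
    return seq

proc = [[5, 3, 4], [2, 6, 3], [4, 4, 4], [3, 2, 5]]   # 4 jobs x 3 machines
seq = neh(proc)
```

A subsequent neighborhood search, as in the paper, would perturb `seq` (e.g. by job swaps or reinsertions) and keep improvements.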

  9. Minimal $R+R^2$ Supergravity Models of Inflation Coupled to Matter

    CERN Document Server

    Ferrara, S

    2014-01-01

    The supersymmetric extension of "Starobinsky" R + αR^2 models of inflation is particularly simple in the "new minimal" formalism of supergravity, where the inflaton has no scalar superpartners. This paper is devoted to matter couplings in such supergravity models. We show how in the new minimal formalism matter coupling presents certain features absent in other formalisms. In particular, for the large class of matter couplings considered in this paper, matter must possess an R-symmetry, which is gauged by the vector field which becomes dynamical in the "new minimal" completion of the R + αR^2 theory. Thus, in the dual formulation of the theory, where the gauge vector is part of a massive vector multiplet, the inflaton is the superpartner of the massive vector of a nonlinearly realized R-symmetry. The F-term potential of this theory is of no-scale type, while the inflaton potential is given by the D-term of the gauged R-symmetry. The absolute minimum of the potential is always exactly supersymmetric...

  10. Newton's constant from a minimal length: additional models

    International Nuclear Information System (INIS)

    Sahlmann, Hanno

    2011-01-01

    We follow arguments of Verlinde (2010 arXiv:1001.0785 [hep-th]) and Klinkhamer (2010 arXiv:1006.2094 [hep-th]), and construct two models of the microscopic theory of a holographic screen that allow for the thermodynamical derivation of Newton's law, with Newton's constant expressed in terms of a minimal length scale l contained in the area spectrum of the microscopic theory. One of the models is loosely related to the quantum structure of surfaces and isolated horizons in loop quantum gravity. Our investigation shows that the conclusions reached by Klinkhamer regarding the new length scale l seem to be generic in all their qualitative aspects.

  11. Electromyographic permutation entropy quantifies diaphragmatic denervation and reinnervation.

    Directory of Open Access Journals (Sweden)

    Christopher Kramer

    Full Text Available Spontaneous reinnervation after diaphragmatic paralysis due to trauma, surgery, tumors and spinal cord injuries is frequently observed. A possible explanation could be collateral reinnervation, since the diaphragm is commonly double-innervated by the (accessory) phrenic nerve. Permutation entropy (PeEn), a complexity measure for time series, may reflect a functional state of neuromuscular transmission by quantifying the complexity of interactions across neural and muscular networks. In an established rat model, electromyographic signals of the diaphragm after phrenicotomy were analyzed using PeEn, quantifying denervation and reinnervation. Thirty-three anesthetized rats were unilaterally phrenicotomized. After 1, 3, 9, 27 and 81 days, diaphragmatic electromyographic PeEn was analyzed in vivo from sternal, mid-costal and crural areas of both hemidiaphragms. After euthanasia of the animals, both hemidiaphragms were dissected for fiber type evaluation. The electromyographic incidence of an accessory phrenic nerve was 76%. At day 1 after phrenicotomy, PeEn (normalized values) was significantly diminished in the sternal (median: 0.69; interquartile range: 0.66-0.75) and mid-costal area (0.68; 0.66-0.72) compared to the non-denervated side (0.84; 0.78-0.90) at threshold p<0.05. In the crural area, innervated by the accessory phrenic nerve, PeEn remained unchanged (0.79; 0.72-0.86). During reinnervation over 81 days, PeEn normalized in the mid-costal area (0.84; 0.77-0.86), whereas it remained reduced in the sternal area (0.77; 0.70-0.81). Fiber type grouping, a histological sign of reinnervation, was found in the mid-costal area in 20% after 27 days and in 80% after 81 days. Collateral reinnervation can restore diaphragm activity after phrenicotomy. Electromyographic PeEn represents a new, distinctive assessment characterizing intramuscular function following denervation and reinnervation.
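Permutation entropy as used above is the Shannon entropy of ordinal-pattern frequencies. A hedged Python sketch (our own minimal implementation, not the study's analysis pipeline):

```python
import math
from collections import Counter

def permutation_entropy(series, d=3, normalize=True):
    """Shannon entropy of the ordinal patterns of length d found in
    `series`; normalized by log2(d!) so the result lies in [0, 1]."""
    counts = Counter()
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        counts[tuple(sorted(range(d), key=lambda k: window[k]))] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(math.factorial(d)) if normalize else h

# A monotone signal has a single ordinal pattern, hence zero entropy;
# irregular signals approach 1 after normalization.
assert permutation_entropy(list(range(50))) == 0.0
```

In the study's terms, the reduced EMG complexity after denervation would appear as a drop in this normalized value.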

  12. Structural consequences of cutting a binding loop: two circularly permuted variants of streptavidin

    International Nuclear Information System (INIS)

    Le Trong, Isolde; Chu, Vano; Xing, Yi; Lybrand, Terry P.; Stayton, Patrick S.; Stenkamp, Ronald E.

    2013-01-01

    The crystal structures of two circularly permuted streptavidins probe the role of a flexible loop in the tight binding of biotin. Molecular-dynamics calculations for one of the mutants suggest that increased fluctuations in a hydrogen bond between the protein and biotin are associated with cleavage of the binding loop. Circular permutation of streptavidin was carried out in order to investigate the role of a main-chain amide in stabilizing the high-affinity complex of the protein and biotin. Mutant proteins CP49/48 and CP50/49 were constructed to place new N-termini at residues 49 and 50 in a flexible loop involved in stabilizing the biotin complex. Crystal structures of the two mutants show that half of each loop closes over the binding site, as observed in wild-type streptavidin, while the other half adopts the open conformation found in the unliganded state. The structures are consistent with kinetic and thermodynamic data and indicate that the loop plays a role in enthalpic stabilization of the bound state via the Asn49 amide–biotin hydrogen bond. In wild-type streptavidin, the entropic penalties of immobilizing a flexible portion of the protein to enhance binding are kept to a manageable level by using a contiguous loop of medium length (six residues) which is already constrained by its anchorage to strands of the β-barrel protein. A molecular-dynamics simulation for CP50/49 shows that cleavage of the binding loop results in increased structural fluctuations for Ser45 and that these fluctuations destabilize the streptavidin–biotin complex.

  13. Flocking with minimal cooperativity: the panic model.

    Science.gov (United States)

    Pilkiewicz, Kevin R; Eaves, Joel D

    2014-01-01

    We present a two-dimensional lattice model of self-propelled spins that can change direction only upon collision with another spin. We show that even with ballistic motion and minimal cooperativity, these spins display robust flocking behavior at nearly all densities, forming long bands of stripes. The structural transition in this system is not a thermodynamic phase transition, but it can still be characterized by an order parameter, and we demonstrate that if this parameter is studied as a dynamical variable rather than a steady-state observable, we can extract a detailed picture of how the flocking mechanism varies with density.

  14. On SW-minimal models and N=1 supersymmetric quantum Toda-field theories

    International Nuclear Information System (INIS)

    Mallwitz, S.

    1994-04-01

    Integrable N=1 supersymmetric Toda-field theories are determined by a contragredient simple Super-Lie-Algebra (SSLA) with purely fermionic lowering and raising operators. For the SSLAs Osp(3/2) and D(2/1;α) we construct explicitly the higher spin conserved currents and obtain free field representations of the super W-algebras SW(3/2,2) and SW(3/2,3/2,2). In constructing the corresponding series of minimal models using covariant vertex operators, we find a necessary restriction on the Cartan matrix of the SSLA, also for the general case. This restriction requires at least one non-vanishing element on the diagonal of the Cartan matrix. This condition is without parallel in bosonic conformal field theory. As a consequence, only two series of SSLAs yield minimal models, namely Osp(2n/2n-1) and Osp(2n/2n+1). Subsequently, some general aspects of degenerate representations of SW-algebras, notably the fusion rules, are investigated. As an application we discuss minimal models of SW(3/2, 2), which were constructed with independent methods, in this framework. A covariant formulation is used throughout this paper. (orig.)

  15. Phenomenological study of Z′ in the minimal B−L model at LHC

    Indian Academy of Sciences (India)

    K M Balasubramaniam

    2017-10-05

    The phenomenological study of the neutral heavy gauge boson (Z′_{B−L}) of the minimal B−L model at the LHC ...

  16. Predictions for mt and MW in minimal supersymmetric models

    International Nuclear Information System (INIS)

    Buchmueller, O.; Ellis, J.R.; Flaecher, H.; Isidori, G.

    2009-12-01

    Using a frequentist analysis of experimental constraints within two versions of the minimal supersymmetric extension of the Standard Model, we derive the predictions for the top quark mass, m_t, and the W boson mass, m_W. We find that the supersymmetric predictions for both m_t and m_W, obtained by incorporating all the relevant experimental information and state-of-the-art theoretical predictions, are highly compatible with the experimental values with small remaining uncertainties, yielding an improvement compared to the case of the Standard Model. (orig.)

  17. Mass textures and wolfenstein parameters from breaking the flavour permutational symmetry

    Energy Technology Data Exchange (ETDEWEB)

    Mondragon, A; Rivera, T. [Instituto de Fisica, Universidad Nacional Autonoma de Mexico, Mexico D.F. (Mexico); Rodriguez Jauregui, E. [Deutsches Elektronen-Synchrotron, Theory Group, Hamburg (Germany)]

    2001-12-01

    We give an overview of recent progress in the phenomenological study of quark mass matrices, quark flavour mixings and CP violation, with emphasis on the possibility of an underlying discrete flavour permutational symmetry and its breaking, from which realistic models of mass generation could be built. The quark mixing angles and CP-violating phase, as well as the Wolfenstein parameters, are given in terms of four quark mass ratios and only two parameters (Z^{1/2}, φ) characterizing the symmetry-breaking pattern. Excellent agreement with all current experimental data is found.

  18. On relevant boundary perturbations of unitary minimal models

    International Nuclear Information System (INIS)

    Recknagel, A.; Roggenkamp, D.; Schomerus, V.

    2000-01-01

    We consider unitary Virasoro minimal models on the disk with Cardy boundary conditions and discuss deformations by certain relevant boundary operators, analogous to tachyon condensation in string theory. Concentrating on the least relevant boundary field, we can perform a perturbative analysis of renormalization group fixed points. We find that the systems always flow towards stable fixed points which admit no further (non-trivial) relevant perturbations. The new conformal boundary conditions are in general given by superpositions of 'pure' Cardy boundary conditions

  19. Non-minimal Maxwell-Chern-Simons theory and the composite Fermion model

    International Nuclear Information System (INIS)

    Paschoal, Ricardo C.; Helayel Neto, Jose A.

    2003-01-01

    The magnetic field redefinition in Jain's composite fermion model for the fractional quantum Hall effect is shown to be effectively described by a mean-field approximation of a model containing a Maxwell-Chern-Simons gauge field non-minimally coupled to matter. An explicit non-relativistic limit of the non-minimal (2+1)D Dirac equation is also derived. (author)

  20. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    Science.gov (United States)

    Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P

    1994-02-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. 
Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and
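
    Sculpt's combination of rigid constraints with energy minimization rests on the augmented Lagrange-multiplier method. The miniature sketch below is my own illustration of that general technique, not Sculpt's algorithm or force field: it minimizes a quadratic "energy" pulling a 2D point toward a target while rigidly constraining the point's distance from the origin, a stand-in for a fixed bond length.

```python
import math

def constrained_minimize(target, bond_len=1.0, mu=10.0,
                         outer=50, inner=200, lr=0.01):
    """Augmented-Lagrangian minimization of E = |p - target|^2
    subject to |p| = bond_len (a rigid 'bond-length' constraint)."""
    x, y = 1.0, 0.0   # start on the constraint surface
    lam = 0.0         # Lagrange multiplier for the constraint
    tx, ty = target
    for _ in range(outer):
        for _ in range(inner):          # unconstrained inner minimization
            r = math.hypot(x, y)
            c = r - bond_len            # constraint violation
            # gradient of E + lam*c + (mu/2)*c**2
            gx = 2 * (x - tx) + (lam + mu * c) * x / r
            gy = 2 * (y - ty) + (lam + mu * c) * y / r
            x, y = x - lr * gx, y - lr * gy
        lam += mu * (math.hypot(x, y) - bond_len)  # multiplier update
    return x, y

# Pulling toward (2, 2) while keeping |p| = 1 lands near (0.707, 0.707).
x, y = constrained_minimize((2.0, 2.0))
```

    Because the multiplier update drives the constraint residual to zero, the penalty weight mu can stay moderate, which is what keeps such schemes numerically well behaved at interactive rates.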

  1. Surface states of a system of Dirac fermions: A minimal model

    International Nuclear Information System (INIS)

    Volkov, V. A.; Enaldiev, V. V.

    2016-01-01

    A brief survey is given of theoretical works on surface states (SSs) in Dirac materials. Within the formalism of envelope wave functions and boundary conditions for these functions, a minimal model is formulated that analytically describes surface and edge states of various (topological and nontopological) types in several systems with Dirac fermions (DFs). The applicability conditions of this model are discussed.

  2. Surface states of a system of Dirac fermions: A minimal model

    Energy Technology Data Exchange (ETDEWEB)

    Volkov, V. A., E-mail: volkov.v.a@gmail.com; Enaldiev, V. V. [Russian Academy of Sciences, Kotel’nikov Institute of Radio Engineering and Electronics (Russian Federation)

    2016-03-15

    A brief survey is given of theoretical works on surface states (SSs) in Dirac materials. Within the formalism of envelope wave functions and boundary conditions for these functions, a minimal model is formulated that analytically describes surface and edge states of various (topological and nontopological) types in several systems with Dirac fermions (DFs). The applicability conditions of this model are discussed.

  3. Phenomenology of non-minimal supersymmetric models at linear colliders

    International Nuclear Information System (INIS)

    Porto, Stefano

    2015-06-01

    The focus of this thesis is on the phenomenology of several non-minimal supersymmetric models in the context of future linear colliders (LCs). Extensions of the minimal supersymmetric Standard Model (MSSM) may accommodate the observed Higgs boson mass at about 125 GeV in a more natural way than the MSSM, with a richer phenomenology. We consider both F-term extensions of the MSSM, as for instance the non-minimal supersymmetric Standard Model (NMSSM), as well as D-term extensions arising at low energies from gauge extended supersymmetric models. The NMSSM offers a solution to the μ-problem with an additional gauge singlet supermultiplet. The enlarged neutralino sector of the NMSSM can be accurately studied at a LC and used to distinguish the model from the MSSM. We show that the polarised beams of a LC can be exploited to reconstruct the neutralino and chargino sector and eventually distinguish the NMSSM, even considering challenging scenarios that resemble the MSSM. Non-decoupling D-term extensions of the MSSM can raise the tree-level Higgs mass with respect to the MSSM. This happens through additional contributions to the Higgs quartic potential, effectively generated by an extended gauge group. We study how this can occur and show how these additional non-decoupling D-terms affect the SM-like Higgs boson couplings to fermions and gauge bosons. We estimate how the deviations from the SM couplings can be spotted at the Large Hadron Collider (LHC) and at the International Linear Collider (ILC), showing how the ILC would be suitable for the model identification. Since our results prove that a linear collider is a fundamental machine for studying supersymmetry phenomenology at a high level of precision, we argue that a thorough comprehension of the physics at the interaction point (IP) of a LC is also needed. Therefore, we finally consider the possibility of observing intense electromagnetic field effects and nonlinear quantum electrodynamics.

  4. A minimal spatial cell lineage model of epithelium: tissue stratification and multi-stability

    Science.gov (United States)

    Yeh, Wei-Ting; Chen, Hsuan-Yi

    2018-05-01

    A minimal model which includes spatial and cell lineage dynamics for stratified epithelia is presented. The dependence of the tissue steady state on cell differentiation models, cell proliferation rate, cell differentiation rate, and other parameters is studied numerically and analytically. Our minimal model shows some important features. First, we find that morphogen- or mechanical-stress-mediated interaction is necessary to maintain a healthy stratified epithelium. Furthermore, compared with tissues in which cell differentiation can take place only during cell division, tissues in which cell division and cell differentiation are decoupled can achieve a relatively higher degree of stratification. Finally, our model also shows that in the presence of short-range interactions, it is possible for a tissue to have multiple steady states. The relation between our results and tissue morphogenesis or lesion is discussed.

  5. Efficiency and credit ratings: a permutation-information-theory analysis

    International Nuclear Information System (INIS)

    Bariviera, Aurelio Fernandez; Martinez, Lisana B; Zunino, Luciano; Belén Guercio, M; Rosso, Osvaldo A

    2013-01-01

    The role of credit rating agencies has been under severe scrutiny after the subprime crisis. In this paper we explore the relationship between credit ratings and informational efficiency of a sample of thirty-nine corporate bonds of US oil and energy companies from April 2008 to November 2012. For this purpose we use a powerful statistical tool, relatively new in the financial literature: the complexity–entropy causality plane. This representation space allows us to graphically classify the different bonds according to their degree of informational efficiency. We find that this classification agrees with the credit ratings assigned by Moody’s. In particular, we detect the formation of two clusters, which correspond to the global categories of investment and speculative grades. Regarding the latter cluster, two subgroups reflect distinct levels of efficiency. Additionally, we also find an intriguing absence of correlation between informational efficiency and firm characteristics. This allows us to conclude that the proposed permutation-information-theory approach provides an alternative practical way to justify bond classification. (paper)
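
    The entropy coordinate of the complexity–entropy causality plane is the normalized Bandt–Pompe permutation entropy of a time series. A minimal sketch follows; the embedding dimension d = 3 and all names here are illustrative choices, not taken from the paper.

```python
from itertools import permutations
from math import log

def permutation_entropy(series, d=3):
    """Normalized Bandt-Pompe permutation entropy.

    Every overlapping window of length d is mapped to the ordinal
    pattern (the permutation that sorts it); the Shannon entropy of
    the pattern distribution, divided by log(d!), lies in [0, 1].
    """
    counts = {p: 0 for p in permutations(range(d))}
    n = len(series) - d + 1
    for i in range(n):
        window = series[i:i + d]
        counts[tuple(sorted(range(d), key=lambda k: window[k]))] += 1
    probs = [c / n for c in counts.values() if c > 0]
    h = sum(-p * log(p) for p in probs)
    return h / log(len(counts))  # len(counts) == d! possible patterns

# A monotonic series exhibits a single ordinal pattern: entropy 0.
print(permutation_entropy(list(range(100))))  # -> 0.0
```

    Informationally efficient series (close to a random walk in returns) sit near entropy 1, with near-uniform pattern frequencies; the causality plane pairs this value with a statistical complexity measure.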

  6. Minimal time spiking in various ChR2-controlled neuron models.

    Science.gov (United States)

    Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel

    2018-02-01

    We use conductance-based neuron models and the mathematical modeling of optogenetics to define controlled neuron models, and we address the minimal-time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to theoretically investigate the existence of singular optimal controls, we observe numerically the optimal bang-bang controls.

  7. Computational fitness landscape for all gene-order permutations of an RNA virus.

    Directory of Open Access Journals (Sweden)

    Kwang-il Lim

    2009-02-01

    How does the growth of a virus depend on the linear arrangement of genes in its genome? Answering this question may enhance our basic understanding of virus evolution and advance applications of viruses as live attenuated vaccines, gene-therapy vectors, or anti-tumor therapeutics. We used a mathematical model for vesicular stomatitis virus (VSV), a prototype RNA virus that encodes five genes (N-P-M-G-L), to simulate the intracellular growth of all 120 possible gene-order variants. Simulated yields of virus infection varied by a factor of 6,000 and were found to be most sensitive to gene-order permutations that increased levels of the L gene transcript or reduced levels of the N gene transcript, the lowest and highest expressed genes of the wild-type virus, respectively. Effects of gene order on virus growth also depended upon the host-cell environment, reflecting different resources for protein synthesis and different cell susceptibilities to infection. Moreover, by computationally deleting intergenic attenuations, which define a key mechanism of transcriptional regulation in VSV, the variation in growth associated with the 120 gene-order variants was drastically narrowed from 6,000- to 20-fold, and many variants produced higher progeny yields than wild-type. These results suggest that regulation by intergenic attenuation preceded or co-evolved with the fixation of the wild-type gene order in the evolution of VSV. In summary, our models have begun to reveal how gene functions, gene regulation, and genomic organization of viruses interact with their host environments to define processes of viral growth and evolution.
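
    The sensitivity to gene position reflects VSV's sequential transcription with intergenic attenuation: each gene junction passes only a fraction of polymerases, so downstream genes are transcribed less. The toy sketch below is an illustration only, not the paper's intracellular growth model; the attenuation factor and the N/L fitness proxy are assumptions made for the example.

```python
from itertools import permutations

GENES = ["N", "P", "M", "G", "L"]
ATTENUATION = 0.75  # assumed fraction of polymerases crossing each junction

def transcript_levels(order, a=ATTENUATION):
    """Relative transcript level per gene: position i (0-based) is
    transcribed a**i times as often as the promoter-proximal gene."""
    return {gene: a ** i for i, gene in enumerate(order)}

orders = list(permutations(GENES))   # all 120 gene-order variants

# Toy fitness proxy: wild-type VSV needs N highest- and L lowest-expressed.
def n_over_l(order):
    levels = transcript_levels(order)
    return levels["N"] / levels["L"]

best = max(orders, key=n_over_l)
print(best)  # N first and L last maximize the proxy (ties broken by order)
```

    Even this crude proxy singles out orders with N at the 3′-proximal position and L last, consistent with the abstract's observation that yield is most sensitive to the N and L transcript levels.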

  8. Minimal Z′ models and the 125 GeV Higgs boson

    International Nuclear Information System (INIS)

    Basso, L.

    2013-01-01

    The 1-loop renormalization group equations for the minimal Z′ models encompassing a type-I seesaw mechanism are studied in the light of the 125 GeV Higgs boson observation. This model is taken as a benchmark for the general case of singlet extensions of the standard model. The most important result is that negative scalar mixing angles are favored with respect to positive values. Further, a minimum value for the latter exists, as well as a maximum value for the masses of the heavy neutrinos, depending on the vacuum expectation value of the singlet scalar.

  9. A novel minimal invasive mouse model of extracorporeal circulation.

    Science.gov (United States)

    Luo, Shuhua; Tang, Menglin; Du, Lei; Gong, Lina; Xu, Jin; Chen, Youwen; Wang, Yabo; Lin, Ke; An, Qi

    2015-01-01

    Extracorporeal circulation (ECC) is necessary for conventional cardiac surgery and life support, but it often triggers systemic inflammation that can significantly damage tissue. Studies of ECC have been limited to large animals because of the complexity of the surgical procedures involved, which has hampered detailed understanding of ECC-induced injury. Here we describe a minimally invasive mouse model of ECC that may allow more extensive mechanistic studies. The right carotid artery and external jugular vein of anesthetized adult male C57BL/6 mice were cannulated to allow blood flow through a 1/32-inch external tube. All animals (n = 20) survived 30 min ECC and subsequent 60 min observation. Blood analysis after ECC showed significant increases in levels of tumor necrosis factor α, interleukin-6, and neutrophil elastase in plasma, lung, and renal tissues, as well as increases in plasma creatinine and cystatin C and decreases in the oxygenation index. Histopathology showed that ECC induced the expected lung inflammation, which included alveolar congestion, hemorrhage, neutrophil infiltration, and alveolar wall thickening; in renal tissue, ECC induced intracytoplasmic vacuolization, acute tubular necrosis, and epithelial swelling. Our results suggest that this novel, minimally invasive mouse model can recapitulate many of the clinical features of ECC-induced systemic inflammatory response and organ injury.

  10. A Novel Minimal Invasive Mouse Model of Extracorporeal Circulation

    Directory of Open Access Journals (Sweden)

    Shuhua Luo

    2015-01-01

    Extracorporeal circulation (ECC) is necessary for conventional cardiac surgery and life support, but it often triggers systemic inflammation that can significantly damage tissue. Studies of ECC have been limited to large animals because of the complexity of the surgical procedures involved, which has hampered detailed understanding of ECC-induced injury. Here we describe a minimally invasive mouse model of ECC that may allow more extensive mechanistic studies. The right carotid artery and external jugular vein of anesthetized adult male C57BL/6 mice were cannulated to allow blood flow through a 1/32-inch external tube. All animals (n = 20) survived 30 min ECC and subsequent 60 min observation. Blood analysis after ECC showed significant increases in levels of tumor necrosis factor α, interleukin-6, and neutrophil elastase in plasma, lung, and renal tissues, as well as increases in plasma creatinine and cystatin C and decreases in the oxygenation index. Histopathology showed that ECC induced the expected lung inflammation, which included alveolar congestion, hemorrhage, neutrophil infiltration, and alveolar wall thickening; in renal tissue, ECC induced intracytoplasmic vacuolization, acute tubular necrosis, and epithelial swelling. Our results suggest that this novel, minimally invasive mouse model can recapitulate many of the clinical features of ECC-induced systemic inflammatory response and organ injury.

  11. Steam consumption minimization model in a multiple evaporation effect in a sugar plant

    International Nuclear Information System (INIS)

    Villada, Fernando; Valencia, Jaime A; Moreno, German; Murillo, J. Joaquin

    1992-01-01

    In this work, a mathematical model to minimize the steam consumption in a multiple effect evaporation system is shown. The model is based in the dynamic programming technique and the results are tested in a Colombian sugar mill

  12. The application of the random regret minimization model to drivers’ choice of crash avoidance maneuvers

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    This study explores the plausibility of regret minimization as behavioral paradigm underlying the choice of crash avoidance maneuvers. Alternatively to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers ...

  13. The application of the random regret minimization model to drivers’ choice of crash avoidance maneuvers

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    This study explores the plausibility of regret minimization as behavioral paradigm underlying the choice of crash avoidance maneuvers. Alternatively to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers ...

  14. On optimal (non-Trojan) semi-Latin squares with side n and block size n: Construction procedure and admissible permutations

    International Nuclear Information System (INIS)

    Chigbu, P.E.; Ukekwe, E.C.; Ikekeonwu, G.A.M.

    2006-12-01

    There is a special family of the (n x n)/k semi-Latin squares called the Trojan squares which are optimal among semi-Latin squares of equivalent sizes. Unfortunately, Trojan squares do not exist for all k; for instance, there is no Trojan square for k ≥ n. However, the need usually arises for constructing optimal semi-Latin squares where no Trojan squares exist. Bailey made a conjecture on optimal semi-Latin squares for k ≥ n and, based on this conjecture, optimal non-Trojan semi-Latin squares are constructed here for k = n, considering the inherent Trojan squares for k < n. A lemma substantiating this conjecture for k = n is given and proved. In addition, the properties of the admissible permutation sets used in constructing these optimal squares are made evident, based on the systematic-group-theoretic algorithm of Bailey and Chigbu. Algorithms for identifying the admissible permutations as well as constructing the optimal non-Trojan (n x n)/k = n semi-Latin squares for odd n and n = 4 are given. (author)

  15. Widespread occurrence of organelle genome-encoded 5S rRNAs including permuted molecules.

    Science.gov (United States)

    Valach, Matus; Burger, Gertraud; Gray, Michael W; Lang, B Franz

    2014-12-16

    5S ribosomal RNA (5S rRNA) is a universal component of ribosomes, and the corresponding gene is easily identified in archaeal, bacterial and nuclear genome sequences. However, organelle gene homologs (rrn5) appear to be absent from most mitochondrial and several chloroplast genomes. Here, we re-examine the distribution of organelle rrn5 by building mitochondrion- and plastid-specific covariance models (CMs) with which we screened organelle genome sequences. We not only recover all organelle rrn5 genes annotated in GenBank records, but also identify more than 50 previously unrecognized homologs in mitochondrial genomes of various stramenopiles, red algae, cryptomonads, malawimonads and apusozoans, and surprisingly, in the apicoplast (highly derived plastid) genomes of the coccidian pathogens Toxoplasma gondii and Eimeria tenella. Comparative modeling of RNA secondary structure reveals that mitochondrial 5S rRNAs from brown algae adopt a permuted triskelion shape that has not been seen elsewhere. Expression of the newly predicted rrn5 genes is confirmed experimentally in 10 instances, based on our own and published RNA-Seq data. This study establishes that mitochondrial 5S rRNA in particular has a much broader taxonomic distribution and a much larger structural variability than previously thought. The newly developed CMs will be made available via the Rfam database and the MFannot organelle genome annotator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. A permutation testing framework to compare groups of brain networks.

    Science.gov (United States)

    Simpson, Sean L; Lyday, Robert G; Hayasaka, Satoru; Marsh, Anthony P; Laurienti, Paul J

    2013-01-01

    Brain network analyses have moved to the forefront of neuroimaging research over the last decade. However, methods for statistically comparing groups of networks have lagged behind. These comparisons have great appeal for researchers interested in gaining further insight into complex brain function and how it changes across different mental states and disease conditions. Current comparison approaches generally either rely on a summary metric or on mass-univariate nodal or edge-based comparisons that ignore the inherent topological properties of the network, yielding little power and failing to make network level comparisons. Gleaning deeper insights into normal and abnormal changes in complex brain function demands methods that take advantage of the wealth of data present in an entire brain network. Here we propose a permutation testing framework that allows comparing groups of networks while incorporating topological features inherent in each individual network. We validate our approach using simulated data with known group differences. We then apply the method to functional brain networks derived from fMRI data.
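
    The core resampling step in any such framework is label shuffling. Below is a minimal sketch of a two-group permutation test on a scalar per-subject network summary; plain numbers stand in for networks here, and the authors' framework additionally incorporates topological features, which this toy does not.

```python
import random

def permutation_test(group_a, group_b, metric, n_perm=5000, seed=0):
    """Two-sample permutation test on a summary metric.

    Shuffle group labels repeatedly and recompute the absolute
    difference in group means; the p-value is the fraction of
    shufflings at least as extreme as the observed difference.
    """
    rng = random.Random(seed)
    mean = lambda g: sum(map(metric, g)) / len(g)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one (permutation) correction

metric = lambda value: value  # identity: each "network" is its metric value
p_null = permutation_test([1.0, 1.1, 0.9, 1.05], [1.02, 0.95, 1.08, 0.98], metric)
p_diff = permutation_test([1.0, 1.1, 0.9, 1.05], [2.0, 2.1, 1.9, 2.05], metric)
```

    Because the null distribution is built from the data itself, no parametric assumption about the metric is needed; the price is that the metric must be exchangeable across groups under the null.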

  17. A minimal unified model of disease trajectories captures hallmarks of multiple sclerosis

    KAUST Repository

    Kannan, Venkateshan; Kiani, Narsis A.; Piehl, Fredrik; Tegner, Jesper

    2017-01-01

    Multiple Sclerosis (MS) is an autoimmune disease targeting the central nervous system (CNS) causing demyelination and neurodegeneration leading to accumulation of neurological disability. Here we present a minimal, computational model involving

  18. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    Science.gov (United States)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, low-utilization characteristics, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch and bound algorithm as the solver method. We use fixed delivery time as the main constraint and different processing times to process a job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness, using fixed delivery time as a constraint.
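
    For intuition, tardiness against fixed delivery times can be brute-forced on a toy single-machine instance. This is an illustration only: the paper's model is an integer linear program for non-identical machines solved by branch and bound, and the job names and data below are made up.

```python
from itertools import permutations

def total_tardiness(sequence, proc_time, due):
    """Total tardiness of a job sequence on one machine."""
    t = tardiness = 0
    for job in sequence:
        t += proc_time[job]                 # completion time of `job`
        tardiness += max(0, t - due[job])   # lateness beyond delivery time
    return tardiness

def best_sequence(jobs, proc_time, due):
    """Exhaustive search; a toy stand-in for branch and bound."""
    return min(permutations(jobs),
               key=lambda s: total_tardiness(s, proc_time, due))

proc_time = {"J1": 4, "J2": 2, "J3": 3}
due = {"J1": 9, "J2": 3, "J3": 6}   # fixed delivery times
seq = best_sequence(list(proc_time), proc_time, due)
print(seq, total_tardiness(seq, proc_time, due))  # -> ('J2', 'J3', 'J1') 0
```

    Exhaustive search scales as n!, which is exactly why the paper resorts to an ILP formulation with branch and bound for realistic instance sizes.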

  19. A permutationally invariant full-dimensional ab initio potential energy surface for the abstraction and exchange channels of the H + CH4 system

    International Nuclear Information System (INIS)

    Li, Jun; Chen, Jun; Zhao, Zhiqiang; Zhang, Dong H.; Xie, Daiqian; Guo, Hua

    2015-01-01

    We report a permutationally invariant global potential energy surface (PES) for the H + CH4 system based on ∼63 000 data points calculated at a high ab initio level (UCCSD(T)-F12a/AVTZ) using the recently proposed permutation invariant polynomial-neural network method. The small fitting error (5.1 meV) indicates a faithful representation of the ab initio points over a large configuration space. The rate coefficients calculated on the PES using tunneling corrected transition-state theory and quasi-classical trajectory are found to agree well with the available experimental and previous quantum dynamical results. The calculated total reaction probabilities (J_tot = 0) including the abstraction and exchange channels using the new potential by a reduced dimensional quantum dynamic method are essentially the same as those on the Xu-Chen-Zhang PES [Chin. J. Chem. Phys. 27, 373 (2014)].

  20. CP asymmetry in tau slepton decay in the minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Yang Weimin; Du Dongsheng

    2002-01-01

    We investigate CP violation asymmetry in the decay of a tau slepton into a tau neutrino and a chargino in the minimal supersymmetric standard model. The new source of CP violation is the complex mixing in the tau slepton sector. The rate asymmetry between the decays of the tau slepton and its CP conjugate process can be of the order of 10^{-3} in some region of the parameter space of the minimal supergravity scenario, which will possibly be detectable in near-future collider experiments.

  1. The minimal linear σ model for the Goldstone Higgs

    International Nuclear Information System (INIS)

    Feruglio, F.; Gavela, M.B.; Kanshin, K.; Machado, P.A.N.; Rigolin, S.; Saa, S.

    2016-01-01

    In the context of the minimal SO(5) linear σ-model, a complete renormalizable Lagrangian -including gauge bosons and fermions- is considered, with the symmetry softly broken to SO(4). The scalar sector describes both the electroweak Higgs doublet and the singlet σ. Varying the σ mass would allow sweeping from the regime of perturbative ultraviolet completion to the non-linear one assumed in models in which the Higgs particle is a low-energy remnant of some strong dynamics. We analyze the phenomenological implications and constraints from precision observables and LHC data. Furthermore, we derive the d≤6 effective Lagrangian in the limit of heavy exotic fermions.

  2. Solar system tests for realistic f(T) models with non-minimal torsion-matter coupling

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Rui-Hui; Zhai, Xiang-Hua; Li, Xin-Zhou [Shanghai Normal University, Shanghai United Center for Astrophysics (SUCA), Shanghai (China)

    2017-08-15

    In a previous paper, we constructed two f(T) models with a non-minimal torsion-matter coupling extension, which are successful in describing the evolution history of the Universe, including the radiation-dominated era, the matter-dominated era, and the present accelerating expansion. Meanwhile, the significant advantage of these models is that they can avoid the cosmological constant problem of ΛCDM. However, the non-minimal coupling between matter and torsion will affect the Solar system tests. In this paper, we study the Solar system effects in these models, including the gravitational redshift, the geodetic effect and the perihelion precession. We find that Model I can pass all three of the Solar system tests. For Model II, the parameter is constrained by the uncertainties of the planets' estimated perihelion precessions. (orig.)

  3. Triviality bound on lightest Higgs mass in next to minimal supersymmetric model

    International Nuclear Information System (INIS)

    Choudhury, S.R.; Mamta; Dutta, Sukanta

    1998-01-01

    We study the implication of triviality on the Higgs sector in the next-to-minimal supersymmetric model (NMSSM) using variational field theory. It is shown that the mass of the lightest Higgs boson in the NMSSM has an upper bound ∼10 M_W, which is of the same order as that in the standard model. (author)

  4. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    Science.gov (United States)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company increases its profit. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. The constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. A numerical example is given to show the implementation of the model, solved using OptQuest of the Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters to minimize both the production cost and the environmental impact.
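As a rough illustration of how such a bi-objective machining model can be scalarized and searched, the sketch below runs a weighted-sum grid search over cutting speed and feed rate. The cost and impact functions and every coefficient are hypothetical placeholders, not the paper's formulation (which is solved with OptQuest):

```python
import numpy as np

# Grid search over cutting speed v (m/min) and feed rate f (mm/rev).
# Machining time for turning ~ pi*D*L / (1000*v*f); the cost and
# eco-impact expressions below are illustrative assumptions only.
v = np.linspace(50, 300, 251)
f = np.linspace(0.05, 0.5, 91)
V, F = np.meshgrid(v, f)

t = np.pi * 50.0 * 120.0 / (1000.0 * V * F)   # machining time (min)
cost = 0.5 * t + 2.0e-4 * V**1.5              # labor + tool-wear proxy
impact = 0.8 * t + 1.0e-4 * V**2              # eco-indicator proxy

w = 0.5                                       # weight between objectives
score = w * cost + (1 - w) * impact
i, j = np.unravel_index(np.argmin(score), score.shape)
print(V[i, j], F[i, j])   # best (speed, feed) for this weighting
```

Sweeping the weight w traces out (an approximation of) the Pareto front between cost and impact.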

  5. Minimal but non-minimal inflation and electroweak symmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Marzola, Luca [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Ravila 14c, 50411 Tartu (Estonia); Racioppi, Antonio [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia)

    2016-10-07

    We consider the most minimal scale invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet that plays the rôle of the inflaton and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r≈10{sup −3}, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n{sub s}≃0.97 which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.

  6. Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation.

    Science.gov (United States)

    Azami, Hamed; Escudero, Javier

    2016-05-01

    Signal segmentation and spike detection are two important biomedical signal processing applications. Often, non-stationary signals must be segmented into piece-wise stationary epochs, or spikes need to be found against a background of noise, before further analysis. Permutation entropy (PE) has been proposed to evaluate the irregularity of a time series. PE is conceptually simple, structurally robust to artifacts, and computationally fast. It has been extensively used in many applications, but it has two key shortcomings. First, when a signal is symbolized using the Bandt-Pompe procedure, only the order of the amplitude values is considered, and information regarding the amplitudes themselves is discarded. Second, PE does not address the effect of equal amplitude values within an embedded vector. To address these issues, we propose a new entropy measure based on PE: the amplitude-aware permutation entropy (AAPE). AAPE is sensitive to changes in the amplitude, in addition to the frequency, of the signals, owing to its greater flexibility than the classical PE in quantifying signal motifs. To demonstrate how the AAPE method can enhance the quality of signal segmentation and spike detection, a set of synthetic and realistic synthetic neuronal signals, electroencephalograms and neuronal data are processed. We compare the performance of AAPE in these problems against state-of-the-art approaches and evaluate the significance of the differences with a repeated ANOVA with post hoc Tukey's test. In signal segmentation, the accuracy of the AAPE-based method is higher than that of conventional segmentation methods. AAPE also leads to more robust results in the presence of noise. The spike detection results show that AAPE can detect spikes well, even when presented with single-sample spikes, unlike PE. For multi-sample spikes, the changes in AAPE are larger than in PE. We introduce a new entropy metric, AAPE, that enables us to consider amplitude information.
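The amplitude-aware weighting described above can be sketched as follows. Each embedded vector contributes a weight mixing its mean absolute amplitude and mean absolute difference, rather than a flat count as in classical PE; the mixing coefficient A = 0.5 and the exact weighting form are assumptions patterned on the description, not a verified reimplementation of the authors' code:

```python
import numpy as np

def aape(x, m=3, delay=1, A=0.5):
    """Amplitude-aware permutation entropy (sketch).

    Each embedded vector's ordinal pattern is weighted by
    (A/m)*sum|x_k| + ((1-A)/(m-1))*sum|x_k - x_{k-1}|,
    so amplitude changes influence the pattern distribution.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * delay
    weights = {}
    for i in range(n):
        v = x[i:i + m * delay:delay]
        pattern = tuple(np.argsort(v, kind="stable"))
        w = (A / m) * np.abs(v).sum() \
            + ((1 - A) / (m - 1)) * np.abs(np.diff(v)).sum()
        weights[pattern] = weights.get(pattern, 0.0) + w
    p = np.array(list(weights.values()))
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

# A noisy signal and the same signal with a large spike: AAPE's
# pattern weights react to the spike's amplitude, unlike classical PE.
rng = np.random.default_rng(0)
sig = rng.standard_normal(500)
spiked = sig.copy()
spiked[250] += 20.0
print(aape(sig), aape(spiked))
```

A strictly monotone series has a single pattern and hence zero entropy, which is a quick sanity check on the implementation.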

  7. A minimal model of predator-swarm interactions.

    Science.gov (United States)

    Chen, Yuxin; Kolokolnikov, Theodore

    2014-05-06

    We propose a minimal model of predator-swarm interactions which captures many of the essential dynamics observed in nature. Different outcomes are observed depending on the predator strength. For a 'weak' predator, the swarm is able to escape the predator completely. As the strength is increased, the predator is able to catch up with the swarm as a whole, but the individual prey is able to escape by 'confusing' the predator: the prey forms a ring with the predator at the centre. For higher predator strength, complex chasing dynamics are observed which can become chaotic. For even higher strength, the predator is able to successfully capture the prey. Our model is simple enough to be amenable to a full mathematical analysis, which is used to predict the shape of the swarm as well as the resulting predator-prey dynamics as a function of model parameters. We show that, as the predator strength is increased, there is a transition (owing to a Hopf bifurcation) from confusion state to chasing dynamics, and we compute the threshold analytically. Our analysis indicates that the swarming behaviour is not helpful in avoiding the predator, suggesting that there are other reasons why the species may swarm. The complex shape of the swarm in our model during the chasing dynamics is similar to the shape of a flock of sheep avoiding a shepherd.
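A schematic simulation in the spirit of this model is easy to set up: prey interact through short-range (1/r) repulsion plus linear cohesion and are repelled by the predator, while the predator chases the swarm. The exact coefficients and exponents below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, b, c, dt = 40, 0.3, 0.8, 0.01   # swarm size, predator strength, chase rate, step
prey = rng.standard_normal((N, 2))
pred = np.array([3.0, 0.0])

def step(prey, pred):
    d = prey[:, None, :] - prey[None, :, :]          # pairwise offsets
    r2 = (d ** 2).sum(-1) + np.eye(len(prey))        # avoid self-division
    repel = (d / r2[..., None]).sum(1) / len(prey)   # 1/r mutual repulsion
    attract = prey - prey.mean(0)                    # linear cohesion
    dp = prey - pred
    flee = b * dp / (dp ** 2).sum(-1, keepdims=True) # repulsion from predator
    new_prey = prey + dt * (repel - attract + flee)
    new_pred = pred + dt * c * (prey.mean(0) - pred) # chase swarm centre
    return new_prey, new_pred

for _ in range(2000):
    prey, pred = step(prey, pred)
print(np.linalg.norm(prey - prey.mean(0), axis=1).max())  # swarm radius
```

Varying the predator-strength parameter b in such a sketch is the natural way to explore the escape/confusion/chasing regimes the paper analyzes.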

  8. Permutation entropy analysis of financial time series based on Hill's diversity number

    Science.gov (United States)

    Zhang, Yali; Shang, Pengjian

    2017-12-01

    In this paper, permutation entropy based on Hill's diversity number (Nn,r) is introduced as a new way to assess the complexity of a complex dynamical system such as the stock market. We test the performance of this method with simulated data. Results show that Nn,r with appropriate parameters is more sensitive to changes in the system and describes the trends of complex systems clearly. In addition, we study stock closing price series from six indices (three US and three Chinese) over different periods, and Nn,r quantifies the changes in complexity of the stock market data. Moreover, Nn,r yields richer information and reveals some properties of the differences between the US and Chinese stock indices.
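One plausible reading of such a measure is Hill's diversity number applied to the Bandt-Pompe ordinal-pattern distribution. The sketch below uses that reading; the exact roles of n (embedding order) and r (diversity order) in the paper's Nn,r are assumptions:

```python
import numpy as np

def ordinal_probs(x, n=3, delay=1):
    """Relative frequencies of Bandt-Pompe ordinal patterns of order n."""
    x = np.asarray(x, dtype=float)
    counts = {}
    for i in range(len(x) - (n - 1) * delay):
        pat = tuple(np.argsort(x[i:i + n * delay:delay], kind="stable"))
        counts[pat] = counts.get(pat, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def hill_number(p, r):
    """Hill's diversity number of order r; r -> 1 recovers exp(entropy)."""
    if abs(r - 1.0) < 1e-12:
        return float(np.exp(-(p * np.log(p)).sum()))
    return float((p ** r).sum() ** (1.0 / (1.0 - r)))

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)                 # irregular series
trend = np.cumsum(rng.standard_normal(2000))      # smoother random walk
print(hill_number(ordinal_probs(white), 2),
      hill_number(ordinal_probs(trend), 2))
```

For order n = 3 the maximum diversity is 6 (all patterns equally likely); smoother series concentrate on monotone patterns and score lower.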

  9. A minimal model for two-component dark matter

    International Nuclear Information System (INIS)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.

    2014-01-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  10. Multi-objective optimization model of CNC machining to minimize processing time and environmental impact

    Science.gov (United States)

    Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad

    2017-11-01

    Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; turning is one machining process that can be performed on a CNC machine. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to optimize the machining parameters so as to minimize both the processing time and the environmental impact. This research develops a multi-objective optimization model to minimize the processing time and the environmental impact in the CNC turning process, yielding optimal values of the decision variables, cutting speed and feed rate. The environmental impact is converted from the environmental burden through the use of eco-indicator 99. The model was solved by using the OptQuest optimization software from Oracle Crystal Ball.

  11. BRST cohomology ring in 2D gravity coupled to minimal models

    International Nuclear Information System (INIS)

    Kanno, H.; Sarmadi, M.H.

    1992-08-01

    The ring structure of Lian-Zuckerman states for (q,p) minimal models coupled to gravity is shown to be R=R 0 xC[w,w -1 ] where R 0 is the ring of ghost number zero operators generated by two elements and w is an operator of ghost number -1. Some examples are discussed in detail. For these models the currents are also discussed and their algebra is shown to contain the Virasoro algebra. (author). 21 refs

  12. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four-dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity.

  13. Novel Approach for Lithium-Ion Battery On-Line Remaining Useful Life Prediction Based on Permutation Entropy

    Directory of Open Access Journals (Sweden)

    Luping Chen

    2018-04-01

    The degradation of lithium-ion batteries often leads to electrical system failure. Battery remaining useful life (RUL) prediction can effectively prevent this failure. Battery capacity is usually utilized as a health indicator (HI) for RUL prediction. However, battery capacity is difficult to estimate on-line from monitored parameters. Therefore, there is a great need for a simple, on-line prediction method to address this issue. In this paper, permutation entropy (PE), extracted from the discharge voltage curve, is proposed as a novel HI for analyzing battery degradation. The similarity between PE and battery capacity is then assessed by Pearson and Spearman correlation analyses. Experimental results illustrate the effectiveness of the novel HI and its close similarity to capacity as an indicator of battery fading. Furthermore, we propose a hybrid approach combining the variational mode decomposition (VMD) denoising technique, the autoregressive integrated moving average (ARIMA) model, and GM(1,1) models for RUL prediction. Experimental results illustrate the accuracy of the proposed approach for on-line lithium-ion battery RUL prediction.

  14. Non-minimal inflation revisited

    International Nuclear Information System (INIS)

    Nozari, Kourosh; Shafizadeh, Somayeh

    2010-01-01

    We reconsider an inflationary model in which the inflaton field is non-minimally coupled to gravity. We study the parameter space of the model up to the second (and in some cases third) order in the slow-roll parameters. We calculate the inflation parameters in both the Jordan and Einstein frames, and the results are compared between the two frames and with observations. Using the recent observational data from the combined WMAP5+SDSS+SNIa datasets, we study the constraints imposed on our model parameters, especially the non-minimal coupling ξ.

  15. On the topology of the inflaton field in minimal supergravity models

    Science.gov (United States)

    Ferrara, Sergio; Fré, Pietro; Sorin, Alexander S.

    2014-04-01

    We consider global issues in minimal supergravity models where a single field inflaton potential emerges. In a particular case we reproduce the Starobinsky model and its description dual to a certain formulation of R + R 2 supergravity. For definiteness we confine our analysis to spaces at constant curvature, either vanishing or negative. Five distinct models arise: two flat models, with respectively a quadratic and a quartic potential, and three based on the SU(1,1)/U(1) space, where its distinct isometries, elliptic, hyperbolic and parabolic, are gauged. Fayet-Iliopoulos terms are introduced in a geometric way and they turn out to be a crucial ingredient in order to describe the de Sitter inflationary phase of the Starobinsky model.

  16. On the Topology of the Inflaton Field in Minimal Supergravity Models

    CERN Document Server

    Ferrara, Sergio; Sorin, Alexander S

    2014-01-01

    We consider global issues in minimal supergravity models where a single field inflaton potential emerges. In a particular case we reproduce the Starobinsky model and its description dual to a certain formulation of R+R^2 supergravity. For definiteness we confine our analysis to spaces at constant curvature, either vanishing or negative. Five distinct models arise, two flat models with respectively a quadratic and a quartic potential and three based on the SU(1,1)/U(1) space where its distinct isometries, elliptic, hyperbolic and parabolic are gauged. Fayet-Iliopoulos terms are introduced in a geometric way and they turn out to be a crucial ingredient in order to describe the de Sitter inflationary phase of the Starobinsky model.

  17. Matching allele dynamics and coevolution in a minimal predator-prey replicator model

    International Nuclear Information System (INIS)

    Sardanyes, Josep; Sole, Ricard V.

    2008-01-01

    A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities, with a strength of interaction influenced by a pair of haploid diallelic loci, is studied with a deterministic, time-continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency, the more efficient genotype asymptotically achieves lower stationary concentrations.

  18. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ 3 that is an extremal of the functional ∫ (H 2 /K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  19. On exotic supersymmetries of the φ1,3 deformation of minimal models

    International Nuclear Information System (INIS)

    Kadiri, A.; Saidi, E.H.; Zerouaoui, S.J.; Sedra, M.B.

    1994-07-01

    Using algebraic and field theoretical methods, we study the fractional spin symmetries of the φ 1,3 deformation of minimal models. The particular example of the D=2 three state tricritical Potts model is examined in detail. Various models based on subalgebras and appropriate discrete automorphism groups of the two dimensional fractional spin algebra are obtained. General features such as superspace and superfield representations, the U q (sl 2 ) symmetry, the spontaneous exotic supersymmetry breaking, relations with the N=2 Landau Ginzburg models as well as other things are discussed. (author). 24 refs

  20. Scattering matrices for Φ1,2 perturbed conformal minimal models in absence of kink states

    International Nuclear Information System (INIS)

    Koubek, A.; Martins, M.J.; Mussardo, G.

    1991-05-01

    We determine the spectrum and the factorizable S-matrices of the massive excitations of the nonunitary minimal models M 2,2n+1 perturbed by the operator Φ 1,2 . These models present no kinks as asymptotic states, as follows from the reduction of the Zhiber-Mikhailov-Shabat model with respect to the quantum group SL(2) q found by Smirnov. We also give the whole set of S-matrices of the nonunitary minimal model M 2,9 perturbed by the operator Φ 1,4 , which is related to a RSOS reduction for the Φ 1,2 operator of the unitary model M 8,9 . The thermodynamical Bethe ansatz and the truncated conformal space approach are applied to these scattering theories in order to support their interpretation. (orig.)

  1. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps differ from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938

  2. Quantum tests for the linearity and permutation invariance of Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Hillery, Mark [Department of Physics, Hunter College of the City University of New York, 695 Park Avenue, New York, New York 10021 (United States); Andersson, Erika [SUPA, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom)

    2011-12-15

    The goal in function property testing is to determine whether a black-box Boolean function has a certain property or is {epsilon}-far from having that property. The performance of the algorithm is judged by how many calls need to be made to the black box in order to determine, with high probability, which of the two alternatives is the case. Here we present two quantum algorithms, the first to determine whether the function is linear and the second to determine whether it is symmetric (invariant under permutations of the arguments). Both require order {epsilon}{sup -2/3} calls to the oracle, which is better than known classical algorithms. In addition, in the case of linearity testing, if the function is linear, the quantum algorithm identifies which linear function it is. The linearity test combines the Bernstein-Vazirani algorithm and amplitude amplification, while the test to determine whether a function is symmetric uses projective measurements and amplitude amplification.
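For comparison, the classical baseline for linearity testing is the Blum-Luby-Rubinfeld (BLR) test, which checks f(x) ⊕ f(y) = f(x ⊕ y) on random pairs; a GF(2)-linear function always passes, while a function ε-far from linear fails each trial with probability at least ε. This is shown only as the classical counterpart to the quantum algorithms in the abstract:

```python
import numpy as np

def blr_linearity_test(f, n_bits, trials=200, rng=None):
    """Classical BLR linearity test on a Boolean function of n_bits inputs.

    Returns True if f(x) ^ f(y) == f(x ^ y) holds on all sampled pairs.
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(trials):
        x = int(rng.integers(0, 2 ** n_bits))
        y = int(rng.integers(0, 2 ** n_bits))
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

parity = lambda x: bin(x).count("1") % 2          # linear over GF(2)
majority = lambda x: int(bin(x).count("1") > 1)   # 3-bit majority: not linear
print(blr_linearity_test(parity, 3), blr_linearity_test(majority, 3))
```

Parity passes every trial, while majority is caught with overwhelming probability after a few hundred trials.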

  3. A non-minimally coupled quintom dark energy model on the warped DGP brane

    International Nuclear Information System (INIS)

    Nozari, K; Azizi, T; Setare, M R; Behrouz, N

    2009-01-01

    We construct a quintom dark energy model with two non-minimally coupled scalar fields, one quintessence and the other phantom field, confined to the warped Dvali-Gabadadze-Porrati (DGP) brane. We show that this model accounts for crossing of the phantom divide line in appropriate subspaces of the model parameter space. This crossing occurs for both normal and self-accelerating branches of this DGP-inspired setup.

  4. Minimal representations of supersymmetry and 1D N-extended σ-models

    International Nuclear Information System (INIS)

    Toppan, Francesco

    2008-01-01

    We discuss the minimal representations of the 1D N-extended supersymmetry algebra (the Z 2 -graded symmetry algebra of supersymmetric quantum mechanics) linearly realized on a finite number of fields depending on a real parameter t, the time. Their knowledge allows one to construct one-dimensional sigma-models with extended off-shell supersymmetries without using superfields (author)

  5. Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2014-01-01

    We consider bounded parallel-batch scheduling with two models of deterioration, in which the processing time of a job is pj=aj+αt in the first model and pj=a+αjt in the second. The objective is to minimize the makespan. We present O(n log n)-time algorithms for the respective single-machine problems, and we propose fully polynomial-time approximation schemes to solve the identical-parallel-machine and uniform-parallel-machine problems, respectively.
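Under the first deterioration model (pj = aj + αt), a batch started at time t takes max(aj in batch) + αt, so the clock evolves as t ← (1+α)t + max(aj in batch), which is easy to simulate. The batching rule below (sort the jobs, fill full batches of size b, run batches in increasing order of their largest job) is a plausible sketch of such an O(n log n) scheme, not necessarily the paper's exact algorithm:

```python
def makespan(batches, alpha, t0=0.0):
    """Makespan for Model I (p_j = a_j + alpha*t): each batch started at
    time t occupies the machine for max_j a_j + alpha*t."""
    t = t0
    for batch in batches:
        t = (1 + alpha) * t + max(batch)
    return t

def schedule(a, b):
    """Sketch of a batching rule (an assumption, not the paper's proof):
    sort normal processing times, chunk into full batches of size b,
    and run batches in increasing order of their largest job."""
    a = sorted(a)
    batches = [a[i:i + b] for i in range(0, len(a), b)]
    batches.sort(key=max)
    return batches

jobs = [4.0, 1.0, 3.0, 2.0, 5.0, 6.0]
sched = schedule(jobs, b=2)
print(makespan(sched, alpha=0.1))
```

Because later batches are multiplied by fewer (1+α) factors, placing the batches with the largest maxima last reduces the makespan, as the simulation confirms against the reversed order.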

  6. Tensor contraction engine: Abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories

    International Nuclear Information System (INIS)

    Hirata, So

    2003-01-01

    We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted on by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculation to minimize the number of arithmetic operations and storage requirements, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel execution. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ)
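The binary-contraction-order problem the TCE solves for many-body expressions is the same one NumPy's einsum_path exposes for ordinary tensor contractions, which makes for a compact illustration (the tensors and dimensions below are arbitrary, not a many-body example):

```python
import numpy as np

# Contracting T1[a,i] T2[a,b] T3[b,j] -> R[i,j]: the pairwise
# (binary) contraction order determines the FLOP and memory cost,
# which is the optimization the TCE performs symbolically.
a, b, i, j = 200, 200, 10, 10
T1 = np.random.rand(a, i)
T2 = np.random.rand(a, b)
T3 = np.random.rand(b, j)

# einsum_path searches for the cheapest pairwise contraction order
path, info = np.einsum_path('ai,ab,bj->ij', T1, T2, T3, optimize='optimal')
print(info)   # reports the chosen order and its FLOP estimate

result = np.einsum('ai,ab,bj->ij', T1, T2, T3, optimize=path)
```

Here contracting (T1, T2) first produces a small 10×200 intermediate, whereas a poor order would materialize a 200×200-scale intermediate for the same final result.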

  7. A minimally-resolved immersed boundary model for reaction-diffusion problems

    OpenAIRE

    Pal Singh Bhalla, A; Griffith, BE; Patankar, NA; Donev, A

    2013-01-01

    We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blo...

  8. Correlation Functions in Holographic Minimal Models

    CERN Document Server

    Papadodimas, Kyriakos

    2012-01-01

    We compute exact three and four point functions in the W_N minimal models that were recently conjectured to be dual to a higher spin theory in AdS_3. The boundary theory has a large number of light operators that are not only invisible in the bulk but grow exponentially with N even at small conformal dimensions. Nevertheless, we provide evidence that this theory can be understood in a 1/N expansion since our correlators look like free-field correlators corrected by a power series in 1/N . However, on examining these corrections we find that the four point function of the two bulk scalar fields is corrected at leading order in 1/N through the contribution of one of the additional light operators in an OPE channel. This suggests that, to correctly reproduce even tree-level correlators on the boundary, the bulk theory needs to be modified by the inclusion of additional fields. As a technical by-product of our analysis, we describe two separate methods -- including a Coulomb gas type free-field formalism -- that ...

  9. On the topology of the inflaton field in minimal supergravity models

    Energy Technology Data Exchange (ETDEWEB)

    Ferrara, Sergio [Physics Department, Theory Unit, CERN,CH 1211, Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati,Via Enrico Fermi 40, I-00044, Frascati (Italy); Department of Physics and Astronomy, University of California,Los Angeles, CA 90095-1547 (United States); Fré, Pietro [Dipartimento di Fisica, Università di Torino, INFN - Sezione di Torino,via P. Giuria 1, I-10125 Torino (Italy); Sorin, Alexander S. [Bogoliubov Laboratory of Theoretical Physics,and Veksler and Baldin Laboratory of High Energy Physics,Joint Institute for Nuclear Research,141980 Dubna, Moscow Region (Russian Federation)

    2014-04-14

    We consider global issues in minimal supergravity models where a single field inflaton potential emerges. In a particular case we reproduce the Starobinsky model and its description dual to a certain formulation of R+R{sup 2} supergravity. For definiteness we confine our analysis to spaces at constant curvature, either vanishing or negative. Five distinct models arise, two flat models with respectively a quadratic and a quartic potential and three based on the ((SU(1,1))/(U(1))) space where its distinct isometries, elliptic, hyperbolic and parabolic are gauged. Fayet-Iliopoulos terms are introduced in a geometric way and they turn out to be a crucial ingredient in order to describe the de Sitter inflationary phase of the Starobinsky model.

  10. Fock model and Segal-Bargmann transform for minimal representations of Hermitian Lie groups

    DEFF Research Database (Denmark)

    Hilgert, Joachim; Kobayashi, Toshiyuki; Möllers, Jan

    2012-01-01

    For any Hermitian Lie group G of tube type we construct a Fock model of its minimal representation. The Fock space is defined on the minimal nilpotent K_C-orbit X in p_C and the L^2-inner product involves a K-Bessel function as density. Here K is a maximal compact subgroup of G, and g_C=k_C+p_C is a complexified Cartan decomposition. In this realization the space of k-finite vectors consists of holomorphic polynomials on X. The reproducing kernel of the Fock space is calculated explicitly in terms of an I-Bessel function. We further find an explicit formula for a generalized Segal-Bargmann transform which intertwines the Schroedinger and Fock models. Its kernel involves the same I-Bessel function. Using the Segal-Bargmann transform we also determine the integral kernel of the unitary inversion operator in the Schroedinger model, which is given by a J-Bessel function.

  11. On radiative gauge symmetry breaking in the minimal supersymmetric model

    International Nuclear Information System (INIS)

    Gamberini, G.; Ridolfi, G.; Zwirner, F.

    1990-01-01

    We present a critical reappraisal of radiative gauge symmetry breaking in the minimal supersymmetric standard model. We show that a naive use of the renormalization group improved tree-level potential can lead to incorrect conclusions. We specify the conditions under which the above method gives reliable results, by performing a comparison with the results obtained from the full one-loop potential. We also point out how the stability constraint and the conditions for the absence of charge- and colour-breaking minima should be applied. Finally, we comment on the uncertainties affecting the model predictions for physical observables, in particular for the top quark mass. (orig.)

  12. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that the non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as that in the minimal case in the slow-roll limit. Armed with this result, we have studied some concrete non-minimal inflationary models including the chaotic inflation and the natural inflation, in which the inflaton is non-minimally coupled to the gravity. We find that the non-minimal coupling inflation could be eternal in some parameter spaces.

  13. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    Science.gov (United States)

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study of their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective, and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach, and a borderline significant association.
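A minimal sketch of the summary-statistic idea: combine marker Z-statistics through a quadratic form in the inverse LD correlation matrix, and calibrate the null by simulating correlated Z-vectors rather than permuting genotype data. The statistic and calibration below illustrate the concept and are not the paper's exact test:

```python
import numpy as np

def region_stat(z, R):
    """Quadratic form z' R^{-1} z: combines marker Z-scores while
    accounting for LD through the marker correlation matrix R."""
    return float(z @ np.linalg.solve(R, z))

def mc_pvalue(z, R, draws=20000, seed=0):
    """Null calibration by simulating Z ~ N(0, R) directly --
    no genotype-level permutations or original data required."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(R)
    null = rng.standard_normal((draws, len(z))) @ L.T
    stats = np.einsum('ij,ij->i', null, np.linalg.solve(R, null.T).T)
    return (np.sum(stats >= region_stat(z, R)) + 1) / (draws + 1)

R = np.array([[1.0, 0.6], [0.6, 1.0]])   # pairwise LD between two markers
z = np.array([2.5, 2.2])                 # univariate marker Z-statistics
p = mc_pvalue(z, R)
print(p)
```

Under the null, z' R^{-1} z follows a chi-square distribution with one degree of freedom per marker, so the Monte Carlo step can also be replaced by a closed-form tail probability.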

  14. Random regret minimization : Exploration of a new choice model for environmental and resource economics

    NARCIS (Netherlands)

    Thiene, M.; Boeri, M.; Chorus, C.G.

    2011-01-01

    This paper introduces the discrete choice model-paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM-approach has been very recently developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the

  15. An AdS3 dual for minimal model CFTs

    International Nuclear Information System (INIS)

    Gaberdiel, Matthias R.; Gopakumar, Rajesh

    2011-01-01

    We propose a duality between the 2d W N minimal models in the large N't Hooft limit, and a family of higher spin theories on AdS 3 . The 2d conformal field theories (CFTs) can be described as Wess-Zumino-Witten coset models, and include, for N=2, the usual Virasoro unitary series. The dual bulk theory contains, in addition to the massless higher spin fields, two complex scalars (of equal mass). The mass is directly related to the 't Hooft coupling constant of the dual CFT. We give convincing evidence that the spectra of the two theories match precisely for all values of the 't Hooft coupling. We also show that the renormalization group flows in the 2d CFT agree exactly with the usual AdS/CFT prediction of the gravity theory. Our proposal is in many ways analogous to the Klebanov-Polyakov conjecture for an AdS 4 dual for the singlet sector of large N vector models.
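For reference, the coset, central charge, 't Hooft coupling, and bulk scalar mass in this duality are commonly quoted as follows (conventions may differ between references, so these expressions should be checked against the original paper):

```latex
\frac{su(N)_k \oplus su(N)_1}{su(N)_{k+1}},
\qquad
c = (N-1)\left[1 - \frac{N(N+1)}{(N+k)(N+k+1)}\right],
\qquad
\lambda = \frac{N}{N+k},
\qquad
M^2 = -(1-\lambda^2).
```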

  16. A minimal model for multiple epidemics and immunity spreading.

    Science.gov (United States)

    Sneppen, Kim; Trusina, Ala; Jensen, Mogens H; Bornholdt, Stefan

    2010-10-18

    Pathogens and parasites are ubiquitous in the living world, being limited only by availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fate is coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to simple mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. This obtained immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.

  17. Mixed-order phase transition in a minimal, diffusion-based spin model.

    Science.gov (United States)

    Fronczak, Agata; Fronczak, Piotr

    2016-07-01

    In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.

  18. On the representation matrices of the spin permutation group. [for atomic and molecular electronic structures

    Science.gov (United States)

    Wilson, S.

    1977-01-01

    A method is presented for the determination of the representation matrices of the spin permutation group (symmetric group), a detailed knowledge of these matrices being required in the study of the electronic structure of atoms and molecules. The method is characterized by the use of two different coupling schemes. Unlike the Yamanouchi spin algebraic scheme, the method is not recursive. The matrices for the fundamental transpositions can be written down directly in one of the two bases. The method results in a computationally significant reduction in the number of matrix elements that have to be stored when compared with, say, the standard Young tableaux group theoretical approach.

  19. Structural differences of matrix metalloproteinases. Homology modeling and energy minimization of enzyme-substrate complexes

    DEFF Research Database (Denmark)

    Terp, G E; Christensen, I T; Jørgensen, Flemming Steen

    2000-01-01

    Matrix metalloproteinases are extracellular enzymes taking part in the remodeling of extracellular matrix. The structures of the catalytic domain of MMP1, MMP3, MMP7 and MMP8 are known, but structures of enzymes belonging to this family still remain to be determined. A general approach...... to the homology modeling of matrix metalloproteinases, exemplified by the modeling of MMP2, MMP9, MMP12 and MMP14 is described. The models were refined using an energy minimization procedure developed for matrix metalloproteinases. This procedure includes incorporation of parameters for zinc and calcium ions...... in the AMBER 4.1 force field, applying a non-bonded approach and a full ion charge representation. Energy minimization of the apoenzymes yielded structures with distorted active sites, while reliable three-dimensional structures of the enzymes containing a substrate in active site were obtained. The structural...

  20. A hybrid approach for minimizing makespan in permutation flowshop scheduling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Balasundaram, R.; Baskar, N.

    2017-01-01

    This work proposes a hybrid approach for solving traditional flowshop scheduling problems to reduce the makespan (total completion time). To solve scheduling problems, a combination of Decision Tree (DT) and Scatter Search (SS) algorithms is used. Initially, the DT is used to generate a seed...... solution, which is then given as input to the SS to obtain optimal / near-optimal makespan solutions. The DT uses the entropy function to convert the given problem into a tree-structured format / set of rules. The SS provides an extensive investigation of the search space through diversification...
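The makespan objective these hybrids optimize has a simple recursive definition: in a permutation flowshop, the completion time of each job on each machine is the larger of the two preceding completion times plus the processing time. A minimal sketch of that recurrence (not the paper's DT/SS code; the data layout is an assumption):

```python
def makespan(order, proc):
    """Makespan of a permutation flowshop schedule.

    order : job indices in processing sequence
    proc  : proc[j][m] = processing time of job j on machine m
    """
    n_machines = len(proc[0])
    # C[m] holds the completion time of the most recent job on machine m.
    C = [0] * n_machines
    for j in order:
        C[0] += proc[j][0]
        for m in range(1, n_machines):
            # A job starts on machine m only after it finishes on m-1
            # and machine m has finished the previous job.
            C[m] = max(C[m], C[m - 1]) + proc[j][m]
    return C[-1]

proc = [[2, 3], [1, 2]]   # two jobs, two machines
best = min((makespan(p, proc), p) for p in [(0, 1), (1, 0)])
```

Heuristics such as DT/SS search over the `order` permutation; only the evaluation of a candidate order is shown here.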

  1. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  2. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  3. Minimal Walking Technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; A. Ryttov, T.

    2007-01-01

    Different theoretical and phenomenological aspects of the Minimal and Nonminimal Walking Technicolor theories have recently been studied. The goal here is to make the models ready for collider phenomenology. We do this by constructing the low energy effective theory containing scalars......, pseudoscalars, vector mesons and other fields predicted by the minimal walking theory. We construct their self-interactions and interactions with standard model fields. Using the Weinberg sum rules, opportunely modified to take into account the walking behavior of the underlying gauge theory, we find...... interesting relations for the spin-one spectrum. We derive the electroweak parameters using the newly constructed effective theory and compare the results with the underlying gauge theory. Our analysis is sufficiently general such that the resulting model can be used to represent a generic walking technicolor...

  4. Index to Nuclear Safety. A technical progress review by chronology, permuted title, and author. Vol. 11, No. 1 through Vol. 15, No. 6

    International Nuclear Information System (INIS)

    Cottrell, W.B.; Klein, A.

    1975-04-01

    This issue of the Index to Nuclear Safety covers only articles included in Nuclear Safety, Vol. 11, No. 1, through Vol. 15, No. 6. This index is presented in three sections as follows: Chronological List of Articles by Volume; Permuted Title (KWIC) Index; and Author Index. (U.S.)

  5. Consumer preferences for alternative fuel vehicles: Comparing a utility maximization and a regret minimization model

    International Nuclear Information System (INIS)

    Chorus, Caspar G.; Koetse, Mark J.; Hoen, Anco

    2013-01-01

    This paper presents a utility-based and a regret-based model of consumer preferences for alternative fuel vehicles, based on a large-scale stated choice-experiment held among company car leasers in The Netherlands. Estimation and application of random utility maximization and random regret minimization discrete choice models shows that while the two models achieve almost identical fit with the data and differ only marginally in terms of predictive ability, they generate rather different choice-probability simulations and policy implications. The most eye-catching difference between the two models is that the random regret minimization model accommodates a compromise-effect, as it assigns relatively high choice probabilities to alternative fuel vehicles that perform reasonably well on each dimension instead of having a strong performance on some dimensions and a poor performance on others. - Highlights: • Utility- and regret-based models of preferences for alternative fuel vehicles. • Estimation based on stated choice-experiment among Dutch company car leasers. • Models generate rather different choice probabilities and policy implications. • Regret-based model accommodates a compromise-effect
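The compromise effect noted above follows from the structure of the RRM regret function, which in Chorus's formulation sums binary regret over all competing alternatives and attributes and then applies a logit over negative regrets. A sketch under that assumption (attribute values and taste parameters below are purely illustrative):

```python
import numpy as np

def rrm_probabilities(X, beta):
    """Random regret minimization choice probabilities.

    X    : (alternatives, attributes) attribute matrix
    beta : attribute taste parameters
    Regret of i:  sum over j != i, attributes m of
                  ln(1 + exp(beta_m * (x_jm - x_im))).
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    regret = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                # Regret accrues on attributes where alternative j beats i.
                regret[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    expneg = np.exp(-regret)
    return expneg / expneg.sum()

# Three vehicles scored on two attributes; the middle one is the compromise.
X = [[1.0, 0.0],
     [0.5, 0.5],
     [0.0, 1.0]]
probs = rrm_probabilities(X, np.array([1.0, 1.0]))
```

With symmetric tastes, the middle alternative, reasonable on both attributes, accrues the least regret and receives the highest choice probability, which is exactly the compromise effect the abstract describes.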

  6. Topological gravity with minimal matter

    International Nuclear Information System (INIS)

    Li Keke

    1991-01-01

    Topological minimal matter, obtained by twisting the minimal N = 2 superconformal field theory, is coupled to two-dimensional topological gravity. The free field formulation of the coupled system allows explicit representations of BRST charge, physical operators and their correlation functions. The contact terms of the physical operators may be evaluated by extending the argument used in a recent solution of topological gravity without matter. The consistency of the contact terms in correlation functions implies recursion relations which coincide with the Virasoro constraints derived from the multi-matrix models. Topological gravity with minimal matter thus provides the field theoretic description for the multi-matrix models of two-dimensional quantum gravity. (orig.)

  7. Transportation Mode Detection Based on Permutation Entropy and Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2015-01-01

    Full Text Available With the increasing prevalence of GPS devices and mobile phones, transportation mode detection based on GPS data has been a hot topic in GPS trajectory data analysis. Transportation modes such as walking, driving, bus, and taxi denote an important characteristic of the mobile user. Longitude, latitude, speed, acceleration, and direction are usually used as features in transportation mode detection. In this paper, first, we explore the possibility of using Permutation Entropy (PE of speed, a measure of complexity and uncertainty of GPS trajectory segment, as a feature for transportation mode detection. Second, we employ Extreme Learning Machine (ELM to distinguish GPS trajectory segments of different transportation modes. Finally, to evaluate the performance of the proposed method, we conduct experiments on the GeoLife dataset. Experimental results show that we can get more than 50% accuracy when only using PE as a feature to characterize trajectory sequence. PE can indeed be effectively used to detect transportation mode from GPS trajectory. The proposed method has much better accuracy and faster running time than the methods based on the other features and SVM classifier.
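The PE feature used above counts the ordinal patterns of embedding dimension D in a sequence and takes the Shannon entropy of their empirical distribution, often normalized by log D!. A minimal sketch; the choice of D = 3 and the normalization are assumptions, not the paper's exact settings:

```python
import math
from collections import Counter

def permutation_entropy(series, D=3, normalize=True):
    """Permutation entropy of a 1-D sequence (normalized to [0, 1])."""
    # Each length-D window is reduced to its ordinal pattern: the
    # permutation that sorts the window's values.
    patterns = Counter(
        tuple(sorted(range(D), key=lambda k: series[i + k]))
        for i in range(len(series) - D + 1)
    )
    total = sum(patterns.values())
    H = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return H / math.log(math.factorial(D)) if normalize else H
```

A perfectly monotone speed trace contains a single ordinal pattern, so its PE is 0; erratic traces approach 1, which is what makes PE usable as a mode-detection feature.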

  8. Phenomenological study of the minimal R-symmetric supersymmetric standard model

    International Nuclear Information System (INIS)

    Diessner, Philip

    2016-01-01

    The Standard Model (SM) of particle physics gives a comprehensive description of numerous phenomena concerning the fundamental components of nature. Still, open questions and a clouded understanding of the underlying structure remain. Supersymmetry is a well motivated extension that may account for the observed density of dark matter in the universe and solve the hierarchy problem of the SM. The minimal supersymmetric extension of the SM (MSSM) provides solutions to these challenges. Furthermore, it predicts new particles in reach of current experiments. However, the model has its own theoretical challenges and is under fire from measurements provided by the Large Hadron Collider (LHC). Nevertheless, the concept of supersymmetry has an elegance which not only shines in the MSSM. Hence, it is also of interest to examine non-minimal supersymmetric models. They have benefits similar to the MSSM and may solve its shortcomings. R-symmetry is the only global symmetry allowed that does not commute with supersymmetry and Lorentz symmetry. Thus, extending a supersymmetric model with R-symmetry is a theoretically well motivated endeavor to achieve the complete symmetry content of a field theory. Such a model provides a natural explanation for non-discovery in the early runs of the LHC and leads to further predictions distinct from those of the MSSM. The work described in this thesis contributes to the effort by studying the minimal R-symmetric supersymmetric extension of the SM (MRSSM). Important aspects of its physics and the dependence of observables on the parameter space of the MRSSM are investigated. The discovery of a scalar particle compatible with the Higgs boson of the SM at the LHC was announced in 2012. It is the first and crucial task of this thesis to understand the underlying mechanisms leading to the correct Higgs boson mass prediction in the MRSSM. 
Then, the relevant regions of parameter space are investigated and it is shown that they are also in agreement

  9. Phenomenological study of the minimal R-symmetric supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Diessner, Philip

    2016-10-20

    The Standard Model (SM) of particle physics gives a comprehensive description of numerous phenomena concerning the fundamental components of nature. Still, open questions and a clouded understanding of the underlying structure remain. Supersymmetry is a well motivated extension that may account for the observed density of dark matter in the universe and solve the hierarchy problem of the SM. The minimal supersymmetric extension of the SM (MSSM) provides solutions to these challenges. Furthermore, it predicts new particles in reach of current experiments. However, the model has its own theoretical challenges and is under fire from measurements provided by the Large Hadron Collider (LHC). Nevertheless, the concept of supersymmetry has an elegance which not only shines in the MSSM. Hence, it is also of interest to examine non-minimal supersymmetric models. They have benefits similar to the MSSM and may solve its shortcomings. R-symmetry is the only global symmetry allowed that does not commute with supersymmetry and Lorentz symmetry. Thus, extending a supersymmetric model with R-symmetry is a theoretically well motivated endeavor to achieve the complete symmetry content of a field theory. Such a model provides a natural explanation for non-discovery in the early runs of the LHC and leads to further predictions distinct from those of the MSSM. The work described in this thesis contributes to the effort by studying the minimal R-symmetric supersymmetric extension of the SM (MRSSM). Important aspects of its physics and the dependence of observables on the parameter space of the MRSSM are investigated. The discovery of a scalar particle compatible with the Higgs boson of the SM at the LHC was announced in 2012. It is the first and crucial task of this thesis to understand the underlying mechanisms leading to the correct Higgs boson mass prediction in the MRSSM. 
Then, the relevant regions of parameter space are investigated and it is shown that they are also in agreement

  10. Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    Sakuma, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and mas...

  11. Simulated lumbar minimally invasive surgery educational model with didactic and technical components.

    Science.gov (United States)

    Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James

    2013-10-01

    The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need for maximizing efficiency and proficiency have encouraged advancements in simulator-based learning models. To confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Predidactic and postdidactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Average mean scores on the written didactic test improved from 78% to 100%. The technical component scores for fluoroscopic guidance improved from 58.8 to 52.9. Technical score for computed tomography-navigated guidance also improved from 28.3 to 26.6. Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and outcomes for patients.

  12. A minimal model for multiple epidemics and immunity spreading.

    Directory of Open Access Journals (Sweden)

    Kim Sneppen

    Full Text Available Pathogens and parasites are ubiquitous in the living world, being limited only by availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fate is coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to simple mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. This obtained immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.

  13. The behaviour of random forest permutation-based variable importance measures under predictor correlation.

    Science.gov (United States)

    Nicodemus, Kristin K; Malley, James D; Strobl, Carolin; Ziegler, Andreas

    2010-02-27

    Random forests (RF) have been increasingly used in applications such as genome-wide association and microarray studies where predictor correlation is frequently observed. Recent works on permutation-based variable importance measures (VIMs) used in RF have come to apparently contradictory conclusions. We present an extended simulation study to synthesize results. In the case when both predictor correlation was present and predictors were associated with the outcome (HA), the unconditional RF VIM attributed a higher share of importance to correlated predictors, while under the null hypothesis that no predictors are associated with the outcome (H0) the unconditional RF VIM was unbiased. Conditional VIMs showed a decrease in VIM values for correlated predictors versus the unconditional VIMs under HA and was unbiased under H0. Scaled VIMs were clearly biased under HA and H0. Unconditional unscaled VIMs are a computationally tractable choice for large datasets and are unbiased under the null hypothesis. Whether the observed increased VIMs for correlated predictors may be considered a "bias" - because they do not directly reflect the coefficients in the generating model - or if it is a beneficial attribute of these VIMs is dependent on the application. For example, in genetic association studies, where correlation between markers may help to localize the functionally relevant variant, the increased importance of correlated predictors may be an advantage. On the other hand, we show examples where this increased importance may result in spurious signals.
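The unconditional, unscaled permutation VIM discussed above can be sketched model-agnostically: importance of a predictor is the mean drop in accuracy after permuting that predictor's column. The toy predictor below stands in for a fitted random forest, and all names are illustrative:

```python
import numpy as np

def permutation_vim(predict, X, y, rng=None, n_repeats=20):
    """Mean accuracy drop when each predictor column is permuted in turn."""
    rng = np.random.default_rng(rng)
    base = np.mean(predict(X) == y)
    vim = np.zeros(X.shape[1])
    for col in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            # Permuting the column breaks its link with the outcome.
            Xp[:, col] = rng.permutation(Xp[:, col])
            drops.append(base - np.mean(predict(Xp) == y))
        vim[col] = np.mean(drops)
    return vim

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 2)).astype(float)
y = X[:, 0]                        # outcome depends on column 0 only
vim = permutation_vim(lambda X: (X[:, 0] > 0.5).astype(float), X, y, rng=1)
```

Under correlation between columns, a real RF can spread its splits across correlated predictors, which is why the unconditional VIM attributes importance to all of them under the alternative, as the abstract reports.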

  14. The Friedberg-Lee symmetry and minimal seesaw model

    International Nuclear Information System (INIS)

    He Xiaogang; Liao Wei

    2009-01-01

    The Friedberg-Lee (FL) symmetry is generated by a transformation of a fermionic field q to q+ξz. This symmetry puts very restrictive constraints on allowed terms in a Lagrangian. Applying this symmetry to N fermionic fields, we find that the number of independent fields is reduced to N-1 if the fields have gauge interaction or the transformation is a local one. Using this property, we find that a seesaw model originally with three generations of left- and right-handed neutrinos, with the left-handed neutrinos unaffected but the right-handed neutrinos transformed under the local FL translation, is reduced to an effective theory of minimal seesaw which has only two right-handed neutrinos. The symmetry predicts that one of the light neutrino masses must be zero.
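The predicted zero mass follows from a standard rank argument: with only two right-handed neutrinos the Dirac mass matrix is 3x2, so the seesaw formula yields a light-neutrino mass matrix of rank at most two. In textbook seesaw notation (a generic argument, not the paper's derivation):

```latex
% Seesaw with two right-handed neutrinos:
% m_D is a 3x2 matrix, M_R is 2x2 and invertible.
m_\nu = -\, m_D\, M_R^{-1}\, m_D^{T}
\quad\Rightarrow\quad
\operatorname{rank}(m_\nu) \le \operatorname{rank}(m_D) \le 2,
\qquad \det m_\nu = 0,
```

so at least one light-neutrino mass eigenvalue vanishes.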

  15. Viability of minimal left–right models with discrete symmetries

    Directory of Open Access Journals (Sweden)

    Wouter Dekens

    2014-12-01

    Full Text Available We provide a systematic study of minimal left–right models that are invariant under P, C, and/or CP transformations. Due to the high amount of symmetry such models are quite predictive in the amount and pattern of CP violation they can produce or accommodate at lower energies. Using current experimental constraints some of the models can already be excluded. For this purpose we provide an overview of the experimental constraints on the different left–right symmetric models, considering bounds from colliders, meson-mixing and low-energy observables, such as beta decay and electric dipole moments. The features of the various Yukawa and Higgs sectors are discussed in detail. In particular, we give the Higgs potentials for each case, discuss the possible vacua and investigate the amount of fine-tuning present in these potentials. It turns out that all left–right models with P, C, and/or CP symmetry have a high degree of fine-tuning, unless supplemented with mechanisms to suppress certain parameters. The models that are symmetric under both P and C are not in accordance with present observations, whereas the models with either P, C, or CP symmetry cannot be excluded by data yet. To further constrain and discriminate between the models measurements of B-meson observables at LHCb and B-factories will be especially important, while measurements of the EDMs of light nuclei in particular could provide complementary tests of the LRMs.

  16. A Minimal Model Describing Hexapedal Interlimb Coordination: The Tegotae-Based Approach

    Directory of Open Access Journals (Sweden)

    Dai Owaki

    2017-06-01

    Full Text Available Insects exhibit adaptive and versatile locomotion despite their minimal neural computing. Such locomotor patterns are generated via coordination between leg movements, i.e., an interlimb coordination, which is largely controlled in a distributed manner by neural circuits located in thoracic ganglia. However, the mechanism responsible for the interlimb coordination still remains elusive. Understanding this mechanism will help us to elucidate the fundamental control principle of animals' agile locomotion and to realize robots with legs that are truly adaptive and could not be developed solely by conventional control theories. This study aims at providing a “minimal” model of the interlimb coordination mechanism underlying hexapedal locomotion, in the hope that a single control principle could satisfactorily reproduce various aspects of insect locomotion. To this end, we introduce a novel concept we named “Tegotae,” a Japanese concept describing the extent to which a perceived reaction matches an expectation. By using the Tegotae-based approach, we show that a surprisingly systematic design of local sensory feedback mechanisms essential for the interlimb coordination can be realized. We also use a hexapod robot we developed to show that our mathematical model of the interlimb coordination mechanism satisfactorily reproduces various insects' gait patterns.

  17. Minimal string theories and integrable hierarchies

    Science.gov (United States)

    Iyer, Ramakrishnan

    Well-defined, non-perturbative formulations of the physics of string theories in specific minimal or superminimal model backgrounds can be obtained by solving matrix models in the double scaling limit. They provide us with the first examples of completely solvable string theories. Despite being relatively simple compared to higher dimensional critical string theories, they furnish non-perturbative descriptions of interesting physical phenomena such as geometrical transitions between D-branes and fluxes, tachyon condensation and holography. The physics of these theories in the minimal model backgrounds is succinctly encoded in a non-linear differential equation known as the string equation, along with an associated hierarchy of integrable partial differential equations (PDEs). The bosonic string in (2,2m-1) conformal minimal model backgrounds and the type 0A string in (2,4m) superconformal minimal model backgrounds have the Korteweg-de Vries system, while type 0B in (2,4m) backgrounds has the Zakharov-Shabat system. The integrable PDE hierarchy governs flows between backgrounds with different m. In this thesis, we explore this interesting connection between minimal string theories and integrable hierarchies further. We uncover the remarkable role that an infinite hierarchy of non-linear differential equations plays in organizing and connecting certain minimal string theories non-perturbatively. We are able to embed the type 0A and 0B (A,A) minimal string theories into this single framework. The string theories arise as special limits of a rich system of equations underpinned by an integrable system known as the dispersive water wave hierarchy. We find that there are several other string-like limits of the system, and conjecture that some of them are type IIA and IIB (A,D) minimal string backgrounds. We explain how these and several other string-like special points arise and are connected. In some cases, the framework endows the theories with a non

  18. Permutation entropy of finite-length white-noise time series.

    Science.gov (United States)

    Little, Douglas J; Kane, Deb M

    2016-08-01

    Permutation entropy (PE) is commonly used to discriminate complex structure from white noise in a time series. While the PE of white noise is well understood in the long time-series limit, analysis in the general case is currently lacking. Here the expectation value and variance of white-noise PE are derived as functions of the number of ordinal pattern trials, N, and the embedding dimension, D. It is demonstrated that the probability distribution of the white-noise PE converges to a χ^{2} distribution with D!-1 degrees of freedom as N becomes large. It is further demonstrated that the PE variance for an arbitrary time series can be estimated as the variance of a related metric, the Kullback-Leibler entropy (KLE), allowing the qualitative N≫D! condition to be recast as a quantitative estimate of the N required to achieve a desired PE calculation precision. Application of this theory to statistical inference is demonstrated in the case of an experimentally obtained noise series, where the probability of obtaining the observed PE value was calculated assuming a white-noise time series. Standard statistical inference can be used to draw conclusions whether the white-noise null hypothesis can be accepted or rejected. This methodology can be applied to other null hypotheses, such as discriminating whether two time series are generated from different complex system states.
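    The ordinal-pattern counting that PE is built on is compact enough to sketch directly. The following is an illustrative Python sketch (not the authors' code); the function name `permutation_entropy` and the demo data are chosen here for illustration. It estimates the normalized PE, which approaches 1 for white noise when N ≫ D!:

```python
import math
import random

def permutation_entropy(series, D=3):
    """Normalized permutation entropy for embedding dimension D.

    Counts the ordinal pattern of each of the N = len(series) - D + 1
    windows and returns the Shannon entropy of the pattern distribution,
    normalized by log(D!) so the result lies in [0, 1].
    """
    n_windows = len(series) - D + 1
    counts = {}
    for i in range(n_windows):
        window = series[i:i + D]
        # Ordinal pattern: argsort of the window; ties broken by position.
        pattern = tuple(sorted(range(D), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n_windows) * math.log(c / n_windows) for c in counts.values())
    return h / math.log(math.factorial(D))

random.seed(0)
noise = [random.random() for _ in range(10000)]
pe_noise = permutation_entropy(noise, D=3)       # approaches 1 as N grows
pe_ramp = permutation_entropy(list(range(100)))  # a single pattern: entropy 0
```

    For short series (N comparable to D!) such plug-in estimates are biased low because rare patterns go unobserved, which is precisely the finite-length regime the paper analyzes.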

  19. Esscher transforms and the minimal entropy martingale measure for exponential Lévy models

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Sgarra, C.

    In this paper we offer a systematic survey and comparison of the Esscher martingale transform for linear processes, the Esscher martingale transform for exponential processes, and the minimal entropy martingale measure for exponential lévy models and present some new results in order to give...

  20. Non-minimally coupled tachyon and inflation

    International Nuclear Information System (INIS)

    Piao Yunsong; Huang Qingguo; Zhang Xinmin; Zhang Yuanzhong

    2003-01-01

    In this Letter, we consider a model of a tachyon with a non-minimal coupling to gravity and study its cosmological effects. Regarding inflation, we show that only for a specific coupling of the tachyon to gravity does this model satisfy observations and solve various problems which exist in the single and multi tachyon inflation models. But noting that in string theory the coupling coefficient of the tachyon to gravity is of order g_s, which in general is very small, we can hardly expect that the non-minimal coupling of the tachyon to gravity could provide a reasonable tachyon inflation scenario. Our work may be a meaningful attempt regarding the cosmological effects of a tachyon non-minimally coupled to gravity.

  1. Precision electroweak tests of the minimal and flipped SU(5) supergravity models

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, J.L.; Nanopoulos, D.V.; Park, G.T.; Pois, H.; Yuan, K. (Center for Theoretical Physics, Department of Physics, Texas A&M University, College Station, Texas 77843-4242 (United States) Astroparticle Physics Group, Houston Advanced Research Center (HARC), The Woodlands, Texas 77381 (United States))

    1993-10-01

    We explore the one-loop electroweak radiative corrections in the minimal SU(5) and the no-scale flipped SU(5) supergravity models via explicit calculation of vacuum polarization contributions to the ε_{1,2,3} parameters. Experimentally, ε_{1,2,3} are obtained from a global fit to the CERN LEP observables, and M_W

  2. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
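    A permutation test on the c-statistic can be sketched as follows. This is a generic illustration (pooling the two sets and permuting set membership), not necessarily the exact procedure evaluated in the study, and all names and data are hypothetical:

```python
import random

def c_statistic(pred, outcome):
    """c-statistic (AUC): probability that a random event is ranked above
    a random non-event; ties count one half."""
    events = [p for p, y in zip(pred, outcome) if y == 1]
    nonevents = [p for p, y in zip(pred, outcome) if y == 0]
    wins = sum((e > n) + 0.5 * (e == n) for e in events for n in nonevents)
    return wins / (len(events) * len(nonevents))

def permutation_test(pred_dev, y_dev, pred_val, y_val, n_perm=1000, seed=1):
    """p-value for the observed c-statistic difference between development
    and validation sets, permuting (prediction, outcome) pairs between them."""
    observed = abs(c_statistic(pred_dev, y_dev) - c_statistic(pred_val, y_val))
    pooled = list(zip(pred_dev, y_dev)) + list(zip(pred_val, y_val))
    n_dev = len(pred_dev)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        dev, val = pooled[:n_dev], pooled[n_dev:]
        diff = abs(c_statistic(*zip(*dev)) - c_statistic(*zip(*val)))
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)  # add-one correction
```

    Note the pitfall the study highlights: case-mix differences between the sets change the c-statistic even when the model's regression coefficients are correct, so a significant result from such a test need not indicate an invalid model.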

  3. Higgs decays to dark matter: Beyond the minimal model

    International Nuclear Information System (INIS)

    Pospelov, Maxim; Ritz, Adam

    2011-01-01

    We examine the interplay between Higgs mediation of dark-matter annihilation and scattering on one hand and the invisible Higgs decay width on the other, in a generic class of models utilizing the Higgs portal. We find that, while the invisible width of the Higgs to dark matter is now constrained for a minimal singlet scalar dark matter particle by experiments such as XENON100, this conclusion is not robust within more generic examples of Higgs mediation. We present a survey of simple dark matter scenarios with m_DM < m_h/2 and Higgs portal mediation, where direct-detection signatures are suppressed, while the Higgs width is still dominated by decays to dark matter.

  4. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 +/- 0.33 pixels, while the error is 1.99 +/- 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  5. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  6. A predictive model of suitability for minimally invasive parathyroid surgery in the treatment of primary hyperparathyroidism [corrected].

    LENUS (Irish Health Repository)

    Kavanagh, Dara O

    2012-05-01

    Improved preoperative localizing studies have facilitated minimally invasive approaches in the treatment of primary hyperparathyroidism (PHPT). Success depends on the ability to reliably select patients who have PHPT due to single-gland disease. We propose a model encompassing preoperative clinical, biochemical, and imaging studies to predict a patient's suitability for minimally invasive surgery.

  7. Multiple travelling-wave solutions in a minimal model for cell motility

    KAUST Repository

    Kimpton, L. S.

    2012-07-11

    Two-phase flow models have been used previously to model cell motility. In order to reduce the complexity inherent with describing the many physical processes, we formulate a minimal model. Here we demonstrate that even the simplest 1D, two-phase, poroviscous, reactive flow model displays various types of behaviour relevant to cell crawling. We present stability analyses that show that an asymmetric perturbation is required to cause a spatially uniform, stationary strip of cytoplasm to move, which is relevant to cell polarization. Our numerical simulations identify qualitatively distinct families of travelling-wave solutions that coexist at certain parameter values. Within each family, the crawling speed of the strip has a bell-shaped dependence on the adhesion strength. The model captures the experimentally observed behaviour that cells crawl quickest at intermediate adhesion strengths, when the substrate is neither too sticky nor too slippy. © The Author 2012. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  8. Flaxion: a minimal extension to solve puzzles in the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Ema, Yohei [Department of Physics,The University of Tokyo, Tokyo 133-0033 (Japan); Hamaguchi, Koichi; Moroi, Takeo; Nakayama, Kazunori [Department of Physics,The University of Tokyo, Tokyo 133-0033 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU),University of Tokyo, Kashiwa 277-8583 (Japan)

    2017-01-23

    We propose a minimal extension of the standard model which includes only one additional complex scalar field, flavon, with flavor-dependent global U(1) symmetry. It not only explains the hierarchical flavor structure in the quark and lepton sector (including the neutrino sector), but also solves the strong CP problem by identifying the CP-odd component of the flavon as the QCD axion, which we call flaxion. Furthermore, the flaxion model solves the cosmological puzzles in the standard model, i.e., the origin of dark matter, the baryon asymmetry of the universe, and inflation. We show that the radial component of the flavon can play the role of the inflaton without isocurvature or domain wall problems. The dark matter abundance can be explained by the flaxion coherent oscillation, while the baryon asymmetry of the universe is generated through leptogenesis.

  9. Minimal open strings

    International Nuclear Information System (INIS)

    Hosomichi, Kazuo

    2008-01-01

    We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

  10. Sterile neutrino in a minimal three-generation see-saw model

    Indian Academy of Sciences (India)

    Sterile neutrino in a minimal three-generation see-saw model. [Table 1: relevant right-handed fermion and scalar fields and their transformation properties under SU(2)_L ⊗ U(1)_{I3R} ⊗ U(1)_{B−L} and SU(2)_L ⊗ U(1)_Y, where Y = I_{3R} + (B−L)/2.]

  11. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    Science.gov (United States)

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise constant assumption of the TV model, the reconstructed images often suffer from over-smoothness on the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for the sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV.' To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio measure and edge-preservation.

  12. Neurophysiological model of tinnitus: dependence of the minimal masking level on treatment outcome.

    Science.gov (United States)

    Jastreboff, P J; Hazell, J W; Graham, R L

    1994-11-01

    Validity of the neurophysiological model of tinnitus (Jastreboff, 1990), outlined in this paper, was tested on data from a multicenter trial of tinnitus masking (Hazell et al., 1985). Minimal masking level, intensity match of tinnitus, and the threshold of hearing were evaluated for a total of 382 patients before and after 6 months of treatment with maskers, hearing aids, or combination devices. The data were divided into categories depending on treatment outcome and type of approach used. Results of the analysis revealed that: i) the psychoacoustical description of tinnitus does not possess a predictive value for the outcome of the treatment; ii) minimal masking level changed significantly depending on the treatment outcome, decreasing on average by 5.3 dB in patients reporting improvement, and increasing by 4.9 dB in those whose tinnitus remained the same or worsened; iii) 73.9% of patients reporting improvement had their minimal masking level decreased, as compared with 50.5% for patients not showing improvement, which is at the level of random change; iv) the type of device used has no significant impact on the treatment outcome and minimal masking level change; v) intensity match and threshold of hearing did not exhibit any significant changes which can be related to treatment outcome. These results are fully consistent with the neurophysiological interpretation of mechanisms involved in the phenomenon of tinnitus and its alleviation.

  13. Flattening the inflaton potential beyond minimal gravity

    Directory of Open Access Journals (Sweden)

    Lee Hyun Min

    2018-01-01

    Full Text Available We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.

  14. One loop corrections to the lightest Higgs mass in the minimal η model with a heavy Z'

    International Nuclear Information System (INIS)

    Comelli, D.

    1992-06-01

    We have evaluated the one-loop correction to the bound on the lightest Higgs mass valid in the minimal, E_6-based, supersymmetric η model in the presence of a 'heavy' Z', M_Z' ≥ 1 TeV. The dominant contribution from the fermion-sfermion sector increases the 108 GeV tree-level value by an amount that depends on the top mass in a way that is largely reminiscent of minimal SUSY models. For M_t ≤ 150 GeV and a stop mass M_{t̃} = 1 TeV, the 'light' Higgs mass is always ≤ 130 GeV. (orig.)

  15. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\\prime/p$ and the level $l$.

  16. Signals of non-minimal Higgs sectors at future colliders

    International Nuclear Information System (INIS)

    Akeroyd, A.G.

    1996-08-01

    This thesis concerns study of extended Higgs sectors at future colliders. Such studies are well motivated since enlarged Higgs models are a necessity in many extensions of the Standard Model (SM), although these structures may be considered purely in the context of the SM, to be called the 'non-minimal SM'. The continuous theme of the thesis is the task of distinguishing between the (many) theoretically sound non-minimal Higgs sectors at forthcoming colliders. If a Higgs boson is found it is imperative to know from which model it originates. In particular, the possible differences between the Higgs sectors of the Minimal Supersymmetric Standard Model (MSSM) and the non-minimal SM are highlighted. (author)

  17. Minimal Z' models: present bounds and early LHC reach

    International Nuclear Information System (INIS)

    Salvioni, Ennio; Zwirner, Fabio; Villadoro, Giovanni

    2009-01-01

    We consider 'minimal' Z' models, whose phenomenology is controlled by only three parameters beyond the Standard Model ones: the Z' mass and two effective coupling constants. They encompass many popular models motivated by grand unification, as well as many arising in other theoretical contexts. This parameterization also takes into account both mass and kinetic mixing effects, which we show to be sizable in some cases. After discussing the interplay between the bounds from electroweak precision tests and recent direct searches at the Tevatron, we extend our analysis to estimate the early LHC discovery potential. We consider a center-of-mass energy from 7 towards 10 TeV and an integrated luminosity from 50 to several hundred pb⁻¹, taking all existing bounds into account. We find that the LHC will start exploring virgin land in parameter space for M_Z' around 700 GeV, with lower masses still excluded by the Tevatron and higher masses still excluded by electroweak precision tests. Increasing the energy up to 10 TeV, the LHC will start probing a wider range of Z' masses and couplings, although several hundred pb⁻¹ will be needed to explore the regions of couplings favored by grand unification and to overcome the Tevatron bounds in the mass region around 250 GeV.

  18. Neutron electric dipole moment in the minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Inui, T.; Mimura, Y.; Sakai, N.; Sasaki, T.

    1995-01-01

    The neutron electric dipole moment (EDM) due to the single quark EDM and to the transition EDM is calculated in the minimal supersymmetric standard model. Assuming that the Cabibbo-Kobayashi-Maskawa matrix at the grand unification scale is the only source of CP violation, complex phases are induced in the parameters of soft supersymmetry breaking at low energies. The chargino one-loop diagram is found to give the dominant contribution, of the order of 10⁻²⁷ ∼ 10⁻²⁹ e·cm for the quark EDM, assuming the light chargino mass and the universal scalar mass to be 50 GeV and 100 GeV, respectively. Therefore the neutron EDM in this class of models is difficult to measure experimentally. The gluino one-loop diagram also contributes due to the flavor changing gluino coupling. The transition EDM is found to give dominant contributions for certain parameter regions. (orig.)

  19. A minimal unified model of disease trajectories captures hallmarks of multiple sclerosis

    KAUST Repository

    Kannan, Venkateshan

    2017-03-29

    Multiple Sclerosis (MS) is an autoimmune disease targeting the central nervous system (CNS), causing demyelination and neurodegeneration that lead to accumulation of neurological disability. Here we present a minimal computational model involving the immune system and CNS that generates the principal subtypes of the disease observed in patients. The model captures several key features of MS, especially those that distinguish the chronic progressive phase from the relapse-remitting one. In addition, a rare subtype of the disease, progressive relapsing MS, naturally emerges from the model. The model posits the existence of two key thresholds, one in the immune system and the other in the CNS, that separate dynamically distinct behaviors of the model. Exploring the two-dimensional space of these thresholds, we obtain multiple phases of disease evolution that show greater variation than the clinical classification of MS, thus capturing the heterogeneity that is manifested in patients.

  20. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  1. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  2. Dark matter constraints in the minimal and nonminimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Stephan, A.

    1998-01-01

    We determine the allowed parameter space and the particle spectra of the minimal SUSY standard model (MSSM) and nonminimal SUSY standard model (NMSSM) imposing correct electroweak gauge symmetry breaking and recent experimental constraints. The parameters of the models are evolved with the SUSY renormalization group equations assuming universality at the grand unified scale. Applying the new unbounded from below constraints we can exclude the lightest SUSY particle singlinos and light scalar and pseudoscalar Higgs singlets of the NMSSM. This exclusion removes the experimental possibility to distinguish between the MSSM and NMSSM via the recently proposed search for an additional cascade produced in the decay of the B-ino into the LSP singlino. Furthermore, the effects of the dark matter condition for the MSSM and NMSSM are investigated and the differences concerning the parameter space, the SUSY particle, and Higgs sector are discussed. © 1998 The American Physical Society

  3. Phenomenology of minimal Z’ models: from the LHC to the GUT scale

    Directory of Open Access Journals (Sweden)

    Accomando Elena

    2016-01-01

    Full Text Available We consider a class of minimal abelian extensions of the Standard Model with an extra neutral gauge boson Z′ at the TeV scale. In these scenarios an extended scalar sector and heavy right-handed neutrinos are naturally envisaged. We present some of their striking signatures at the Large Hadron Collider, the most interesting arising from a Z′ decaying to heavy neutrino pairs as well as a heavy scalar decaying to two Standard Model Higgses. Using renormalisation group methods, we characterise the high energy behaviours of these extensions and exploit the constraints imposed by the embedding into a wider GUT scenario.

  4. Neutral current in reduced minimal 3-3-1 model

    International Nuclear Information System (INIS)

    Vu Thi Ngoc Huyen; Hoang Ngoc Long; Tran Thanh Lam; Vo Quoc Phong

    2014-01-01

    This work is devoted to the gauge boson sector of the recently proposed model based on the SU(3)_C ⊗ SU(3)_L ⊗ U(1)_X group with minimal content of leptons and Higgs bosons. The limits on the masses of the bilepton gauge bosons and on the mixing angle among the neutral ones are deduced. Using the Fritzsch ansatz on quark mixing, we show that the third family of quarks should be different from the first two. We obtain a lower bound on the mass of the new heavy neutral gauge boson of 4.032 TeV. Using data on branching decay rates of the Z boson, we can fix the limit on the Z and Z' mixing angle φ as −0.001 ≤ φ ≤ 0.0003. (author)

  5. Neutrino CP violation and sign of baryon asymmetry in the minimal seesaw model

    Science.gov (United States)

    Shimizu, Yusuke; Takagi, Kenta; Tanimoto, Morimitsu

    2018-03-01

    We discuss the correlation between the CP violating Dirac phase of the lepton mixing matrix and the cosmological baryon asymmetry based on the leptogenesis in the minimal seesaw model with two right-handed Majorana neutrinos and the trimaximal mixing for neutrino flavors. The sign of the CP violating Dirac phase at low energy is fixed by the observed cosmological baryon asymmetry since there is only one phase parameter in the model. According to the recent T2K and NOνA data of the CP violation, the Dirac neutrino mass matrix of our model is fixed only for the normal hierarchy of neutrino masses.

  6. Predictions for m_t and M_W in minimal supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, O. [Imperial College, London (United Kingdom). High Energy Physics Group; Cavanaugh, R. [Fermi National Accelerator Lab., Batavia, IL (United States); Illinois Univ., Chicago, IL (United States). Dept. of Physics; Roeck, A. de [European Lab. for Particle Physics (CERN), Geneva (Switzerland); Universitaire Instelling Antwerpen, Wilrijk (Belgium); Ellis, J.R. [European Lab. for Particle Physics (CERN), Geneva (Switzerland); Flaecher, H. [Rochester Univ., NY (United States). Dept. of Physics and Astronomy; Heinemeyer, S. [Instituto de Fisica de Cantabria, Santander (Spain); Isidori, G. [INFN, Laboratori Nazionali di Frascati (Italy); Technische Univ. Muenchen (Germany). Inst. for Advanced Study; Olive, K.A. [Minnesota Univ., Minnesota, MN (United States). William I. Fine Theoretical Physics Institute; Ronga, F.J. [ETH Zuerich (Switzerland). Institute for Particle Physics; Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2009-12-15

    Using a frequentist analysis of experimental constraints within two versions of the minimal supersymmetric extension of the Standard Model, we derive the predictions for the top quark mass, m_t, and the W boson mass, M_W. We find that the supersymmetric predictions for both m_t and M_W, obtained by incorporating all the relevant experimental information and state-of-the-art theoretical predictions, are highly compatible with the experimental values with small remaining uncertainties, yielding an improvement compared to the case of the Standard Model. (orig.)

  7. Molecular symmetry: Why permutation-inversion (PI) groups don't render the point groups obsolete

    Science.gov (United States)

    Groner, Peter

    2018-01-01

    The analysis of spectra of molecules with internal large-amplitude motions (LAMs) requires molecular symmetry (MS) groups that are larger than and significantly different from the more familiar point groups. MS groups are described often by the permutation-inversion (PI) group method. It is shown that point groups still can and should play a significant role together with the PI groups for a class of molecules with internal rotors. In molecules of this class, several simple internal rotors are attached to a rigid molecular frame. The PI groups for this class are semidirect products like H ^ F, where the invariant subgroup H is a direct product of cyclic groups and F is a point group. This result is used to derive meaningful labels for MS groups, and to derive correlation tables between MS groups and point groups. MS groups of this class have many parallels to space groups of crystalline solids.

  8. A new mathematical model for single machine batch scheduling problem for minimizing maximum lateness with deteriorating jobs

    Directory of Open Access Journals (Sweden)

    Ahmad Zeraatkar Moghaddam

    2012-01-01

    Full Text Available This paper presents a mathematical model for the problem of minimizing the maximum lateness on a single machine when deteriorating jobs are delivered to each customer in various size batches. In reality, this issue may arise within a supply chain in which delivering goods to customers entails cost. Under such circumstances, keeping completed jobs to deliver in batches may reduce delivery costs. In the batch scheduling literature, minimizing the maximum lateness is known to be NP-hard; therefore the present problem, which adds delivery costs to the aforementioned objective function, remains NP-hard. In order to solve the proposed model, a simulated annealing meta-heuristic is used, whose parameters are calibrated by the Taguchi approach, and the results are compared to the global optimal values generated by the Lingo 10 software. Furthermore, in order to check the efficiency of the proposed method on larger problem instances, a lower bound is generated. The results are also analyzed based on the effective factors of the problem. A computational study validates the efficiency and the accuracy of the presented model.
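    The simulated annealing component can be illustrated on a stripped-down version of the problem: sequencing jobs on a single machine to minimize maximum lateness, ignoring batching, delivery costs and deterioration. A hedged Python sketch with made-up processing times and due dates:

```python
import math
import random

def max_lateness(seq, proc, due):
    """Maximum lateness L_max of a job sequence on a single machine."""
    t, lmax = 0, float('-inf')
    for j in seq:
        t += proc[j]
        lmax = max(lmax, t - due[j])
    return lmax

def anneal(proc, due, t0=10.0, cooling=0.995, steps=5000, seed=7):
    """Simulated annealing over job sequences with a swap neighbourhood."""
    rng = random.Random(seed)
    seq = list(range(len(proc)))
    cur = best = max_lateness(seq, proc, due)
    best_seq = seq[:]
    temp = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]          # propose a random swap
        cand = max_lateness(seq, proc, due)
        # Always accept improvements; accept worse moves with Boltzmann prob.
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best:
                best, best_seq = cur, seq[:]
        else:
            seq[i], seq[j] = seq[j], seq[i]      # undo the rejected swap
        temp *= cooling
    return best_seq, best
```

    For plain 1||L_max without batching, sequencing by earliest due date (EDD) is optimal, which gives a handy correctness check on small instances.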

  9. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

10. USE OF EXCEL WORKSHEETS WITH USER-FRIENDLY INTERFACE IN BATCH PROCESS (PSBP) TO MINIMIZE THE MAKESPAN

    Directory of Open Access Journals (Sweden)

    Rony Peterson da Rocha

    2014-01-01

Full Text Available In the chemical industry, the need for scheduling is becoming more pronounced, especially in batch production mode. Nowadays, planning industrial activities is a necessity for survival: intense competition requires diversified products delivered in accordance with the requirements of consumers, and these activities demand quick decision making at the lowest possible cost through efficient production scheduling. This work addresses the Permutation Flow Shop scheduling problem, characterized as Production Scheduling in Batch Process (PSBP), with the objective of minimizing the total time to complete the schedule (makespan). One way to approach the production scheduling problem is to formulate it as a mixed integer linear programming (MILP) model and solve it using commercial mathematical programming packages. In this study, an electronic spreadsheet with a user-friendly interface (ESUFI) was developed in Microsoft Excel. The ease of manipulation of the ESUFI is quite evident, as the VBA language allowed a user-friendly interface to be created between the user and the spreadsheet itself. The results showed that the ESUFI can be used for small problems.
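For reference, the makespan of a given job permutation in a flow shop follows from the standard completion-time recursion; a minimal sketch (the record's spreadsheet/MILP formulation is not reproduced here):

```python
def makespan(sequence, proc):
    """Makespan of a permutation flow shop schedule.
    proc[j][m] = processing time of job j on machine m; every job visits the
    machines in the same order, so completion times follow the recursion
    C(j, m) = max(C(j, m-1), C(prev_job, m)) + proc[j][m]."""
    finish = [0.0] * len(proc[0])   # completion time of the latest job, per machine
    for j in sequence:
        t = 0.0                     # completion of job j on the previous machine
        for m in range(len(finish)):
            t = max(t, finish[m]) + proc[j][m]
            finish[m] = t
    return finish[-1]
```

With two jobs and two machines, e.g. `proc = [[3, 2], [1, 4]]`, the two permutations give makespans 9 and 7, so even tiny instances benefit from choosing the sequence.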

  11. Inverse modelling and pulsating torque minimization of salient pole non-sinusoidal synchronous machines

    Energy Technology Data Exchange (ETDEWEB)

    Ait-gougam, Y.; Ibtiouen, R.; Touhami, O. [Laboratoire de Recherche en Electrotechnique, Ecole Nationale Polytechnique, BP 182, El-Harrach 16200 (Algeria); Louis, J.-P.; Gabsi, M. [Systemes et Applications des Technologies de l' Information et de l' Energie (SATIE), CNRS UMR 8029, Ecole Normale Superieure de Cachan, 61 Avenue du President Wilson, 94235 Cachan Cedex (France)

    2008-01-15

Mathematical models of sinusoidal motors are usually obtained using the classical d-q transformation in the case of salient pole synchronous motors with a sinusoidal field distribution. In this paper, a new inverse modelling approach for synchronous motors is presented. This modelling is derived from the properties of constant-torque curves in Concordia's reference frame. It takes into account a non-sinusoidal field distribution: the EMF and the self and mutual inductances have non-sinusoidal variations with respect to the angular rotor position. Both copper losses and torque ripple are minimized by adapted current waveforms calculated from this model. Experimental evaluation was carried out on a DSP-controlled PMSM drive platform. The test results obtained demonstrate the effectiveness of the proposed method in reducing torque ripple. (author)

  12. Lower Bounds in the Asymmetric External Memory Model

    DEFF Research Database (Denmark)

    Jacob, Riko; Sitchinava, Nodari

    2017-01-01

Motivated by the asymmetric read and write costs of emerging non-volatile memory technologies, we study lower bounds for the problems of sorting, permuting, and multiplying a sparse matrix by a dense vector in the asymmetric external memory model (AEM). Given an AEM with internal (symmetric) memory of size M, transfers between symmetric and asymmetric memory in blocks of size B, and a ratio ω between write and read costs, we show an Ω(min{N, (ωN/B) log_{ωM/B}(N/B)}) lower bound for the cost of permuting N input elements. This lower bound also applies to the problem of sorting N elements. This proves ...
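The permuting lower bound quoted in this (truncated) record reads, in display form — this is our reading of the garbled inline formula, reconstructed from the stated parameters N, M, B, and ω:

```latex
\Omega\!\left(\min\left\{\, N,\ \frac{\omega N}{B}\,\log_{\omega M/B}\frac{N}{B} \right\}\right)
```

The first branch corresponds to moving elements one at a time; the second is the classical external-memory permuting bound with the write cost ω folded into both the per-block cost and the base of the logarithm.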

  13. Comparing vector-based and Bayesian memory models using large-scale datasets: User-generated hashtag and tag prediction on Twitter and Stack Overflow.

    Science.gov (United States)

    Stanley, Clayton; Byrne, Michael D

    2016-12-01

    The growth of social media and user-created content on online sites provides unique opportunities to study models of human declarative memory. By framing the task of choosing a hashtag for a tweet and tagging a post on Stack Overflow as a declarative memory retrieval problem, 2 cognitively plausible declarative memory models were applied to millions of posts and tweets and evaluated on how accurately they predict a user's chosen tags. An ACT-R based Bayesian model and a random permutation vector-based model were tested on the large data sets. The results show that past user behavior of tag use is a strong predictor of future behavior. Furthermore, past behavior was successfully incorporated into the random permutation model that previously used only context. Also, ACT-R's attentional weight term was linked to an entropy-weighting natural language processing method used to attenuate high-frequency words (e.g., articles and prepositions). Word order was not found to be a strong predictor of tag use, and the random permutation model performed comparably to the Bayesian model without including word order. This shows that the strength of the random permutation model is not in the ability to represent word order, but rather in the way in which context information is successfully compressed. The results of the large-scale exploration show how the architecture of the 2 memory models can be modified to significantly improve accuracy, and may suggest task-independent general modifications that can help improve model fit to human data in a much wider range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Atomistic minimal model for estimating profile of electrodeposited nanopatterns

    Science.gov (United States)

    Asgharpour Hassankiadeh, Somayeh; Sadeghi, Ali

    2018-06-01

We develop a computationally efficient and methodologically simple approach to molecular dynamics simulations of electrodeposition. Our minimal model takes into account the nontrivial electric field due to a sharp electrode tip in order to simulate the controllable coating of a thin layer on a surface with atomic precision. On the atomic scale, highly site-selective electrodeposition of ions and charged particles by means of the sharp tip of a scanning probe microscope is possible. A better understanding of the microscopic process, obtained mainly from atomistic simulations, helps to enhance the quality of this nanopatterning technique and to make it applicable to the fabrication of nanowires and nanocontacts. In the limit of screened inter-particle interactions, very fast simulations of the electrodeposition process are feasible within the proposed model, allowing one to investigate how the shape of the overlayer depends on the tip-sample geometry and dielectric properties, electrolyte viscosity, etc. Our results reveal that the sharpness of the profile of a nanoscale deposited overlayer is dictated by the component of the electric field underneath the tip normal to the sample surface.

  15. Minimization of energy consumption in HVAC systems with data-driven models and an interior-point method

    International Nuclear Information System (INIS)

    Kusiak, Andrew; Xu, Guanglin; Zhang, Zijun

    2014-01-01

Highlights: • We study the energy saving of HVAC systems with a data-driven approach. • We conduct an in-depth analysis of the topology of the developed neural-network-based HVAC model. • We apply the interior-point method to solve a neural-network-based HVAC optimization model. • Uncertain building occupancy is incorporated in the minimization of HVAC energy consumption. • A significant potential for saving HVAC energy is discovered. - Abstract: In this paper, a data-driven approach is applied to minimize the energy consumption of a heating, ventilating, and air conditioning (HVAC) system while maintaining the thermal comfort of a building with an uncertain occupancy level. The uncertainty in the arrival and departure rates of occupants is modeled by Poisson and uniform distributions, respectively. The internal heating gain is calculated from the stochastic process of the building occupancy. Based on observed and simulated data, a multilayer perceptron algorithm is employed to model and simulate the HVAC system. The data-driven models accurately predict the future performance of the HVAC system from the control settings and the observed historical information. An optimization model is formulated and solved with the interior-point method, and the optimization results are compared with those produced by the simulation models.

  16. Index to Nuclear Safety: a technical progress review by chronology, permuted title, and author. Vol. 11(1)--Vol. 18(6)

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, W.B.; Klein, A.

    1978-04-11

    This index to Nuclear Safety covers articles published in Nuclear Safety, Vol. 11, No. 1 (January-February 1970), through Vol. 18, No. 6 (November-December 1977). It is divided into three sections: a chronological list of articles (including abstracts) followed by a permuted-title (KWIC) index and an author index. Nuclear Safety, a bimonthly technical progress review prepared by the Nuclear Safety Information Center (NSIC), covers all safety aspects of nuclear power reactors and associated facilities. Over 450 technical articles published in Nuclear Safety in the last eight years are listed in this index.
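A permuted-title (KWIC) index of the kind described can be generated mechanically: each title is listed once under every significant word it contains. A minimal sketch, with an illustrative (hypothetical) stopword list:

```python
def kwic(titles,
         stopwords=frozenset({"a", "an", "the", "of", "to", "in", "and", "by"})):
    """Return sorted (keyword, title) pairs: each title appears once per
    significant word, so it can be found under any of its keywords."""
    entries = []
    for title in titles:
        for word in title.split():
            key = word.strip(",.:;()").lower()
            if key and key not in stopwords:
                entries.append((key, title))
    return sorted(entries)
```

For example, `kwic(["Index to Nuclear Safety"])` lists the title under "index", "nuclear", and "safety", skipping the stopword "to". Real KWIC indexes also show the keyword in its surrounding context; this sketch keeps only the keyword-to-title mapping.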

  17. Index to Nuclear Safety: a technical progress review by chronology, permuted title, and author. Vol. 11(1)--Vol. 18(6)

    International Nuclear Information System (INIS)

    Cottrell, W.B.; Klein, A.

    1978-01-01

    This index to Nuclear Safety covers articles published in Nuclear Safety, Vol. 11, No. 1 (January-February 1970), through Vol. 18, No. 6 (November-December 1977). It is divided into three sections: a chronological list of articles (including abstracts) followed by a permuted-title (KWIC) index and an author index. Nuclear Safety, a bimonthly technical progress review prepared by the Nuclear Safety Information Center (NSIC), covers all safety aspects of nuclear power reactors and associated facilities. Over 450 technical articles published in Nuclear Safety in the last eight years are listed in this index

  18. Horizontal, anomalous U(1) symmetry for the more minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Nelson, A.E.; Wright, D.

    1997-01-01

We construct explicit examples with a horizontal, "anomalous" U(1) gauge group which, in a supersymmetric extension of the standard model, reproduce qualitative features of the fermion spectrum and CKM matrix, and suppress FCNC and proton decay rates without the imposition of global symmetries. We review the motivation for such "more" minimal supersymmetric standard models and their predictions for the sparticle spectrum. There is a mass hierarchy in the scalar sector which is the inverse of the fermion mass hierarchy. We show in detail why ΔS=2 FCNCs are greatly suppressed when compared with naive estimates for nondegenerate squarks. copyright 1997 The American Physical Society

  19. Minimal string theory is logarithmic

    International Nuclear Information System (INIS)

    Ishimoto, Yukitaka; Yamaguchi, Shun-ichi

    2005-01-01

We study the simplest examples of minimal string theory, whose worldsheet description is the unitary (p,q) minimal model coupled to two-dimensional gravity (Liouville field theory). In the Liouville sector, we show that four-point correlation functions of 'tachyons' exhibit logarithmic singularities, and that the theory turns out to be logarithmic. The relation with Zamolodchikov's logarithmic degenerate fields is also discussed. Our result holds for generic values of (p,q).

  20. Non-minimally coupled quintessence dark energy model with a cubic galileon term: a dynamical system analysis

    Science.gov (United States)

    Bhattacharya, Somnath; Mukherjee, Pradip; Roy, Amit Singha; Saha, Anirban

    2018-03-01

We consider a scalar field which is generally non-minimally coupled to gravity and has a characteristic cubic Galileon-like term and a generic self-interaction, as a candidate dark energy model. The system is analyzed dynamically, and novel fixed points with perturbative stability are demonstrated. The evolution of the system is studied numerically near a novel fixed point which owes its existence to the Galileon character of the model. It turns out that demanding the stability of this fixed point puts a strong restriction on the allowed non-minimal coupling and the choice of self-interaction. The evolution of the equation-of-state parameter is studied, showing that our model predicts an accelerated universe throughout; the phantom limit is approached closely but never crossed. Our result thus extends the findings of Coley (Dynamical Systems and Cosmology, Kluwer Academic Publishers, Boston, 2013) to more general non-minimal couplings than linear and quadratic ones.

  1. An approach to gauge hierarchy in the minimal SU(5) model of grand unification

    International Nuclear Information System (INIS)

    Ghose, P.

    1982-08-01

It is shown that if all mass generation through spontaneous symmetry breaking is predominantly caused by scalar loops in the minimal SU(5) model of grand unification, it is possible to have an arbitrarily large gauge hierarchy m_X >> m_W with all Higgs bosons superheavy. No fine tuning is necessary in any order. (author)

  2. Minimal see-saw model predicting best fit lepton mixing angles

    International Nuclear Information System (INIS)

    King, Stephen F.

    2013-01-01

We discuss a minimal predictive see-saw model in which the right-handed neutrino mainly responsible for the atmospheric neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1), and the right-handed neutrino mainly responsible for the solar neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (1,4,2), with a relative phase η = −2π/5. We show how these patterns of couplings could arise from an A_4 family symmetry model of leptons, together with Z_3 and Z_5 symmetries which fix η = −2π/5 up to a discrete phase choice. The PMNS matrix is then completely determined by one remaining parameter, which is used to fix the neutrino mass ratio m_2/m_3. The model predicts the lepton mixing angles θ_12 ≈ 34°, θ_23 ≈ 41°, θ_13 ≈ 9.5°, which exactly coincide with the current best-fit values for a normal neutrino mass hierarchy, together with the distinctive prediction δ ≈ 106° for the CP-violating oscillation phase.

  3. Optimal blood glucose level control using dynamic programming based on minimal Bergman model

    Science.gov (United States)

    Rettian Anggita Sari, Maria; Hartono

    2018-03-01

The purpose of this article is to simulate the glucose dynamics and insulin kinetics of a diabetic patient. The model used in this research is the non-linear Bergman minimal model. Optimal control theory is then applied to determine the optimal dose of insulin in the treatment of diabetes mellitus, such that the glucose level stays in the normal range over a specific time interval. The optimization problem is solved using dynamic programming. The results show that dynamic programming is quite reliable in representing the interaction between glucose and insulin levels in a diabetes mellitus patient.
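The Bergman minimal model underlying this record couples plasma glucose G to a remote insulin action variable X driven by plasma insulin I(t). A forward-Euler sketch with illustrative (not patient-calibrated) parameter values; the dynamic-programming control layer of the paper is not reproduced:

```python
def bergman(G0, X0, insulin, p1=0.03, p2=0.02, p3=1e-5,
            Gb=90.0, Ib=7.0, dt=1.0, steps=180):
    """Forward-Euler integration of the Bergman minimal model:
        dG/dt = -(p1 + X) G + p1 Gb
        dX/dt = -p2 X + p3 (I(t) - Ib)
    G: plasma glucose (mg/dL), X: remote insulin action, insulin(t): plasma
    insulin at time t (minutes). Parameter values here are illustrative."""
    G, X = G0, X0
    trace = [G]
    for k in range(steps):
        t = k * dt
        dG = -(p1 + X) * G + p1 * Gb
        dX = -p2 * X + p3 * (insulin(t) - Ib)
        G += dt * dG
        X += dt * dX
        trace.append(G)
    return trace

# with insulin held at its basal value, glucose relaxes toward basal Gb
trace = bergman(250.0, 0.0, lambda t: 7.0)
```

Raising `insulin(t)` above basal builds up X and accelerates the fall of G, which is the handle an optimal controller acts on.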

  4. Systems biology perspectives on minimal and simpler cells.

    Science.gov (United States)

    Xavier, Joana C; Patil, Kiran Raosaheb; Rocha, Isabel

    2014-09-01

    The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  5. Systems Biology Perspectives on Minimal and Simpler Cells

    Science.gov (United States)

    Xavier, Joana C.; Patil, Kiran Raosaheb

    2014-01-01

    SUMMARY The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. PMID:25184563

  6. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  7. Stable-label intravenous glucose tolerance test minimal model

    International Nuclear Information System (INIS)

    Avogaro, A.; Bristow, J.D.; Bier, D.M.; Cobelli, C.; Toffolo, G.

    1989-01-01

The minimal model approach to estimating insulin sensitivity (Sl) and glucose effectiveness in promoting its own disposition at basal insulin (SG) is a powerful tool that has been underutilized given its potential applications. In part, this has been due to its inability to separate insulin and glucose effects on peripheral uptake from their effects on hepatic glucose inflow. Prior enhancements, with radiotracer labeling of the dosage, permit this separation but are unsuitable for use in pregnancy and childhood. In this study, we labeled the intravenous glucose tolerance test (IVGTT) dosage with [6,6-2H2]glucose, [2-2H]glucose, or both stable isotopically labeled glucose tracers, and modeled glucose kinetics in six postabsorptive, nonobese adults. As previously found with the radiotracer model, the tracer-estimated Sl* derived from the stable-label IVGTT was greater than Sl in each case except one, and the tracer-estimated SG* was less than SG in each instance. More importantly, however, the stable-label IVGTT estimated each parameter with an average precision of +/-5% (range 3-9%), compared to average precisions of +/-74% (range 7-309%) for SG and +/-22% (range 3-72%) for Sl. In addition, because of the different metabolic fates of the two deuterated tracers, there were minor differences in basal-insulin-derived measures of glucose effectiveness, but these differences were negligible for parameters describing insulin-stimulated processes. In conclusion, the stable-label IVGTT is a simple, highly precise means of assessing insulin sensitivity and glucose effectiveness at basal insulin that can be used to measure these parameters in individuals of all ages, including children and pregnant women.

  8. Minimal models on Riemann surfaces: The partition functions

    International Nuclear Information System (INIS)

    Foda, O.

    1990-01-01

The Coulomb gas representation of the A_n series of c = 1 − 6/[m(m+1)], m ≥ 3, minimal models is extended to compact Riemann surfaces of genus g > 1. An integral representation of the partition functions, for any m and g, is obtained as the difference of two gaussian correlation functions of a background charge, (background charge on the sphere) × (1−g), and screening charges integrated over the surface. The coupling constants × (compactification radius)^2 of the gaussian expressions are, as on the torus, m(m+1) and m/(m+1). The partition functions obtained are modular invariant, have the correct conformal anomaly and - restricting the propagation of states to a single handle - one can verify explicitly the decoupling of the null states. On the other hand, they are given in terms of coupled surface integrals, and it remains to show how they degenerate consistently to those on lower-genus surfaces. In this work, this is clear only at the lattice level, where no screening charges appear. (orig.)

  9. Minimal models on Riemann surfaces: The partition functions

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O. (Katholieke Univ. Nijmegen (Netherlands). Inst. voor Theoretische Fysica)

    1990-06-04

The Coulomb gas representation of the A_n series of c = 1 − 6/(m(m+1)), m ≥ 3, minimal models is extended to compact Riemann surfaces of genus g > 1. An integral representation of the partition functions, for any m and g, is obtained as the difference of two gaussian correlation functions of a background charge, (background charge on sphere) × (1−g), and screening charges integrated over the surface. The coupling constants × (compactification radius)^2 of the gaussian expressions are, as on the torus, m(m+1) and m/(m+1). The partition functions obtained are modular invariant, have the correct conformal anomaly and - restricting the propagation of states to a single handle - one can verify explicitly the decoupling of the null states. On the other hand, they are given in terms of coupled surface integrals, and it remains to show how they degenerate consistently to those on lower-genus surfaces. In this work, this is clear only at the lattice level, where no screening charges appear. (orig.)

  10. Minimizing tardiness for job shop scheduling under uncertainties

    OpenAIRE

    Yahouni , Zakaria; Mebarki , Nasser; Sari , Zaki

    2016-01-01

International audience; Many disturbances can occur during the execution of a manufacturing scheduling process. To cope with this drawback, flexible solutions are proposed based on the offline and online phases of the schedule. Groups of permutable operations is one of the most studied flexible scheduling methods, bringing flexibility as well as quality to a schedule. The online phase of this method is based on a human-machine system allowing one schedule to be chosen in real time from a set...

  11. Tunneling and Speedup in Quantum Optimization for Permutation-Symmetric Problems

    Directory of Open Access Journals (Sweden)

    Siddharth Muthukrishnan

    2016-07-01

Full Text Available Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather in terms of the semiclassical potential arising from the spin-coherent path-integral formalism. We then provide an example where the shape of the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided-level crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide a case where it yields an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin-vector dynamics, is at least as efficient as diabatic QA. Finally, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin-vector dynamics.
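Cost functions in this problem class depend on a bitstring only through its Hamming weight, which is what makes the qubit-permutation symmetry manifest. A schematic example with a rectangular barrier added to the linear Hamming-weight cost; the barrier placement, height, and system size here are illustrative, not those of the paper:

```python
def perturbed_hamming_cost(n, lo, hi, height):
    """Cost table f(w) for w = 0..n over n-bit strings: base cost w (the
    Hamming weight), plus a rectangular barrier of the given height on
    weights lo..hi inclusive. Any bitstring's cost is f(its weight)."""
    return [w + (height if lo <= w <= hi else 0) for w in range(n + 1)]

# illustrative instance: 8 qubits, barrier on weights 2..3 of height 5
cost = perturbed_hamming_cost(8, 2, 3, 5)
```

The global minimum sits at weight 0, but a local search moving one bit at a time must climb the barrier to reach it from high weights; whether QA crosses such barriers by tunneling or by diabatic transitions is exactly the question the paper examines.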

  12. Review of Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    S. Fukano, Hidenori; Sannino, Francesco

    2010-01-01

We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We show that the constraints have a strong impact on the self-coupling and mass...

  13. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    Science.gov (United States)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models leads to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach: it works by finding a field which preserves the covariance structure and maintains the observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. The mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values; to preserve the spatial structure, the mixing weights must lie on the unit hypersphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations, the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches, which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points; however, it means that only two fields are mixed at a time. Once the optimal mixture of two fields has been found, they are combined to form the input to the next iteration of the algorithm.
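The key property exploited here - weights on the unit circle preserve second-order statistics - can be checked directly. A minimal sketch mixing two independent Gaussian fields (unconditional white-noise fields, unlike the spatially correlated conditional fields of the actual method):

```python
import math
import random

def mix_fields(f1, f2, theta):
    """Mix two independent zero-mean fields with weights (cos θ, sin θ);
    since cos²θ + sin²θ = 1, the variance (and, for fields sharing a
    covariance structure, that structure) is preserved."""
    a, b = math.cos(theta), math.sin(theta)
    return [a * x + b * y for x, y in zip(f1, f2)]

rng = random.Random(1)
n = 200_000
f1 = [rng.gauss(0.0, 1.0) for _ in range(n)]
f2 = [rng.gauss(0.0, 1.0) for _ in range(n)]
g = mix_fields(f1, f2, 0.7)
var = sum(v * v for v in g) / n   # sample variance; should stay close to 1
```

Sweeping θ around the circle traces out the one-parameter family of candidate fields whose forward-model responses the paper interpolates.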

  14. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Science.gov (United States)

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.
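The mean-variance minimization underlying this record has a closed form in the two-asset case; a textbook sketch (notation ours, unrelated to the self-averaging analysis itself):

```python
def two_asset_min_var(s1sq, s2sq, s12):
    """Fully invested minimum-variance weights for two assets with variances
    s1sq, s2sq and covariance s12 (shorting allowed): minimizing
    w² s1sq + (1-w)² s2sq + 2 w (1-w) s12 over w gives
    w* = (s2sq - s12) / (s1sq + s2sq - 2 s12)."""
    w1 = (s2sq - s12) / (s1sq + s2sq - 2.0 * s12)
    return w1, 1.0 - w1

def port_var(w1, s1sq, s2sq, s12):
    """Portfolio variance for weights (w1, 1 - w1)."""
    w2 = 1.0 - w1
    return w1 * w1 * s1sq + w2 * w2 * s2sq + 2.0 * w1 * w2 * s12
```

With uncorrelated assets of variance 0.04 and 0.09, the minimum-variance weight on asset 1 is 9/13: capital tilts toward the less risky asset but never fully concentrates, which is the concentration effect the record studies at scale.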

  15. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Directory of Open Access Journals (Sweden)

    Takashi Shinzato

Full Text Available In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important for an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  16. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage

  17. Spreading Speed, Traveling Waves, and Minimal Domain Size in Impulsive Reaction–Diffusion Models

    KAUST Repository

    Lewis, Mark A.

    2012-08-15

    How growth, mortality, and dispersal in a species affect the species' spread and persistence constitutes a central problem in spatial ecology. We propose impulsive reaction-diffusion equation models for species with distinct reproductive and dispersal stages. These models can describe a seasonal birth pulse plus nonlinear mortality and dispersal throughout the year. Alternatively, they can describe seasonal harvesting, plus nonlinear birth and mortality as well as dispersal throughout the year. The population dynamics in the seasonal pulse is described by a discrete map that gives the density of the population at the end of a pulse as a possibly nonmonotone function of the density of the population at the beginning of the pulse. The dynamics in the dispersal stage is governed by a nonlinear reaction-diffusion equation in a bounded or unbounded domain. We develop a spatially explicit theoretical framework that links species vital rates (mortality or fecundity) and dispersal characteristics with species' spreading speeds, traveling wave speeds, as well as minimal domain size for species persistence. We provide an explicit formula for the spreading speed in terms of model parameters, and show that the spreading speed can be characterized as the slowest speed of a class of traveling wave solutions. We also give an explicit formula for the minimal domain size using model parameters. Our results show how the diffusion coefficient, and the combination of discrete- and continuous-time growth and mortality determine the spread and persistence dynamics of the population in a wide variety of ecological scenarios. Numerical simulations are presented to demonstrate the theoretical results. © 2012 Society for Mathematical Biology.
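
    The record above refers to an explicit spreading-speed formula for the impulsive model. As background only, the classical Fisher-KPP linearization u_t = D u_xx + r u gives the linear spreading speed c* = 2√(Dr); the sketch below computes this continuous-time speed with hypothetical parameter values. The paper's own formula for the impulsive case additionally involves the slope of the seasonal pulse map at zero density and is not reproduced here.

```python
import math

# Linear spreading speed of the Fisher-KPP linearization
# u_t = D u_xx + r u, namely c* = 2*sqrt(D*r). Illustrative only;
# parameter values are hypothetical.

def fisher_kpp_speed(diffusion, growth_rate):
    """Spreading speed of the linearized reaction-diffusion invasion."""
    return 2.0 * math.sqrt(diffusion * growth_rate)

print(fisher_kpp_speed(diffusion=0.5, growth_rate=2.0))  # -> 2.0
```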

  18. Probing the non-minimal Higgs sector at the SSC

    International Nuclear Information System (INIS)

    Gunion, J.F.; Haber, H.E.; Komamiya, S.; Yamamoto, H.; Barbaro-Galtieri, A.

    1987-11-01

    Non-minimal Higgs sectors occur in the Standard Model with more than one Higgs doublet, as well as in theories that go beyond the Standard Model. In this report, we discuss how Higgs search strategies must be altered, with respect to the Standard Model approaches, in order to probe the non-minimal Higgs sectors at the SSC

  19. The Multimorbidity Cluster Analysis Tool: Identifying Combinations and Permutations of Multiple Chronic Diseases Using a Record-Level Computational Analysis

    Directory of Open Access Journals (Sweden)

    Kathryn Nicholson

    2017-12-01

    Full Text Available Introduction: Multimorbidity, or the co-occurrence of multiple chronic health conditions within an individual, is an increasingly dominant presence and burden in modern health care systems.  To fully capture its complexity, further research is needed to uncover the patterns and consequences of these co-occurring health states.  As such, the Multimorbidity Cluster Analysis Tool and the accompanying Multimorbidity Cluster Analysis Toolkit have been created to allow researchers to identify distinct clusters that exist within a sample of participants or patients living with multimorbidity.  Development: The Tool and Toolkit were developed at Western University in London, Ontario, Canada.  This open-access computational program (JAVA code and executable file) was developed and tested to support an analysis of thousands of individual records and up to 100 disease diagnoses or categories.  Application: The computational program can be adapted to the methodological elements of a research project, including type of data, type of chronic disease reporting, measurement of multimorbidity, sample size and research setting.  The computational program will identify all existing, and mutually exclusive, combinations and permutations within the dataset.  An application of this computational program is provided as an example, in which more than 75,000 individual records and 20 chronic disease categories resulted in the detection of 10,411 unique combinations and 24,647 unique permutations among female and male patients.  Discussion: The Tool and Toolkit are now available for use by researchers interested in exploring the complexities of multimorbidity.  Its careful use, and the comparison between results, will be valuable additions to the nuanced understanding of multimorbidity.
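
    The combination/permutation distinction the record describes (order-free disease sets versus ordered disease sequences, which is why the example dataset yields more unique permutations than combinations) can be sketched in a few lines. This is a minimal stdlib sketch with hypothetical records, not the released JAVA program.

```python
# Each record lists a patient's chronic conditions in recorded order.
# A "combination" ignores order; a "permutation" preserves it.
records = [
    ["diabetes", "hypertension"],
    ["hypertension", "diabetes"],   # same combination, different order
    ["diabetes", "hypertension", "copd"],
]

combos = {frozenset(r) for r in records}   # order-free condition sets
orders = {tuple(r) for r in records}       # recorded order preserved

print(len(combos), len(orders))  # -> 2 3
```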

  20. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
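
    The weighting behaviour highlighted in this record (a smaller weight for a large-error sample) can be sketched with an iteratively reweighted one-parameter fit. This illustrates the robustness idea only; the data, the weight function 1/(1+e²), and the simple model y = a·x are hypothetical and are not the authors' LS-SVM formulation.

```python
# Iteratively reweighted fit of y = a*x that down-weights large-error
# samples. The last data point is a deliberate outlier.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 30.0]   # outlier at x = 5

a = 1.0                            # initial slope estimate
for _ in range(50):
    errors = [y - a * x for x, y in zip(xs, ys)]
    weights = [1.0 / (1.0 + e * e) for e in errors]   # robust weights
    num = sum(w * x * y for w, x, y in zip(weights, xs, ys))
    den = sum(w * x * x for w, x in zip(weights, xs))
    a = num / den                  # weighted least-squares update

print(round(a, 2))
```

    An ordinary (unweighted) least-squares fit of the same data is pulled strongly toward the outlier; the reweighted fit stays close to the slope of the clean points.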

  1. Adiabatic density perturbations and matter generation from the minimal supersymmetric standard model.

    Science.gov (United States)

    Enqvist, Kari; Kasuya, Shinta; Mazumdar, Anupam

    2003-03-07

    We propose that the inflaton is coupled to ordinary matter only gravitationally and that it decays into a completely hidden sector. In this scenario both baryonic and dark matter originate from the decay of a flat direction of the minimal supersymmetric standard model, which is shown to generate the desired adiabatic perturbation spectrum via the curvaton mechanism. The requirement that the energy density along the flat direction dominates over the inflaton decay products fixes the flat direction almost uniquely. The present residual energy density in the hidden sector is typically shown to be small.

  2. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

    In this paper, we analyze the minimization of seminorms ‖L·‖ on R^n under the constraint of a bounded I-divergence D(b, H·) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data, but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximal problems is an I-divergence constrained least-squares problem which can be solved based on Morozov's discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks, both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)
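
    The I-divergence constrained in this record is the generalized Kullback–Leibler divergence D(b, x) = Σᵢ (bᵢ log(bᵢ/xᵢ) − bᵢ + xᵢ). A stdlib sketch of this data-fit term follows; the vectors are hypothetical, and in the restoration setting x would be the blurred candidate image H·u.

```python
import math

# Generalized Kullback-Leibler (I-)divergence
#   D(b, x) = sum_i (b_i log(b_i / x_i) - b_i + x_i),
# the Poisson-data fit term. Assumes x_i > 0.

def i_divergence(b, x):
    total = 0.0
    for bi, xi in zip(b, x):
        if bi > 0:
            total += bi * math.log(bi / xi) - bi + xi
        else:
            total += xi        # limit of the summand as b_i -> 0
    return total

print(round(i_divergence([4.0, 2.0, 1.0], [3.0, 2.0, 2.0]), 4))  # -> 0.4576
```

    Note that D(b, x) ≥ 0 with equality iff b = x, which is what makes a bound D(b, H·u) ≤ τ a sensible data-fidelity constraint.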

  3. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Full Text Available Abstract Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation and the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have been merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
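
    The distribution-based quantities this record contrasts with anchor-based MICs follow from two standard formulas: SEM = SD·√(1 − reliability) and MDC95 = 1.96·√2·SEM. A small sketch with hypothetical SD and ICC values:

```python
import math

# Minimally detectable change (95% confidence) from the standard error
# of measurement: SEM = SD * sqrt(1 - ICC), MDC95 = 1.96 * sqrt(2) * SEM.
# SD and ICC values below are hypothetical.

def mdc95(sd, icc):
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

print(round(mdc95(sd=10.0, icc=0.91), 2))  # -> 8.32
```

    A questionnaire with this reliability cannot distinguish individual changes smaller than about 8.3 scale points from measurement error, regardless of whether such changes are important to patients.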

  4. Symmetry analysis of many-body wave functions, with applications to the nuclear shell model

    International Nuclear Information System (INIS)

    Novoselsky, A.; Katriel, J.

    1995-01-01

    The weights of the different permutational symmetry components of a nonsymmetry-adapted many-particle wave function are evaluated in terms of the expectation values of the symmetric-group class sums. This facilitates the evaluation of the weights without the construction of a complete set of symmetry adapted functions. Subspace projection operators are introduced, to be used when prior knowledge about the symmetry-species composition of a wave function is available. The permutational weight analysis of a recursively angular-momentum coupled (shell model) wave function is presented as an illustration
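
    For the smallest case, the symmetric group S2, the class-sum expectation values reduce to the overlap of the wave function with its particle-swapped version: the symmetric and antisymmetric weights are (1 ± ⟨ψ|P₁₂|ψ⟩/⟨ψ|ψ⟩)/2. The sketch below is a minimal two-particle analogue of the weight analysis described above, with hypothetical amplitudes; it does not require building symmetry-adapted basis functions.

```python
# Weights of the symmetric and antisymmetric S2 components of a
# two-particle wave function psi(i, j), via the transposition
# expectation value. Amplitudes are hypothetical.

psi = {(0, 1): 0.6, (1, 0): 0.8}   # psi(i, j) amplitudes

norm = sum(v * v for v in psi.values())
# <psi| P_(12) |psi>: overlap of psi with its particle-swapped version
swap = sum(v * psi.get((j, i), 0.0) for (i, j), v in psi.items())

w_sym = 0.5 * (1.0 + swap / norm)    # projector (1 + P)/2
w_anti = 0.5 * (1.0 - swap / norm)   # projector (1 - P)/2
print(round(w_sym, 3), round(w_anti, 3))  # -> 0.98 0.02
```

    The two weights sum to one, as they must for a resolution of the identity into symmetry projectors.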

  5. An analytical study of composite laminate lay-up using search algorithms for maximization of flexural stiffness and minimization of springback angle

    Science.gov (United States)

    Singh, Ranjan Kumar; Rinawa, Moti Lal

    2018-04-01

    The residual stresses arising in fiber-reinforced laminates during their curing in closed molds lead to dimensional changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature and thickness play an important role in it. In the present work, we attempt to optimize the lay-up and stacking sequence for maximization of flexural stiffness and minimization of springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swap. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed, which is an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups. A computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.
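
    The swap repair strategy mentioned in this record can be sketched as a pairwise-swap local search over a stacking sequence. The objective below is a toy stand-in (plies with a higher stiffness score contribute more the farther they sit from the midplane, mimicking the distance-squared weighting in bending stiffness); the real PSA/LSA details and laminate mechanics are in the paper.

```python
from itertools import combinations

# Pairwise-swap local search over a stacking sequence. Ply "stiffness
# scores" and the quadratic midplane weighting are hypothetical.

def objective(seq):
    n = len(seq)
    mid = (n - 1) / 2.0
    # plies farther from the midplane contribute more, as in bending
    return sum(s * (k - mid) ** 2 for k, s in enumerate(seq))

def swap_search(seq):
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(seq)), 2):
            cand = seq[:]
            cand[i], cand[j] = cand[j], cand[i]
            if objective(cand) > objective(seq):
                seq, improved = cand, True
    return seq

best = swap_search([3, 1, 2, 2, 1, 3])   # hypothetical ply scores
print(best, round(objective(best), 2))
```

    For this separable objective a swap-local optimum is also globally optimal (any mis-ordered pair admits an improving swap), so the search always ends with the stiffest plies outermost.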

  6. Index to Nuclear Safety: a technical progress review by chronology, permuted title, and author, Volume 18 (1) through Volume 22 (6)

    International Nuclear Information System (INIS)

    Cottrell, W.B.; Passiakos, M.

    1982-06-01

    This index to Nuclear Safety covers articles published in Nuclear Safety, Volume 18, Number 1 (January-February 1977) through Volume 22, Number 6 (November-December 1981). The index is divided into three sections: a chronological list of articles (including abstracts), a permuted-title (KWIC) index, and an author index. Nuclear Safety, a bimonthly technical progress review prepared by the Nuclear Safety Information Center, covers all safety aspects of nuclear power reactors and associated facilities. Over 300 technical articles published in Nuclear Safety in the last 5 years are listed in this index

  7. Index to Nuclear Safety: a technical progress review by chronology, permuted title, and author, Volume 18 (1) through Volume 22 (6)

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, W.B.; Passiakos, M.

    1982-06-01

    This index to Nuclear Safety covers articles published in Nuclear Safety, Volume 18, Number 1 (January-February 1977) through Volume 22, Number 6 (November-December 1981). The index is divided into three sections: a chronological list of articles (including abstracts), a permuted-title (KWIC) index, and an author index. Nuclear Safety, a bimonthly technical progress review prepared by the Nuclear Safety Information Center, covers all safety aspects of nuclear power reactors and associated facilities. Over 300 technical articles published in Nuclear Safety in the last 5 years are listed in this index.

  8. Development of a minimal growth medium for Lactobacillus plantarum

    NARCIS (Netherlands)

    Wegkamp, H.B.A.; Teusink, B.; Vos, de W.M.; Smid, E.J.

    2010-01-01

    Aim: A medium with minimal requirements for the growth of Lactobacillus plantarum WCFS was developed. The composition of the minimal medium was compared to a genome-scale metabolic model of L. plantarum. Methods and Results: By repetitive single omission experiments, two minimal media were

  9. Support minimized inversion of acoustic and elastic wave scattering

    International Nuclear Information System (INIS)

    Safaeinili, A.

    1994-01-01

    This report discusses the following topics on support minimized inversion of acoustic and elastic wave scattering: Minimum support inversion; forward modelling of elastodynamic wave scattering; minimum support linearized acoustic inversion; support minimized nonlinear acoustic inversion without absolute phase; and support minimized nonlinear elastic inversion

  10. A comparison of the probability distribution of observed substorm magnitude with that predicted by a minimal substorm model

    Directory of Open Access Journals (Sweden)

    S. K. Morley

    2007-11-01

    Full Text Available We compare the probability distributions of substorm magnetic bay magnitudes from observations and a minimal substorm model. The observed distribution was derived previously and independently using the IL index from the IMAGE magnetometer network. The model distribution is derived from a synthetic AL index time series created using real solar wind data and a minimal substorm model, which was previously shown to reproduce observed substorm waiting times. There are two free parameters in the model which scale the contributions to AL from the directly-driven DP2 electrojet and loading-unloading DP1 electrojet, respectively. In a limited region of the 2-D parameter space of the model, the probability distribution of modelled substorm bay magnitudes is not significantly different from the observed distribution. The ranges of the two parameters giving acceptable (95% confidence level) agreement are consistent with expectations using results from other studies. The approximately linear relationship between the two free parameters over these ranges implies that the substorm magnitude simply scales linearly with the solar wind power input at the time of substorm onset.
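
    A standard way to quantify whether two empirical magnitude distributions differ, in the spirit of the comparison described above, is the two-sample Kolmogorov-Smirnov distance between their empirical CDFs. The stdlib sketch below uses hypothetical magnitude samples; the study itself compared IL-index observations against a synthetic AL index, and its significance test is not reproduced here.

```python
# Two-sample Kolmogorov-Smirnov distance: the maximum absolute gap
# between two empirical CDFs. Sample values are hypothetical.

def ks_distance(a, b):
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))
    def ecdf(xs, t):
        return sum(1 for x in xs if x <= t) / len(xs)
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in grid)

observed = [120, 180, 240, 300, 420, 510]
modelled = [110, 170, 260, 320, 400, 650]
print(round(ks_distance(observed, modelled), 3))  # -> 0.167
```

    A small KS distance (relative to the critical value for the sample sizes) is consistent with the modelled and observed magnitudes being drawn from the same distribution.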

  11. The Quest for Minimal Quotients for Probabilistic Automata

    DEFF Research Database (Denmark)

    Eisentraut, Christian; Hermanns, Holger; Schuster, Johann

    2013-01-01

    One of the prevailing ideas in applied concurrency theory and verification is the concept of automata minimization with respect to strong or weak bisimilarity. The minimal automata can be seen as canonical representations of the behaviour modulo the bisimilarity considered. Together with congruence results wrt. process algebraic operators, this can be exploited to alleviate the notorious state space explosion problem. In this paper, we aim at identifying minimal automata and canonical representations for concurrent probabilistic models. We present minimality and canonicity results for probabilistic automata wrt. strong and weak bisimilarity, together with polynomial time minimization algorithms.

  12. Supersymmetric hybrid inflation with non-minimal Kahler potential

    International Nuclear Information System (INIS)

    Bastero-Gil, M.; King, S.F.; Shafi, Q.

    2007-01-01

    Minimal supersymmetric hybrid inflation based on a minimal Kahler potential predicts a spectral index n_s ≳ 0.98. On the other hand, the WMAP three-year data prefer a central value n_s ≈ 0.95. We propose a class of supersymmetric hybrid inflation models based on the same minimal superpotential but with a non-minimal Kahler potential. Including radiative corrections using the one-loop effective potential, we show that the prediction for the spectral index is sensitive to the small non-minimal corrections, and can lead to a significantly red-tilted spectrum, in agreement with WMAP

  13. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ_1^0, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m_{χ_1^0} ≲ 3 TeV after the inclusion of Sommerfeld enhancement in its annihilations. If most of the cold DM density is provided by the χ_1^0, the measured value of the Higgs mass favours a limited range of tan β ∼ 5 (and also tan β ∼ 45 if μ > 0) but the scalar mass m_0 is poorly constrained. In the wino-LSP case, m_{3/2} is constrained to about 900 TeV and m_{χ_1^0} to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m_{3/2} has just a lower limit ≳ 650 TeV (≳ 480 TeV) and m_{χ_1^0} is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g-2)_μ, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ_1^0 contributes only a fraction of the cold DM density, future LHC E_T-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B_{s,d} → μ^+μ^-) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  14. Minimal Dark Matter in the sky

    International Nuclear Information System (INIS)

    Panci, P.

    2016-01-01

    We discuss some theoretical and phenomenological aspects of the Minimal Dark Matter (MDM) model proposed in 2006, a framework appreciated for combining minimality with predictivity. We first critically review the theoretical requirements of MDM, pointing out generalizations of this framework. We then review the phenomenology of the originally proposed fermionic hyperchargeless electroweak quintuplet, showing its main γ-ray tests.

  15. A new minimal-stress freely-moving rat model for preclinical studies on intranasal administration of CNS drugs.

    Science.gov (United States)

    Stevens, Jasper; Suidgeest, Ernst; van der Graaf, Piet Hein; Danhof, Meindert; de Lange, Elizabeth C M

    2009-08-01

    To develop a new minimal-stress model for intranasal administration in freely moving rats and to evaluate in this model the brain distribution of acetaminophen following intranasal versus intravenous administration. Male Wistar rats received one intranasal cannula, an intra-cerebral microdialysis probe, and two blood cannulas for drug administration and serial blood sampling respectively. To evaluate this novel model, the following experiments were conducted. 1) Evans Blue was administered to verify the selectivity of intranasal exposure. 2) During a 1 min infusion 10, 20, or 40 microl saline was administered intranasally or 250 microl intravenously. Corticosterone plasma concentrations over time were compared as biomarkers for stress. 3) 200 microg of the model drug acetaminophen was given in identical setup and plasma, and brain pharmacokinetics were determined. In 96% of the rats, only the targeted nasal cavity was deeply colored. Corticosterone plasma concentrations were not influenced, neither by route nor volume of administration. Pharmacokinetics of acetaminophen were identical after intravenous and intranasal administration, although the Cmax in microdialysates was reached a little earlier following intravenous administration. A new minimal-stress model for intranasal administration in freely moving rats has been successfully developed and allows direct comparison with intravenous administration.

  16. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  17. On the complete classification of unitary N=2 minimal superconformal field theories

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Oliver

    2009-08-03

    Aiming at a complete classification of unitary N=2 minimal models (where the assumption of space-time supersymmetry has been dropped), it is shown that each candidate for a modular invariant partition function of such a theory is indeed the partition function of a minimal model. A family of models constructed via orbifoldings of either the diagonal model or of the space-time supersymmetric exceptional models demonstrates that there exists a unitary N=2 minimal model for every one of the allowed partition functions in the list obtained from Gannon's work. Kreuzer and Schellekens' conjecture that all simple current invariants can be obtained as orbifolds of the diagonal model, even when the extra assumption of higher-genus modular invariance is dropped, is confirmed in the case of the unitary N=2 minimal models by simple counting arguments. We find a nice characterisation of the projection from the Hilbert space of a minimal model with k odd to its modular invariant subspace, and we present a new simple proof of the superconformal version of the Verlinde formula for the minimal models using simple currents. Finally we demonstrate a curious relation between the generating function of simple current invariants and the Riemann zeta function. (orig.)

  19. On the complete classification of unitary N=2 minimal superconformal field theories

    International Nuclear Information System (INIS)

    Gray, Oliver

    2009-01-01

    Aiming at a complete classification of unitary N=2 minimal models (where the assumption of space-time supersymmetry has been dropped), it is shown that each candidate for a modular invariant partition function of such a theory is indeed the partition function of a minimal model. A family of models constructed via orbifoldings of either the diagonal model or of the space-time supersymmetric exceptional models demonstrates that there exists a unitary N=2 minimal model for every one of the allowed partition functions in the list obtained from Gannon's work. Kreuzer and Schellekens' conjecture that all simple current invariants can be obtained as orbifolds of the diagonal model, even when the extra assumption of higher-genus modular invariance is dropped, is confirmed in the case of the unitary N=2 minimal models by simple counting arguments. We find a nice characterisation of the projection from the Hilbert space of a minimal model with k odd to its modular invariant subspace, and we present a new simple proof of the superconformal version of the Verlinde formula for the minimal models using simple currents. Finally we demonstrate a curious relation between the generating function of simple current invariants and the Riemann zeta function. (orig.)

  20. Designing a model to minimize inequities in hemodialysis facilities distribution

    Directory of Open Access Journals (Sweden)

    Teresa M. Salgado

    2011-11-01

    Full Text Available Portugal has an uneven, city-centered bias in the distribution of hemodialysis centers found to contribute to health care inequities. A model has been developed with the aim of minimizing access inequity through the identification of the best possible localization of new hemodialysis facilities. The model was designed under the assumption that individuals from different geographic areas, ceteris paribus, present the same likelihood of requiring hemodialysis in the future. Distances to reach the closest hemodialysis facility were calculated for every municipality lacking one. Regions were scored by aggregating weights of the “individual burden”, defined as the burden for an individual living in a region lacking a hemodialysis center to reach one as often as needed, and the “population burden”, defined as the burden for the total population living in such a region. The model revealed that the average travelling distance for inhabitants in municipalities without a hemodialysis center is 32 km and that 145,551 inhabitants (1.5% live more than 60 min away from a hemodialysis center, while 1,393,770 (13.8% live 30-60 min away. Multivariate analysis showed that the current localization of hemodialysis facilities is associated with major urban areas. The model developed recommends 12 locations for establishing hemodialysis centers that would result in drastically reduced travel for 34 other municipalities, leaving only six (34,800 people) with over 60 min of travel. The application of this model should facilitate the planning of future hemodialysis services as it takes into consideration the potential impact of travel time for individuals in need of dialysis, as well as the logistic arrangements required to transport all patients with end-stage renal disease. The model is applicable in any country and health care planners can opt to weigh these two elements differently in the model according to their priorities.
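
    The localization idea behind the model above can be sketched as a greedy facility-placement loop that repeatedly adds the site cutting population-weighted travel the most. The towns, populations, 1-D coordinates and the plain distance-times-population burden below are all hypothetical stand-ins for the paper's individual/population burden scoring.

```python
# Greedy placement of k new facilities minimizing population-weighted
# travel distance on a 1-D map. All data are hypothetical.

towns = {"A": (0, 5000), "B": (30, 2000), "C": (55, 800),
         "D": (90, 1200), "E": (140, 300)}
existing = [0]                     # coordinate of the current facility

def total_burden(facilities):
    # each town travels to its nearest facility, weighted by population
    return sum(pop * min(abs(x - f) for f in facilities)
               for x, pop in towns.values())

def greedy_place(k):
    sites = list(existing)
    for _ in range(k):
        candidates = [x for x, _ in towns.values()]
        best = min(candidates, key=lambda c: total_burden(sites + [c]))
        sites.append(best)
    return sites

sites = greedy_place(2)
print(sites, total_burden(sites))  # -> [0, 90, 30] 35000
```

    Greedy placement is a heuristic (the exact k-median problem is NP-hard), but it illustrates how each added center collapses the travel burden of the municipalities it serves.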

  1. Physics on smallest scales. An introduction to minimal length phenomenology

    International Nuclear Information System (INIS)

    Sprenger, Martin; Goethe Univ., Frankfurt am Main; Nicolini, Piero; Bleicher, Marcus

    2012-02-01

Many modern theories which try to unite gravity with the Standard Model of particle physics, as e.g. string theory, propose two key modifications to the commonly known physical theories: - the existence of additional space dimensions - the existence of a minimal length distance or maximal resolution. While extra dimensions have received a wide coverage in publications over the last ten years (especially due to the prediction of micro black hole production at the LHC), the phenomenology of models with a minimal length is still less investigated. In a summer study project for bachelor students in 2010 we have explored some phenomenological implications of the potential existence of a minimal length. In this paper we review the idea and formalism of a quantum gravity induced minimal length in the generalised uncertainty principle framework as well as in the coherent state approach to non-commutative geometry. These approaches are effective models which can make model-independent predictions for experiments and are ideally suited for phenomenological studies. Pedagogical examples are provided to grasp the effects of a quantum gravity induced minimal length. (orig.)
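The minimal length in the generalised uncertainty principle framework can be made explicit in one line (the standard textbook form of the deformed uncertainty relation, with β the deformation parameter; this derivation is generic to the framework, not specific to the reviewed project):

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1+\beta\,(\Delta p)^2\right)
\;\;\Longrightarrow\;\;
\Delta x \;\ge\; \frac{\hbar}{2}\left(\frac{1}{\Delta p}+\beta\,\Delta p\right)
\;\;\Longrightarrow\;\;
\Delta x_{\mathrm{min}} \;=\; \hbar\sqrt{\beta}\,.
```

The bound on Δx is minimized at Δp = 1/√β, so no measurement can resolve distances below ħ√β, regardless of how much momentum is invested.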

  2. The reliability, accuracy and minimal detectable difference of a multi-segment kinematic model of the foot-shoe complex.

    Science.gov (United States)

    Bishop, Chris; Paul, Gunther; Thewlis, Dominic

    2013-04-01

Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when wearing shoes. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sufficiently sensitive to describe the kinematics of the foot-shoe complex and lower leg during walking gait. In order to achieve this, a new marker set was established, consisting of 25 markers applied on the shoe and skin surface, which informed a four-segment kinematic model of the foot-shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was proven to be good to excellent (ICC=0.75-0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC=0.68-0.99) than the inexperienced rater (ICC=0.38-0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint; tibiocalcaneal joint--MDD90=2.17-9.36°, tarsometatarsal joint--MDD90=1.03-9.29° and the metatarsophalangeal joint--MDD90=1.75-9.12°. These proposed thresholds are specific to the description of shod motion, and can be used in future research designed to compare different footwear. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Multiple Memory Structure Bit Reversal Algorithm Based on Recursive Patterns of Bit Reversal Permutation

    Directory of Open Access Journals (Sweden)

    K. K. L. B. Adikaram

    2014-01-01

Full Text Available With the increasing demand for online/inline data processing, efficient Fourier analysis becomes more and more relevant. Because the bit reversal process requires a considerable share of the processing time of the Fast Fourier Transform (FFT) algorithm, it is vital to optimize the bit reversal algorithm (BRA). This paper introduces an efficient BRA with multiple memory structures. In 2009, Elster showed the relation between the first and the second halves of the bit reversal permutation (BRP) and stated that exploiting it, if implemented, may seriously impact the cache performance of the computer. We found exceptions, especially when the said index mapping was implemented with multiple one-dimensional memory structures instead of a multidimensional or a single one-dimensional memory structure. We also found a new index mapping, even after the recursive splitting of the BRP into equal-sized slots. The four-array and the four-vector versions of the BRA with the new index mapping achieved 34% and 16% performance improvements, respectively, relative to the corresponding versions of Elster's linear BRA, which uses a single one-dimensional memory structure.
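For reference, a minimal single-array bit-reversal sketch (not the paper's multi-memory-structure implementation) looks as follows; Elster's relation between the two halves of the permutation is directly visible in the output:

```python
def bit_reverse(i, nbits):
    """Reverse the lowest `nbits` bits of index i."""
    r = 0
    for _ in range(nbits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

def bit_reversal_permutation(n):
    """Bit-reversal permutation of indices 0..2**n - 1, as used to
    reorder the input/output of a radix-2 FFT."""
    return [bit_reverse(i, n) for i in range(2 ** n)]

perm = bit_reversal_permutation(3)
# perm == [0, 4, 2, 6, 1, 5, 3, 7]; note the second half is the
# first half plus one: perm[i + 4] == perm[i] + 1 (Elster's relation).
```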

  4. Monitoring the informational efficiency of European corporate bond markets with dynamical permutation min-entropy

    Science.gov (United States)

    Zunino, Luciano; Bariviera, Aurelio F.; Guercio, M. Belén; Martinez, Lisana B.; Rosso, Osvaldo A.

    2016-08-01

In this paper the permutation min-entropy has been implemented to unveil the presence of temporal structures in the daily values of European corporate bond indices from April 2001 to August 2015. More precisely, the informational efficiency evolution of the prices of fifteen sectorial indices has been carefully studied by estimating this information-theory-derived symbolic tool over a sliding time window. Such a dynamical analysis makes it possible to draw relevant conclusions about the effect that the 2008 credit crisis has had on the different European corporate bond sectors. It is found that the informational efficiency of some sectors, namely banks, financial services, insurance, and basic resources, has been strongly reduced by the financial crisis, whereas another set of sectors, comprising chemicals, automobiles, media, energy, construction, industrial goods & services, technology, and telecommunications, suffered only a transitory loss of efficiency. Last but not least, the food & beverage, healthcare, and utilities sectors show behavior close to a random walk over practically the entire period of analysis, confirming a remarkable immunity against the 2008 financial crisis.

  5. The calculation of sparticle and Higgs decays in the minimal and next-to-minimal supersymmetric standard models: SOFTSUSY4.0

    Science.gov (United States)

    Allanach, B. C.; Cridge, T.

    2017-11-01

We describe a major extension of the SOFTSUSY spectrum calculator to include the calculation of the decays, branching ratios and lifetimes of sparticles into lighter sparticles, covering the next-to-minimal supersymmetric standard model (NMSSM) as well as the minimal supersymmetric standard model (MSSM). This document acts as a manual for the new version of SOFTSUSY, which includes the calculation of sparticle decays. We present a comprehensive collection of explicit expressions used by the program for the various partial widths of the different decay modes in the appendix. Program Files doi: http://dx.doi.org/10.17632/5hhwwmp43g.1 Licensing provisions: GPLv3 Programming language: C++, Fortran Nature of problem: Calculating supersymmetric particle partial decay widths in the MSSM or the NMSSM, given the parameters and spectrum which have already been calculated by SOFTSUSY. Solution method: Analytic expressions for tree-level 2-body decays and loop-level decays, and one-dimensional numerical integration for 3-body decays. Restrictions: Decays are calculated in the real R-parity conserving MSSM or the real R-parity conserving NMSSM only. No additional charge-parity violation (CPV) relative to the Standard Model (SM). Sfermion mixing has only been accounted for in the third generation of sfermions in the decay calculation. Decays in the MSSM are 2-body and 3-body, whereas decays in the NMSSM are 2-body only. Does the new version supersede the previous version?: Yes. Reasons for the new version: Significantly extended functionality. The decay rates and branching ratios of sparticles are particularly useful for collider searches. Decays calculated in the NMSSM will be a particularly useful check of the other programs in the literature, of which there are few. Summary of revisions: Addition of the calculation of sparticle and Higgs decays. All 2-body and important 3-body tree-level decays, including phenomenologically important loop-level decays (notably, Higgs decays to

  6. Minimal Length Scale Scenarios for Quantum Gravity.

    Science.gov (United States)

    Hossenfelder, Sabine

    2013-01-01

    We review the question of whether the fundamental laws of nature limit our ability to probe arbitrarily short distances. First, we examine what insights can be gained from thought experiments for probes of shortest distances, and summarize what can be learned from different approaches to a theory of quantum gravity. Then we discuss some models that have been developed to implement a minimal length scale in quantum mechanics and quantum field theory. These models have entered the literature as the generalized uncertainty principle or the modified dispersion relation, and have allowed the study of the effects of a minimal length scale in quantum mechanics, quantum electrodynamics, thermodynamics, black-hole physics and cosmology. Finally, we touch upon the question of ways to circumvent the manifestation of a minimal length scale in short-distance physics.

  7. Application of mathematical models to metronomic chemotherapy: What can be inferred from minimal parameterized models?

    Science.gov (United States)

    Ledzewicz, Urszula; Schättler, Heinz

    2017-08-10

    Metronomic chemotherapy refers to the frequent administration of chemotherapy at relatively low, minimally toxic doses without prolonged treatment interruptions. Different from conventional or maximum-tolerated-dose chemotherapy which aims at an eradication of all malignant cells, in a metronomic dosing the goal often lies in the long-term management of the disease when eradication proves elusive. Mathematical modeling and subsequent analysis (theoretical as well as numerical) have become an increasingly more valuable tool (in silico) both for determining conditions under which specific treatment strategies should be preferred and for numerically optimizing treatment regimens. While elaborate, computationally-driven patient specific schemes that would optimize the timing and drug dose levels are still a part of the future, such procedures may become instrumental in making chemotherapy effective in situations where it currently fails. Ideally, mathematical modeling and analysis will develop into an additional decision making tool in the complicated process that is the determination of efficient chemotherapy regimens. In this article, we review some of the results that have been obtained about metronomic chemotherapy from mathematical models and what they infer about the structure of optimal treatment regimens. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Large non-Gaussianity in non-minimal inflation

    CERN Document Server

    Gong, Jinn-Ouk

    2011-01-01

    We consider a simple inflation model with a complex scalar field coupled to gravity non-minimally. Both the modulus and the angular directions of the complex scalar are slowly rolling, leading to two-field inflation. The modulus direction becomes flat due to the non-minimal coupling, and the angular direction becomes a pseudo-Goldstone boson from a small breaking of the global U(1) symmetry. We show that large non-Gaussianity can be produced during slow-roll inflation under a reasonable assumption on the initial condition of the angular direction. This scenario may be realized in particle physics models such as the Standard Model with two Higgs doublets.

  9. Extending minimal repair models for repairable systems: A comparison of dynamic and heterogeneous extensions of a nonhomogeneous Poisson process

    International Nuclear Information System (INIS)

    Asfaw, Zeytu Gashaw; Lindqvist, Bo Henry

    2015-01-01

    For many applications of repairable systems, the minimal repair assumption, which leads to nonhomogeneous Poisson processes (NHPP), is not adequate. We review and study two extensions of the NHPP, the dynamic NHPP and the heterogeneous NHPP. Both extensions are motivated by specific aspects of potential applications. It has long been known, however, that the two paradigms are essentially indistinguishable in an analysis of failure data. We investigate the connection between the two approaches for extending NHPP models, both theoretically and numerically in a data example and a simulation study. - Highlights: • Review of dynamic extension of a minimal repair model (LEYP), introduced by Le Gat. • Derivation of likelihood function and comparison to NHPP model with heterogeneity. • Likelihood functions and conditional intensities are similar for the models. • ML estimation is considered for both models using a power law baseline. • A simulation study illustrates and confirms findings of the theoretical study
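Under minimal repair, the failure times form an NHPP; with the power-law baseline used in the ML estimation above, the process can be simulated by inverting the cumulative intensity (a standard construction, shown here only as an illustrative sketch, not the authors' code):

```python
import random

def simulate_nhpp_power_law(alpha, beta, horizon, rng):
    """Failure times on (0, horizon] of an NHPP with power-law
    intensity lam(t) = alpha * beta * t**(beta - 1), i.e. cumulative
    intensity Lambda(t) = alpha * t**beta.

    In Lambda-time the inter-arrival times are unit exponentials,
    so each event time is obtained by inverting Lambda.
    """
    times, cum = [], 0.0
    while True:
        cum += rng.expovariate(1.0)          # next event in Lambda-time
        t = (cum / alpha) ** (1.0 / beta)    # invert Lambda(t) = cum
        if t > horizon:
            return times
        times.append(t)

# beta > 1 models a deteriorating system: failures cluster late.
failures = simulate_nhpp_power_law(1.0, 2.0, 10.0, random.Random(42))
```

The dynamic (LEYP) and heterogeneous extensions discussed in the paper modify this baseline intensity, which is why simulated failure data from the two look so similar.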

  10. Construction schedules slack time minimizing

    Science.gov (United States)

    Krzemiński, Michał

    2017-07-01

The article presents two original models for minimizing the downtime of work brigades. The models have been developed for construction schedules executed using the uniform work method. Application of flow shop models is possible and useful for the implementation of large objects which can be divided into plots. The article also presents a condition indicating which model should be used, as well as a brief example of schedule optimization. The optimization results confirm the value of the work on the newly developed models.

  11. Searching for beyond the minimal supersymmetric standard model at the laboratory and in the sky

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ju Min

    2010-09-15

    We study the collider signals as well as Dark Matter candidates in supersymmetric models. We show that the collider signatures from a supersymmetric Grand Unification model based on the SO(10) gauge group can be distinguishable from those from the (constrained) minimal supersymmetric Standard Model, even though they share some common features. The N=2 supersymmetry has the characteristically distinct phenomenology, due to the Dirac nature of gauginos, as well as the extra adjoint scalars. We compute the cold Dark Matter relic density including a class of one-loop corrections. Finally, we discuss the detectability of neutralino Dark Matter candidate of the SO(10) model by the direct and indirect Dark Matter search experiments. (orig.)

  12. Searching for beyond the minimal supersymmetric standard model at the laboratory and in the sky

    International Nuclear Information System (INIS)

    Kim, Ju Min

    2010-09-01

    We study the collider signals as well as Dark Matter candidates in supersymmetric models. We show that the collider signatures from a supersymmetric Grand Unification model based on the SO(10) gauge group can be distinguishable from those from the (constrained) minimal supersymmetric Standard Model, even though they share some common features. The N=2 supersymmetry has the characteristically distinct phenomenology, due to the Dirac nature of gauginos, as well as the extra adjoint scalars. We compute the cold Dark Matter relic density including a class of one-loop corrections. Finally, we discuss the detectability of neutralino Dark Matter candidate of the SO(10) model by the direct and indirect Dark Matter search experiments. (orig.)

  13. Index to Nuclear Safety: a technical progress review by chrology, permuted title, and author, Volume 11(1) through Volume 20(6)

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, W B; Passiakos, M

    1980-06-01

This index to Nuclear Safety, a bimonthly technical progress review, covers articles published in Nuclear Safety, Volume 11, No. 1 (January-February 1970), through Volume 20, No. 6 (November-December 1979). It is divided into three sections: a chronological list of articles (including abstracts), followed by a permuted-title (KWIC) index and an author index. Nuclear Safety, a bimonthly technical progress review prepared by the Nuclear Safety Information Center (NSIC), covers all safety aspects of nuclear power reactors and associated facilities. Over 600 technical articles published in Nuclear Safety in the last ten years are listed in this index.

  14. Gravitino problem in minimal supergravity inflation

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Fuminori [Institute for Cosmic Ray Research, The University of Tokyo, Kashiwa, Chiba 277-8582 (Japan); Mukaida, Kyohei [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Nakayama, Kazunori [Department of Physics, Faculty of Science, The University of Tokyo, Bunkyo-ku, Tokyo 133-0033 (Japan); Terada, Takahiro, E-mail: terada@kias.re.kr [School of Physics, Korea Institute for Advanced Study (KIAS), Seoul 02455 (Korea, Republic of); Yamada, Yusuke [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94305 (United States)

    2017-04-10

    We study non-thermal gravitino production in the minimal supergravity inflation. In this minimal model utilizing orthogonal nilpotent superfields, the particle spectrum includes only graviton, gravitino, inflaton, and goldstino. We find that a substantial fraction of the cosmic energy density can be transferred to the longitudinal gravitino due to non-trivial change of its sound speed. This implies either a breakdown of the effective theory after inflation or a serious gravitino problem.

  15. Gravitino problem in minimal supergravity inflation

    Directory of Open Access Journals (Sweden)

    Fuminori Hasegawa

    2017-04-01

    Full Text Available We study non-thermal gravitino production in the minimal supergravity inflation. In this minimal model utilizing orthogonal nilpotent superfields, the particle spectrum includes only graviton, gravitino, inflaton, and goldstino. We find that a substantial fraction of the cosmic energy density can be transferred to the longitudinal gravitino due to non-trivial change of its sound speed. This implies either a breakdown of the effective theory after inflation or a serious gravitino problem.

  16. Minimal Length Scale Scenarios for Quantum Gravity

    Directory of Open Access Journals (Sweden)

    Sabine Hossenfelder

    2013-01-01

    Full Text Available We review the question of whether the fundamental laws of nature limit our ability to probe arbitrarily short distances. First, we examine what insights can be gained from thought experiments for probes of shortest distances, and summarize what can be learned from different approaches to a theory of quantum gravity. Then we discuss some models that have been developed to implement a minimal length scale in quantum mechanics and quantum field theory. These models have entered the literature as the generalized uncertainty principle or the modified dispersion relation, and have allowed the study of the effects of a minimal length scale in quantum mechanics, quantum electrodynamics, thermodynamics, black-hole physics and cosmology. Finally, we touch upon the question of ways to circumvent the manifestation of a minimal length scale in short-distance physics.

  17. Noether symmetry for non-minimally coupled fermion fields

    International Nuclear Information System (INIS)

    Souza, Rudinei C de; Kremer, Gilberto M

    2008-01-01

    A cosmological model where a fermion field is non-minimally coupled with the gravitational field is studied. By applying Noether symmetry the possible functions for the potential density of the fermion field and for the coupling are determined. Cosmological solutions are found showing that the non-minimally coupled fermion field behaves as an inflaton describing an inflationary scenario, whereas the minimally coupled fermion field describes a decelerated period, behaving as a standard matter field

  18. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  19. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile

  20. D-term Spectroscopy in Realistic Heterotic-String Models

    CERN Document Server

    Dedes, Athanasios

    2000-01-01

The emergence of free fermionic string models with solely the MSSM charged spectrum below the string scale provides further evidence for the assertion that the true string vacuum is connected to the Z_2 x Z_2 orbifold in the vicinity of the free fermionic point in the Narain moduli space. An important property of the Z_2 x Z_2 orbifold is the cyclic permutation symmetry between the three twisted sectors. If preserved in the three-generation models, the cyclic permutation symmetry results in a family-universal anomalous U(1)_A, which is instrumental in explaining squark degeneracy, provided that the dominant component of supersymmetry breaking arises from the U(1)_A D-term. Interestingly, the contribution of the family-universal D_A-term to the squark masses may be intra-family non-universal, and may differ from the usual (universal) boundary conditions assumed in the MSSM. We contemplate how D_A-term spectroscopy may be instrumental in studying superstring models irrespective of our ignorance of the details ...

  1. Supergravity contributions to inflation in models with non-minimal coupling to gravity

    Energy Technology Data Exchange (ETDEWEB)

    Das, Kumar; Dutta, Koushik [Theory Division, Saha Institute of Nuclear Physics, 1/AF Saltlake, Kolkata 700064 (India); Domcke, Valerie, E-mail: kumar.das@saha.ac.in, E-mail: valerie.domcke@apc.univ-paris7.fr, E-mail: koushik.dutta@saha.ac.in [AstroParticule et Cosmologie (APC), Paris Centre for Cosmological Physics (PCCP), Université Paris Diderot, 75013 Paris (France)

    2017-03-01

    This paper provides a systematic study of supergravity contributions relevant for inflationary model building in Jordan frame supergravity. In this framework, canonical kinetic terms in the Jordan frame result in the separation of the Jordan frame scalar potential into a tree-level term and a supergravity contribution which is potentially dangerous for sustaining inflation. We show that if the vacuum energy necessary for driving inflation originates dominantly from the F-term of an auxiliary field (i.e. not the inflaton), the supergravity corrections to the Jordan frame scalar potential are generically suppressed. Moreover, these supergravity contributions identically vanish if the superpotential vanishes along the inflationary trajectory. On the other hand, if the F-term associated with the inflaton dominates the vacuum energy, the supergravity contributions are generically comparable to the globally supersymmetric contributions. In addition, the non-minimal coupling to gravity inherent to Jordan frame supergravity significantly impacts the inflationary model depending on the size and sign of this coupling. We discuss the phenomenology of some representative inflationary models, and point out the relation to the recently much discussed cosmological 'attractor' models.

  2. Supergravity contributions to inflation in models with non-minimal coupling to gravity

    International Nuclear Information System (INIS)

    Das, Kumar; Dutta, Koushik; Domcke, Valerie

    2017-01-01

    This paper provides a systematic study of supergravity contributions relevant for inflationary model building in Jordan frame supergravity. In this framework, canonical kinetic terms in the Jordan frame result in the separation of the Jordan frame scalar potential into a tree-level term and a supergravity contribution which is potentially dangerous for sustaining inflation. We show that if the vacuum energy necessary for driving inflation originates dominantly from the F-term of an auxiliary field (i.e. not the inflaton), the supergravity corrections to the Jordan frame scalar potential are generically suppressed. Moreover, these supergravity contributions identically vanish if the superpotential vanishes along the inflationary trajectory. On the other hand, if the F-term associated with the inflaton dominates the vacuum energy, the supergravity contributions are generically comparable to the globally supersymmetric contributions. In addition, the non-minimal coupling to gravity inherent to Jordan frame supergravity significantly impacts the inflationary model depending on the size and sign of this coupling. We discuss the phenomenology of some representative inflationary models, and point out the relation to the recently much discussed cosmological 'attractor' models.

  3. Supersymmetric Hybrid Inflation with Non-Minimal Kähler potential

    CERN Document Server

    Bastero-Gil, M; Shafi, Q

    2007-01-01

Minimal supersymmetric hybrid inflation based on a minimal Kähler potential predicts a spectral index n_s ≳ 0.98. On the other hand, WMAP three-year data prefer a central value n_s ≈ 0.95. We propose a class of supersymmetric hybrid inflation models based on the same minimal superpotential but with a non-minimal Kähler potential. Including radiative corrections using the one-loop effective potential, we show that the prediction for the spectral index is sensitive to the small non-minimal corrections, and can lead to a significantly red-tilted spectrum, in agreement with WMAP.

  4. Image denoising by a direct variational minimization

    Directory of Open Access Journals (Sweden)

    Pilipović Stevan

    2011-01-01

Full Text Available In this article we introduce a novel method for image de-noising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses the fractional gradient. The minimization is carried out on every predefined patch of the image independently. By doing so, we avoid the use of an artificial-time PDE model with its inherent problems of finding the optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with several PDE-based methods, where we obtain significantly better denoising results, especially in oscillatory regions.
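The per-patch control of smoothing strength can be caricatured in a few lines: each patch is pulled toward its mean with a strength that decreases where local variation (a likely discontinuity) is high. This is only a toy 1-D stand-in for the paper's fractional-gradient functional and Ritz minimization, with an invented variance-based rule playing the role of the adapted Lagrange multiplier:

```python
def denoise_patchwise(signal, patch_len=4):
    """Toy patch-adaptive smoothing of a 1-D signal: flat, noisy
    patches (low variance) are pulled strongly toward their mean,
    while patches containing edges (high variance) are left nearly
    untouched."""
    out = []
    for start in range(0, len(signal), patch_len):
        patch = signal[start:start + patch_len]
        mean = sum(patch) / len(patch)
        var = sum((x - mean) ** 2 for x in patch) / len(patch)
        strength = 1.0 / (1.0 + var)  # stand-in for the multiplier
        out.extend(strength * mean + (1.0 - strength) * x for x in patch)
    return out
```

The point of the illustration: because the smoothing level is chosen per patch from the data itself, there is no global stopping time to tune, which is exactly the problem the direct minimization avoids.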

  5. Bounds on the Higgs mass in the standard model and the minimal supersymmetric standard model

    CERN Document Server

    Quiros, M.

    1995-01-01

Depending on the Higgs-boson and top-quark masses, M_H and M_t, the effective potential of the Standard Model can develop a non-standard minimum for values of the field much larger than the weak scale. In those cases the standard minimum becomes metastable and the possibility of decay to the non-standard one arises. Comparison of the decay rate to the non-standard minimum at finite (and zero) temperature with the corresponding expansion rate of the Universe allows one to identify the region, in the (M_H, M_t) plane, where the Higgs field is sitting at the standard electroweak minimum. In the Minimal Supersymmetric Standard Model, approximate analytical expressions for the Higgs mass spectrum and couplings are worked out, providing an excellent approximation to the numerical results which include all next-to-leading-log corrections. An appropriate treatment of squark decoupling allows one to consider large values of the stop and/or sbottom mixing parameters and thus fix a reliable upper bound on the mass o...

  6. Mathematical models for a batch scheduling problem to minimize earliness and tardiness

    Directory of Open Access Journals (Sweden)

    Basar Ogun

    2018-05-01

    Full Text Available Purpose: Today’s manufacturing facilities are challenged by highly customized products and just-in-time manufacturing and delivery of these products. In this study, a batch scheduling problem is addressed to provide on-time completion of customer orders in the environment of lean manufacturing. The problem is to optimize the partitioning of product components into batches and the scheduling of the resulting batches, where each customer order is received as a set of products made of various components. Design/methodology/approach: Three different mathematical models for minimization of total earliness and tardiness of customer orders are developed to provide on-time completion of customer orders and also to avoid inventory of final products. The first model is a non-linear integer programming model, while the second is a linearized version of the first. Finally, to solve larger sized instances of the problem, an alternative linear integer model is presented. Findings: A computational study using a suite of test instances showed that the alternative linear integer model is able to solve all test instances of varying sizes in considerably shorter computation times than the other two models. It was also shown that the alternative model can solve moderately sized real-world problems. Originality/value: The problem under study differs from existing batch scheduling problems in the literature since it includes new circumstances which may arise in real-world applications. This research also contributes to the literature on batch scheduling by presenting new optimization models.
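    The objective shared by the three models, total earliness plus tardiness of customer orders, can be evaluated in a few lines. This sketch covers only the objective evaluation, not the authors' integer programming formulations; the function and variable names are ours.

    ```python
    def total_earliness_tardiness(completion_times, due_dates):
        """Total earliness plus tardiness over a set of customer orders.

        For each order with completion time C and due date d, earliness is
        max(0, d - C) and tardiness is max(0, C - d); the objective sums both.
        """
        earliness = sum(max(0, d - c) for c, d in zip(completion_times, due_dates))
        tardiness = sum(max(0, c - d) for c, d in zip(completion_times, due_dates))
        return earliness + tardiness
    ```

    A schedule that completes every order exactly on its due date scores zero; any deviation in either direction is penalized symmetrically.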

  7. Replica Approach for Minimal Investment Risk with Cost

    Science.gov (United States)

    Shinzato, Takashi

    2018-06-01

    In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.

  8. The Los Alamos National Laboratory Chemistry and Metallurgy Research Facility upgrades project - A model for waste minimization

    International Nuclear Information System (INIS)

    Burns, M.L.; Durrer, R.E.; Kennicott, M.A.

    1996-07-01

    The Los Alamos National Laboratory (LANL) Chemistry and Metallurgy Research (CMR) Facility, constructed in 1952, is currently undergoing a major, multi-year construction project. Many of the operations required under this project (i.e., design, demolition, decontamination, construction, and waste management) mimic the processes required of a large-scale decontamination and decommissioning (D&D) job and are identical to the requirements of any of several upgrades projects anticipated for LANL and other Department of Energy (DOE) sites. For these reasons the CMR Upgrades Project is seen as an ideal model facility in which to test the application, and measure the success, of waste minimization techniques which could be brought to bear on any of the similar projects. The purpose of this paper is to discuss the past, present, and anticipated waste minimization applications at the facility, focusing on the development and execution of the project's "Waste Minimization/Pollution Prevention Strategic Plan."

  9. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    Science.gov (United States)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used for the formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption has been found using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.

  10. A fault diagnosis scheme for planetary gearboxes using adaptive multi-scale morphology filter and modified hierarchical permutation entropy

    Science.gov (United States)

    Li, Yongbo; Li, Guoyan; Yang, Yuantao; Liang, Xihui; Xu, Minqiang

    2018-05-01

    The fault diagnosis of planetary gearboxes is crucial to reduce maintenance costs and economic losses. This paper proposes a novel fault diagnosis method based on an adaptive multi-scale morphological filter (AMMF) and modified hierarchical permutation entropy (MHPE) to identify the different health conditions of planetary gearboxes. In this method, AMMF is first adopted to remove the fault-unrelated components and enhance the fault characteristics. Second, MHPE is utilized to extract the fault features from the denoised vibration signals. Third, the Laplacian score (LS) approach is employed to refine the fault features. Finally, the obtained features are fed into the binary tree support vector machine (BT-SVM) to accomplish the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault categories of planetary gearboxes.
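    The core ingredient of the MHPE feature above, permutation entropy, is simple to compute. The sketch below is the standard single-scale permutation entropy, not the modified hierarchical variant the authors propose; the function name and default parameters are ours.

    ```python
    import math
    from collections import Counter

    def permutation_entropy(x, m=3, tau=1):
        """Normalized permutation entropy of a 1-D signal.

        Counts the ordinal patterns of length m (with delay tau) along the
        signal and returns their Shannon entropy divided by log(m!), so the
        result lies in [0, 1]: 0 for a monotone signal, near 1 for noise.
        """
        n = len(x) - (m - 1) * tau
        if n <= 0:
            raise ValueError("signal too short for the chosen m and tau")
        # ordinal pattern = ranking of the m delayed samples
        patterns = Counter(
            tuple(sorted(range(m), key=lambda k: x[i + k * tau]))
            for i in range(n)
        )
        probs = [c / n for c in patterns.values()]
        h = -sum(p * math.log(p) for p in probs)
        return h / math.log(math.factorial(m))
    ```

    A healthy-versus-faulty contrast would show up as a shift in this value across vibration segments, which is the intuition behind using it as a fault feature.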

  11. Minimally Disruptive Medicine: A Pragmatically Comprehensive Model for Delivering Care to Patients with Multiple Chronic Conditions

    Directory of Open Access Journals (Sweden)

    Aaron L. Leppin

    2015-01-01

    Full Text Available An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM) is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients’ lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice.

  12. The Application of the SCT and the ANP Model to Refine the Most Critical ICT Determinants in Minimizing the Digital Divide

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Hsieh

    2014-01-01

    Full Text Available This research cross-employed the social cognition theory (SCT) and the analytical network process (ANP) model in qualitative analyses and the multiple criteria decision making (MCDM) methodology in quantitative measurements to comprehensively re-identify the most critical information communication technology (ICT) determinants in minimizing the digital divide (DD). After measuring the complete importance of the related priority weight w (eigenvector) from the survey results given to various experts, the complete synthetically comparative index numbers (SCIN) from the evaluation model conveyed two distinct consequences. First, the highest SCIN scores for the ANP, F-ANP, and G-ANP models are 0.5516, 0.3771, and 0.4791, respectively, and are located in the “ICT can positively minimize the level of DD” (ICTCPMDD) column of the summarized results table. Second, the highest evaluated weighted score among the assessed criteria, 0.5801, is located in the Diversified Mobile Applications (DMAPP) column. From these results, two contributive findings can be deduced. First, ICT indeed has determinative influences on, and interplays with, the level of DD, which means that undeveloped, developing, and emerging regions are apparently able to minimize the level of DD by means of ICT development. Second, Diversified Mobile Applications (DMAPP) is the most critical ICT determinant in minimizing the level of the digital divide.

  13. Minimal Left-Right Symmetric Dark Matter.

    Science.gov (United States)

    Heeck, Julian; Patra, Sudhanwa

    2015-09-18

    We show that left-right symmetric models can easily accommodate stable TeV-scale dark matter particles without the need for an ad hoc stabilizing symmetry. The stability of a newly introduced multiplet either arises accidentally as in the minimal dark matter framework or comes courtesy of the remaining unbroken Z_{2} subgroup of B-L. Only one new parameter is introduced: the mass of the new multiplet. As minimal examples, we study left-right fermion triplets and quintuplets and show that they can form viable two-component dark matter. This approach is, in particular, valid for SU(2)×SU(2)×U(1) models that explain the recent diboson excess at ATLAS in terms of a new charged gauge boson of mass 2 TeV.

  14. A Scalable Permutation Approach Reveals Replication and Preservation Patterns of Network Modules in Large Datasets.

    Science.gov (United States)

    Ritchie, Scott C; Watts, Stephen; Fearnley, Liam G; Holt, Kathryn E; Abraham, Gad; Inouye, Michael

    2016-07-01

    Network modules (topologically distinct groups of edges and nodes) that are preserved across datasets can reveal common features of organisms, tissues, cell types, and molecules. Many statistics to identify such modules have been developed, but testing their significance requires heuristics. Here, we demonstrate that current methods for assessing module preservation are systematically biased and produce skewed p values. We introduce NetRep, a rapid and computationally efficient method that uses a permutation approach to score module preservation without assuming data are normally distributed. NetRep produces unbiased p values and can distinguish between true and false positives during multiple hypothesis testing. We use NetRep to quantify preservation of gene coexpression modules across murine brain, liver, adipose, and muscle tissues. Complex patterns of multi-tissue preservation were revealed, including a liver-derived housekeeping module that displayed adipose- and muscle-specific association with body weight. Finally, we demonstrate the broader applicability of NetRep by quantifying preservation of bacterial networks in gut microbiota between men and women. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
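    The general label-permutation scheme NetRep builds on can be illustrated with a much simpler statistic. The sketch below is a generic two-sample permutation test for a difference in means, not NetRep's module-preservation statistics; the function name and defaults are ours.

    ```python
    import random

    def permutation_test(x, y, n_perm=10000, seed=42):
        """Two-sided permutation p-value for a difference in means.

        Pools the two samples, repeatedly reshuffles the group labels, and
        counts how often the permuted statistic is at least as extreme as
        the observed one. No distributional assumption is made.
        """
        rng = random.Random(seed)
        observed = abs(sum(x) / len(x) - sum(y) / len(y))
        pooled = list(x) + list(y)
        hits = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            px, py = pooled[: len(x)], pooled[len(x):]
            if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
                hits += 1
        # add-one correction keeps the p-value strictly positive
        return (hits + 1) / (n_perm + 1)
    ```

    Replacing the mean difference with a module-preservation score and permuting node assignments gives the flavor of the approach described in the abstract.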

  15. AGT, Burge pairs and minimal models

    International Nuclear Information System (INIS)

    Bershtein, M.; Foda, O.

    2014-01-01

    We consider the AGT correspondence in the context of the conformal field theory M_{p,p′} ⊗ M^H, where M_{p,p′} is the minimal model based on the Virasoro algebra V_{p,p′} labeled by two co-prime integers {p, p′}, 1

  16. Minimal duality breaking in the Kallen-Lehman approach to 3D Ising model: A numerical test

    International Nuclear Information System (INIS)

    Astorino, Marco; Canfora, Fabrizio; Martinez, Cristian; Parisi, Luca

    2008-01-01

    A Kallen-Lehman approach to the 3D Ising model is analyzed numerically both at low and high temperatures. It is shown that, even assuming a minimal duality breaking, one can fix three parameters of the model to get a very good agreement with the Monte Carlo results at high temperatures. With the same parameters the agreement is satisfactory both at low and near-critical temperatures. How to improve the agreement with Monte Carlo results by introducing a more general duality breaking is briefly discussed.

  17. Particle production after inflation with non-minimal derivative coupling to gravity

    International Nuclear Information System (INIS)

    Ema, Yohei; Jinno, Ryusuke; Nakayama, Kazunori; Mukaida, Kyohei

    2015-01-01

    We study cosmological evolution after inflation in models with non-minimal derivative coupling to gravity. The background dynamics is solved and particle production associated with the rapidly oscillating Hubble parameter is studied in detail. In addition, production of gravitons through the non-minimal derivative coupling with the inflaton is studied. We also find that the sound speed squared of the scalar perturbation oscillates between positive and negative values when the non-minimal derivative coupling dominates over the minimal kinetic term. This may lead to an instability of this model. We point out that the particle production rates are the same as those in Einstein gravity with the minimal kinetic term, if we require that the sound speed squared be positive definite.

  18. Theories of minimalism in architecture: Post scriptum

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2012-01-01

    Full Text Available Owing to its period of intensive development in the last decade of the XX century, the architectural phenomenon called Minimalism in Architecture is remembered as the Style of the Nineties, characterized, morphologically speaking, by simplicity and formal reduction. Simultaneously with its development in practice, several dominant interpretative models established themselves on a theoretical level. The new millennium and time distance bring new problems; therefore this paper discusses specific theorizations of Minimalism in Architecture that can bear the designation post scriptum, because their development starts after the constitutional period of architectural minimalist discourse. In XXI century theories, the problem of defining minimalism remains an important topic, approached by theorists through resolving on the axis: Modernism - Minimal Art - Postmodernism - Minimalism in Architecture. In this regard, the analyzed texts can be categorized into two groups: (1) texts of an affirmative nature and historical-associative approach, in which minimalism is identified, in an idealizing manner, with anything that is simple and reduced, relying mostly on existing hypotheses; (2) critically oriented texts, in which authors reconsider the adequacy of the very term 'minimalism' in the context of architecture and take a metacritical attitude towards previous texts.

  19. Non-minimal Higgs inflation and frame dependence in cosmology

    International Nuclear Information System (INIS)

    Steinwachs, Christian F.; Kamenshchik, Alexander Yu.

    2013-01-01

    We investigate a very general class of cosmological models with scalar fields non-minimally coupled to gravity. A particular representative in this class is given by the non-minimal Higgs inflation model in which the Standard Model Higgs boson and the inflaton are described by one and the same scalar particle. While the predictions of the non-minimal Higgs inflation scenario come numerically remarkably close to the recently discovered mass of the Higgs boson, there remains a conceptual problem in this model that is associated with the choice of the cosmological frame. While the classical theory is independent of this choice, we find by an explicit calculation that already the first quantum corrections induce a frame dependence. We give a geometrical explanation of this frame dependence by embedding it into a more general field theoretical context. From this analysis, some conceptual points in the long-lasting cosmological debate 'Jordan frame vs. Einstein frame' become more transparent and can in principle be resolved in a natural way.

  20. Targeting the minimal supersymmetric standard model with the compact muon solenoid experiment

    Science.gov (United States)

    Bein, Samuel Louis

    An interpretation of CMS searches for evidence of supersymmetry in the context of the minimal supersymmetric Standard Model (MSSM) is given. It is found that supersymmetric particles with color charge are excluded in the mass range below about 400 GeV, but neutral and weakly-charged sparticles remain non-excluded in all mass ranges. Discussion of the non-excluded regions of the model parameter space is given, including details on the strengths and weaknesses of existing searches, and recommendations for future analysis strategies. Advancements in the modeling of events arising from quantum chromodynamics and electroweak boson production, which are major backgrounds in searches for new physics at the LHC, are also presented. These methods have been implemented as components of CMS searches for supersymmetry in proton-proton collisions resulting in purely hadronic events (i.e., events with no identified leptons) at a center-of-momentum energy of 13 TeV. These searches, interpreted in the context of simplified models, exclude supersymmetric gluons (gluinos) up to masses of 1400 to 1600 GeV, depending on the model considered, and exclude scalar top quarks with masses up to about 800 GeV, assuming a massless lightest supersymmetric particle. A search for non-excluded supersymmetry models is also presented, which uses multivariate discriminants to isolate potential signal candidate events. The search achieves sensitivity to new physics models in background-dominated kinematic regions not typically considered by analyses, and rules out supersymmetry models that survived 7 and 8 TeV searches performed by CMS.

  1. A combinatorial and probabilistic study of initial and end heights of descents in samples of geometrically distributed random variables and in permutations

    Directory of Open Access Journals (Sweden)

    Helmut Prodinger

    2007-01-01

    Full Text Available In words generated by independent geometrically distributed random variables, we study the l-th descent, which is, roughly speaking, the l-th occurrence of a neighbouring pair ab with a>b. The value a is called the initial height, and b the end height. We study these two random variables (and some similar ones) by combinatorial and probabilistic tools. We find in all instances a generating function Ψ(v,u), where the coefficient of v^j u^i refers to the j-th descent (ascent) and i to the initial (end) height. From this, various conclusions can be drawn, in particular expected values. In the probabilistic part, a Markov chain model is used, which allows one to get explicit expressions for the heights of the second descent. In principle, one could go further, but the complexity of the results forbids it. This is extended to permutations of a large number of elements. Methods from q-analysis are used to simplify the expressions. This is the reason that we confine ourselves to the geometric distribution only. For general discrete distributions, no such tools are available.
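    The combinatorial object under study is easy to make concrete. A minimal sketch (function name ours) that returns the initial and end heights of the l-th descent of a word, following the definition in the abstract:

    ```python
    def lth_descent_heights(word, l):
        """Return (a, b) for the l-th descent of `word`, i.e. the l-th
        neighbouring pair (a, b) with a > b; a is the initial height and
        b the end height. Returns None if fewer than l descents exist.
        """
        count = 0
        for a, b in zip(word, word[1:]):
            if a > b:
                count += 1
                if count == l:
                    return (a, b)
        return None
    ```

    For the word 3 1 4 1 5 the first descent is the pair (3, 1) and the second is (4, 1); the paper studies the distribution of these heights when the letters are i.i.d. geometric.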

  2. Qualitative analysis of cosmological models in Brans-Dicke theory, solutions from non-minimal coupling and viscous universe

    International Nuclear Information System (INIS)

    Romero Filho, C.A.

    1988-01-01

    Using dynamical system theory we investigate homogeneous and isotropic models in Brans-Dicke theory for perfect fluids with general equation of state and arbitrary ω. Phase diagrams are drawn on the Poincare sphere which permits a qualitative analysis of the models. Based on this analysis we construct a method for generating classes of solutions in Brans-Dicke theory. The same technique is used for studying models arising from non-minimal coupling of electromagnetism with gravity. In addition, viscous fluids are considered and non-singular solutions with bulk viscosity are found. (author)

  3. Physics on the smallest scales: an introduction to minimal length phenomenology

    International Nuclear Information System (INIS)

    Sprenger, Martin; Nicolini, Piero; Bleicher, Marcus

    2012-01-01

    Many modern theories which try to unify gravity with the Standard Model of particle physics, such as string theory, propose two key modifications to the commonly known physical theories: the existence of additional space dimensions, and the existence of a minimal length distance or maximal resolution. While extra dimensions have received wide coverage in publications over the last ten years (especially due to the prediction of micro black hole production at the Large Hadron Collider), the phenomenology of models with a minimal length is still less investigated. In a summer study project for bachelor students in 2010, we explored some phenomenological implications of the potential existence of a minimal length. In this paper, we review the idea and formalism of a quantum gravity-induced minimal length in the generalized uncertainty principle framework as well as in the coherent state approach to non-commutative geometry. These approaches are effective models which can make model-independent predictions for experiments and are ideally suited for phenomenological studies. Pedagogical examples are provided to grasp the effects of a quantum gravity-induced minimal length. This paper is intended for graduate students and non-specialists interested in quantum gravity. (paper)
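    The generalized uncertainty principle framework mentioned here is commonly written as a one-parameter deformation of the Heisenberg relation (textbook form, with β the deformation parameter):

    ```latex
    \Delta x\,\Delta p \;\geq\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right)
    \quad\Longrightarrow\quad
    \Delta x \;\geq\; \frac{\hbar}{2}\left(\frac{1}{\Delta p} + \beta\,\Delta p\right).
    ```

    Minimizing the right-hand side over \Delta p (the minimum sits at \Delta p = 1/\sqrt{\beta}) gives a smallest resolvable length \Delta x_{\min} = \hbar\sqrt{\beta}, which is the "maximal resolution" the abstract refers to.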

  4. Warm inflation with an oscillatory inflaton in the non-minimal kinetic coupling model

    Energy Technology Data Exchange (ETDEWEB)

    Goodarzi, Parviz [University of Ayatollah Ozma Borujerdi, Department of Science, Boroujerd (Iran, Islamic Republic of); Sadjadi, H.M. [University of Tehran, Department of Physics, Tehran (Iran, Islamic Republic of)

    2017-07-15

    In the cold inflation scenario, slow roll inflation and reheating via coherent rapid oscillation are usually considered as two distinct eras. When the slow roll ends, a rapid oscillation phase begins and the inflaton decays to relativistic particles, reheating the Universe. In another model, dubbed warm inflation, the rapid oscillation phase is suppressed, and we are left with only a slow roll period during which the reheating occurs. Instead, in this paper, we propose a new picture for inflation in which the slow roll era is suppressed and only the rapid oscillation phase exists. Radiation generation during this era is taken into account, so we have warm inflation with an oscillatory inflaton. To provide enough e-folds, we employ the non-minimal derivative coupling model. We study the cosmological perturbations and compute the temperature at the end of warm oscillatory inflation. (orig.)

  5. Warm inflation with an oscillatory inflaton in the non-minimal kinetic coupling model

    International Nuclear Information System (INIS)

    Goodarzi, Parviz; Sadjadi, H.M.

    2017-01-01

    In the cold inflation scenario, slow roll inflation and reheating via coherent rapid oscillation are usually considered as two distinct eras. When the slow roll ends, a rapid oscillation phase begins and the inflaton decays to relativistic particles, reheating the Universe. In another model, dubbed warm inflation, the rapid oscillation phase is suppressed, and we are left with only a slow roll period during which the reheating occurs. Instead, in this paper, we propose a new picture for inflation in which the slow roll era is suppressed and only the rapid oscillation phase exists. Radiation generation during this era is taken into account, so we have warm inflation with an oscillatory inflaton. To provide enough e-folds, we employ the non-minimal derivative coupling model. We study the cosmological perturbations and compute the temperature at the end of warm oscillatory inflation. (orig.)

  6. Minimal Coleman-Weinberg theory explains the diphoton excess

    DEFF Research Database (Denmark)

    Antipin, Oleg; Mojaza, Matin; Sannino, Francesco

    2016-01-01

    It is possible to delay the hierarchy problem by replacing the standard Higgs sector with the Coleman-Weinberg mechanism, and at the same time ensure perturbative naturalness through the so-called Veltman conditions. As we showed in a previous study, minimal models of this type require the introdu...

  7. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    Science.gov (United States)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the most well-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints on the inputs. Moreover, we prove that it belongs to the class C^1.
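    For the unconstrained two-input case, the conditional factor demands and the cost function have a textbook closed form; the paper's contribution is handling the box (maximum-input) constraints, which the sketch below omits. Function and variable names are ours.

    ```python
    def cobb_douglas_cost(w1, w2, a, b, y):
        """Minimal cost of producing output y with technology x1**a * x2**b
        at input prices w1, w2 (interior solution, no box constraints).

        The Lagrange conditions give the conditional factor demands
          x1* = y**(1/(a+b)) * (a*w2 / (b*w1))**(b/(a+b))
          x2* = y**(1/(a+b)) * (b*w1 / (a*w2))**(a/(a+b))
        and the cost function C(w, y) = w1*x1* + w2*x2*.
        """
        s = a + b
        x1 = y ** (1 / s) * (a * w2 / (b * w1)) ** (b / s)
        x2 = y ** (1 / s) * (b * w1 / (a * w2)) ** (a / s)
        return w1 * x1 + w2 * x2, (x1, x2)
    ```

    With binding maximum constraints, the optimum instead clamps the constrained input at its bound and solves for the other from the production constraint, which is what produces the piecewise (but still C^1) cost function studied in the paper.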

  8. Right-handed quark mixings in minimal left-right symmetric model with general CP violation

    International Nuclear Information System (INIS)

    Zhang Yue; Ji Xiangdong; An Haipeng; Mohapatra, R. N.

    2007-01-01

    We solve systematically for the right-handed quark mixings in the minimal left-right symmetric model, which generally has both explicit and spontaneous CP violation. The leading-order result has the same hierarchical structure as the left-handed Cabibbo-Kobayashi-Maskawa mixing, but with additional CP phases originating from a spontaneous CP-violating phase in the Higgs vacuum expectation values. We explore the phenomenology entailed by the new right-handed mixing matrix, particularly the bounds on the mass of W_R and the CP phase of the Higgs vacuum expectation values.

  9. Axion dark matter and Planck favor non-minimal couplings to gravity

    Energy Technology Data Exchange (ETDEWEB)

    Folkerts, Sarah, E-mail: sarah.folkerts@lmu.de [Arnold Sommerfeld Center, Ludwig-Maximilians-University, Theresienstr. 37, 80333 München (Germany); Germani, Cristiano, E-mail: cristiano.germani@lmu.de [Arnold Sommerfeld Center, Ludwig-Maximilians-University, Theresienstr. 37, 80333 München (Germany); Redondo, Javier, E-mail: javier.redondo@lmu.de [Arnold Sommerfeld Center, Ludwig-Maximilians-University, Theresienstr. 37, 80333 München (Germany); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München (Germany)

    2014-01-20

    Constraints on inflationary scenarios and isocurvature perturbations have excluded the simplest and most generic models of dark matter based on QCD axions. Considering non-minimal kinetic couplings of scalar fields to gravity substantially changes this picture. The axion can account for the observed dark matter density avoiding the overproduction of isocurvature fluctuations. Finally, we show that assuming the same non-minimal kinetic coupling to the axion (dark matter) and to the standard model Higgs boson (inflaton) provides a minimal picture of early time cosmology.

  10. Minimization In Digital Design As A Meta-Planning Problem

    Science.gov (United States)

    Ho, William P. C.; Wu, Jung-Gen

    1987-05-01

    In our model-based expert system for automatic digital system design, we formalize the design process into three sub-processes - compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two sub-processes. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.

  11. The quantum group structure of 2D gravity and minimal models. Pt. 1

    International Nuclear Information System (INIS)

    Gervais, J.L.

    1990-01-01

    On the unit circle, an infinite family of chiral operators is constructed, whose exchange algebra is given by the universal R-matrix of the quantum group SL(2)_q. This establishes the precise connection between the chiral algebra of two-dimensional gravity or minimal models and this quantum group. The method is to relate the monodromy properties of the operator differential equations satisfied by the generalized vertex operators with the exchange algebra of SL(2)_q. The formulae so derived, which generalize an earlier particular case worked out by Babelon, are remarkably compact and may be entirely written in terms of 'q-deformed' factorials and binomial coefficients. (orig.)
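    The 'q-deformed' factorials and binomial coefficients mentioned at the end are standard objects and easy to evaluate numerically. A sketch (names ours), valid for numeric q away from roots of unity:

    ```python
    def q_int(n, q):
        """q-deformation of the integer n: [n]_q = 1 + q + ... + q**(n-1)."""
        return sum(q ** k for k in range(n))

    def q_factorial(n, q):
        """q-factorial [n]_q! = [1]_q [2]_q ... [n]_q."""
        out = 1
        for k in range(1, n + 1):
            out *= q_int(k, q)
        return out

    def q_binomial(n, m, q):
        """Gaussian binomial coefficient [n choose m]_q."""
        return q_factorial(n, q) / (q_factorial(m, q) * q_factorial(n - m, q))
    ```

    At q = 1 these reduce to the ordinary integers, factorials, and binomial coefficients, which is why formulas written in terms of them are called q-deformations.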

  12. Non-minimal Higgs inflation and frame dependence in cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Steinwachs, Christian F. [School of Mathematical Sciences, University of Nottingham University Park, Nottingham, NG7 2RD (United Kingdom); Kamenshchik, Alexander Yu. [Dipartimento di Fisica e Astronomia and INFN, Via Irnerio 46, 40126 Bologna, Italy and L.D. Landau Institute for Theoretical Physics of the Russian Academy of Sciences, Kosygin str. 2, 119334 Moscow (Russian Federation)

    2013-02-21

    We investigate a very general class of cosmological models with scalar fields non-minimally coupled to gravity. A particular representative in this class is given by the non-minimal Higgs inflation model in which the Standard Model Higgs boson and the inflaton are described by one and the same scalar particle. While the predictions of the non-minimal Higgs inflation scenario come numerically remarkably close to the recently discovered mass of the Higgs boson, there remains a conceptual problem in this model that is associated with the choice of the cosmological frame. While the classical theory is independent of this choice, we find by an explicit calculation that already the first quantum corrections induce a frame dependence. We give a geometrical explanation of this frame dependence by embedding it into a more general field theoretical context. From this analysis, some conceptual points in the long-lasting cosmological debate 'Jordan frame vs. Einstein frame' become more transparent and can in principle be resolved in a natural way.

  13. A bootstrap based space-time surveillance model with an application to crime occurrences

    Science.gov (United States)

    Kim, Youngho; O'Kelly, Morton

    2008-06-01

    This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap-based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population-at-risk data to generate expected values, produce hotspots bounded by administrative area units and are of limited use for near-real-time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significance tests. Consequently, the bootstrap-based model, without the requirement of population-at-risk data, (1) is free from administrative area restrictions, (2) enables more frequent surveillance for continuously updated registry databases, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, in the year 2000.
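    The idea of deriving expected values from past occurrences and testing the current count by resampling can be sketched for a single surveillance cell. This is a simplified one-cell bootstrap, not the authors' full space-time model; the function name and defaults are ours.

    ```python
    import random

    def bootstrap_hotspot_pvalue(history, current, n_boot=5000, seed=1):
        """One-sided bootstrap p-value that the current count exceeds what
        past occurrence counts would predict for this cell.

        Resamples the historical counts with replacement and asks how often
        a resampled mean reaches the current count; a small p-value flags
        an emerging hotspot.
        """
        rng = random.Random(seed)
        k = len(history)
        hits = sum(
            1 for _ in range(n_boot)
            if sum(rng.choice(history) for _ in range(k)) / k >= current
        )
        # add-one correction keeps the estimate strictly positive
        return (hits + 1) / (n_boot + 1)
    ```

    Because the null distribution comes from the cell's own history rather than population-at-risk data, the procedure is not tied to administrative area units, which is the property the abstract emphasizes.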

  14. On the Higgs-like boson in the minimal supersymmetric 3-3-1 model

    Science.gov (United States)

    Ferreira, J. G.; Pires, C. A. de S.; da Silva, P. S. Rodrigues; Siqueira, Clarissa

    2018-03-01

    It is imperative that any proposal of new physics beyond the standard model possess a Higgs-like boson with a mass of 125 GeV and couplings to the standard particles that recover the branching ratios and signal strengths measured by CMS and ATLAS. We address this issue within the supersymmetric version of the minimal 3-3-1 model. For this we develop the Higgs potential, focusing on the lightest Higgs provided by the model, and verify whether it recovers the properties of the Standard Model Higgs. We calculate its mass up to one-loop level, taking into account all contributions provided by the model. As for its couplings, we restrict our investigation to couplings of the Higgs-like boson with the standard particles only. We then calculate the dominant branching ratios and the respective signal strengths and confront our results with the recent measurements of CMS and ATLAS. As a distinctive aspect, our Higgs-like boson mediates flavor-changing neutral processes and has the decay t → hc as a signature. We calculate its branching ratio and compare it with current bounds. We also show that the Higgs potential of the model is stable in the region of parameter space employed in our calculations.

  15. A Singlet Extension of the Minimal Supersymmetric Standard Model: Towards a More Natural Solution to the Little Hierarchy Problem

    Energy Technology Data Exchange (ETDEWEB)

    de la Puente, Alejandro [Univ. of Notre Dame, IN (United States)

    2012-05-01

    In this work, I present a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), with an explicit μ-term and a supersymmetric mass for the singlet superfield, as a route to alleviating the little hierarchy problem of the Minimal Supersymmetric Standard Model (MSSM). I analyze two limiting cases of the model, characterized by the size of the supersymmetric mass for the singlet superfield. The small and large limits of this mass parameter are studied, and I find that masses for the lightest neutral Higgs boson of up to 140 GeV can be generated with top squarks below the TeV scale, all couplings perturbative up to the gauge unification scale, and no need to fine-tune parameters in the scalar potential. This model, which I call the S-MSSM, is also embedded in a gauge-mediated supersymmetry-breaking scheme. I find that even with a minimal embedding of the S-MSSM into a gauge-mediated scheme, the mass of the lightest Higgs boson can easily be above 114 GeV while the top squarks stay below the TeV scale. Furthermore, I study the forward-backward asymmetry in the t-tbar system within the framework of the S-MSSM. For this purpose, non-renormalizable couplings between the first and third generations of quarks and scalars are introduced. The two limiting cases of the S-MSSM, characterized by the size of the supersymmetric mass for the singlet superfield, are analyzed, and I find that in the region of small singlet supersymmetric mass a large asymmetry can be obtained while remaining consistent with constraints from flavor physics, quark masses and top-quark decays.

  16. Electroweak precision observables in the minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Heinemeyer, S.; Hollik, W.; Weiglein, G.

    2006-01-01

    The current status of electroweak precision observables in the Minimal Supersymmetric Standard Model (MSSM) is reviewed. We focus in particular on the W boson mass, M_W, the effective leptonic weak mixing angle, sin^2(θ_eff), the anomalous magnetic moment of the muon, (g-2)_μ, and the lightest CP-even MSSM Higgs boson mass, m_h. We summarize the current experimental situation and the status of the theoretical evaluations. An estimate of the current theoretical uncertainties from unknown higher-order corrections and from the experimental errors of the input parameters is given. We discuss future prospects for both the experimental accuracies and the precision of the theoretical predictions. Confronting the precision data with the theory predictions within the unconstrained MSSM and within specific SUSY-breaking scenarios, we analyse how well the data are described by the theory. The mSUGRA scenario with cosmological constraints yields a very good fit to the data, showing a clear preference for a relatively light mass scale of the SUSY particles. The constraints on the parameter space from the precision data are discussed, and it is shown that the prospective accuracy at the next generation of colliders will enhance the sensitivity of the precision tests very significantly.

  17. A minimal model of epithelial tissue dynamics and its application to the corneal epithelium

    Science.gov (United States)

    Henkes, Silke; Matoz-Fernandez, Daniel; Kostanjevec, Kaja; Coburn, Luke; Sknepnek, Rastko; Collinson, J. Martin; Martens, Kirsten

    Epithelial cell sheets are characterized by a complex interplay of active drivers, including cell motility, cell division and extrusion. Here we construct a particle-based minimal model tissue with only division/death dynamics and show that it always corresponds to a liquid state with a single dynamic time scale set by the division rate, and that no glassy phase is possible. Building on this, we construct an in-silico model of the mammalian corneal epithelium as such a tissue confined to a hemisphere bordered by the limbal stem cell zone. With added cell motility dynamics we are able to explain the steady-state spiral migration on the cornea, including the central vortex defect, and quantitatively compare it to eyes obtained from mice that are X-inactivation mosaic for LacZ.

  18. Permutation Tests of Hierarchical Cluster Analyses of Carrion Communities and Their Potential Use in Forensic Entomology.

    Science.gov (United States)

    van der Ham, Joris L

    2016-05-19

    Forensic entomologists can use carrion communities' ecological succession data to estimate the postmortem interval (PMI). Permutation tests of hierarchical cluster analyses of these data provide a conceptual method to estimate part of the PMI, the post-colonization interval (post-CI). This multivariate approach produces a baseline of statistically distinct clusters that reflect changes in the carrion community composition during the decomposition process. Carrion community samples of unknown post-CIs are compared with these baseline clusters to estimate the post-CI. In this short communication, I use data from previously published studies to demonstrate the conceptual feasibility of this multivariate approach. Analyses of these data produce series of significantly distinct clusters, which represent carrion communities during 1- to 20-day periods of the decomposition process. For 33 carrion community samples, collected over an 11-day period, this approach correctly estimated the post-CI within an average range of 3.1 days. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America.
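    A minimal version of the permutation-testing idea — asking whether two sets of community samples are compositionally distinct — can be sketched as follows. This is an illustrative analogue under simplifying assumptions (a centroid-distance statistic and label shuffling), not the author's hierarchical-clustering pipeline; all names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def community_permutation_test(group_a, group_b, n_perm=4999):
    """Permutation test for a compositional difference between two sets of
    carrion-community samples (rows = samples, columns = taxon abundances).
    The statistic is the distance between group centroids; group labels are
    shuffled to build the null distribution."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    pooled = np.vstack([a, b])
    n_a = len(a)

    def centroid_distance(x, y):
        return np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))

    observed = centroid_distance(a, b)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if centroid_distance(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)

# Early- vs late-decomposition communities with clearly different taxa:
early = [[10, 1, 0], [9, 2, 0], [10, 0, 1], [8, 1, 1]]
late = [[0, 2, 9], [1, 1, 10], [0, 3, 9], [1, 2, 8]]
stat, p_value = community_permutation_test(early, late)
```

    A significantly distinct pair of clusters in this sense is what forms one rung of the baseline against which a sample of unknown post-CI would be compared.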

  19. Opportunity Loss Minimization and Newsvendor Behavior

    Directory of Open Access Journals (Sweden)

    Xinsheng Xu

    2017-01-01

    To study the decision bias in newsvendor behavior, this paper introduces an opportunity loss minimization criterion into the newsvendor model with backordering. We apply the Conditional Value-at-Risk (CVaR) measure to hedge against the potential risks of the newsvendor's order decision. We obtain the optimal order quantities for a newsvendor minimizing the expected opportunity loss and the CVaR of opportunity loss. It is proven that the newsvendor's optimal order quantity is related to the density function of market demand when the newsvendor exhibits risk-averse preference, which is inconsistent with the results in Schweitzer and Cachon (2000). A numerical example shows that the order quantity that minimizes CVaR of opportunity loss is bigger than the expected-profit-maximization (EPM) order quantity for high-profit products and smaller than the EPM order quantity for low-profit products, which differs from the experimental results in Schweitzer and Cachon (2000). A sensitivity analysis of the two optimal order quantities with respect to the operational parameters is discussed. Our results confirm that high return implies high risk, while low risk comes with low return. Based on these results, some managerial insights are suggested for the risk management of the newsvendor model with backordering.
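    The CVaR-of-opportunity-loss criterion can be illustrated with a small simulation. The sketch below assumes a generic underage/overage opportunity-loss function and finds the CVaR-minimizing order by grid search; it is not the paper's analytical solution, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def cvar(losses, alpha=0.9):
    """Conditional Value-at-Risk: the mean of the worst (1 - alpha) share of losses."""
    ordered = np.sort(np.asarray(losses, dtype=float))
    tail_start = int(np.ceil(alpha * len(ordered)))
    return ordered[tail_start:].mean()

def cvar_optimal_order(demand, c_under, c_over, alpha=0.9):
    """Grid-search the order quantity that minimizes CVaR of opportunity loss,
    where opportunity loss combines lost sales (underage) and leftovers (overage)."""
    demand = np.asarray(demand, dtype=float)
    grid = np.arange(int(demand.min()), int(demand.max()) + 1)

    def opportunity_loss(q):
        return (c_under * np.maximum(demand - q, 0.0)
                + c_over * np.maximum(q - demand, 0.0))

    return min(grid, key=lambda q: cvar(opportunity_loss(q), alpha))

demand = rng.normal(100.0, 20.0, 10_000).clip(min=0.0)
q_high_profit = cvar_optimal_order(demand, c_under=5.0, c_over=1.0)  # high-profit product
q_low_profit = cvar_optimal_order(demand, c_under=1.0, c_over=5.0)   # low-profit product
```

    With symmetric demand, the CVaR-optimal order lands above the median demand in the high-profit case and below it in the low-profit case, matching the qualitative pattern described in the abstract.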

  20. Minimal unitary representation of D(2,1;λ) and its SU(2) deformations and d=1, N=4 superconformal models

    International Nuclear Information System (INIS)

    Govil, Karan; Gunaydin, Murat

    2013-01-01

    Quantization of the geometric quasiconformal realizations of noncompact groups and supergroups leads directly to their minimal unitary representations (minreps). Using quasiconformal methods, massless unitary supermultiplets of the superconformal groups SU(2,2|N) and OSp(8*|2n) in four and six dimensions were constructed as minreps, together with their U(1) and SU(2) deformations, respectively. In this paper we extend these results to SU(2) deformations of the minrep of the N=4 superconformal algebra D(2,1;λ) in one dimension. We find that SU(2) deformations can be achieved using n pairs of bosons and m pairs of fermions simultaneously. The generators of the deformed minimal representations of D(2,1;λ) commute with the generators of a dual superalgebra OSp(2n*|2m) realized in terms of these bosons and fermions. We show that there exists a precise mapping between the symmetry generators of N=4 superconformal models in harmonic superspace studied recently and the minimal unitary supermultiplets of D(2,1;λ) deformed by a pair of bosons. This can be understood as a particular case of a general mapping between the spectra of quantum mechanical quaternionic Kähler sigma models with eight supersymmetries and the minreps of their isometry groups, which descends from the precise mapping established between 4d, N=2 sigma models coupled to supergravity and the minreps of their isometry groups.

  1. MSSM (Minimal Supersymmetric Standard Model) Dark Matter Without Prejudice

    International Nuclear Information System (INIS)

    Gainer, James S.

    2009-01-01

    Recently we examined a large number of points in a 19-dimensional parameter subspace of the CP-conserving MSSM with Minimal Flavor Violation. We determined whether each of these points satisfied existing theoretical, experimental, and observational constraints. Here we discuss the properties of the parameter space points allowed by existing data that are relevant for dark matter searches.

  2. Cost-effectiveness of minimally invasive sacroiliac joint fusion.

    Science.gov (United States)

    Cher, Daniel J; Frasco, Melissa A; Arnold, Renée Jg; Polly, David W

    2016-01-01

    Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain, and minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. The objective was to determine the cost-effectiveness of minimally invasive SIJ fusion. Data from two prospective, multicenter clinical trials were used to inform a Markov-process cost-utility model evaluating cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective, and the model specifically incorporated the variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses, all scenarios resulted in a favorable incremental cost-effectiveness ratio (ICER) for minimally invasive SIJ fusion in patients with SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption.

  3. On the convergence of nonconvex minimization methods for image recovery.

    Science.gov (United States)

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have been proposed and developed to solve the resulting nonconvex nonsmooth minimization problem. The main contribution of this paper is to establish the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of the nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate the convergence analysis.
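    The flavor of such convergence results can be seen on a toy nonconvex problem where alternating minimization has closed-form block updates. The rank-1 factorization below is only an illustration of the scheme class, not the paper's image-restoration model: each alternating step solves one block exactly, so the objective is monotonically nonincreasing and the iterates settle at a critical point.

```python
import numpy as np

rng = np.random.default_rng(5)

def rank1_alternating(A, n_iter=50):
    """Alternating minimization for the nonconvex problem
    min_{u, v} ||A - u v^T||_F^2.  Each block update is a closed-form
    least-squares solve, so the objective never increases and the
    iterates approach a critical point."""
    m, n = A.shape
    v = rng.normal(size=n)
    history = []
    for _ in range(n_iter):
        u = A @ v / (v @ v)        # exact minimization over u with v fixed
        v = A.T @ u / (u @ u)      # exact minimization over v with u fixed
        history.append(float(np.linalg.norm(A - np.outer(u, v)) ** 2))
    return u, v, history

# A noisy rank-1 matrix: the scheme drives the residual near the noise floor.
A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0]) + rng.normal(0.0, 0.01, size=(3, 2))
u, v, history = rank1_alternating(A)
```

    Monotone descent alone does not rule out limit cycles in general; the Kurdyka-Łojasiewicz property invoked in the paper is exactly what upgrades descent to convergence of the whole iterate sequence.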

  4. On the Gauged Kahler Isometry in Minimal Supergravity Models of Inflation

    CERN Document Server

    Ferrara, Sergio; Sorin, Alexander S.

    2014-01-01

    In this paper we address the question of how to discriminate whether the gauged isometry group G_Sigma of the Kahler manifold Sigma that produces a D-type inflaton potential in a Minimal Supergravity Model is elliptic, hyperbolic or parabolic. We show that the classification of isometries of symmetric cosets can be extended to non-symmetric manifolds Sigma if these manifolds satisfy additional mathematical restrictions. The classification criteria established in the mathematical literature are coherent with simple criteria formulated in terms of the asymptotic behavior of the Kahler potential K(C) = 2 J(C), where the real scalar field C encodes the inflaton field. As a by-product of our analysis we show that phenomenologically admissible potentials for the description of inflation, and in particular alpha-attractors, are mostly obtained from the gauging of a parabolic isometry. The requirement of regularity of the manifold Sigma poses strong constraints on the alpha-attractors and reduces their space considerably. Curiously, there is a unique integrable alpha-attractor corresponding to a particular value of this parameter.

  5. Theories of minimalism in architecture: When prologue becomes palimpsest

    Directory of Open Access Journals (Sweden)

    Stevanović Vladimir

    2014-01-01

    This paper examines the modus and conditions of constituting and establishing the architectural discourse on minimalism. One of the key topics in this discourse is the historical line of development and the analysis of theoretical influences, which comprises connections of recent minimalism with theorizations of various minimal architectural and artistic forms and concepts from the past. The paper particularly discusses those theoretical relations which link minimalism in architecture with its artistic nominal counterpart, minimal art. These relations are founded on interpretative models of self-referentiality, phenomenological experience and contextualism, which are, superficially observed, common to both the artistic and the architectural minimalist discourses. In this constellation, certain relations on the historical line of minimalism in architecture appear questionable, while others are overlooked. Postmodern fundamentalism is precisely the architectural direction: (1) in which these three interpretations also existed; (2) from which architectural theorists retroactively appropriated many architects, proclaiming them minimalists; and (3) which established the same relations with modern and postmodern theoretical and socio-historical contexts that minimalism would later establish. In spite of this, the theoretical field of postmodern fundamentalism is surprisingly neglected in the discourse of minimalism in architecture. Instead of being understood as a kind of prologue to minimalism in architecture, postmodern fundamentalism becomes an erased palimpsest over which a different history of minimalism is rewritten, a history in which minimal art occupies the central place.

  6. On 3D Minimal Massive Gravity

    CERN Document Server

    Alishahiha, Mohsen; Naseh, Ali; Shirzad, Ahmad

    2014-12-03

    We study the linearized equations of motion of the newly proposed three-dimensional gravity known as minimal massive gravity, using its metric formulation. We observe that the resulting linearized equations are exactly the same as those of topologically massive gravity (TMG) after a redefinition of the parameters of the model. In particular, the model admits logarithmic modes at the critical points. We also study several vacuum solutions of the model, especially in a certain limit where the contribution of the Chern-Simons term vanishes.

  7. Application of a minimal glacier model to Hansbreen, Svalbard

    Directory of Open Access Journals (Sweden)

    J. Oerlemans

    2011-01-01

    Hansbreen is a well-studied tidewater glacier in the southwestern part of Svalbard, currently about 16 km long. Since the end of the 19th century it has retreated over a distance of 2.7 km. In this paper the global dynamics of Hansbreen are studied with a minimal glacier model, in which the ice mechanics are strongly parameterised and a simple law for iceberg calving is used. The model is calibrated by reconstructing a climate history such that observed and simulated glacier length match. In addition, the calving law is tuned to reproduce the observed mean calving flux for the period 2000–2008.

    Equilibrium states are studied for a wide range of values of the equilibrium line altitude. The dynamics of the glacier are strongly nonlinear. The height-mass balance feedback and the water depth-calving flux feedback give rise to cusp catastrophes in the system.

    For the present climatic conditions Hansbreen cannot survive. Depending on the imposed climate change scenario, in AD 2100 Hansbreen is predicted to have a length between 10 and 12 km. The corresponding decrease in ice volume (relative to the volume in AD 2000) is 45 to 65%.

    Finally the late-Holocene history of Hansbreen is considered. We quote evidence from dated peat samples that Hansbreen did not exist during the Holocene Climatic Optimum. We speculate that at the end of the mid-Holocene Climatic Optimum Hansbreen could advance because the glacier bed was at least 50 m higher than today, and because the tributary glaciers on the western side may have supplied a significant amount of mass to the main stream. The excavation of the overdeepening and the formation of the shoal at the glacier terminus probably took place during the Little Ice Age.
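    The global dynamics described above can be caricatured with a few lines of explicit time stepping. The sketch below is a generic minimal tidewater-glacier model in the spirit of such models, not the calibrated Hansbreen model: it assumes volume-length scaling V = alpha * L^1.5, a linear bed of slope s, a surface mass balance proportional to the height difference to the equilibrium line altitude E, and a calving loss proportional to water depth at the terminus. All parameter values are illustrative.

```python
import numpy as np

def step_length(L, E, dt=1.0, alpha=3.0, beta=0.005, b0=200.0, s=0.02, c=1.0):
    """One explicit-Euler step for glacier length L (m) at equilibrium line
    altitude E (m).  Volume scales as V = alpha * L**1.5, so dL/dt follows
    from dV/dt by the chain rule.  Calving switches on once the bed at the
    terminus drops below sea level."""
    H = 0.5 * alpha * np.sqrt(L)          # mean ice thickness from scaling
    h_mean = b0 - 0.5 * s * L + H         # mean surface elevation (linear bed)
    balance = beta * (h_mean - E) * L     # net surface mass balance (m^2/yr)
    depth = max(0.0, s * L - b0)          # water depth at the terminus
    calving = -c * depth * H              # calving flux (m^2/yr)
    dL = (balance + calving) / (1.5 * alpha * np.sqrt(L))
    return max(L + dt * dL, 1.0)

def run(L0, E, n_years=2000):
    L = L0
    for _ in range(n_years):
        L = step_length(L, E)
    return L

L_cold = run(15_000.0, E=200.0)   # low ELA: the glacier finds a long equilibrium
L_warm = run(15_000.0, E=500.0)   # high ELA: the glacier collapses
```

    Even this caricature reproduces the strongly nonlinear behavior the paper stresses: the height-mass balance and water depth-calving feedbacks make the equilibrium length jump between branches as E is varied.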

  8. Forest Disturbance Mapping Using Dense Synthetic Landsat/MODIS Time-Series and Permutation-Based Disturbance Index Detection

    Directory of Open Access Journals (Sweden)

    David Frantz

    2016-03-01

    Spatio-temporal information on process-based forest loss is essential for a wide range of applications. Although remote sensing is the only feasible means of monitoring forest change at regional or greater scales, no retrospectively available remote sensor meets the demand of monitoring forests with the required spatial detail and guaranteed high temporal frequency. As an alternative, we employed the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to produce a dense synthetic time series by fusing Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) nadir Bidirectional Reflectance Distribution Function (BRDF)-adjusted reflectance. Forest loss was detected by applying a multi-temporal disturbance detection approach implementing a Disturbance Index-based detection strategy. The detection thresholds were permuted with random numbers drawn from the normal distribution in order to generate a multi-dimensional threshold confidence area. As a result, a more robust parameterization and a spatially more coherent detection could be achieved. (i) The original Landsat time series; (ii) the synthetic time series; and (iii) a combined hybrid approach were used to identify the timing and extent of disturbances. The clearings identified in the Landsat detection were verified using an annual woodland clearing dataset from Queensland's Statewide Landcover and Trees Study. Disturbances caused by stand-replacing events were successfully identified. The increased temporal resolution of the synthetic time series provided promising additional information on disturbance timing. The results of the hybrid detection unified the benefits of both approaches, i.e., the spatial quality and general accuracy of the Landsat detection and the increased temporal information of the synthetic time series. The results indicated that a temporal improvement in the detection of the disturbance date could be achieved relative to the irregularly spaced Landsat time series.
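    The threshold-permutation step can be illustrated in isolation: instead of a single hard Disturbance Index threshold, normally distributed offsets generate an ensemble of thresholds, and the fraction of thresholds exceeded serves as a per-observation detection confidence. This is a schematic reconstruction, not the authors' implementation; the function name and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def disturbance_confidence(di_series, base_threshold=2.0, sigma=0.25, n_perm=500):
    """Permute a Disturbance-Index threshold with normally distributed
    offsets and return, per observation, the fraction of permuted
    thresholds exceeded -- a soft confidence rather than a hard flag."""
    di = np.asarray(di_series, dtype=float)
    thresholds = base_threshold + rng.normal(0.0, sigma, n_perm)
    # Shape (n_perm, n_obs): compare every observation to every threshold.
    exceed = di[None, :] > thresholds[:, None]
    return exceed.mean(axis=0)

# Observations far below, near, and far above the nominal threshold of 2.0:
confidence = disturbance_confidence([0.1, 0.5, 1.9, 2.1, 4.0])
```

    Values near the nominal threshold receive intermediate confidence instead of flipping between detected and undetected, which is one way to obtain the more robust, spatially coherent detections described above.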

  9. B_s^0–B̄_s^0 mixing within minimal flavor-violating two-Higgs-doublet models

    International Nuclear Information System (INIS)

    Chang, Qin; Li, Pei-Fu; Li, Xin-Qiang

    2015-01-01

    In the “Higgs basis” for a generic 2HDM, only one scalar doublet gets a nonzero vacuum expectation value and, under the criterion of minimal flavor violation, the other one is fixed to be either color-singlet or color-octet; these are named the type-III and type-C models, respectively. In this paper, the charged-Higgs effects of these two models on B_s^0–B̄_s^0 mixing are studied. First, we perform a complete one-loop computation of the electroweak corrections to the amplitudes of B_s^0–B̄_s^0 mixing. Together with the up-to-date experimental measurements, a detailed phenomenological analysis is then performed for both real and complex Yukawa couplings of the charged scalars to quarks. The regions of model parameters allowed by the current experimental data on B_s^0–B̄_s^0 mixing are obtained, and the differences between the type-III and type-C models are investigated, which is helpful for distinguishing between these two models.

  10. Minimal variance hedging of natural gas derivatives in exponential Lévy models: Theory and empirical performance

    International Nuclear Information System (INIS)

    Ewald, Christian-Oliver; Nawar, Roy; Siu, Tak Kuen

    2013-01-01

    We consider the problem of hedging European options written on natural gas futures, in a market where prices of traded assets exhibit jumps, by trading in the underlying asset. We provide a general expression for the hedging strategy which minimizes the variance of the terminal hedging error, in terms of stochastic integral representations of the payoffs of the options involved. This formula is then applied to compute hedge ratios for common options in various models with jumps, leading to easily computable expressions. As benchmarks we take the standard Black–Scholes and Merton delta hedges. We show that in natural gas option markets minimal variance hedging with the underlying consistently outperforms the benchmarks by quite a margin. - Highlights: We derive hedging strategies for European-type options written on natural gas futures. These are tested empirically using Henry Hub natural gas futures and options data. We find that our hedges systematically outperform classical benchmarks.
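    For a single hedging instrument, the variance-minimizing hedge ratio reduces to the regression slope of option price changes on futures price changes, h* = Cov(dC, dF) / Var(dF). The sketch below estimates it empirically on synthetic jumpy data; it is a toy illustration of that principle, not the paper's stochastic-integral formula, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def min_variance_hedge_ratio(d_underlying, d_option):
    """Empirical minimal-variance hedge ratio: the OLS slope of option
    price changes on underlying price changes, h* = Cov(dC, dF) / Var(dF)."""
    dF = np.asarray(d_underlying, dtype=float)
    dC = np.asarray(d_option, dtype=float)
    return np.cov(dF, dC, ddof=1)[0, 1] / np.var(dF, ddof=1)

# Synthetic jumpy futures price changes and an option responding with slope ~0.6:
dF = rng.normal(0, 1, 5000) + rng.poisson(0.05, 5000) * rng.normal(0, 3, 5000)
dC = 0.6 * dF + rng.normal(0, 0.2, 5000)
h = min_variance_hedge_ratio(dF, dC)
```

    Holding h units of the underlying per option removes the linearly hedgeable part of the option's risk; in a jump model the residual variance is nonzero, which is why the paper compares this minimal-variance ratio against delta-hedge benchmarks.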

  11. Design and Modelling of Sustainable Bioethanol Supply Chain by Minimizing the Total Ecological Footprint in Life Cycle Perspective

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Manzardo, Alessandro; Toniolo, Sara

    2013-01-01

    The purpose of this paper is to develop a model for designing the most sustainable bioethanol supply chain. Taking into consideration the possibility of multiple feedstocks, multiple transportation modes, multiple alternative technologies, multiple transport patterns and multiple waste disposal manners in bioethanol systems, this study developed a model for designing the most sustainable bioethanol supply chain by minimizing the total ecological footprint under prerequisite constraints, including satisfying the goal of the stakeholders, the limitation of resources and energy, and the capacity…

  12. Graphical Gaussian models with edge and vertex symmetries

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Lauritzen, Steffen L

    2008-01-01

    We introduce new types of graphical Gaussian models by placing symmetry restrictions on the concentration or correlation matrix. The models can be represented by coloured graphs, where parameters that are associated with edges or vertices of the same colour are restricted to being identical. We study the properties of such models and derive the necessary algorithms for calculating maximum likelihood estimates. We identify conditions under which restrictions on the concentration and correlation matrices are equivalent. This is for example the case when symmetries are generated by permutation…

  13. AGT, Burge pairs and minimal models

    Energy Technology Data Exchange (ETDEWEB)

    Bershtein, M. [Landau Institute for Theoretical Physics,Chernogolovka (Russian Federation); Institute for Information Transmission Problems,Moscow (Russian Federation); National Research University Higher School of Economics, International Laboratory of Representation Theory and Mathematical Physics, Independent University of Moscow, Moscow (Russian Federation); Foda, O. [Mathematics and Statistics, University of Melbourne,Parkville, VIC 3010 (Australia)

    2014-06-30

    We consider the AGT correspondence in the context of the conformal field theory M^{p,p′} ⊗ M^H, where M^{p,p′} is the minimal model based on the Virasoro algebra V^{p,p′} labeled by two co-prime integers p and p′, 1 < p < p′, …

  14. Minimal Regge model for meson--baryon scattering: duality, SU(3) and phase-modified absorptive cuts

    International Nuclear Information System (INIS)

    Egli, S.E.

    1975-10-01

    A model is presented which incorporates economically all of the modifications to simple SU(3)-symmetric dual Regge pole theory which are required by existing data on 0^- 1/2^+ → 0^- 1/2^+ processes. The basic assumptions are no-exotics duality, minimally broken SU(3) symmetry, and absorptive Regge cuts phase-modified by the Ringland prescription. First it is described qualitatively how these assumptions suffice for the description of all measured reactions, and then the results of a detailed fit to 1987 data points are presented for 18 different reactions. (auth)

  15. Casimir effect at finite temperature for pure-photon sector of the minimal Standard Model Extension

    Energy Technology Data Exchange (ETDEWEB)

    Santos, A.F., E-mail: alesandroferreira@fisica.ufmt.br [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900, Cuiabá, Mato Grosso (Brazil); Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road Victoria, BC (Canada); Khanna, Faqir C., E-mail: khannaf@uvic.ca [Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road Victoria, BC (Canada)

    2016-12-15

    Dynamics between particles is governed by Lorentz and CPT symmetry. There is a violation of Parity (P) and CP symmetry at low levels. The unified theory, that includes particle physics and quantum gravity, may be expected to be covariant with Lorentz and CPT symmetry. At high enough energies, will the unified theory display violation of any symmetry? The Standard Model Extension (SME), with Lorentz and CPT violating terms, has been suggested to include particle dynamics. The minimal SME in the pure photon sector is considered in order to calculate the Casimir effect at finite temperature.

  16. Minimally invasive orthognathic surgery.

    Science.gov (United States)

    Resnick, Cory M; Kaban, Leonard B; Troulis, Maria J

    2009-02-01

    Minimally invasive surgery is defined as the discipline in which operative procedures are performed in novel ways to diminish the sequelae of standard surgical dissections. The goals of minimally invasive surgery are to reduce tissue trauma and to minimize bleeding, edema, and injury, thereby improving the rate and quality of healing. In orthognathic surgery, there are two minimally invasive techniques that can be used separately or in combination: (1) endoscopic exposure and (2) distraction osteogenesis. This article describes the historical developments of the fields of orthognathic surgery and minimally invasive surgery, as well as the integration of the two disciplines. Indications, techniques, and the most current outcome data for specific minimally invasive orthognathic surgical procedures are presented.

  17. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t…

  18. On the gauged Kaehler isometry in minimal supergravity models of inflation

    International Nuclear Information System (INIS)

    Ferrara, S.; Fre, P.; Sorin, A.S.

    2014-01-01

    In this paper we address the question of how to discriminate whether the gauged isometry group G Σ of the Kaehler manifold Σ that produces a D-type inflaton potential in a Minimal Supergravity Model is elliptic, hyperbolic or parabolic. We show that the classification of isometries of symmetric cosets can be extended to non-symmetric manifolds Σ if these manifolds satisfy additional mathematical restrictions. The classification criteria established in the mathematical literature are coherent with simple criteria formulated in terms of the asymptotic behavior of the Kaehler potential K(C) = 2 J(C), where the real scalar field C encodes the inflaton field. As a by-product of our analysis we show that phenomenologically admissible potentials for the description of inflation, and in particular α-attractors, are mostly obtained from the gauging of a parabolic isometry, this being, in particular, the case of the Starobinsky model. Yet at least one exception exists, an elliptic α-attractor, so that neither type of isometry can be excluded a priori. The requirement of regularity of the manifold Σ instead poses strong constraints on the α-attractors and reduces their space considerably. Curiously, there is a unique integrable α-attractor corresponding to a particular value of this parameter. (Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

  19. Minimally invasive prediction of ScvO2 in high-risk surgery: the introduction of a model Index of Oxygenation

    NARCIS (Netherlands)

    de Grooth, Harm-Jan S.; Vos, Jaap Jan; Scheeren, Thomas; van Beest, Paul

    2014-01-01

    INTRODUCTION: The purpose of this study was to examine the trilateral relationship between cardiac index (CI), tissue oxygen saturation (StO2) and central venous oxygen saturation (ScvO2), and subsequently to develop a model to predict ScvO2 in a minimally invasive manner in patients undergoing major surgery.

  20. Non-unitary neutrino mixing and CP violation in the minimal inverse seesaw model

    International Nuclear Information System (INIS)

    Malinsky, Michal; Ohlsson, Tommy; Xing, Zhi-zhong; Zhang He

    2009-01-01

    We propose a simplified version of the inverse seesaw model, in which only two pairs of the gauge-singlet neutrinos are introduced, to interpret the observed neutrino mass hierarchy and lepton flavor mixing at or below the TeV scale. This 'minimal' inverse seesaw scenario (MISS) is technically natural and experimentally testable. In particular, we show that the effective parameters describing the non-unitary neutrino mixing matrix are strongly correlated in the MISS, and thus, their upper bounds can be constrained by current experimental data in a more restrictive way. The Jarlskog invariants of non-unitary CP violation are calculated, and the discovery potential of such new CP-violating effects in the near detector of a neutrino factory is discussed.