WorldWideScience

Sample records for model minimal permutations

  1. Permutations

    International Nuclear Information System (INIS)

    Arnold, Vladimir I

    2009-01-01

    Decompositions into cycles for random permutations of a large number of elements are very different (in their statistics) from the same decompositions for algebraic permutations (defined by linear or projective transformations of finite sets). This paper presents tables giving both these and other statistics, as well as a comparison of them with the statistics of involutions or permutations with all their cycles of even length. The inclusions of a point in cycles of various lengths turn out to be equiprobable events for random permutations. The number of permutations of 2N elements with all cycles of even length turns out to be the square of an integer (namely, of (2N-1)!!). The number of cycles of projective permutations (over a field with an odd prime number of elements) is always even. These and other empirically discovered theorems are proved in the paper. Bibliography: 6 titles.
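
One claim above is easy to check by brute force: the number of permutations of 2N elements all of whose cycles have even length equals ((2N−1)!!)². The following sketch (plain Python, enumerating the full symmetric group, so only feasible for small N; an illustrative check, not the paper's method) verifies this for 2N = 2, 4, 6.

```python
from itertools import permutations
from math import prod

def cycle_lengths(p):
    """Cycle lengths of a permutation given as a tuple p of 0..n-1."""
    seen, lengths = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        lengths.append(length)
    return lengths

def double_factorial_odd(n):
    """n!! for odd n, i.e. 1 * 3 * 5 * ... * n."""
    return prod(range(1, n + 1, 2))

for N in range(1, 4):          # 2N = 2, 4, 6 elements
    n = 2 * N
    count = sum(
        all(length % 2 == 0 for length in cycle_lengths(p))
        for p in permutations(range(n))
    )
    assert count == double_factorial_odd(2 * N - 1) ** 2
    print(f"2N={n}: {count} permutations with all cycles even = ((2N-1)!!)^2")
```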

  2. A novel particle swarm optimization algorithm for permutation flow-shop scheduling to minimize makespan

    International Nuclear Information System (INIS)

    Lian Zhigang; Gu Xingsheng; Jiao Bin

    2008-01-01

It is well known that the flow-shop scheduling problem (FSSP) is a branch of production scheduling and is NP-hard. Many different approaches have been applied to permutation flow-shop scheduling to minimize makespan, but even moderate-size problems cannot be solved to guaranteed optimality by current algorithms. Several studies report PSO for continuous optimization problems, but papers applying PSO to discrete scheduling problems are few. In this paper, according to the discrete characteristics of the FSSP, a novel particle swarm optimization (NPSO) algorithm is presented and successfully applied to permutation flow-shop scheduling to minimize makespan. Computational experiments on seven representative instances (Taillard) based on practical data were carried out; comparing the NPSO with a standard GA shows that the NPSO is clearly more effective than the standard GA for the FSSP with makespan minimization.
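
For readers unfamiliar with the objective, the makespan of a permutation schedule follows a simple completion-time recursion: C(i, m) = max(C(i−1, m), C(i, m−1)) + p(job_i, m). The sketch below (plain Python with a made-up instance, not one of the Taillard instances, and not the NPSO algorithm itself) evaluates that objective for a given permutation.

```python
def makespan(order, proc):
    """Makespan of a permutation flow shop schedule.

    order : sequence of job indices (the permutation)
    proc  : proc[j][m] = processing time of job j on machine m
    """
    n_machines = len(proc[0])
    # completion[m] = completion time of the previously scheduled job on machine m
    completion = [0] * n_machines
    for j in order:
        for m in range(n_machines):
            ready = completion[m - 1] if m > 0 else 0  # the job must first finish on machine m-1
            completion[m] = max(completion[m], ready) + proc[j][m]
    return completion[-1]

# Hypothetical 4-job, 3-machine instance (not from the Taillard benchmark).
proc = [[3, 6, 2],
        [5, 1, 4],
        [2, 4, 6],
        [4, 3, 3]]
print(makespan([0, 1, 2, 3], proc))
print(makespan([2, 0, 3, 1], proc))
```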

  3. A permutation test for the race model inequality

    DEFF Research Database (Denmark)

    Gondan, Matthias

    2010-01-01

    signals. Several statistical procedures have been used for testing the race model inequality. However, the commonly employed procedure does not control the Type I error. In this article a permutation test is described that keeps the Type I error at the desired level. Simulations show that the power...

  4. A discrete firefly meta-heuristic with local search for makespan minimization in permutation flow shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Nader Ghaffari-Nasab

    2010-07-01

During the past two decades, there has been increasing interest in permutation flow shop problems with different types of objective functions, such as minimizing the makespan, the weighted mean flow time, etc. The permutation flow shop is formulated as a mixed integer program and is classified as an NP-hard problem. Therefore, a direct solution is not available and meta-heuristic approaches need to be used to find near-optimal solutions. In this paper, we present a new discrete firefly meta-heuristic to minimize the makespan for the permutation flow shop scheduling problem. The results of the proposed method are compared with an existing ant colony optimization technique. The preliminary results indicate that the new method performs better than the ant colony approach for some well-known benchmark problems.

  5. Tensor models, Kronecker coefficients and permutation centralizer algebras

    Science.gov (United States)

    Geloun, Joseph Ben; Ramgoolam, Sanjaye

    2017-11-01

We show that the counting of observables and correlators for a 3-index tensor model is organized by the structure of a family of permutation centralizer algebras. These algebras are shown to be semi-simple and their Wedderburn-Artin decompositions into matrix blocks are given in terms of Clebsch-Gordan coefficients of symmetric groups. The matrix basis for the algebras also gives an orthogonal basis for the tensor observables which diagonalizes the Gaussian two-point functions. The centres of the algebras are associated with correlators which are expressible in terms of Kronecker coefficients (Clebsch-Gordan multiplicities of symmetric groups). The color-exchange symmetry present in the Gaussian model, as well as a large class of interacting models, is used to refine the description of the permutation centralizer algebras. This discussion is extended to a general number of colors d: it is used to prove the integrality of an infinite family of number sequences related to color-symmetrizations of colored graphs, and expressible in terms of symmetric group representation theory data. Generalizing a connection between matrix models and Belyi maps, correlators in Gaussian tensor models are interpreted in terms of covers of singular 2-complexes. There is an intriguing difference, between matrix and higher rank tensor models, in the computational complexity of superficially comparable correlators of observables parametrized by Young diagrams.

  6. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
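
As a rough illustration of the permutation-binding idea (a toy with random vectors, not the corpora or models compared in the article), the sketch below stores one ordered pair in a single trace by applying two fixed random permutations and probes it by cosine similarity; in high dimensions the stored items score far above an unrelated foil.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                                  # dimensionality of the semantic vectors
perm_role_a = rng.permutation(d)          # fixed permutation marking the "first" role
perm_role_b = rng.permutation(d)          # fixed permutation marking the "second" role

def rand_vec():
    return rng.normal(0, 1 / np.sqrt(d), d)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

a, b, foil = rand_vec(), rand_vec(), rand_vec()

# Encode the ordered pair (a, b) in one trace by permuting each filler into its role.
trace = a[perm_role_a] + b[perm_role_b]

# Probing: the permuted version of a stored item matches the trace much better than a foil.
print("a in first slot :", cosine(a[perm_role_a], trace))    # high (~0.7)
print("b in second slot:", cosine(b[perm_role_b], trace))    # high (~0.7)
print("unrelated foil  :", cosine(foil[perm_role_a], trace)) # near 0
```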

  7. A discriminative syntactic model for source permutation via tree transduction

    NARCIS (Netherlands)

    Khalilov, M.; Sima'an, K.; Wu, D.

    2010-01-01

    A major challenge in statistical machine translation is mitigating the word order differences between source and target strings. While reordering and lexical translation choices are often conducted in tandem, source string permutation prior to translation is attractive for studying reordering using

  8. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO-penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
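
A simplified reading of the idea can be sketched as follows (synthetic data and scikit-learn conventions; this is not the authors' exact algorithm or tuning): permute the response several times, record for each permutation the smallest penalty at which the LASSO selects no variables, and use a summary of those null penalties as the penalty for the real data.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                    # three true signals, the rest are noise
y = X @ beta + rng.normal(size=n)

Xc = X - X.mean(axis=0)                        # center once; Lasso handles the intercept

def lambda_null(Xc, y):
    """Smallest penalty (in sklearn's scaling) at which the LASSO solution is all zeros."""
    yc = y - y.mean()
    return np.max(np.abs(Xc.T @ yc)) / len(y)

# Permutation selection sketch: destroy the X-y association by permuting y,
# then summarize the resulting null penalties (median used here).
null_lambdas = [lambda_null(Xc, rng.permutation(y)) for _ in range(100)]
lam = float(np.median(null_lambdas))

fit = Lasso(alpha=lam).fit(Xc, y)
print("chosen penalty:", round(lam, 4))
print("selected vars :", np.flatnonzero(fit.coef_ != 0))
```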

  9. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  10. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  11. Minimal model holography

    International Nuclear Information System (INIS)

    Gaberdiel, Matthias R; Gopakumar, Rajesh

    2013-01-01

We review the duality relating 2D W_N minimal model conformal field theories, in a large-N ’t Hooft-like limit, to higher spin gravitational theories on AdS_3. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Higher spin theories and holography’. (review)

  12. A non-permutation flowshop scheduling problem with lot streaming: A Mathematical model

    Directory of Open Access Journals (Sweden)

    Daniel Rossit

    2016-06-01

In this paper we investigate the use of lot streaming in non-permutation flowshop scheduling problems. The objective is to minimize the makespan subject to the standard flowshop constraints, but where it is now permitted to reorder jobs between machines. In addition, the jobs can be divided into manageable sublots, a strategy known as lot streaming. Computational experiments show that lot streaming reduces the makespan by up to 43% for a wide range of instances when compared to the case in which no job splitting is applied. The benefits grow as the number of stages in the production process increases, but reach a limit. Beyond a certain point, the division of jobs into additional sublots does not improve the solution.

  13. Minimal conformal model

    Energy Technology Data Exchange (ETDEWEB)

    Helmboldt, Alexander; Humbert, Pascal; Lindner, Manfred; Smirnov, Juri [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2016-07-01

    The gauge hierarchy problem is one of the crucial drawbacks of the standard model of particle physics (SM) and thus has triggered model building over the last decades. Its most famous solution is the introduction of low-scale supersymmetry. However, without any significant signs of supersymmetric particles at the LHC to date, it makes sense to devise alternative mechanisms to remedy the hierarchy problem. One such mechanism is based on classically scale-invariant extensions of the SM, in which both the electroweak symmetry and the (anomalous) scale symmetry are broken radiatively via the Coleman-Weinberg mechanism. Apart from giving an introduction to classically scale-invariant models, the talk presents our results on obtaining a theoretically consistent minimal extension of the SM, which reproduces the correct low-scale phenomenology.

  14. Linear algebra of the permutation invariant Crow-Kimura model of prebiotic evolution.

    Science.gov (United States)

    Bratus, Alexander S; Novozhilov, Artem S; Semenov, Yuri S

    2014-10-01

    A particular case of the famous quasispecies model - the Crow-Kimura model with a permutation invariant fitness landscape - is investigated. Using the fact that the mutation matrix in the case of a permutation invariant fitness landscape has a special tridiagonal form, a change of the basis is suggested such that in the new coordinates a number of analytical results can be obtained. In particular, using the eigenvectors of the mutation matrix as the new basis, we show that the quasispecies distribution approaches a binomial one and give simple estimates for the speed of convergence. Another consequence of the suggested approach is a parametric solution to the system of equations determining the quasispecies. Using this parametric solution we show that our approach leads to exact asymptotic results in some cases, which are not covered by the existing methods. In particular, we are able to present not only the limit behavior of the leading eigenvalue (mean population fitness), but also the exact formulas for the limit quasispecies eigenvector for special cases. For instance, this eigenvector has a geometric distribution in the case of the classical single peaked fitness landscape. On the biological side, we propose a mathematical definition, based on the closeness of the quasispecies to the binomial distribution, which can be used as an operational definition of the notorious error threshold. Using this definition, we suggest two approximate formulas to estimate the critical mutation rate after which the quasispecies delocalization occurs. Copyright © 2014 Elsevier Inc. All rights reserved.
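
To make the tridiagonal structure concrete, here is a small numerical sketch (the standard Hamming-class reduction with made-up parameters, not code from the paper): the mean population fitness is the leading eigenvalue of diag(fitness) plus a tridiagonal mutation matrix, and the corresponding eigenvector is the quasispecies distribution, shown here for a single-peaked landscape.

```python
import numpy as np

def crow_kimura_matrix(fitness, mu):
    """diag(f) + mu * M for the Hamming-class (permutation-invariant) reduction.

    fitness : array of length N+1, fitness of a class-k sequence
    mu      : per-site mutation rate
    """
    N = len(fitness) - 1
    A = np.diag(np.asarray(fitness, dtype=float))
    for k in range(N + 1):
        A[k, k] -= mu * N                      # total mutation outflow from class k
        if k > 0:
            A[k, k - 1] += mu * (N - k + 1)    # class k-1 -> k (flip one of N-k+1 matching sites)
        if k < N:
            A[k, k + 1] += mu * (k + 1)        # class k+1 -> k (flip one of k+1 mismatched sites)
    return A

N, mu = 50, 0.05
fitness = np.zeros(N + 1)
fitness[0] = 2.0                               # single-peaked landscape: only the master class is fit
A = crow_kimura_matrix(fitness, mu)

eigvals, eigvecs = np.linalg.eig(A)
lead = np.argmax(eigvals.real)
quasispecies = np.abs(eigvecs[:, lead].real)
quasispecies /= quasispecies.sum()

print("mean population fitness (leading eigenvalue):", round(float(eigvals.real[lead]), 4))
print("quasispecies mass on classes 0-4:", np.round(quasispecies[:5], 3))
```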

  15. Minimal dilaton model

    Directory of Open Access Journals (Sweden)

    Oda Kin-ya

    2013-05-01

Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for the logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model, in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that slightly mixes with an SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavor such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.

  16. An Enhanced Discrete Artificial Bee Colony Algorithm to Minimize the Total Flow Time in Permutation Flow Shop Scheduling with Limited Buffers

    Directory of Open Access Journals (Sweden)

    Guanlong Deng

    2016-01-01

This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with limited buffer capacity. First, solutions are represented as discrete job permutations that convert directly to active schedules. Then, we present a simple and effective scheme called best insertion for the employed bees and onlooker bees and introduce a combined local search exploring both the insertion and swap neighborhoods. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances; the computations and comparisons show that the proposed algorithm not only solves the benchmark set better than the existing discrete differential evolution algorithm and iterated greedy algorithm, but also performs better than two recently proposed discrete artificial bee colony algorithms.
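
As an illustration of the kind of move such algorithms rely on (a generic sketch that ignores the limited-buffer constraint and is not the authors' exact operator): remove one job from the permutation and reinsert it at the position that minimizes total flow time.

```python
def total_flow_time(order, proc):
    """Sum of job completion times on the last machine (unlimited buffers assumed)."""
    n_machines = len(proc[0])
    completion = [0] * n_machines
    total = 0
    for j in order:
        for m in range(n_machines):
            ready = completion[m - 1] if m > 0 else 0
            completion[m] = max(completion[m], ready) + proc[j][m]
        total += completion[-1]
    return total

def best_insertion(order, job, proc):
    """Reinsert `job` at the position giving the smallest total flow time."""
    base = [j for j in order if j != job]
    candidates = [base[:i] + [job] + base[i:] for i in range(len(base) + 1)]
    return min(candidates, key=lambda seq: total_flow_time(seq, proc))

# Hypothetical 4-job, 3-machine instance (not a Taillard benchmark instance).
proc = [[6, 2, 5], [1, 4, 3], [4, 6, 2], [3, 3, 4]]
order = [0, 1, 2, 3]
improved = best_insertion(order, 3, proc)
print(total_flow_time(order, proc), "->", total_flow_time(improved, proc), improved)
```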

  17. Permutation groups

    CERN Document Server

    Passman, Donald S

    2012-01-01

    This volume by a prominent authority on permutation groups consists of lecture notes that provide a self-contained account of distinct classification theorems. A ready source of frequently quoted but usually inaccessible theorems, it is ideally suited for professional group theorists as well as students with a solid background in modern algebra.The three-part treatment begins with an introductory chapter and advances to an economical development of the tools of basic group theory, including group extensions, transfer theorems, and group representations and characters. The final chapter feature

  18. Permutation orbifolds

    International Nuclear Information System (INIS)

    Bantay, P.

    2002-01-01

A general theory of permutation orbifolds is developed for arbitrary twist groups. Explicit expressions for the number of primaries, the partition function, the genus one characters, the matrix elements of modular transformations and for fusion rule coefficients are presented, together with the relevant mathematical concepts, such as Λ-matrices and twisted dimensions. The arithmetic restrictions implied by the theory for the allowed modular representations in CFT are discussed. The simplest nonabelian example with twist group S_3 is described to illustrate the general theory

  19. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    Science.gov (United States)

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
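
For readers unfamiliar with the resampling logic, the generic sketch below (invented model probabilities and counts, not the authors' code or data) computes a Monte Carlo p-value for an "average absolute deviation" style statistic by repeatedly simulating data from the theoretical model and comparing the simulated statistics with the observed one.

```python
import numpy as np

rng = np.random.default_rng(2)

def avg_abs_deviation(counts, probs):
    """Average absolute deviation between observed and model-predicted proportions."""
    observed = counts / counts.sum()
    return float(np.mean(np.abs(observed - probs)))

# Hypothetical theoretical model over 4 response categories and observed counts.
model_probs = np.array([0.40, 0.30, 0.20, 0.10])
observed_counts = np.array([52, 24, 14, 10])
n = observed_counts.sum()

obs_stat = avg_abs_deviation(observed_counts, model_probs)

# Monte Carlo null distribution: simulate datasets of the same size from the model.
n_sim = 10000
sim_stats = np.array([
    avg_abs_deviation(rng.multinomial(n, model_probs), model_probs)
    for _ in range(n_sim)
])
p_value = (1 + np.sum(sim_stats >= obs_stat)) / (1 + n_sim)
print(f"AAD = {obs_stat:.4f}, Monte Carlo p = {p_value:.3f}")
```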

  20. EPC: A Provably Secure Permutation Based Compression Function

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid

    2010-01-01

    The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of compression function is larger than the input-length of the permutation function. In this paper, we consider permutation based compression functions that have input lengths sh...

  1. Minimal models of multidimensional computations.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Fitzgerald

    2011-03-01

The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
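
As a rough illustration of what a "second order model based on the logistic function" looks like in practice (synthetic stimuli and responses, not the retinal or thalamic data analyzed in the paper), the sketch below fits a logistic model whose argument contains linear and quadratic terms in two reduced stimulus dimensions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(6)

# Two "relevant" stimulus dimensions and a synthetic second-order ground truth.
s = rng.normal(size=(5000, 2))
logit = -1.0 + 1.5 * s[:, 0] - 0.8 * s[:, 1] + 1.2 * s[:, 0] * s[:, 1] - 0.5 * s[:, 1] ** 2
spikes = rng.random(5000) < 1 / (1 + np.exp(-logit))

# Second-order logistic model: linear + quadratic features of the stimulus.
# Feature order produced below: [s0, s1, s0^2, s0*s1, s1^2]
features = PolynomialFeatures(degree=2, include_bias=False).fit_transform(s)
model = LogisticRegression(max_iter=1000).fit(features, spikes)

print("fitted weights:", np.round(model.coef_[0], 2))   # should approximate the ground truth above
print("fitted bias   :", round(float(model.intercept_[0]), 2))
```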

  2. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.

  3. Permutation orbifolds and chaos

    NARCIS (Netherlands)

    Belin, A.

    2017-01-01

    We study out-of-time-ordered correlation functions in permutation orbifolds at large central charge. We show that they do not decay at late times for arbitrary choices of low-dimension operators, indicating that permutation orbifolds are non-chaotic theories. This is in agreement with the fact they

  4. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    International Nuclear Information System (INIS)

    Li, Rui; Wang, Jun

    2016-01-01

A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of fluctuation behaviors of the real stock markets and the proposed price model is mainly explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) with MWPE is also investigated. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate the stock market dynamical system. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.
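
To make the MWPE ingredient concrete, the sketch below (plain NumPy on invented series, without the multi-scale coarse-graining or EMD steps, and not the authors' implementation) computes weighted permutation entropy at a single scale: each window is mapped to its ordinal pattern, patterns are weighted by the window variance, and the Shannon entropy of the weighted pattern distribution is normalized to [0, 1].

```python
import math
from collections import defaultdict

import numpy as np

def weighted_permutation_entropy(x, m=4, tau=1):
    """Weighted permutation entropy of series x (embedding dimension m, delay tau), in [0, 1]."""
    x = np.asarray(x, dtype=float)
    weights = defaultdict(float)
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + m * tau:tau]
        pattern = tuple(np.argsort(window))     # ordinal pattern of the window
        weights[pattern] += np.var(window)      # weight each pattern by the window variance
    w = np.array(list(weights.values()))
    w = w[w > 0]                                # drop zero-variance patterns
    prob = w / w.sum()
    entropy = -np.sum(prob * np.log(prob))
    return float(entropy / math.log(math.factorial(m)))

rng = np.random.default_rng(3)
noise = rng.normal(size=2000)                       # white noise: entropy close to 1
regular = np.sin(np.linspace(0, 20 * np.pi, 2000))  # smooth oscillation: much lower entropy
print(round(weighted_permutation_entropy(noise), 3))
print(round(weighted_permutation_entropy(regular), 3))
```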

  5. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Rui, E-mail: lirui1401@bjtu.edu.cn; Wang, Jun

    2016-01-08

A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of fluctuation behaviors of the real stock markets and the proposed price model is mainly explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) with MWPE is also investigated. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate the stock market dynamical system. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.

  6. Invalid Permutation Tests

    Directory of Open Access Journals (Sweden)

    Mikel Aickin

    2010-01-01

Permutation tests are often presented in a rather casual manner, in both introductory and advanced statistics textbooks. The appeal of the cleverness of the procedure seems to replace the need for a rigorous argument that it produces valid hypothesis tests. The consequence of this educational failing has been a widespread belief in a “permutation principle”, which is supposed invariably to give tests that are valid by construction, under an absolute minimum of statistical assumptions. Several lines of argument are presented here to show that the permutation principle itself can be invalid, concentrating on the Fisher-Pitman permutation test for two means. A simple counterfactual example illustrates the general problem, and a slightly more elaborate counterfactual argument is used to explain why the main mathematical proof of the validity of permutation tests is mistaken. Two modifications of the permutation test are suggested and shown to be valid in a very modest simulation. In instances where simulation software is readily available, investigating the validity of a specific permutation test can be done easily, requiring only a minimum understanding of statistical technicalities.
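
For concreteness, this is the test under discussion in its textbook form (a bare-bones sketch with synthetic data, not a comment on the validity issues raised in the article): the difference in group means is recomputed under random relabelings of the pooled observations.

```python
import numpy as np

rng = np.random.default_rng(4)

def fisher_pitman(x, y, n_perm=10000):
    """Two-sided permutation test for a difference in means between samples x and y."""
    pooled = np.concatenate([x, y])
    n_x = len(x)
    observed = abs(x.mean() - y.mean())
    count = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(pooled)
        diff = abs(shuffled[:n_x].mean() - shuffled[n_x:].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.6, 1.0, size=30)
print("permutation p-value:", fisher_pitman(x, y))
```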

  7. Minimalism in Inflation Model Building

    CERN Document Server

    Dvali, Gia; Dvali, Gia; Riotto, Antonio

    1998-01-01

In this paper we demand that a successful inflationary scenario should follow from a model entirely motivated by particle physics considerations. We show that such a connection is indeed possible within the framework of concrete supersymmetric Grand Unified Theories where the doublet-triplet splitting problem is naturally solved. The Fayet-Iliopoulos D-term of a gauge U(1)_ξ symmetry, which plays a crucial role in the solution of the doublet-triplet splitting problem, simultaneously provides a built-in inflationary slope protected from dangerous supergravity corrections.

  8. Minimalism in inflation model building

    Science.gov (United States)

    Dvali, Gia; Riotto, Antonio

    1998-01-01

    In this paper we demand that a successful inflationary scenario should follow from a model entirely motivated by particle physics considerations. We show that such a connection is indeed possible within the framework of concrete supersymmetric Grand Unified Theories where the doublet-triplet splitting problem is naturally solved. The Fayet-Iliopoulos D-term of a gauge U(1)ξ symmetry, which plays a crucial role in the solution of the doublet-triplet splitting problem, simultaneously provides a built-in inflationary slope protected from dangerous supergravity corrections.

  9. A Generalized Random Regret Minimization Model

    NARCIS (Netherlands)

    Chorus, C.G.

    2013-01-01

    This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model, by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM

  10. Permutations of massive vacua

    Energy Technology Data Exchange (ETDEWEB)

Bourget, Antoine [Department of Physics, Universidad de Oviedo, Avenida Calvo Sotelo 18, 33007 Oviedo (Spain); Troost, Jan [Laboratoire de Physique Théorique de l’École Normale Supérieure, CNRS, PSL Research University, Sorbonne Universités, 75005 Paris (France)

    2017-05-09

    We discuss the permutation group G of massive vacua of four-dimensional gauge theories with N=1 supersymmetry that arises upon tracing loops in the space of couplings. We concentrate on superconformal N=4 and N=2 theories with N=1 supersymmetry preserving mass deformations. The permutation group G of massive vacua is the Galois group of characteristic polynomials for the vacuum expectation values of chiral observables. We provide various techniques to effectively compute characteristic polynomials in given theories, and we deduce the existence of varying symmetry breaking patterns of the duality group depending on the gauge algebra and matter content of the theory. Our examples give rise to interesting field extensions of spaces of modular forms.

  11. Patterns in Permutations and Words

    CERN Document Server

    Kitaev, Sergey

    2011-01-01

    There has been considerable interest recently in the subject of patterns in permutations and words, a new branch of combinatorics with its roots in the works of Rotem, Rogers, and Knuth in the 1970s. Consideration of the patterns in question has been extremely interesting from the combinatorial point of view, and it has proved to be a useful language in a variety of seemingly unrelated problems, including the theory of Kazhdan--Lusztig polynomials, singularities of Schubert varieties, interval orders, Chebyshev polynomials, models in statistical mechanics, and various sorting algorithms, inclu

  12. Modular invariance of N=2 minimal models

    International Nuclear Information System (INIS)

    Sidenius, J.

    1991-01-01

    We prove modular covariance of one-point functions at one loop in the diagonal N=2 minimal superconformal models. We use the recently derived general formalism for computing arbitrary conformal blocks in these models. Our result should be sufficient to guarantee modular covariance at arbitrary genus. It is thus an important check on the general formalism which is not manifestly modular covariant. (orig.)

  13. Gray Code for Cayley Permutations

    Directory of Open Access Journals (Sweden)

    J.-L. Baril

    2003-10-01

A length-n Cayley permutation p of a totally ordered set S is a length-n sequence of elements from S, subject to the condition that if an element x appears in p then all elements y < x also appear in p. In this paper, we give a Gray code list for the set of length-n Cayley permutations. Two successive permutations in this list differ in at most two positions.
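
A brute-force sketch of the object itself (enumeration only, not the Gray-code construction of the paper): length-n Cayley permutations are exactly the sequences whose set of values is {1, ..., k} for some k, and their counts are the ordered Bell (Fubini) numbers.

```python
from itertools import product

def is_cayley(p):
    """True if every value smaller than a used value is also used (values start at 1)."""
    return set(p) == set(range(1, max(p) + 1))

def cayley_permutations(n):
    return [p for p in product(range(1, n + 1), repeat=n) if is_cayley(p)]

for n in range(1, 5):
    print(n, len(cayley_permutations(n)))   # 1, 3, 13, 75: the ordered Bell (Fubini) numbers
print(cayley_permutations(2))               # [(1, 1), (1, 2), (2, 1)]
```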

  14. Matrix factorizations, minimal models and Massey products

    International Nuclear Information System (INIS)

    Knapp, Johanna; Omer, Harun

    2006-01-01

We present a method to compute the full non-linear deformations of matrix factorizations for ADE minimal models. This method is based on the calculation of higher products in the cohomology, called Massey products. The algorithm yields a polynomial ring whose vanishing relations encode the obstructions of the deformations of the D-branes characterized by these matrix factorizations. This coincides with the critical locus of the effective superpotential which can be computed by integrating these relations. Our results for the effective superpotential are in agreement with those obtained from solving the A-infinity relations. We point out a relation to the superpotentials of Kazama-Suzuki models. We will illustrate our findings by various examples, putting emphasis on the E_6 minimal model

  15. Periodical cicadas: A minimal automaton model

    Science.gov (United States)

    de O. Cardozo, Giovano; de A. M. M. Silvestre, Daniel; Colato, Alexandre

    2007-08-01

The Magicicada spp. life cycles with their prime periods and highly synchronized emergence have defied reasonable scientific explanation since their discovery. During the last decade several models and explanations for this phenomenon appeared in the literature, along with a great deal of discussion. Despite this considerable effort, there is no final conclusion about this long-standing biological problem. Here, we construct a minimal automaton model without predation/parasitism which reproduces some of these aspects. Our results point towards competition between different strains with a limited dispersal threshold as the main factor leading to the emergence of prime-numbered life cycles.

  16. Minimal model for spoof acoustoelastic surface states

    Directory of Open Access Journals (Sweden)

    J. Christensen

    2014-12-01

Similar to textured perfect electric conductors for electromagnetic waves sustaining artificial or spoof surface plasmons, we present an equivalent phenomenon for the case of sound. Aided by a minimal model that is able to capture the complex wave interaction of elastic cavity modes and airborne sound radiation in perfectly rigid panels, we construct designer acoustoelastic surface waves that are entirely controlled by the geometrical environment. Comparisons to results obtained by full-wave simulations confirm the feasibility of the model, and we demonstrate illustrative examples such as resonant transmissions and waveguiding to show a few of the many cases where spoof elastic surface waves are useful.

  17. Visual recognition of permuted words

    Science.gov (United States)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

In the current study we examine how letter permutation affects visual recognition of words for two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental-level processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison between the reading behavior of people in these languages is also presented. We present our study in the context of dual-route theories of reading, and it is observed that dual-route theory is consistent with our hypothesis of a distinction in the underlying cognitive behavior for reading permuted and non-permuted words. We conducted three experiments with lexical decision tasks to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and t-tests to determine significant differences in response time latencies for the two classes of data. Results showed that recognition accuracy for permuted words is decreased by 31% in the case of Urdu and 11% in the case of German. We also found a considerable difference in reading behavior for cursive and alphabetic languages, and it is observed that reading of Urdu is comparatively slower than reading of German due to characteristics of the cursive script.

  18. Permutationally invariant state reconstruction

    DEFF Research Database (Denmark)

    Moroder, Tobias; Hyllus, Philipp; Tóth, Géza

    2012-01-01

Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum ... optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.

  19. Fusion algebras of logarithmic minimal models

    International Nuclear Information System (INIS)

    Rasmussen, Joergen; Pearce, Paul A

    2007-01-01

We present explicit conjectures for the chiral fusion algebras of the logarithmic minimal models LM(p,p') considering Virasoro representations with no enlarged or extended symmetry algebra. The generators of fusion are countably infinite in number but the ensuing fusion rules are quasi-rational in the sense that the fusion of a finite number of representations decomposes into a finite direct sum of representations. The fusion rules are commutative, associative and exhibit an sl(2) structure but require so-called Kac representations which are typically reducible yet indecomposable representations of rank 1. In particular, the identity of the fundamental fusion algebra for p ≠ 1 is a reducible yet indecomposable Kac representation of rank 1. We make detailed comparisons of our fusion rules with the results of Gaberdiel and Kausch for p = 1 and with Eberle and Flohr for (p, p') = (2, 5) corresponding to the logarithmic Yang-Lee model. In the latter case, we confirm the appearance of indecomposable representations of rank 3. We also find that closure of a fundamental fusion algebra is achieved without the introduction of indecomposable representations of rank higher than 3. The conjectured fusion rules are supported, within our lattice approach, by extensive numerical studies of the associated integrable lattice models. Details of our lattice findings and numerical results will be presented elsewhere. The agreement of our fusion rules with the previous fusion rules lends considerable support for the identification of the logarithmic minimal models LM(p,p') with the augmented c_{p,p'} (minimal) models defined algebraically

  20. The minimal curvaton-Higgs model

    Energy Technology Data Exchange (ETDEWEB)

    Enqvist, Kari [Helsinki Univ. and Helsinki Institute of Physics (Finland). Physics Dept.; Lerner, Rose N. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Helsinki Univ. and Helsinki Institute of Physics (Finland). Physics Dept.; Takahashi, Tomo [Saga Univ. (Japan). Dept. of Physics

    2013-10-15

We present the first full study of the minimal curvaton-Higgs (MCH) model, which is a minimal interpretation of the curvaton scenario with one real scalar coupled to the standard model Higgs boson. The standard model coupling allows the dynamics of the model to be determined in detail, including effects from the thermal background and from radiative corrections to the potential. The relevant mechanisms for curvaton decay are incomplete non-perturbative decay (delayed by thermal blocking), followed by decay via a dimension-5 non-renormalisable operator. To avoid spoiling the predictions of big bang nucleosynthesis, we find the "bare" curvaton mass to be m_σ ≥ 8 × 10^4 GeV. To match observational data from Planck there is an upper limit on the curvaton-Higgs coupling g, between 10^-3 and 10^-2, depending on the mass. This is due to interactions with the thermal background. We find that typically non-Gaussianities are small but that if f_NL is observed in the near future then m_σ < 10^9 GeV, depending on the Hubble scale during inflation. In a thermal dark matter model, the lower bound on m_σ can increase substantially. The parameter space may also be affected once the baryogenesis mechanism is specified.

  1. The minimal curvaton-Higgs model

    International Nuclear Information System (INIS)

    Enqvist, Kari; Lerner, Rose N.; Helsinki Univ. and Helsinki Institute of Physics; Takahashi, Tomo

    2013-10-01

We present the first full study of the minimal curvaton-Higgs (MCH) model, which is a minimal interpretation of the curvaton scenario with one real scalar coupled to the standard model Higgs boson. The standard model coupling allows the dynamics of the model to be determined in detail, including effects from the thermal background and from radiative corrections to the potential. The relevant mechanisms for curvaton decay are incomplete non-perturbative decay (delayed by thermal blocking), followed by decay via a dimension-5 non-renormalisable operator. To avoid spoiling the predictions of big bang nucleosynthesis, we find the "bare" curvaton mass to be m_σ ≥ 8 × 10^4 GeV. To match observational data from Planck there is an upper limit on the curvaton-Higgs coupling g, between 10^-3 and 10^-2, depending on the mass. This is due to interactions with the thermal background. We find that typically non-Gaussianities are small but that if f_NL is observed in the near future then m_σ < 10^9 GeV, depending on the Hubble scale during inflation. In a thermal dark matter model, the lower bound on m_σ can increase substantially. The parameter space may also be affected once the baryogenesis mechanism is specified.

  2. Infinite permutations vs. infinite words

    Directory of Open Access Journals (Sweden)

    Anna E. Frid

    2011-08-01

I am going to compare well-known properties of infinite words with those of infinite permutations, a new object studied since middle 2000s. Basically, it was Sergey Avgustinovich who invented this notion, although in an early study by Davis et al. permutations appear in a very similar framework as early as in 1977. I am going to tell about periodicity of permutations, their complexity according to several definitions and their automatic properties, that is, about usual parameters of words, now extended to permutations and behaving sometimes similarly to those for words, sometimes not. Another series of results concerns permutations generated by infinite words and their properties. Although this direction of research is young, many people, including two other speakers of this meeting, have participated in it, and I believe that several more topics for further study are really promising.

  3. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  4. Random defect lines in conformal minimal models

    International Nuclear Information System (INIS)

    Jeng, M.; Ludwig, A.W.W.

    2001-01-01

We analyze the effect of adding quenched disorder along a defect line in the 2D conformal minimal models using replicas. The disorder is realized by a random applied magnetic field in the Ising model, by fluctuations in the ferromagnetic bond coupling in the tricritical Ising model and tricritical three-state Potts model (the φ_{1,2} operator), etc. We find that for the Ising model, the defect renormalizes to two decoupled half-planes without disorder, but that for all other models, the defect renormalizes to a disorder-dominated fixed point. Its critical properties are studied with an expansion in ε ∝ 1/m for the mth Virasoro minimal model. The decay exponents X_N = (N/2)[1 − 9(3N−4)/(4(m+1)^2)] + O((3/(m+1))^3) of the Nth moment of the two-point function of φ_{1,2} along the defect are obtained to 2-loop order, exhibiting multifractal behavior. This leads to a typical decay exponent X_typ = (1/2)[1 + 9/(m+1)^2] + O((3/(m+1))^3). One-point functions are seen to have a non-self-averaging amplitude. The boundary entropy is larger than that of the pure system by order 1/m^3. As a byproduct of our calculations, we also obtain to 2-loop order the exponent X̃_N = N[1 − (2/(9π^2))(3N−4)(q−2)^2] + O((q−2)^3) of the Nth moment of the energy operator in the q-state Potts model with bulk bond disorder

  5. From topological strings to minimal models

    International Nuclear Information System (INIS)

    Foda, Omar; Wu, Jian-Feng

    2015-01-01

    We glue four refined topological vertices to obtain the building block of 5D U(2) quiver instanton partition functions. We take the 4D limit of the result to obtain the building block of 4D instanton partition functions which, using the AGT correspondence, are identified with Virasoro conformal blocks. We show that there is a choice of the parameters of the topological vertices that we start with, as well as the parameters and the intermediate states involved in the gluing procedure, such that we obtain Virasoro minimal model conformal blocks.

  6. From topological strings to minimal models

    Energy Technology Data Exchange (ETDEWEB)

    Foda, Omar [School of Mathematics and Statistics, University of Melbourne,Royal Parade, Parkville, VIC 3010 (Australia); Wu, Jian-Feng [Department of Mathematics and Statistics, Henan University,Minglun Street, Kaifeng city, Henan (China); Beijing Institute of Theoretical Physics and Mathematics,3rd Shangdi Street, Beijing (China)

    2015-07-24

    We glue four refined topological vertices to obtain the building block of 5D U(2) quiver instanton partition functions. We take the 4D limit of the result to obtain the building block of 4D instanton partition functions which, using the AGT correspondence, are identified with Virasoro conformal blocks. We show that there is a choice of the parameters of the topological vertices that we start with, as well as the parameters and the intermediate states involved in the gluing procedure, such that we obtain Virasoro minimal model conformal blocks.

  7. Flocking with minimal cooperativity: the panic model.

    Science.gov (United States)

    Pilkiewicz, Kevin R; Eaves, Joel D

    2014-01-01

    We present a two-dimensional lattice model of self-propelled spins that can change direction only upon collision with another spin. We show that even with ballistic motion and minimal cooperativity, these spins display robust flocking behavior at nearly all densities, forming long bands of stripes. The structural transition in this system is not a thermodynamic phase transition, but it can still be characterized by an order parameter, and we demonstrate that if this parameter is studied as a dynamical variable rather than a steady-state observable, we can extract a detailed picture of how the flocking mechanism varies with density.

  8. Minimal models for axion and neutrino

    Directory of Open Access Journals (Sweden)

    Y.H. Ahn

    2016-01-01

The PQ mechanism resolving the strong CP problem and the seesaw mechanism explaining the smallness of neutrino masses may be related in a way that the PQ symmetry breaking scale and the seesaw scale arise from a common origin. Depending on how the PQ symmetry and the seesaw mechanism are realized, one has different predictions for the color and electromagnetic anomalies, which could be tested in future axion dark matter search experiments. Motivated by this, we construct various PQ seesaw models which are minimally extended from the (non-)supersymmetric Standard Model and thus set up different benchmark points for the axion–photon–photon coupling in comparison with the standard KSVZ and DFSZ models.

  9. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.
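
The core idea, self-scheduled task parallelism with dynamic load balancing, can be illustrated in a few lines (a thread-and-queue toy in Python, not ADLB itself, which is an MPI-based library): idle workers simply pull whatever task is next, so irregular task sizes balance themselves.

```python
import queue
import random
import threading
import time

tasks = queue.Queue()
results = queue.Queue()

def worker(worker_id):
    """Pull tasks until the pool is drained; long and short tasks balance automatically."""
    while True:
        try:
            size = tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(size)                       # stand-in for an irregular amount of work
        results.put((worker_id, size))

random.seed(0)
for _ in range(20):
    tasks.put(random.uniform(0.01, 0.1))       # tasks of unpredictable size

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

done = [results.get() for _ in range(results.qsize())]
per_worker = {i: round(sum(s for w, s in done if w == i), 2) for i in range(4)}
print("work (seconds) handled by each worker:", per_worker)
```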

  10. PERMutation Using Transposase Engineering (PERMUTE): A Simple Approach for Constructing Circularly Permuted Protein Libraries.

    Science.gov (United States)

    Jones, Alicia M; Atkinson, Joshua T; Silberg, Jonathan J

    2017-01-01

    Rearrangements that alter the order of a protein's sequence are used in the lab to study protein folding, improve activity, and build molecular switches. One of the simplest ways to rearrange a protein sequence is through random circular permutation, where native protein termini are linked together and new termini are created elsewhere through random backbone fission. Transposase mutagenesis has emerged as a simple way to generate libraries encoding different circularly permuted variants of proteins. With this approach, a synthetic transposon (called a permuteposon) is randomly inserted throughout a circularized gene to generate vectors that express different permuted variants of a protein. In this chapter, we outline the protocol for constructing combinatorial libraries of circularly permuted proteins using transposase mutagenesis, and we describe the different permuteposons that have been developed to facilitate library construction.
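
At the sequence level, circular permutation is simple to picture: join the native termini and cut elsewhere. The toy sketch below (an invented sequence string, nothing to do with the permuteposon protocol itself) lists all circular permutants.

```python
def circular_permutants(seq, linker=""):
    """All circular permutants of seq: join the native termini (optionally via a linker)
    and choose each possible new start position."""
    joined = seq + linker
    return [joined[i:] + joined[:i] for i in range(len(joined))]

protein = "MKTAYIAKQR"          # hypothetical 10-residue sequence
for variant in circular_permutants(protein)[:4]:
    print(variant)
print(len(circular_permutants(protein)), "permutants in total")
```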

  11. Correlation Functions in Holographic Minimal Models

    CERN Document Server

    Papadodimas, Kyriakos

    2012-01-01

    We compute exact three and four point functions in the W_N minimal models that were recently conjectured to be dual to a higher spin theory in AdS_3. The boundary theory has a large number of light operators that are not only invisible in the bulk but grow exponentially with N even at small conformal dimensions. Nevertheless, we provide evidence that this theory can be understood in a 1/N expansion since our correlators look like free-field correlators corrected by a power series in 1/N . However, on examining these corrections we find that the four point function of the two bulk scalar fields is corrected at leading order in 1/N through the contribution of one of the additional light operators in an OPE channel. This suggests that, to correctly reproduce even tree-level correlators on the boundary, the bulk theory needs to be modified by the inclusion of additional fields. As a technical by-product of our analysis, we describe two separate methods -- including a Coulomb gas type free-field formalism -- that ...

  12. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ^0_1, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m_{χ^0_1} […], but the scalar mass m_0 is poorly constrained. In the wino-LSP case, m_{3/2} is constrained to about 900 TeV and m_{χ^0_1} to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m_{3/2} has just a lower limit ≳ 650 TeV (≳ 480 TeV) and m_{χ^0_1} is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g−2)_μ, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ^0_1 contributes only a fraction of the cold DM density, future LHC E_T-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B_{s,d} → μ^+μ^-) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  13. Permutation-invariant distance between atomic configurations

    Science.gov (United States)

    Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel

    2015-09-01

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant through permutations of atoms, avoiding the time consuming associated minimization required by other common criteria (like the root mean square distance). Finally, the invariance through global rotations is accounted for by a minimization procedure in the space of rotations solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose verifies the property of a metric on the space of atomic configurations. Two examples of applications are proposed. The first one consists in evaluating faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity.
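
For contrast with the functional-representation distance described here, the snippet below shows the kind of explicit minimization over atom permutations that such a distance is designed to avoid: a best-assignment root-mean-square distance between two small configurations computed with the Hungarian algorithm (synthetic coordinates; rotational alignment is ignored).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def assignment_rmsd(conf_a, conf_b):
    """RMSD minimized over permutations of atoms (no rotational alignment)."""
    cost = cdist(conf_a, conf_b) ** 2            # squared distances between all atom pairs
    rows, cols = linear_sum_assignment(cost)     # optimal one-to-one matching
    return float(np.sqrt(cost[rows, cols].mean()))

rng = np.random.default_rng(5)
conf_a = rng.uniform(0, 5, size=(8, 3))          # 8 atoms in 3D, made-up coordinates
conf_b = conf_a[rng.permutation(8)] + rng.normal(0, 0.05, size=(8, 3))  # relabeled + jittered

naive = float(np.sqrt(((conf_a - conf_b) ** 2).sum(axis=1).mean()))
print("naive RMSD (order-dependent):", round(naive, 3))
print("permutation-minimized RMSD  :", round(assignment_rmsd(conf_a, conf_b), 3))
```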

  14. Permutation-invariant distance between atomic configurations

    International Nuclear Information System (INIS)

    Ferré, Grégoire; Maillet, Jean-Bernard; Stoltz, Gabriel

    2015-01-01

    We present a permutation-invariant distance between atomic configurations, defined through a functional representation of atomic positions. This distance enables us to directly compare different atomic environments with an arbitrary number of particles, without going through a space of reduced dimensionality (i.e., fingerprints) as an intermediate step. Moreover, this distance is naturally invariant under permutations of atoms, avoiding the time-consuming minimization required by other common criteria (such as the root mean square distance). Finally, the invariance under global rotations is accounted for by a minimization procedure in the space of rotations solved by Monte Carlo simulated annealing. A formal framework is also introduced, showing that the distance we propose satisfies the properties of a metric on the space of atomic configurations. Two examples of applications are proposed. The first consists in evaluating the faithfulness of some fingerprints (or descriptors), i.e., their capacity to represent the structural information of a configuration. The second application concerns structural analysis, where our distance proves to be efficient in discriminating different local structures and even classifying their degree of similarity.

  15. AGT, Burge pairs and minimal models

    International Nuclear Information System (INIS)

    Bershtein, M.; Foda, O.

    2014-01-01

    We consider the AGT correspondence in the context of the conformal field theory M p,p ′ ⊗M H , where M p,p ′ is the minimal model based on the Virasoro algebra V p,p ′ labeled by two co-prime integers {p,p ′ }, 1

  16. AGT, Burge pairs and minimal models

    Energy Technology Data Exchange (ETDEWEB)

    Bershtein, M. [Landau Institute for Theoretical Physics,Chernogolovka (Russian Federation); Institute for Information Transmission Problems,Moscow (Russian Federation); National Research University Higher School of Economics, International Laboratory of Representation Theory and Mathematical Physics, Independent University of Moscow, Moscow (Russian Federation); Foda, O. [Mathematics and Statistics, University of Melbourne,Parkville, VIC 3010 (Australia)

    2014-06-30

    We consider the AGT correspondence in the context of the conformal field theory M{sup p,p{sup ′}}⊗M{sup H}, where M{sup p,p{sup ′}} is the minimal model based on the Virasoro algebra V{sup p,p{sup ′}} labeled by two co-prime integers {p,p"′}, 1

  17. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained, which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z 0 partial widths, are discussed in detail. Finally, the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies, and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  18. On Permuting Cut with Contraction

    OpenAIRE

    Borisavljevic, Mirjana; Dosen, Kosta; Petric, Zoran

    1999-01-01

    The paper presents a cut-elimination procedure for intuitionistic propositional logic in which cut is eliminated directly, without introducing the multiple-cut rule mix, and in which pushing cut above contraction is one of the reduction steps. The presentation of this procedure is preceded by an analysis of Gentzen's mix-elimination procedure, made in the perspective of permuting cut with contraction. It is also shown that in the absence of implication, pushing cut above contraction doesn't p...

  19. Sorting permutations by prefix and suffix rearrangements.

    Science.gov (United States)

    Lintzmayer, Carla Negri; Fertin, Guillaume; Dias, Zanoni

    2017-02-01

    Some interesting combinatorial problems have been motivated by genome rearrangements, which are mutations that affect large portions of a genome. When we represent genomes as permutations, the goal is to transform a given permutation into the identity permutation with the minimum number of rearrangements. When they affect segments from the beginning (respectively end) of the permutation, they are called prefix (respectively suffix) rearrangements. This paper presents results for rearrangement problems that involve prefix and suffix versions of reversals and transpositions, considering unsigned and signed permutations. We give 2-approximation and ([Formula: see text])-approximation algorithms for these problems, where [Formula: see text] is a constant divided by the number of breakpoints (pairs of elements that are consecutive in the permutation but not consecutive in the identity permutation) in the input permutation. We also give bounds for the diameters concerning these problems and provide ways of improving the practical results of our algorithms.
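
    For readers unfamiliar with prefix rearrangements, the Python sketch below implements the classical greedy "pancake" procedure for sorting an unsigned permutation by prefix reversals. It is only a simple baseline, not the 2-approximation or breakpoint-based algorithms of the paper, and the function name is an illustrative assumption.

```python
def prefix_reversal_sort(perm):
    """Greedy 'pancake' sorting of an unsigned permutation by prefix reversals:
    flip the largest unsorted element to the front, then flip it into place.
    Returns the prefix lengths flipped (at most 2(n-1) of them)."""
    p = list(perm)
    flips = []
    for size in range(len(p), 1, -1):
        i = p.index(max(p[:size]))
        if i == size - 1:
            continue                              # already in its final position
        if i > 0:
            p[:i + 1] = reversed(p[:i + 1]); flips.append(i + 1)
        p[:size] = reversed(p[:size]); flips.append(size)
    return flips

print(prefix_reversal_sort([3, 1, 4, 5, 2]))
```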

  20. Determination of Pavement Rehabilitation Activities through a Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Sangyum Lee

    2013-01-01

    This paper presents a mathematical programming model for optimal pavement rehabilitation planning. The model maximizes the rehabilitation area through a newly developed permutation algorithm based on the procedures outlined in the harmony search (HS) algorithm. The proposed algorithm provides an optimal solution method for the problem of multilocation rehabilitation activities on pavement structures, using empirical deterioration and rehabilitation-effectiveness models under a limited maintenance budget. Thus, nonlinear pavement performance and rehabilitation activity decision models were used to maximize the objective function of rehabilitation area within a limited budget through the permutation algorithm. Our results show that the heuristic permutation algorithm provides a good optimum in terms of maximizing the rehabilitation area, compared with the worst-first maintenance method currently used in Seoul.

  1. The minimal extension of the Standard Model with S3 symmetry

    International Nuclear Information System (INIS)

    Lee, C.E.; Lin, C.; Yang, Y.W.

    1991-01-01

    In this paper the two Higgs-doublet extension of the standard electroweak model with S 3 symmetry is presented. The flavour changing neutral Higgs interactions are automatically absent. A permutation symmetry breaking scheme is discussed. The correction to Bjorken's approximation and the CP-violation factor J are given within this scheme.

  2. Biophysically realistic minimal model of dopamine neuron

    Science.gov (United States)

    Oprisan, Sorinel

    2008-03-01

    We proposed and studied a new biophysically relevant computational model of dopaminergic neurons. Midbrain dopamine neurons are involved in motivation and the control of movement, and have been implicated in various pathologies such as Parkinson's disease, schizophrenia, and drug abuse. The model we developed is a single-compartment Hodgkin-Huxley (HH)-type parallel conductance membrane model. The model captures the essential mechanisms underlying the slow oscillatory potentials and plateau potential oscillations. The main currents involved are: 1) a voltage-dependent fast calcium current, 2) a small conductance potassium current that is modulated by the cytosolic concentration of calcium, and 3) a slow voltage-activated potassium current. We developed multidimensional bifurcation diagrams and extracted the effective domains of sustained oscillations. The model includes a calcium balance, reflecting the fundamental importance of calcium influx as demonstrated by simultaneous electrophysiological and calcium imaging procedures. Although there is significant evidence to suggest a partially electrogenic calcium pump, all previous models considered only electrogenic pumps. We investigated the effect of the electrogenic calcium pump on the bifurcation diagram of the model and compared our findings against the experimental results.
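
    To make the ingredients of such a minimal model concrete, the following Python sketch integrates a single-compartment, parallel-conductance membrane equation with the three currents listed above and a calcium balance. All parameter values, gating functions, and the lumped pump term are illustrative assumptions rather than the cited model's, and whether the voltage trace shows slow oscillations depends on that choice.

```python
import numpy as np

# Illustrative parameters and gating functions; none of these values are taken
# from the cited model -- they only make the three-current structure concrete.
C_m = 1.0                       # membrane capacitance (uF/cm^2)
g_Ca, E_Ca = 0.25, 100.0        # fast voltage-dependent Ca current
g_SK, E_K = 0.3, -90.0          # small-conductance Ca-activated K current
g_Ks = 0.4                      # slow voltage-activated K current
k_Ca, tau_Ca = 2e-4, 50.0       # Ca influx scaling and lumped removal time (ms)
tau_n = 100.0                   # slow K gating time constant (ms)

m_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 30.0) / 7.0))
n_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 25.0) / 12.0))
sk_open = lambda Ca: Ca**4 / (Ca**4 + (3e-4)**4)

def simulate(T=2000.0, dt=0.05, V0=-60.0):
    """Forward-Euler integration of the minimal three-current model."""
    V, n, Ca = V0, n_inf(V0), 1e-4
    trace = np.empty(int(T / dt))
    for i in range(trace.size):
        I_Ca = g_Ca * m_inf(V) * (V - E_Ca)      # inward (negative) near rest
        I_SK = g_SK * sk_open(Ca) * (V - E_K)    # Ca-modulated K current
        I_Ks = g_Ks * n * (V - E_K)              # slow K current
        dV = -(I_Ca + I_SK + I_Ks) / C_m
        dn = (n_inf(V) - n) / tau_n
        dCa = -k_Ca * I_Ca - Ca / tau_Ca         # influx plus lumped pump/buffer removal
        V, n, Ca = V + dt * dV, n + dt * dn, Ca + dt * dCa
        trace[i] = V
    return trace

voltage = simulate()
```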

  3. Spontaneous parity violation and minimal Higgs models

    International Nuclear Information System (INIS)

    Chavez, H.; Martins Simoes, J.A.

    2007-01-01

    In this paper we present a model for the spontaneous breaking of parity with two Higgs doublets and two neutral Higgs singlets which are even and odd under D-parity. The condition υ R >> υ L can be satisfied without introducing bidoublets, and it is induced by the breaking of D-parity through the vacuum expectation value of the odd Higgs singlet. Examples of left-right symmetric and mirror fermion models in grand unified theories are presented. (orig.)

  4. Minimal composite Higgs models at the LHC

    Science.gov (United States)

    Carena, Marcela; Da Rold, Leandro; Pontón, Eduardo

    2014-06-01

    We consider composite Higgs models where the Higgs is a pseudo-Nambu Goldstone boson arising from the spontaneous breaking of an approximate global symmetry by some underlying strong dynamics. We focus on the SO(5) → SO(4) symmetry breaking pattern, assuming the "partial compositeness" paradigm. We study the consequences on Higgs physics of the fermionic representations produced by the strong dynamics, that mix with the Standard Model (SM) degrees of freedom. We consider models based on the lowest-dimensional representations of SO(5) that allow for the custodial protection of the Zbb̄ coupling, i.e. the 5, 10 and 14. We find a generic suppression of the gluon fusion process, while the Higgs branching fractions can be enhanced or suppressed compared to the SM. Interestingly, a precise measurement of the Higgs boson couplings can distinguish between different realizations in the fermionic sector, thus providing crucial information about the nature of the UV dynamics.

  5. Minimal composite Higgs models at the LHC

    International Nuclear Information System (INIS)

    Carena, Marcela; Rold, Leandro Da; Pontón, Eduardo

    2014-01-01

    We consider composite Higgs models where the Higgs is a pseudo-Nambu Goldstone boson arising from the spontaneous breaking of an approximate global symmetry by some underlying strong dynamics. We focus on the SO(5)→SO(4) symmetry breaking pattern, assuming the "partial compositeness" paradigm. We study the consequences on Higgs physics of the fermionic representations produced by the strong dynamics, that mix with the Standard Model (SM) degrees of freedom. We consider models based on the lowest-dimensional representations of SO(5) that allow for the custodial protection of the Zbb̄ coupling, i.e. the 5, 10 and 14. We find a generic suppression of the gluon fusion process, while the Higgs branching fractions can be enhanced or suppressed compared to the SM. Interestingly, a precise measurement of the Higgs boson couplings can distinguish between different realizations in the fermionic sector, thus providing crucial information about the nature of the UV dynamics.

  6. A minimal physical model for crawling cells

    Science.gov (United States)

    Tiribocchi, Adriano; Tjhung, Elsen; Marenduzzo, Davide; Cates, Michael E.

    Cell motility in higher organisms (eukaryotes) is fundamental to biological functions such as wound healing or immune response, and is also implicated in diseases such as cancer. For cells crawling on solid surfaces, considerable insights into motility have been gained from experiments replicating such motion in vitro. Such experiments show that crawling uses a combination of actin treadmilling (polymerization), which pushes the front of a cell forward, and myosin-induced stress (contractility), which retracts the rear. We present a simplified physical model of a crawling cell, consisting of a droplet of active polar fluid with contractility throughout, but treadmilling connected to a thin layer near the supporting wall. The model shows a variety of shapes and/or motility regimes, some closely resembling cases seen experimentally. Our work supports the view that cellular motility exploits autonomous physical mechanisms whose operation does not need continuous regulatory effort.

  7. Strong Sector in non-minimal SUSY model

    Directory of Open Access Journals (Sweden)

    Costantini Antonio

    2016-01-01

    We investigate the squark sector of a supersymmetric theory with an extended Higgs sector. We give the mass matrices of stop and sbottom, comparing the Minimal Supersymmetric Standard Model (MSSM) case and the non-minimal case. We discuss the impact of the extra superfields on the decay channels of the stop searched at the LHC.

  8. The dispersionless Lax equations and topological minimal models

    International Nuclear Information System (INIS)

    Krichever, I.

    1992-01-01

    It is shown that perturbed rings of the primary chiral fields of the topological minimal models coincide with some particular solutions of the dispersionless Lax equations. The exact formulae for the tree-level partition functions of A n topological minimal models are found. The Virasoro constraints for the analogue of the τ-function of the dispersionless Lax equation corresponding to these models are proved. (orig.)

  9. The Structure of a Thermophilic Kinase Shapes Fitness upon Random Circular Permutation.

    Science.gov (United States)

    Jones, Alicia M; Mehta, Manan M; Thomas, Emily E; Atkinson, Joshua T; Segall-Shapiro, Thomas H; Liu, Shirley; Silberg, Jonathan J

    2016-05-20

    Proteins can be engineered for synthetic biology through circular permutation, a sequence rearrangement in which native protein termini become linked and new termini are created elsewhere through backbone fission. However, it remains challenging to anticipate a protein's functional tolerance to circular permutation. Here, we describe new transposons for creating libraries of randomly circularly permuted proteins that minimize peptide additions at their termini, and we use transposase mutagenesis to study the tolerance of a thermophilic adenylate kinase (AK) to circular permutation. We find that libraries expressing permuted AKs with either short or long peptides amended to their N-terminus yield distinct sets of active variants and present evidence that this trend arises because permuted protein expression varies across libraries. Mapping all sites that tolerate backbone cleavage onto AK structure reveals that the largest contiguous regions of sequence that lack cleavage sites are proximal to the phosphotransfer site. A comparison of our results with a range of structure-derived parameters further showed that retention of function correlates to the strongest extent with the distance to the phosphotransfer site, amino acid variability in an AK family sequence alignment, and residue-level deviations in superimposed AK structures. Our work illustrates how permuted protein libraries can be created with minimal peptide additions using transposase mutagenesis, and it reveals a challenge of maintaining consistent expression across permuted variants in a library that minimizes peptide additions. Furthermore, these findings provide a basis for interpreting responses of thermophilic phosphotransferases to circular permutation by calibrating how different structure-derived parameters relate to retention of function in a cellular selection.

  10. Non-minimal supersymmetric models. LHC phenomenology and model discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Krauss, Manuel Ernst

    2015-12-18

    It is generally agreed that the Standard Model of particle physics can only be viewed as an effective theory that needs to be extended, as it leaves some essential questions unanswered. The exact realization of the necessary extension is subject to discussion. Supersymmetry is among the most promising approaches to physics beyond the Standard Model, as it can simultaneously solve the hierarchy problem and provide an explanation for the dark matter abundance in the universe. Despite further virtues like gauge coupling unification and radiative electroweak symmetry breaking, minimal supersymmetric models cannot be the ultimate answer to the open questions of the Standard Model, as they still do not incorporate neutrino masses and are, besides, heavily constrained by LHC data. This does not, however, diminish the appeal of the concept of supersymmetry. It is therefore time to explore non-minimal supersymmetric models which are able to close these gaps, review their consistency, test them against experimental data and provide prospects for future experiments. The goal of this thesis is to contribute to this process by exploring an extraordinarily well motivated class of models which is based on a left-right symmetric gauge group. While relaxing the tension with LHC data, these models automatically include the ingredients for neutrino masses. We start with a left-right supersymmetric model at the TeV scale in which scalar SU(2){sub R} triplets are responsible for the breaking of left-right symmetry as well as for the generation of neutrino masses. Although a tachyonic doubly-charged scalar is present at tree level in this kind of model, we show by performing the first complete one-loop evaluation that it gains a real mass at the loop level. The constraints on the predicted additional charged gauge bosons are then evaluated using LHC data, and we find that we can explain small excesses in the data of which the current LHC run will reveal if they are actual new

  11. Tensor Permutation Matrices in Finite Dimensions

    OpenAIRE

    Christian, Rakotonirina

    2005-01-01

    We have generalised the properties, with respect to the tensor product, of a 4x4 permutation matrix which we call a tensor commutation matrix. Tensor commutation matrices can be constructed with or without calculus. A formula that allows us to construct a tensor permutation matrix, which is a generalisation of the tensor commutation matrix, has been established. The expression of an element of a tensor commutation matrix has been generalised in the case of any element of a tensor permutation ma...
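
    The central object can be written down directly. The NumPy sketch below constructs the tensor commutation (swap) matrix for dimensions m and n, i.e. the permutation matrix S with S(a⊗b) = b⊗a; the 4x4 matrix of the abstract is the case m = n = 2. The function name is an illustrative assumption, and the more general tensor permutation matrices of the paper would permute more than two factors in the same way.

```python
import numpy as np

def tensor_commutation(m, n):
    """Return the (m*n) x (m*n) permutation matrix S with S @ kron(a, b) == kron(b, a)
    for every a in R^m and b in R^n (the 4x4 case corresponds to m = n = 2)."""
    S = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # the basis vector e_i (x) e_j is sent to e_j (x) e_i
            S[j * m + i, i * n + j] = 1.0
    return S

a, b = np.random.rand(2), np.random.rand(3)
S = tensor_commutation(2, 3)
assert np.allclose(S @ np.kron(a, b), np.kron(b, a))
```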

  12. Permutation importance: a corrected feature importance measure.

    Science.gov (United States)

    Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas

    2010-05-15

    In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R CONTACT: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de Supplementary data are available at Bioinformatics online.
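
    The core of the PIMP heuristic — permuting the outcome vector to obtain a null distribution of importances — is easy to prototype. The sketch below uses Python with scikit-learn (the published implementation is in R, so this is an assumption, not the authors' code) and returns empirical one-sided p-values per feature; the parametric null-distribution fitting of the published method is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pimp_p_values(X, y, n_perm=100, seed=0):
    """Empirical PIMP-style p-values: permute the outcome vector, refit the
    forest, and compare the observed Gini importances against the null."""
    rng = np.random.default_rng(seed)
    rf = lambda target: RandomForestClassifier(
        n_estimators=200, random_state=seed).fit(X, target).feature_importances_
    observed = rf(y)
    null = np.array([rf(rng.permutation(y)) for _ in range(n_perm)])
    # one-sided empirical p-value per feature (add-one smoothing)
    return (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)
```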

  13. Inflationary models with non-minimally derivative coupling

    International Nuclear Information System (INIS)

    Yang, Nan; Fei, Qin; Gong, Yungui; Gao, Qing

    2016-01-01

    We derive the general formulae for the scalar and tensor spectral tilts to the second order for the inflationary models with non-minimally derivative coupling without taking the high friction limit. The non-minimally kinetic coupling to Einstein tensor brings the energy scale in the inflationary models down to be sub-Planckian. In the high friction limit, the Lyth bound is modified with an extra suppression factor, so that the field excursion of the inflaton is sub-Planckian. The inflationary models with non-minimally derivative coupling are more consistent with observations in the high friction limit. In particular, with the help of the non-minimally derivative coupling, the quartic power law potential is consistent with the observational constraint at 95% CL. (paper)

  14. Automated economic analysis model for hazardous waste minimization

    International Nuclear Information System (INIS)

    Dharmavaram, S.; Mount, J.B.; Donahue, B.A.

    1990-01-01

    The US Army has established a policy of achieving a 50 percent reduction in hazardous waste generation by the end of 1992. To assist the Army in reaching this goal, the Environmental Division of the US Army Construction Engineering Research Laboratory (USACERL) designed the Economic Analysis Model for Hazardous Waste Minimization (EAHWM). The EAHWM was designed to allow the user to evaluate the life cycle costs for various techniques used in hazardous waste minimization and to compare them to the life cycle costs of current operating practices. The program was developed in C language on an IBM compatible PC and is consistent with other pertinent models for performing economic analyses. The potential hierarchical minimization categories used in EAHWM include source reduction, recovery and/or reuse, and treatment. Although treatment is no longer an acceptable minimization option, its use is widespread and has therefore been addressed in the model. The model allows for economic analysis for minimization of the Army's six most important hazardous waste streams. These include solvents, paint stripping wastes, metal plating wastes, industrial waste-sludges, used oils, and batteries and battery electrolytes. The EAHWM also includes a general application which can be used to calculate and compare the life cycle costs for minimization alternatives of any waste stream, hazardous or non-hazardous. The EAHWM has been fully tested and implemented in more than 60 Army installations in the United States.

  15. Null-polygonal minimal surfaces in AdS4 from perturbed W minimal models

    International Nuclear Information System (INIS)

    Hatsuda, Yasuyuki; Ito, Katsushi; Satoh, Yuji

    2012-11-01

    We study the null-polygonal minimal surfaces in AdS 4 , which correspond to the gluon scattering amplitudes/Wilson loops in N=4 super Yang-Mills theory at strong coupling. The area of the minimal surfaces with n cusps is characterized by the thermodynamic Bethe ansatz (TBA) integral equations or the Y-system of the homogeneous sine-Gordon model, which is regarded as the SU(n-4) 4 /U(1) n-5 generalized parafermion theory perturbed by the weight-zero adjoint operators. Based on the relation to the TBA systems of the perturbed W minimal models, we solve the TBA equations by using the conformal perturbation theory, and obtain the analytic expansion of the remainder function around the UV/regular-polygonal limit for n = 6 and 7. We compare the rescaled remainder function for n=6 with the two-loop one, to observe that they are close to each other similarly to the AdS 3 case.

  16. Null-polygonal minimal surfaces in AdS{sub 4} from perturbed W minimal models

    Energy Technology Data Exchange (ETDEWEB)

    Hatsuda, Yasuyuki [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Ito, Katsushi [Tokyo Institute of Technology (Japan). Dept. of Physics; Satoh, Yuji [Tsukuba Univ., Sakura, Ibaraki (Japan). Inst. of Physics

    2012-11-15

    We study the null-polygonal minimal surfaces in AdS{sub 4}, which correspond to the gluon scattering amplitudes/Wilson loops in N=4 super Yang-Mills theory at strong coupling. The area of the minimal surfaces with n cusps is characterized by the thermodynamic Bethe ansatz (TBA) integral equations or the Y-system of the homogeneous sine-Gordon model, which is regarded as the SU(n-4){sub 4}/U(1){sup n-5} generalized parafermion theory perturbed by the weight-zero adjoint operators. Based on the relation to the TBA systems of the perturbed W minimal models, we solve the TBA equations by using the conformal perturbation theory, and obtain the analytic expansion of the remainder function around the UV/regular-polygonal limit for n = 6 and 7. We compare the rescaled remainder function for n=6 with the two-loop one, to observe that they are close to each other similarly to the AdS{sub 3} case.

  17. A random regret minimization model of travel choice

    NARCIS (Netherlands)

    Chorus, C.G.; Arentze, T.A.; Timmermans, H.J.P.

    2008-01-01

    This paper presents an alternative to Random Utility-Maximization models of travel choice. Our Random Regret-Minimization model is rooted in Regret Theory and provides several useful features for travel demand analysis. Firstly, it allows for the possibility that choices between travel

  18. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  19. Perturbed Yukawa textures in the minimal seesaw model

    Energy Technology Data Exchange (ETDEWEB)

    Rink, Thomas; Schmitz, Kai [Max Planck Institute for Nuclear Physics (MPIK),69117 Heidelberg (Germany)

    2017-03-29

    We revisit the minimal seesaw model, i.e., the type-I seesaw mechanism involving only two right-handed neutrinos. This model represents an important minimal benchmark scenario for future experimental updates on neutrino oscillations. It features four real parameters that cannot be fixed by the current data: two CP-violating phases, δ and σ, as well as one complex parameter, z, that is experimentally inaccessible at low energies. The parameter z controls the structure of the neutrino Yukawa matrix at high energies, which is why it may be regarded as a label or index for all UV completions of the minimal seesaw model. The fact that z encompasses only two real degrees of freedom allows us to systematically scan the minimal seesaw model over all of its possible UV completions. In doing so, we address the following question: suppose δ and σ should be measured at particular values in the future — to what extent is one then still able to realize approximate textures in the neutrino Yukawa matrix? Our analysis, thus, generalizes previous studies of the minimal seesaw model based on the assumption of exact texture zeros. In particular, our study allows us to assess the theoretical uncertainty inherent to the common texture ansatz. One of our main results is that a normal light-neutrino mass hierarchy is, in fact, still consistent with a two-zero Yukawa texture, provided that the two texture zeros receive corrections at the level of O(10 %). While our numerical results pertain to the minimal seesaw model only, our general procedure appears to be applicable to other neutrino mass models as well.

  20. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    Science.gov (United States)

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms to compute the minimal reversal distance were proposed, culminating in the currently best-known theoretical ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first one uses the technique of opposition-based learning, leading to an opposition-based memetic algorithm; the second one improves the previous algorithm by applying the heuristic of two-breakpoint elimination, leading to a hybrid approach. Several experiments were performed with one hundred randomly generated permutations, single benchmark permutations, and biological permutations. Results of the experiments showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Also, Hybrid-OBMA was shown to improve the results of OBMA for permutations of length greater than or equal to 60. The applicability of our proposed algorithms was checked by processing permutations based on biological data, in which case OBMA gave the best average results for all instances.
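
    A quantity that drives most of these algorithms is the number of breakpoints of the input permutation. The short Python sketch below counts breakpoints (with sentinels 0 and n+1) and derives the standard lower bound on the reversal distance, since a single reversal removes at most two breakpoints; it is only this bookkeeping layer, not the memetic or hybrid algorithms themselves, and the function names are illustrative.

```python
def breakpoints(perm):
    """Breakpoints of an unsigned permutation of 1..n: adjacent positions whose
    elements are not consecutive integers, with sentinels 0 and n+1 appended."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for i in range(len(ext) - 1) if abs(ext[i + 1] - ext[i]) != 1)

def reversal_distance_lower_bound(perm):
    # a single reversal can remove at most two breakpoints
    return (breakpoints(perm) + 1) // 2

print(breakpoints([3, 1, 2, 4]), reversal_distance_lower_bound([3, 1, 2, 4]))  # 3 2
```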

  1. Minimal quantization of two-dimensional models with chiral anomalies

    International Nuclear Information System (INIS)

    Ilieva, N.

    1987-01-01

    Two-dimensional gauge models with chiral anomalies - ''left-handed'' QED and the chiral Schwinger model - are quantized consistently within the framework of the minimal quantization method. The choice of the cone time as the physical time for the quantization is motivated. The well-known mass spectrum is found, but with a fixed value of the regularization parameter a=2. Such a unique solution is obtained due to the strong consistency requirement of the minimal quantization, which is reflected in the physically motivated choice of the time axis.

  2. Two-Higgs-doublet models with Minimal Flavour Violation

    International Nuclear Information System (INIS)

    Carlucci, Maria Valentina

    2010-01-01

    The tree-level flavour-changing neutral currents in the two-Higgs-doublet models can be suppressed by protecting the breaking of either flavour or flavour-blind symmetries, but only the first choice, implemented by the application of the Minimal Flavour Violation hypothesis, is stable under quantum corrections. Moreover, a two-Higgs-doublet model with Minimal Flavour Violation enriched with flavour-blind phases can explain the anomalies recently found in the ΔF = 2 transitions, namely the large CP-violating phase in B s mixing and the tension between ε K and S ψKS .

  3. Arrhenius model for estimating the respiration rate of minimally processed broccoli

    Directory of Open Access Journals (Sweden)

    Nurul Imamah

    2016-04-01

    Minimally processed broccoli is a perishable product because metabolic processes, including respiration, continue during storage. The respiration rate varies with the commodity and the storage temperature. The purposes of this research were: to examine the respiration pattern of minimally processed broccoli during storage, to study the effect of storage temperature on the respiration rate, and to describe the relationship between respiration rate and temperature with an Arrhenius model. Broccoli from the farming organization “Agro Segar” was minimally processed and its respiration rate was then measured. A closed-system method was used to measure the O2 and CO2 concentrations. The minimally processed broccoli was stored at temperatures of 0°C, 5°C, 10°C and 15°C. A completely randomized design was used to analyse the respiration rate. The results show that broccoli is a climacteric vegetable, indicated by the increase in O2 consumption and CO2 production during the senescence phase. The respiration rate increases with storage temperature. The Arrhenius model describes the relationship between respiration rate and temperature with R2 = 0.947-0.953. The activation energy (Eai) and pre-exponential factor (Roi) from the Arrhenius model can be used to predict the respiration rate of minimally processed broccoli at any storage temperature.
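
    The Arrhenius step described above amounts to a linear fit of ln R against 1/T. The Python/NumPy sketch below shows that fit for made-up respiration rates at the four storage temperatures used in the study; the values in R_resp, and hence the fitted Ea and R0, are purely illustrative and are not taken from the paper.

```python
import numpy as np

# Hypothetical respiration rates (e.g. mL CO2 kg^-1 h^-1) at the four storage
# temperatures used in the study; these numbers are NOT the paper's data.
T_celsius = np.array([0.0, 5.0, 10.0, 15.0])
R_resp = np.array([8.0, 12.5, 19.0, 30.0])

R_gas = 8.314                      # J mol^-1 K^-1
inv_T = 1.0 / (T_celsius + 273.15)

# Arrhenius: R = R0 * exp(-Ea / (R_gas * T))  =>  ln R = ln R0 - (Ea / R_gas) * (1 / T)
slope, intercept = np.polyfit(inv_T, np.log(R_resp), 1)
Ea, R0 = -slope * R_gas, np.exp(intercept)
print(f"Ea = {Ea / 1000:.1f} kJ/mol, R0 = {R0:.3g}")
```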

  4. Minimal Self-Models and the Free Energy Principle

    Directory of Open Access Journals (Sweden)

    Jakub eLimanowski

    2013-09-01

    The term "minimal phenomenal selfhood" describes the basic, pre-reflective experience of being a self (Blanke & Metzinger, 2009). Theoretical accounts of the minimal self have long recognized the importance and the ambivalence of the body as both part of the physical world, and the enabling condition for being in this world (Gallagher, 2005; Grafton, 2009). A recent account of minimal phenomenal selfhood (MPS, Metzinger, 2004a) centers on the consideration that minimal selfhood emerges as the result of basic self-modeling mechanisms, thereby being founded on pre-reflective bodily processes. The free energy principle (FEP, Friston, 2010) is a novel unified theory of cortical function that builds upon the imperative that self-organizing systems entail hierarchical generative models of the causes of their sensory input, which are optimized by minimizing free energy as an approximation of the log-likelihood of the model. The implementation of the FEP via predictive coding mechanisms and in particular the active inference principle emphasizes the role of embodiment for predictive self-modeling, which has been appreciated in recent publications. In this review, we provide an overview of these conceptions and illustrate thereby the potential power of the FEP in explaining the mechanisms underlying minimal selfhood and its key constituents, multisensory integration, interoception, agency, perspective, and the experience of mineness. We conclude that the conceptualization of MPS can be well mapped onto a hierarchical generative model furnished by the free energy principle and may constitute the basis for higher-level, cognitive forms of self-referral, as well as the understanding of other minds.

  5. An ant colony algorithm for the permutation flowshop with sequence-dependent setup times and makespan minimization

    Directory of Open Access Journals (Sweden)

    Eduardo Salazar Hornig

    2011-08-01

    This paper studies the permutation flow shop scheduling problem with sequence-dependent setup times and makespan minimization. An ant colony optimization (ACO) algorithm is presented which maps the original problem onto an asymmetric traveling salesman problem (TSP) structure; it is applied to problems proposed in the literature and compared with an adaptation of the NEH (Nawaz-Enscore-Ham) heuristic. Subsequently, a neighborhood search is applied to the solutions obtained by both the ACO algorithm and the NEH heuristic.
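
    Whatever metaheuristic is used on top, the evaluation step is the same: computing the makespan of a candidate permutation when setup times depend on the preceding job on each machine. The Python sketch below assumes anticipatory setups (a machine may be set up before the job arrives) and reuses s[m][j][j] as the initial setup of the first job; both are modelling assumptions of this illustration, not necessarily those of the paper.

```python
def makespan_sdst(perm, p, s):
    """Makespan of a permutation flow shop with sequence-dependent setup times.

    p[j][m]    : processing time of job j on machine m
    s[m][a][b] : setup time on machine m when job b follows job a
                 (s[m][j][j] is reused as the initial setup of the first job)
    """
    n_machines = len(p[perm[0]])
    C = [0.0] * n_machines          # completion time of the last scheduled job per machine
    prev = None
    for j in perm:
        for m in range(n_machines):
            setup = s[m][prev][j] if prev is not None else s[m][j][j]
            # anticipatory setup: may start as soon as the machine is free
            start = max(C[m] + setup, C[m - 1] if m > 0 else 0.0)
            C[m] = start + p[j][m]
        prev = j
    return C[-1]

p = [[3, 2], [2, 4], [4, 1]]                          # 3 jobs, 2 machines
s = [[[1] * 3 for _ in range(3)] for _ in range(2)]   # unit setups everywhere
print(makespan_sdst([2, 0, 1], p, s))                 # 16.0
```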

  6. Charged and neutral minimal supersymmetric standard model Higgs ...

    Indian Academy of Sciences (India)

    Coniavitis, E.; Ferrari, A. (Department of Nuclear and Particle Physics, Uppsala University, 75121 Uppsala, Sweden); pp. 759-763.

  7. A Minimal Cognitive Model for Translating and Post-editing

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Carl, Michael

    2017-01-01

    This study investigates the coordination of reading (input) and writing (output) activities in from-scratch translation and post-editing. We segment logged eye movements and keylogging data into minimal units of reading and writing activity and model the process of post-editing and from-scratch t...

  8. Lorentz Invariant Spectrum of Minimal Chiral Schwinger Model

    Science.gov (United States)

    Kim, Yong-Wan; Kim, Seung-Kook; Kim, Won-Tae; Park, Young-Jai; Kim, Kee Yong; Kim, Yongduk

    We study the Lorentz transformation of the minimal chiral Schwinger model in terms of the alternative action. We automatically obtain a chiral constraint, which is equivalent to the frame constraint introduced by McCabe, in order to solve the frame problem in phase space. As a result we obtain the Lorentz invariant spectrum in any moving frame by choosing a frame parameter.

  9. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    Science.gov (United States)

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.

  10. Formulation of a cyclic permutation model with multinomial objects

    Directory of Open Access Journals (Sweden)

    Sukma Adi Perdana

    2016-10-01

    This study aims to construct a mathematical model for counting the number of arrangements of objects in a cyclic permutation with multinomial objects. The model constructed is limited to cyclic permutations with multinomial objects in which at least one object type has a single member. The modelling is based on the mathematical structure of cyclic and multinomial permutations. A cyclic permutation model with multinomial objects has been formulated. The model was proved by validating its structure and by validating its results, comparing the model's counts with direct enumeration. A theorem on cyclic permutations with multinomial objects has also been established. Keywords: modelling, cyclic permutation, multinomial permutation
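
    The record does not render the formula itself, so the sketch below simply implements the standard closed form that applies under the stated restriction (at least one object type occurring exactly once): anchoring the unique object removes the rotational symmetry, leaving (n-1)!/(n1!·...·nk!) circular arrangements. This is offered as an assumption consistent with the abstract, not as the paper's own derivation.

```python
from math import factorial, prod

def cyclic_multinomial(counts):
    """Circular arrangements of a multiset with at least one singleton type:
    anchoring the unique object leaves (n-1)! / (n1! * ... * nk!) arrangements."""
    assert any(c == 1 for c in counts), "closed form assumes a singleton type"
    n = sum(counts)
    return factorial(n - 1) // prod(factorial(c) for c in counts)

print(cyclic_multinomial([1, 2, 2]))   # one A, two B, two C -> 6 circular arrangements
```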

  11. Permutation parity machines for neural synchronization

    International Nuclear Information System (INIS)

    Reyes, O M; Kopitzke, I; Zimmermann, K-H

    2009-01-01

    Synchronization of neural networks has been studied in recent years as an alternative to cryptographic applications such as the realization of symmetric key exchange protocols. This paper presents a first view of the so-called permutation parity machine, an artificial neural network proposed as a binary variant of the tree parity machine. The dynamics of the synchronization process by mutual learning between permutation parity machines is analytically studied and the results are compared with those of tree parity machines. It will turn out that for neural synchronization, permutation parity machines form a viable alternative to tree parity machines

  12. The electroweak phase transition in minimal supergravity models

    CERN Document Server

    Nanopoulos, Dimitri V

    1994-01-01

    We have explored the electroweak phase transition in minimal supergravity models by extending previous analysis of the one-loop Higgs potential to include finite temperature effects. Minimal supergravity is characterized by two Higgs doublets at the electroweak scale, gauge coupling unification, and universal soft-SUSY breaking at the unification scale. We have searched for the allowed parameter space that avoids washout of baryon number via unsuppressed anomalous electroweak sphaleron processes after the phase transition. This requirement imposes strong constraints on the Higgs sector. With respect to weak scale baryogenesis, we find that the generic MSSM is not phenomenologically acceptable, and show that the additional experimental and consistency constraints of minimal supergravity restrict the mass of the lightest CP-even Higgs even further to m_h ≲ 32 GeV (at one loop), also in conflict with experiment. Thus, if supersymmetry is to allow for baryogenesis via any other mechanism above the weak...

  13. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    Science.gov (United States)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
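
    As a concrete reading of the objective and of the WSPT rule, the Python sketch below evaluates the total weighted quadratic completion time of a permutation in a flow shop and builds a weighted-shortest-processing-time order from each job's total processing time over all machines divided by its weight. The consistency condition (CC) and the probabilistic analysis of the paper are not reproduced; the ratio and tie-breaking choices are assumptions of this illustration.

```python
def total_weighted_quadratic_completion(perm, p, w):
    """Sum of w_j * C_j^2 in a permutation flow shop (p[j][m]: time of job j on machine m)."""
    n_machines = len(p[perm[0]])
    C = [0.0] * n_machines
    total = 0.0
    for j in perm:
        for m in range(n_machines):
            C[m] = max(C[m], C[m - 1] if m > 0 else 0.0) + p[j][m]
        total += w[j] * C[-1] ** 2
    return total

def wspt_order(p, w):
    """Weighted shortest processing time: sort by total processing time over weight."""
    return sorted(range(len(p)), key=lambda j: sum(p[j]) / w[j])

p = [[2, 3], [4, 1], [3, 3]]
w = [2.0, 1.0, 1.5]
print(wspt_order(p, w), total_weighted_quadratic_completion(wspt_order(p, w), p, w))  # [0, 2, 1] 246.0
```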

  14. Permutation parity machines for neural cryptography.

    Science.gov (United States)

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-06-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  15. Permutation parity machines for neural cryptography

    International Nuclear Information System (INIS)

    Reyes, Oscar Mauricio; Zimmermann, Karl-Heinz

    2010-01-01

    Recently, synchronization was proved for permutation parity machines, multilayer feed-forward neural networks proposed as a binary variant of the tree parity machines. This ability was already used in the case of tree parity machines to introduce a key-exchange protocol. In this paper, a protocol based on permutation parity machines is proposed and its performance against common attacks (simple, geometric, majority and genetic) is studied.

  16. Finite Cycle Gibbs Measures on Permutations of

    Science.gov (United States)

    Armendáriz, Inés; Ferrari, Pablo A.; Groisman, Pablo; Leonardi, Florencia

    2015-03-01

    We consider Gibbs distributions on the set of permutations of associated to the Hamiltonian , where is a permutation and is a strictly convex potential. Call finite-cycle those permutations composed by finite cycles only. We give conditions on ensuring that for large enough temperature there exists a unique infinite volume ergodic Gibbs measure concentrating mass on finite-cycle permutations; this measure is equal to the thermodynamic limit of the specifications with identity boundary conditions. We construct as the unique invariant measure of a Markov process on the set of finite-cycle permutations that can be seen as a loss-network, a continuous-time birth and death process of cycles interacting by exclusion, an approach proposed by Fernández, Ferrari and Garcia. Define as the shift permutation . In the Gaussian case , we show that for each , given by is an ergodic Gibbs measure equal to the thermodynamic limit of the specifications with boundary conditions. For a general potential , we prove the existence of Gibbs measures when is bigger than some -dependent value.

  17. ATLAS Z Excess in Minimal Supersymmetric Standard Model

    International Nuclear Information System (INIS)

    Lu, Xiaochuan; Terada, Takahiro

    2015-06-01

    Recently the ATLAS collaboration reported a 3 sigma excess in the search for the events containing a dilepton pair from a Z boson and large missing transverse energy. Although the excess is not sufficiently significant yet, it is quite tempting to explain this excess by a well-motivated model beyond the standard model. In this paper we study a possibility of the minimal supersymmetric standard model (MSSM) for this excess. Especially, we focus on the MSSM spectrum where the sfermions are heavier than the gauginos and Higgsinos. We show that the excess can be explained by the reasonable MSSM mass spectrum.

  18. Toda theories, W-algebras, and minimal models

    International Nuclear Information System (INIS)

    Mansfield, P.; Spence, B.

    1991-01-01

    We discuss the classical W-algebra symmetries of Toda field theories in terms of the pseudo-differential Lax operator associated with the Toda Lax pair. We then show how the W-algebra transformations can be understood as the non-abelian gauge transformations which preserve the form of the Lax pair. This provides a new understanding of the W-algebras, and we discuss their closure and co-cycle structure using this approach. The quantum Lax operator is investigated, and we show that this operator, which generates the quantum W-algebra currents, is conserved in the conformally extended Toda theories. The W-algebra minimal model primary fields are shown to arise naturally in these theories, leading to the conjecture that the conformally extended Toda theories provide a lagrangian formulation of the W-algebra minimal models. (orig.)

  19. On relevant boundary perturbations of unitary minimal models

    International Nuclear Information System (INIS)

    Recknagel, A.; Roggenkamp, D.; Schomerus, V.

    2000-01-01

    We consider unitary Virasoro minimal models on the disk with Cardy boundary conditions and discuss deformations by certain relevant boundary operators, analogous to tachyon condensation in string theory. Concentrating on the least relevant boundary field, we can perform a perturbative analysis of renormalization group fixed points. We find that the systems always flow towards stable fixed points which admit no further (non-trivial) relevant perturbations. The new conformal boundary conditions are in general given by superpositions of 'pure' Cardy boundary conditions

  20. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian, and alternating methods as special cases, where our rates apply.

  1. Phenomenological study of Z in the minimal B-L model at LHC

    Indian Academy of Sciences (India)

    K M Balasubramaniam

    2017-10-05

    The phenomenological study of the neutral heavy gauge boson (Z_B-L) of the minimal B-L model at the LHC ...

  2. A hybrid approach for minimizing makespan in permutation flowshop scheduling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Balasundaram, R.; Baskar, N.

    2017-01-01

    This work proposes a hybrid approach for solving traditional flowshop scheduling problems to reduce the makespan (total completion time). To solve these scheduling problems, a combination of Decision Tree (DT) and Scatter Search (SS) algorithms is used. Initially, the DT is used to generate a seed solution, which is then given as input to the SS to obtain optimal or near-optimal makespan solutions. The DT uses the entropy function to convert the given problem into a tree-structured format / set of rules. The SS provides an extensive investigation of the search space through diversification
  3. On radiative gauge symmetry breaking in the minimal supersymmetric model

    International Nuclear Information System (INIS)

    Gamberini, G.; Ridolfi, G.; Zwirner, F.

    1990-01-01

    We present a critical reappraisal of radiative gauge symmetry breaking in the minimal supersymmetric standard model. We show that a naive use of the renormalization group improved tree-level potential can lead to incorrect conclusions. We specify the conditions under which the above method gives reliable results, by performing a comparison with the results obtained from the full one-loop potential. We also point out how the stability constraint and the conditions for the absence of charge- and colour-breaking minima should be applied. Finally, we comment on the uncertainties affecting the model predictions for physical observables, in particular for the top quark mass. (orig.)

  4. Predictions for mt and MW in minimal supersymmetric models

    International Nuclear Information System (INIS)

    Buchmueller, O.; Ellis, J.R.; Flaecher, H.; Isidori, G.

    2009-12-01

    Using a frequentist analysis of experimental constraints within two versions of the minimal supersymmetric extension of the Standard Model, we derive the predictions for the top quark mass, m t , and the W boson mass, m W . We find that the supersymmetric predictions for both m t and m W , obtained by incorporating all the relevant experimental information and state-of-the-art theoretical predictions, are highly compatible with the experimental values with small remaining uncertainties, yielding an improvement compared to the case of the Standard Model. (orig.)

  5. The minimal linear σ model for the Goldstone Higgs

    International Nuclear Information System (INIS)

    Feruglio, F.; Gavela, M.B.; Kanshin, K.; Machado, P.A.N.; Rigolin, S.; Saa, S.

    2016-01-01

    In the context of the minimal SO(5) linear σ-model, a complete renormalizable Lagrangian - including gauge bosons and fermions - is considered, with the symmetry softly broken to SO(4). The scalar sector describes both the electroweak Higgs doublet and the singlet σ. Varying the σ mass would allow one to sweep from the regime of perturbative ultraviolet completion to the non-linear one assumed in models in which the Higgs particle is a low-energy remnant of some strong dynamics. We analyze the phenomenological implications and constraints from precision observables and LHC data. Furthermore, we derive the d≤6 effective Lagrangian in the limit of heavy exotic fermions.

  6. Newton's constant from a minimal length: additional models

    International Nuclear Information System (INIS)

    Sahlmann, Hanno

    2011-01-01

    We follow arguments of Verlinde (2010 arXiv:1001.0785 [hep-th]) and Klinkhamer (2010 arXiv:1006.2094 [hep-th]), and construct two models of the microscopic theory of a holographic screen that allow for the thermodynamical derivation of Newton's law, with Newton's constant expressed in terms of a minimal length scale l contained in the area spectrum of the microscopic theory. One of the models is loosely related to the quantum structure of surfaces and isolated horizons in loop quantum gravity. Our investigation shows that the conclusions reached by Klinkhamer regarding the new length scale l seem to be generic in all their qualitative aspects.

  7. Minimal extension of the standard model scalar sector

    International Nuclear Information System (INIS)

    O'Connell, Donal; Wise, Mark B.; Ramsey-Musolf, Michael J.

    2007-01-01

    The minimal extension of the scalar sector of the standard model contains an additional real scalar field with no gauge quantum numbers. Such a field does not couple to the quarks and leptons directly but rather through its mixing with the standard model Higgs field. We examine the phenomenology of this model focusing on the region of parameter space where the new scalar particle is significantly lighter than the usual Higgs scalar and has small mixing with it. In this region of parameter space most of the properties of the additional scalar particle are independent of the details of the scalar potential. Furthermore the properties of the scalar that is mostly the standard model Higgs can be drastically modified since its dominant branching ratio may be to a pair of the new lighter scalars

  8. Viability of minimal left–right models with discrete symmetries

    Directory of Open Access Journals (Sweden)

    Wouter Dekens

    2014-12-01

    We provide a systematic study of minimal left–right models that are invariant under P, C, and/or CP transformations. Due to the high amount of symmetry such models are quite predictive in the amount and pattern of CP violation they can produce or accommodate at lower energies. Using current experimental constraints some of the models can already be excluded. For this purpose we provide an overview of the experimental constraints on the different left–right symmetric models, considering bounds from colliders, meson-mixing and low-energy observables, such as beta decay and electric dipole moments. The features of the various Yukawa and Higgs sectors are discussed in detail. In particular, we give the Higgs potentials for each case, discuss the possible vacua and investigate the amount of fine-tuning present in these potentials. It turns out that all left–right models with P, C, and/or CP symmetry have a high degree of fine-tuning, unless supplemented with mechanisms to suppress certain parameters. The models that are symmetric under both P and C are not in accordance with present observations, whereas the models with either P, C, or CP symmetry cannot be excluded by data yet. To further constrain and discriminate between the models measurements of B-meson observables at LHCb and B-factories will be especially important, while measurements of the EDMs of light nuclei in particular could provide complementary tests of the LRMs.

  9. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...

  10. Phenomenology of non-minimal supersymmetric models at linear colliders

    International Nuclear Information System (INIS)

    Porto, Stefano

    2015-06-01

    The focus of this thesis is on the phenomenology of several non-minimal supersymmetric models in the context of future linear colliders (LCs). Extensions of the minimal supersymmetric Standard Model (MSSM) may accommodate the observed Higgs boson mass at about 125 GeV in a more natural way than the MSSM, with a richer phenomenology. We consider both F-term extensions of the MSSM, as for instance the non-minimal supersymmetric Standard Model (NMSSM), as well as D-term extensions arising at low energies from gauge-extended supersymmetric models. The NMSSM offers a solution to the μ-problem with an additional gauge singlet supermultiplet. The enlarged neutralino sector of the NMSSM can be accurately studied at a LC and used to distinguish the model from the MSSM. We show that the polarised beams of a LC can be exploited to reconstruct the neutralino and chargino sectors and eventually distinguish the NMSSM, even in challenging scenarios that resemble the MSSM. Non-decoupling D-term extensions of the MSSM can raise the tree-level Higgs mass with respect to the MSSM. This is done through additional contributions to the Higgs quartic potential, effectively generated by an extended gauge group. We study how this can happen and we show how these additional non-decoupling D-terms affect the SM-like Higgs boson couplings to fermions and gauge bosons. We estimate how the deviations from the SM couplings can be spotted at the Large Hadron Collider (LHC) and at the International Linear Collider (ILC), showing how the ILC would be suitable for the model identification. Since our results prove that a linear collider is a fundamental machine for studying supersymmetry phenomenology at a high level of precision, we argue that a thorough comprehension of the physics at the interaction point (IP) of a LC is also needed. Therefore, we finally consider the possibility of observing intense electromagnetic field effects and nonlinear quantum electrodynamics

  11. On non-permutation solutions to some two machine flow shop scheduling problems

    NARCIS (Netherlands)

    V. Strusevich (Vitaly); P.J. Zwaneveld (Peter)

    1994-01-01

    textabstractIn this paper, we study two versions of the two machine flow shop scheduling problem, where schedule length is to be minimized. First, we consider the two machine flow shop with setup, processing, and removal times separated. It is shown that an optimal solution need not be a permutation

  12. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    Science.gov (United States)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-C4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with the C1-C2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-C4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate-controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938
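
    The constraining step can be illustrated with a small least-squares sketch in the spirit of MUM-PCE (this is not the authors' implementation): normalized rate-parameter factors x are adjusted so that surrogate flame-speed predictions match measured targets within their uncertainties, while a quadratic prior keeps the factors near their nominal values. The linear surrogate, targets and uncertainties below are invented for illustration.

        import numpy as np

        # Toy stand-in for the uncertainty-minimization step: linear surrogate
        # S_j(x) = S0_j + a_j . x, with x the normalized rate-parameter factors.
        rng = np.random.default_rng(0)
        n_params, n_targets = 5, 8
        S0 = rng.uniform(30.0, 60.0, n_targets)          # nominal flame speeds (cm/s), illustrative
        A = rng.normal(0.0, 2.0, (n_targets, n_params))  # surrogate sensitivities, illustrative
        y = S0 + rng.normal(0.0, 1.0, n_targets)         # "measured" flame speeds
        sigma = np.full(n_targets, 1.5)                  # measurement uncertainties

        # Minimize sum_j ((S_j(x) - y_j)/sigma_j)^2 + |x|^2; the quadratic prior plays
        # the role of the nominal rate-constant uncertainty bounds.
        W = np.diag(1.0 / sigma**2)
        x_opt = np.linalg.solve(A.T @ W @ A + np.eye(n_params), A.T @ W @ (y - S0))
        cov_post = np.linalg.inv(A.T @ W @ A + np.eye(n_params))   # posterior covariance of the factors

        print("optimized factors:", np.round(x_opt, 3))
        print("posterior std of factors:", np.round(np.sqrt(np.diag(cov_post)), 3))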

  13. A novel minimal invasive mouse model of extracorporeal circulation.

    Science.gov (United States)

    Luo, Shuhua; Tang, Menglin; Du, Lei; Gong, Lina; Xu, Jin; Chen, Youwen; Wang, Yabo; Lin, Ke; An, Qi

    2015-01-01

    Extracorporeal circulation (ECC) is necessary for conventional cardiac surgery and life support, but it often triggers systemic inflammation that can significantly damage tissue. Studies of ECC have been limited to large animals because of the complexity of the surgical procedures involved, which has hampered detailed understanding of ECC-induced injury. Here we describe a minimally invasive mouse model of ECC that may allow more extensive mechanistic studies. The right carotid artery and external jugular vein of anesthetized adult male C57BL/6 mice were cannulated to allow blood flow through a 1/32-inch external tube. All animals (n = 20) survived 30 min ECC and subsequent 60 min observation. Blood analysis after ECC showed significant increases in levels of tumor necrosis factor α, interleukin-6, and neutrophil elastase in plasma, lung, and renal tissues, as well as increases in plasma creatinine and cystatin C and decreases in the oxygenation index. Histopathology showed that ECC induced the expected lung inflammation, which included alveolar congestion, hemorrhage, neutrophil infiltration, and alveolar wall thickening; in renal tissue, ECC induced intracytoplasmic vacuolization, acute tubular necrosis, and epithelial swelling. Our results suggest that this novel, minimally invasive mouse model can recapitulate many of the clinical features of ECC-induced systemic inflammatory response and organ injury.

  14. A Novel Minimal Invasive Mouse Model of Extracorporeal Circulation

    Directory of Open Access Journals (Sweden)

    Shuhua Luo

    2015-01-01

    Full Text Available Extracorporeal circulation (ECC) is necessary for conventional cardiac surgery and life support, but it often triggers systemic inflammation that can significantly damage tissue. Studies of ECC have been limited to large animals because of the complexity of the surgical procedures involved, which has hampered detailed understanding of ECC-induced injury. Here we describe a minimally invasive mouse model of ECC that may allow more extensive mechanistic studies. The right carotid artery and external jugular vein of anesthetized adult male C57BL/6 mice were cannulated to allow blood flow through a 1/32-inch external tube. All animals (n=20) survived 30 min ECC and subsequent 60 min observation. Blood analysis after ECC showed significant increases in levels of tumor necrosis factor α, interleukin-6, and neutrophil elastase in plasma, lung, and renal tissues, as well as increases in plasma creatinine and cystatin C and decreases in the oxygenation index. Histopathology showed that ECC induced the expected lung inflammation, which included alveolar congestion, hemorrhage, neutrophil infiltration, and alveolar wall thickening; in renal tissue, ECC induced intracytoplasmic vacuolization, acute tubular necrosis, and epithelial swelling. Our results suggest that this novel, minimally invasive mouse model can recapitulate many of the clinical features of ECC-induced systemic inflammatory response and organ injury.

  15. A minimal model for two-component dark matter

    International Nuclear Information System (INIS)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.

    2014-01-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  16. Sorting signed permutations by short operations.

    Science.gov (United States)

    Galvão, Gustavo Rodrigues; Lee, Orlando; Dias, Zanoni

    2015-01-01

    During evolution, global mutations may alter the order and the orientation of the genes in a genome. Such mutations are referred to as rearrangement events, or simply operations. In unichromosomal genomes, the most common operations are reversals, which are responsible for reversing the order and orientation of a sequence of genes, and transpositions, which are responsible for switching the location of two contiguous portions of a genome. The problem of computing the minimum sequence of operations that transforms one genome into another - which is equivalent to the problem of sorting a permutation into the identity permutation - is a well-studied problem that finds application in comparative genomics. There are a number of works concerning this problem in the literature, but they generally do not take into account the length of the operations (i.e. the number of genes affected by the operations). Since it has been observed that short operations are prevalent in the evolution of some species, algorithms that efficiently solve this problem in the special case of short operations are of interest. In this paper, we investigate the problem of sorting a signed permutation by short operations. More precisely, we study four flavors of this problem: (i) the problem of sorting a signed permutation by reversals of length at most 2; (ii) the problem of sorting a signed permutation by reversals of length at most 3; (iii) the problem of sorting a signed permutation by reversals and transpositions of length at most 2; and (iv) the problem of sorting a signed permutation by reversals and transpositions of length at most 3. We present polynomial-time solutions for problems (i) and (iii), a 5-approximation for problem (ii), and a 3-approximation for problem (iv). Moreover, we show that the expected approximation ratio of the 5-approximation algorithm is not greater than 3 for random signed permutations with more than 12 elements. Finally, we present experimental results that show
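
    For small permutations, the minimum number of short operations can be found directly by breadth-first search, which makes the problem statement concrete. The sketch below handles flavor (i), reversals of length at most 2 on a signed permutation (a length-1 reversal flips one sign, a length-2 reversal swaps two adjacent elements and flips both signs); it is an illustrative brute-force baseline, not the polynomial-time algorithm of the paper.

        from collections import deque

        def neighbors(perm):
            """Apply every reversal of length <= 2 to a signed permutation (tuple of non-zero ints)."""
            n = len(perm)
            for i in range(n):                     # length-1 reversal: flip one sign
                yield perm[:i] + (-perm[i],) + perm[i + 1:]
            for i in range(n - 1):                 # length-2 reversal: swap neighbours, flip both signs
                yield perm[:i] + (-perm[i + 1], -perm[i]) + perm[i + 2:]

        def min_short_reversals(perm):
            """Minimum number of reversals of length <= 2 sorting `perm` into (+1, ..., +n).
            Breadth-first search over the state space; feasible only for small n."""
            target = tuple(range(1, len(perm) + 1))
            seen, queue = {perm}, deque([(perm, 0)])
            while queue:
                state, d = queue.popleft()
                if state == target:
                    return d
                for nxt in neighbors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, d + 1))

        print(min_short_reversals((-2, 1, -3)))    # minimum number of short reversals for this example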

  17. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    Science.gov (United States)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

    The optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997, when noise is not taken into account. Particularly, these quantum gates were chosen to perform the permutation algorithm in hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the problem of the permutation parity faster than a classical algorithm, without the necessity of entanglement between particles. The only requirement for achieving the speedup is the use of a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by considering an average over the optimal field centered at different values of the reference detuning, which follows a Gaussian distribution. When the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme can also be used for the practical realization of different quantum algorithms in DQDs.

  18. 1-Colored Archetypal Permutations and Strings of Degree n

    Directory of Open Access Journals (Sweden)

    Gheorghe Eduard Tara

    2012-10-01

    Full Text Available New notions related to permutations are introduced here. We present the string of a 1-colored permutation as a closed planar curve, the fundamental 1-colored permutation as an equivalence class related to the equivalence in strings of the 1-colored permutations. We give formulas for the number of the 1-colored archetypal permutations of degree n. We establish an algorithm to identify the 1- colored archetypal permutations of degree n and we present the atlas of the 1-colored archetypal strings of degree n, n ≤ 7, based on this algorithm.

  19. Neutron electric dipole moment in the minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Inui, T.; Mimura, Y.; Sakai, N.; Sasaki, T.

    1995-01-01

    The neutron electric dipole moment (EDM) due to the single quark EDM and to the transition EDM is calculated in the minimal supersymmetric standard model. Assuming that the Cabibbo-Kobayashi-Maskawa matrix at the grand unification scale is the only source of CP violation, complex phases are induced in the parameters of soft supersymmetry breaking at low energies. The chargino one-loop diagram is found to give the dominant contribution, of the order of 10⁻²⁷ ∼ 10⁻²⁹ e·cm for the quark EDM, assuming the light chargino mass and the universal scalar mass to be 50 GeV and 100 GeV, respectively. Therefore the neutron EDM in this class of model is difficult to measure experimentally. The gluino one-loop diagram also contributes due to the flavor changing gluino coupling. The transition EDM is found to give dominant contributions for certain parameter regions. (orig.)

  20. Permutation Tests for Stochastic Ordering and ANOVA

    CERN Document Server

    Basso, Dario; Salmaso, Luigi; Solari, Aldo

    2009-01-01

    Permutation testing for multivariate stochastic ordering and ANOVA designs is a fundamental issue in many scientific fields such as medicine, biology, pharmaceutical studies, engineering, economics, psychology, and social sciences. This book presents advanced methods and related R codes to perform complex multivariate analyses

  1. Necklaces, Periodic Points and Permutation Representations

    Indian Academy of Sciences (India)

    Necklaces, Periodic Points and Permutation Representations - Fermat's Little Theorem. Somnath Basu, Anindita Bose, Sumit Kumar Sinha, Pankaj Vishe. General Article, Volume 6, Issue 11, November 2001, pp 18-26 ...

  2. A minimal model of predator-swarm interactions.

    Science.gov (United States)

    Chen, Yuxin; Kolokolnikov, Theodore

    2014-05-06

    We propose a minimal model of predator-swarm interactions which captures many of the essential dynamics observed in nature. Different outcomes are observed depending on the predator strength. For a 'weak' predator, the swarm is able to escape the predator completely. As the strength is increased, the predator is able to catch up with the swarm as a whole, but the individual prey is able to escape by 'confusing' the predator: the prey forms a ring with the predator at the centre. For higher predator strength, complex chasing dynamics are observed which can become chaotic. For even higher strength, the predator is able to successfully capture the prey. Our model is simple enough to be amenable to a full mathematical analysis, which is used to predict the shape of the swarm as well as the resulting predator-prey dynamics as a function of model parameters. We show that, as the predator strength is increased, there is a transition (owing to a Hopf bifurcation) from confusion state to chasing dynamics, and we compute the threshold analytically. Our analysis indicates that the swarming behaviour is not helpful in avoiding the predator, suggesting that there are other reasons why the species may swarm. The complex shape of the swarm in our model during the chasing dynamics is similar to the shape of a flock of sheep avoiding a shepherd.
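
    A numerical sketch of a predator-swarm system of this kind is shown below: prey interact through short-range repulsion and long-range attraction and are repelled by the predator, while the predator is attracted to the prey. The functional forms, exponents and parameter values are illustrative assumptions, not the exact equations and constants analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        N, a, b, c, p = 100, 1.0, 0.2, 1.5, 3.0   # swarm size, attraction, predator fear, chase strength, chase exponent
        x = rng.normal(0.0, 1.0, (N, 2))          # prey positions
        z = np.array([5.0, 0.0])                  # predator position

        dt, steps = 0.01, 2000
        for _ in range(steps):
            dx = x[:, None, :] - x[None, :, :]                    # pairwise prey differences
            r2 = np.sum(dx**2, axis=-1) + np.eye(N)               # avoid division by zero on the diagonal
            prey_int = np.sum(dx / r2[..., None] - a * dx, axis=1) / N   # repulsion minus attraction
            dz_prey = x - z
            d2 = np.sum(dz_prey**2, axis=-1, keepdims=True) + 1e-6
            x = x + dt * (prey_int + b * dz_prey / d2)                    # predator repels prey
            z = z + dt * (c / N) * np.sum(dz_prey / d2**(p / 2), axis=0)  # predator chases the swarm

        print("swarm centre:", x.mean(axis=0).round(2), "predator:", z.round(2))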

  3. Neutral current in reduced minimal 3-3-1 model

    International Nuclear Information System (INIS)

    Vu Thi Ngoc Huyen; Hoang Ngoc Long; Tran Thanh Lam; Vo Quoc Phong

    2014-01-01

    This work is devoted to the gauge boson sector of the recently proposed model based on the SU(3)_C ⊗ SU(3)_L ⊗ U(1)_X group with minimal content of leptons and Higgs bosons. The limits on the masses of the bilepton gauge bosons and on the mixing angle among the neutral ones are deduced. Using the Fritzsch ansatz on quark mixing, we show that the third family of quarks should be different from the first two. We obtain a lower bound on the mass of the new heavy neutral gauge boson of 4.032 TeV. Using data on the branching decay rates of the Z boson, we can fix the limit on the Z and Z' mixing angle φ as -0.001 ≤ φ ≤ 0.0003. (author)

  4. The Friedberg-Lee symmetry and minimal seesaw model

    International Nuclear Information System (INIS)

    He Xiaogang; Liao Wei

    2009-01-01

    The Friedberg-Lee (FL) symmetry is generated by a transformation of a fermionic field q to q+ξz. This symmetry puts very restrictive constraints on allowed terms in a Lagrangian. Applying this symmetry to N fermionic fields, we find that the number of independent fields is reduced to N-1 if the fields have gauge interaction or the transformation is a local one. Using this property, we find that a seesaw model originally with three generations of left- and right-handed neutrinos, with the left-handed neutrinos unaffected but the right-handed neutrinos transformed under the local FL translation, is reduced to an effective theory of minimal seesaw which has only two right-handed neutrinos. The symmetry predicts that one of the light neutrino masses must be zero.

  5. Higgs decays to dark matter: Beyond the minimal model

    International Nuclear Information System (INIS)

    Pospelov, Maxim; Ritz, Adam

    2011-01-01

    We examine the interplay between Higgs mediation of dark-matter annihilation and scattering on one hand and the invisible Higgs decay width on the other, in a generic class of models utilizing the Higgs portal. We find that, while the invisible width of the Higgs to dark matter is now constrained for a minimal singlet scalar dark matter particle by experiments such as XENON100, this conclusion is not robust within more generic examples of Higgs mediation. We present a survey of simple dark matter scenarios with m_DM < m_h/2 and Higgs portal mediation, where direct-detection signatures are suppressed, while the Higgs width is still dominated by decays to dark matter.

  6. Designing a model to minimize inequities in hemodialysis facilities distribution

    Directory of Open Access Journals (Sweden)

    Teresa M. Salgado

    2011-11-01

    Full Text Available Portugal has an uneven, city-centered bias in the distribution of hemodialysis centers that has been found to contribute to health care inequities. A model has been developed with the aim of minimizing access inequity through the identification of the best possible localization of new hemodialysis facilities. The model was designed under the assumption that individuals from different geographic areas, ceteris paribus, present the same likelihood of requiring hemodialysis in the future. Distances to reach the closest hemodialysis facility were calculated for every municipality lacking one. Regions were scored by aggregating weights of the “individual burden”, defined as the burden for an individual living in a region lacking a hemodialysis center to reach one as often as needed, and the “population burden”, defined as the burden for the total population living in such a region. The model revealed that the average travelling distance for inhabitants in municipalities without a hemodialysis center is 32 km and that 145,551 inhabitants (1.5%) live more than 60 min away from a hemodialysis center, while 1,393,770 (13.8%) live 30-60 min away. Multivariate analysis showed that the current localization of hemodialysis facilities is associated with major urban areas. The model developed recommends 12 locations for establishing hemodialysis centers that would result in drastically reduced travel for 34 other municipalities, leaving only six (34,800 people) with over 60 min of travel. The application of this model should facilitate the planning of future hemodialysis services as it takes into consideration the potential impact of travel time for individuals in need of dialysis, as well as the logistic arrangements required to transport all patients with end-stage renal disease. The model is applicable in any country and health care planners can opt to weigh these two elements differently in the model according to their priorities.
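
    A toy version of the scoring idea is sketched below: municipalities lacking a centre are ranked by a weighted combination of an individual burden (travel distance to the nearest centre) and a population burden (that distance weighted by population). The municipalities, distances, populations and weights are invented; the published model aggregates considerably more detail.

        # Illustrative scoring of candidate sites; all data and weights are assumptions.
        municipalities = {                  # name: (km to nearest centre, population)
            "A": (75, 12000),
            "B": (40, 55000),
            "C": (90, 3000),
            "D": (20, 80000),
        }
        w_individual, w_population = 0.5, 0.5

        def score(dist_km, pop):
            individual = dist_km                        # burden on a single patient
            population = dist_km * pop / 1000.0         # burden on the community (person-km, scaled)
            return w_individual * individual + w_population * population

        ranked = sorted(municipalities.items(), key=lambda kv: score(*kv[1]), reverse=True)
        print("suggested priority for a new centre:", [name for name, _ in ranked])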

  7. An AdS3 dual for minimal model CFTs

    International Nuclear Information System (INIS)

    Gaberdiel, Matthias R.; Gopakumar, Rajesh

    2011-01-01

    We propose a duality between the 2d W_N minimal models in the large-N 't Hooft limit, and a family of higher spin theories on AdS_3. The 2d conformal field theories (CFTs) can be described as Wess-Zumino-Witten coset models, and include, for N=2, the usual Virasoro unitary series. The dual bulk theory contains, in addition to the massless higher spin fields, two complex scalars (of equal mass). The mass is directly related to the 't Hooft coupling constant of the dual CFT. We give convincing evidence that the spectra of the two theories match precisely for all values of the 't Hooft coupling. We also show that the renormalization group flows in the 2d CFT agree exactly with the usual AdS/CFT prediction of the gravity theory. Our proposal is in many ways analogous to the Klebanov-Polyakov conjecture for an AdS_4 dual for the singlet sector of large N vector models.

  8. Minimal Z' models: present bounds and early LHC reach

    International Nuclear Information System (INIS)

    Salvioni, Ennio; Zwirner, Fabio; Villadoro, Giovanni

    2009-01-01

    We consider 'minimal' Z' models, whose phenomenology is controlled by only three parameters beyond the Standard Model ones: the Z' mass and two effective coupling constants. They encompass many popular models motivated by grand unification, as well as many arising in other theoretical contexts. This parameterization also takes into account both mass and kinetic mixing effects, which we show to be sizable in some cases. After discussing the interplay between the bounds from electroweak precision tests and recent direct searches at the Tevatron, we extend our analysis to estimate the early LHC discovery potential. We consider a center-of-mass energy from 7 towards 10 TeV and an integrated luminosity from 50 to several hundred pb⁻¹, taking all existing bounds into account. We find that the LHC will start exploring virgin land in parameter space for M_Z' around 700 GeV, with lower masses still excluded by the Tevatron and higher masses still excluded by electroweak precision tests. Increasing the energy up to 10 TeV, the LHC will start probing a wider range of Z' masses and couplings, although several hundred pb⁻¹ will be needed to explore the regions of couplings favored by grand unification and to overcome the Tevatron bounds in the mass region around 250 GeV.

  9. A minimal model for multiple epidemics and immunity spreading.

    Directory of Open Access Journals (Sweden)

    Kim Sneppen

    Full Text Available Pathogens and parasites are ubiquitous in the living world, being limited only by the availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fates are coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. Here we introduce a minimal model for an unlimited number of unrelated pathogens whose interactions are simplified to mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for the emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The resulting immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.

  10. A minimal model for multiple epidemics and immunity spreading.

    Science.gov (United States)

    Sneppen, Kim; Trusina, Ala; Jensen, Mogens H; Bornholdt, Stefan

    2010-10-18

    Pathogens and parasites are ubiquitous in the living world, being limited only by the availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fates are coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. Here we introduce a minimal model for an unlimited number of unrelated pathogens whose interactions are simplified to mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for the emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The resulting immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.
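
    The mechanism can be sketched as a small agent-based simulation: hosts on a ring carry at most one disease at a time (mutual exclusion), recover into lasting immunity, and occasionally receive a brand-new disease by mutation. The update rules and parameter values below are illustrative simplifications of the model described above, not the authors' code.

        import random

        random.seed(0)
        N, steps, p_recover, p_new_disease = 200, 20000, 0.05, 0.002
        carrying = [None] * N               # disease id carried by each host (None = healthy)
        immune = [set() for _ in range(N)]  # diseases each host has recovered from
        next_id = 0

        for _ in range(steps):
            i = random.randrange(N)
            j = (i + random.choice((-1, 1))) % N       # neighbour on a ring
            d = carrying[i]
            if d is not None:
                if random.random() < p_recover:        # recovery grants lasting immunity
                    immune[i].add(d)
                    carrying[i] = None
                elif carrying[j] is None and d not in immune[j]:
                    carrying[j] = d                    # blocked by current infection or immunity
            if random.random() < p_new_disease:        # a brand-new disease enters at a random host
                k = random.randrange(N)
                if carrying[k] is None:
                    carrying[k] = next_id
                    next_id += 1

        print("active diseases:", len({d for d in carrying if d is not None}),
              "mean immunities per host:", sum(len(s) for s in immune) / N)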

  11. Defects and permutation branes in the Liouville field theory

    DEFF Research Database (Denmark)

    Sarkissian, Gor

    2009-01-01

    The defects and permutation branes for the Liouville field theory are considered. By exploiting the cluster condition, equations satisfied by the permutation brane and defect reflection amplitudes are obtained. It is shown that two types of solutions exist, discrete and continuous families.

  12. Complete permutation Gray code implemented by finite state machine

    Directory of Open Access Journals (Sweden)

    Li Peng

    2014-09-01

    Full Text Available An enumeration method for complete permutation arrays is proposed. The list of n! permutations, based on a Gray code defined over the finite symbol set Z(n) = {1, 2, …, n}, is implemented by a finite state machine, named n-RPGCF. An RPGCF can be used to search for permutation codes and, in some cases, provides improved lower bounds on the maximum cardinality of a permutation code.
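
    The classical Steinhaus-Johnson-Trotter algorithm is one concrete permutation Gray code, in which consecutive permutations differ by a single adjacent transposition; the sketch below generates it iteratively and is meant only as an illustration of the idea, not as the n-RPGCF finite state machine of the paper.

        def sjt_permutations(n):
            """Steinhaus-Johnson-Trotter: yield all n! permutations of 1..n so that
            consecutive permutations differ by one adjacent transposition (a Gray code)."""
            perm = list(range(1, n + 1))
            direction = [-1] * n                      # -1 = looking left, +1 = looking right
            yield tuple(perm)
            while True:
                # find the largest "mobile" element (its neighbour in its direction is smaller)
                mobile_idx = -1
                for i in range(n):
                    j = i + direction[i]
                    if 0 <= j < n and perm[j] < perm[i]:
                        if mobile_idx == -1 or perm[i] > perm[mobile_idx]:
                            mobile_idx = i
                if mobile_idx == -1:
                    return
                i, j = mobile_idx, mobile_idx + direction[mobile_idx]
                perm[i], perm[j] = perm[j], perm[i]
                direction[i], direction[j] = direction[j], direction[i]
                moved = perm[j]
                for k in range(n):                    # reverse direction of all larger elements
                    if perm[k] > moved:
                        direction[k] *= -1
                yield tuple(perm)

        print(list(sjt_permutations(3)))
        # -> [(1, 2, 3), (1, 3, 2), (3, 1, 2), (3, 2, 1), (2, 3, 1), (2, 1, 3)]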

  13. Atomistic minimal model for estimating profile of electrodeposited nanopatterns

    Science.gov (United States)

    Asgharpour Hassankiadeh, Somayeh; Sadeghi, Ali

    2018-06-01

    We develop a computationally efficient and methodologically simple approach to realize molecular dynamics simulations of electrodeposition. Our minimal model takes into account the nontrivial electric field due to a sharp electrode tip to perform simulations of the controllable coating of a thin layer on a surface with atomic precision. On the atomic scale, a highly site-selective electrodeposition of ions and charged particles by means of the sharp tip of a scanning probe microscope is possible. A better understanding of the microscopic process, obtained mainly from atomistic simulations, helps us to enhance the quality of this nanopatterning technique and to make it applicable in the fabrication of nanowires and nanocontacts. In the limit of screened inter-particle interactions, it is feasible to run very fast simulations of the electrodeposition process within the framework of the proposed model and thus to investigate how the shape of the overlayer depends on the tip-sample geometry and dielectric properties, electrolyte viscosity, etc. Our calculation results reveal that the sharpness of the profile of a nano-scale deposited overlayer is dictated by the component of the electric field normal to the sample surface underneath the tip.

  14. Permutation on hybrid natural inflation

    Science.gov (United States)

    Carone, Christopher D.; Erlich, Joshua; Ramos, Raymundo; Sher, Marc

    2014-09-01

    We analyze a model of hybrid natural inflation based on the smallest non-Abelian discrete group S3. Leading invariant terms in the scalar potential have an accidental global symmetry that is spontaneously broken, providing a pseudo-Goldstone boson that is identified as the inflaton. The S3 symmetry restricts both the form of the inflaton potential and the couplings of the inflaton field to the waterfall fields responsible for the end of inflation. We identify viable points in the model parameter space. Although the power in tensor modes is small in most of the parameter space of the model, we identify parameter choices that yield potentially observable values of r without super-Planckian initial values of the inflaton field.

  15. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  16. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile
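
    The idea of varying an additional diffusivity until the predicted profile matches the measured one can be illustrated with a one-dimensional toy (this is not the FACETS::Core/DAKOTA workflow): a steady diffusion profile with a constant source is generated with a "true" diffusivity, and an additive correction to the model diffusivity is optimized to reproduce it. All numbers are illustrative.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # 1-D steady diffusion: n(x) = S (1 - x^2) / (2 D), constant source S,
        # zero flux at x = 0 and n(1) = 0. The "transport model" supplies D_model;
        # an additional diffusivity D_add is varied to match the "experiment".
        x = np.linspace(0.0, 1.0, 50)
        S, D_true, D_model = 1.0, 0.8, 0.5

        def profile(D_total):
            return S * (1.0 - x**2) / (2.0 * D_total)

        n_exp = profile(D_true)                        # synthetic "measurement"

        def misfit(D_add):
            return np.sum((profile(D_model + D_add) - n_exp)**2)

        res = minimize_scalar(misfit, bounds=(0.0, 2.0), method="bounded")
        print("additional diffusivity needed:", round(res.x, 3))   # ~ D_true - D_model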

  17. Stable-label intravenous glucose tolerance test minimal model

    International Nuclear Information System (INIS)

    Avogaro, A.; Bristow, J.D.; Bier, D.M.; Cobelli, C.; Toffolo, G.

    1989-01-01

    The minimal model approach to estimating insulin sensitivity (S_I) and glucose effectiveness in promoting its own disposition at basal insulin (S_G) is a powerful tool that has been underutilized given its potential applications. In part, this has been due to its inability to separate insulin and glucose effects on peripheral uptake from their effects on hepatic glucose inflow. Prior enhancements, with radiotracer labeling of the dosage, permit this separation but are unsuitable for use in pregnancy and childhood. In this study, we labeled the intravenous glucose tolerance test (IVGTT) dosage with [6,6-²H₂]glucose, [2-²H]glucose, or both stable isotopically labeled glucose tracers and modeled glucose kinetics in six postabsorptive, nonobese adults. As previously found with the radiotracer model, the tracer-estimated S_I* derived from the stable-label IVGTT was greater than S_I in each case except one, and the tracer-estimated S_G* was less than S_G in each instance. More importantly, however, the stable-label IVGTT estimated each parameter with an average precision of ±5% (range 3-9%) compared to average precisions of ±74% (range 7-309%) for S_G and ±22% (range 3-72%) for S_I. In addition, because of the different metabolic fates of the two deuterated tracers, there were minor differences in basal insulin-derived measures of glucose effectiveness, but these differences were negligible for parameters describing insulin-stimulated processes. In conclusion, the stable-label IVGTT is a simple, highly precise means of assessing insulin sensitivity and glucose effectiveness at basal insulin that can be used to measure these parameters in individuals of all ages, including children and pregnant women
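
    For orientation, the indices discussed above are defined by the classical (unlabelled) glucose minimal model, a pair of ODEs in glucose G and remote insulin action X with S_G = p1 and S_I = p3/p2; the labelled variant adds analogous equations for the tracer. The sketch below integrates the unlabelled model only; the parameter values and the insulin curve are invented for illustration.

        import math

        # dG/dt = -(p1 + X) * G + p1 * Gb,   dX/dt = -p2 * X + p3 * (I(t) - Ib)
        p1, p2, p3 = 0.03, 0.02, 1.2e-5      # 1/min, 1/min, (1/min^2) per (uU/ml); illustrative
        Gb, Ib, G0 = 90.0, 10.0, 280.0       # basal glucose (mg/dl), basal insulin, post-bolus glucose

        def insulin(t):                      # crude post-IVGTT insulin profile (assumption)
            return Ib + 80.0 * math.exp(-t / 30.0)

        G, X, dt = G0, 0.0, 0.1
        for step in range(int(180.0 / dt)):  # forward Euler over 180 minutes
            t = step * dt
            dG = -(p1 + X) * G + p1 * Gb
            dX = -p2 * X + p3 * (insulin(t) - Ib)
            G, X = G + dt * dG, X + dt * dX

        print("S_G =", p1, " S_I =", p3 / p2, " G(180 min) = %.1f mg/dl" % G)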

  18. Minimal models on Riemann surfaces: The partition functions

    International Nuclear Information System (INIS)

    Foda, O.

    1990-01-01

    The Coulomb gas representation of the A_n series of c = 1 - 6/[m(m+1)], m ≥ 3, minimal models is extended to compact Riemann surfaces of genus g > 1. An integral representation of the partition functions, for any m and g, is obtained as the difference of two gaussian correlation functions of a background charge, (background charge on sphere) × (1-g), and screening charges integrated over the surface. The coupling constant × (compactification radius)² of the gaussian expressions are, as on the torus, m(m+1) and m/(m+1). The partition functions obtained are modular invariant, have the correct conformal anomaly and - restricting the propagation of states to a single handle - one can verify explicitly the decoupling of the null states. On the other hand, they are given in terms of coupled surface integrals, and it remains to show how they degenerate consistently to those on lower-genus surfaces. In this work, this is clear only at the lattice level, where no screening charges appear. (orig.)

  19. Minimal models on Riemann surfaces: The partition functions

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O. (Katholieke Univ. Nijmegen (Netherlands). Inst. voor Theoretische Fysica)

    1990-06-04

    The Coulomb gas representation of the A_n series of c = 1 - 6/(m(m+1)), m ≥ 3, minimal models is extended to compact Riemann surfaces of genus g > 1. An integral representation of the partition functions, for any m and g, is obtained as the difference of two gaussian correlation functions of a background charge, (background charge on sphere) × (1-g), and screening charges integrated over the surface. The coupling constant × (compactification radius)² of the gaussian expressions are, as on the torus, m(m+1) and m/(m+1). The partition functions obtained are modular invariant, have the correct conformal anomaly and - restricting the propagation of states to a single handle - one can verify explicitly the decoupling of the null states. On the other hand, they are given in terms of coupled surface integrals, and it remains to show how they degenerate consistently to those on lower-genus surfaces. In this work, this is clear only at the lattice level, where no screening charges appear. (orig.).

  20. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.

  1. Electroweak precision observables in the minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Heinemeyer, S.; Hollik, W.; Weiglein, G.

    2006-01-01

    The current status of electroweak precision observables in the Minimal Supersymmetric Standard Model (MSSM) is reviewed. We focus in particular on the W boson mass, M_W, the effective leptonic weak mixing angle, sin²θ_eff, the anomalous magnetic moment of the muon, (g-2)_μ, and the lightest CP-even MSSM Higgs boson mass, m_h. We summarize the current experimental situation and the status of the theoretical evaluations. An estimate of the current theoretical uncertainties from unknown higher-order corrections and from the experimental errors of the input parameters is given. We discuss future prospects for both the experimental accuracies and the precision of the theoretical predictions. Confronting the precision data with the theory predictions within the unconstrained MSSM and within specific SUSY-breaking scenarios, we analyse how well the data are described by the theory. The mSUGRA scenario with cosmological constraints yields a very good fit to the data, showing a clear preference for a relatively light mass scale of the SUSY particles. The constraints on the parameter space from the precision data are discussed, and it is shown that the prospective accuracy at the next generation of colliders will enhance the sensitivity of the precision tests very significantly.

  2. Permutation Entropy: New Ideas and Challenges

    Directory of Open Access Journals (Sweden)

    Karsten Keller

    2017-03-01

    Full Text Available Over recent years, some new variants of Permutation entropy have been introduced and applied to EEG analysis, including a conditional variant and variants using some additional metric information or being based on entropies that are different from the Shannon entropy. In some situations, it is not completely clear what kind of information the new measures and their algorithmic implementations provide. We discuss the new developments and illustrate them for EEG data.
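
    The basic Bandt-Pompe permutation entropy underlying these variants can be computed in a few lines: count the ordinal patterns of length `order` along the series and take the Shannon entropy of their relative frequencies. The sketch below is a plain reference implementation, not one of the new variants discussed in the paper.

        import math
        import random
        from itertools import permutations

        def permutation_entropy(series, order=3, delay=1, normalize=True):
            """Bandt-Pompe permutation entropy from ordinal patterns of length `order`."""
            counts = {p: 0 for p in permutations(range(order))}
            n = len(series) - (order - 1) * delay
            for i in range(n):
                window = [series[i + k * delay] for k in range(order)]
                counts[tuple(sorted(range(order), key=window.__getitem__))] += 1
            probs = [c / n for c in counts.values() if c > 0]
            h = -sum(p * math.log2(p) for p in probs)
            return h / math.log2(math.factorial(order)) if normalize else h

        random.seed(0)
        print(permutation_entropy([random.random() for _ in range(5000)]))    # near 1 for white noise
        print(permutation_entropy([math.sin(0.1 * i) for i in range(5000)]))  # much lower for a regular signal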

  3. Application of a minimal glacier model to Hansbreen, Svalbard

    Directory of Open Access Journals (Sweden)

    J. Oerlemans

    2011-01-01

    Full Text Available Hansbreen is a well studied tidewater glacier in the southwestern part of Svalbard, currently about 16 km long. Since the end of the 19th century it has been retreating over a distance of 2.7 km. In this paper the global dynamics of Hansbreen are studied with a minimal glacier model, in which the ice mechanics are strongly parameterised and a simple law for iceberg calving is used. The model is calibrated by reconstructing a climate history in such a way that observed and simulated glacier length match. In addition, the calving law is tuned to reproduce the observed mean calving flux for the period 2000–2008.

    Equilibrium states are studied for a wide range of values of the equilibrium line altitude. The dynamics of the glacier are strongly nonlinear. The height-mass balance feedback and the water depth-calving flux feedback give rise to cusp catastrophes in the system.

    For the present climatic conditions Hansbreen cannot survive. Depending on the imposed climate change scenario, in AD 2100 Hansbreen is predicted to have a length between 10 and 12 km. The corresponding decrease in ice volume (relative to the volume in AD 2000) is 45 to 65%.

    Finally the late-Holocene history of Hansbreen is considered. We quote evidence from dated peat samples that Hansbreen did not exist during the Holocene Climatic Optimum. We speculate that at the end of the mid-Holocene Climatic Optimum Hansbreen could advance because the glacier bed was at least 50 m higher than today, and because the tributary glaciers on the western side may have supplied a significant amount of mass to the main stream. The excavation of the overdeepening and the formation of the shoal at the glacier terminus probably took place during the Little Ice Age.
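
    To give a flavour of what a minimal glacier model looks like, the toy sketch below evolves only the glacier length: mean thickness scales with the square root of the length, the surface mass balance is linear in elevation, and calving removes mass in proportion to the water depth at the terminus times the ice thickness. The geometry, balance and calving coefficients are invented and are not those calibrated for Hansbreen.

        import math

        # Toy minimal-glacier-style length evolution (illustrative only).
        # Mean thickness H = alpha*sqrt(L); volume per unit width V = alpha*L**1.5.
        alpha, beta, c = 3.0, 0.005, 1.0     # thickness scale, balance gradient (1/yr), calving factor
        s, b0, E = 0.02, 300.0, 250.0        # bed slope, bed elevation at the head (m), equilibrium line (m)

        def dL_dt(L):
            H = alpha * math.sqrt(L)
            mean_bed = b0 - s * L / 2.0                 # linear bed profile
            balance = beta * (mean_bed + H - E) * L     # integrated surface balance (m^2/yr)
            water_depth = max(0.0, s * L - b0)          # water depth at the terminus
            calving = -c * water_depth * H              # calving flux (m^2/yr)
            return (balance + calving) / (1.5 * alpha * math.sqrt(L))   # dV/dL = 1.5*alpha*sqrt(L)

        L, dt = 2000.0, 0.1                             # initial length (m), time step (yr)
        for _ in range(20000):                          # 2000 model years, forward Euler
            L = max(100.0, L + dt * dL_dt(L))
        print("length after 2000 model years: %.1f km" % (L / 1000.0))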

  4. A complete classification of minimal non-PS-groups

    Indian Academy of Sciences (India)

    Abstract. Let G be a finite group. A subgroup H of G is called s-permutable in G if it permutes with every Sylow subgroup of G, and G is called a PS-group if all minimal subgroups and cyclic subgroups with order 4 of G are s-permutable in G. In this paper, we give a complete classification of finite groups which are not ...

  5. AGT, N-Burge partitions and WN minimal models

    International Nuclear Information System (INIS)

    Belavin, Vladimir; Foda, Omar; Santachiara, Raoul

    2015-01-01

    Let B_{N,n}^{p,p′,H} be a conformal block, with n consecutive channels χ_ι, ι = 1, ⋯, n, in the conformal field theory M_N^{p,p′} × M^H, where M_N^{p,p′} is a W_N minimal model, generated by chiral spin-2, ⋯, spin-N currents, and labeled by two co-prime integers p and p′, 1 …

  6. Higgs boson masses in a non-minimal supersymmetric model

    International Nuclear Information System (INIS)

    Tiesi, Alessandro

    2002-01-01

    A study of the neutral Higgs spectrum in a general Z_3-breaking Next-to-Minimal Supersymmetric Standard Model (NMSSM) is reported in several significant contexts. Particular attention has been devoted to the upper bound on the lightest Higgs boson. In the CP-conserving case we show that the extra terms involved in the general Z_3-breaking superpotential do not affect the upper bound, which remains unchanged: it is ∼ 136 GeV when tan β = 2.7. The Spontaneous CP Violation scenario in the Z_3-breaking NMSSM can occur at tree-level. When the phases of the fields are small the spectrum shows the lightest Higgs particle to be an almost-singlet CP-odd state. The second lightest particle, an almost-CP-even doublet state, still manifests the upper bound of the CP-conserving case. When the CP-violating phases are large the lightest particle is a doublet with no definite CP parity and its mass shows the usual upper bound at ∼ 136 GeV. The large number of parameters involved in the effective potential can be significantly reduced in the Infrared Quasi Fixed Point (IRQFP) regime resulting after solving the Renormalization Group (RG) equations assuming universality for the soft SUSY-breaking masses. In the Z_3-breaking NMSSM, unlike the Z_3-conserving NMSSM, it is possible to find a Higgs spectrum which is still compatible with both experiment and universality at the unification scale. Because in the IRQFP regime tan β ∼ 1.8 and the stop mixing parameter is reduced, the upper bound on the lightest Higgs boson turns out to be ∼ 121 GeV. This result is compatible with experimental data coming from LEPII and might be one of the next predictions to be tested at hadron collider experiments. (author)

  7. The application of the random regret minimization model to drivers’ choice of crash avoidance maneuvers

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    This study explores the plausibility of regret minimization as behavioral paradigm underlying the choice of crash avoidance maneuvers. Alternatively to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers ...

  8. The application of the random regret minimization model to drivers’ choice of crash avoidance maneuvers

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    This study explores the plausibility of regret minimization as behavioral paradigm underlying the choice of crash avoidance maneuvers. Alternatively to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers ...

  9. Search for Minimal Standard Model and Minimal Supersymmetric Model Higgs Bosons in e+ e- Collisions with the OPAL detector at LEP

    International Nuclear Information System (INIS)

    Ganel, Ofer

    1993-06-01

    When the LEP machine was turned on in August 1989, a new era had opened. For the first time, direct, model-independent searches for the Higgs boson could be carried out. The Minimal Standard Model Higgs boson is expected to be produced in e⁺e⁻ collisions via the H⁰Z⁰ production process. The Minimal Supersymmetric Model Higgs bosons are expected to be produced in the analogous e⁺e⁻ → h⁰Z⁰ process or in pairs via the process e⁺e⁻ → h⁰A⁰. In this thesis we describe the search for Higgs bosons within the framework of the Minimal Standard Model and the Minimal Supersymmetric Model, using the data accumulated by the OPAL detector at LEP in the 1989, 1990, 1991 and part of the 1992 running periods at and around the Z⁰ pole. A Minimal Supersymmetric Model Higgs boson generator is described, as well as its use in several different searches. As a result of this work, the Minimal Standard Model Higgs boson mass is bounded from below by 54.2 GeV/c² at 95% C.L. This is, at present, the highest such bound. A novel method of overcoming the m_τ and m_s dependence of Minimal Supersymmetric Model Higgs boson production and decay introduced by one-loop radiative corrections is used to obtain model-independent exclusions. The thesis also describes an algorithm for offline identification of calorimeter noise in the OPAL detector. (author)

  10. Permuting sparse rectangular matrices into block-diagonal form

    Energy Technology Data Exchange (ETDEWEB)

    Aykanat, Cevdet; Pinar, Ali; Catalyurek, Umit V.

    2002-12-09

    This work investigates the problem of permuting a sparse rectangular matrix into block diagonal form. Block diagonal form of a matrix grants an inherent parallelism for the solution of the deriving problem, as recently investigated in the context of mathematical programming, LU factorization and QR factorization. We propose graph and hypergraph models to represent the nonzero structure of a matrix, which reduce the permutation problem to those of graph partitioning by vertex separator and hypergraph partitioning, respectively. Besides proposing the models to represent sparse matrices and investigating related combinatorial problems, we provide a detailed survey of relevant literature to bridge the gap between different societies, investigate existing techniques for partitioning and propose new ones, and finally present a thorough empirical study of these techniques. Our experiments on a wide range of matrices, using state-of-the-art graph and hypergraph partitioning tools MeTiS and PaToH, revealed that the proposed methods yield very effective solutions both in terms of solution quality and run time.
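
    In the ideal case where the matrix decouples exactly (no border is needed), the block-diagonal permutation can be read off from the connected components of the bipartite row-column graph, as sketched below; the paper's harder, general problem is handled with vertex separators and hypergraph partitioning (MeTiS, PaToH). The matrix built here is synthetic.

        import numpy as np
        from scipy.sparse import bmat, coo_matrix, random as sparse_random
        from scipy.sparse.csgraph import connected_components

        # Synthetic matrix with three independent blocks, then scrambled.
        blocks = [sparse_random(4, 6, density=0.5, random_state=i) for i in range(3)]
        A = bmat([[blocks[0], None, None],
                  [None, blocks[1], None],
                  [None, None, blocks[2]]]).tocoo()
        rng = np.random.default_rng(0)
        rp, cp = rng.permutation(A.shape[0]), rng.permutation(A.shape[1])
        A = coo_matrix((A.data, (rp[A.row], cp[A.col])), shape=A.shape)

        # Bipartite graph: row vertices 0..m-1, column vertices m..m+n-1.
        m, n = A.shape
        graph = coo_matrix((np.ones(A.nnz), (A.row, A.col + m)), shape=(m + n, m + n))
        n_comp, labels = connected_components(graph, directed=False)

        # Group rows and columns by component to recover a block-diagonal ordering.
        row_perm = np.argsort(labels[:m], kind="stable")
        col_perm = np.argsort(labels[m:], kind="stable")
        A_block_diagonal = A.tocsr()[row_perm][:, col_perm]
        print("connected components (blocks plus isolated rows/columns):", n_comp)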

  11. MSSM (Minimal Supersymmetric Standard Model) Dark Matter Without Prejudice

    International Nuclear Information System (INIS)

    Gainer, James S.

    2009-01-01

    Recently we examined a large number of points in a 19-dimensional parameter subspace of the CP-conserving MSSM with Minimal Flavor Violation. We determined whether each of these points satisfied existing theoretical, experimental, and observational constraints. Here we discuss the properties of the parameter space points allowed by existing data that are relevant for dark matter searches.

  12. Magic informationally complete POVMs with permutations

    Science.gov (United States)

    Planat, Michel; Gedik, Zafer

    2017-09-01

    Eigenstates of permutation gates are either stabilizer states (for gates in the Pauli group) or magic states, thus allowing universal quantum computation (Planat, Rukhsan-Ul-Haq 2017 Adv. Math. Phys. 2017, 5287862 (doi:10.1155/2017/5287862)). We show in this paper that a subset of such magic states, when acting on the generalized Pauli group, define (asymmetric) informationally complete POVMs. Such informationally complete POVMs, investigated in dimensions 2-12, exhibit simple finite geometries in their projector products and, for dimensions 4, 8 and 9, relate to two-qubit, three-qubit and two-qutrit contextuality.

  13. Permutation 2-groups I: structure and splitness

    OpenAIRE

    Elgueta, Josep

    2013-01-01

    By a 2-group we mean a groupoid equipped with a weakened group structure. It is called split when it is equivalent to the semidirect product of a discrete 2-group and a one-object 2-group. By a permutation 2-group we mean the 2-group $\mathbb{S}ym(\mathcal{G})$ of self-equivalences of a groupoid $\mathcal{G}$ and natural isomorphisms between them, with the product given by composition of self-equivalences. These generalize the symmetric groups $\mathsf{S}_n$, $n\geq 1$, obtained when $\mathca...

  14. Permutation Entropy for Random Binary Sequences

    Directory of Open Access Journals (Sweden)

    Lingfeng Liu

    2015-12-01

    Full Text Available In this paper, we generalize the permutation entropy (PE measure to binary sequences, which is based on Shannon’s entropy, and theoretically analyze this measure for random binary sequences. We deduce the theoretical value of PE for random binary sequences, which can be used to measure the randomness of binary sequences. We also reveal the relationship between this PE measure with other randomness measures, such as Shannon’s entropy and Lempel–Ziv complexity. The results show that PE is consistent with these two measures. Furthermore, we use PE as one of the randomness measures to evaluate the randomness of chaotic binary sequences.

  15. Young module multiplicities and classifying the indecomposable Young permutation modules

    OpenAIRE

    Gill, Christopher C.

    2012-01-01

    We study the multiplicities of Young modules as direct summands of permutation modules on cosets of Young subgroups. Such multiplicities have become known as the p-Kostka numbers. We classify the indecomposable Young permutation modules, and, applying the Brauer construction for p-permutation modules, we give some new reductions for p-Kostka numbers. In particular we prove that p-Kostka numbers are preserved under multiplying partitions by p, and strengthen a known reduction given by Henke, c...

  16. Permutational distribution of the log-rank statistic under random censorship with applications to carcinogenicity assays.

    Science.gov (United States)

    Heimann, G; Neuhaus, G

    1998-03-01

    In the random censorship model, the log-rank test is often used for comparing a control group with different dose groups. If the number of tumors is small, so-called exact methods are often applied for computing critical values from a permutational distribution. Two of these exact methods are discussed and shown to be incorrect. The correct permutational distribution is derived and studied with respect to its behavior under unequal censoring in the light of recent results proving that the permutational version and the unconditional version of the log-rank test are asymptotically equivalent even under unequal censoring. The log-rank test is studied by simulations of a realistic scenario from a bioassay with small numbers of tumors.
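
    The conditional (permutational) approach amounts to keeping each subject's (time, status) pair fixed and permuting only the group labels, recomputing the log-rank statistic each time. The sketch below does this with the unstandardized observed-minus-expected statistic and invented survival data; it illustrates the construction rather than reproducing the paper's corrected exact method.

        import numpy as np

        def logrank_stat(time, status, group):
            """Observed minus expected number of events in group 1, summed over event times."""
            stat = 0.0
            for t in np.unique(time[status == 1]):           # distinct event times
                at_risk = time >= t
                d = np.sum((time == t) & (status == 1))      # total events at t
                d1 = np.sum((time == t) & (status == 1) & (group == 1))
                n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
                stat += d1 - d * n1 / n
            return stat

        rng = np.random.default_rng(0)
        time = np.concatenate([rng.exponential(10, 15), rng.exponential(20, 15)])   # invented data
        status = rng.integers(0, 2, 30)                      # 1 = tumour/event, 0 = censored
        group = np.repeat([0, 1], 15)

        obs = logrank_stat(time, status, group)
        perm = np.array([logrank_stat(time, status, rng.permutation(group)) for _ in range(5000)])
        p_value = np.mean(np.abs(perm) >= abs(obs))          # two-sided permutation p-value
        print("observed statistic: %.2f  permutation p-value: %.3f" % (obs, p_value))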

  17. A transposase strategy for creating libraries of circularly permuted proteins.

    Science.gov (United States)

    Mehta, Manan M; Liu, Shirley; Silberg, Jonathan J

    2012-05-01

    A simple approach for creating libraries of circularly permuted proteins, called PERMutation Using Transposase Engineering (PERMUTE), is described. In PERMUTE, the transposase MuA is used to randomly insert a minitransposon that can function as a protein expression vector into a plasmid that contains the open reading frame (ORF) being permuted. A library of vectors that express different permuted variants of the ORF-encoded protein is created by: (i) using bacteria to select for target vectors that acquire an integrated minitransposon; (ii) excising the ensemble of ORFs that contain an integrated minitransposon from the selected vectors; and (iii) circularizing the ensemble of ORFs containing integrated minitransposons using intramolecular ligation. Construction of a Thermotoga neapolitana adenylate kinase (AK) library using PERMUTE revealed that this approach produces vectors that express circularly permuted proteins with distinct sequence diversity from existing methods. In addition, selection of this library for variants that complement the growth of Escherichia coli with a temperature-sensitive AK identified functional proteins with novel architectures, suggesting that PERMUTE will be useful for the directed evolution of proteins with new functions.

  18. SCOPES: steganography with compression using permutation search

    Science.gov (United States)

    Boorboor, Sahar; Zolfaghari, Behrouz; Mozafari, Saadat Pour

    2011-10-01

    LSB (Least Significant Bit) is a widely used method for image steganography, which hides the secret message as a bit stream in LSBs of pixel bytes in the cover image. This paper proposes a variant of LSB named SCOPES that encodes and compresses the secret message while hiding it by storing addresses instead of message bytes. Reducing the length of the stored message improves the storage capacity and makes the stego image visually less suspicious to the third party. The main idea behind the SCOPES approach is dividing the message into 3-character segments, seeking each segment in the cover image and storing the address of the position containing the segment instead of the segment itself. In this approach, every permutation of the 3 bytes (if found) can be stored along with some extra bits indicating the permutation. In some rare cases the segment may not be found in the image and this can cause the message to be expanded by some overhead bits instead of being compressed. But experimental results show that SCOPES performs overall better than traditional LSB even in the worst cases.
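
    A rough sketch of the segment-search idea follows; the function names, the 3-byte segmentation and the (address, permutation-index) encoding are illustrative assumptions, not the exact SCOPES format.

        from itertools import permutations

        def find_segment(cover_bytes, segment):
            """Search the cover for any permutation of a 3-byte segment.
            Returns (address, permutation_index) or None if the segment is absent."""
            perms = list(permutations(segment))
            for addr in range(len(cover_bytes) - 2):
                window = tuple(cover_bytes[addr:addr + 3])
                if window in perms:
                    return addr, perms.index(window)
            return None

        def encode_message(cover_bytes, message):
            """Encode each 3-byte segment as ('ref', (address, permutation)) when it can be
            found in the cover, otherwise fall back to ('raw', bytes) -- the rare overhead case."""
            encoded = []
            for i in range(0, len(message), 3):
                segment = tuple(message[i:i + 3])
                hit = find_segment(cover_bytes, segment) if len(segment) == 3 else None
                encoded.append(("ref", hit) if hit is not None else ("raw", segment))
            return encoded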

  19. Ordered groups and infinite permutation groups

    CERN Document Server

    1996-01-01

    The subjects of ordered groups and of infinite permutation groups have long enjoyed a symbiotic relationship. Although the two subjects come from very different sources, they have in certain ways come together, and each has derived considerable benefit from the other. My own personal contact with this interaction began in 1961. I had done Ph. D. work on sequence convergence in totally ordered groups under the direction of Paul Conrad. In the process, I had encountered "pseudo-convergent" sequences in an ordered group G, which are like Cauchy sequences, except that the differences between terms of large index approach not 0 but a convex subgroup C of G. If C is normal, then such sequences are conveniently described as Cauchy sequences in the quotient ordered group G/C. If C is not normal, of course G/C has no group structure, though it is still a totally ordered set. The best that can be said is that the elements of G permute G/C in an order-preserving fashion. In independent investigations around that t...

  20. A Comparison of Multiscale Permutation Entropy Measures in On-Line Depth of Anesthesia Monitoring.

    Science.gov (United States)

    Su, Cui; Liang, Zhenhu; Li, Xiaoli; Li, Duan; Li, Yongwang; Ursino, Mauro

    2016-01-01

    Multiscale permutation entropy (MSPE) has become an interesting tool for exploring neurophysiological mechanisms in recent years. In this study, six MSPE measures were proposed for on-line depth of anesthesia (DoA) monitoring to quantify the anesthetic effect on real-time EEG recordings. The performance of these measures in describing the transient characteristics of simulated neural populations and clinical anesthesia EEG was evaluated and compared. Six MSPE algorithms, derived from Shannon permutation entropy (SPE), Renyi permutation entropy (RPE) and Tsallis permutation entropy (TPE) combined with the decomposition procedures of the coarse-graining (CG) method and moving average (MA) analysis, were studied. A thalamo-cortical neural mass model (TCNMM) was used to generate noise-free EEG under anesthesia to quantitatively assess the robustness of each MSPE measure against noise. Then, the clinical anesthesia EEG recordings from 20 patients were analyzed with these measures. To validate their effectiveness, the six measures were compared in terms of their ability to track the dynamical changes in the EEG data and their performance in state discrimination. The Pearson correlation coefficient (R) was used to assess the relationship among the MSPE measures. CG-based MSPEs failed in on-line DoA monitoring at multiscale analysis. In on-line EEG analysis, the MA-based MSPE measures at 5 decomposed scales could track the transient changes of the EEG recordings and statistically distinguish the awake state, unconsciousness and recovery of consciousness (RoC) state. Compared to single-scale SPE and RPE, MSPEs had better anti-noise ability, and MA-RPE at scale 5 performed best in this respect. MA-TPE outperformed the other measures with a faster tracking speed of the loss of consciousness. MA-based multiscale permutation entropies have the potential for on-line anesthesia EEG analysis given their simple computation and sensitivity to drug effect changes. CG-based multiscale permutation
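
    Since the MA-based variants performed best in this study, a compact sketch of a moving-average multiscale permutation entropy is given below; this is a generic illustration under the usual Bandt-Pompe definition and does not reproduce the exact algorithmic details of the six measures in the paper.

        import numpy as np
        from collections import Counter
        from math import log2, factorial

        def _pe(x, order=3):
            """Bandt-Pompe permutation entropy of a 1-D array, normalized to [0, 1]."""
            n = len(x) - order + 1
            counts = Counter(tuple(np.argsort(x[i:i + order])) for i in range(n))
            return -sum((c / n) * log2(c / n) for c in counts.values()) / log2(factorial(order))

        def ma_mspe(x, scales=(1, 2, 3, 4, 5), order=3):
            """Moving-average MSPE: smooth with a sliding mean at each scale, then take PE."""
            x = np.asarray(x, dtype=float)
            return {s: _pe(np.convolve(x, np.ones(s) / s, mode="valid"), order) for s in scales}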

  1. Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    DEFF Research Database (Denmark)

    Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm

    2014-01-01

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate...... saturations when this is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with the 1-norm minimization. The first method is based on constraining the roots of the predictor to lie within the unit circle by reducing the numerical range...... based linear prediction for modeling and coding of speech....

  2. Permutation symmetry and the origin of fermion mass hierarchy

    International Nuclear Information System (INIS)

    Babu, K.S.; Mohapatra, R.N.

    1990-01-01

    A realization of the "flavor-democracy" approach to quark and lepton masses is provided in the context of the standard model with a horizontal S_3 permutation symmetry. In this model, the t and b quarks pick up mass at the tree level, the c, s-quark and τ-lepton masses arise at the one-loop level, the u, d, and μ masses at the two-loop level, and the electron mass at the three-loop level, thus reproducing the observed hierarchical structure without fine tuning of the Yukawa couplings. The pattern of quark mixing angles also emerges naturally, with V_us, V_cb ∼ O(ε), V_ub ∼ O(ε^2), where ε is a loop expansion parameter

  3. The magic of universal quantum computing with permutations

    OpenAIRE

    Planat, Michel; Rukhsan-Ul-Haq

    2017-01-01

    The role of permutation gates for universal quantum computing is investigated. The 'magic' of computation is clarified in the permutation gates, their eigenstates, the Wootters discrete Wigner function and state-dependent contextuality (following many contributions on this subject). A first classification of main types of resulting magic states in low dimensions $d \le 9$ is performed.

  4. Some topics on permutable subgroups in infinite groups

    OpenAIRE

    Ialenti, Roberto

    2017-01-01

    The aim of this thesis is to study permutability in different aspects of the theory of infinite groups. In particular, the structure of groups in which all the members of a relevant system of subgroups satisfy a suitable generalized permutability condition will be studied.

  5. A permutations representation that knows what "Eulerian" means

    Directory of Open Access Journals (Sweden)

    Roberto Mantaci

    2001-12-01

    Full Text Available Eulerian numbers (and "alternate Eulerian numbers") are often interpreted as distributions of statistics defined over the symmetric group. The main purpose of this paper is to define a way to represent permutations that provides some other combinatorial interpretations of these numbers. This representation uses a one-to-one correspondence between permutations and the so-called subexceedant functions.
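
    One standard correspondence of this kind, shown below as an illustrative choice and not necessarily the exact representation used in the paper, maps a permutation of {0, ..., n-1} to a function f with 0 <= f(i) <= i by recording where each value i sits among the values not exceeding it.

        def to_subexceedant(perm):
            """Encode a permutation of 0..n-1 as a subexceedant function f with 0 <= f[i] <= i:
            f[i] is the position of the value i among the entries of perm that are <= i."""
            return [[v for v in perm if v <= i].index(i) for i in range(len(perm))]

        def from_subexceedant(f):
            """Inverse map: rebuild the permutation by inserting each value i at position f[i]."""
            perm = []
            for i, position in enumerate(f):
                perm.insert(position, i)
            return perm

        # round trip: to_subexceedant([2, 0, 3, 1]) == [0, 1, 0, 2]
        #             from_subexceedant([0, 1, 0, 2]) == [2, 0, 3, 1]

    Since there are exactly 1·2·...·n = n! such functions, the map is a bijection, so statistics on permutations (such as those counted by Eulerian numbers) can be read off the function side, which is the spirit of the representation studied in the paper.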

  6. A Fast Algorithm for Generating Permutation Distribution of Ranks in ...

    African Journals Online (AJOL)

    ... function of the distribution of the ranks. This further gives insight into the permutation distribution of a rank statistic. The algorithm is implemented with the aid of the computer algebra system Mathematica. Key words: Combinatorics, generating function, permutation distribution, rank statistics, partitions, computer algebra.
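
    For a concrete flavour of the generating-function approach (a generic sketch for the Wilcoxon rank-sum statistic without ties, not the article's Mathematica implementation): the coefficient of z^j q^s in prod_{i=1}^{N} (1 + z q^i) counts the j-subsets of the ranks {1, ..., N} with sum s, and these coefficients can be accumulated by dynamic programming.

        def rank_sum_counts(m, n):
            """Exact permutation distribution of the Wilcoxon rank-sum statistic for group
            sizes m and n: counts[s] = number of m-subsets of the ranks {1, ..., m+n} with
            sum s, i.e. the coefficients of z^m in prod_{i=1}^{m+n} (1 + z*q^i)."""
            N = m + n
            max_sum = sum(range(N - m + 1, N + 1))           # sum of the m largest ranks
            table = [[0] * (max_sum + 1) for _ in range(m + 1)]
            table[0][0] = 1
            for rank in range(1, N + 1):                     # multiply in the factor (1 + z*q^rank)
                for j in range(min(rank, m), 0, -1):
                    for s in range(max_sum, rank - 1, -1):
                        table[j][s] += table[j - 1][s - rank]
            return table[m]

        # sanity check: the counts enumerate every 3-subset of 8 ranks, C(8, 3) = 56 in total
        assert sum(rank_sum_counts(3, 5)) == 56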

  7. The Magic of Universal Quantum Computing with Permutations

    Directory of Open Access Journals (Sweden)

    Michel Planat

    2017-01-01

    Full Text Available The role of permutation gates for universal quantum computing is investigated. The “magic” of computation is clarified in the permutation gates, their eigenstates, the Wootters discrete Wigner function, and state-dependent contextuality (following many contributions on this subject). A first classification of a few types of resulting magic states in low dimensions d≤9 is performed.

  8. Permutation entropy with vector embedding delays

    Science.gov (United States)

    Little, Douglas J.; Kane, Deb M.

    2017-12-01

    Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D-1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked, when the embedding delay is constrained to scalar form.

  9. Non stationary nucleation: the model with minimal environment

    OpenAIRE

    Kurasov, Victor

    2013-01-01

    A new model to calculate the rate of nucleation is formulated. This model is based on the classical nucleation theory but also considers vapor depletion around the formed embryo. As a result, the free energy has to be recalculated, which yields a new expression for the nucleation rate.

  10. Process setting models for the minimization of costs of defectives

    African Journals Online (AJOL)

    Dr Obe

    determine the mean setting so as to minimise the total loss through under-limit complaints and loss of sales and goodwill as well as over-limit losses through excess materials and rework costs. Models are developed for the two types of setting of the mean so that the minimum costs of losses are achieved. Also, a model is ...

  11. Steam consumption minimization model in a multiple evaporation effect in a sugar plant

    International Nuclear Information System (INIS)

    Villada, Fernando; Valencia, Jaime A; Moreno, German; Murillo, J. Joaquin

    1992-01-01

    In this work, a mathematical model to minimize the steam consumption in a multiple-effect evaporation system is presented. The model is based on the dynamic programming technique and the results are tested in a Colombian sugar mill

  12. A minimal model for a slow pacemaking neuron

    International Nuclear Information System (INIS)

    Zakharov, D.G.; Kuznetsov, A.

    2012-01-01

    Highlights: ► We have constructed a phenomenological model for slow pacemaking neurons. ► The model implements a nonlinearity introduced by an ion-dependent current. ► The new nonlinear dependence allows for differentiating responses to various stimuli. ► We discuss implications of our results for a broad class of neurons. - Abstract: We have constructed a phenomenological model for slow pacemaking neurons. These are neurons that generate very regular periodic oscillations of the membrane potential. Many of these neurons also differentially respond to various types of stimulation. The model is based on FitzHugh–Nagumo (FHN) oscillator and implements a nonlinearity introduced by a current that depends on an ion concentration. The comparison with the original FHN oscillator has shown that the new nonlinear dependence allows for differentiating responses to various stimuli. We discuss implications of our results for a broad class of neurons.

  13. Sneutrino warm inflation in the minimal supersymmetric model

    International Nuclear Information System (INIS)

    Bastero-Gil, Mar; Berera, Arjun

    2005-01-01

    The model of RH neutrino fields coupled to the MSSM is shown to yield a large parameter regime of warm inflation. In the strong dissipative regime, it is shown that inflation, driven by a single sneutrino field, occurs with all field amplitudes below the Planck scale. Analysis is also made of leptogenesis, neutrino mass generation and gravitino constraints. A new warm inflation scenario is proposed in which one scalar field drives a period of warm inflation and a second field drives a subsequent phase of reheating. Such a model is able to reduce the final temperature after inflation, thus helping to mitigate gravitino constraints

  14. Patchiness in a minimal nutrient – phytoplankton model

    Indian Academy of Sciences (India)

    The mean-field model without the diffusion and advection terms shows both bistability and limit-cycle oscillations as a few parameters such as the input rate of nutrients and the maximum feeding rate of zooplankton are changed. If the parameter values are chosen from the limit-cycle oscillation region, the corresponding ...

  15. Gamma-ray excess and the minimal dark matter model

    International Nuclear Information System (INIS)

    Duerr, Michael; Fileviez Perez, Pavel; Smirnov, Juri

    2015-10-01

    We point out that the gamma-ray excesses in the galactic center and in the dwarf galaxy Reticulum II can both be well explained within the simplest dark matter model. We find that the corresponding region of parameter space will be tested by direct and indirect dark matter searches in the near future.

  16. A minimal model of self-sustaining turbulence

    International Nuclear Information System (INIS)

    Thomas, Vaughan L.; Gayme, Dennice F.; Farrell, Brian F.; Ioannou, Petros J.

    2015-01-01

    In this work, we examine the turbulence maintained in a Restricted Nonlinear (RNL) model of plane Couette flow. This model is a computationally efficient approximation of the second order statistical state dynamics obtained by partitioning the flow into a streamwise averaged mean flow and perturbations about that mean, a closure referred to herein as the RNL_∞ model. The RNL model investigated here employs a single member of the infinite ensemble that comprises the covariance of the RNL_∞ dynamics. The RNL system has previously been shown to support self-sustaining turbulence with a mean flow and structural features that are consistent with direct numerical simulations (DNS). Regardless of the number of streamwise Fourier components used in the simulation, the RNL system’s self-sustaining turbulent state is supported by a small number of streamwise varying modes. Remarkably, further truncation of the RNL system’s support to as few as one streamwise varying mode can suffice to sustain the turbulent state. The close correspondence between RNL simulations and DNS that has been previously demonstrated along with the results presented here suggest that the fundamental mechanisms underlying wall-turbulence can be analyzed using these highly simplified RNL systems

  17. The Minimal Model of the Hypothalamic-Pituitary-Adrenal Axis

    DEFF Research Database (Denmark)

    Vinther, Frank; Andersen, Morten; Ottesen, Johnny T.

    2011-01-01

    -physiological values of the parameters are needed in order to achieve local instability of the fixed point. Small changes in physiologically relevant parameters cause the system to be globally stable using the analytical criteria. All simulations show a globally stable fixed point, ruling out periodic solutions even...... are modeled as a system of three coupled, nonlinear differential equations. Experimental data shows the circadian as well as the ultradian rhythm. This paper focuses on the ultradian rhythm. The ultradian rhythm can mathematically be explained by oscillating solutions. Oscillating solutions to an ODE emerge...... from an unstable fixed point with complex eigenvalues with a positive real part and a non-zero imaginary part. The first part of the paper describes the general considerations to be obeyed for a mathematical model of the HPA axis. In this paper we only include the most widely accepted mechanisms

  18. Identifiability and error minimization of receptor model parameters with PET

    International Nuclear Information System (INIS)

    Delforge, J.; Syrota, A.; Mazoyer, B.M.

    1989-01-01

    The identifiability problem and the general framework for experimental design optimization are presented. The methodology is applied to the problem of the receptor-ligand model parameter estimation with dynamic positron emission tomography data. The first attempts to identify the model parameters from data obtained with a single tracer injection led to disappointing numerical results. The possibility of improving parameter estimation using a new experimental design combining an injection of the labelled ligand and an injection of the cold ligand (displacement experiment) has been investigated. However, this second protocol led to two very different numerical solutions and it was necessary to demonstrate which solution was biologically valid. This has been possible by using a third protocol including both a displacement and a co-injection experiment. (authors). 16 refs.; 14 figs

  19. Random resistor network model of minimal conductivity in graphene.

    Science.gov (United States)

    Cheianov, Vadim V; Fal'ko, Vladimir I; Altshuler, Boris L; Aleiner, Igor L

    2007-10-26

    Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.

  20. Switching Adaptability in Human-Inspired Sidesteps: A Minimal Model.

    Science.gov (United States)

    Fujii, Keisuke; Yoshihara, Yuki; Tanabe, Hiroko; Yamamoto, Yuji

    2017-01-01

    Humans can adapt to abruptly changing situations by coordinating redundant components, even in bipedality. Conventional adaptability has been reproduced by various computational approaches, such as optimal control, neural oscillator, and reinforcement learning; however, the adaptability in bipedal locomotion necessary for biological and social activities, such as unpredicted direction change in chase-and-escape, is unknown due to the dynamically unstable multi-link closed-loop system. Here we propose a switching adaptation model for performing bipedal locomotion by improving autonomous distributed control, where autonomous actuators interact without central control and switch the roles for propulsion, balancing, and leg swing. Our switching mobility model achieved direction change at any time using only three actuators, although it showed higher motor costs than comparable models without direction change. Our method of evaluating such adaptation at any time should be utilized as a prerequisite for understanding universal motor control. The proposed algorithm may simply explain and predict the adaptation mechanism in human bipedality to coordinate the actuator functions within and between limbs.

  1. Exploring a minimal two-component p53 model

    International Nuclear Information System (INIS)

    Sun, Tingzhe; Zhu, Feng; Shen, Pingping; Yuan, Ruoshi; Xu, Wei

    2010-01-01

    The tumor suppressor p53 coordinates many attributes of cellular processes via interlocked feedback loops. To understand the biological implications of feedback loops in a p53 system, a two-component model which encompasses essential feedback loops was constructed and further explored. Diverse bifurcation properties, such as bistability and oscillation, emerge by manipulating the feedback strength. The p53-mediated MDM2 induction dictates the bifurcation patterns. We first identified irradiation dichotomy in p53 models and further proposed that bistability and oscillation can behave in a coordinated manner. Further sensitivity analysis revealed that p53 basal production and MDM2-mediated p53 degradation, which are central to cellular control, are the most sensitive processes. We also identified that the much more significant variations in the amplitude of p53 pulses observed in experiments can be derived from the overall parameter sensitivity of the amplitude. The combined approach with bifurcation analysis, stochastic simulation and sampling-based sensitivity analysis not only gives crucial insights into the dynamics of the p53 system, but also creates a fertile ground for understanding the regulatory patterns of other biological networks

  2. Radiative breaking of the minimal supersymmetric left–right model

    Directory of Open Access Journals (Sweden)

    Nobuchika Okada

    2016-05-01

    Full Text Available We study a variation of the SUSY Left–Right symmetric model based on the gauge group SU(3)_c × SU(2)_L × SU(2)_R × U(1)_{B-L}. Beyond the quark and lepton superfields we only introduce a second Higgs bidoublet to produce realistic fermion mass matrices. This model does not include any SU(2)_R triplets. We calculate renormalization group evolutions of the soft SUSY parameters at the one-loop level down to low energy. We find that an SU(2)_R slepton doublet acquires a negative mass squared at low energies, so that the breaking of SU(2)_R × U(1)_{B-L} → U(1)_Y is realized by a non-zero vacuum expectation value of a right-handed sneutrino. Small neutrino masses are produced through neutrino mixings with gauginos. Mass limits on the SU(2)_R × U(1)_{B-L} sector are obtained from direct search results at the LHC as well as lepton–gaugino mixing bounds from the LEP precision data.

  3. A minimal model of burst-noise induced bistability.

    Directory of Open Access Journals (Sweden)

    Johannes Falk

    Full Text Available We investigate the influence of intrinsic noise on stable states of a one-dimensional dynamical system that shows in its deterministic version a saddle-node bifurcation between monostable and bistable behaviour. The system is a modified version of the Schlögl model, which is a chemical reaction system with only one type of molecule. The strength of the intrinsic noise is varied without changing the deterministic description by introducing bursts in the autocatalytic production step. We study the transitions between monostable and bistable behavior in this system by evaluating the number of maxima of the stationary probability distribution. We find that changing the size of bursts can destroy and even induce saddle-node bifurcations. This means that a bursty production of molecules can qualitatively change the dynamics of a chemical reaction system even when the deterministic description remains unchanged.

  4. Human Inferences about Sequences: A Minimal Transition Probability Model.

    Directory of Open Access Journals (Sweden)

    Florent Meyniel

    2016-12-01

    Full Text Available The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations include explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge.

  5. A minimal rupture cascade model for living cell plasticity

    Science.gov (United States)

    Polizzi, Stefano; Laperrousaz, Bastien; Perez-Reche, Francisco J.; Nicolini, Franck E.; Maguer Satta, Véronique; Arneodo, Alain; Argoul, Françoise

    2018-05-01

    Under physiological and pathological conditions, cells experience large forces and deformations that often exceed the linear viscoelastic regime. Here we drive CD34+ cells isolated from healthy and leukemic bone marrows in the highly nonlinear elasto-plastic regime, by poking their perinuclear region with a sharp AFM cantilever tip. We use the wavelet transform mathematical microscope to identify singular events in the force-indentation curves induced by local rupture events in the cytoskeleton (CSK). We distinguish two types of rupture events, brittle failures likely corresponding to irreversible ruptures in a stiff and highly cross-linked CSK and ductile failures resulting from dynamic cross-linker unbindings during plastic deformation without loss of CSK integrity. We propose a stochastic multiplicative cascade model of mechanical ruptures that reproduces quantitatively the experimental distributions of the energy released during these events, and provides some mathematical and mechanistic understanding of the robustness of the log-normal statistics observed in both brittle and ductile situations. We also show that brittle failures are relatively more prominent in leukemia than in healthy cells suggesting their greater fragility.

  6. Analysis of NIF experiments with the minimal energy implosion model

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, B., E-mail: bcheng@lanl.gov; Kwan, T. J. T.; Wang, Y. M.; Merrill, F. E.; Batha, S. H. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Cerjan, C. J. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2015-08-15

    We apply a recently developed analytical model of implosion and thermonuclear burn to fusion capsule experiments performed at the National Ignition Facility that used low-foot and high-foot laser pulse formats. Our theoretical predictions are consistent with the experimental data. Our studies, together with neutron image analysis, reveal that the adiabats of the cold fuel in both low-foot and high-foot experiments are similar. That is, the cold deuterium-tritium shells in those experiments are all in a high adiabat state at the time of peak implosion velocity. The major difference between low-foot and high-foot capsule experiments is the growth of the shock-induced instabilities developed at the material interfaces which lead to fuel mixing with ablator material. Furthermore, we have compared the NIF capsules performance with the ignition criteria and analyzed the alpha particle heating in the NIF experiments. Our analysis shows that alpha heating was appreciable only in the high-foot experiments.

  7. Analysis of NIF experiments with the minimal energy implosion model

    International Nuclear Information System (INIS)

    Cheng, B.; Kwan, T. J. T.; Wang, Y. M.; Merrill, F. E.; Batha, S. H.; Cerjan, C. J.

    2015-01-01

    We apply a recently developed analytical model of implosion and thermonuclear burn to fusion capsule experiments performed at the National Ignition Facility that used low-foot and high-foot laser pulse formats. Our theoretical predictions are consistent with the experimental data. Our studies, together with neutron image analysis, reveal that the adiabats of the cold fuel in both low-foot and high-foot experiments are similar. That is, the cold deuterium-tritium shells in those experiments are all in a high adiabat state at the time of peak implosion velocity. The major difference between low-foot and high-foot capsule experiments is the growth of the shock-induced instabilities developed at the material interfaces which lead to fuel mixing with ablator material. Furthermore, we have compared the NIF capsules performance with the ignition criteria and analyzed the alpha particle heating in the NIF experiments. Our analysis shows that alpha heating was appreciable only in the high-foot experiments

  8. Multiscale permutation entropy analysis of electrocardiogram

    Science.gov (United States)

    Liu, Tiebing; Yao, Wenpo; Wu, Min; Shi, Zhaorong; Wang, Jun; Ning, Xinbao

    2017-04-01

    To perform a comprehensive nonlinear analysis of ECG, multiscale permutation entropy (MPE) was applied to ECG characteristics extraction. Three kinds of ECG from the PhysioNet database, from congestive heart failure (CHF) patients, healthy young subjects and elderly subjects, are analyzed in this paper. We set the embedding dimension to 4, adjust the scale factor from 2 to 100 with a step size of 2, and compare MPE with multiscale entropy (MSE). As the scale factor increases, the MPE complexity of the three ECG signals first decreases and then increases. When the scale factor is between 10 and 32, the complexities of the three ECG signals differ most: the entropy of the elderly is on average 0.146 less than that of the CHF patients and 0.025 larger than that of the healthy young, in line with normal physiological characteristics. Test results showed that MPE can be effectively applied to ECG nonlinear analysis and can effectively distinguish different ECG signals.
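
    The coarse-graining step that defines the scale factor is simple enough to state in a few lines; this is a generic sketch, after which each coarse-grained series is fed to a permutation entropy estimator (embedding dimension 4 in the study above).

        import numpy as np

        def coarse_grain(x, scale):
            """Coarse-graining used for the scale factor: non-overlapping averages of
            length `scale`; the trailing remainder of the series is dropped."""
            x = np.asarray(x, dtype=float)
            n = (len(x) // scale) * scale
            return x[:n].reshape(-1, scale).mean(axis=1)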

  9. A minimal unified model of disease trajectories captures hallmarks of multiple sclerosis

    KAUST Repository

    Kannan, Venkateshan; Kiani, Narsis A.; Piehl, Fredrik; Tegner, Jesper

    2017-01-01

    Multiple Sclerosis (MS) is an autoimmune disease targeting the central nervous system (CNS) causing demyelination and neurodegeneration leading to accumulation of neurological disability. Here we present a minimal, computational model involving

  10. Permutation based decision making under fuzzy environment using Tabu search

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-04-01

    Full Text Available One of the techniques used for Multiple Criteria Decision Making (MCDM) is the permutation method. In the classical form of the permutation method, it is assumed that the weights and the decision matrix components are crisp. However, when group decision making is under consideration and the decision makers cannot agree on crisp values for the weights and the decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of the permutation method is its long computational time, so a Tabu Search (TS) based algorithm is proposed to reduce it. A numerical example illustrates the proposed approach clearly. Then, some benchmark instances extracted from the literature are solved by the proposed TS. The analysis of the results shows the good performance of the proposed method.
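
    The classical (crisp) permutation method that the article fuzzifies can be sketched as follows; the brute-force enumeration over all n! orderings is exactly the computational burden that motivates the Tabu search, and the weights, scoring rule and all names here are illustrative assumptions.

        from itertools import permutations

        def permutation_method(decision_matrix, weights):
            """Crisp permutation method for MCDM: score every ordering of the alternatives
            by weighted concordance minus discordance with each criterion and return the best."""
            n = len(decision_matrix)              # alternatives (rows)
            m = len(decision_matrix[0])           # criteria (columns), larger value = better
            best_order, best_score = None, float("-inf")
            for order in permutations(range(n)):
                score = 0.0
                for p in range(n):
                    for q in range(p + 1, n):
                        a, b = order[p], order[q]     # a is ranked above b in this ordering
                        for c in range(m):
                            if decision_matrix[a][c] > decision_matrix[b][c]:
                                score += weights[c]   # criterion c agrees with the ordering
                            elif decision_matrix[a][c] < decision_matrix[b][c]:
                                score -= weights[c]   # criterion c contradicts the ordering
                if score > best_score:
                    best_order, best_score = order, score
            return best_order, best_score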

  11. Permutation groups and transformation semigroups : results and problems

    OpenAIRE

    Araujo, Joao; Cameron, Peter Jephson

    2015-01-01

    J.M. Howie, the influential St Andrews semigroupist, claimed that we value an area of pure mathematics to the extent that (a) it gives rise to arguments that are deep and elegant, and (b) it has interesting interconnections with other parts of pure mathematics. This paper surveys some recent results on the transformation semigroup generated by a permutation group $G$ and a single non-permutation $a$. Our particular concern is the influence that properties of $G$ (related to homogeneity, trans...

  12. Implementation and automated validation of the minimal Z' model in FeynRules

    International Nuclear Information System (INIS)

    Basso, L.; Christensen, N.D.; Duhr, C.; Fuks, B.; Speckner, C.

    2012-01-01

    We describe the implementation of a well-known class of U(1) gauge models, the 'minimal' Z' models, in FeynRules. We also describe a new automated validation tool for FeynRules models which is controlled by a web interface and allows the user to run a complete set of 2 → 2 processes on different matrix element generators, different gauges, and compare between them all. If existing, the comparison with independent implementations is also possible. This tool has been used to validate our implementation of the 'minimal' Z' models. (authors)

  13. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    Science.gov (United States)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems with non-identical machines, low utilization and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch and bound algorithm as the solution method. Fixed delivery times are used as the main constraint, together with different processing times for the jobs. The results of the proposed model show that the utilization of the production machines can be increased with minimal tardiness when fixed delivery times are used as the constraint.
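
    As a small illustration of the objective being minimized, the following is a toy single-machine evaluation of total tardiness for a fixed job sequence; it is not the paper's full job-shop integer programme, and the data layout is assumed.

        def total_tardiness(jobs, sequence):
            """Total tardiness of a job sequence on a single machine, where `jobs` maps a
            job id to (processing_time, due_date) and the due date plays the role of the
            fixed delivery time."""
            time, tardiness = 0, 0
            for job in sequence:
                processing_time, due_date = jobs[job]
                time += processing_time                  # completion time of this job
                tardiness += max(0, time - due_date)     # lateness beyond the delivery time
            return tardiness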

  14. A Flexible Computational Framework Using R and Map-Reduce for Permutation Tests of Massive Genetic Analysis of Complex Traits.

    Science.gov (United States)

    Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker

    2017-01-01

    In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense: 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple QTL scans with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, and perform 2×10^5 permutations for a 2D QTL problem in 15 hours, using 100 cloud processes.  We show that our framework scales out almost linearly for a 3D QTL search.
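
    The map-reduce structure of such permutation testing can be sketched in a few lines. This is a generic, illustrative max-statistic scan using Python multiprocessing in place of Hadoop and R; the statistic, the names and the data layout (a numeric phenotype array and one 0/1 genotype array per marker) are assumptions, and nothing here reproduces PruneDIRECT.

        import numpy as np
        from multiprocessing import Pool

        def _null_maxima(args):
            """Map step: for one batch of permutations, return the genome-wide maximum of a
            toy difference-of-means statistic under each shuffled phenotype."""
            phenotype, genotypes, n_perm, seed = args
            rng = np.random.default_rng(seed)
            maxima = np.empty(n_perm)
            for k in range(n_perm):
                y = rng.permutation(phenotype)
                maxima[k] = max(abs(y[g == 1].mean() - y[g == 0].mean()) for g in genotypes)
            return maxima

        def permutation_threshold(phenotype, genotypes, n_perm=10000, workers=4, alpha=0.05):
            """Reduce step: pool the per-worker null maxima and take the (1 - alpha) quantile
            as the genome-wide significance threshold."""
            batches = [(phenotype, genotypes, n_perm // workers, seed) for seed in range(workers)]
            with Pool(workers) as pool:
                null_max = np.concatenate(pool.map(_null_maxima, batches))
            return np.quantile(null_max, 1 - alpha)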

  15. Minimal Adequate Model of Unemployment Duration in the Post-Crisis Czech Republic

    Directory of Open Access Journals (Sweden)

    Adam Čabla

    2016-03-01

    Full Text Available Unemployment is one of the leading economic problems in the developed world. The aim of this paper is to identify the differences in unemployment duration in different strata of the post-crisis Czech Republic by building a minimal adequate model, and to quantify the differences. Data from Labour Force Surveys are used, and since they are interval censored in nature, proper methodology must be used. The minimal adequate model is built through accelerated failure time modelling, maximum likelihood estimates and likelihood ratio tests. The variables at the beginning are sex, marital status, age, education, municipality size and number of persons in a household, giving altogether 29 model parameters. The minimal adequate model contains 5 parameters, and differences are found between men and women, the youngest category and the rest, and the university educated and the rest. The estimated expected values, variances, medians, modes and 90th percentiles are provided for all subgroups.

  16. Likelihood analysis of the next-to-minimal supergravity motivated model

    International Nuclear Information System (INIS)

    Balazs, Csaba; Carter, Daniel

    2009-01-01

    In anticipation of data from the Large Hadron Collider (LHC) and the potential discovery of supersymmetry, we calculate the odds of the next-to-minimal version of the popular supergravity motivated model (NmSuGra) being discovered at the LHC to be 4:3 (57%). We also demonstrate that viable regions of the NmSuGra parameter space outside the LHC reach can be covered by upgraded versions of dark matter direct detection experiments, such as super-CDMS, at 99% confidence level. Due to the similarities of the models, we expect very similar results for the constrained minimal supersymmetric standard model (CMSSM).

  17. Triviality bound on lightest Higgs mass in next to minimal supersymmetric model

    International Nuclear Information System (INIS)

    Choudhury, S.R.; Mamta; Dutta, Sukanta

    1998-01-01

    We study the implication of triviality on the Higgs sector in the next to minimal supersymmetric model (NMSSM) using variational field theory. It is shown that the mass of the lightest Higgs boson in the NMSSM has an upper bound ∼ 10 M_W, which is of the same order as that in the standard model. (author)

  18. Surface states of a system of Dirac fermions: A minimal model

    International Nuclear Information System (INIS)

    Volkov, V. A.; Enaldiev, V. V.

    2016-01-01

    A brief survey is given of theoretical works on surface states (SSs) in Dirac materials. Within the formalism of envelope wave functions and boundary conditions for these functions, a minimal model is formulated that analytically describes surface and edge states of various (topological and nontopological) types in several systems with Dirac fermions (DFs). The applicability conditions of this model are discussed.

  19. Non-minimal Maxwell-Chern-Simons theory and the composite Fermion model

    International Nuclear Information System (INIS)

    Paschoal, Ricardo C.; Helayel Neto, Jose A.

    2003-01-01

    The magnetic field redefinition in Jain's composite fermion model for the fractional quantum Hall effect is shown to be effectively described by a mean-field approximation of a model containing a Maxwell-Chern-Simons gauge field non-minimally coupled to matter. An explicit non-relativistic limit of the non-minimal (2+1)D Dirac equation is also derived. (author)

  20. Random regret minimization : Exploration of a new choice model for environmental and resource economics

    NARCIS (Netherlands)

    Thiene, M.; Boeri, M.; Chorus, C.G.

    2011-01-01

    This paper introduces the discrete choice model-paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM-approach has been very recently developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the

  1. Surface states of a system of Dirac fermions: A minimal model

    Energy Technology Data Exchange (ETDEWEB)

    Volkov, V. A., E-mail: volkov.v.a@gmail.com; Enaldiev, V. V. [Russian Academy of Sciences, Kotel’nikov Institute of Radio Engineering and Electronics (Russian Federation)

    2016-03-15

    A brief survey is given of theoretical works on surface states (SSs) in Dirac materials. Within the formalism of envelope wave functions and boundary conditions for these functions, a minimal model is formulated that analytically describes surface and edge states of various (topological and nontopological) types in several systems with Dirac fermions (DFs). The applicability conditions of this model are discussed.

  2. Neutral Higgs bosons in the standard model and in the minimal ...

    Indian Academy of Sciences (India)

    assumed to be CP invariant. Finally, we discuss an alternative MSSM scenario including. CP violation in the Higgs sector. Keywords. Higgs bosons; standard model; minimal supersymmetric model; searches at LEP. 1. Introduction. One of the challenges in high-energy particle physics is the discovery of Higgs bosons.

  3. Discrete Chebyshev nets and a universal permutability theorem

    International Nuclear Information System (INIS)

    Schief, W K

    2007-01-01

    The Pohlmeyer-Lund-Regge system which was set down independently in the contexts of Lagrangian field theories and the relativistic motion of a string and which played a key role in the development of a geometric interpretation of soliton theory is known to appear in a variety of important guises such as the vectorial Lund-Regge equation, the O(4) nonlinear σ-model and the SU(2) chiral model. Here, it is demonstrated that these avatars may be discretized in such a manner that both integrability and equivalence are preserved. The corresponding discretization procedure is geometric and algebraic in nature and based on discrete Chebyshev nets and generalized discrete Lelieuvre formulae. In connection with the derivation of associated Baecklund transformations, it is shown that a generalized discrete Lund-Regge equation may be interpreted as a universal permutability theorem for integrable equations which admit commuting matrix Darboux transformations acting on su(2) linear representations. Three-dimensional coordinate systems and lattices of 'Lund-Regge' type related to particular continuous and discrete Zakharov-Manakov systems are obtained as a by-product of this analysis

  4. Predecessor and permutation existence problems for sequential dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C. L. (Christopher L.); Hunt, H. B. (Harry B.); Marathe, M. V. (Madhav V.); Rosenkrantz, D. J. (Daniel J.); Stearns, R. E. (Richard E.)

    2002-01-01

    A class of finite discrete dynamical systems, called Sequential Dynamical Systems (SDSs), was introduced in [BMR99, BR99] as a formal model for analyzing simulation systems. An SDS S is a triple (G, F, π), where (i) G(V, E) is an undirected graph with n nodes, each node having a state, (ii) F = (f_1, f_2, ..., f_n), with f_i denoting a function associated with node u_i ∈ V, and (iii) π is a permutation of (or total order on) the nodes in V. A configuration of an SDS is an n-vector (b_1, b_2, ..., b_n), where b_i is the value of the state of node v_i. A single SDS transition from one configuration to another is obtained by updating the states of the nodes by evaluating the function associated with each of them in the order given by π. Here, we address the complexity of two basic problems and their generalizations for SDSs. Given an SDS S and a configuration C, the PREDECESSOR EXISTENCE (or PRE) problem is to determine whether there is a configuration C' such that S has a transition from C' to C. (If C has no predecessor, C is known as a garden of Eden configuration.) Our results provide separations between efficiently solvable and computationally intractable instances of the PRE problem. For example, we show that the PRE problem can be solved efficiently for SDSs with Boolean state values when the node functions are symmetric and the underlying graph is of bounded treewidth. In contrast, we show that allowing just one non-symmetric node function renders the problem NP-complete even when the underlying graph is a tree (which has a treewidth of 1). We also show that the PRE problem is efficiently solvable for SDSs whose state values are from a field and whose node functions are linear. Some of the polynomial algorithms also extend to the case where we want to find an ancestor configuration that precedes a given configuration by a logarithmic number of steps. Our results extend some of the earlier results by Sutner [Su95] and Green [@87] on the complexity of
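
    The transition rule itself is short to write down; the following is a sketch under an assumed data layout (an adjacency dict, one local function per node acting on the states of its closed neighbourhood, and the update order π given as a list of nodes).

        def sds_step(adjacency, local_fns, order, config):
            """One SDS transition: visit the nodes in the order given by the permutation and
            replace each node's state with its local function applied to the current states
            of its closed neighbourhood, so updates made earlier in the sweep are visible."""
            state = dict(config)
            for v in order:
                neighbourhood = [v] + sorted(adjacency[v])
                state[v] = local_fns[v](tuple(state[u] for u in neighbourhood))
            return state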

  5. Application of mathematical models to metronomic chemotherapy: What can be inferred from minimal parameterized models?

    Science.gov (United States)

    Ledzewicz, Urszula; Schättler, Heinz

    2017-08-10

    Metronomic chemotherapy refers to the frequent administration of chemotherapy at relatively low, minimally toxic doses without prolonged treatment interruptions. Different from conventional or maximum-tolerated-dose chemotherapy which aims at an eradication of all malignant cells, in a metronomic dosing the goal often lies in the long-term management of the disease when eradication proves elusive. Mathematical modeling and subsequent analysis (theoretical as well as numerical) have become an increasingly more valuable tool (in silico) both for determining conditions under which specific treatment strategies should be preferred and for numerically optimizing treatment regimens. While elaborate, computationally-driven patient specific schemes that would optimize the timing and drug dose levels are still a part of the future, such procedures may become instrumental in making chemotherapy effective in situations where it currently fails. Ideally, mathematical modeling and analysis will develop into an additional decision making tool in the complicated process that is the determination of efficient chemotherapy regimens. In this article, we review some of the results that have been obtained about metronomic chemotherapy from mathematical models and what they infer about the structure of optimal treatment regimens. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
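
    The largest order value rule mentioned above amounts to a descending argsort of the continuous vector; a minimal sketch:

        import numpy as np

        def largest_order_value(x):
            """Largest order value rule: the job with the largest component is scheduled
            first, i.e. the job permutation is the descending argsort of the vector."""
            return list(np.argsort(-np.asarray(x, dtype=float)))

        # example: largest_order_value([0.2, 1.7, 0.9, -0.3]) == [1, 2, 0, 3]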

  7. Determining the parity of a permutation using an experimental NMR qutrit

    International Nuclear Information System (INIS)

    Dogra, Shruti; Arvind; Dorai, Kavita

    2014-01-01

    We present the NMR implementation of a recently proposed quantum algorithm to find the parity of a permutation. In the usual qubit model of quantum computation, it is widely believed that computational speedup requires the presence of entanglement and thus cannot be achieved by a single qubit. On the other hand, a qutrit is qualitatively more quantum than a qubit because of the existence of quantum contextuality and a single qutrit can be used for computing. We use the deuterium nucleus oriented in a liquid crystal as the experimental qutrit. This is the first experimental exploitation of a single qutrit to carry out a computational task. - Highlights: • NMR implementation of a quantum algorithm to determine the parity of a permutation. • Algorithm implemented on a single qutrit. • Computational speedup achieved without quantum entanglement. • Single qutrit shows quantum contextuality
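
    For reference, the classical computation that the qutrit experiment addresses is elementary when the permutation is given explicitly; a sketch of parity via the cycle decomposition:

        def parity(perm):
            """Parity of a permutation of 0..n-1 from its cycle decomposition: a cycle of
            length L contributes L - 1 transpositions, so the permutation is even (+1)
            exactly when the total number of transpositions is even."""
            seen = [False] * len(perm)
            transpositions = 0
            for start in range(len(perm)):
                if not seen[start]:
                    length, j = 0, start
                    while not seen[j]:
                        seen[j] = True
                        j = perm[j]
                        length += 1
                    transpositions += length - 1
            return 1 if transpositions % 2 == 0 else -1

        # parity([1, 0, 2]) == -1 (a single transposition); parity([1, 2, 0]) == +1 (a 3-cycle)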

  8. Information transmission and signal permutation in active flow networks

    Science.gov (United States)

    Woodhouse, Francis G.; Fawcett, Joanna B.; Dunkel, Jörn

    2018-03-01

    Recent experiments show that both natural and artificial microswimmers in narrow channel-like geometries will self-organise to form steady, directed flows. This suggests that networks of flowing active matter could function as novel autonomous microfluidic devices. However, little is known about how information propagates through these far-from-equilibrium systems. Through a mathematical analogy with spin-ice vertex models, we investigate here the input–output characteristics of generic incompressible active flow networks (AFNs). Our analysis shows that information transport through an AFN is inherently different from conventional pressure or voltage driven networks. Active flows on hexagonal arrays preserve input information over longer distances than their passive counterparts and are highly sensitive to bulk topological defects, whose presence can be inferred from marginal input–output distributions alone. This sensitivity further allows controlled permutations on parallel inputs, revealing an unexpected link between active matter and group theory that can guide new microfluidic mixing strategies facilitated by active matter and aid the design of generic autonomous information transport networks.

  9. Permuted tRNA genes of Cyanidioschyzon merolae, the origin of the tRNA molecule and the root of the Eukarya domain.

    Science.gov (United States)

    Di Giulio, Massimo

    2008-08-07

    An evolutionary analysis is conducted on the permuted tRNA genes of Cyanidioschyzon merolae, in which the 5' half of the tRNA molecule is codified at the 3' end of the gene and its 3' half is codified at the 5' end. This analysis has shown that permuted genes cannot be considered as derived traits but seem to possess characteristics that suggest they are ancestral traits, i.e. they originated when tRNA molecule genes originated for the first time. In particular, if the hypothesis that permuted genes are a derived trait were true, then we should not have been able to observe that the most frequent class of permuted genes is that of the anticodon loop type, for the simple reason that this class would derive by random permutation from a class of non-permuted tRNA genes, which instead is the rarest. This would not explain the high frequency with which permuted tRNA genes with perfectly separate 5' and 3' halves were observed. Clearly the mechanism that produced this class of permuted genes would envisage the existence, in an advanced stage of evolution, of minigenes codifying for the 5' and 3' halves of tRNAs which were assembled in a permuted way at the origin of the tRNA molecule, thus producing a high frequency of permuted genes of the class here referred. Therefore, this evidence supports the hypothesis that the genes of the tRNA molecule were assembled by minigenes codifying for hairpin-like RNA molecules, as suggested by one model for the origin of tRNA [Di Giulio, M., 1992. On the origin of the transfer RNA molecule. J. Theor. Biol. 159, 199-214; Di Giulio, M., 1999. The non-monophyletic origin of tRNA molecule. J. Theor. Biol. 197, 403-414]. Moreover, the late assembly of the permuted genes of C. merolae, as well as their ancestrality, strengthens the hypothesis of the polyphyletic origins of these genes. Finally, on the basis of the uniqueness and the ancestrality of these permuted genes, I suggest that the root of the Eukarya domain is in the super

  10. A non-minimally coupled quintom dark energy model on the warped DGP brane

    International Nuclear Information System (INIS)

    Nozari, K; Azizi, T; Setare, M R; Behrouz, N

    2009-01-01

    We construct a quintom dark energy model with two non-minimally coupled scalar fields, one quintessence and the other phantom field, confined to the warped Dvali-Gabadadze-Porrati (DGP) brane. We show that this model accounts for crossing of the phantom divide line in appropriate subspaces of the model parameter space. This crossing occurs for both normal and self-accelerating branches of this DGP-inspired setup.

  11. Minimal time spiking in various ChR2-controlled neuron models.

    Science.gov (United States)

    Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel

    2018-02-01

    We use conductance based neuron models, and the mathematical modeling of optogenetics to define controlled neuron models and we address the minimal time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to theoretically investigate the existence of singular optimal controls, we observe numerically the optimal bang-bang controls.

  12. Multi-objective optimization model of CNC machining to minimize processing time and environmental impact

    Science.gov (United States)

    Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad

    2017-11-01

    Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; one of the machining processes that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to optimize the machining parameters in order to minimize the processing time and the environmental impact. This research developed a multi-objective optimization model to minimize the processing time and environmental impact in the CNC turning process, which results in optimal decision variables of cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.

  13. A golden A5 model of leptons with a minimal NLO correction

    International Nuclear Information System (INIS)

    Cooper, Iain K.; King, Stephen F.; Stuart, Alexander J.

    2013-01-01

    We propose a new A_5 model of leptons which corrects the LO predictions of Golden Ratio mixing via a minimal NLO Majorana mass correction which completely breaks the original Klein symmetry of the neutrino mass matrix. The minimal nature of the NLO correction leads to a restricted and correlated range of the mixing angles allowing agreement within the one sigma range of recent global fits following the reactor angle measurement by Daya Bay and RENO. The minimal NLO correction also preserves the LO inverse neutrino mass sum rule leading to a neutrino mass spectrum that extends into the quasi-degenerate region allowing the model to be accessible to the current and future neutrinoless double beta decay experiments

  14. A Minimal Model to Explore the Influence of Distant Modes on Mode-Coupling Instabilities

    Science.gov (United States)

    Kruse, Sebastian; Hoffmann, Norbert

    2010-09-01

    The phenomenon of mode-coupling instability is one of the most frequently explored mechanisms to explain self-excited oscillations in sliding systems with friction. A mode-coupling instability is usually due to the coupling of two modes. However, further modes can have an important influence on this coupling. This work extends a well-known minimal model for mode-coupling instabilities into a new minimal model in order to explore the influence of a distant mode on the classical mode-coupling pattern. The model is explored and it is shown that a third mode can have a significant influence on the classical mode-coupling instability in which two modes couple. Different phenomena are analysed, and it is pointed out that distant modes can be ignored only in very special cases and that the onset of friction-induced oscillations can be very sensitive to even minimal variations of a distant mode. Because an academic minimal model is chosen rather than a complex finite-element model, the insight remains rather phenomenological, but a better understanding of the mode-coupling mechanism is gained.
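
    A minimal way to see the mechanism numerically is to assemble a small linear model whose stiffness matrix is made asymmetric by friction and to check whether any eigenvalue of the state-space matrix acquires a positive real part; the Python sketch below does this for an illustrative 3-DOF system (all matrices and parameter values are assumptions, not the paper's model).

```python
# Hedged sketch: a 3-DOF linear model where friction makes the stiffness matrix
# asymmetric; a positive real part of any eigenvalue signals mode-coupling
# (flutter) instability. All matrices and values are illustrative, not the paper's.
import numpy as np

def eigvals(mu, k3=8.0):
    M = np.eye(3)
    C = 0.02 * np.eye(3)
    K = np.array([[4.0, 1.0 + mu, 0.3],    # mu: friction coefficient, breaks symmetry
                  [1.0 - mu, 6.0, 0.3],
                  [0.3, 0.3, k3]])          # k3 tunes the "distant" third mode
    A = np.block([[np.zeros((3, 3)), np.eye(3)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    return np.linalg.eigvals(A)

for mu in (0.0, 0.5, 1.5):
    print(mu, max(eigvals(mu).real) > 0)    # True -> unstable (self-excited oscillation)
```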

  15. Bounds on the Higgs mass in the standard model and the minimal supersymmetric standard model

    CERN Document Server

    Quiros, M.

    1995-01-01

    Depending on the Higgs-boson and top-quark masses, M_H and M_t, the effective potential of the Standard Model can develop a non-standard minimum for values of the field much larger than the weak scale. In those cases the standard minimum becomes metastable and the possibility of decay to the non-standard one arises. Comparison of the decay rate to the non-standard minimum at finite (and zero) temperature with the corresponding expansion rate of the Universe allows one to identify the region, in the (M_H, M_t) plane, where the Higgs field is sitting at the standard electroweak minimum. In the Minimal Supersymmetric Standard Model, approximate analytical expressions for the Higgs mass spectrum and couplings are worked out, providing an excellent approximation to the numerical results which include all next-to-leading-log corrections. An appropriate treatment of squark decoupling allows one to consider large values of the stop and/or sbottom mixing parameters and thus fix a reliable upper bound on the mass o...

  16. Permutational symmetries for coincidence rates in multimode multiphotonic interferometry

    Science.gov (United States)

    Khalid, Abdullah; Spivak, Dylan; Sanders, Barry C.; de Guise, Hubert

    2018-06-01

    We obtain coincidence rates for passive optical interferometry by exploiting the permutational symmetries of partially distinguishable input photons, and our approach elucidates qualitative features of multiphoton coincidence landscapes. We treat the interferometer input as a product state of any number of photons in each input mode with photons distinguished by their arrival time. Detectors at the output of the interferometer count photons from each output mode over a long integration time. We generalize and prove the claim of Tillmann et al. [Phys. Rev. X 5, 041015 (2015), 10.1103/PhysRevX.5.041015] that coincidence rates can be elegantly expressed in terms of immanants. Immanants are functions of matrices that exhibit permutational symmetries and the immanants appearing in our coincidence-rate expressions share permutational symmetries with the input state. Our results are obtained by employing representation theory of the symmetric group to analyze systems of an arbitrary number of photons in arbitrarily sized interferometers.
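
    For the special case of fully indistinguishable photons the relevant immanant reduces to the permanent, so a coincidence probability can be estimated as the squared modulus of the permanent of the submatrix of the interferometer unitary selected by the occupied modes. The Python sketch below computes a permanent with Ryser's inclusion-exclusion formula; the 3-mode Fourier interferometer is only an illustrative example, not taken from the paper.

```python
# Sketch: for fully indistinguishable single photons, a coincidence probability
# is |Perm(U_sub)|^2, where U_sub is the submatrix of the interferometer unitary
# selected by the occupied input/output modes (the permanent is the immanant of
# the trivial irrep). Exponential-time Ryser formula, fine for small matrices.
from itertools import combinations
import numpy as np

def permanent(A):
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):                # inclusion-exclusion over column subsets
        for cols in combinations(range(n), r):
            total += (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# toy 3-mode balanced interferometer (discrete Fourier transform matrix)
U = np.array([[np.exp(2j * np.pi * j * k / 3) for k in range(3)]
              for j in range(3)]) / np.sqrt(3)
print(abs(permanent(U)) ** 2)   # probability of one photon in each output mode
```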

  17. Secure physical layer using dynamic permutations in cognitive OFDMA systems

    DEFF Research Database (Denmark)

    Meucci, F.; Wardana, Satya Ardhy; Prasad, Neeli R.

    2009-01-01

    This paper proposes a novel lightweight mechanism for a secure Physical (PHY) layer in a Cognitive Radio Network (CRN) using Orthogonal Frequency Division Multiplexing (OFDM). The user's data symbols are mapped over the physical subcarriers with a permutation formula. The PHY layer is secured with a random and dynamic subcarrier permutation which is based on a single piece of pre-shared information and depends on Dynamic Spectrum Access (DSA). The dynamic subcarrier permutation varies over time, geographical location and environment status, resulting in a very robust protection that ensures confidentiality. The method is shown to be effective also for existing non-cognitive systems. The proposed mechanism is effective against eavesdropping even if the eavesdropper adopts long-time pattern analysis, thus protecting the cryptography techniques of higher layers. The correlation properties
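
    The Python sketch below is a toy illustration of the general idea rather than the paper's actual permutation formula: a per-frame subcarrier permutation is derived deterministically from a pre-shared secret together with dynamic context (time slot and a sensed-spectrum state), so both legitimate endpoints can reproduce it while it keeps changing from an eavesdropper's point of view. All names and parameters are assumptions.

```python
# Toy illustration (not the paper's exact mapping): derive a per-frame subcarrier
# permutation from a pre-shared secret plus dynamic context (time slot, sensed
# spectrum state), so data symbols hop over subcarriers unpredictably.
import hashlib, random

def subcarrier_permutation(secret: bytes, time_slot: int, dsa_state: bytes, n_subcarriers: int):
    seed = hashlib.sha256(secret + time_slot.to_bytes(8, "big") + dsa_state).digest()
    rng = random.Random(seed)                 # deterministic for both endpoints
    perm = list(range(n_subcarriers))
    rng.shuffle(perm)                         # Fisher-Yates shuffle
    return perm

perm = subcarrier_permutation(b"pre-shared-key", time_slot=42,
                              dsa_state=b"\x01\x03", n_subcarriers=16)
symbols = list(range(16))                     # data symbols to map
mapped = [symbols[perm[k]] for k in range(16)]
print(perm, mapped)
```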

  18. Scattering matrices for Φ1,2 perturbed conformal minimal models in absence of kink states

    International Nuclear Information System (INIS)

    Koubek, A.; Martins, M.J.; Mussardo, G.

    1991-05-01

    We determine the spectrum and the factorizable S-matrices of the massive excitations of the nonunitary minimal models M_{2,2n+1} perturbed by the operator Φ_{1,2}. These models present no kinks as asymptotic states, as follows from the reduction of the Zhiber-Mikhailov-Shabat model with respect to the quantum group SL(2)_q found by Smirnov. We also give the whole set of S-matrices of the nonunitary minimal model M_{2,9} perturbed by the operator Φ_{1,4}, which is related to an RSOS reduction for the Φ_{1,2} operator of the unitary model M_{8,9}. The thermodynamic Bethe ansatz and the truncated conformal space approach are applied to these scattering theories in order to support their interpretation. (orig.)

  19. An approach to gauge hierarchy in the minimal SU(5) model of grand unification

    International Nuclear Information System (INIS)

    Ghose, P.

    1982-08-01

    It is shown that if all mass generation through spontaneous symmetry breaking is predominantly caused by scalar loops in the minimal SU(5) model of grand unification, it is possible to have an arbitrarily large gauge hierarchy m_X >> m_W with all Higgs bosons superheavy. No fine tuning is necessary at any order. (author)

  20. Esscher transforms and the minimal entropy martingale measure for exponential Lévy models

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Sgarra, C.

    In this paper we offer a systematic survey and comparison of the Esscher martingale transform for linear processes, the Esscher martingale transform for exponential processes, and the minimal entropy martingale measure for exponential Lévy models, and present some new results in order to give...

  1. Minimal representations of supersymmetry and 1D N-extended σ-models

    International Nuclear Information System (INIS)

    Toppan, Francesco

    2008-01-01

    We discuss the minimal representations of the 1D N-extended supersymmetry algebra (the Z_2-graded symmetry algebra of supersymmetric quantum mechanics) linearly realized on a finite number of fields depending on a real parameter t, the time. Knowledge of these representations allows one to construct one-dimensional sigma-models with extended off-shell supersymmetries without using superfields. (author)

  2. Permutation entropy of fractional Brownian motion and fractional Gaussian noise

    International Nuclear Information System (INIS)

    Zunino, L.; Perez, D.G.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2008-01-01

    We have worked out theoretical curves for the permutation entropy of fractional Brownian motion and fractional Gaussian noise by using the Bandt and Shiha [C. Bandt, F. Shiha, J. Time Ser. Anal. 28 (2007) 646] theoretical predictions for their corresponding relative frequencies. Comparisons with numerical simulations show an excellent agreement. Furthermore, the entropy gap in the transition between these processes, observed previously via numerical results, has here been theoretically validated. We have also analyzed the behaviour of the permutation entropy of fractional Gaussian noise for different time delays.
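
    For readers who want to reproduce such curves numerically, the following Python sketch implements the standard Bandt-Pompe permutation entropy (order m, delay tau), normalized by log m!; the white-noise example stands in for fractional Gaussian noise with H = 0.5, and the parameter choices are illustrative.

```python
# Bandt-Pompe permutation entropy of a time series (order m, delay tau),
# normalized by log(m!); a minimal sketch for experimenting with fGn/fBm paths.
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=4, tau=1):
    patterns = Counter()
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + m * tau:tau]
        patterns[tuple(np.argsort(window))] += 1   # ordinal pattern of the window
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / math.log(math.factorial(m))

white_noise = np.random.default_rng(0).standard_normal(10000)   # fGn with H = 0.5
print(permutation_entropy(white_noise))   # close to 1 for white noise
```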

  3. A studentized permutation test for three-arm trials in the 'gold standard' design.

    Science.gov (United States)

    Mütze, Tobias; Konietschke, Frank; Munk, Axel; Friede, Tim

    2017-03-15

    The 'gold standard' design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This trial design is recommended when it is ethically justifiable, and it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied extensively over the past years. However, these methods often tend to be liberal or conservative when distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the 'gold standard' design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations. Emphasis is put on whether the studentized permutation test meets the target significance level. For comparison purposes, commonly used Wald-type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that, for count data, the presented studentized permutation test for assessing non-inferiority in three-arm trials in the 'gold standard' design outperforms its competitors, for instance the test based on a quasi-Poisson model. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN). Copyright © 2016 John Wiley & Sons, Ltd.
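
    The Python sketch below illustrates only the generic mechanics of a studentized permutation test, a Welch-type studentized difference of means recomputed over permuted group labels, not the specific retention-of-effect statistic or the count-data models studied in the paper; sample sizes, effect sizes and the number of permutations are arbitrary.

```python
# Sketch of a studentized permutation test for the difference in means between
# two arms; it only illustrates the studentization + permutation mechanics.
import numpy as np

def welch_t(x, y):
    nx, ny = len(x), len(y)
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / nx + y.var(ddof=1) / ny)

def studentized_permutation_test(x, y, n_perm=5000, seed=1):
    rng = np.random.default_rng(seed)
    t_obs = welch_t(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)         # permute group labels
        if abs(welch_t(perm[:len(x)], perm[len(x):])) >= abs(t_obs):
            count += 1
    return (count + 1) / (n_perm + 1)          # two-sided permutation p-value

rng = np.random.default_rng(0)
print(studentized_permutation_test(rng.normal(0.3, 1, 40), rng.normal(0.0, 2, 60)))
```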

  4. CP asymmetry in tau slepton decay in the minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Yang Weimin; Du Dongsheng

    2002-01-01

    We investigate CP violation asymmetry in the decay of a tau slepton into a tau neutrino and a chargino in the minimal supersymmetric standard model. The new source of CP violation is the complex mixing in the tau slepton sector. The rate asymmetry between the decays of the tau slepton and its CP conjugate process can be of the order of 10^{-3} in some region of the parameter space of the minimal supergravity scenario, which will possibly be detectable in near-future collider experiments.

  5. A minimal spatial cell lineage model of epithelium: tissue stratification and multi-stability

    Science.gov (United States)

    Yeh, Wei-Ting; Chen, Hsuan-Yi

    2018-05-01

    A minimal model which includes spatial and cell lineage dynamics for stratified epithelia is presented. The dependence of the tissue steady state on the cell differentiation model, cell proliferation rate, cell differentiation rate, and other parameters is studied numerically and analytically. Our minimal model shows some important features. First, we find that morphogen- or mechanical-stress-mediated interaction is necessary to maintain a healthy stratified epithelium. Furthermore, compared with tissues in which cell differentiation can take place only during cell division, tissues in which cell division and cell differentiation are decoupled can achieve a relatively higher degree of stratification. Finally, our model also shows that in the presence of short-range interactions, it is possible for a tissue to have multiple steady states. The relation between our results and tissue morphogenesis or lesions is discussed.

  6. A Minimal Supersymmetric Model of Particle Physics and the Early Universe

    CERN Document Server

    Buchmüller, W; Kamada, K; Schmitz, K

    2014-01-01

    We consider a minimal supersymmetric extension of the Standard Model, with right-handed neutrinos and local B-L, the difference between baryon and lepton number, a symmetry which is spontaneously broken at the scale of grand unification. To a large extent, the parameters of the model are determined by gauge and Yukawa couplings of quarks and leptons. We show that this minimal model can successfully account for the earliest phases of the cosmological evolution: Inflation is driven by the energy density of a false vacuum of unbroken B-L symmetry, which ends in tachyonic preheating, i.e. the decay of the false vacuum, followed by a matter dominated phase with heavy B-L Higgs bosons. Nonthermal and thermal processes produce an abundance of heavy neutrinos whose decays generate primordial entropy, baryon asymmetry via leptogenesis and dark matter consisting of gravitinos or nonthermal WIMPs. The model predicts relations between neutrino and superparticle masses and a characteristic spectrum of g...

  7. BRST cohomology ring in 2D gravity coupled to minimal models

    International Nuclear Information System (INIS)

    Kanno, H.; Sarmadi, M.H.

    1992-08-01

    The ring structure of Lian-Zuckerman states for (q,p) minimal models coupled to gravity is shown to be R = R_0 × C[w, w^{-1}], where R_0 is the ring of ghost-number-zero operators generated by two elements and w is an operator of ghost number -1. Some examples are discussed in detail. For these models the currents are also discussed and their algebra is shown to contain the Virasoro algebra. (author). 21 refs

  8. Matching allele dynamics and coevolution in a minimal predator-prey replicator model

    International Nuclear Information System (INIS)

    Sardanyes, Josep; Sole, Ricard V.

    2008-01-01

    A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities with a strength of interaction influenced by a pair of haploid diallelic loci is studied with a deterministic time continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency the more efficient genotype asymptotically achieves lower stationary concentrations

  9. On exotic supersymmetries of the φ1,3 deformation of minimal models

    International Nuclear Information System (INIS)

    Kadiri, A.; Saidi, E.H.; Zerouaoui, S.J.; Sedra, M.B.

    1994-07-01

    Using algebraic and field-theoretical methods, we study the fractional spin symmetries of the φ_{1,3} deformation of minimal models. The particular example of the D=2 three-state tricritical Potts model is examined in detail. Various models based on subalgebras and appropriate discrete automorphism groups of the two-dimensional fractional spin algebra are obtained. General features such as superspace and superfield representations, the U_q(sl_2) symmetry, the spontaneous exotic supersymmetry breaking, and relations with the N=2 Landau-Ginzburg models, as well as other aspects, are discussed. (author). 24 refs

  10. Structural differences of matrix metalloproteinases. Homology modeling and energy minimization of enzyme-substrate complexes

    DEFF Research Database (Denmark)

    Terp, G E; Christensen, I T; Jørgensen, Flemming Steen

    2000-01-01

    Matrix metalloproteinases are extracellular enzymes taking part in the remodeling of the extracellular matrix. The structures of the catalytic domain of MMP1, MMP3, MMP7 and MMP8 are known, but structures of other enzymes belonging to this family still remain to be determined. A general approach to the homology modeling of matrix metalloproteinases, exemplified by the modeling of MMP2, MMP9, MMP12 and MMP14, is described. The models were refined using an energy minimization procedure developed for matrix metalloproteinases. This procedure includes incorporation of parameters for zinc and calcium ions in the AMBER 4.1 force field, applying a non-bonded approach and a full ion charge representation. Energy minimization of the apoenzymes yielded structures with distorted active sites, while reliable three-dimensional structures of the enzymes containing a substrate in the active site were obtained. The structural

  11. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    Science.gov (United States)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company will increase its profit. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop an optimization model to minimize the production cost and the environmental impact in the CNC turning process. The model is a multi-objective optimization in which cutting speed and feed rate serve as the decision variables. The constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. A numerical example is given to show the implementation of the model, which is solved using OptQuest of the Oracle Crystal Ball software. The results of the optimization indicate that the model can be used to optimize the cutting parameters so as to minimize both the production cost and the environmental impact.

  12. Infinity-Norm Permutation Covering Codes from Cyclic Groups

    OpenAIRE

    Karni, Ronen; Schwartz, Moshe

    2017-01-01

    We study covering codes of permutations with the $\\ell_\\infty$-metric. We provide a general code construction, which uses smaller building-block codes. We study cyclic transitive groups as building blocks, determining their exact covering radius, and showing linear-time algorithms for finding a covering codeword. We also bound the covering radius of relabeled cyclic transitive groups under conjugation.
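
    As a concrete (if brute-force) illustration of the objects involved, the Python sketch below computes the ℓ∞ covering radius of the code formed by the cyclic shifts of the identity permutation for very small n; this exhaustive search is only feasible for tiny n and is not the paper's construction or algorithm.

```python
# Brute-force check (tiny n only): the l_inf covering radius of the cyclic group
# generated by the n-cycle i -> i+1 (mod n), i.e. the largest distance from any
# permutation to its nearest codeword, with d(s, p) = max_i |s(i) - p(i)|.
from itertools import permutations

def linf(s, p):
    return max(abs(a - b) for a, b in zip(s, p))

def covering_radius_cyclic(n):
    code = [tuple((i + k) % n for i in range(n)) for k in range(n)]   # cyclic shifts
    return max(min(linf(s, c) for c in code) for s in permutations(range(n)))

for n in range(2, 7):
    print(n, covering_radius_cyclic(n))
```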

  13. Testing for changes using permutations of U-statistics

    Czech Academy of Sciences Publication Activity Database

    Horvath, L.; Hušková, Marie

    2005-01-01

    Roč. 2005, č. 128 (2005), s. 351-371 ISSN 0378-3758 R&D Projects: GA ČR GA201/00/0769 Institutional research plan: CEZ:AV0Z10750506 Keywords : U-statistics * permutations * change-point * weighted approximation * Brownian bridge Subject RIV: BD - Theory of Information Impact factor: 0.481, year: 2005

  14. Mixed-order phase transition in a minimal, diffusion-based spin model.

    Science.gov (United States)

    Fronczak, Agata; Fronczak, Piotr

    2016-07-01

    In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.

  15. Parallel-Batch Scheduling with Two Models of Deterioration to Minimize the Makespan

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2014-01-01

    We consider bounded parallel-batch scheduling with two models of deterioration, in which the processing time is p_j = a_j + αt in the first model and p_j = a + α_j t in the second model. The objective is to minimize the makespan. We present O(n log n)-time algorithms for the respective single-machine problems, and we propose fully polynomial-time approximation schemes to solve the identical-parallel-machine and uniform-parallel-machine problems, respectively.
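
    For the first deterioration model, once a batching is fixed the makespan follows from a simple recursion, since all jobs of a batch start together and the batch takes the largest a_j plus α times the common start time; the Python sketch below evaluates that recursion for a given batch sequence (the batching itself, which the paper's O(n log n) algorithms construct, is assumed given here).

```python
# Sketch: makespan of a given batch sequence on one bounded parallel-batch
# machine under the first deterioration model, p_j = a_j + alpha * t, where a
# batch's processing time is the largest p_j of its jobs at the batch start time.
def makespan(batches, alpha, t0=0.0):
    t = t0
    for batch in batches:                  # batch = list of basic times a_j
        t += max(batch) + alpha * t        # all jobs in a batch start together
    return t

print(makespan([[3.0, 5.0], [2.0], [4.0, 1.0]], alpha=0.1))
```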

  16. Neutrino CP violation and sign of baryon asymmetry in the minimal seesaw model

    Science.gov (United States)

    Shimizu, Yusuke; Takagi, Kenta; Tanimoto, Morimitsu

    2018-03-01

    We discuss the correlation between the CP violating Dirac phase of the lepton mixing matrix and the cosmological baryon asymmetry based on the leptogenesis in the minimal seesaw model with two right-handed Majorana neutrinos and the trimaximal mixing for neutrino flavors. The sign of the CP violating Dirac phase at low energy is fixed by the observed cosmological baryon asymmetry since there is only one phase parameter in the model. According to the recent T2K and NOνA data of the CP violation, the Dirac neutrino mass matrix of our model is fixed only for the normal hierarchy of neutrino masses.

  17. Minimal Z′ models and the 125 GeV Higgs boson

    International Nuclear Information System (INIS)

    Basso, L.

    2013-01-01

    The 1-loop renormalization group equations for the minimal Z′ models encompassing a type-I seesaw mechanism are studied in the light of the 125 GeV Higgs boson observation. This model is taken as a benchmark for the general case of singlet extensions of the standard model. The most important result is that negative scalar mixing angles are favored with respect to positive values. Further, a minimum value for the latter exists, as well as a maximum value for the masses of the heavy neutrinos, depending on the vacuum expectation value of the singlet scalar.

  18. Electromyographic permutation entropy quantifies diaphragmatic denervation and reinnervation.

    Directory of Open Access Journals (Sweden)

    Christopher Kramer

    Spontaneous reinnervation after diaphragmatic paralysis due to trauma, surgery, tumors and spinal cord injuries is frequently observed. A possible explanation could be collateral reinnervation, since the diaphragm is commonly double-innervated by the (accessory) phrenic nerve. Permutation entropy (PeEn), a complexity measure for time series, may reflect a functional state of neuromuscular transmission by quantifying the complexity of interactions across neural and muscular networks. In an established rat model, electromyographic signals of the diaphragm after phrenicotomy were analyzed using PeEn to quantify denervation and reinnervation. Thirty-three anesthetized rats were unilaterally phrenicotomized. After 1, 3, 9, 27 and 81 days, diaphragmatic electromyographic PeEn was analyzed in vivo from sternal, mid-costal and crural areas of both hemidiaphragms. After euthanasia of the animals, both hemidiaphragms were dissected for fiber type evaluation. The electromyographic incidence of an accessory phrenic nerve was 76%. At day 1 after phrenicotomy, PeEn (normalized values) was significantly diminished in the sternal (median: 0.69; interquartile range: 0.66-0.75) and mid-costal areas (0.68; 0.66-0.72) compared to the non-denervated side (0.84; 0.78-0.90) at threshold p<0.05. In the crural area, innervated by the accessory phrenic nerve, PeEn remained unchanged (0.79; 0.72-0.86). During reinnervation over 81 days, PeEn normalized in the mid-costal area (0.84; 0.77-0.86), whereas it remained reduced in the sternal area (0.77; 0.70-0.81). Fiber type grouping, a histological sign of reinnervation, was found in the mid-costal area in 20% of animals after 27 days and in 80% after 81 days. Collateral reinnervation can restore diaphragm activity after phrenicotomy. Electromyographic PeEn represents a new, distinctive assessment characterizing intramuscular function following denervation and reinnervation.

  19. Consumer preferences for alternative fuel vehicles: Comparing a utility maximization and a regret minimization model

    International Nuclear Information System (INIS)

    Chorus, Caspar G.; Koetse, Mark J.; Hoen, Anco

    2013-01-01

    This paper presents a utility-based and a regret-based model of consumer preferences for alternative fuel vehicles, based on a large-scale stated choice-experiment held among company car leasers in The Netherlands. Estimation and application of random utility maximization and random regret minimization discrete choice models shows that while the two models achieve almost identical fit with the data and differ only marginally in terms of predictive ability, they generate rather different choice probability-simulations and policy implications. The most eye-catching difference between the two models is that the random regret minimization model accommodates a compromise-effect, as it assigns relatively high choice probabilities to alternative fuel vehicles that perform reasonably well on each dimension instead of having a strong performance on some dimensions and a poor performance on others. - Highlights: • Utility- and regret-based models of preferences for alternative fuel vehicles. • Estimation based on stated choice-experiment among Dutch company car leasers. • Models generate rather different choice probabilities and policy implications. • Regret-based model accommodates a compromise-effect
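
    The contrast between the two decision rules can be made concrete with a small numerical sketch: the Python code below computes multinomial-logit (random utility maximization) probabilities and classical random-regret-minimization probabilities for three hypothetical alternatives; the attribute levels and coefficients are invented, but they show how the regret rule favors the "compromise" alternative.

```python
# Sketch: linear-in-attributes logit (RUM) vs. the classical random regret
# minimization (RRM) choice rule. Attribute levels and betas are illustrative.
import numpy as np

X = np.array([[4.0, 2.0],    # alternative 1: strong on attribute 1, weak on 2
              [2.0, 4.0],    # alternative 2: the mirror image
              [3.0, 3.0]])   # alternative 3: the "compromise" alternative
beta = np.array([1.0, 1.0])

def rum_probs(X, beta):
    v = X @ beta
    e = np.exp(v - v.max())
    return e / e.sum()

def rrm_probs(X, beta):
    n = len(X)
    regret = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:                     # regret of i against each competing j
                regret[i] += np.log1p(np.exp(beta * (X[j] - X[i]))).sum()
    e = np.exp(-regret - (-regret).max())
    return e / e.sum()

print(rum_probs(X, beta))   # RUM: indifferent between the three alternatives here
print(rrm_probs(X, beta))   # RRM: assigns the highest probability to alternative 3
```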

  20. Precision electroweak tests of the minimal and flipped SU(5) supergravity models

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, J.L.; Nanopoulos, D.V.; Park, G.T.; Pois, H.; Yuan, K. (Center for Theoretical Physics, Department of Physics, Texas A&M University, College Station, Texas 77843-4242 (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), The Woodlands, Texas 77381 (United States))

    1993-10-01

    We explore the one-loop electroweak radiative corrections in the minimal SU(5) and the no-scale flipped SU(5) supergravity models via explicit calculation of vacuum polarization contributions to the ε_{1,2,3} parameters. Experimentally, ε_{1,2,3} are obtained from a global fit to the CERN LEP observables, and M_W

  1. A minimally-resolved immersed boundary model for reaction-diffusion problems

    OpenAIRE

    Pal Singh Bhalla, A; Griffith, BE; Patankar, NA; Donev, A

    2013-01-01

    We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blo...

  2. Neurophysiological model of tinnitus: dependence of the minimal masking level on treatment outcome.

    Science.gov (United States)

    Jastreboff, P J; Hazell, J W; Graham, R L

    1994-11-01

    Validity of the neurophysiological model of tinnitus (Jastreboff, 1990), outlined in this paper, was tested on data from a multicenter trial of tinnitus masking (Hazell et al., 1985). Minimal masking level, intensity match of tinnitus, and the threshold of hearing have been evaluated in a total of 382 patients before and after 6 months of treatment with maskers, hearing aids, or combination devices. The data have been divided into categories depending on treatment outcome and type of approach used. Results of the analysis revealed that: i) the psychoacoustical description of tinnitus does not possess a predictive value for the outcome of the treatment; ii) minimal masking level changed significantly depending on the treatment outcome, decreasing on average by 5.3 dB in patients reporting improvement, and increasing by 4.9 dB in those whose tinnitus remained the same or worsened; iii) 73.9% of patients reporting improvement had their minimal masking level decreased as compared with 50.5% for patients not showing improvement, which is at the level of random change; iv) the type of device used has no significant impact on the treatment outcome and minimal masking level change; v) intensity match and threshold of hearing did not exhibit any significant changes which can be related to treatment outcome. These results are fully consistent with the neurophysiological interpretation of mechanisms involved in the phenomenon of tinnitus and its alleviation.

  3. On SW-minimal models and N=1 supersymmetric quantum Toda-field theories

    International Nuclear Information System (INIS)

    Mallwitz, S.

    1994-04-01

    Integrable N=1 supersymmetric Toda-field theories are determined by a contragredient simple super Lie algebra (SSLA) with purely fermionic lowering and raising operators. For the SSLAs Osp(3/2) and D(2/1;α) we explicitly construct the higher-spin conserved currents and obtain free-field representations of the super W-algebras SW(3/2,2) and SW(3/2,3/2,2). In constructing the corresponding series of minimal models using covariant vertex operators, we find a necessary restriction on the Cartan matrix of the SSLA, also for the general case. Within this framework, this restriction requires that there be at least one non-vanishing element on the diagonal of the Cartan matrix. This condition is without parallel in bosonic conformal field theory. As a consequence only two series of SSLAs yield minimal models, namely Osp(2n/2n-1) and Osp(2n/2n+1). Subsequently some general aspects of degenerate representations of SW-algebras, notably the fusion rules, are investigated. As an application we discuss minimal models of SW(3/2,2), which were constructed with independent methods, in this framework. A covariant formulation is used throughout this paper. (orig.)

  4. Use of spatial symmetry in atomic--integral calculations: an efficient permutational approach

    International Nuclear Information System (INIS)

    Rouzo, H.L.

    1979-01-01

    The minimal number of independent nonzero atomic integrals that occur over arbitrarily oriented basis orbitals of the form R(r)·Y_lm(Ω) is theoretically derived. The corresponding method can be easily applied to any point group, including the molecular continuous groups C_∞v and D_∞h. On the basis of this (theoretical) lower bound, the efficiency of the permutational approach in generating sets of independent integrals is discussed. It is proved that lobe orbitals are always more efficient than the familiar Cartesian Gaussians, in the sense that GLOS provide the shortest integral lists. Moreover, it appears that the new axial GLOS often lead to a number of integrals which is the theoretical lower bound previously defined. With AGLOS, the numbers of two-electron integrals to be computed, stored, and processed are divided by factors of 2.9 (NH_3), 4.2 (C_5H_5), and 3.6 (C_6H_6) with reference to the corresponding CGTOS calculations. Remembering that in the permutational approach atomic integrals are computed directly without any four-index transformation, it appears that its utilization in connection with AGLOS provides one of the most powerful tools for treating symmetrical species. 34 references

  5. A hybrid genetic algorithm for the distributed permutation flowshop scheduling problem

    Directory of Open Access Journals (Sweden)

    Jian Gao

    2011-08-01

    The Distributed Permutation Flowshop Scheduling Problem (DPFSP) is a newly proposed scheduling problem which generalizes the classical permutation flow shop scheduling problem. The DPFSP is NP-hard in general, and studies on algorithms for solving it are still at an early stage. In this paper, we propose a GA-based algorithm, denoted by GA_LS, for solving this problem with the objective of minimizing the maximum completion time. In the proposed GA_LS, crossover and mutation operators are designed to suit the representation of DPFSP solutions, in which a set of partial job sequences is employed. Furthermore, GA_LS utilizes an efficient local search method to explore neighboring solutions. The local search method uses three proposed rules that move jobs within a factory or between two factories. Intensive experiments on benchmark instances extended from the Taillard instances are carried out. The results indicate that the proposed hybrid genetic algorithm can obtain better solutions than all the existing algorithms for the DPFSP, since it obtains a better relative percentage deviation and the differences in the results are statistically significant. It is also seen that the best-known solutions for most instances are updated by our algorithm. Moreover, we also show the efficiency of GA_LS by comparing it with similar genetic algorithms using the existing local search methods.
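
    Two building blocks of the problem can be sketched compactly: the permutation flow shop makespan evaluated per factory (the DPFSP makespan is the maximum over factories) and one local-search move of the reinsertion type, moving a job to its best position in another factory. The Python code below is such a sketch with toy data; it does not reproduce the GA_LS operators or the exact rules of the paper.

```python
# Sketch of two DPFSP ingredients: the permutation flow shop makespan computed
# per factory (overall makespan = max over factories), and a simple local-search
# move that reinserts a job into another factory at its best position.
def pfs_makespan(seq, p):                    # p[job] = list of per-machine times
    if not seq:
        return 0
    m = len(p[seq[0]])
    c = [0] * m                              # completion times on each machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def dpfsp_makespan(factories, p):
    return max(pfs_makespan(seq, p) for seq in factories)

def best_reinsertion(factories, p, job, src, dst):
    """Move `job` from factory `src` to the best position in factory `dst`."""
    base = [list(s) for s in factories]
    base[src].remove(job)
    best = None
    for pos in range(len(base[dst]) + 1):
        cand = [list(s) for s in base]
        cand[dst].insert(pos, job)
        cmax = dpfsp_makespan(cand, p)
        if best is None or cmax < best[1]:
            best = (cand, cmax)
    return best

p = {0: [3, 2], 1: [2, 4], 2: [4, 1], 3: [1, 3]}   # 4 jobs, 2 machines (toy data)
factories = [[0, 1, 2, 3], []]
print(dpfsp_makespan(factories, p), best_reinsertion(factories, p, 2, 0, 1)[1])
```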

  6. Predictions for m{sub t} and M{sub W} in minimal supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, O. [Imperial College, London (United Kingdom). High Energy Physics Group; Cavanaugh, R. [Fermi National Accelerator Lab., Batavia, IL (United States); Illinois Univ., Chicago, IL (United States). Dept. of Physics; Roeck, A. de [European Lab. for Particle Physics (CERN), Geneva (Switzerland); Universitaire Instelling Antwerpen, Wilrijk (Belgium); Ellis, J.R. [European Lab. for Particle Physics (CERN), Geneva (Switzerland); Flaecher, H. [Rochester Univ., NY (United States). Dept. of Physics and Astronomy; Heinemeyer, S. [Instituto de Fisica de Cantabria, Santander (Spain); Isidori, G. [INFN, Laboratori Nazionali di Frascati (Italy); Technische Univ. Muenchen (Germany). Inst. for Advanced Study; Olive, K.A. [Minnesota Univ., Minnesota, MN (United States). William I. Fine Theoretical Physics Institute; Ronga, F.J. [ETH Zuerich (Switzerland). Institute for Particle Physics; Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2009-12-15

    Using a frequentist analysis of experimental constraints within two versions of the minimal supersymmetric extension of the Standard Model, we derive the predictions for the top quark mass, m_t, and the W boson mass, m_W. We find that the supersymmetric predictions for both m_t and m_W, obtained by incorporating all the relevant experimental information and state-of-the-art theoretical predictions, are highly compatible with the experimental values with small remaining uncertainties, yielding an improvement compared to the case of the Standard Model. (orig.)

  7. Phenomenology of minimal Z’ models: from the LHC to the GUT scale

    Directory of Open Access Journals (Sweden)

    Accomando Elena

    2016-01-01

    We consider a class of minimal abelian extensions of the Standard Model with an extra neutral gauge boson Z′ at the TeV scale. In these scenarios an extended scalar sector and heavy right-handed neutrinos are naturally envisaged. We present some of their striking signatures at the Large Hadron Collider, the most interesting arising from a Z′ decaying to heavy neutrino pairs as well as a heavy scalar decaying to two Standard Model Higgses. Using renormalisation group methods, we characterise the high energy behaviours of these extensions and exploit the constraints imposed by the embedding into a wider GUT scenario.

  8. Optimal blood glucose level control using dynamic programming based on minimal Bergman model

    Science.gov (United States)

    Rettian Anggita Sari, Maria; Hartono

    2018-03-01

    The purpose of this article is to simulate the glucose dynamics and insulin kinetics of a diabetic patient. The model used in this research is the non-linear Bergman minimal model. Optimal control theory is then applied to formulate the problem of determining the optimal dose of insulin in the treatment of diabetes mellitus such that the glucose level stays in the normal range over a specified time interval. The optimization problem is solved using dynamic programming. The results show that dynamic programming is quite reliable for representing the interaction between glucose and insulin levels in a diabetes mellitus patient.
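
    The Bergman minimal model underlying such a formulation couples plasma glucose G, remote insulin action X and plasma insulin I, with an exogenous insulin infusion u(t) entering the insulin equation. The Python sketch below simply simulates these equations with forward Euler; the parameter values are textbook-style placeholders rather than the paper's, and the dynamic-programming choice of u(t) is not reproduced.

```python
# Bergman minimal model (glucose G, remote insulin action X, plasma insulin I)
# with an exogenous insulin infusion u(t); parameter values are placeholders,
# and only the dynamics are simulated, not the optimal dosing.
import numpy as np

p1, p2, p3 = 0.028, 0.025, 1.3e-5     # 1/min, 1/min, 1/min per (uU/ml)
n, V_I = 0.09, 12.0                   # insulin clearance (1/min), distribution volume (l)
Gb, Ib = 90.0, 10.0                   # basal glucose (mg/dl) and insulin (uU/ml)

def simulate(u, G0=250.0, t_end=400.0, dt=0.5):
    G, X, I = G0, 0.0, Ib
    traj = []
    for t in np.arange(0.0, t_end, dt):
        dG = -(p1 + X) * G + p1 * Gb
        dX = -p2 * X + p3 * (I - Ib)
        dI = -n * (I - Ib) + u(t) / V_I
        G, X, I = G + dt * dG, X + dt * dX, I + dt * dI
        traj.append((t, G, I))
    return traj

def trial_infusion(t):                # a fixed trial infusion profile (mU/min), assumed
    return 50.0 if t < 30.0 else 0.0

print(simulate(trial_infusion)[-1])   # final (t, G, I)
```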

  9. Minimal $R+R^2$ Supergravity Models of Inflation Coupled to Matter

    CERN Document Server

    Ferrara, S

    2014-01-01

    The supersymmetric extension of "Starobinsky" $R+\\alpha R^2$ models of inflation is particularly simple in the "new minimal" formalism of supergravity, where the inflaton has no scalar superpartners. This paper is devoted to matter couplings in such supergravity models. We show how in the new minimal formalism matter coupling presents certain features absent in other formalisms. In particular, for the large class of matter couplings considered in this paper, matter must possess an R-symmetry, which is gauged by the vector field which becomes dynamical in the "new minimal" completion of the $R+\\alpha R^2$ theory. Thus, in the dual formulation of the theory, where the gauge vector is part of a massive vector multiplet, the inflaton is the superpartner of the massive vector of a nonlinearly realized R-symmetry. The F-term potential of this theory is of no-scale type, while the inflaton potential is given by the D-term of the gauged R-symmetry. The absolute minimum of the potential is always exactly supersymmetri...

  10. On the topology of the inflaton field in minimal supergravity models

    Energy Technology Data Exchange (ETDEWEB)

    Ferrara, Sergio [Physics Department, Theory Unit, CERN,CH 1211, Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati,Via Enrico Fermi 40, I-00044, Frascati (Italy); Department of Physics and Astronomy, University of California,Los Angeles, CA 90095-1547 (United States); Fré, Pietro [Dipartimento di Fisica, Università di Torino, INFN - Sezione di Torino,via P. Giuria 1, I-10125 Torino (Italy); Sorin, Alexander S. [Bogoliubov Laboratory of Theoretical Physics,and Veksler and Baldin Laboratory of High Energy Physics,Joint Institute for Nuclear Research,141980 Dubna, Moscow Region (Russian Federation)

    2014-04-14

    We consider global issues in minimal supergravity models where a single field inflaton potential emerges. In a particular case we reproduce the Starobinsky model and its description dual to a certain formulation of R+R^2 supergravity. For definiteness we confine our analysis to spaces at constant curvature, either vanishing or negative. Five distinct models arise, two flat models with respectively a quadratic and a quartic potential and three based on the SU(1,1)/U(1) space, where its distinct isometries, elliptic, hyperbolic and parabolic, are gauged. Fayet-Iliopoulos terms are introduced in a geometric way and they turn out to be a crucial ingredient in order to describe the de Sitter inflationary phase of the Starobinsky model.

  11. On the topology of the inflaton field in minimal supergravity models

    Science.gov (United States)

    Ferrara, Sergio; Fré, Pietro; Sorin, Alexander S.

    2014-04-01

    We consider global issues in minimal supergravity models where a single field inflaton potential emerges. In a particular case we reproduce the Starobinsky model and its description dual to a certain formulation of R + R^2 supergravity. For definiteness we confine our analysis to spaces at constant curvature, either vanishing or negative. Five distinct models arise, two flat models with respectively a quadratic and a quartic potential and three based on the SU(1,1)/U(1) space, where its distinct isometries, elliptic, hyperbolic and parabolic, are gauged. Fayet-Iliopoulos terms are introduced in a geometric way and they turn out to be a crucial ingredient in order to describe the de Sitter inflationary phase of the Starobinsky model.

  12. On the Topology of the Inflaton Field in Minimal Supergravity Models

    CERN Document Server

    Ferrara, Sergio; Sorin, Alexander S

    2014-01-01

    We consider global issues in minimal supergravity models where a single field inflaton potential emerges. In a particular case we reproduce the Starobinsky model and its description dual to a certain formulation of R+R^2 supergravity. For definiteness we confine our analysis to spaces at constant curvature, either vanishing or negative. Five distinct models arise, two flat models with respectively a quadratic and a quartic potential and three based on the SU(1,1)/U(1) space where its distinct isometries, elliptic, hyperbolic and parabolic are gauged. Fayet-Iliopoulos terms are introduced in a geometric way and they turn out to be a crucial ingredient in order to describe the de Sitter inflationary phase of the Starobinsky model.

  13. A minimal unified model of disease trajectories captures hallmarks of multiple sclerosis

    KAUST Repository

    Kannan, Venkateshan

    2017-03-29

    Multiple Sclerosis (MS) is an autoimmune disease targeting the central nervous system (CNS), causing demyelination and neurodegeneration that lead to an accumulation of neurological disability. Here we present a minimal computational model involving the immune system and CNS that generates the principal subtypes of the disease observed in patients. The model captures several key features of MS, especially those that distinguish the chronic progressive phase from the relapsing-remitting one. In addition, a rare subtype of the disease, progressive relapsing MS, naturally emerges from the model. The model posits the existence of two key thresholds, one in the immune system and the other in the CNS, that separate dynamically distinct behaviors of the model. Exploring the two-dimensional space of these thresholds, we obtain multiple phases of disease evolution, and these show greater variation than the clinical classification of MS, thus capturing the heterogeneity that is manifested in patients.

  14. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  15. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  16. Sculpting proteins interactively: continual energy minimization embedded in a graphical modeling system.

    Science.gov (United States)

    Surles, M C; Richardson, J S; Richardson, D C; Brooks, F P

    1994-02-01

    We describe a new paradigm for modeling proteins in interactive computer graphics systems--continual maintenance of a physically valid representation, combined with direct user control and visualization. This is achieved by a fast algorithm for energy minimization, capable of real-time performance on all atoms of a small protein, plus graphically specified user tugs. The modeling system, called Sculpt, rigidly constrains bond lengths, bond angles, and planar groups (similar to existing interactive modeling programs), while it applies elastic restraints to minimize the potential energy due to torsions, hydrogen bonds, and van der Waals and electrostatic interactions (similar to existing batch minimization programs), and user-specified springs. The graphical interface can show bad and/or favorable contacts, and individual energy terms can be turned on or off to determine their effects and interactions. Sculpt finds a local minimum of the total energy that satisfies all the constraints using an augmented Lagrange-multiplier method; calculation time increases only linearly with the number of atoms because the matrix of constraint gradients is sparse and banded. On a 100-MHz MIPS R4000 processor (Silicon Graphics Indigo), Sculpt achieves 11 updates per second on a 20-residue fragment and 2 updates per second on an 80-residue protein, using all atoms except non-H-bonding hydrogens, and without electrostatic interactions. Applications of Sculpt are described: to reverse the direction of bundle packing in a designed 4-helix bundle protein, to fold up a 2-stranded beta-ribbon into an approximate beta-barrel, and to design the sequence and conformation of a 30-residue peptide that mimics one partner of a protein subunit interaction. Computer models that are both interactive and physically realistic (within the limitations of a given force field) have 2 significant advantages: (1) they make feasible the modeling of very large changes (such as needed for de novo design), and

  17. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  18. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 +/- 0.33 pixels, while the error is 1.99 +/- 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  19. A Weak Quantum Blind Signature with Entanglement Permutation

    Science.gov (United States)

    Lou, Xiaoping; Chen, Zhigang; Guo, Ying

    2015-09-01

    Motivated by the permutation encryption algorithm, a weak quantum blind signature (QBS) scheme is proposed. It involves three participants, including the sender Alice, the signatory Bob and the trusted entity Charlie, in four phases, i.e., initializing phase, blinding phase, signing phase and verifying phase. In a small-scale quantum computation network, Alice blinds the message based on a quantum entanglement permutation encryption algorithm that embraces the chaotic position string. Bob signs the blinded message with private parameters shared beforehand while Charlie verifies the signature's validity and recovers the original message. Analysis shows that the proposed scheme achieves the secure blindness for the signer and traceability for the message owner with the aid of the authentic arbitrator who plays a crucial role when a dispute arises. In addition, the signature can neither be forged nor disavowed by the malicious attackers. It has a wide application to E-voting and E-payment system, etc.

  20. Symbolic Detection of Permutation and Parity Symmetries of Evolution Equations

    KAUST Repository

    Alghamdi, Moataz

    2017-06-18

    We introduce a symbolic computational approach to detecting all permutation and parity symmetries in any general evolution equation, and to generating associated invariant polynomials, from given monomials, under the action of these symmetries. Traditionally, discrete point symmetries of differential equations are systemically found by solving complicated nonlinear systems of partial differential equations; in the presence of Lie symmetries, the process can be simplified further. Here, we show how to find parity- and permutation-type discrete symmetries purely based on algebraic calculations. Furthermore, we show that such symmetries always form groups, thereby allowing for the generation of new group-invariant conserved quantities from known conserved quantities. This work also contains an implementation of the said results in Mathematica. In addition, it includes, as a motivation for this work, an investigation of the connection between variational symmetries, described by local Lie groups, and conserved quantities in Hamiltonian systems.
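
    In the same spirit, though far simpler than the Mathematica implementation described in the abstract, the Python/SymPy sketch below checks purely algebraically whether a toy coupled evolution system is invariant under the permutation of its dependent variables and under a parity flip; the example system and the way signs of time derivatives are handled are assumptions made for illustration.

```python
# Sketch in the spirit of the abstract (the paper's code is in Mathematica):
# check algebraically whether the toy system u_t = v^2 u_xx, v_t = u^2 v_xx is
# invariant under the permutation u <-> v and under the parity (u, v) -> (-u, -v).
import sympy as sp

u, v, uxx, vxx = sp.symbols('u v u_xx v_xx')
rhs = {u: v**2 * uxx, v: u**2 * vxx}          # right-hand sides of u_t and v_t

def is_symmetry(rhs, perm, sign, deriv):
    """perm: image of each field under the permutation; sign: +/-1 parity factor;
    deriv: second-derivative symbol of each field (transforms with the same sign)."""
    subs = {}
    for w in rhs:
        subs[w] = sign[w] * perm[w]
        subs[deriv[w]] = sign[w] * deriv[perm[w]]
    ok = True
    for w in rhs:
        # the time derivative of w picks up sign[w], hence the division below
        transformed = sp.expand(rhs[w].subs(subs, simultaneous=True) / sign[w])
        ok &= sp.simplify(transformed - rhs[perm[w]]) == 0
    return ok

deriv = {u: uxx, v: vxx}
print(is_symmetry(rhs, {u: v, v: u}, {u: 1, v: 1}, deriv))    # permutation u <-> v
print(is_symmetry(rhs, {u: u, v: v}, {u: -1, v: -1}, deriv))  # parity flip
```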

  1. Optimization and experimental realization of the quantum permutation algorithm

    Science.gov (United States)

    Yalçınkaya, I.; Gedik, Z.

    2017-12-01

    The quantum permutation algorithm provides computational speed-up over classical algorithms for determining the parity of a given cyclic permutation. For its n -qubit implementations, the number of required quantum gates scales quadratically with n due to the quantum Fourier transforms included. We show here for the n -qubit case that the algorithm can be simplified so that it requires only O (n ) quantum gates, which theoretically reduces the complexity of the implementation. To test our results experimentally, we utilize IBM's 5-qubit quantum processor to realize the algorithm by using the original and simplified recipes for the 2-qubit case. It turns out that the latter results in a significantly higher success probability which allows us to verify the algorithm more precisely than the previous experimental realizations. We also verify the algorithm for the first time for the 3-qubit case with a considerable success probability by taking the advantage of our simplified scheme.

  2. Searching for beyond the minimal supersymmetric standard model at the laboratory and in the sky

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ju Min

    2010-09-15

    We study the collider signals as well as Dark Matter candidates in supersymmetric models. We show that the collider signatures from a supersymmetric Grand Unification model based on the SO(10) gauge group can be distinguishable from those from the (constrained) minimal supersymmetric Standard Model, even though they share some common features. The N=2 supersymmetry has the characteristically distinct phenomenology, due to the Dirac nature of gauginos, as well as the extra adjoint scalars. We compute the cold Dark Matter relic density including a class of one-loop corrections. Finally, we discuss the detectability of neutralino Dark Matter candidate of the SO(10) model by the direct and indirect Dark Matter search experiments. (orig.)

  3. Searching for beyond the minimal supersymmetric standard model at the laboratory and in the sky

    International Nuclear Information System (INIS)

    Kim, Ju Min

    2010-09-01

    We study the collider signals as well as Dark Matter candidates in supersymmetric models. We show that the collider signatures from a supersymmetric Grand Unification model based on the SO(10) gauge group can be distinguishable from those from the (constrained) minimal supersymmetric Standard Model, even though they share some common features. The N=2 supersymmetry has the characteristically distinct phenomenology, due to the Dirac nature of gauginos, as well as the extra adjoint scalars. We compute the cold Dark Matter relic density including a class of one-loop corrections. Finally, we discuss the detectability of neutralino Dark Matter candidate of the SO(10) model by the direct and indirect Dark Matter search experiments. (orig.)

  4. Inverse modelling and pulsating torque minimization of salient pole non-sinusoidal synchronous machines

    Energy Technology Data Exchange (ETDEWEB)

    Ait-gougam, Y.; Ibtiouen, R.; Touhami, O. [Laboratoire de Recherche en Electrotechnique, Ecole Nationale Polytechnique, BP 182, El-Harrach 16200 (Algeria); Louis, J.-P.; Gabsi, M. [Systemes et Applications des Technologies de l' Information et de l' Energie (SATIE), CNRS UMR 8029, Ecole Normale Superieure de Cachan, 61 Avenue du President Wilson, 94235 Cachan Cedex (France)

    2008-01-15

    Mathematical models of synchronous motors are usually obtained using the classical d-q transformation, under the assumption of salient-pole machines with a sinusoidal field distribution. In this paper, a new inverse modelling approach for synchronous motors is presented. This modelling is derived from the properties of constant-torque curves in Concordia's reference frame. It takes into account the non-sinusoidal field distribution: the EMF as well as the self and mutual inductances have non-sinusoidal variations with respect to the angular rotor position. Both copper losses and torque ripples are minimized by adapted current waveforms calculated from this model. Experimental evaluation was carried out on a DSP-controlled PMSM drive platform. Test results obtained demonstrate the effectiveness of the proposed method in reducing torque ripple. (author)

  5. A minimal supersymmetric model of particle physics and the early universe

    International Nuclear Information System (INIS)

    Buchmueller, W.; Domcke, V.; Kamada, K.; Schmitz, K.

    2013-11-01

    We consider a minimal supersymmetric extension of the Standard Model, with right-handed neutrinos and local B-L, the difference between baryon and lepton number, a symmetry which is spontaneously broken at the scale of grand unification. To a large extent, the parameters of the model are determined by gauge and Yukawa couplings of quarks and leptons. We show that this minimal model can successfully account for the earliest phases of the cosmological evolution: Inflation is driven by the energy density of a false vacuum of unbroken B-L symmetry, which ends in tachyonic preheating, i.e. the decay of the false vacuum, followed by a matter dominated phase with heavy B-L Higgs bosons. Nonthermal and thermal processes produce an abundance of heavy neutrinos whose decays generate primordial entropy, baryon asymmetry via leptogenesis and dark matter consisting of gravitinos or nonthermal WIMPs. The model predicts relations between neutrino and superparticle masses and a characteristic spectrum of gravitational waves.

  6. A minimal supersymmetric model of particle physics and the early universe

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, W.; Domcke, V.; Kamada, K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Schmitz, K. [Tokyo Univ., Kashiwa (Japan). Kavli IPMU, TODIAS

    2013-11-15

    We consider a minimal supersymmetric extension of the Standard Model, with right-handed neutrinos and local B-L, the difference between baryon and lepton number, a symmetry which is spontaneously broken at the scale of grand unification. To a large extent, the parameters of the model are determined by gauge and Yukawa couplings of quarks and leptons. We show that this minimal model can successfully account for the earliest phases of the cosmological evolution: Inflation is driven by the energy density of a false vacuum of unbroken B-L symmetry, which ends in tachyonic preheating, i.e. the decay of the false vacuum, followed by a matter dominated phase with heavy B-L Higgs bosons. Nonthermal and thermal processes produce an abundance of heavy neutrinos whose decays generate primordial entropy, baryon asymmetry via leptogenesis and dark matter consisting of gravitinos or nonthermal WIMPs. The model predicts relations between neutrino and superparticle masses and a characteristic spectrum of gravitational waves.

  7. Generalized permutation symmetry and the flavour problem in SU(2)_L x U(1)

    International Nuclear Information System (INIS)

    Ecker, G.

    1984-01-01

    A generalized permutation group is introduced as a possible horizontal symmetry for SU(2)_L x U(1) gauge theories. It leads to the unique two-generation quark mass matrices with a correct prediction for the Cabibbo angle. For three generations the model exhibits spontaneous CP violation, correlates the Kobayashi-Maskawa mixing parameters s_1 and s_3 and predicts an upper bound for the running top quark mass of approximately 45 GeV. The hierarchy of generations is due to a hierarchy of vacuum expectation values rather than of Yukawa coupling constants. (orig.)

  8. Information sets as permutation cycles for quadratic residue codes

    Directory of Open Access Journals (Sweden)

    Richard A. Jenson

    1982-01-01

    The two cases p=7 and p=23 are the only known cases where the automorphism group of the [p+1, (p+1)/2] extended binary quadratic residue code, Q(p), properly contains PSL(2,p). These codes have some of their information sets represented as permutation cycles from Aut(Q(p)). Analysis proves that all information sets of Q(7) are so represented but those of Q(23) are not.

  9. Successful attack on permutation-parity-machine-based neural cryptography.

    Science.gov (United States)

    Seoane, Luís F; Ruttor, Andreas

    2012-02-01

    An algorithm is presented which implements a probabilistic attack on the key-exchange protocol based on permutation parity machines. Instead of imitating the synchronization of the communicating partners, the strategy consists of a Monte Carlo method to sample the space of possible weights during inner rounds and an analytic approach to convey the extracted information from one outer round to the next one. The results show that the protocol under attack fails to synchronize faster than an eavesdropper using this algorithm.

  10. A chronicle of permutation statistical methods 1920–2000, and beyond

    CERN Document Server

    Berry, Kenneth J; Mielke Jr , Paul W

    2014-01-01

    The focus of this book is on the birth and historical development of permutation statistical methods from the early 1920s to the near present. Beginning with the seminal contributions of R.A. Fisher, E.J.G. Pitman, and others in the 1920s and 1930s, permutation statistical methods were initially introduced to validate the assumptions of classical statistical methods. Permutation methods have advantages over classical methods in that they are optimal for small data sets and non-random samples, are data-dependent, and are free of distributional assumptions. Permutation probability values may be exact, or estimated via moment- or resampling-approximation procedures. Because permutation methods are inherently computationally-intensive, the evolution of computers and computing technology that made modern permutation methods possible accompanies the historical narrative. Permutation analogs of many well-known statistical tests are presented in a historical context, including multiple correlation and regression, ana...
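
    As a concrete illustration of the resampling-approximation procedures the book surveys, here is a minimal two-sample permutation test for a difference in means; the data and the number of resamples are arbitrary choices, not taken from the book.

      # Minimal resampling permutation test: shuffle group labels, recompute the
      # statistic, and estimate the p-value from the permutation distribution.
      import random

      def permutation_test(x, y, n_perm=10000, seed=0):
          rng = random.Random(seed)
          observed = abs(sum(x) / len(x) - sum(y) / len(y))
          pooled = list(x) + list(y)
          count = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)
              xs, ys = pooled[:len(x)], pooled[len(x):]
              if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
                  count += 1
          return (count + 1) / (n_perm + 1)   # add-one correction keeps p > 0

      print(permutation_test([4.1, 5.0, 4.7, 5.3], [3.2, 3.9, 3.5, 4.0]))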

  11. Sorting signed permutations by inversions in O(nlogn) time.

    Science.gov (United States)

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
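
    To make the problem statement concrete, the sketch below applies signed reversals and finds the inversion distance by brute-force breadth-first search; this is purely illustrative and is emphatically not the linear-time, subquadratic, or O(nlogn) algorithms discussed in the abstract, since it scales exponentially with the permutation size.

      # Brute-force illustration of "sorting by inversions" for tiny signed permutations.
      from collections import deque

      def reverse(perm, i, j):
          """Apply a signed reversal to positions i..j (inclusive)."""
          segment = tuple(-x for x in reversed(perm[i:j + 1]))
          return tuple(perm[:i]) + segment + tuple(perm[j + 1:])

      def inversion_distance(perm):
          start, target = tuple(perm), tuple(range(1, len(perm) + 1))
          frontier, seen = deque([(start, 0)]), {start}
          while frontier:
              state, d = frontier.popleft()
              if state == target:
                  return d
              for i in range(len(state)):
                  for j in range(i, len(state)):
                      nxt = reverse(state, i, j)
                      if nxt not in seen:
                          seen.add(nxt)
                          frontier.append((nxt, d + 1))

      print(inversion_distance((3, -1, 2)))   # feasible only for very small n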

  12. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
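
    For orientation, the classical LS-SVM that the paper builds on reduces training to a single linear system; the numpy sketch below implements that baseline for kernel regression. It does not implement the robust mean-variance objective proposed in the paper, and the RBF kernel, gamma value, and data are arbitrary assumptions.

      # Baseline (classical) LS-SVM regression: solve one linear system for (b, alpha).
      import numpy as np

      def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
          n = len(y)
          sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
          K = np.exp(-sq / (2 * sigma ** 2))                # RBF kernel matrix
          A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                        [np.ones((n, 1)), K + np.eye(n) / gamma]])
          rhs = np.concatenate(([0.0], y))
          sol = np.linalg.solve(A, rhs)
          return sol[0], sol[1:], sigma, X                  # bias, alphas, kernel width, data

      def lssvm_predict(model, Xnew):
          b, alpha, sigma, X = model
          sq = np.sum((Xnew[:, None, :] - X[None, :, :]) ** 2, axis=-1)
          return np.exp(-sq / (2 * sigma ** 2)) @ alpha + b

      X = np.linspace(0, 6, 40).reshape(-1, 1)
      y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).standard_normal(40)
      model = lssvm_fit(X, y)
      print(lssvm_predict(model, np.array([[1.5]])))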

  13. The quantum group structure of 2D gravity and minimal models. Pt. 1

    International Nuclear Information System (INIS)

    Gervais, J.L.

    1990-01-01

    On the unit circle, an infinite family of chiral operators is constructed, whose exchange algebra is given by the universal R-matrix of the quantum group SL(2) q . This establishes the precise connection between the chiral algebra of two dimensional gravity or minimal models and this quantum group. The method is to relate the monodromy properties of the operator differential equations satisfied by the generalized vertex operators with the exchange algebra of SL(2) q . The formulae so derived, which generalize an earlier particular case worked out by Babelon, are remarkably compact and may be entirely written in terms of 'q-deformed' factorials and binomial coefficients. (orig.)

  14. Connecting Dirac and Majorana neutrino mass matrices in the minimal left-right symmetric model.

    Science.gov (United States)

    Nemevšek, Miha; Senjanović, Goran; Tello, Vladimir

    2013-04-12

    Probing the origin of neutrino mass by disentangling the seesaw mechanism is one of the central issues of particle physics. We address it in the minimal left-right symmetric model and show how the knowledge of light and heavy neutrino masses and mixings suffices to determine their Dirac Yukawa couplings. This in turn allows one to make predictions for a number of high and low energy phenomena, such as decays of heavy neutrinos, neutrinoless double beta decay, electric dipole moments of charged leptons, and neutrino transition moments. We also discuss a way of reconstructing the neutrino Dirac Yukawa couplings at colliders such as the LHC.

  15. Adiabatic density perturbations and matter generation from the minimal supersymmetric standard model.

    Science.gov (United States)

    Enqvist, Kari; Kasuya, Shinta; Mazumdar, Anupam

    2003-03-07

    We propose that the inflaton is coupled to ordinary matter only gravitationally and that it decays into a completely hidden sector. In this scenario both baryonic and dark matter originate from the decay of a flat direction of the minimal supersymmetric standard model, which is shown to generate the desired adiabatic perturbation spectrum via the curvaton mechanism. The requirement that the energy density along the flat direction dominates over the inflaton decay products fixes the flat direction almost uniquely. The present residual energy density in the hidden sector is typically shown to be small.

  16. Casimir effect at finite temperature for pure-photon sector of the minimal Standard Model Extension

    Energy Technology Data Exchange (ETDEWEB)

    Santos, A.F., E-mail: alesandroferreira@fisica.ufmt.br [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900, Cuiabá, Mato Grosso (Brazil); Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road Victoria, BC (Canada); Khanna, Faqir C., E-mail: khannaf@uvic.ca [Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road Victoria, BC (Canada)

    2016-12-15

    Dynamics between particles is governed by Lorentz and CPT symmetry. There is a violation of Parity (P) and CP symmetry at low levels. The unified theory, that includes particle physics and quantum gravity, may be expected to be covariant with Lorentz and CPT symmetry. At high enough energies, will the unified theory display violation of any symmetry? The Standard Model Extension (SME), with Lorentz and CPT violating terms, has been suggested to include particle dynamics. The minimal SME in the pure photon sector is considered in order to calculate the Casimir effect at finite temperature.

  17. Right-handed quark mixings in minimal left-right symmetric model with general CP violation

    International Nuclear Information System (INIS)

    Zhang Yue; Ji Xiangdong; An Haipeng; Mohapatra, R. N.

    2007-01-01

    We solve systematically for the right-handed quark mixings in the minimal left-right symmetric model which generally has both explicit and spontaneous CP violations. The leading-order result has the same hierarchical structure as the left-handed Cabibbo-Kobayashi-Maskawa mixing, but with additional CP phases originating from a spontaneous CP-violating phase in the Higgs vacuum expectation values. We explore the phenomenology entailed by the new right-handed mixing matrix, particularly the bounds on the mass of W_R and the CP phase of the Higgs vacuum expectation values

  18. Fock model and Segal-Bargmann transform for minimal representations of Hermitian Lie groups

    DEFF Research Database (Denmark)

    Hilgert, Joachim; Kobayashi, Toshiyuki; Möllers, Jan

    2012-01-01

    For any Hermitian Lie group G of tube type we construct a Fock model of its minimal representation. The Fock space is defined on the minimal nilpotent K_C-orbit X in p_C and the L^2-inner product involves a K-Bessel function as density. Here K is a maximal compact subgroup of G, and g_C = k_C + p_C is a complexified Cartan decomposition. In this realization the space of k-finite vectors consists of holomorphic polynomials on X. The reproducing kernel of the Fock space is calculated explicitly in terms of an I-Bessel function. We further find an explicit formula for a generalized Segal-Bargmann transform which intertwines the Schroedinger and Fock model. Its kernel involves the same I-Bessel function. Using the Segal-Bargmann transform we also determine the integral kernel of the unitary inversion operator in the Schroedinger model, which is given by a J-Bessel function.

  19. Simulated lumbar minimally invasive surgery educational model with didactic and technical components.

    Science.gov (United States)

    Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James

    2013-10-01

    The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need for maximizing efficiency and proficiency have encouraged advancements in simulator-based learning models. To confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Predidactic and postdidactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Mean scores on the written didactic test improved from 78% to 100%. The technical component score for fluoroscopic guidance improved from 58.8 to 52.9, and the technical score for computed tomography-navigated guidance also improved, from 28.3 to 26.6. Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and outcomes for patients.

  20. Flaxion: a minimal extension to solve puzzles in the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Ema, Yohei [Department of Physics,The University of Tokyo, Tokyo 133-0033 (Japan); Hamaguchi, Koichi; Moroi, Takeo; Nakayama, Kazunori [Department of Physics,The University of Tokyo, Tokyo 133-0033 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU),University of Tokyo, Kashiwa 277-8583 (Japan)

    2017-01-23

    We propose a minimal extension of the standard model which includes only one additional complex scalar field, the flavon, with a flavor-dependent global U(1) symmetry. It not only explains the hierarchical flavor structure in the quark and lepton sector (including the neutrino sector), but also solves the strong CP problem by identifying the CP-odd component of the flavon as the QCD axion, which we call the flaxion. Furthermore, the flaxion model solves the cosmological puzzles in the standard model, i.e., the origin of dark matter, the baryon asymmetry of the universe, and inflation. We show that the radial component of the flavon can play the role of the inflaton without isocurvature or domain wall problems. The dark matter abundance can be explained by the flaxion coherent oscillation, while the baryon asymmetry of the universe is generated through leptogenesis.

  1. Dark matter constraints in the minimal and nonminimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Stephan, A.

    1998-01-01

    We determine the allowed parameter space and the particle spectra of the minimal SUSY standard model (MSSM) and nonminimal SUSY standard model (NMSSM) imposing correct electroweak gauge symmetry breaking and recent experimental constraints. The parameters of the models are evolved with the SUSY renormalization group equations assuming universality at the grand unified scale. Applying the new unbounded-from-below constraints we can exclude the lightest SUSY particle singlinos and light scalar and pseudoscalar Higgs singlets of the NMSSM. This exclusion removes the experimental possibility to distinguish between the MSSM and NMSSM via the recently proposed search for an additional cascade produced in the decay of the B-ino into the LSP singlino. Furthermore, the effects of the dark matter condition for the MSSM and NMSSM are investigated and the differences concerning the parameter space, the SUSY particle, and Higgs sector are discussed. copyright 1998 The American Physical Society

  2. Mathematical models for a batch scheduling problem to minimize earliness and tardiness

    Directory of Open Access Journals (Sweden)

    Basar Ogun

    2018-05-01

    Purpose: Today's manufacturing facilities are challenged by highly customized products and just-in-time manufacturing and delivery of these products. In this study, a batch scheduling problem is addressed to provide on-time completion of customer orders in a lean manufacturing environment. The problem is to optimize the partitioning of product components into batches and the scheduling of the resulting batches, where each customer order is received as a set of products made of various components. Design/methodology/approach: Three different mathematical models for minimization of total earliness and tardiness of customer orders are developed to provide on-time completion of customer orders and also to avoid inventory of final products. The first model is a non-linear integer programming model, while the second is a linearized version of the first. Finally, to solve larger sized instances of the problem, an alternative linear integer model is presented. Findings: A computational study using a suite of test instances showed that the alternative linear integer model is able to solve all test instances of varying sizes within considerably shorter computer times compared to the other two models. It was also shown that the alternative model can solve moderate sized real-world problems. Originality/value: The problem under study differs from existing batch scheduling problems in the literature since it includes new circumstances which may arise in real-world applications. This research also contributes to the literature of batch scheduling problems by presenting new optimization models.
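
    As a toy illustration of the earliness-tardiness objective (not of the paper's batch models), the snippet below enumerates all job sequences on a single machine and selects the one minimizing total earliness plus tardiness; the job data are invented.

      # Toy single-machine earliness/tardiness problem solved by full enumeration.
      from itertools import permutations

      jobs = {            # job: (processing_time, due_date) -- made-up data
          "A": (2, 5),
          "B": (4, 6),
          "C": (3, 9),
      }

      def earliness_tardiness(sequence):
          t, total = 0, 0
          for j in sequence:
              p, d = jobs[j]
              t += p                       # completion time, no inserted idle time
              total += abs(t - d)          # earliness + tardiness contribution
          return total

      best = min(permutations(jobs), key=earliness_tardiness)
      print(best, earliness_tardiness(best))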

  3. Complex functionality with minimal computation: Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    Science.gov (United States)

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

    2015-12-01

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers, that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  4. A predictive model of suitability for minimally invasive parathyroid surgery in the treatment of primary hyperparathyroidism [corrected].

    LENUS (Irish Health Repository)

    Kavanagh, Dara O

    2012-05-01

    Improved preoperative localizing studies have facilitated minimally invasive approaches in the treatment of primary hyperparathyroidism (PHPT). Success depends on the ability to reliably select patients who have PHPT due to single-gland disease. We propose a model encompassing preoperative clinical, biochemical, and imaging studies to predict a patient's suitability for minimally invasive surgery.

  5. Minimally Disruptive Medicine: A Pragmatically Comprehensive Model for Delivering Care to Patients with Multiple Chronic Conditions

    Directory of Open Access Journals (Sweden)

    Aaron L. Leppin

    2015-01-01

    An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM) is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients' lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice.

  6. A Minimal Model Describing Hexapedal Interlimb Coordination: The Tegotae-Based Approach

    Directory of Open Access Journals (Sweden)

    Dai Owaki

    2017-06-01

    Insects exhibit adaptive and versatile locomotion despite their minimal neural computing. Such locomotor patterns are generated via coordination between leg movements, i.e., an interlimb coordination, which is largely controlled in a distributed manner by neural circuits located in the thoracic ganglia. However, the mechanism responsible for the interlimb coordination still remains elusive. Understanding this mechanism will help us to elucidate the fundamental control principle of animals' agile locomotion and to realize robots with legs that are truly adaptive and could not be developed solely by conventional control theories. This study aims at providing a "minimal" model of the interlimb coordination mechanism underlying hexapedal locomotion, in the hope that a single control principle could satisfactorily reproduce various aspects of insect locomotion. To this end, we introduce a novel concept we named "Tegotae," a Japanese concept describing the extent to which a perceived reaction matches an expectation. By using the Tegotae-based approach, we show that a surprisingly systematic design of the local sensory feedback mechanisms essential for the interlimb coordination can be realized. We also use a hexapod robot we developed to show that our mathematical model of the interlimb coordination mechanism satisfactorily reproduces various insects' gait patterns.
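
    A commonly quoted way to formalize Tegotae-style feedback is to let the sensed ground reaction force modulate each leg's phase oscillator; the sketch below uses the rule dphi/dt = omega - sigma*N*cos(phi) as an assumed illustrative form, with made-up parameters and a crude toy load signal, and should not be read as the paper's exact controller.

      # Hedged sketch of a Tegotae-style local feedback rule for one leg's phase
      # oscillator; rule form, parameters and the load model are illustrative only.
      import math

      def step_leg_phase(phi, ground_force, omega=2 * math.pi, sigma=1.0, dt=0.01):
          dphi = omega - sigma * ground_force * math.cos(phi)
          return (phi + dphi * dt) % (2 * math.pi)

      phi = 0.3
      for _ in range(200):
          # crude toy: the leg bears load while its phase is in the "stance" half-cycle
          load = 1.0 if math.sin(phi) < 0 else 0.0
          phi = step_leg_phase(phi, load)
      print(round(phi, 3))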

  7. Warm inflation with an oscillatory inflaton in the non-minimal kinetic coupling model

    International Nuclear Information System (INIS)

    Goodarzi, Parviz; Sadjadi, H.M.

    2017-01-01

    In the cold inflation scenario, the slow roll inflation and reheating via coherent rapid oscillation, are usually considered as two distinct eras. When the slow roll ends, a rapid oscillation phase begins and the inflaton decays to relativistic particles reheating the Universe. In another model dubbed warm inflation, the rapid oscillation phase is suppressed, and we are left with only a slow roll period during which the reheating occurs. Instead, in this paper, we propose a new picture for inflation in which the slow roll era is suppressed and only the rapid oscillation phase exists. Radiation generation during this era is taken into account, so we have warm inflation with an oscillatory inflaton. To provide enough e-folds, we employ the non-minimal derivative coupling model. We study the cosmological perturbations and compute the temperature at the end of warm oscillatory inflation. (orig.)

  8. A minimal model of epithelial tissue dynamics and its application to the corneal epithelium

    Science.gov (United States)

    Henkes, Silke; Matoz-Fernandez, Daniel; Kostanjevec, Kaja; Coburn, Luke; Sknepnek, Rastko; Collinson, J. Martin; Martens, Kirsten

    Epithelial cell sheets are characterized by a complex interplay of active drivers, including cell motility, cell division and extrusion. Here we construct a particle-based minimal model tissue with only division/death dynamics and show that it always corresponds to a liquid state with a single dynamic time scale set by the division rate, and that no glassy phase is possible. Building on this, we construct an in-silico model of the mammalian corneal epithelium as such a tissue confined to a hemisphere bordered by the limbal stem cell zone. With added cell motility dynamics we are able to explain the steady-state spiral migration on the cornea, including the central vortex defect, and quantitatively compare it to eyes obtained from mice that are X-inactivation mosaic for LacZ.

  9. Warm inflation with an oscillatory inflaton in the non-minimal kinetic coupling model

    Energy Technology Data Exchange (ETDEWEB)

    Goodarzi, Parviz [University of Ayatollah Ozma Borujerdi, Department of Science, Boroujerd (Iran, Islamic Republic of); Sadjadi, H.M. [University of Tehran, Department of Physics, Tehran (Iran, Islamic Republic of)

    2017-07-15

    In the cold inflation scenario, the slow roll inflation and reheating via coherent rapid oscillation, are usually considered as two distinct eras. When the slow roll ends, a rapid oscillation phase begins and the inflaton decays to relativistic particles reheating the Universe. In another model dubbed warm inflation, the rapid oscillation phase is suppressed, and we are left with only a slow roll period during which the reheating occurs. Instead, in this paper, we propose a new picture for inflation in which the slow roll era is suppressed and only the rapid oscillation phase exists. Radiation generation during this era is taken into account, so we have warm inflation with an oscillatory inflaton. To provide enough e-folds, we employ the non-minimal derivative coupling model. We study the cosmological perturbations and compute the temperature at the end of warm oscillatory inflation. (orig.)

  10. Horizontal, anomalous U(1) symmetry for the more minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Nelson, A.E.; Wright, D.

    1997-01-01

    We construct explicit examples with a horizontal, "anomalous" U(1) gauge group, which, in a supersymmetric extension of the standard model, reproduce qualitative features of the fermion spectrum and CKM matrix, and suppress FCNC and proton decay rates without the imposition of global symmetries. We review the motivation for such "more" minimal supersymmetric standard models and their predictions for the sparticle spectrum. There is a mass hierarchy in the scalar sector which is the inverse of the fermion mass hierarchy. We show in detail why ΔS=2 FCNCs are greatly suppressed when compared with naive estimates for nondegenerate squarks. copyright 1997 The American Physical Society

  11. Dynamics of symmetry breaking during quantum real-time evolution in a minimal model system.

    Science.gov (United States)

    Heyl, Markus; Vojta, Matthias

    2014-10-31

    One necessary criterion for the thermalization of a nonequilibrium quantum many-particle system is ergodicity. It is, however, not sufficient in cases where the asymptotic long-time state lies in a symmetry-broken phase but the initial state of nonequilibrium time evolution is fully symmetric with respect to this symmetry. In equilibrium, one particular symmetry-broken state is chosen as a result of an infinitesimal symmetry-breaking perturbation. From a dynamical point of view the question is: Can such an infinitesimal perturbation be sufficient for the system to establish a nonvanishing order during quantum real-time evolution? We study this question analytically for a minimal model system that can be associated with symmetry breaking, the ferromagnetic Kondo model. We show that after a quantum quench from a completely symmetric state the system is able to break its symmetry dynamically and discuss how these features can be observed experimentally.

  12. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p^\\prime,p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p^\\prime/p$ and the level $l$.

  13. Spreading Speed, Traveling Waves, and Minimal Domain Size in Impulsive Reaction–Diffusion Models

    KAUST Repository

    Lewis, Mark A.

    2012-08-15

    How growth, mortality, and dispersal in a species affect the species' spread and persistence constitutes a central problem in spatial ecology. We propose impulsive reaction-diffusion equation models for species with distinct reproductive and dispersal stages. These models can describe a seasonal birth pulse plus nonlinear mortality and dispersal throughout the year. Alternatively, they can describe seasonal harvesting, plus nonlinear birth and mortality as well as dispersal throughout the year. The population dynamics in the seasonal pulse is described by a discrete map that gives the density of the population at the end of a pulse as a possibly nonmonotone function of the density of the population at the beginning of the pulse. The dynamics in the dispersal stage is governed by a nonlinear reaction-diffusion equation in a bounded or unbounded domain. We develop a spatially explicit theoretical framework that links species vital rates (mortality or fecundity) and dispersal characteristics with species' spreading speeds, traveling wave speeds, as well as minimal domain size for species persistence. We provide an explicit formula for the spreading speed in terms of model parameters, and show that the spreading speed can be characterized as the slowest speed of a class of traveling wave solutions. We also give an explicit formula for the minimal domain size using model parameters. Our results show how the diffusion coefficient, and the combination of discrete- and continuous-time growth and mortality determine the spread and persistence dynamics of the population in a wide variety of ecological scenarios. Numerical simulations are presented to demonstrate the theoretical results. © 2012 Society for Mathematical Biology.
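
    A schematic numerical realization of such an impulsive model alternates a continuous dispersal-and-mortality stage with a discrete birth-pulse map; the sketch below uses explicit finite differences and a Ricker-type pulse with invented parameter values, purely to illustrate the model structure rather than reproduce the paper's analysis.

      # Schematic impulsive reaction-diffusion simulation: continuous dispersal and
      # mortality through the "year", then a discrete Ricker-type birth pulse.
      import numpy as np

      L, nx, D, mu = 50.0, 201, 1.0, 0.2          # domain size, grid, diffusion, mortality
      dx = L / (nx - 1)
      dt = 0.4 * dx * dx / (2 * D)                # stable explicit time step
      u = np.zeros(nx)
      u[nx // 2] = 1.0                            # initial local introduction

      def dispersal_stage(u, T=1.0):
          for _ in range(int(T / dt)):
              lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
              lap[0] = lap[-1] = 0.0              # crude hostile-boundary approximation
              u = u + dt * (D * lap - mu * u)
              u[0] = u[-1] = 0.0
          return u

      def birth_pulse(u, r=2.0):
          return u * np.exp(r * (1.0 - u))        # Ricker-type pulse map

      for year in range(20):
          u = birth_pulse(dispersal_stage(u))
      print("occupied range:", np.sum(u > 0.01) * dx)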

  14. On the Higgs-like boson in the minimal supersymmetric 3-3-1 model

    Science.gov (United States)

    Ferreira, J. G.; Pires, C. A. de S.; da Silva, P. S. Rodrigues; Siqueira, Clarissa

    2018-03-01

    It is imperative that any proposal of new physics beyond the standard model possesses a Higgs-like boson with 125 GeV of mass and couplings with the standard particles that recover the branching ratios and signal strengths as measured by CMS and ATLAS. We address this issue within the supersymmetric version of the minimal 3-3-1 model. For this we develop the Higgs potential with focus on the lightest Higgs provided by the model. Our proposal is to verify if it recovers the properties of the Standard Model Higgs. With respect to its mass, we calculate it up to one loop level by taking into account all contributions provided by the model. In regard to its couplings, we restrict our investigation to couplings of the Higgs-like boson with the standard particles, only. We then calculate the dominant branching ratios and the respective signal strengths and confront our results with the recent measurements of CMS and ATLAS. As distinctive aspects, we remark that our Higgs-like boson intermediates flavor changing neutral processes and has as signature the decay t → h+c. We calculate its branching ratio and compare it with current bounds. We also show that the Higgs potential of the model is stable for the region of parameter space employed in our calculations.

  15. Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

    International Nuclear Information System (INIS)

    Teuber, T; Steidl, G; Chan, R H

    2013-01-01

    In this paper, we analyze the minimization of seminorms ‖L·‖ on R^n under the constraint of a bounded I-divergence D(b, H·) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)
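
    The data-fit term at the center of the paper is the I-divergence (generalized Kullback-Leibler divergence), D(b, Hx) = sum(b*log(b/(Hx)) - b + Hx) with the usual 0*log 0 = 0 convention; a direct numpy evaluation, with placeholder vectors, looks as follows.

      # Direct evaluation of the I-divergence (generalized KL divergence) D(b, Hx).
      import numpy as np

      def i_divergence(b, Hx, eps=1e-12):
          b = np.asarray(b, dtype=float)
          Hx = np.asarray(Hx, dtype=float)
          log_term = np.where(b > 0, b * np.log((b + eps) / (Hx + eps)), 0.0)
          return np.sum(log_term - b + Hx)

      b = np.array([3.0, 0.0, 5.0])     # placeholder data
      Hx = np.array([2.5, 0.4, 5.5])    # placeholder model output
      print(i_divergence(b, Hx))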

  16. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Kaixuan, E-mail: kaixuanxubjtu@yeah.net; Wang, Jun

    2017-02-26

    In this paper, recently introduced permutation entropy and sample entropy are further developed to the fractional cases, weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional order generalization of information entropy is utilized in the above two complexity approaches, to detect the statistical characteristics of fractional order information in complex systems. The effectiveness analysis of the proposed methods on the synthetic data and the real-world data reveals that tuning the fractional order allows a high sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the numerical research on nonlinear complexity behaviors is compared between the returns series of the Potts financial model and the actual stock markets. The empirical results confirm the feasibility of the proposed model. - Highlights: • Two new entropy approaches for estimation of nonlinear complexity are proposed for the financial market. • Effectiveness analysis of the proposed methods is presented and their respective features are studied. • Empirical research applying the proposed analysis to seven world financial market indices. • Numerical simulation of Potts financial dynamics is performed for nonlinear complexity behaviors.
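
    For reference, the ordinary (unweighted, integer-order) permutation entropy that WFPE and FSE generalize can be computed by counting ordinal patterns, as in the sketch below; the embedding order, delay, and sample series are arbitrary, and the weighted/fractional refinements of the paper are not implemented here.

      # Ordinary permutation entropy via ordinal-pattern counting (normalized to [0, 1]).
      import math
      from collections import Counter

      def permutation_entropy(x, order=3, delay=1):
          patterns = Counter()
          for i in range(len(x) - (order - 1) * delay):
              window = [x[i + j * delay] for j in range(order)]
              pattern = tuple(sorted(range(order), key=window.__getitem__))
              patterns[pattern] += 1
          total = sum(patterns.values())
          probs = [c / total for c in patterns.values()]
          h = -sum(p * math.log(p) for p in probs)
          return h / math.log(math.factorial(order))

      print(permutation_entropy([4, 7, 9, 10, 6, 11, 3, 5, 8, 2], order=3))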

  17. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    International Nuclear Information System (INIS)

    Xu, Kaixuan; Wang, Jun

    2017-01-01

    In this paper, recently introduced permutation entropy and sample entropy are further developed to the fractional cases, weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional order generalization of information entropy is utilized in the above two complexity approaches, to detect the statistical characteristics of fractional order information in complex systems. The effectiveness analysis of the proposed methods on the synthetic data and the real-world data reveals that tuning the fractional order allows a high sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the numerical research on nonlinear complexity behaviors is compared between the returns series of the Potts financial model and the actual stock markets. The empirical results confirm the feasibility of the proposed model. - Highlights: • Two new entropy approaches for estimation of nonlinear complexity are proposed for the financial market. • Effectiveness analysis of the proposed methods is presented and their respective features are studied. • Empirical research applying the proposed analysis to seven world financial market indices. • Numerical simulation of Potts financial dynamics is performed for nonlinear complexity behaviors.

  18. Minimal agent based model for financial markets II. Statistical properties of the linear and multiplicative dynamics

    Science.gov (United States)

    Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.

    2009-02-01

    We present a detailed study of the statistical properties of the Agent Based Model introduced in paper I [Eur. Phys. J. B, DOI: 10.1140/epjb/e2009-00028-4] and of its generalization to multiplicative dynamics. The aim of the model is to consider the minimal elements needed to understand the origin of the stylized facts and their self-organization. The key elements are fundamentalist agents, chartist agents, herding dynamics and price behavior. The first two elements correspond to the competition between stability and instability tendencies in the market. The herding behavior governs the possibility of the agents to change strategy and is a crucial element of this class of models. We consider a linear approximation for the price dynamics which permits a simple interpretation of the model dynamics and, for many properties, allows analytical results to be derived. The generalized nonlinear dynamics turns out to be far more sensitive to the parameter space and much more difficult to analyze and control. The main results for the nature and self-organization of the stylized facts are, however, very similar in the two cases. The main peculiarity of the nonlinear dynamics is an enhancement of the fluctuations and more marked evidence of the stylized facts. We will also discuss some modifications of the model to introduce more realistic elements with respect to the real markets.

  19. Multiple travelling-wave solutions in a minimal model for cell motility

    KAUST Repository

    Kimpton, L. S.

    2012-07-11

    Two-phase flow models have been used previously to model cell motility. In order to reduce the complexity inherent with describing the many physical processes, we formulate a minimal model. Here we demonstrate that even the simplest 1D, two-phase, poroviscous, reactive flow model displays various types of behaviour relevant to cell crawling. We present stability analyses that show that an asymmetric perturbation is required to cause a spatially uniform, stationary strip of cytoplasm to move, which is relevant to cell polarization. Our numerical simulations identify qualitatively distinct families of travelling-wave solutions that coexist at certain parameter values. Within each family, the crawling speed of the strip has a bell-shaped dependence on the adhesion strength. The model captures the experimentally observed behaviour that cells crawl quickest at intermediate adhesion strengths, when the substrate is neither too sticky nor too slippy. © The Author 2012. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  20. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    Science.gov (United States)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is therefore significant for energy saving and for reducing environmental emissions. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption has been determined using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the developed model is effective and has potential to be adopted by industry for minimum power consumption of machine tools.
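
    A minimal version of such an empirical model is a second-order response surface fitted by least squares, sketched below with invented cutting-parameter data; the paper's actual design of experiments, units, and desirability-function optimization step are not reproduced here.

      # Second-order response-surface fit, power = f(cutting speed, feed, depth of cut),
      # by ordinary least squares on invented placeholder data.
      import numpy as np

      def quadratic_features(X):
          v, f, d = X.T
          return np.column_stack([np.ones(len(X)), v, f, d,
                                  v * f, v * d, f * d, v**2, f**2, d**2])

      X = np.array([[100, 0.10, 0.5], [150, 0.10, 1.0], [200, 0.10, 1.5],
                    [100, 0.20, 1.0], [150, 0.20, 1.5], [200, 0.20, 0.5],
                    [100, 0.30, 1.5], [150, 0.30, 0.5], [200, 0.30, 1.0],
                    [150, 0.20, 1.0]], dtype=float)
      power = np.array([0.8, 1.2, 1.7, 1.0, 1.6, 1.8, 1.4, 1.5, 2.2, 1.5])  # placeholder kW

      coeffs, *_ = np.linalg.lstsq(quadratic_features(X), power, rcond=None)

      def predict(v, f, d):
          return quadratic_features(np.array([[v, f, d]])) @ coeffs

      print(predict(160.0, 0.15, 0.8))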

  1. Construct Validity of Fresh Frozen Human Cadaver as a Training Model in Minimal Access Surgery

    Science.gov (United States)

    Macafee, David; Pranesh, Nagarajan; Horgan, Alan F.

    2012-01-01

    Background: The construct validity of fresh human cadaver as a training tool has not been established previously. The aims of this study were to investigate the construct validity of fresh frozen human cadaver as a method of training in minimal access surgery and to determine if novices can be rapidly trained using this model to a safe level of performance. Methods: Junior surgical trainees (novices) performed laparoscopic tasks on fresh frozen cadavers. Expert laparoscopists (>100 laparoscopic procedures) performed 3 repetitions of identical tasks. Performances were scored using a validated, objective Global Operative Assessment of Laparoscopic Skills scale. Scores for 3 consecutive repetitions were compared between experts and novices to determine construct validity. Furthermore, to determine if the novices reached a safe level, a trimmed mean of the experts' scores was used to define a benchmark. The Mann-Whitney U test was used for the construct validity analysis and a 1-sample t test to compare performances of the novice group with the benchmark safe score. Results: Ten novices and 2 experts were recruited. Four out of 5 tasks (nondominant-to-dominant hand transfer; simulated appendicectomy; intracorporeal and extracorporeal knot tying) showed construct validity. Novices' scores became comparable to benchmark scores between the eighth and tenth repetition. Conclusion: Minimal access surgical training using fresh frozen human cadavers appears to have construct validity. The laparoscopic skills of novices can be accelerated through to a safe level within 8 to 10 repetitions. PMID:23318058

  2. Develop of a model to minimize and to treat waste coming from the chemical laboratories

    International Nuclear Information System (INIS)

    Chacon Hernandez, M.

    2000-01-01

    Alternatives for the minimization and treatment of organic waste coming from chemical laboratories were investigated and proposed, considering as options disposal to the drainage system, chemical treatment of the waste, disposal in sanitary landfills, the creation of a storeroom for recycling materials, incineration, distillation, and the possibility of establishing an agreement with the company Cements INCSA to dispose of the materials in that enterprise's cement kiln. The first stage of the methodology was the compilation of information about the production of residuals by each research center or academic unit. For this, the research laboratories of CICA, CELEQ, CIPRONA and LAYAFA were considered, as well as the teaching laboratories of the sections of Organic Chemistry, Inorganic Chemistry, Physical Chemistry, Pharmacognosy, Drug Analysis, Physicopharmacy, Histology and Physiology. Additionally, the supply office of the Microbiology School was considered. Subsequently, a cost analysis was carried out to determine which wastes constituted most of the waste generated by the University in terms of cost and volume. The materials were then classified according to chemical criteria, the NFPA classification, and combustion heat data. Once this classification was carried out and the current situation of the laboratories regarding waste handling and treatment was established, treatment and disposal options were evaluated and selected, considering their advantages and disadvantages in terms of feasibility and cost, in order to establish a minimization and treatment model that can be implemented at the University. [es]

  3. Quantile-based permutation thresholds for quantitative trait loci hotspots.

    Science.gov (United States)

    Neto, Elias Chaibub; Keller, Mark P; Broman, Andrew F; Attie, Alan D; Jansen, Ritsert C; Broman, Karl W; Yandell, Brian S

    2012-08-01

    Quantitative trait loci (QTL) hotspots (genomic locations affecting many traits) are a common feature in genetical genomics studies and are biologically interesting since they may harbor critical regulators. Therefore, statistical procedures to assess the significance of hotspots are of key importance. One approach, randomly allocating observed QTL across the genomic locations separately by trait, implicitly assumes all traits are uncorrelated. Recently, an empirical test for QTL hotspots was proposed on the basis of the number of traits that exceed a predetermined LOD value, such as the standard permutation LOD threshold. The permutation null distribution of the maximum number of traits across all genomic locations preserves the correlation structure among the phenotypes, avoiding the detection of spurious hotspots due to nongenetic correlation induced by uncontrolled environmental factors and unmeasured variables. However, by considering only the number of traits above a threshold, without accounting for the magnitude of the LOD scores, relevant information is lost. In particular, biologically interesting hotspots composed of a moderate to small number of traits with strong LOD scores may be neglected as nonsignificant. In this article we propose a quantile-based permutation approach that simultaneously accounts for the number and the LOD scores of traits within the hotspots. By considering a sliding scale of mapping thresholds, our method can assess the statistical significance of both small and large hotspots. Although the proposed approach can be applied to any type of heritable high-volume "omic" data set, we restrict our attention to expression (e)QTL analysis. We assess and compare the performances of these three methods in simulations and we illustrate how our approach can effectively assess the significance of moderate and small hotspots with strong LOD scores in a yeast expression data set.
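
    A heavily simplified sketch of the counting-based ingredient of the procedure is given below: it takes trait-by-locus LOD matrices (here random placeholders standing in for real permutation genome scans), records the genome-wide maximum number of traits exceeding a LOD threshold, and uses an upper quantile of that null distribution as the hotspot-size threshold. The quantile-based refinement over a sliding scale of thresholds proposed in the article is not implemented, and all numbers are invented.

      # Simplified, placeholder illustration of a counting-based hotspot threshold.
      import numpy as np

      rng = np.random.default_rng(1)
      n_traits, n_loci, lod_thr = 100, 300, 3.0

      def max_hotspot_size(lod_matrix, thr):
          counts = (lod_matrix > thr).sum(axis=0)   # traits above threshold at each locus
          return counts.max()

      # Placeholder "permutation" scans; real analyses would re-map permuted data.
      null_max = [max_hotspot_size(rng.exponential(1.0, (n_traits, n_loci)), lod_thr)
                  for _ in range(200)]
      hotspot_threshold = np.quantile(null_max, 0.95)

      observed = rng.exponential(1.0, (n_traits, n_loci))
      observed[:40, 120] = 6.0                      # inject an artificial hotspot at one locus
      print(hotspot_threshold, max_hotspot_size(observed, lod_thr))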

  4. PERMUTATION-BASED POLYMORPHIC STEGO-WATERMARKS FOR PROGRAM CODES

    Directory of Open Access Journals (Sweden)

    Denys Samoilenko

    2016-06-01

    Purpose: One of the most current trends in program code protection is code marking. The problem consists in the creation of digital "watermarks" which allow distinguishing different copies of the same program code. Such marks could be useful for authorship protection, for numbering code copies, for monitoring program propagation, and for information security purposes in client-server communication processes. Methods: We used methods of digital steganography adapted for program codes as text objects. The same-shape-symbols method was transformed into a same-semantic-element method due to features of program codes that make them different from ordinary texts. We use a dynamic principle of mark forming, making the codes polymorphic. Results: We examined the combinatorial capacity of the permutations possible in program codes. As a result, it was shown that a set of 5-7 polymorphic variables is suitable for most modern network applications. Mark creation and restoration algorithms were proposed and discussed. The main algorithm is based on full and partial permutations of variable names and their declaration order. The algorithm for partial permutation enumeration was optimized for computational complexity. PHP code fragments which realize the algorithms are listed. Discussion: The method proposed in this work allows distinguishing each client-server connection. If a clone of some network resource is found, the method can give information about the included marks and thereby data on the IP address, date and time, and authentication information of the client that copied the resource. Usage of polymorphic stego-watermarks should improve information security indexes in network communications.
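
    The combinatorial core of such marking can be illustrated by encoding an integer identifier as a particular ordering of a fixed set of variable names via the factorial number system (Lehmer code); the Python sketch below (the paper itself works with PHP) uses invented variable names and assumes the canonical, sorted name set is known at extraction time.

      # Encode/decode an integer watermark as a permutation of variable declarations.
      import math

      def id_to_permutation(ident, items):
          items = sorted(items)                 # canonical order shared with the decoder
          perm = []
          for i in range(len(items), 0, -1):
              idx, ident = divmod(ident, math.factorial(i - 1))
              perm.append(items.pop(idx))
          return perm

      def permutation_to_id(perm):
          items = sorted(perm)
          ident = 0
          for i, name in enumerate(perm):
              idx = items.index(name)
              ident += idx * math.factorial(len(perm) - 1 - i)
              items.pop(idx)
          return ident

      variables = ["$alpha", "$beta", "$gamma", "$delta", "$eps"]   # 5! = 120 distinguishable copies
      order = id_to_permutation(42, variables)
      print(order, permutation_to_id(order))    # round-trips back to 42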

  5. Targeting the minimal supersymmetric standard model with the compact muon solenoid experiment

    Science.gov (United States)

    Bein, Samuel Louis

    An interpretation of CMS searches for evidence of supersymmetry in the context of the minimal supersymmetric Standard Model (MSSM) is given. It is found that supersymmetric particles with color charge are excluded in the mass range below about 400 GeV, but neutral and weakly-charged sparticles remain non-excluded in all mass ranges. Discussion of the non-excluded regions of the model parameter space is given, including details on the strengths and weaknesses of existing searches, and recommendations for future analysis strategies. Advancements in the modeling of events arising from quantum chromodynamics and electroweak boson production, which are major backgrounds in searches for new physics at the LHC, are also presented. These methods have been implemented as components of CMS searches for supersymmetry in proton-proton collisions resulting in purely hadronic events (i.e., events with no identified leptons) at a center of momentum energy of 13 TeV. These searches, interpreted in the context of simplified models, exclude supersymmetric gluons (gluinos) up to masses of 1400 to 1600 GeV, depending on the model considered, and exclude scalar top quarks with masses up to about 800 GeV, assuming a massless lightest supersymmetric particle. A search for non-excluded supersymmetry models is also presented, which uses multivariate discriminants to isolate potential signal candidate events. The search achieves sensitivity to new physics models in background-dominated kinematic regions not typically considered by analyses, and rules out supersymmetry models that survived 7 and 8 TeV searches performed by CMS.

  6. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates

    Science.gov (United States)

    Barberis, Lucas; Peruani, Fernando

    2016-12-01

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
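
    A minimal sketch of a vision-cone, position-based update of the kind described is given below: particles move at constant speed and turn toward the neighbors they see inside a cone around their heading, with no velocity alignment; all parameter values are illustrative assumptions rather than those studied in the paper.

      # Sketch of vision-cone (VC), position-based active particles in a periodic 2D box.
      import numpy as np

      rng = np.random.default_rng(0)
      N, R, half_angle = 200, 1.0, np.pi / 2        # particles, interaction range, VC half-width
      v0, dt, eta = 0.05, 1.0, 0.1                  # speed, time step, angular noise
      pos = rng.uniform(0, 10, (N, 2))
      theta = rng.uniform(-np.pi, np.pi, N)

      def step(pos, theta):
          new_theta = theta.copy()
          for i in range(N):
              d = pos - pos[i]                      # (ignoring minimum-image wrapping for brevity)
              dist = np.hypot(d[:, 0], d[:, 1])
              bearing = np.arctan2(d[:, 1], d[:, 0])
              rel = (bearing - theta[i] + np.pi) % (2 * np.pi) - np.pi
              seen = (dist > 0) & (dist < R) & (np.abs(rel) < half_angle)
              if seen.any():
                  target = d[seen].mean(axis=0)                 # attract toward visible neighbors
                  new_theta[i] = np.arctan2(target[1], target[0])
              new_theta[i] += eta * rng.uniform(-np.pi, np.pi)  # angular noise
          heading = np.column_stack([np.cos(new_theta), np.sin(new_theta)])
          return (pos + v0 * dt * heading) % 10.0, new_theta    # periodic box

      for _ in range(50):
          pos, theta = step(pos, theta)
      print(pos[:3])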

  7. Supergravity contributions to inflation in models with non-minimal coupling to gravity

    Energy Technology Data Exchange (ETDEWEB)

    Das, Kumar; Dutta, Koushik [Theory Division, Saha Institute of Nuclear Physics, 1/AF Saltlake, Kolkata 700064 (India); Domcke, Valerie, E-mail: kumar.das@saha.ac.in, E-mail: valerie.domcke@apc.univ-paris7.fr, E-mail: koushik.dutta@saha.ac.in [AstroParticule et Cosmologie (APC), Paris Centre for Cosmological Physics (PCCP), Université Paris Diderot, 75013 Paris (France)

    2017-03-01

    This paper provides a systematic study of supergravity contributions relevant for inflationary model building in Jordan frame supergravity. In this framework, canonical kinetic terms in the Jordan frame result in the separation of the Jordan frame scalar potential into a tree-level term and a supergravity contribution which is potentially dangerous for sustaining inflation. We show that if the vacuum energy necessary for driving inflation originates dominantly from the F-term of an auxiliary field (i.e. not the inflaton), the supergravity corrections to the Jordan frame scalar potential are generically suppressed. Moreover, these supergravity contributions identically vanish if the superpotential vanishes along the inflationary trajectory. On the other hand, if the F-term associated with the inflaton dominates the vacuum energy, the supergravity contributions are generically comparable to the globally supersymmetric contributions. In addition, the non-minimal coupling to gravity inherent to Jordan frame supergravity significantly impacts the inflationary model depending on the size and sign of this coupling. We discuss the phenomenology of some representative inflationary models, and point out the relation to the recently much discussed cosmological 'attractor' models.

  8. Supergravity contributions to inflation in models with non-minimal coupling to gravity

    International Nuclear Information System (INIS)

    Das, Kumar; Dutta, Koushik; Domcke, Valerie

    2017-01-01

    This paper provides a systematic study of supergravity contributions relevant for inflationary model building in Jordan frame supergravity. In this framework, canonical kinetic terms in the Jordan frame result in the separation of the Jordan frame scalar potential into a tree-level term and a supergravity contribution which is potentially dangerous for sustaining inflation. We show that if the vacuum energy necessary for driving inflation originates dominantly from the F-term of an auxiliary field (i.e. not the inflaton), the supergravity corrections to the Jordan frame scalar potential are generically suppressed. Moreover, these supergravity contributions identically vanish if the superpotential vanishes along the inflationary trajectory. On the other hand, if the F-term associated with the inflaton dominates the vacuum energy, the supergravity contributions are generically comparable to the globally supersymmetric contributions. In addition, the non-minimal coupling to gravity inherent to Jordan frame supergravity significantly impacts the inflationary model depending on the size and sign of this coupling. We discuss the phenomenology of some representative inflationary models, and point out the relation to the recently much discussed cosmological 'attractor' models.

  9. Bulk-boundary correlators in the hermitian matrix model and minimal Liouville gravity

    International Nuclear Information System (INIS)

    Bourgine, Jean-Emile; Ishiki, Goro; Rim, Chaiho

    2012-01-01

    We construct the one matrix model (MM) correlators corresponding to the general bulk-boundary correlation numbers of the minimal Liouville gravity (LG) on the disc. To find agreement between the discrete and continuous approaches, we investigate the resonance transformation mixing boundary and bulk couplings. This leads us to consider two sectors, depending on whether the matter part of the LG correlator vanishes due to the fusion rules. In the vanishing case, we determine the explicit transformation of the boundary couplings at first order in the bulk couplings. In the non-vanishing case, no bulk-boundary resonance is involved and only the first order of pure boundary resonances has to be considered. These are encoded in the matrix polynomials determined in our previous paper. We checked the agreement between the bulk-boundary correlators of the MM and LG in several non-trivial cases. In this process, we developed an alternative method to derive the boundary resonance encoding polynomials.

  10. Sterile neutrino in a minimal three-generation see-saw model

    Indian Academy of Sciences (India)

    Sterile neutrino in a minimal three-generation see-saw model. Table 1 lists the relevant right-handed fermion and scalar fields and their transformation properties under SU(2)_L × U(1)_{I3R} × U(1)_{B−L} and SU(2)_L × U(1)_Y, where the hypercharge is defined as Y = I_{3R} + (B−L)/2.

  11. Modification of Schrödinger-Newton equation due to braneworld models with minimal length

    Science.gov (United States)

    Bhat, Anha; Dey, Sanjib; Faizal, Mir; Hou, Chenguang; Zhao, Qin

    2017-07-01

    We study the correction of the energy spectrum of a gravitational quantum well due to the combined effect of the braneworld model with infinite extra dimensions and the generalized uncertainty principle. The correction terms arise from a natural deformation of a semiclassical theory of quantum gravity governed by the Schrödinger-Newton equation based on a minimal length framework. The twofold correction in the energy yields new values of the spectrum, which are closer to the values obtained in the GRANIT experiment. This raises the possibility that the combined theory of semiclassical quantum gravity and the generalized uncertainty principle may provide an intermediate theory between the semiclassical and the full theory of quantum gravity. We also prepare a schematic experimental set-up which may guide the understanding of these phenomena in the laboratory.

  12. From b → sγ to the LSP detection rates in minimal string unification models

    International Nuclear Information System (INIS)

    Khalil, S.; Masiero, A.; Shafi, Q.

    1997-04-01

    We exploit the measured branching ratio for b → sγ to derive lower limits on the sparticle and Higgs masses in the minimal string unification models. For the LSP ('bino'), chargino and the lightest Higgs, these turn out to be 50, 90 and 75 GeV respectively. Taking account of the upper bounds on the mass spectrum from the LSP relic abundance, we estimate the direct detection rate for the latter to vary from 10^-1 to 10^-4 events/kg/day. The muon flux, produced by neutrinos from the annihilating LSP's, varies in the range 10^-2 - 10^-9 muons/m^2/day. (author). 26 refs, 9 figs

  13. Electroweak symmetry breaking and collider signatures in the next-to-minimal composite Higgs model

    Science.gov (United States)

    Niehoff, Christoph; Stangl, Peter; Straub, David M.

    2017-04-01

    We conduct a detailed numerical analysis of the composite pseudo-Nambu-Goldstone Higgs model based on the next-to-minimal coset SO(6)/SO(5) ≅ SU(4)/Sp(4), featuring an additional SM singlet scalar in the spectrum, which we allow to mix with the Higgs boson. We identify regions in parameter space compatible with all current experimental constraints, including radiative electroweak symmetry breaking, flavour physics, and direct searches at colliders. We find the additional scalar, with a mass predicted to be below a TeV, to be virtually unconstrained by current LHC data, but potentially in reach of run 2 searches. Promising indirect searches include rare semi-leptonic B decays, CP violation in B_s mixing, and the electric dipole moment of the neutron.

  14. Electroweak symmetry breaking and collider signatures in the next-to-minimal composite Higgs model

    Energy Technology Data Exchange (ETDEWEB)

    Niehoff, Christoph; Stangl, Peter; Straub, David M. [Excellence Cluster Universe, TUM,Boltzmannstr. 2, 85748 Garching (Germany)

    2017-04-20

    We conduct a detailed numerical analysis of the composite pseudo-Nambu-Goldstone Higgs model based on the next-to-minimal coset SO(6)/SO(5)≅SU(4)/Sp(4), featuring an additional SM singlet scalar in the spectrum, which we allow to mix with the Higgs boson. We identify regions in parameter space compatible with all current experimental constraints, including radiative electroweak symmetry breaking, flavour physics, and direct searches at colliders. We find the additional scalar, with a mass predicted to be below a TeV, to be virtually unconstrained by current LHC data, but potentially in reach of run 2 searches. Promising indirect searches include rare semi-leptonic B decays, CP violation in B_s mixing, and the electric dipole moment of the neutron.

  15. Non-unitary neutrino mixing and CP violation in the minimal inverse seesaw model

    International Nuclear Information System (INIS)

    Malinsky, Michal; Ohlsson, Tommy; Xing, Zhi-zhong; Zhang He

    2009-01-01

    We propose a simplified version of the inverse seesaw model, in which only two pairs of the gauge-singlet neutrinos are introduced, to interpret the observed neutrino mass hierarchy and lepton flavor mixing at or below the TeV scale. This 'minimal' inverse seesaw scenario (MISS) is technically natural and experimentally testable. In particular, we show that the effective parameters describing the non-unitary neutrino mixing matrix are strongly correlated in the MISS, and thus, their upper bounds can be constrained by current experimental data in a more restrictive way. The Jarlskog invariants of non-unitary CP violation are calculated, and the discovery potential of such new CP-violating effects in the near detector of a neutrino factory is discussed.

  16. Higgs phenomenology in the minimal SU(3)_L×U(1)_X model

    Science.gov (United States)

    Okada, Hiroshi; Okada, Nobuchika; Orikasa, Yuta; Yagyu, Kei

    2016-07-01

    We investigate the phenomenology of a model based on the SU(3)_c×SU(3)_L×U(1)_X gauge theory, the so-called 331 model. In particular, we focus on the Higgs sector of the model which is composed of three SU(3)_L triplet Higgs fields and is the minimal form for realizing a phenomenologically acceptable scenario. After the spontaneous symmetry breaking SU(3)_L×U(1)_X→SU(2)_L×U(1)_Y, our Higgs sector effectively becomes that with two SU(2)_L doublet scalar fields, in which the first- and the second-generation quarks couple to a different Higgs doublet from that which couples to the third-generation quarks. This structure causes the flavor-changing neutral current mediated by Higgs bosons at the tree level. By taking an alignment limit of the mass matrix for the CP-even Higgs bosons, which is naturally realized in the case with the breaking scale of SU(3)_L×U(1)_X much larger than that of SU(2)_L×U(1)_Y, we can avoid current constraints from flavor experiments such as the B^0-B̄^0 mixing even for Higgs boson masses that are O(100) GeV. In this allowed parameter space, we clarify that a characteristic deviation in quark Yukawa couplings of the Standard Model-like Higgs boson is predicted, which has a different pattern from that seen in two Higgs doublet models with a softly broken Z_2 symmetry. We also find that the flavor-violating decay modes of the extra Higgs boson, e.g., H/A → tc and H^± → ts, can be dominant, and they yield an important signature to distinguish our model from the two Higgs doublet models.

  17. A Hybrid ACO Approach to the Matrix Bandwidth Minimization Problem

    Science.gov (United States)

    Pintea, Camelia-M.; Crişan, Gloria-Cerasela; Chira, Camelia

    The evolution of human society raises ever more difficult problems, and for some real-life problems computing-time restrictions add to their complexity. The Matrix Bandwidth Minimization Problem (MBMP) seeks a simultaneous permutation of the rows and columns of a square matrix that keeps its nonzero entries close to the main diagonal. The MBMP is a highly investigated NP-complete problem, as it has broad applications in industry, logistics, artificial intelligence or information recovery. This paper describes a new attempt to use the Ant Colony Optimization framework to tackle the MBMP. The introduced model is based on hybridizing the Ant Colony System technique with new local search mechanisms. Computational experiments confirm the good performance of the proposed algorithm on the considered set of MBMP instances.
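
    As a minimal illustration of the objective the metaheuristic minimizes (not of the hybrid ant colony algorithm itself), the bandwidth of a matrix under a candidate row/column permutation can be computed as below; the random instance and the naive pairwise-swap local search are illustrative assumptions.

      import numpy as np

      def bandwidth(A, perm):
          """Bandwidth of square matrix A after applying `perm` to rows and columns:
          the largest |i - j| over the nonzero entries of the permuted matrix."""
          P = A[np.ix_(perm, perm)]
          rows, cols = np.nonzero(P)
          return int(np.max(np.abs(rows - cols))) if rows.size else 0

      rng = np.random.default_rng(0)
      A = (rng.random((8, 8)) < 0.2).astype(int)
      A = np.maximum(A, A.T)                      # symmetric sparsity pattern
      candidate = list(rng.permutation(8))
      print(bandwidth(A, list(range(8))), bandwidth(A, candidate))

      # One step of a naive pairwise-swap local search of the kind a hybrid
      # metaheuristic might embed; the ACO construction itself is not shown.
      best = candidate[:]
      for i in range(8):
          for j in range(i + 1, 8):
              trial = best[:]
              trial[i], trial[j] = trial[j], trial[i]
              if bandwidth(A, trial) < bandwidth(A, best):
                  best = trial
      print(bandwidth(A, best))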

  18. Phenomenological study of the minimal R-symmetric supersymmetric standard model

    International Nuclear Information System (INIS)

    Diessner, Philip

    2016-01-01

    The Standard Model (SM) of particle physics gives a comprehensive description of numerous phenomena concerning the fundamental components of nature. Still, open questions and a clouded understanding of the underlying structure remain. Supersymmetry is a well motivated extension that may account for the observed density of dark matter in the universe and solve the hierarchy problem of the SM. The minimal supersymmetric extension of the SM (MSSM) provides solutions to these challenges. Furthermore, it predicts new particles in reach of current experiments. However, the model has its own theoretical challenges and is under fire from measurements provided by the Large Hadron Collider (LHC). Nevertheless, the concept of supersymmetry has an elegance which not only shines in the MSSM. Hence, it is also of interest to examine non-minimal supersymmetric models. They have benefits similar to the MSSM and may solve its shortcomings. R-symmetry is the only global symmetry allowed that does not commute with supersymmetry and Lorentz symmetry. Thus, extending a supersymmetric model with R-symmetry is a theoretically well motivated endeavor to achieve the complete symmetry content of a field theory. Such a model provides a natural explanation for non-discovery in the early runs of the LHC and leads to further predictions distinct from those of the MSSM. The work described in this thesis contributes to the effort by studying the minimal R-symmetric supersymmetric extension of the SM (MRSSM). Important aspects of its physics and the dependence of observables on the parameter space of the MRSSM are investigated. The discovery of a scalar particle compatible with the Higgs boson of the SM at the LHC was announced in 2012. It is the first and crucial task of this thesis to understand the underlying mechanisms leading to the correct Higgs boson mass prediction in the MRSSM. Then, the relevant regions of parameter space are investigated and it is shown that they are also in agreement

  19. Phenomenological study of the minimal R-symmetric supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Diessner, Philip

    2016-10-20

    The Standard Model (SM) of particle physics gives a comprehensive description of numerous phenomena concerning the fundamental components of nature. Still, open questions and a clouded understanding of the underlying structure remain. Supersymmetry is a well motivated extension that may account for the observed density of dark matter in the universe and solve the hierarchy problem of the SM. The minimal supersymmetric extension of the SM (MSSM) provides solutions to these challenges. Furthermore, it predicts new particles in reach of current experiments. However, the model has its own theoretical challenges and is under fire from measurements provided by the Large Hadron Collider (LHC). Nevertheless, the concept of supersymmetry has an elegance which not only shines in the MSSM. Hence, it is also of interest to examine non-minimal supersymmetric models. They have benefits similar to the MSSM and may solve its shortcomings. R-symmetry is the only global symmetry allowed that does not commute with supersymmetry and Lorentz symmetry. Thus, extending a supersymmetric model with R-symmetry is a theoretically well motivated endeavor to achieve the complete symmetry content of a field theory. Such a model provides a natural explanation for non-discovery in the early runs of the LHC and leads to further predictions distinct from those of the MSSM. The work described in this thesis contributes to the effort by studying the minimal R-symmetric supersymmetric extension of the SM (MRSSM). Important aspects of its physics and the dependence of observables on the parameter space of the MRSSM are investigated. The discovery of a scalar particle compatible with the Higgs boson of the SM at the LHC was announced in 2012. It is the first and crucial task of this thesis to understand the underlying mechanisms leading to the correct Higgs boson mass prediction in the MRSSM. Then, the relevant regions of parameter space are investigated and it is shown that they are also in agreement

  20. Deformable three-dimensional model architecture for interactive augmented reality in minimally invasive surgery.

    Science.gov (United States)

    Vemuri, Anant S; Wu, Jungle Chi-Hsiang; Liu, Kai-Che; Wu, Hurng-Sheng

    2012-12-01

    Surgical procedures have undergone considerable advancement during the last few decades. More recently, the intraoperative availability of some imaging methods has added a new dimension to minimally invasive techniques. Augmented reality in surgery has been a topic of intense interest and research. Augmented reality involves the use of computer vision algorithms on video from endoscopic cameras, or from cameras mounted in the operating room, to provide the surgeon with additional information that he or she otherwise would have to recognize intuitively. One of the techniques combines a virtual preoperative model of the patient with the endoscope camera using natural or artificial landmarks to provide an augmented reality view in the operating room. The authors' approach is to provide this with the fewest possible changes to the operating room. A software architecture is presented to provide interactive adjustment of the registration between a three-dimensional (3D) model and the endoscope video. Augmented reality was used to perform 12 surgeries, including adrenalectomy, ureteropelvic junction obstruction, retrocaval ureter, and pancreatic procedures. The general feedback from the surgeons has been very positive, not only for deciding the positions of insertion points but also for following even small changes in anatomy. The approach involves providing a deformable 3D model architecture and its application to the operating room. A 3D model with a deformable structure is needed to show the shape change of soft tissue during surgery. The software architecture providing interactive adjustment of the registration between the 3D model and the endoscope video, with adjustability of every 3D model, is presented.

  1. Big data modeling to predict platelet usage and minimize wastage in a tertiary care system.

    Science.gov (United States)

    Guan, Leying; Tian, Xiaoying; Gombar, Saurabh; Zemek, Allison J; Krishnan, Gomathi; Scott, Robert; Narasimhan, Balasubramanian; Tibshirani, Robert J; Pham, Tho D

    2017-10-24

    Maintaining a robust blood product supply is an essential requirement to guarantee optimal patient care in modern health care systems. However, daily blood product use is difficult to anticipate. Platelet products are the most variable in daily usage, have short shelf lives, and are also the most expensive to produce, test, and store. Due to the combination of absolute need, uncertain daily demand, and short shelf life, platelet products are frequently wasted due to expiration. Our aim is to build and validate a statistical model to forecast future platelet demand and thereby reduce wastage. We have investigated platelet usage patterns at our institution, and specifically interrogated the relationship between platelet usage and aggregated hospital-wide patient data over a recent consecutive 29-mo period. Using a convex statistical formulation, we have found that platelet usage is highly dependent on weekday/weekend pattern, number of patients with various abnormal complete blood count measurements, and location-specific hospital census data. We incorporated these relationships in a mathematical model to guide collection and ordering strategy. This model minimizes waste due to expiration while avoiding shortages; the number of remaining platelet units at the end of any day stays above 10 in our model during the same period. Compared with historical expiration rates during the same period, our model reduces the expiration rate from 10.5 to 3.2%. Extrapolating our results to the ∼2 million units of platelets transfused annually within the United States, if implemented successfully, our model can potentially save ∼80 million dollars in health care costs.

  2. Natural PQ symmetry in the 3-3-1 model with a minimal scalar sector

    International Nuclear Information System (INIS)

    Vega, Bruce Lehmann Sanchez; Garcia, Juan Carlos Montero

    2011-01-01

    Full text: In the framework of a 3-3-1 model with a minimal scalar sector we make a detailed study concerning the implementation of the PQ symmetry in order to solve the strong CP problem. For the original version of the model, with only two scalar triplets, we show that the entire Lagrangian is invariant under a PQ-like symmetry but no axion is produced since a U(1) subgroup remains unbroken. Although in this case the strong CP problem can still be solved, the solution is largely disfavored since three quark states are left massless to all orders in perturbation theory. The addition of a third scalar triplet removes the massless quark states but the resulting axion is visible. In order to become realistic the model must be extended to account for massive quarks and an invisible axion. We show that the addition of a scalar singlet together with a Z_N discrete gauge symmetry can successfully accomplish these tasks and protect the axion field against quantum gravitational effects. To make sure that the protecting discrete gauge symmetry is anomaly free we use a discrete version of the Green-Schwarz mechanism. (author)

  3. Top quark electric dipole moment in a minimal supersymmetric standard model extension with vectorlike multiplets

    International Nuclear Information System (INIS)

    Ibrahim, Tarek; Nath, Pran

    2010-01-01

    The electric dipole moment (EDM) of the top quark is calculated in a model with a vectorlike multiplet which mixes with the third generation in an extension of the minimal supersymmetric standard model. Such mixings allow for new CP violating phases. Including these new CP phases, the EDM of the top in this class of models is computed. The top EDM arises from loops involving the exchange of the W, the Z as well as from the exchange involving the charginos, the neutralinos, the gluino, and the vectorlike multiplet and their superpartners. The analysis of the EDM of the top is more complicated than for the light quarks because the mass of the external fermion, in this case the top quark mass, cannot be ignored relative to the masses inside the loops. A numerical analysis is presented and it is shown that the top EDM could be close to 10^-19 e cm, consistent with the current limits on the EDM of the electron, the neutron and on atomic EDMs. A top EDM of size 10^-19 e cm could be accessible in collider experiments such as the International Linear Collider.

  4. Minimal see-saw model predicting best fit lepton mixing angles

    International Nuclear Information System (INIS)

    King, Stephen F.

    2013-01-01

    We discuss a minimal predictive see-saw model in which the right-handed neutrino mainly responsible for the atmospheric neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and the right-handed neutrino mainly responsible for the solar neutrino mass has couplings to (ν_e, ν_μ, ν_τ) proportional to (1,4,2), with a relative phase η = −2π/5. We show how these patterns of couplings could arise from an A_4 family symmetry model of leptons, together with Z_3 and Z_5 symmetries which fix η = −2π/5 up to a discrete phase choice. The PMNS matrix is then completely determined by one remaining parameter which is used to fix the neutrino mass ratio m_2/m_3. The model predicts the lepton mixing angles θ_12 ≈ 34°, θ_23 ≈ 41°, θ_13 ≈ 9.5°, which exactly coincide with the current best fit values for a normal neutrino mass hierarchy, together with the distinctive prediction for the CP violating oscillation phase δ ≈ 106°

  5. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Directory of Open Access Journals (Sweden)

    Hossein Karimi

    2011-04-01

    Full Text Available The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions in reasonable computational time for large problem instances. The method is examined on several numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better at solving the APM.
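
    For orientation, a brute-force version of the classical permutation method can be sketched as follows: every ordering of the alternatives is scored by a net concordance measure and the best-scoring ordering is returned. The decision matrix, weights and scoring details below are illustrative assumptions, not the APM itself; its adjustments and the TS/PSO search are precisely what make the approach tractable when the factorial cost of this brute-force enumeration becomes prohibitive.

      from itertools import permutations
      import numpy as np

      def best_ranking(scores, weights):
          """Evaluate every ordering of the alternatives and return the one with the
          highest net concordance. scores: (alternatives x attributes), weights: attribute weights."""
          m = scores.shape[0]

          def net_concordance(order):
              total = 0.0
              for a in range(m):
                  for b in range(a + 1, m):
                      i, j = order[a], order[b]            # hypothesis: i is ranked above j
                      total += weights[scores[i] >= scores[j]].sum()
                      total -= weights[scores[i] < scores[j]].sum()
              return total

          return max(permutations(range(m)), key=net_concordance)

      S = np.array([[7.0, 5.0, 9.0],                       # three alternatives,
                    [8.0, 6.0, 4.0],                       # three weighted attributes
                    [6.0, 9.0, 7.0]])                      # (illustrative numbers)
      w = np.array([0.5, 0.3, 0.2])
      print(best_ranking(S, w))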

  6. Diversification of Protein Cage Structure Using Circularly Permuted Subunits.

    Science.gov (United States)

    Azuma, Yusuke; Herger, Michael; Hilvert, Donald

    2018-01-17

    Self-assembling protein cages are useful as nanoscale molecular containers for diverse applications in biotechnology and medicine. To expand the utility of such systems, there is considerable interest in customizing the structures of natural cage-forming proteins and designing new ones. Here we report that a circularly permuted variant of lumazine synthase, a cage-forming enzyme from Aquifex aeolicus (AaLS) affords versatile building blocks for the construction of nanocompartments that can be easily produced, tailored, and diversified. The topologically altered protein, cpAaLS, self-assembles into spherical and tubular cage structures with morphologies that can be controlled by the length of the linker connecting the native termini. Moreover, cpAaLS proteins integrate into wild-type and other engineered AaLS assemblies by coproduction in Escherichia coli to form patchwork cages. This coassembly strategy enables encapsulation of guest proteins in the lumen, modification of the exterior through genetic fusion, and tuning of the size and electrostatics of the compartments. This addition to the family of AaLS cages broadens the scope of this system for further applications and highlights the utility of circular permutation as a potentially general strategy for tailoring the properties of cage-forming proteins.

  7. Multiscale Permutation Entropy Based Rolling Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Jinde Zheng

    2014-01-01

    Full Text Available A new rolling bearing fault diagnosis approach based on multiscale permutation entropy (MPE), Laplacian score (LS), and support vector machines (SVMs) is proposed in this paper. Permutation entropy (PE) was recently proposed to measure the randomness of a time series and to detect its dynamical changes. However, owing to the complexity of mechanical systems, the randomness and dynamic changes of the vibration signal exist on different scales. Thus, the definition of MPE is introduced and employed to extract the nonlinear fault characteristics from the bearing vibration signal on different scales. In addition, the SVM is utilized to perform the fault feature classification so that the diagnostic procedure is carried out automatically. Meanwhile, in order to avoid a high-dimensional feature vector, the Laplacian score (LS) is used to refine it by ranking the features according to their importance and correlation with the main fault information. Finally, the rolling bearing fault diagnosis method based on MPE, LS, and SVM is applied to experimental data. The analysis results indicate that the proposed method can identify the fault categories effectively.
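
    The ordinal-pattern calculation behind MPE is compact enough to sketch: permutation entropy counts the order patterns of short delay vectors, and the multiscale version repeats the calculation on coarse-grained (non-overlapping averaged) copies of the signal. The code below is a generic Python implementation of these two steps, not the authors' code; the embedding order, delay and scales are illustrative defaults.

      import math
      import numpy as np

      def permutation_entropy(x, order=3, delay=1):
          """Normalized Bandt-Pompe permutation entropy of a 1-D signal."""
          x = np.asarray(x, dtype=float)
          counts = {}
          n = len(x) - (order - 1) * delay
          for i in range(n):
              pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
              counts[pattern] = counts.get(pattern, 0) + 1
          p = np.array(list(counts.values()), dtype=float) / n
          return float(-(p * np.log(p)).sum() / math.log(math.factorial(order)))

      def multiscale_pe(x, scales=(1, 2, 3, 4, 5), order=3):
          """Coarse-grain the signal by non-overlapping means and compute PE per scale."""
          x = np.asarray(x, dtype=float)
          return [permutation_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1), order)
                  for s in scales]

      # White noise stays close to 1 at every scale; a vibration signal with periodic
      # fault impacts typically shows lower, scale-dependent values.
      rng = np.random.default_rng(1)
      print(multiscale_pe(rng.standard_normal(4000)))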

  8. Model-based minimization algorithm of a supercritical helium loop consumption subject to operational constraints

    Science.gov (United States)

    Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.

    2017-12-01

    Supercritical helium loops at 4.2 K are the baseline cooling strategy for tokamak superconducting magnets (JT-60SA, ITER, DEMO, etc.). These loops work with cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays within the working range along their entire length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy the temperature constraints in different ways (playing on the bath temperature and on the supercritical flow), but that only one is optimal from an energy point of view (every watt consumed at 4.2 K requires at least 220 W of electrical power). To find the optimal operating conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It shows that changes in the magnet operating point (e.g. in case of a change in the plasma configuration) involve large changes in the optimal cryodistribution operating point. Recommendations are made to ensure that the energy consumption is kept as low as possible despite the changing operating point. This work is partially supported by the EUROfusion Consortium through the Euratom Research and Training Program 2014-2018 under Grant 633053.
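
    The structure of such a model-based minimization, a consumption objective minimized over the bath temperature and circulator flow subject to a magnet-temperature constraint, can be sketched generically as below. The consumption and temperature functions are placeholder assumptions standing in for the Simcryogenics loop model, and the bounds and limits are illustrative values only.

      import numpy as np
      from scipy.optimize import minimize

      # Decision variables: bath temperature T_bath [K] and circulator mass flow m_dot [g/s].
      def consumption(x):                        # placeholder, not the Simcryogenics model
          T_bath, m_dot = x
          pumping = 0.04 * m_dot**3              # circulator work grows quickly with flow
          refrigeration = 30.0 / T_bath          # a colder bath costs more
          return pumping + refrigeration

      def magnet_temperature(x):                 # placeholder heat-load model
          T_bath, m_dot = x
          return T_bath + 12.0 / m_dot

      T_max = 6.5                                # illustrative magnet temperature limit [K]
      res = minimize(
          consumption,
          x0=np.array([4.4, 5.0]),
          bounds=[(4.2, 5.0), (1.0, 20.0)],
          constraints=[{"type": "ineq", "fun": lambda x: T_max - magnet_temperature(x)}],
          method="SLSQP",
      )
      print(res.x, res.fun)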

  9. On the gauged Kaehler isometry in minimal supergravity models of inflation

    International Nuclear Information System (INIS)

    Ferrara, S.; Fre, P.; Sorin, A.S.

    2014-01-01

    In this paper we address the question of how to discriminate whether the gauged isometry group G_Σ of the Kaehler manifold Σ that produces a D-type inflaton potential in a Minimal Supergravity Model is elliptic, hyperbolic or parabolic. We show that the classification of isometries of symmetric cosets can be extended to non-symmetric Σ's if these manifolds satisfy additional mathematical restrictions. The classification criteria established in the mathematical literature are coherent with simple criteria formulated in terms of the asymptotic behavior of the Kaehler potential K(C) = 2 J(C), where the real scalar field C encodes the inflaton field. As a by-product of our analysis we show that phenomenologically admissible potentials for the description of inflation, and in particular α-attractors, are mostly obtained from the gauging of a parabolic isometry, this being, in particular, the case of the Starobinsky model. Yet at least one exception exists of an elliptic α-attractor, so that neither type of isometry can be a priori excluded. The requirement of regularity of the manifold Σ poses instead strong constraints on the α-attractors and reduces their space considerably. Curiously there is a unique integrable α-attractor corresponding to a particular value of this parameter. (Copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  10. Knowledge-based and model-based hybrid methodology for comprehensive waste minimization in electroplating plants

    Science.gov (United States)

    Luo, Keqin

    1999-11-01

    The electroplating industry, with over 10,000 plating plants nationwide, is one of the major waste-generating industries. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants; their treatment and disposal cost the industry tremendously and hinder its further development. There is therefore an urgent need for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize the waste while maintaining production competitiveness. This dissertation aims at developing a novel WM methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, and an intelligent decision support tool. The WM methodology consists of two parts: a heuristic knowledge-based qualitative WM decision analysis and support methodology, and a fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows us to perform a thorough process analysis on bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation that are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time for the electroplating industry to (i) systematically use available WM strategies, (ii) know quantitatively and

  11. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    Science.gov (United States)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or hydraulic concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
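
    The covariance-preserving mixture at the heart of the approach can be illustrated with a toy example: any cosine/sine weighted combination of two independent standardized Gaussian fields again has unit variance, and the objective can be evaluated at n equally spaced angles on the unit circle. The misfit function, field sizes and synthetic observations below are illustrative assumptions; the conditioning on measured conductivities, the interpolation of solutions around the circle and the groundwater forward model are not shown.

      import numpy as np

      rng = np.random.default_rng(2)

      def objective(field, observed, obs_idx):
          """Placeholder misfit: squared error at observation cells (stands in for the
          hydraulic-head mismatch a forward model would provide)."""
          return float(np.sum((field.ravel()[obs_idx] - observed) ** 2))

      # Two independent standardized Gaussian fields; every cos/sin combination of them
      # has the same unit variance, which is the property Random Mixing exploits.
      shape = (50, 50)
      f1 = rng.standard_normal(shape)
      f2 = rng.standard_normal(shape)

      obs_idx = rng.choice(f1.size, size=20, replace=False)
      observed = 0.3 * rng.standard_normal(20)            # synthetic "data"

      angles = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
      mixtures = [np.cos(t) * f1 + np.sin(t) * f2 for t in angles]
      scores = [objective(m, observed, obs_idx) for m in mixtures]

      best = mixtures[int(np.argmin(scores))]             # input to the next iteration
      print(min(scores))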

  12. Revisiting the NEH algorithm- the power of job insertion technique for optimizing the makespan in permutation flow shop scheduling

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2016-04-01

    Full Text Available Permutation flow shop scheduling problems have been an interesting area of research for over six decades. Among the various objectives, minimization of makespan has been studied the most over the years. The problems are widely regarded as NP-complete when the number of machines is more than three. As the computation time grows exponentially with the problem size, many authors have proposed heuristics and meta-heuristics that give reasonably accurate and acceptable results. The NEH algorithm proposed in 1983 is still considered one of the best simple constructive heuristics for makespan minimization. This paper analyses the powerful job insertion technique used by the NEH algorithm and proposes seven new variants whose complexity level remains the same. The 120 benchmark problem instances proposed by Taillard were used to validate the algorithms. Out of the seven variants, three produce better results than the original NEH algorithm.
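
    The job insertion technique in question is compact enough to sketch directly: jobs are sorted by decreasing total processing time, and each job is inserted into the position of the partial sequence that minimizes the partial makespan. The following is a generic Python implementation of the classic NEH heuristic on a random instance, not of the seven proposed variants or of the Taillard benchmarks.

      import numpy as np

      def makespan(seq, p):
          """Completion time of the last job on the last machine for permutation `seq`;
          p[j, k] is the processing time of job j on machine k."""
          m = p.shape[1]
          c = np.zeros(m)
          for j in seq:
              c[0] += p[j, 0]
              for k in range(1, m):
                  c[k] = max(c[k], c[k - 1]) + p[j, k]
          return c[-1]

      def neh(p):
          """Classic NEH: sort jobs by decreasing total work, then insert each job at
          the position of the partial sequence giving the smallest partial makespan."""
          order = np.argsort(-p.sum(axis=1))
          seq = [int(order[0])]
          for j in order[1:]:
              candidates = [seq[:i] + [int(j)] + seq[i:] for i in range(len(seq) + 1)]
              seq = min(candidates, key=lambda s: makespan(s, p))
          return seq, makespan(seq, p)

      rng = np.random.default_rng(3)
      times = rng.integers(1, 20, size=(5, 4))   # five jobs on four machines (illustrative)
      print(neh(times))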

  13. On the Gauged Kahler Isometry in Minimal Supergravity Models of Inflation

    CERN Document Server

    Ferrara, Sergio; Sorin, Alexander S.

    2014-01-01

    In this paper we address the question of how to discriminate whether the gauged isometry group G_Sigma of the Kahler manifold Sigma that produces a D-type inflaton potential in a Minimal Supergravity Model is elliptic, hyperbolic or parabolic. We show that the classification of isometries of symmetric cosets can be extended to non-symmetric Sigma's if these manifolds satisfy additional mathematical restrictions. The classification criteria established in the mathematical literature are coherent with simple criteria formulated in terms of the asymptotic behavior of the Kahler potential K(C) = 2 J(C), where the real scalar field C encodes the inflaton field. As a by-product of our analysis we show that all phenomenologically admissible potentials for the description of inflation, and in particular alpha-attractors, are mostly obtained from the gauging of a parabolic isometry. The requirement of regularity of the manifold Sigma poses strong constraints on the alpha-attractors and reduces their space considerably. Curi...

  14. On approximate reasoning and minimal models for the development of robust outdoor vehicle navigation schemes

    Energy Technology Data Exchange (ETDEWEB)

    Pin, F.G.

    1993-11-01

    Outdoor sensor-based operation of autonomous robots has proved to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecisions and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a 'minimal model' for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper will first review the basic concepts of this approach and will discuss its pragmatic feasibility when embodied in a behaviorist framework. The second principle which is proposed deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.

  15. On approximate reasoning and minimal models for the development of robust outdoor vehicle navigation schemes

    International Nuclear Information System (INIS)

    Pin, F.G.

    1993-01-01

    Outdoor sensor-based operation of autonomous robots has proved to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecisions and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a 'minimal model' for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper will first review the basic concepts of this approach and will discuss its pragmatic feasibility when embodied in a behaviorist framework. The second principle which is proposed deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept

  16. An analytical model to predict and minimize the residual stress of laser cladding process

    Science.gov (United States)

    Tamanna, N.; Crouch, R.; Kabir, I. R.; Naher, S.

    2018-02-01

    Laser cladding is one of the advanced thermal techniques used to repair or modify the surface properties of high-value components such as tools and military and aerospace parts. Unfortunately, tensile residual stresses are generated in the thermally treated area during this process. This work focuses on investigating the key factors in the formation of tensile residual stress and on how to minimize it in the clad when using dissimilar substrate and clad materials. To predict the tensile residual stress, a one-dimensional analytical model has been adopted. Four cladding materials (Al2O3, TiC, TiO2, ZrO2) on an H13 tool steel substrate and a range of substrate preheating temperatures, from 300 to 1200 K, have been investigated. Thermal strain and Young's modulus are found to be the key factors in the formation of tensile residual stresses. Additionally, it is found that preheating the substrate immediately before laser cladding reduces the residual stress.
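
    The qualitative role of thermal strain, Young's modulus and preheating can be illustrated with a textbook elastic estimate, Young's modulus times the constrained thermal strain; this is only a rough upper-bound sketch under an assumed stress-free temperature and nominal property values, not the one-dimensional analytical model adopted in the paper, and it ignores relaxation by yielding or cracking.

      # Simplified elastic estimate of the clad residual stress:
      #   sigma ≈ E_clad * alpha_clad * (T_ref - T_preheat) / (1 - nu_clad)
      # Illustrative only: nominal Al2O3-like properties and an assumed stress-free
      # temperature; not the paper's model.

      def clad_residual_stress(E_clad, nu_clad, alpha_clad, T_ref, T_preheat):
          return E_clad * alpha_clad * (T_ref - T_preheat) / (1.0 - nu_clad)

      E, nu, alpha = 370e9, 0.22, 8.0e-6         # Pa, -, 1/K (nominal values)
      T_ref = 1500.0                             # assumed stress-free temperature [K]
      for T_pre in (300.0, 600.0, 900.0, 1200.0):
          sigma = clad_residual_stress(E, nu, alpha, T_ref, T_pre)
          print(f"preheat {T_pre:6.0f} K -> ~{sigma / 1e6:7.0f} MPa")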

  17. Minimal flavor violation in the lepton sector of the Randall-Sundrum model

    International Nuclear Information System (INIS)

    Chen Muchun; Yu Haibo

    2009-01-01

    We propose a realization of Minimal Flavor Violation in the lepton sector of the Randall-Sundrum model. With the MFV assumption, the only sources of flavor violation are the 5D Yukawa couplings, and the usual two independent sources of flavor violation are related. In the limit of massless neutrinos, the bulk mass matrices and 5D Yukawa matrices are simultaneously diagonalized, and hence FCNCs are absent. In the case of massive neutrinos, the contributions to FCNCs in the charged lepton sector are highly suppressed, due to the smallness of neutrino masses. In addition, the MFV assumption also allows suppressing one-loop charged current contributions to flavor changing processes by reducing the size of the Yukawa couplings, which is not possible in the generic anarchical case. We find that a first KK mass scale as low as ∼3 TeV can be allowed. In both cases, we present a set of numerical results that give rise to realistic lepton masses and mixing angles. A mild hierarchy of O(25) in the 5D Yukawa matrix is required in our numerical example to be consistent with two large and one small mixing angle. This tuning could be improved by having a more thorough search of the parameter space

  18. The calculation of sparticle and Higgs decays in the minimal and next-to-minimal supersymmetric standard models: SOFTSUSY4.0

    Science.gov (United States)

    Allanach, B. C.; Cridge, T.

    2017-11-01

    We describe a major extension of the SOFTSUSY spectrum calculator to include the calculation of the decays, branching ratios and lifetimes of sparticles into lighter sparticles, covering the next-to-minimal supersymmetric standard model (NMSSM) as well as the minimal supersymmetric standard model (MSSM). This document acts as a manual for the new version of SOFTSUSY, which includes the calculation of sparticle decays. We present a comprehensive collection of explicit expressions used by the program for the various partial widths of the different decay modes in the appendix. Program Files doi:http://dx.doi.org/10.17632/5hhwwmp43g.1 Licensing provisions: GPLv3 Programming language: C++, Fortran Nature of problem: Calculating supersymmetric particle partial decay widths in the MSSM or the NMSSM, given the parameters and spectrum which have already been calculated by SOFTSUSY. Solution method: Analytic expressions for tree-level 2-body decays and loop-level decays and one-dimensional numerical integration for 3-body decays. Restrictions: Decays are calculated in the real R-parity conserving MSSM or the real R-parity conserving NMSSM only. No additional charge-parity violation (CPV) relative to the Standard Model (SM). Sfermion mixing has only been accounted for in the third generation of sfermions in the decay calculation. Decays in the MSSM are 2-body and 3-body, whereas decays in the NMSSM are 2-body only. Does the new version supersede the previous version?: Yes. Reasons for the new version: Significantly extended functionality. The decay rates and branching ratios of sparticles are particularly useful for collider searches. Decays calculated in the NMSSM will be a particularly useful check of the other programs in the literature, of which there are few. Summary of revisions: Addition of the calculation of sparticle and Higgs decays. All 2-body and important 3-body tree-level decays, including phenomenologically important loop-level decays (notably, Higgs decays to

  19. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well defined, economical … between unparticle physics and Minimal Walking Technicolor. We consider also other N=1 extensions of the Minimal Walking Technicolor model. The new models allow all the standard model matter fields to acquire a mass.

  20. Students' Errors in Solving the Permutation and Combination Problems Based on Problem Solving Steps of Polya

    Science.gov (United States)

    Sukoriyanto; Nusantara, Toto; Subanji; Chandra, Tjang Daniel

    2016-01-01

    This article is based on the results of a study evaluating students' errors in solving permutation and combination problems in terms of Polya's problem-solving steps. Twenty-five students were asked to solve four problems related to permutation and combination. The research results showed that the students still made mistakes in…

  1. Statistical Significance of the Contribution of Variables to the PCA Solution: An Alternative Permutation Strategy

    Science.gov (United States)

    Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.

    2011-01-01

    In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…
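
    As a concrete illustration of how such a permutation test can be set up, the sketch below permutes the values of a single variable while keeping the other variables fixed, recomputes that variable's loading on the first principal component each time, and compares the observed loading with the resulting null distribution. This is a generic sketch of one permutation strategy, not the specific procedures compared in the paper; the data and the number of permutations are illustrative.

      import numpy as np

      def loading(X, var, comp=0):
          """Correlation of variable `var` with principal component `comp` of X."""
          Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
          U, s, Vt = np.linalg.svd(Z, full_matrices=False)
          scores = U[:, comp] * s[comp]
          return abs(np.corrcoef(X[:, var], scores)[0, 1])

      def permutation_p_value(X, var, comp=0, n_perm=999, seed=0):
          """Permute one variable's values, recompute its loading, and return a p-value."""
          rng = np.random.default_rng(seed)
          observed = loading(X, var, comp)
          null = np.empty(n_perm)
          for b in range(n_perm):
              Xp = X.copy()
              Xp[:, var] = rng.permutation(Xp[:, var])
              null[b] = loading(Xp, var, comp)
          return (1 + np.sum(null >= observed)) / (n_perm + 1)

      rng = np.random.default_rng(4)
      n = 200
      signal = rng.standard_normal(n)
      X = np.column_stack([signal + 0.5 * rng.standard_normal(n),
                           signal + 0.5 * rng.standard_normal(n),
                           rng.standard_normal(n)])        # third variable is pure noise
      print(permutation_p_value(X, var=2), permutation_p_value(X, var=0))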

  2. On permutation polynomials over finite fields: differences and iterations

    DEFF Research Database (Denmark)

    Anbar Meidl, Nurdagül; Odzak, Almasa; Patel, Vandita

    2017-01-01

    The Carlitz rank of a permutation polynomial f over a finite field Fq is a simple concept that was introduced in the last decade. Classifying permutations over Fq with respect to their Carlitz ranks has some advantages, for instance f with a given Carlitz rank can be approximated by a rational li...

  3. Discriminating chaotic and stochastic dynamics through the permutation spectrum test

    Energy Technology Data Exchange (ETDEWEB)

    Kulp, C. W., E-mail: Kulp@lycoming.edu [Department of Astronomy and Physics, Lycoming College, Williamsport, Pennsylvania 17701 (United States); Zunino, L., E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata—CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina)

    2014-09-01

    In this paper, we propose a new heuristic symbolic tool for unveiling chaotic and stochastic dynamics: the permutation spectrum test. Several numerical examples allow us to confirm the usefulness of the introduced methodology. Indeed, we show that it is robust in situations in which other techniques fail (intermittent chaos, hyperchaotic dynamics, stochastic linear and nonlinear correlated dynamics, and deterministic non-chaotic noise-driven dynamics). We illustrate the applicability and reliability of this pragmatic method by examining real complex time series from diverse scientific fields. Taking into account that the proposed test has the advantages of being conceptually simple and computationally fast, we think that it can be of practical utility as an alternative test for determinism.

  4. Scalar dark matter explanation of the DAMPE data in the minimal left-right symmetric model

    Science.gov (United States)

    Cao, Junjie; Guo, Xiaofei; Shang, Liangliang; Wang, Fei; Wu, Peiwen; Zu, Lei

    2018-03-01

    The left-right symmetric model (LRSM) is an attractive extension of the Standard Model (SM) that can address the origin of parity violation in the SM electroweak interactions, generate tiny neutrino masses, accommodate dark matter (DM) candidates, and provide a natural framework for baryogenesis through leptogenesis. In this work, we utilize the minimal LRSM to study the recently reported DAMPE results of the cosmic e^+e^- spectrum, which exhibits a tentative peak around 1.4 TeV, while satisfying the current neutrino data. We propose to explain the DAMPE peak with a complex scalar DM χ in two scenarios: (1) χχ* → H_1^{++}H_1^{--} → ℓ_i^+ℓ_i^+ℓ_j^-ℓ_j^-, and (2) χχ* → H_k^{++}H_k^{--} → ℓ_i^+ℓ_i^+ℓ_j^-ℓ_j^- accompanied by χχ* → H_1^+H_1^- → ℓ_i^+ν_{ℓ_i}ℓ_j^-ν_{ℓ_j}, with ℓ_{i,j} = e, μ, τ and k = 1, 2. We fit the theoretical prediction of the e^+e^- spectrum to relevant experimental data to determine the scalar mass spectrum favored by the DAMPE excess. We also consider various constraints from theoretical principles and collider experiments, as well as DM relic density and direct search experiments. We find that there is ample parameter space to interpret the DAMPE data while also passing the constraints. On the other hand, our explanations usually imply the existence of other new physics at an energy scale ranging from 10^7 to 10^11 GeV. Collider tests of our explanations are also discussed.

  5. Image encryption based on permutation-substitution using chaotic map and Latin Square Image Cipher

    Science.gov (United States)

    Panduranga, H. T.; Naveen Kumar, S. K.; Kiran

    2014-06-01

    In this paper we present an image encryption scheme based on permutation-substitution using a chaotic map and a Latin Square Image Cipher. The proposed method consists of a permutation process and a substitution process. In the permutation process, the plain image is permuted according to a chaotic sequence generated by a chaotic map. In the substitution process, a Latin Square Image Cipher (LSIC) is generated from a 256-bit secret key; this LSIC is used as a key image, and an XOR operation is performed between the permuted image and the key image. The proposed method can be applied to any plain image, including images with unequal width and height, and it also resists statistical and differential attacks. Experiments were carried out for images of different sizes. The proposed method possesses a large key space to resist brute-force attacks.
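
    The permutation-substitution structure can be sketched generically: a logistic-map sequence drives the pixel permutation, and the substitution stage XORs the permuted image with a key image. In the sketch below a random key image stands in for the Latin Square Image Cipher, the map parameters are illustrative, and the permutation is returned explicitly for decryption rather than being regenerated from the secret key as a real implementation would do.

      import numpy as np

      def logistic_sequence(x0, r, n):
          """Chaotic logistic-map sequence used to drive the permutation stage."""
          seq = np.empty(n)
          x = x0
          for i in range(n):
              x = r * x * (1.0 - x)
              seq[i] = x
          return seq

      def encrypt(img, key_img, x0=0.3141, r=3.99):
          """Chaotic pixel shuffle followed by XOR with the key image."""
          flat = img.ravel()
          perm = np.argsort(logistic_sequence(x0, r, flat.size))
          return (flat[perm] ^ key_img.ravel()).reshape(img.shape), perm

      def decrypt(cipher, key_img, perm):
          shuffled = cipher.ravel() ^ key_img.ravel()
          flat = np.empty_like(shuffled)
          flat[perm] = shuffled                  # undo the permutation
          return flat.reshape(cipher.shape)

      rng = np.random.default_rng(7)
      plain = rng.integers(0, 256, size=(64, 48), dtype=np.uint8)   # unequal width and height
      key = rng.integers(0, 256, size=plain.shape, dtype=np.uint8)  # stand-in for the LSIC
      cipher, perm = encrypt(plain, key)
      assert np.array_equal(decrypt(cipher, key, perm), plain)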

  6. Circular Permutation of a Chaperonin Protein: Biophysics and Application to Nanotechnology

    Science.gov (United States)

    Paavola, Chad; Chan, Suzanne; Li, Yi-Fen; McMillan, R. Andrew; Trent, Jonathan

    2004-01-01

    We have designed five circular permutants of a chaperonin protein derived from the hyperthermophilic organism Sulfolobus shibatae. These permuted proteins were expressed in E. coli and are well-folded. Furthermore, all the permutants assemble into 18-mer double rings of the same form as the wild-type protein. We characterized the thermodynamics of folding for each permutant by both guanidine denaturation and differential scanning calorimetry. We also examined the assembly of chaperonin rings into higher order structures that may be used as nanoscale templates. The results show that circular permutation can be used to tune the thermodynamic properties of a protein template as well as facilitating the fusion of peptides, binding proteins or enzymes onto nanostructured templates.

  7. Mathematical model and minimal measurement system for optimal control of heated humidifiers in neonatal ventilation.

    Science.gov (United States)

    Verta, Antonella; Schena, Emiliano; Silvestri, Sergio

    2010-06-01

    The control of the thermo-hygrometric conditions of gas delivered in neonatal mechanical ventilation appears to be a particularly difficult task, mainly due to the vast number of parameters to be monitored and the control strategies of heated humidifiers to be adopted. In the present paper, we describe the heat and fluid exchange occurring in a heated humidifier in mathematical terms; we analyze the sensitivity of the relative humidity of the outlet gas as a function of the thermo-hygrometric and fluid-dynamic parameters of the delivered gas; and we propose a control strategy that enables stable outlet gas thermo-hygrometric conditions. The mathematical model is represented by a hyper-surface containing the functional relations between the input variables, which must be measured, and the output variables, which have to remain constant. Model sensitivity analysis shows that heated humidifier efficacy and the stability of the outlet gas thermo-hygrometric conditions are principally influenced by four parameters: liquid surface temperature, gas flow rate, inlet gas temperature and inlet gas relative humidity. The theoretical model has been experimentally validated in typical working conditions of neonatal applications. The control strategy has been implemented by a minimal measurement system composed of three thermometers, a humidity sensor, and a flow rate sensor, and based on the theoretical model. Outlet relative humidity in the range 90±4% to 94±4%, corresponding to temperatures in the range 28±2 °C to 38±2 °C respectively, has been obtained over the whole flow rate range typical of neonatal ventilation, from 1 to 10 L/min. We conclude that in order to obtain stable thermo-hygrometric conditions of the delivered gas mixture: (a) a control strategy with a more complex measurement system must be implemented (i.e. providing more input variables); (b) the gas may also need to be pre-warmed before entering the humidifying

  8. Effect of minimal enteral feeding on recovery in a methotrexate-induced gastrointestinal mucositis rat model

    NARCIS (Netherlands)

    Kuiken, Nicoline S. S.; Rings, Edmond H. H. M.; Havinga, Rick; Groen, Albert K.; Tissing, Wim J. E.

    Patients suffering from gastrointestinal mucositis often receive parenteral nutrition as nutritional support. However, the absence of enteral nutrition might not be beneficial for the intestine. We aimed to determine the feasibility of minimal enteral feeding (MEF) administration in a methotrexate-induced gastrointestinal mucositis rat model.

  9. One loop corrections to the lightest Higgs mass in the minimal η model with a heavy Z'

    International Nuclear Information System (INIS)

    Comelli, D.

    1992-06-01

    We have evaluated the one-loop correction to the bound on the lightest Higgs mass valid in the minimal, E_6-based, supersymmetric η model in the presence of a 'heavy' Z', M_Z' ≥ 1 TeV. The dominant contribution from the fermion-sfermion sector increases the 108 GeV tree-level value by an amount that depends on the top mass in a way that is largely reminiscent of minimal SUSY models. For M_t ≤ 150 GeV and M_t̃ = 1 TeV, the 'light' Higgs mass is always ≤ 130 GeV. (orig.)

  10. Improving groundwater management in rural India using simple modeling tools with minimal data requirements

    Science.gov (United States)

    Moysey, S. M.; Oblinger, J. A.; Ravindranath, R.; Guha, C.

    2008-12-01

    shortly after the start of the monsoon and villager water use is small compared to the other fluxes. Groundwater fluxes were accounted for by conceptualizing the contributing areas upstream and downstream of the reservoir as one-dimensional flow tubes. This description of the flow system allows for the definition of physically-based parameters, making the model useful for investigating WHS infiltration under a variety of management scenarios. To address concerns regarding the uniqueness of the model parameters, 10,000 independent model calibrations were performed using randomly selected starting parameters. Based on this Monte Carlo analysis, it was found that the mean volume of water contributed by the WHS to infiltration over the study period (Sept.-Dec., 2007) was 48.1×10³ m³ with a 95% confidence interval of 43.7–53.7×10³ m³. This volume represents 17-21% of the total natural groundwater recharge contributed by the entire watershed, which was determined independently using a surface water balance. Despite the fact that the model is easy to use and requires minimal data, the results obtained provide a powerful quantitative starting point for managing groundwater withdrawals in the dry season.

  11. The minimal SUSY B−L model: simultaneous Wilson lines and string thresholds

    Energy Technology Data Exchange (ETDEWEB)

    Deen, Rehan; Ovrut, Burt A. [Department of Physics, University of Pennsylvania,209 South 33rd Street, Philadelphia, PA 19104-6396 (United States); Purves, Austin [Department of Physics, University of Pennsylvania,209 South 33rd Street, Philadelphia, PA 19104-6396 (United States); Department of Physics, Manhattanville College,2900 Purchase Street, Purchase, NY 10577 (United States)

    2016-07-08

    In previous work, we presented a statistical scan over the soft supersymmetry breaking parameters of the minimal SUSY B−L model. For specificity of calculation, unification of the gauge parameters was enforced by allowing the two ℤ₃×ℤ₃ Wilson lines to have mass scales separated by approximately an order of magnitude. This introduced an additional “left-right” sector below the unification scale. In this paper, for three important reasons, we modify our previous analysis by demanding that the mass scales of the two Wilson lines be simultaneous and equal to an “average unification” mass ⟨M_U⟩. The present analysis is 1) more “natural” than the previous calculations, which were only valid in a very specific region of the Calabi-Yau moduli space, 2) the theory is conceptually simpler in that the left-right sector has been removed and 3) in the present analysis the lack of gauge unification is due to threshold effects — particularly heavy string thresholds, which we calculate statistically in detail. As in our previous work, the theory is renormalization group evolved from ⟨M_U⟩ to the electroweak scale — being subjected, sequentially, to the requirement of radiative B−L and electroweak symmetry breaking, the present experimental lower bounds on the B−L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ∼125 GeV. The subspace of soft supersymmetry breaking masses that satisfies all such constraints is presented and shown to be substantial.

  12. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Science.gov (United States)

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.
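
    For reference, the "well-known approach from operations research" that the paper contrasts against is ordinary mean-variance risk minimization under a budget constraint. The sketch below is not taken from the paper: it solves min (1/2) wᵀCw subject to Σw = budget in closed form via the Lagrange conditions, and the covariance data and budget are illustrative.

```python
import numpy as np

def min_risk_portfolio(C, budget=1.0):
    """Minimize (1/2) w^T C w subject to sum(w) = budget.

    The Lagrange conditions give w* = budget * C^{-1} 1 / (1^T C^{-1} 1).
    """
    ones = np.ones(C.shape[0])
    Cinv_1 = np.linalg.solve(C, ones)
    w = budget * Cinv_1 / (ones @ Cinv_1)
    risk = 0.5 * w @ C @ w
    return w, risk

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    returns = rng.normal(size=(250, 5))          # synthetic return history, 5 assets
    C = np.cov(returns, rowvar=False)
    w, risk = min_risk_portfolio(C, budget=1.0)
    print("weights:", np.round(w, 3), "minimal risk:", round(risk, 4))
```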

  13. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Directory of Open Access Journals (Sweden)

    Takashi Shinzato

    Full Text Available In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  14. Extending minimal repair models for repairable systems: A comparison of dynamic and heterogeneous extensions of a nonhomogeneous Poisson process

    International Nuclear Information System (INIS)

    Asfaw, Zeytu Gashaw; Lindqvist, Bo Henry

    2015-01-01

    For many applications of repairable systems, the minimal repair assumption, which leads to nonhomogeneous Poisson processes (NHPP), is not adequate. We review and study two extensions of the NHPP, the dynamic NHPP and the heterogeneous NHPP. Both extensions are motivated by specific aspects of potential applications. It has long been known, however, that the two paradigms are essentially indistinguishable in an analysis of failure data. We investigate the connection between the two approaches for extending NHPP models, both theoretically and numerically in a data example and a simulation study. - Highlights: • Review of dynamic extension of a minimal repair model (LEYP), introduced by Le Gat. • Derivation of likelihood function and comparison to NHPP model with heterogeneity. • Likelihood functions and conditional intensities are similar for the models. • ML estimation is considered for both models using a power law baseline. • A simulation study illustrates and confirms findings of the theoretical study
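
    As background for the baseline case mentioned in the highlights, the sketch below fits a plain NHPP with power-law intensity λ(t) = αβt^(β−1) to the failure history of a single system observed on [0, T] (time-truncated), using the standard closed-form maximum-likelihood estimates. It does not implement the dynamic (LEYP) or heterogeneous extensions studied in the paper, and the failure times are hypothetical.

```python
import math

def fit_power_law_nhpp(failure_times, T):
    """ML estimates for an NHPP with intensity lambda(t) = alpha * beta * t**(beta - 1),
    observed on [0, T] (time-truncated, single system).

    beta_hat = n / sum(log(T / t_i));  alpha_hat = n / T**beta_hat.
    """
    n = len(failure_times)
    beta_hat = n / sum(math.log(T / t) for t in failure_times)
    alpha_hat = n / T ** beta_hat
    return alpha_hat, beta_hat

if __name__ == "__main__":
    times = [33.0, 76.0, 145.0, 347.0, 555.0, 811.0, 970.0]   # hypothetical failure times
    alpha, beta = fit_power_law_nhpp(times, T=1000.0)
    print(f"alpha = {alpha:.4f}, beta = {beta:.3f}")          # beta > 1 suggests deterioration
```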

  15. Solar system tests for realistic f(T) models with non-minimal torsion-matter coupling

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Rui-Hui; Zhai, Xiang-Hua; Li, Xin-Zhou [Shanghai Normal University, Shanghai United Center for Astrophysics (SUCA), Shanghai (China)

    2017-08-15

    In a previous paper, we constructed two f(T) models with a non-minimal torsion-matter coupling extension, which are successful in describing the evolution history of the Universe, including the radiation-dominated era, the matter-dominated era, and the present accelerating expansion. Meanwhile, the significant advantage of these models is that they can avoid the cosmological constant problem of ΛCDM. However, the non-minimal coupling between matter and torsion will affect the Solar system tests. In this paper, we study the Solar system effects in these models, including the gravitational redshift, the geodetic effect and the perihelion precession. We find that Model I can pass all three of the Solar system tests. For Model II, the parameter is constrained by the uncertainties of the planets' estimated perihelion precessions. (orig.)

  16. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  17. Fermilab Tevatron and CERN LEP II probes of minimal and string-motivated supergravity models

    International Nuclear Information System (INIS)

    Baer, H.; Gunion, J.F.; Kao, C.; Pois, H.

    1995-01-01

    We explore the ability of the Fermilab Tevatron to probe minimal supersymmetry with high-energy-scale boundary conditions motivated by supersymmetry breaking in the context of minimal and string-motivated supergravity theory. A number of boundary condition possibilities are considered: dilatonlike string boundary conditions applied at the standard GUT unification scale or alternatively at the string scale; and extreme ("no-scale") minimal supergravity boundary conditions imposed at the GUT scale or string scale. For numerous specific cases within each scenario the sparticle spectra are computed and then fed into ISAJET 7.07 so that explicit signatures can be examined in detail. We find that, for some of the boundary condition choices, large regions of parameter space can be explored via same-sign dilepton and isolated trilepton signals. For other choices, the mass reach of Tevatron collider experiments is much more limited. We also compare the mass reach of Tevatron experiments with the corresponding reach at CERN LEP 200

  18. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    Science.gov (United States)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the most well-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover, we prove that it belongs to the class C¹.

  19. Comparative analysis of automotive paints by laser induced breakdown spectroscopy and nonparametric permutation tests

    International Nuclear Information System (INIS)

    McIntee, Erin; Viglino, Emilie; Rinke, Caitlin; Kumor, Stephanie; Ni Liqiang; Sigman, Michael E.

    2010-01-01

    Laser-induced breakdown spectroscopy (LIBS) has been investigated for the discrimination of automobile paint samples. Paint samples from automobiles of different makes, models, and years were collected and separated into sets based on the color, presence or absence of effect pigments and the number of paint layers. Twelve LIBS spectra were obtained for each paint sample, each an average of five single-shot 'drill down' spectra from consecutive laser ablations in the same spot on the sample. Analyses by a nonparametric permutation test and a parametric Wald test were performed to determine the extent of discrimination within each set of paint samples. The discrimination power and Type I error were assessed for each data analysis method. Conversion of the spectral intensity to a log scale (base 10) resulted in a higher overall discrimination power while maintaining the same significance level. Working on the log scale, the nonparametric permutation tests gave an overall 89.83% discrimination power with a Type I error rate of 4.44% at the nominal significance level of 5%. White paint samples, as a group, were the most difficult to differentiate, with a power of only 86.56%, followed by 95.83% for black paint samples. Parametric analysis of the data set produced lower discrimination power (85.17%) with 3.33% Type I errors and is not recommended on both theoretical and practical grounds. The nonparametric testing method is applicable across many analytical comparisons, with the specific application described here being the pairwise comparison of automotive paint samples.
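
    A simplified version of such a label-permutation comparison can be sketched as follows; it uses the norm of the difference of mean log₁₀ spectra as the test statistic rather than the paper's exact procedure, and the replicate spectra below are synthetic stand-ins for LIBS measurements.

```python
import numpy as np

def permutation_test(spectra_a, spectra_b, n_perm=10000, seed=1):
    """Two-sample permutation test on replicate spectra (rows = replicates).

    Test statistic: Euclidean norm of the difference between mean log10 spectra.
    Returns the permutation p-value.
    """
    rng = np.random.default_rng(seed)
    a, b = np.log10(spectra_a), np.log10(spectra_b)
    pooled = np.vstack([a, b])
    n_a = a.shape[0]

    def stat(x, y):
        return np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))

    observed = stat(a, b)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])       # reshuffle sample labels
        if stat(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    paint_1 = rng.lognormal(mean=2.0, sigma=0.1, size=(12, 50))    # 12 replicate spectra
    paint_2 = rng.lognormal(mean=2.05, sigma=0.1, size=(12, 50))
    print("p-value:", permutation_test(paint_1, paint_2, n_perm=2000))
```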

  20. Permutations avoiding an increasing number of length-increasing forbidden subsequences

    Directory of Open Access Journals (Sweden)

    Elena Barcucci

    2000-12-01

    Full Text Available A permutation π is said to be τ-avoiding if it does not contain any subsequence having all the same pairwise comparisons as τ. This paper concerns the characterization and enumeration of permutations which avoid a set F_j of subsequences increasing both in number and in length at the same time. Let F_j be the set of subsequences of the form σ(j+1)(j+2), σ being any permutation on {1,...,j}. For j=1 the only subsequence in F_1 is 123, and the 123-avoiding permutations are enumerated by the Catalan numbers; for j=2 the subsequences in F_2 are 1234 and 2134, and the (1234, 2134)-avoiding permutations are enumerated by the Schröder numbers; for each other value of j greater than 2 the subsequences in F_j are j! in number and their length is (j+2); the permutations avoiding these j! subsequences are enumerated by a number sequence {a_n} such that C_n ≤ a_n ≤ n!, C_n being the n-th Catalan number. For each j we determine the generating function of permutations avoiding the subsequences in F_j according to the length, to the number of left minima and of non-inversions.
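
    The enumeration claims can be checked by brute force for small n. The sketch below counts pattern-avoiding permutations directly and, for j = 1 and j = 2, should reproduce the Catalan numbers (1, 2, 5, 14, 42, ...) and the Schröder numbers (1, 2, 6, 22, 90, ...) respectively; it is a naive check intended only for small n, not an efficient enumeration.

```python
from itertools import combinations, permutations

def contains(perm, pattern):
    """True if perm contains a subsequence order-isomorphic to pattern."""
    k = len(pattern)
    rank = {v: i for i, v in enumerate(sorted(pattern))}
    target = tuple(rank[v] for v in pattern)
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        srank = {v: i for i, v in enumerate(sorted(sub))}
        if tuple(srank[v] for v in sub) == target:
            return True
    return False

def count_avoiders(n, patterns):
    return sum(1 for p in permutations(range(1, n + 1))
               if not any(contains(p, pat) for pat in patterns))

if __name__ == "__main__":
    # j = 1: 123-avoiding permutations, expected to follow the Catalan numbers
    print([count_avoiders(n, [(1, 2, 3)]) for n in range(1, 7)])
    # j = 2: {1234, 2134}-avoiding permutations, expected to follow the Schröder numbers
    print([count_avoiders(n, [(1, 2, 3, 4), (2, 1, 3, 4)]) for n in range(1, 7)])
```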

  1. AGT, N-Burge partitions and W{sub N} minimal models

    Energy Technology Data Exchange (ETDEWEB)

    Belavin, Vladimir [I.E. Tamm Department of Theoretical Physics, P N Lebedev Physical Institute,Leninsky Avenue 53, 119991 Moscow (Russian Federation); Department of Quantum Physics, Institute for Information Transmission Problems,Bolshoy Karetny per. 19, 127994 Moscow (Russian Federation); Foda, Omar [Mathematics and Statistics, University of Melbourne,Parkville, VIC 3010 (Australia); Santachiara, Raoul [Laboratoire de Physique Théorique et Modèles Statistiques, Université Paris-Sud,CNRS UMR 8626, Bat. 100, 91405 Orsay cedex (France)

    2015-10-12

    Let B{sub N,n} {sup p,} {sup p{sup ′,}} {sup H} be a conformal block, with n consecutive channels χ{sub ι}, ι=1,⋯,n, in the conformal field theory M{sub N} {sup p,} {sup p{sup ′}} × M{sup H}, where M{sub N} {sup p,} {sup p{sup ′}} is a W{sub N} minimal model, generated by chiral spin-2, ⋯, spin-N currents, and labeled by two co-prime integers p and p{sup ′}, 1

  2. An AUC-based permutation variable importance measure for random forests.

    Science.gov (United States)

    Janitza, Silke; Strobl, Carolin; Boulesteix, Anne-Laure

    2013-04-05

    The random forest (RF) method is a commonly used tool for classification with high dimensional data as well as for ranking candidate predictors based on the so-called random forest variable importance measures (VIMs). However the classification performance of RF is known to be suboptimal in case of strongly unbalanced data, i.e. data where response class sizes differ considerably. Suggestions were made to obtain better classification performance based either on sampling procedures or on cost sensitivity analyses. However to our knowledge the performance of the VIMs has not yet been examined in the case of unbalanced response classes. In this paper we explore the performance of the permutation VIM for unbalanced data settings and introduce an alternative permutation VIM based on the area under the curve (AUC) that is expected to be more robust towards class imbalance. We investigated the performance of the standard permutation VIM and of our novel AUC-based permutation VIM for different class imbalance levels using simulated data and real data. The results suggest that the new AUC-based permutation VIM outperforms the standard permutation VIM for unbalanced data settings while both permutation VIMs have equal performance for balanced data settings. The standard permutation VIM loses its ability to discriminate between associated predictors and predictors not associated with the response for increasing class imbalance. It is outperformed by our new AUC-based permutation VIM for unbalanced data settings, while the performance of both VIMs is very similar in the case of balanced classes. The new AUC-based VIM is implemented in the R package party for the unbiased RF variant based on conditional inference trees. The codes implementing our study are available from the companion website: http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/070_drittmittel/janitza/index.html.
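
    The idea of an AUC-based permutation importance can be sketched outside the R party package as follows, using scikit-learn on simulated unbalanced data: each predictor is permuted in a hold-out set and the resulting drop in AUC is recorded. This is a simplified stand-in (hold-out rather than out-of-bag scoring, ordinary rather than conditional-inference forests), and all data and parameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, p = 2000, 6
X = rng.normal(size=(n, p))
# Only the first two predictors carry signal; classes are unbalanced (roughly 10% positives).
logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

base_auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
importance = []
for j in range(p):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])       # break the association of predictor j
    perm_auc = roc_auc_score(y_te, rf.predict_proba(X_perm)[:, 1])
    importance.append(base_auc - perm_auc)             # AUC-based importance

print("baseline AUC:", round(base_auc, 3))
print("AUC drop per predictor:", np.round(importance, 3))
```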

  3. The Los Alamos National Laboratory Chemistry and Metallurgy Research Facility upgrades project - A model for waste minimization

    International Nuclear Information System (INIS)

    Burns, M.L.; Durrer, R.E.; Kennicott, M.A.

    1996-07-01

    The Los Alamos National Laboratory (LANL) Chemistry and Metallurgy Research (CMR) Facility, constructed in 1952, is currently undergoing a major, multi-year construction project. Many of the operations required under this project (i.e., design, demolition, decontamination, construction, and waste management) mimic the processes required of a large-scale decontamination and decommissioning (D&D) job and are identical to the requirements of any of several upgrades projects anticipated for LANL and other Department of Energy (DOE) sites. For these reasons the CMR Upgrades Project is seen as an ideal model facility - to test the application, and measure the success of - waste minimization techniques which could be brought to bear on any of the similar projects. The purpose of this paper will be to discuss the past, present, and anticipated waste minimization applications at the facility and will focus on the development and execution of the project's "Waste Minimization/Pollution Prevention Strategic Plan."

  4. Multi-response permutation procedure as an alternative to the analysis of variance: an SPSS implementation.

    Science.gov (United States)

    Cai, Li

    2006-02-01

    A permutation test typically requires fewer assumptions than does a comparable parametric counterpart. The multi-response permutation procedure (MRPP) is a class of multivariate permutation tests of group difference useful for the analysis of experimental data. However, psychologists seldom make use of the MRPP in data analysis, in part because the MRPP is not implemented in popular statistical packages that psychologists use. A set of SPSS macros implementing the MRPP test is provided in this article. The use of the macros is illustrated by analyzing example data sets.
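
    Although the article provides SPSS macros, the core MRPP computation is small enough to sketch in Python: the statistic is a group-size-weighted average of within-group mean pairwise distances, and the p-value comes from permuting group labels. The weighting n_g/N and the example data below are illustrative choices, not taken from the article.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mrpp(data, groups, n_perm=5000, seed=0):
    """Multi-response permutation procedure.

    delta = sum_g (n_g / N) * (mean pairwise distance within group g);
    the p-value is the fraction of label permutations with delta <= the observed value.
    """
    rng = np.random.default_rng(seed)
    D = squareform(pdist(data))            # Euclidean distance matrix
    groups = np.asarray(groups)
    N = len(groups)

    def delta(labels):
        d = 0.0
        for g in np.unique(labels):
            idx = np.flatnonzero(labels == g)
            within = D[np.ix_(idx, idx)]
            d += (len(idx) / N) * within[np.triu_indices(len(idx), k=1)].mean()
        return d

    observed = delta(groups)
    count = sum(delta(rng.permutation(groups)) <= observed for _ in range(n_perm))
    return observed, (count + 1) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.normal(0.0, 1.0, size=(15, 3))
    b = rng.normal(0.8, 1.0, size=(15, 3))          # shifted group
    data = np.vstack([a, b])
    labels = np.array([0] * 15 + [1] * 15)
    obs, p = mrpp(data, labels)
    print(f"delta = {obs:.3f}, p = {p:.4f}")
```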

  5. All ternary permutation constraint satisfaction problems parameterized above average have kernels with quadratic numbers of variables

    DEFF Research Database (Denmark)

    Gutin, Gregory; Van Iersel, Leo; Mnich, Matthias

    2010-01-01

    A ternary Permutation-CSP is specified by a subset Π of the symmetric group S_3. An instance of such a problem consists of a set of variables V and a multiset of constraints, which are ordered triples of distinct variables of V. The objective is to find a linear ordering α of V that maximizes the number of triples whose rearrangement (under α) follows a permutation in Π. We prove that all ternary Permutation-CSPs parameterized above average have kernels with quadratic numbers of variables.

  6. A permutation testing framework to compare groups of brain networks.

    Science.gov (United States)

    Simpson, Sean L; Lyday, Robert G; Hayasaka, Satoru; Marsh, Anthony P; Laurienti, Paul J

    2013-01-01

    Brain network analyses have moved to the forefront of neuroimaging research over the last decade. However, methods for statistically comparing groups of networks have lagged behind. These comparisons have great appeal for researchers interested in gaining further insight into complex brain function and how it changes across different mental states and disease conditions. Current comparison approaches generally either rely on a summary metric or on mass-univariate nodal or edge-based comparisons that ignore the inherent topological properties of the network, yielding little power and failing to make network level comparisons. Gleaning deeper insights into normal and abnormal changes in complex brain function demands methods that take advantage of the wealth of data present in an entire brain network. Here we propose a permutation testing framework that allows comparing groups of networks while incorporating topological features inherent in each individual network. We validate our approach using simulated data with known group differences. We then apply the method to functional brain networks derived from fMRI data.

  7. Efficiency and credit ratings: a permutation-information-theory analysis

    International Nuclear Information System (INIS)

    Bariviera, Aurelio Fernandez; Martinez, Lisana B; Zunino, Luciano; Belén Guercio, M; Rosso, Osvaldo A

    2013-01-01

    The role of credit rating agencies has been under severe scrutiny after the subprime crisis. In this paper we explore the relationship between credit ratings and informational efficiency of a sample of thirty nine corporate bonds of US oil and energy companies from April 2008 to November 2012. For this purpose we use a powerful statistical tool, relatively new in the financial literature: the complexity–entropy causality plane. This representation space allows us to graphically classify the different bonds according to their degree of informational efficiency. We find that this classification agrees with the credit ratings assigned by Moody’s. In particular, we detect the formation of two clusters, which correspond to the global categories of investment and speculative grades. Regarding the latter cluster, two subgroups reflect distinct levels of efficiency. Additionally, we also find an intriguing absence of correlation between informational efficiency and firm characteristics. This allows us to conclude that the proposed permutation-information-theory approach provides an alternative practical way to justify bond classification. (paper)
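
    The ordinal-pattern machinery behind the complexity-entropy causality plane starts from the Bandt-Pompe permutation entropy, which the sketch below computes for a time series. The Jensen-Shannon statistical complexity that forms the second axis of the plane is omitted for brevity, and the embedding order, delay and test series are illustrative choices, not the paper's data.

```python
import math
from collections import Counter

def permutation_entropy(series, order=4, delay=1):
    """Normalized Bandt-Pompe permutation entropy in [0, 1].

    Each length-`order` window is mapped to the ordinal pattern given by the argsort
    of its values; the Shannon entropy of the pattern distribution is divided by log(order!).
    """
    patterns = Counter()
    for i in range(len(series) - (order - 1) * delay):
        window = series[i:i + order * delay:delay]
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        patterns[pattern] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

if __name__ == "__main__":
    import random
    random.seed(0)
    noise = [random.random() for _ in range(5000)]                    # informationally "efficient" series
    smooth = [math.sin(2 * math.pi * i / 200) for i in range(5000)]   # highly predictable series
    print("white noise :", round(permutation_entropy(noise), 3))      # close to 1
    print("smooth sine :", round(permutation_entropy(smooth), 3))     # well below 1
```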

  8. Non-minimally coupled quintessence dark energy model with a cubic galileon term: a dynamical system analysis

    Science.gov (United States)

    Bhattacharya, Somnath; Mukherjee, Pradip; Roy, Amit Singha; Saha, Anirban

    2018-03-01

    We consider a scalar field which is generally non-minimally coupled to gravity and has a characteristic cubic Galileon-like term and a generic self-interaction, as a candidate Dark Energy model. The system is dynamically analyzed and novel fixed points with perturbative stability are demonstrated. The evolution of the system is numerically studied near a novel fixed point which owes its existence to the Galileon character of the model. It turns out that demanding the stability of this novel fixed point puts a strong restriction on the allowed non-minimal coupling and the choice of the self-interaction. The evolution of the equation of state parameter is studied, which shows that our model predicts an accelerated universe throughout, and the phantom limit is only approached closely but never crossed. Our result thus extends the findings of Coley (Dynamical Systems and Cosmology, Kluwer Academic Publishers, Boston, 2013) to more general NMC than linear and quadratic couplings.

  9. Qualitative analysis of cosmological models in Brans-Dicke theory, solutions from non-minimal coupling and viscous universe

    International Nuclear Information System (INIS)

    Romero Filho, C.A.

    1988-01-01

    Using dynamical system theory we investigate homogeneous and isotropic models in Brans-Dicke theory for perfect fluids with general equation of state and arbitrary ω. Phase diagrams are drawn on the Poincare sphere which permits a qualitative analysis of the models. Based on this analysis we construct a method for generating classes of solutions in Brans-Dicke theory. The same technique is used for studying models arising from non-minimal coupling of electromagnetism with gravity. In addition, viscous fluids are considered and non-singular solutions with bulk viscosity are found. (author)

  10. Error-free holographic frames encryption with CA pixel-permutation encoding algorithm

    Science.gov (United States)

    Li, Xiaowei; Xiao, Dan; Wang, Qiong-Hua

    2018-01-01

    The security of video data is essential in network transmission; hence cryptography is a technique used to make video data secure and unreadable to unauthorized users. In this paper, we propose a holographic frames encryption technique based on a cellular automata (CA) pixel-permutation encoding algorithm. The concise pixel-permutation algorithm is used to address the drawbacks of traditional CA encoding methods. The effectiveness of the proposed video encoding method is demonstrated by simulation examples.

  11. Computing the Jones index of quadratic permutation endomorphisms of O2

    DEFF Research Database (Denmark)

    Szymanski, Wojciech; Conti, Roberto

    2009-01-01

    We compute the index of the type III_{1/2} factors arising from endomorphisms of the Cuntz algebra O_2 associated to the rank-two permutation matrices.

  12. Permutation-based inference for the AUC: A unified approach for continuous and discontinuous data.

    Science.gov (United States)

    Pauly, Markus; Asendorf, Thomas; Konietschke, Frank

    2016-11-01

    We investigate rank-based studentized permutation methods for the nonparametric Behrens-Fisher problem, that is, inference methods for the area under the ROC curve. We hereby prove that the studentized permutation distribution of the Brunner-Munzel rank statistic is asymptotically standard normal, even under the alternative. This incidentally provides the hitherto missing theoretical foundation for the Neubert and Brunner studentized permutation test. In particular, we do not only show its consistency, but also that confidence intervals for the underlying treatment effects can be computed by inverting this permutation test. In addition, we derive permutation-based range-preserving confidence intervals. Extensive simulation studies show that the permutation-based confidence intervals appear to maintain the preassigned coverage probability quite accurately (even for rather small sample sizes). For a convenient application of the proposed methods, a freely available software package for the statistical software R has been developed. A real data example illustrates the application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A new mathematical model for single machine batch scheduling problem for minimizing maximum lateness with deteriorating jobs

    Directory of Open Access Journals (Sweden)

    Ahmad Zeraatkar Moghaddam

    2012-01-01

    Full Text Available This paper presents a mathematical model for the problem of minimizing the maximum lateness on a single machine when deteriorating jobs are delivered to each customer in batches of various sizes. In reality, this issue may arise within a supply chain in which delivering goods to customers entails a cost. Under such a situation, holding completed jobs and delivering them in batches may reduce delivery costs. In the literature on batch scheduling, minimizing the maximum lateness is known to be NP-hard; therefore the present problem, which aims at minimizing delivery costs in addition to the aforementioned objective function, remains NP-hard. In order to solve the proposed model, a simulated annealing meta-heuristic is used, where the parameters are calibrated by the Taguchi approach and the results are compared to the global optimal values generated by Lingo 10 software. Furthermore, in order to check the efficiency of the proposed method on larger problem instances, a lower bound is generated. The results are also analyzed based on the effective factors of the problem. A computational study validates the efficiency and the accuracy of the presented model.
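
    A stripped-down version of the simulated annealing search can be sketched as follows; it minimizes maximum lateness over single-machine job sequences with a swap neighbourhood and ignores the batching, delivery-cost and deterioration features of the full model. The cooling schedule and the processing times/due dates are illustrative, not Taguchi-calibrated.

```python
import math
import random

def max_lateness(sequence, proc, due):
    t, worst = 0.0, float("-inf")
    for j in sequence:
        t += proc[j]
        worst = max(worst, t - due[j])
    return worst

def simulated_annealing(proc, due, t0=50.0, cooling=0.995, iters=20000, seed=0):
    random.seed(seed)
    current = list(range(len(proc)))
    random.shuffle(current)
    best = current[:]
    temp = t0
    for _ in range(iters):
        i, k = random.sample(range(len(proc)), 2)       # swap neighbourhood
        candidate = current[:]
        candidate[i], candidate[k] = candidate[k], candidate[i]
        delta = max_lateness(candidate, proc, due) - max_lateness(current, proc, due)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if max_lateness(current, proc, due) < max_lateness(best, proc, due):
                best = current[:]
        temp *= cooling
    return best, max_lateness(best, proc, due)

if __name__ == "__main__":
    proc = [4, 7, 2, 5, 6, 3]                 # hypothetical processing times
    due  = [9, 20, 6, 16, 25, 7]              # hypothetical due dates
    seq, lmax = simulated_annealing(proc, due)
    print("sequence:", seq, "max lateness:", lmax)
    # Sanity check: without batching, the earliest-due-date rule is optimal for L_max.
    print("EDD check:", max_lateness(sorted(range(len(proc)), key=due.__getitem__), proc, due))
```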

  14. Revitalization model of tapioca industry through environmental awareness reinforcement for minimizing water body contamination

    Science.gov (United States)

    Banowati, E.; Indriyanti, D. R.; Juhadi

    2018-03-01

    The tapioca industry in Margoyoso District is a household industry which contributes positively to the growth of the region's economy, as it absorbs 6.61% of the productive-age population, or 3,300 workers. On the other hand, the industry contaminates river water by discharging dissolved pollutants and particulates into water bodies, so that water quality decreases and the water may no longer be suitable for irrigation or agricultural use. The purpose of this research is to strengthen environmental awareness, assess the success of the reinforcement action, and minimize water body contamination. The research was conducted in two villages of the tapioca industry center in Margoyoso District, Pati Regency Administration Area. The coefficient of determination (R Square) is 0.802, which indicates a successful effort of 80.2%. The regression equation is Y = 34.097 + 0.608 X. Industrial entrepreneurs' concern increased by 8.45 on the total indicator, to a position of 70.72, so that the gradual effort succeeded in minimizing water contamination of the Suwatu River. The tapioca business community should build wastewater treatment installations.

  15. Minimization of energy consumption in HVAC systems with data-driven models and an interior-point method

    International Nuclear Information System (INIS)

    Kusiak, Andrew; Xu, Guanglin; Zhang, Zijun

    2014-01-01

    Highlights: • We study the energy saving of HVAC systems with a data-driven approach. • We conduct an in-depth analysis of the topology of developed Neural Network based HVAC model. • We apply interior-point method to solving a Neural Network based HVAC optimization model. • The uncertain building occupancy is incorporated in the minimization of HVAC energy consumption. • A significant potential of saving HVAC energy is discovered. - Abstract: In this paper, a data-driven approach is applied to minimize energy consumption of a heating, ventilating, and air conditioning (HVAC) system while maintaining the thermal comfort of a building with uncertain occupancy level. The uncertainty of arrival and departure rate of occupants is modeled by the Poisson and uniform distributions, respectively. The internal heating gain is calculated from the stochastic process of the building occupancy. Based on the observed and simulated data, a multilayer perceptron algorithm is employed to model and simulate the HVAC system. The data-driven models accurately predict future performance of the HVAC system based on the control settings and the observed historical information. An optimization model is formulated and solved with the interior-point method. The optimization results are compared with the results produced by the simulation models

  16. A new Nawaz-Enscore-Ham-based heuristic for permutation flow-shop problems with bicriteria of makespan and machine idle time

    Science.gov (United States)

    Liu, Weibo; Jin, Yan; Price, Mark

    2016-10-01

    A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving a permutation flow-shop scheduling problem. A new priority rule is proposed by accounting for the average, mean absolute deviation, skewness and kurtosis, in order to fully describe the distribution style of processing times. A new tie-breaking rule is also introduced for achieving effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate better solution quality of the proposed algorithm compared to existing benchmark heuristics.
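
    For context, the classical Nawaz-Enscore-Ham procedure that the new priority and tie-breaking rules build on can be sketched as follows: jobs are ordered by decreasing total processing time and inserted one by one at the position minimizing the partial makespan. The sketch uses the standard ordering rule, not the paper's distribution-based priority rule or its tie-breaking rule, and the processing-time matrix is hypothetical.

```python
def makespan(sequence, p):
    """Permutation flow-shop makespan; p[j][m] = processing time of job j on machine m."""
    m = len(p[0])
    completion = [0.0] * m
    for j in sequence:
        for k in range(m):
            completion[k] = max(completion[k], completion[k - 1] if k else 0.0) + p[j][k]
    return completion[-1]

def neh(p):
    """Nawaz-Enscore-Ham heuristic: order jobs by decreasing total processing time,
    then insert each job at the position that minimizes the partial makespan."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    sequence = []
    for j in jobs:
        best = min((sequence[:i] + [j] + sequence[i:] for i in range(len(sequence) + 1)),
                   key=lambda s: makespan(s, p))
        sequence = best
    return sequence, makespan(sequence, p)

if __name__ == "__main__":
    # 5 jobs x 3 machines, hypothetical processing times
    p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8], [3, 5, 6]]
    seq, cmax = neh(p)
    print("NEH sequence:", seq, "makespan:", cmax)
```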

  17. A comparison of the probability distribution of observed substorm magnitude with that predicted by a minimal substorm model

    Directory of Open Access Journals (Sweden)

    S. K. Morley

    2007-11-01

    Full Text Available We compare the probability distributions of substorm magnetic bay magnitudes from observations and a minimal substorm model. The observed distribution was derived previously and independently using the IL index from the IMAGE magnetometer network. The model distribution is derived from a synthetic AL index time series created using real solar wind data and a minimal substorm model, which was previously shown to reproduce observed substorm waiting times. There are two free parameters in the model which scale the contributions to AL from the directly-driven DP2 electrojet and loading-unloading DP1 electrojet, respectively. In a limited region of the 2-D parameter space of the model, the probability distribution of modelled substorm bay magnitudes is not significantly different to the observed distribution. The ranges of the two parameters giving acceptable (95% confidence level) agreement are consistent with expectations using results from other studies. The approximately linear relationship between the two free parameters over these ranges implies that the substorm magnitude simply scales linearly with the solar wind power input at the time of substorm onset.

  18. The reliability, accuracy and minimal detectable difference of a multi-segment kinematic model of the foot-shoe complex.

    Science.gov (United States)

    Bishop, Chris; Paul, Gunther; Thewlis, Dominic

    2013-04-01

    Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when wearing shoes. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive to describe the kinematics of the foot-shoe complex and lower leg during walking gait. In order to achieve this, a new marker set was established, consisting of 25 markers applied on the shoe and skin surface, which informed a four segment kinematic model of the foot-shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was proven to be good to excellent (ICC=0.75-0.98) indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC=0.68-0.99) than the inexperienced rater (ICC=0.38-0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint; tibiocalcaneal joint--MDD90=2.17-9.36°, tarsometatarsal joint--MDD90=1.03-9.29° and the metatarsophalangeal joint--MDD90=1.75-9.12°. These thresholds proposed are specific for the description of shod motion, and can be used in future research designed at comparing between different footwear. Copyright © 2012 Elsevier B.V. All rights reserved.
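
    For readers unfamiliar with the MDD90 metric, such thresholds are conventionally derived from the standard error of measurement as MDD90 = 1.645 · √2 · SEM, with SEM = SD · √(1 − ICC). The snippet below applies this standard relation to illustrative numbers only; the paper's exact computation and values are not reproduced here.

```python
import math

def mdd90(sd, icc):
    """Minimal detectable difference at the 90% confidence level.

    SEM = SD * sqrt(1 - ICC);  MDD90 = 1.645 * sqrt(2) * SEM.
    """
    sem = sd * math.sqrt(1.0 - icc)
    return 1.645 * math.sqrt(2.0) * sem

if __name__ == "__main__":
    # Illustrative standard deviations (degrees) and ICC values, not taken from the paper:
    for sd, icc in [(3.0, 0.90), (3.0, 0.75), (5.0, 0.95)]:
        print(f"SD={sd}, ICC={icc}:  MDD90 = {mdd90(sd, icc):.2f} deg")
```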

  19. Cost Minimization Model of Gas Transmission Line for Indonesian SIJ Pipeline Network

    Directory of Open Access Journals (Sweden)

    Septoratno Siregar

    2003-05-01

    Full Text Available The optimization of the Indonesian SIJ gas pipeline network is discussed here. Optimum pipe diameters, together with the corresponding pressure distribution, are obtained from the minimization of a total cost function consisting of investment and operating costs, subject to some physical constraints (Panhandle A and Panhandle B equations). An iteration technique based on the generalized steepest-descent method and the fourth-order Runge-Kutta method is used here. The resulting diameters from this continuous optimization are then rounded to the closest available discrete sizes. We have also calculated the toll fee along each segment and the safety factor of the network by determining the pipe wall thickness, using the ANSI B31.8 standard. A sensitivity analysis of the toll fee for variations in flow rate is shown here. The result gives the pipe diameter, compressor size and compressor location that are feasible for the SIJ pipeline project. The results also indicate that the east route is relatively less expensive than the west route.

  20. Nanosilver induces minimal lung toxicity or inflammation in a subacute murine inhalation model

    Directory of Open Access Journals (Sweden)

    O'Shaughnessy Patrick T

    2011-01-01

    Full Text Available Abstract Background There is increasing interest in the environmental and health consequences of silver nanoparticles as the use of this material becomes widespread. Although human exposure to nanosilver is increasing, only a few studies address possible toxic effects of inhaled nanosilver. The objective of this study was to determine whether very small commercially available nanosilver induces pulmonary toxicity in mice following inhalation exposure. Results In this study, mice were exposed sub-acutely by inhalation to well-characterized nanosilver (3.3 mg/m³, 4 hours/day, 10 days, 5 ± 2 nm primary size). Toxicity was assessed by enumeration of total and differential cells, determination of total protein, lactate dehydrogenase activity and inflammatory cytokines in bronchoalveolar lavage fluid. Lungs were evaluated for histopathologic changes and the presence of silver. In contrast to published in vitro studies, minimal inflammatory response or toxicity was found following exposure to nanosilver in our in vivo study. The median retained dose of nanosilver in the lungs measured by inductively coupled plasma - optical emission spectroscopy (ICP-OES) was 31 μg/g lung (dry weight) immediately after the final exposure, 10 μg/g following exposure and a 3-wk rest period, and zero in sham-exposed controls. Dissolution studies showed that nanosilver did not dissolve in solutions mimicking the intracellular or extracellular milieu. Conclusions Mice exposed to nanosilver showed minimal pulmonary inflammation or cytotoxicity following sub-acute exposures. However, longer term exposures with higher lung burdens of nanosilver are needed to ensure that there are no chronic effects and to evaluate possible translocation to other organs.

  1. Minimal duality breaking in the Kallen-Lehman approach to 3D Ising model: A numerical test

    International Nuclear Information System (INIS)

    Astorino, Marco; Canfora, Fabrizio; Martinez, Cristian; Parisi, Luca

    2008-01-01

    A Kallen-Lehman approach to the 3D Ising model is analyzed numerically both at low and high temperatures. It is shown that, even assuming a minimal duality breaking, one can fix three parameters of the model to obtain very good agreement with the Monte Carlo results at high temperatures. With the same parameters the agreement is satisfactory both at low and near-critical temperatures. How to improve the agreement with Monte Carlo results by introducing a more general duality breaking is briefly discussed.

  2. Design and Modelling of Sustainable Bioethanol Supply Chain by Minimizing the Total Ecological Footprint in Life Cycle Perspective

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Manzardo, Alessandro; Toniolo, Sara

    2013-01-01

    The purpose of this paper is to develop a model for designing the most sustainable bioethanol supply chain. Taking into consideration the possibility of multiple feedstocks, multiple transportation modes, multiple alternative technologies, multiple transport patterns and multiple waste disposal manners in bioethanol systems, this study developed a model for designing the most sustainable bioethanol supply chain by minimizing the total ecological footprint under some prerequisite constraints, including satisfying the goals of the stakeholders, the limitation of resources and energy, the capacity…

  3. A Stochastic Integer Programming Model for Minimizing Cost in the Use of Rain Water Collectors for Firefighting

    Directory of Open Access Journals (Sweden)

    Luis A. Rivera-Morales

    2014-01-01

    Full Text Available In this paper we propose a stochastic integer programming optimization model to determine the optimal location and number of rain water collectors (RWCs) for forest firefighting. The objective is to minimize expected total cost to control forest fires. The model is tested using a real case and several additional realistic scenarios. The impact on the solution of varying the limit on the number of RWCs, the RWC water capacity, the aircraft capacity, the water demands, and the aircraft operating cost is explored. Some observations are that the objective value improves with larger RWCs and with the use of aircraft with greater capacity.

  4. A Parsimonious Model of the Rabbit Action Potential Elucidates the Minimal Physiological Requirements for Alternans and Spiral Wave Breakup.

    Science.gov (United States)

    Gray, Richard A; Pathmanathan, Pras

    2016-10-01

    Elucidating the underlying mechanisms of fatal cardiac arrhythmias requires a tight integration of electrophysiological experiments, models, and theory. Existing models of the transmembrane action potential (AP) are complex (resulting in over-parameterization) and varied (leading to dissimilar predictions). Thus, simpler models are needed to elucidate the "minimal physiological requirements" to reproduce significant observable phenomena using as few parameters as possible. Moreover, models have been derived from experimental studies from a variety of species under a range of environmental conditions (for example, all existing rabbit AP models incorporate a formulation of the rapid sodium current, INa, based on 30-year-old data from chick embryo cell aggregates). Here we develop a simple "parsimonious" rabbit AP model that is mathematically identifiable (i.e., not over-parameterized) by combining a novel Hodgkin-Huxley formulation of INa with a phenomenological model of repolarization similar to the voltage-dependent, time-independent rectifying outward potassium current (IK). The model was calibrated using the following experimental data sets measured from the same species (rabbit) under physiological conditions: dynamic current-voltage (I-V) relationships during the AP upstroke; rapid recovery of AP excitability during the relative refractory period; and steady-state INa inactivation via voltage clamp. Simulations reproduced several important "emergent" phenomena including cellular alternans at rates > 250 bpm as observed in rabbit myocytes, reentrant spiral waves as observed on the surface of the rabbit heart, and spiral wave breakup. Model variants were studied which elucidated the minimal requirements for alternans and spiral wave breakup, namely the kinetics of INa inactivation and the non-linear rectification of IK. The simplicity of the model, and the fact that its parameters have physiological meaning, make it ideal for engendering generalizable mechanistic

  5. Minimal variance hedging of natural gas derivatives in exponential Lévy models: Theory and empirical performance

    International Nuclear Information System (INIS)

    Ewald, Christian-Oliver; Nawar, Roy; Siu, Tak Kuen

    2013-01-01

    We consider the problem of hedging European options written on natural gas futures, in a market where prices of traded assets exhibit jumps, by trading in the underlying asset. We provide a general expression for the hedging strategy which minimizes the variance of the terminal hedging error, in terms of stochastic integral representations of the payoffs of the options involved. This formula is then applied to compute hedge ratios for common options in various models with jumps, leading to easily computable expressions. As a benchmark we take the standard Black–Scholes and Merton delta hedges. We show that in natural gas option markets minimal variance hedging with underlying consistently outperform the benchmarks by quite a margin. - Highlights: ► We derive hedging strategies for European type options written on natural gas futures. ► These are tested empirically using Henry Hub natural gas futures and options data. ► We find that our hedges systematically outperform classical benchmarks

  6. A new minimal-stress freely-moving rat model for preclinical studies on intranasal administration of CNS drugs.

    Science.gov (United States)

    Stevens, Jasper; Suidgeest, Ernst; van der Graaf, Piet Hein; Danhof, Meindert; de Lange, Elizabeth C M

    2009-08-01

    To develop a new minimal-stress model for intranasal administration in freely moving rats and to evaluate in this model the brain distribution of acetaminophen following intranasal versus intravenous administration. Male Wistar rats received one intranasal cannula, an intra-cerebral microdialysis probe, and two blood cannulas for drug administration and serial blood sampling respectively. To evaluate this novel model, the following experiments were conducted. 1) Evans Blue was administered to verify the selectivity of intranasal exposure. 2) During a 1 min infusion 10, 20, or 40 microl saline was administered intranasally or 250 microl intravenously. Corticosterone plasma concentrations over time were compared as biomarkers for stress. 3) 200 microg of the model drug acetaminophen was given in identical setup and plasma, and brain pharmacokinetics were determined. In 96% of the rats, only the targeted nasal cavity was deeply colored. Corticosterone plasma concentrations were not influenced, neither by route nor volume of administration. Pharmacokinetics of acetaminophen were identical after intravenous and intranasal administration, although the Cmax in microdialysates was reached a little earlier following intravenous administration. A new minimal-stress model for intranasal administration in freely moving rats has been successfully developed and allows direct comparison with intravenous administration.

  7. A Gibbs Energy Minimization Approach for Modeling of Chemical Reactions in a Basic Oxygen Furnace

    Science.gov (United States)

    Kruskopf, Ari; Visuri, Ville-Valtteri

    2017-12-01

    In modern steelmaking, hot metal is decarburized and converted into steel primarily in converter processes, such as the basic oxygen furnace. The objective of this work was to develop a new mathematical model for the top-blown steel converter, which accounts for the complex reaction equilibria in the impact zone, also known as the hot spot, as well as the associated mass and heat transport. An in-house computer code of the model has been developed in Matlab. The main assumption of the model is that all reactions take place in a specified reaction zone. The mass transfer between the reaction volume, bulk slag, and metal determines the reaction rates for the species. The thermodynamic equilibrium is calculated using the partitioning of Gibbs energy (PGE) method. The activity model for the liquid metal is the unified interaction parameter model and for the liquid slag the modified quasichemical model (MQM). The MQM was validated by calculating iso-activity lines for the liquid slag components. The PGE method together with the MQM was validated by calculating liquidus lines for solid components. The results were compared with measurements from the literature. The full chemical reaction model was validated by comparing the metal and slag compositions to measurements from an industrial-scale converter. The predictions were found to be in good agreement with the measured values. Furthermore, the accuracy of the model was found to compare favorably with the models proposed in the literature. The real-time capability of the proposed model was confirmed in test calculations.

  8. A Singlet Extension of the Minimal Supersymmetric Standard Model: Towards a More Natural Solution to the Little Hierarchy Problem

    Energy Technology Data Exchange (ETDEWEB)

    de la Puente, Alejandro [Univ. of Notre Dame, IN (United States)

    2012-05-01

    In this work, I present a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), with an explicit μ-term and a supersymmetric mass for the singlet superfield, as a route to alleviating the little hierarchy problem of the Minimal Supersymmetric Standard Model (MSSM). I analyze two limiting cases of the model, characterized by the size of the supersymmetric mass for the singlet superfield. The small and large limits of this mass parameter are studied, and I find that I can generate masses for the lightest neutral Higgs boson up to 140 GeV with top squarks below the TeV scale, all couplings perturbative up to the gauge unification scale, and with no need to fine-tune parameters in the scalar potential. This model, which I call the S-MSSM, is also embedded in a gauge-mediated supersymmetry breaking scheme. I find that even with a minimal embedding of the S-MSSM into a gauge-mediated scheme, the mass of the lightest Higgs boson can easily be above 114 GeV, while keeping the top squarks below the TeV scale. Furthermore, I also study the forward-backward asymmetry in the tt̄ system within the framework of the S-MSSM. For this purpose, non-renormalizable couplings between the first and third generations of quarks to scalars are introduced. The two limiting cases of the S-MSSM, characterized by the size of the supersymmetric mass for the singlet superfield, are analyzed, and I find that in the region of small singlet supersymmetric mass a large asymmetry can be obtained while remaining consistent with constraints arising from flavor physics, quark masses and top quark decays.

  9. Widespread occurrence of organelle genome-encoded 5S rRNAs including permuted molecules.

    Science.gov (United States)

    Valach, Matus; Burger, Gertraud; Gray, Michael W; Lang, B Franz

    2014-12-16

    5S Ribosomal RNA (5S rRNA) is a universal component of ribosomes, and the corresponding gene is easily identified in archaeal, bacterial and nuclear genome sequences. However, organelle gene homologs (rrn5) appear to be absent from most mitochondrial and several chloroplast genomes. Here, we re-examine the distribution of organelle rrn5 by building mitochondrion- and plastid-specific covariance models (CMs) with which we screened organelle genome sequences. We not only recover all organelle rrn5 genes annotated in GenBank records, but also identify more than 50 previously unrecognized homologs in mitochondrial genomes of various stramenopiles, red algae, cryptomonads, malawimonads and apusozoans, and surprisingly, in the apicoplast (highly derived plastid) genomes of the coccidian pathogens Toxoplasma gondii and Eimeria tenella. Comparative modeling of RNA secondary structure reveals that mitochondrial 5S rRNAs from brown algae adopt a permuted triskelion shape that has not been seen elsewhere. Expression of the newly predicted rrn5 genes is confirmed experimentally in 10 instances, based on our own and published RNA-Seq data. This study establishes that particularly mitochondrial 5S rRNA has a much broader taxonomic distribution and a much larger structural variability than previously thought. The newly developed CMs will be made available via the Rfam database and the MFannot organelle genome annotator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    Science.gov (United States)

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling, which belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves itself by the standard PSO, and then each subpopulation is updated by using different local search schemes such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP.
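
    A bare-bones relative of this approach can be sketched with a single-population continuous PSO and the common random-key decoding, in which a particle's position vector is argsorted into a job permutation and scored by its makespan. The multipopulation structure, the VNS/IIS local searches and the EDA sampling of MPSOMA are omitted, and all PSO parameters and the instance below are illustrative assumptions.

```python
import numpy as np

def makespan(seq, p):
    """p[j, m]: processing time of job j on machine m (permutation flow shop)."""
    m = p.shape[1]
    c = np.zeros(m)
    for j in seq:
        for k in range(m):
            c[k] = max(c[k], c[k - 1] if k else 0.0) + p[j, k]
    return c[-1]

def pso_pfssp(p, swarm=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Continuous PSO with random-key decoding: argsort of a position vector
    gives a job permutation, which is evaluated by its makespan."""
    rng = np.random.default_rng(seed)
    n = p.shape[0]
    x = rng.uniform(0, 1, (swarm, n))
    v = np.zeros((swarm, n))
    pbest = x.copy()
    pbest_val = np.array([makespan(np.argsort(xi), p) for xi in x])
    g = pbest[pbest_val.argmin()].copy()
    g_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random((swarm, n)), rng.random((swarm, n))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([makespan(np.argsort(xi), p) for xi in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if pbest_val.min() < g_val:
            g, g_val = pbest[pbest_val.argmin()].copy(), pbest_val.min()
    return np.argsort(g), g_val

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    p = rng.integers(1, 20, size=(10, 5)).astype(float)   # 10 jobs, 5 machines
    seq, cmax = pso_pfssp(p)
    print("best sequence:", seq.tolist(), "makespan:", cmax)
```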

  11. A Multipopulation PSO Based Memetic Algorithm for Permutation Flow Shop Scheduling

    Directory of Open Access Journals (Sweden)

    Ruochen Liu

    2013-01-01

    Full Text Available The permutation flow shop scheduling problem (PFSSP) is part of production scheduling, which belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations in which each particle evolves itself by the standard PSO, and then each subpopulation is updated by using different local search schemes such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP.

  12. Strategic planning for minimizing CO2 emissions using LP model based on forecasted energy demand by PSO Algorithm and ANN

    Energy Technology Data Exchange (ETDEWEB)

    Yousefi, M.; Omid, M.; Rafiee, Sh. [Department of Agricultural Machinery Engineering, University of Tehran, Karaj (Iran, Islamic Republic of); Ghaderi, S. F. [Department of Industrial Engineering, University of Tehran, Tehran (Iran, Islamic Republic of)

    2013-07-01

    Iran's primary energy consumption (PEC) was modeled as a linear function of five socioeconomic and meteorological explanatory variables using particle swarm optimization (PSO) and artificial neural network (ANN) techniques. Results revealed that the ANN outperforms the PSO model in predicting the test data. However, the PSO technique is simple and provided us with a closed-form expression to forecast PEC. Energy demand was forecasted by PSO and ANN using the represented scenario. Finally, adopting about 10% renewable energy revealed that, based on the developed linear programming (LP) model under minimum CO2 emissions, Iran will emit about 2520 million metric tons of CO2 in 2025. The LP model indicated that maximum possible development of hydropower, geothermal and wind energy resources will satisfy the aim of minimizing CO2 emissions. Therefore, the main strategic policy for reducing CO2 emissions would be exploitation of these resources.
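    A toy version of the final LP step can be sketched with scipy.optimize.linprog; the supply options, emission factors, capacities and demand below are invented placeholders, not the study's data.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical supply options: [hydro, geothermal, wind, gas, oil]
        emission = np.array([10.0, 40.0, 12.0, 490.0, 730.0])   # g CO2 per kWh (illustrative)
        capacity = np.array([90.0, 15.0, 40.0, 400.0, 400.0])   # TWh available per year (illustrative)
        demand = 450.0                                           # forecasted demand in TWh (e.g., from the PSO/ANN step)

        # Minimize total emissions subject to meeting demand within capacity limits.
        res = linprog(c=emission,
                      A_eq=[np.ones_like(emission)], b_eq=[demand],
                      bounds=list(zip(np.zeros_like(capacity), capacity)))
        print(res.x)    # optimal generation mix (TWh per option)
        print(res.fun)  # objective value (relative emission units)

    As in the study's conclusion, the solver fills the low-emission hydro, geothermal and wind capacities first and only then dispatches fossil options to cover the remaining demand.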

  13. Modified Higgs boson phenomenology from gauge or gaugino mediation in the next-to-minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Morrissey, David E.; Pierce, Aaron

    2008-01-01

    In the next-to-minimal supersymmetric standard model (NMSSM), the presence of light pseudoscalars can have a dramatic effect on the decays of the standard model-like Higgs boson. These pseudoscalars are naturally light if supersymmetry breaking preserves an approximate U(1) R symmetry, spontaneously broken when the Higgs bosons take on their expectation values. We investigate two classes of theories that possess such an approximate U(1) R at the mediation scale: modifications of gauge and gaugino mediation. In the models we consider, we find two disjoint classes of phenomenologically allowed parameter regions. One of these regions corresponds to a limit where the singlet of the NMSSM largely decouples. The other can give rise to a standard model-like Higgs boson with dominant branching into light pseudoscalars.

  14. A Comprehensive Mathematical Programming Model for Minimizing Costs in A Multiple-Item Reverse Supply Chain with Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Mahmoudi Hoda

    2014-09-01

    Full Text Available A reverse supply chain is configured by a sequence of elements forming a continuous process to treat return products until they are properly recovered or disposed of. The activities in a reverse supply chain include collection, cleaning, disassembly, test and sorting, storage, transport, and recovery operations. This paper presents a mathematical programming model with the objective of minimizing the total costs of the reverse supply chain, including transportation, fixed opening, operation, maintenance and remanufacturing costs of centers. The proposed model considers the design of a multi-layer, multi-product reverse supply chain that consists of returning, disassembly, processing, recycling, remanufacturing, materials and distribution centers. This integer linear programming model is solved by using Lingo 9 software and the results are reported. Finally, a sensitivity analysis of the proposed model is also presented.

  16. PDX-MI: Minimal Information for Patient-Derived Tumor Xenograft Models

    NARCIS (Netherlands)

    Meehan, Terrence F.; Conte, Nathalie; Goldstein, Theodore; Inghirami, Giorgio; Murakami, Mark A.; Brabetz, Sebastian; Gu, Zhiping; Wiser, Jeffrey A.; Dunn, Patrick; Begley, Dale A.; Krupke, Debra M.; Bertotti, Andrea; Bruna, Alejandra; Brush, Matthew H.; Byrne, Annette T.; Caldas, Carlos; Christie, Amanda L.; Clark, Dominic A.; Dowst, Heidi; Dry, Jonathan R.; Doroshow, James H.; Duchamp, Olivier; Evrard, Yvonne A.; Ferretti, Stephane; Frese, Kristopher K.; Goodwin, Neal C.; Greenawalt, Danielle; Haendel, Melissa A.; Hermans, Els; Houghton, Peter J.; Jonkers, Jos; Kemper, Kristel; Khor, Tin O.; Lewis, Michael T.; Lloyd, K. C. Kent; Mason, Jeremy; Medico, Enzo; Neuhauser, Steven B.; Olson, James M.; Peeper, Daniel S.; Rueda, Oscar M.; Seong, Je Kyung; Trusolino, Livio; Vinolo, Emilie; Wechsler-Reya, Robert J.; Weinstock, David M.; Welm, Alana; Weroha, S. John; Amant, Frédéric; Pfister, Stefan M.; Kool, Marcel; Parkinson, Helen; Butte, Atul J.; Bult, Carol J.

    2017-01-01

    Patient-derived tumor xenograft (PDX) mouse models have emerged as an important oncology research platform to study tumor evolution, mechanisms of drug response and resistance, and tailoring chemotherapeutic approaches for individual patients. The lack of robust standards for reporting on PDX models…

  17. Discrete-State and Continuous Models of Recognition Memory: Testing Core Properties under Minimal Assumptions

    Science.gov (United States)

    Kellen, David; Klauer, Karl Christoph

    2014-01-01

    A classic discussion in the recognition-memory literature concerns the question of whether recognition judgments are better described by continuous or discrete processes. These two hypotheses are instantiated by the signal detection theory model (SDT) and the 2-high-threshold model, respectively. Their comparison has almost invariably relied on…

  18. Codeswitching and Generative Grammar: A Critique of the MLF Model and Some Remarks on "Modified Minimalism"

    Science.gov (United States)

    MacSwan, Jeff

    2005-01-01

    This article presents an empirical and theoretical critique of the Matrix Language Frame (MLF) model (Myers-Scotton, 1993; Myers-Scotton and Jake, 2001), and includes a response to Jake, Myers-Scotton and Gross's (2002) (JMSG) critique of MacSwan (1999, 2000) and reactions to their revision of the MLF model as a "modified minimalist approach." The…

  19. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment.

    Science.gov (United States)

    Brookings, Ted; Goeritz, Marie L; Marder, Eve

    2014-11-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.

  20. EFFORTS TO ACHIEVE THE MINIMUM MASTERY STANDARD (SKBM) THROUGH COOPERATIVE LEARNING WITH THE STUDENT TEAM ACHIEVEMENT DIVISION (STAD) MODEL

    Directory of Open Access Journals (Sweden)

    Nurcholish Arifin Handoyono

    2015-12-01

    Full Text Available The purpose of this study is to increase the achievement of the minimum mastery standard in the automotive electrical system repair subject by applying the Student Team Achievement Division (STAD) model of cooperative learning. This study used classroom action research conducted in two cycles; each cycle consisted of four phases: planning, implementation, observation, and reflection. The data were analyzed descriptively. The results proved that the teaching and learning process using STAD cooperative learning increased the number of students reaching the minimum mastery standard. Before applying STAD cooperative learning, none of the students passed the minimum mastery standard. After applying it, the proportion of students passing the standard reached 48.48% in the first cycle and 87.88% in the second cycle. The average score reached 71.48 and 81.83 in the first and second cycles, respectively. Therefore, this study concludes that the STAD cooperative model increased the achievement of the minimum mastery standard in the automotive electrical system repair subject.

  1. Development of isothermal-isobaric replica-permutation method for molecular dynamics and Monte Carlo simulations and its application to reveal temperature and pressure dependence of folded, misfolded, and unfolded states of chignolin

    Science.gov (United States)

    Yamauchi, Masataka; Okumura, Hisashi

    2017-11-01

    We developed a two-dimensional replica-permutation molecular dynamics method in the isothermal-isobaric ensemble. The replica-permutation method is a better alternative to the replica-exchange method and was originally developed in the canonical ensemble. This method employs the Suwa-Todo algorithm, instead of the Metropolis algorithm, to perform permutations of temperatures and pressures among more than two replicas so that the rejection ratio can be minimized. We showed that the isothermal-isobaric replica-permutation method achieves better sampling efficiency than the isothermal-isobaric replica-exchange method and the infinite swapping method. We applied this method to a β-hairpin mini protein, chignolin. In this simulation, we observed not only the folded state but also the misfolded state. We calculated the temperature and pressure dependence of the fractions of the folded, misfolded, and unfolded states. Differences in partial molar enthalpy, internal energy, entropy, partial molar volume, and heat capacity were also determined and agreed well with experimental data. We observed a new phenomenon: misfolded chignolin becomes more stable under high-pressure conditions. We also revealed the mechanism of this stability as follows: the TYR2 and TRP9 side chains cover the hydrogen bonds that form the β-hairpin structure, so the hydrogen bonds are protected from the water molecules that approach the protein as the pressure increases.
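    The central idea of replica permutation — proposing a reassignment of temperatures among all replicas at once instead of pairwise exchanges — can be sketched as follows. This toy Python example is not the authors' code: it treats only the canonical case (for the isothermal-isobaric ensemble the exponent would also contain pressure-volume terms) and replaces the Suwa-Todo update with exact heat-bath sampling over all permutations, which is only practical for a handful of replicas; the energies and temperatures are made up.

        import itertools, math, random

        def sample_temperature_assignment(energies, betas, rng=random):
            """Choose a permutation sigma (replica i gets inverse temperature betas[sigma[i]])
            with probability proportional to exp(-sum_i betas[sigma[i]] * energies[i])."""
            perms = list(itertools.permutations(range(len(betas))))
            # Subtract the maximum exponent for numerical stability.
            exponents = [-sum(betas[s[i]] * energies[i] for i in range(len(s))) for s in perms]
            m = max(exponents)
            weights = [math.exp(e - m) for e in exponents]
            total = sum(weights)
            r = rng.random() * total
            acc = 0.0
            for s, w in zip(perms, weights):
                acc += w
                if r <= acc:
                    return s
            return perms[-1]

        random.seed(1)
        energies = [-120.0, -118.5, -121.3, -119.0]       # instantaneous potential energies (illustrative)
        betas = [1.0 / t for t in (0.9, 1.0, 1.1, 1.2)]   # inverse temperatures (illustrative units)
        print(sample_temperature_assignment(energies, betas))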

  2. A population-based Bayesian approach to the minimal model of glucose and insulin homeostasis

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Højbjerre, Malene

    2005-01-01

    …-posed estimation problem, where the reconstruction most often has been done by non-linear least squares techniques separately for each entity. The minimal model was originally specified for a single individual and does not combine several individuals with the advantage of estimating the metabolic portrait… to a population-based model. The estimation of the parameters is efficiently implemented in a Bayesian approach where posterior inference is made through the use of Markov chain Monte Carlo techniques. Hereby we obtain a powerful and flexible modelling framework for regularizing the ill-posed estimation problem…

  3. Possible evolution of a bouncing universe in cosmological models with non-minimally coupled scalar fields

    International Nuclear Information System (INIS)

    Pozdeeva, Ekaterina O.; Vernov, Sergey Yu.; Skugoreva, Maria A.; Toporensky, Alexey V.

    2016-01-01

    We explore dynamics of cosmological models with bounce solutions evolving on a spatially flat Friedmann-Lemaître-Robertson-Walker background. We consider cosmological models that contain the Hilbert-Einstein curvature term, the induced gravity term with a negative coupling constant, and even polynomial potentials of the scalar field. Bounce solutions with non-monotonic Hubble parameters have been obtained and analyzed. The case when the scalar field has the conformal coupling and the Higgs-like potential with an opposite sign is studied in detail. In this model the evolution of the Hubble parameter of the bounce solution essentially depends on the sign of the cosmological constant.

  4. Higgs bosons in the next-to-minimal supersymmetric standard model at the LHC

    International Nuclear Information System (INIS)

    Ellwanger, Ulrich

    2011-01-01

    We review possible properties of Higgs bosons in the NMSSM which allow one to discriminate this model from the MSSM: masses of mostly standard-model-like Higgs bosons at or above 140 GeV, or enhanced branching fractions into two photons, or Higgs-to-Higgs decays. In the case of a standard-model-like Higgs boson above 140 GeV, it is necessarily accompanied by a lighter state with a large gauge singlet component. Examples for such scenarios are presented. Available studies on Higgs-to-Higgs decays are discussed according to the various Higgs production modes, light Higgs masses and decay channels. (orig.)

  5. On the use of permutation in and the performance of a class of nonparametric methods to detect differential gene expression.

    Science.gov (United States)

    Pan, Wei

    2003-07-22

    Recently a class of nonparametric statistical methods, including the empirical Bayes (EB) method, the significance analysis of microarray (SAM) method and the mixture model method (MMM), have been proposed to detect differential gene expression for replicated microarray experiments conducted under two conditions. All the methods depend on constructing a test statistic Z and a so-called null statistic z. The null statistic z is used to provide some reference distribution for Z such that statistical inference can be accomplished. A common way of constructing z is to apply Z to randomly permuted data. Here we point out that the distribution of z may not approximate the null distribution of Z well, leading to possibly too conservative inference. This observation may apply to other permutation-based nonparametric methods. We propose a new method of constructing a null statistic that aims to estimate the null distribution of a test statistic directly. Using simulated data and real data, we assess and compare the performance of the existing method and our new method when applied in EB, SAM and MMM. Some interesting findings on operating characteristics of EB, SAM and MMM are also reported. Finally, by combining the idea of SAM and MMM, we outline a simple nonparametric method based on the direct use of a test statistic and a null statistic.
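    The construction discussed here can be illustrated generically (this is not the EB, SAM or MMM code; the simulated data and the Welch-type statistic are arbitrary choices): Z is computed on the original labels, and the null statistic z is obtained by recomputing the same statistic on permuted labels.

        import numpy as np

        rng = np.random.default_rng(0)

        def tstat(x, y):
            """Welch-type t statistic per gene (rows are genes)."""
            nx, ny = x.shape[1], y.shape[1]
            se = np.sqrt(x.var(axis=1, ddof=1) / nx + y.var(axis=1, ddof=1) / ny)
            return (x.mean(axis=1) - y.mean(axis=1)) / se

        # Simulated expression matrix: 1000 genes, 6 arrays (3 per condition).
        genes, n1, n2 = 1000, 3, 3
        data = rng.normal(size=(genes, n1 + n2))
        data[:50, :n1] += 2.0                      # 50 genes truly differentially expressed
        labels = np.array([0] * n1 + [1] * n2)

        Z = tstat(data[:, labels == 0], data[:, labels == 1])

        # Null statistic z: the same statistic on permuted column labels, pooled over permutations.
        z = []
        for _ in range(200):
            perm = rng.permutation(labels)
            z.append(tstat(data[:, perm == 0], data[:, perm == 1]))
        z = np.concatenate(z)

        # Crude two-sided p-values by referencing Z to the permutation null distribution.
        pvals = np.array([(np.abs(z) >= abs(v)).mean() for v in Z])
        print((pvals < 0.05).sum(), "genes flagged at nominal 0.05")

    The paper's caveat is that this pooled permutation distribution of z need not match the true null distribution of Z, which motivates their alternative null-statistic construction.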

  6. Multiple travelling-wave solutions in a minimal model for cell motility

    KAUST Repository

    Kimpton, L. S.; Whiteley, J. P.; Waters, S. L.; King, J. R.; Oliver, J. M.

    2012-01-01

    …-phase, poroviscous, reactive flow model displays various types of behaviour relevant to cell crawling. We present stability analyses that show that an asymmetric perturbation is required to cause a spatially uniform, stationary strip of cytoplasm to move, which…

  7. An orbital-overlap model for minimal work functions of cesiated metal surfaces

    International Nuclear Information System (INIS)

    Chou, Sharon H; Bargatin, Igor; Howe, Roger T; Voss, Johannes; Vojvodic, Aleksandra; Abild-Pedersen, Frank

    2012-01-01

    We introduce a model for the effect of cesium adsorbates on the work function of transition metal surfaces. The model builds on the classical point-dipole equation by adding exponential terms that characterize the degree of orbital overlap between the 6s states of neighboring cesium adsorbates and its effect on the strength and orientation of electric dipoles along the adsorbate-substrate interface. The new model improves upon earlier models in terms of agreement with the work function-coverage curves obtained via first-principles calculations based on density functional theory. All the cesiated metal surfaces have optimal coverages between 0.6 and 0.8 monolayers, in accordance with experimental data. Of all the cesiated metal surfaces that we have considered, tungsten has the lowest minimum work function, also in accordance with experiments.

  8. Vector Control Using Series Iron Loss Model of Induction, Motors and Power Loss Minimization

    OpenAIRE

    Kheldoun Aissa; Khodja Djalal Eddine

    2009-01-01

    The iron loss is a source of detuning in vector controlled induction motor drives if the classical rotor vector controller is used for decoupling. In fact, the field orientation will not be satisfied and the output torque will not track the reference torque mostly used by Loss Model Controllers (LMCs). In addition, this component of loss, among others, may be excessive if the vector controlled induction motor is driving light loads. In this paper, the series iron loss model ...

  9. Model-based decision making in early clinical development: minimizing the impact of a blood pressure adverse event.

    Science.gov (United States)

    Stroh, Mark; Addy, Carol; Wu, Yunhui; Stoch, S Aubrey; Pourkavoos, Nazaneen; Groff, Michelle; Xu, Yang; Wagner, John; Gottesdiener, Keith; Shadle, Craig; Wang, Hong; Manser, Kimberly; Winchell, Gregory A; Stone, Julie A

    2009-03-01

    We describe how modeling and simulation guided program decisions following a randomized placebo-controlled single-rising oral dose first-in-man trial of compound A where an undesired transient blood pressure (BP) elevation occurred in fasted healthy young adult males. We proposed a lumped-parameter pharmacokinetic-pharmacodynamic (PK/PD) model that captured important aspects of the BP homeostasis mechanism. Four conceptual units characterized the feedback PD model: a sinusoidal BP set point, an effect compartment, a linear effect model, and a system response. To explore approaches for minimizing the BP increase, we coupled the PD model to a modified PK model to guide oral controlled-release (CR) development. The proposed PK/PD model captured the central tendency of the observed data. The simulated BP response obtained with theoretical release rate profiles suggested some amelioration of the peak BP response with CR. This triggered subsequent CR formulation development; we used actual dissolution data from these candidate CR formulations in the PK/PD model to confirm a potential benefit in the peak BP response. Though this paradigm has yet to be tested in the clinic, our model-based approach provided a common rational framework to more fully utilize the limited available information for advancing the program.
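    The four conceptual units of the feedback PD model can be mimicked with a small ODE sketch. Everything below is an illustrative assumption rather than the published compound A model: a one-compartment oral PK curve drives an effect compartment, a linear effect perturbs a sinusoidal set point, and a first-order system response tracks it.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative parameters (not the published values)
        ka, ke, V = 1.2, 0.3, 50.0          # oral absorption, elimination (1/h), volume (L)
        dose = 100.0                         # mg
        ke0 = 0.5                            # effect-compartment equilibration rate (1/h)
        slope = 0.8                          # mmHg per (mg/L) linear drug effect
        kout = 2.0                           # system response rate (1/h)
        bp0, amp = 120.0, 5.0                # baseline and circadian amplitude (mmHg)

        def cp(t):
            """One-compartment oral PK concentration (mg/L)."""
            return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        def setpoint(t):
            """Sinusoidal circadian blood-pressure set point (mmHg)."""
            return bp0 + amp * np.sin(2 * np.pi * t / 24.0)

        def rhs(t, y):
            ce, bp = y
            dce = ke0 * (cp(t) - ce)                         # effect compartment
            dbp = kout * (setpoint(t) + slope * ce - bp)     # system response toward perturbed set point
            return [dce, dbp]

        sol = solve_ivp(rhs, (0.0, 48.0), [0.0, bp0], t_eval=np.linspace(0, 48, 481))
        print("peak BP rise (mmHg):", (sol.y[1] - setpoint(sol.t)).max())

    Lowering the absorption rate ka, as a crude stand-in for a controlled-release profile, flattens cp(t) and reduces the simulated peak BP excursion, which is the qualitative behaviour the model-based approach was used to explore.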

  10. Minimal Regge model for meson--baryon scattering: duality, SU(3) and phase-modified absorptive cuts

    International Nuclear Information System (INIS)

    Egli, S.E.

    1975-10-01

    A model is presented which incorporates economically all of the modifications to simple SU(3)-symmetric dual Regge pole theory which are required by existing data on 0⁻ 1/2⁺ → 0⁻ 1/2⁺ processes. The basic assumptions are no-exotics duality, minimally broken SU(3) symmetry, and absorptive Regge cuts phase-modified by the Ringland prescription. First it is described qualitatively how these assumptions suffice for the description of all measured reactions, and then the results of a detailed fit to 1987 data points are presented for 18 different reactions. (auth)

  11. Dynamical analysis for a scalar-tensor model with Gauss-Bonnet and non-minimal couplings

    Energy Technology Data Exchange (ETDEWEB)

    Granda, L.N.; Jimenez, D.F. [Universidad del Valle, Departamento de Fisica, Cali (Colombia)

    2017-10-15

    We study the autonomous system for a scalar-tensor model of dark energy with Gauss-Bonnet and non-minimal couplings. The critical points describe important stable asymptotic scenarios including quintessence, phantom and de Sitter attractor solutions. Two functional forms for the coupling functions and the scalar potential are considered: power-law and exponential functions of the scalar field. For the exponential functions the existence of stable quintessence, phantom or de Sitter solutions, allows for an asymptotic behavior where the effective Newtonian coupling becomes constant. The phantom solutions could be realized without appealing to ghost degrees of freedom. Transient inflationary and radiation-dominated phases can also be described. (orig.)

  12. High-precision predictions for the light CP-even Higgs boson mass of the minimal supersymmetric standard model.

    Science.gov (United States)

    Hahn, T; Heinemeyer, S; Hollik, W; Rzehak, H; Weiglein, G

    2014-04-11

    For the interpretation of the signal discovered in the Higgs searches at the LHC it will be crucial in particular to discriminate between the minimal Higgs sector realized in the standard model (SM) and its most commonly studied extension, the minimal supersymmetric standard model (MSSM). The measured mass value, having already reached the level of a precision observable with an experimental accuracy of about 500 MeV, plays an important role in this context. In the MSSM the mass of the light CP-even Higgs boson, Mh, can directly be predicted from the other parameters of the model. The accuracy of this prediction should at least match the one of the experimental result. The relatively high mass value of about 126 GeV has led to many investigations where the scalar top quarks are in the multi-TeV range. We improve the prediction for Mh in the MSSM by combining the existing fixed-order result, comprising the full one-loop and leading and subleading two-loop corrections, with a resummation of the leading and subleading logarithmic contributions from the scalar top sector to all orders. In this way for the first time a high-precision prediction for the mass of the light CP-even Higgs boson in the MSSM is possible all the way up to the multi-TeV region of the relevant supersymmetric particles. The results are included in the code FEYNHIGGS.

  13. Application of the Oral Minimal Model to Korean Subjects with Normal Glucose Tolerance and Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Min Hyuk Lim

    2016-06-01

    Full Text Available Background: The oral minimal model is a simple, useful tool for the assessment of β-cell function and insulin sensitivity across the spectrum of glucose tolerance, including normal glucose tolerance (NGT), prediabetes, and type 2 diabetes mellitus (T2DM) in humans. Methods: Plasma glucose, insulin, and C-peptide levels were measured during a 180-minute, 75-g oral glucose tolerance test in 24 Korean subjects with NGT (n=10) and T2DM (n=14). The parameters in the computational model were estimated, and the indexes for insulin sensitivity and β-cell function were compared between the NGT and T2DM groups. Results: The insulin sensitivity index was lower in the T2DM group than the NGT group. The basal index of β-cell responsivity, basal hepatic insulin extraction ratio, and post-glucose challenge hepatic insulin extraction ratio were not different between the NGT and T2DM groups. The dynamic, static, and total β-cell responsivity indexes were significantly lower in the T2DM group than the NGT group. The dynamic, static, and total disposition indexes were also significantly lower in the T2DM group than the NGT group. Conclusion: The oral minimal model can be reproducibly applied to evaluate β-cell function and insulin sensitivity in Koreans.

  14. MINIMIZING THE PREPARATION TIME OF A TUBES MACHINE: EXACT SOLUTION AND HEURISTICS

    Directory of Open Access Journals (Sweden)

    Robinson S.V. Hoto

    Full Text Available ABSTRACT In this paper we optimize the preparation time of a tubes machine. Tubes are hard tubes made by gluing strips of paper that are packed on paper reels, and some reels may be reused between the production of one tube and another. We present a mathematical model for the minimization of reel changes and movements, as well as implementations of the Nearest Neighbor heuristic, an improvement of it (Best Nearest Neighbor), refinements of the Best Nearest Neighbor heuristic, and a permutation heuristic called Best Configuration, using the IDE (integrated development environment) WxDev C++. The results obtained by simulation improve on the solution used by the company.
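    The Nearest Neighbor idea used here can be sketched in a few lines of Python; the tube-to-reel assignments and the distance measure (reels unloaded plus reels loaded between consecutive tubes) are illustrative assumptions, not the company's data.

        def reel_changes(current_reels, next_reels):
            """Reels to unload plus reels to load when switching tube types."""
            return len(current_reels - next_reels) + len(next_reels - current_reels)

        def nearest_neighbour_sequence(tubes):
            """Greedy sequencing: always produce next the tube sharing most reels with the current one."""
            remaining = dict(tubes)
            order = [next(iter(remaining))]
            current = remaining.pop(order[0])
            changes = 0
            while remaining:
                nxt = min(remaining, key=lambda t: reel_changes(current, remaining[t]))
                changes += reel_changes(current, remaining[nxt])
                current = remaining.pop(nxt)
                order.append(nxt)
            return order, changes

        # Hypothetical tubes and the paper-strip reels each one needs.
        tubes = {
            "T1": {"A", "B", "C"},
            "T2": {"B", "C", "D"},
            "T3": {"D", "E"},
            "T4": {"A", "C", "E"},
        }
        print(nearest_neighbour_sequence(tubes))

    The Best Nearest Neighbor and Best Configuration heuristics described in the paper refine this greedy construction by trying different starting tubes and permutations of the sequence.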

  15. Ranking periodic ordering models on the basis of minimizing total inventory cost

    Directory of Open Access Journals (Sweden)

    Mohammadali Keramati

    2015-06-01

    Full Text Available This paper aims to provide proper policies for inventory under uncertain conditions by comparing different inventory policies. To review the efficiency of these algorithms it is necessary to specify the area in which each of them is applied. Therefore, each of the models has been reviewed under different forms of retailing and they are ranked in terms of their expenses. Given the high value of inventories and their impact on company costs, the ranking of the various models using the simulated annealing algorithm is presented, which indicates that the proposed model of this paper can perform better than the alternative ones. The results also indicate that the suggested algorithm could save from 4 to 29 percent of inventory costs.

  16. Solvable lattice models with minimal and nonunitary critical behaviour in two dimensions

    International Nuclear Information System (INIS)

    Riggs, H.; Chicago Univ., IL

    1989-01-01

    The exact local height probabilities found by Forrester and Baxter for a series of solvable lattice models in two dimensions are written in terms of nonunitary Virasoro characters and modifications of unitary A_1^(1) affine Lie algebra characters directly related to nonunitary but rational-level A_1^(1) characters. The relation between these results and a rational-level GKO decomposition is given. The off-critical lattice origin of the Virasoro characters and the role of the embedding diagram null vectors in the CTM eigenspace is described. Suggestions for the definition of rational and nonunitary models corresponding to arbitrary G/H cosets are given. (orig.)

  17. ARRA: Reconfiguring Power Systems to Minimize Cascading Failures - Models and Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Dobson, Ian [Iowa State University; Hiskens, Ian [University of Michigan; Linderoth, Jeffrey [University of Wisconsin-Madison; Wright, Stephen [University of Wisconsin-Madison

    2013-12-16

    Building on models of electrical power systems, and on powerful mathematical techniques including optimization, model predictive control, and simulation, this project investigated important issues related to the stable operation of power grids. A topic of particular focus was cascading failures of the power grid: simulation, quantification, mitigation, and control. We also analyzed the vulnerability of networks to component failures, and the design of networks that are responsive to and robust to such failures. Numerous other related topics were investigated, including energy hubs and cascading stall of induction machines.

  18. The behaviour of random forest permutation-based variable importance measures under predictor correlation.

    Science.gov (United States)

    Nicodemus, Kristin K; Malley, James D; Strobl, Carolin; Ziegler, Andreas

    2010-02-27

    Random forests (RF) have been increasingly used in applications such as genome-wide association and microarray studies where predictor correlation is frequently observed. Recent works on permutation-based variable importance measures (VIMs) used in RF have come to apparently contradictory conclusions. We present an extended simulation study to synthesize results. In the case when both predictor correlation was present and predictors were associated with the outcome (HA), the unconditional RF VIM attributed a higher share of importance to correlated predictors, while under the null hypothesis that no predictors are associated with the outcome (H0) the unconditional RF VIM was unbiased. Conditional VIMs showed a decrease in VIM values for correlated predictors versus the unconditional VIMs under HA and was unbiased under H0. Scaled VIMs were clearly biased under HA and H0. Unconditional unscaled VIMs are a computationally tractable choice for large datasets and are unbiased under the null hypothesis. Whether the observed increased VIMs for correlated predictors may be considered a "bias" - because they do not directly reflect the coefficients in the generating model - or if it is a beneficial attribute of these VIMs is dependent on the application. For example, in genetic association studies, where correlation between markers may help to localize the functionally relevant variant, the increased importance of correlated predictors may be an advantage. On the other hand, we show examples where this increased importance may result in spurious signals.
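    The unconditional permutation-based VIM behaviour described above is easy to reproduce with scikit-learn; the simulated regression setting below (x2 correlated with x1 but with no direct effect on y) is an illustration, not the study's simulation design.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(0)
        n = 500

        # x1 and x2 are strongly correlated; only x1 (and x3) influence the outcome.
        x1 = rng.normal(size=n)
        x2 = 0.9 * x1 + 0.45 * rng.normal(size=n)
        x3 = rng.normal(size=n)
        x4 = rng.normal(size=n)                      # pure noise predictor
        X = np.column_stack([x1, x2, x3, x4])
        y = 2.0 * x1 + 1.0 * x3 + rng.normal(size=n)

        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
        imp = permutation_importance(rf, X, y, n_repeats=30, random_state=0)
        for name, value in zip(["x1", "x2 (correlated, null)", "x3", "x4 (noise)"], imp.importances_mean):
            print(f"{name:25s} {value:.3f}")

    In runs of this kind the correlated null predictor x2 typically receives a clearly non-zero importance while the independent noise predictor x4 stays near zero, mirroring the behaviour under HA summarized above.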

  19. Mass textures and wolfenstein parameters from breaking the flavour permutational symmetry

    Energy Technology Data Exchange (ETDEWEB)

    Mondragon, A; Rivera, T. [Instituto de Fisica, Universidad Nacional Autonoma de Mexico,Mexico D.F. (Mexico); Rodriguez Jauregui, E. [Deutsches Elekronen-Synchrotron, Theory Group, Hamburg (Germany)

    2001-12-01

    We will give an overview of recent progress in the phenomenological study of quark mass matrices, quark flavour mixings and CP-violation with emphasis on the possibility of an underlying discrete, flavour permutational symmetry and its breaking, from which realistic models of mass generation could be built. The quark mixing angles and CP-violating phase, as well as the Wolfenstein parameters, are given in terms of four quark mass ratios and only two parameters (Z^{1/2}, φ) characterizing the symmetry breaking pattern. Excellent agreement with all current experimental data is found.

  20. Minimal hidden sector models for CoGeNT/DAMA events

    International Nuclear Information System (INIS)

    Cline, James M.; Frey, Andrew R.

    2011-01-01

    Motivated by recent attempts to reconcile hints of direct dark matter detection by the CoGeNT and DAMA experiments, we construct simple particle physics models that can accommodate the constraints. We point out challenges for building reasonable models and identify the most promising scenarios for getting isospin violation and inelasticity, as indicated by some phenomenological studies. If inelastic scattering is demanded, we need two new light gauge bosons, one of which kinetically mixes with the standard model hypercharge and has mass <2 GeV, and another which couples to baryon number and has mass 6.8±(0.1/0.2) GeV. Their interference gives the desired amount of isospin violation. The dark matter is nearly Dirac, but with small Majorana masses induced by spontaneous symmetry breaking, so that the gauge boson couplings become exactly off-diagonal in the mass basis, and the small mass splitting needed for inelasticity is simultaneously produced. If only elastic scattering is demanded, then an alternative model, with interference between the kinetically mixed gauge boson and a hidden sector scalar Higgs, is adequate to give the required isospin violation. In both cases, the light kinetically mixed gauge boson is in the range of interest for currently running fixed target experiments.

  1. A computational model for determining the minimal cost expansion alternatives in transmission systems planning

    International Nuclear Information System (INIS)

    Pinto, L.M.V.G.; Pereira, M.V.F.; Nunes, A.

    1989-01-01

    A computational model for determining an economical transmission expansion plan, based on decomposition techniques, is presented. The algorithm was applied to the Brazilian South System and was able to find an optimal solution with low computational resources. Some extensions of this methodology are being investigated: a probabilistic version and expansion under financial restrictions. (C.G.C.). 4 refs, 7 figs

  2. Minimizing bias in biomass allometry: Model selection and log transformation of data

    Science.gov (United States)

    Joseph Mascaro; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer

    2011-01-01

    Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the traditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models…
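    The statistical point about back-transformation can be seen in a short simulation; the allometric coefficients and error level below are made up. Exponentiating a fitted log-log regression estimates the median rather than the mean biomass, and a Baskerville-type correction factor exp(MSE/2) largely removes the resulting underestimate.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulate biomass M = a * D^b with multiplicative log-normal error.
        a, b, sigma = 0.1, 2.4, 0.35
        D = rng.uniform(5, 60, size=400)                       # stem diameters (cm)
        M = a * D**b * np.exp(rng.normal(0.0, sigma, size=D.size))

        # Log-log linear regression.
        slope, intercept = np.polyfit(np.log(D), np.log(M), 1)
        naive = np.exp(intercept) * D**slope                   # naive back-transformation
        resid = np.log(M) - (intercept + slope * np.log(D))
        cf = np.exp(resid.var(ddof=2) / 2.0)                   # Baskerville-type correction factor
        corrected = cf * naive

        print("true total biomass  :", M.sum().round(1))
        print("naive back-transform:", naive.sum().round(1))
        print("with correction     :", corrected.sum().round(1))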

  3. New tools for characterizing swarming systems: A comparison of minimal models

    Science.gov (United States)

    Huepe, Cristián; Aldana, Maximino

    2008-05-01

    We compare three simple models that reproduce qualitatively the emergent swarming behavior of bird flocks, fish schools, and other groups of self-propelled agents by using a new set of diagnostic tools related to the agents’ spatial distribution. Two of these correspond in fact to different implementations of the same model, which had been previously confused in the literature. All models appear to undergo a very similar order-to-disorder phase transition as the noise level is increased if we only compare the standard order parameter, which measures the degree of agent alignment. When considering our novel quantities, however, their properties are clearly distinguished, unveiling previously unreported qualitative characteristics that help determine which model best captures the main features of realistic swarms. Additionally, we analyze the agent clustering in space, finding that the distribution of cluster sizes is typically exponential at high noise, and approaches a power-law as the noise level is reduced. This trend is sometimes reversed at noise levels close to the phase transition, suggesting a non-trivial critical behavior that could be verified experimentally. Finally, we study a bi-stable regime that develops under certain conditions in large systems. By computing the probability distributions of our new quantities, we distinguish the properties of each of the coexisting metastable states. Our study suggests new experimental analyses that could be carried out to characterize real biological swarms.

  4. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost during operation. Based on the assumption that vibration directly relates to operation reliability and the degree of wear, operation reliability can be expressed as the normalized vibration level. The characteristic of the vibration with respect to the operating point was studied, and it can be concluded that the idealized flow-versus-vibration plot has a distinct bathtub shape. There is a narrow sweet spot (80 to 100 percent of BEP) in this shape where low vibration levels are obtained, and the vibration also follows a similar square law with rotation speed in the absence of resonance phenomena. Operation reliability can then be modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. In contrast with the traditional method, the results show that the new model corrects the result produced by the traditional one and makes the pump operate at low vibration, so that operation reliability increases and the maintenance cost decreases.

  5. Recent Result from E821 Experiment on Muon g-2 and Unconstrained Minimal Supersymemtric Standard Model

    CERN Document Server

    Komine, S; Yamaguchi, M; Komine, Shinji; Moroi, Takeo; Yamaguchi, Masahiro

    2001-01-01

    Recently, the E821 experiment at the Brookhaven National Laboratory announced the latest result of their muon g-2 measurement, which is about 2.6σ away from the standard model prediction. Taking this result seriously, we examine the possibility of explaining this discrepancy by the supersymmetric contribution. Our analysis is performed in the framework of the unconstrained supersymmetric standard model, which has seven free parameters relevant to the muon g-2. We found that, in the case of large tan β, sparticle masses are allowed to be large in the region where the SUSY contribution to the muon g-2 is large enough, and hence the conventional SUSY search may fail even at the LHC. On the contrary, to explain the discrepancy in the case of small tan β, we found that (i) sleptons and SU(2)_L gauginos should be light, and (ii) the negative search for the Higgs boson severely constrains the model in the framework of the mSUGRA and gauge-mediated models.

  6. Taxonomic minimalism.

    Science.gov (United States)

    Beattle, A J; Oliver, I

    1994-12-01

    Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

  7. A model for flexible tools used in minimally invasive medical virtual environments.

    Science.gov (United States)

    Soler, Francisco; Luzon, M Victoria; Pop, Serban R; Hughes, Chris J; John, Nigel W; Torres, Juan Carlos

    2011-01-01

    Within the limits of current technology, many applications of a virtual environment will trade off accuracy for speed. This is not an acceptable compromise in a medical training application where both are essential. Efficient algorithms must therefore be developed. The purpose of this project is the development and validation of a novel physics-based real time tool manipulation model, which is easy to integrate into any medical virtual environment that requires support for the insertion of long flexible tools into complex geometries. This encompasses medical specialities such as vascular interventional radiology, endoscopy, and laparoscopy, where training, prototyping of new instruments/tools and mission rehearsal can all be facilitated by using an immersive medical virtual environment. Our model recognises and accurately uses patient-specific data and adapts to the geometrical complexity of the vessel in real time.

  8. Minimizing the wintertime low bias of Northern Hemisphere carbon monoxide in global model simulations

    Science.gov (United States)

    Stein, Olaf; Schultz, Martin G.; Bouarar, Idir; Clark, Hannah; Huijnen, Vincent; Gaudel, Audrey; George, Maya; Clerbaux, Cathy

    2015-04-01

    Carbon monoxide (CO) is a product of incomplete combustion and is also produced from oxidation of volatile organic compounds (VOC) in the atmosphere. It is of interest as an indirect greenhouse gas and an air pollutant causing health effects and is thus subject to emission restrictions. CO acts as a major sink for the OH radical and as a precursor for tropospheric ozone and affects the oxidizing capacity of the atmosphere as well as regional air quality. Despite the developments in the global modelling of chemistry and of the parameterization of the physical processes, CO concentrations remain underestimated during NH winter by most state-of-the-art chemical transport models. The resulting model bias can in principle originate from either an underestimation of CO sources or an overestimation of its sinks. We address both the role of sources and sinks with a series of MOZART chemistry transport model sensitivity simulations for the year 2008 and compare our results to observational data from ground-based stations, satellite observations, and from MOZAIC tropospheric profile measurements on passenger aircraft. Our base case simulation using the MACCity emission inventory (Granier et al. 2011) underestimates the near-surface Northern Hemispheric CO mixing ratios by more than 20 ppb from December to April with a maximum bias of 40 ppb in January. The bias is strongest for the European region (up to 75 ppb in January). From our sensitivity studies the mismatch between observed and modelled atmospheric CO concentrations can be explained by a combination of the following emission inventory shortcomings: (i) missing anthropogenic wintertime CO emissions from traffic or other combustion processes, (ii) missing anthropogenic VOC emissions, (iii) an exaggerated downward trend in the RCP8.5 scenario underlying the MACCity inventory, (iv) a lack of knowledge about the seasonality of emissions. Deficiencies in the parameterization of the dry deposition velocities can also lead to…

  9. COMPUTATIONAL MODELS USED FOR MINIMIZING THE NEGATIVE IMPACT OF ENERGY ON THE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Oprea D.

    2012-04-01

    Full Text Available Optimizing energy systems is a problem that has been studied extensively by scientists for many years. This problem can be studied from different points of view and using different computer programs. This work characterizes one of the calculation methods used in Europe for modelling and optimizing power systems; the method is based on reducing the impact of the energy system on the environment. The computer program used and characterized in this article is GEMIS.

  10. The ring structure of chiral operators for minimal models coupled to 2D gravity

    International Nuclear Information System (INIS)

    Sarmadi, M.H.

    1992-09-01

    The BRST cohomology ring for (p,q) models coupled to gravity is discussed. In addition to the generators of the ghost number zero ring, the existence of a generator of ghost number −1 and its inverse is proved and used to construct the entire ring. Some comments are made regarding the algebra of the vector fields on the ring and the supersymmetric extension. (author). 13 refs

  11. Impaired brain glymphatic flow in a rodent model of chronic liver disease and minimal hepatic encephalopathy

    OpenAIRE

    Lythgoe, Mark; Hosford, Patrick; Arias, Natalia; Gallego-Duran, Rocio; Hadjihambi, Anna; Jalan, Rajiv; Gourine, Alexander; Habtesion, Abeba; Davies, Nathan; Harrison, Ian

    2017-01-01

    Neuronal function is exquisitely sensitive to alterations in extracellular environment. In patients with hepatic encephalopathy (HE), accumulation of metabolic waste products and noxious substances in the interstitial fluid of the brain may contribute to neuronal dysfunction and cognitive impairment. In a rat model of chronic liver disease, we used an emerging dynamic contrast-enhanced MRI technique to assess the efficacy of the glymphatic system, which facilitates clearance of solutes from t...

  12. Determinants of [13N]ammonia kinetics in hepatic PET experiments: a minimal recirculatory model

    International Nuclear Information System (INIS)

    Weiss, Michael; Roelsgaard, Klaus; Bender, Dirk; Keiding, Susanne

    2002-01-01

    The aim of this study was the development of a modelling approach for the analysis of the systemic kinetics of the tracer nitrogen-13 ammonia administered for dynamic liver scanning. The radioactive half-life of this tracer is 9.8 min, which limits the time span in which data are available in a positron emission tomography experimental setting. A circulatory pharmacokinetic model was applied to the metabolism of ammonia in anaesthetised pigs, which incorporated data from serial measurements of [13N]ammonia and [13N]metabolite activity in arterial and portal venous blood together with blood flow rates through the portal vein and through the hepatic artery obtained over 20 min after intravenous injection of [13N]ammonia. Model analysis showed that up to 20 min after injection the time course of [13N]ammonia concentration in arterial blood is primarily determined by distribution kinetics (steady-state volume of distribution 1,856±531 ml kg⁻¹). Simultaneous fitting of arterial ammonia and metabolite blood concentrations allowed for estimation of the hepatic [13N]ammonia clearance (10.25±1.84 ml min⁻¹ kg⁻¹), which accounted for the formation of the circulating metabolites. (orig.)

  13. Incorrect modeling of the failure process of minimally repaired systems under random conditions: The effect on the maintenance costs

    International Nuclear Information System (INIS)

    Pulcini, Gianpaolo

    2015-01-01

    This note investigates the effect of incorrectly modeling the failure process of minimally repaired systems that operate under random environmental conditions on the costs of a periodic replacement maintenance policy. The motivation of this paper is a recently published paper, where a wrong formulation of the expected cost per unit time under a periodic replacement policy is obtained. This wrong formulation is due to the incorrect assumption that the intensity function of minimally repaired systems that operate under random conditions has the same functional form as the failure rate of the first failure time. This produced an incorrect optimization of the replacement maintenance. Thus, in this note the conceptual differences between the intensity function and the failure rate of the first failure time are first highlighted. Then, the correct expressions of the expected cost and of the optimal replacement period are provided. Finally, a real application is used to measure how severe the economic consequences caused by the incorrect modeling of the failure process can be.
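    For a minimally repaired system replaced every T time units, the standard cost criterion is built from the cumulative intensity of the failure process, not from the failure rate of the first failure time — which is exactly the distinction stressed above. A minimal sketch with a power-law (Weibull-type) intensity and invented costs:

        from scipy.optimize import minimize_scalar

        c_replace, c_repair = 500.0, 80.0        # illustrative costs
        eta, beta = 1000.0, 2.2                  # power-law (Weibull-type) intensity parameters

        def expected_repairs(T):
            """Cumulative intensity: expected number of minimal repairs in (0, T]."""
            return (T / eta) ** beta

        def cost_rate(T):
            """Expected cost per unit time under periodic replacement with minimal repair."""
            return (c_replace + c_repair * expected_repairs(T)) / T

        res = minimize_scalar(cost_rate, bounds=(1.0, 10000.0), method="bounded")
        T_numeric = res.x
        T_closed = eta * (c_replace / (c_repair * (beta - 1.0))) ** (1.0 / beta)
        print(f"optimal period: numeric {T_numeric:.1f}, closed form {T_closed:.1f}")

    If the cumulative intensity is replaced by the cumulative hazard of the first failure time, as in the criticized formulation, both the cost rate and the resulting optimal period are distorted, which is the economic consequence quantified in the note.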

  14. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States)

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.

  15. Permutation entropy and statistical complexity in characterising low-aspect-ratio reversed-field pinch plasma

    International Nuclear Information System (INIS)

    Onchi, T; Fujisawa, A; Sanpei, A; Himura, H; Masamune, S

    2017-01-01

    Permutation entropy and statistical complexity are measures for complex time series. The Bandt–Pompe methodology evaluates the probability distribution of ordinal patterns (permutations) in a time series. The method is robust and effective for quantifying the information content of time series data. Statistical complexity is the product of the Jensen–Shannon divergence and the permutation entropy. These physical parameters are introduced to analyse time series of emission and magnetic fluctuations in low-aspect-ratio reversed-field pinch (RFP) plasma. The observed time-series data aggregate in a region of the plane, the so-called C–H plane, determined by entropy versus complexity. The C–H plane is a representation space used for distinguishing periodic, chaotic, stochastic and noisy processes in time series data. The characteristics of the emissions and magnetic fluctuations change under different RFP-plasma conditions. The statistical complexities of soft x-ray emissions and magnetic fluctuations depend on the relationships between reversal and pinch parameters. (paper)
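    A compact implementation of the Bandt–Pompe ordinal-pattern distribution, the normalized permutation entropy and a Jensen–Shannon-based statistical complexity is sketched below; the embedding order and the two test signals are arbitrary choices, not the RFP data.

        import math
        import itertools
        import numpy as np

        def ordinal_distribution(x, order=4):
            """Relative frequencies of Bandt-Pompe ordinal patterns of the given order."""
            patterns = {p: 0 for p in itertools.permutations(range(order))}
            for i in range(len(x) - order + 1):
                patterns[tuple(np.argsort(x[i:i + order]))] += 1
            counts = np.array(list(patterns.values()), dtype=float)
            return counts / counts.sum()

        def shannon(p):
            p = p[p > 0]
            return -(p * np.log(p)).sum()

        def permutation_entropy(p):
            return shannon(p) / math.log(len(p))          # normalized to [0, 1]

        def statistical_complexity(p):
            """Jensen-Shannon divergence to the uniform distribution times the entropy,
            normalized by the maximum possible divergence."""
            n = len(p)
            u = np.full(n, 1.0 / n)
            js = shannon((p + u) / 2.0) - 0.5 * shannon(p) - 0.5 * shannon(u)
            js_max = -0.5 * ((n + 1.0) / n * math.log(n + 1) - 2.0 * math.log(2 * n) + math.log(n))
            return (js / js_max) * permutation_entropy(p)

        rng = np.random.default_rng(0)
        noise = rng.normal(size=5000)
        chaos = np.empty(5000); chaos[0] = 0.4
        for i in range(1, 5000):
            chaos[i] = 4.0 * chaos[i - 1] * (1.0 - chaos[i - 1])   # logistic map, fully chaotic

        for name, x in [("white noise", noise), ("logistic map", chaos)]:
            p = ordinal_distribution(x, order=4)
            print(name, round(permutation_entropy(p), 3), round(statistical_complexity(p), 3))

    White noise lands near the high-entropy, low-complexity corner of the C–H plane, while the chaotic signal has lower entropy and higher complexity, which is the kind of separation exploited for the RFP fluctuation data.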

  16. Constrained minimization problems for the reproduction number in meta-population models.

    Science.gov (United States)

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number (R0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9 ), reported an increase of 70% in R0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (Rv), which consists of partial derivatives of Rv with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015.  https://doi.org/10.1016/j.jtbi.2015.09.006 ; Math Biosci 287:93-104, 2017.  https://doi.org/10.1016/j.mbs.2016.09.013 ). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions in the case of [Formula: see text] sub-populations are obtained, and the bounds for optimal solutions are derived for [Formula: see text] sub-populations. This is done for general mixing functions and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both R0 and Rv are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
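    The gradient-based reasoning can be illustrated numerically with a generic two-group next-generation matrix under proportionate mixing; the parameter values are invented and the construction is a textbook one rather than the authors' exact formulation.

        import numpy as np

        # Illustrative two-group parameters: population sizes, activity (contact) rates,
        # transmission probability per contact and recovery rate.
        N = np.array([6e5, 4e5])
        a = np.array([18.0, 6.0])        # contacts per unit time
        beta, gamma = 0.03, 0.2

        def effective_R(p):
            """Spectral radius of the next-generation matrix with proportionate mixing,
            where p[i] is the immune proportion of group i."""
            S = (1.0 - p) * N
            denom = (a * N).sum()
            # K[i, j]: infections in group i produced by one infective of group j.
            K = np.outer(a * S, a) * beta / (gamma * denom)
            return np.abs(np.linalg.eigvals(K)).max()

        def gradient(p, h=1e-6):
            g = np.zeros_like(p)
            for i in range(len(p)):
                e = np.zeros_like(p); e[i] = h
                g[i] = (effective_R(p + e) - effective_R(p - e)) / (2 * h)
            return g

        p = np.array([0.3, 0.3])
        print("R_v =", round(effective_R(p), 3))
        print("gradient wrt immune proportions:", gradient(p).round(3))

    Following the negative of this gradient indicates which group's coverage reduces Rv fastest; the feasibility issue raised in the abstract arises when the resulting coverages leave the unit interval.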

  17. Minimally invasive lung volume reduction treated with bronchi occlusion emphysema model

    International Nuclear Information System (INIS)

    Zhou Dayong; Shen Liming; Shen Junkang; Jin Yiqi; Chen Lei; Huang Xianchen

    2007-01-01

    Objective: To evaluate the efficacy and feasibility of the coil-and-glue method for the reduction of lung volume in a rabbit emphysema model. Methods: Sixteen rabbits with model emphysema were divided into the occlusion group (n=10), in which both anterior bronchi were occluded using the coil-and-glue method, and the control group (n=6). The maximal static pressure of the airway (Pmax), peak expiratory flow (PEF), end-expiratory volume (EEV) and pressure of oxygen (PO₂) were measured at ante-emphysema, post-emphysema, 1 week and 4 weeks after occlusion, respectively. The expectoration (or migration) of the coil and collapse of the lung were also investigated. Results: Pmax was (20.0±1.3) and (17.1±1.4) cm H₂O (1 cm H₂O=0.098 kPa) in the occlusion group at ante-emphysema and post-emphysema, respectively. Pmax was (19.2±1.4) cm H₂O in the occlusion group 1 week after the occlusion, while (17.1±1.5) cm H₂O in the control group (F=6.68, P…). Pmax was (19.2±1.4) cm H₂O in the occlusion group, while (16.6±1.2) cm H₂O in the control group (F=12.10, P…). Pmax at 1 week and 4 weeks after occlusion was higher than at post-emphysema (P<0.01, respectively); EEV at post-emphysema was higher than that at ante-emphysema (P<0.01). Conclusion: The coil-and-glue occlusion method for lung volume reduction in a rabbit emphysema model can improve pulmonary function, and the improvement can be relatively long lasting. (authors)

  18. Minimal spin-3/2 dark matter in a simple s-channel model

    Energy Technology Data Exchange (ETDEWEB)

    Khojali, Mohammed Omer; Goyal, Ashok; Kumar, Mukesh; Cornell, Alan S. [University of the Witwatersrand, Wits, National Institute for Theoretical Physics, School of Physics and Mandelstam Institute for Theoretical Physics, Johannesburg (South Africa)

    2017-01-15

    We consider a spin-3/2 fermionic dark matter candidate (DM) interacting with Standard Model fermions through a vector mediator in the s-channel. We find that for pure vector couplings almost the entire parameter space of the DM and mediator mass consistent with the observed relic density is ruled out by the direct detection observations through DM-nucleon elastic scattering cross sections. In contrast, for pure axial-vector coupling, the most stringent constraints are obtained from monojet searches at the Large Hadron Collider. (orig.)

  19. NDPA: A generalized efficient parallel in-place N-Dimensional Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Muhammad Elsayed Ali

    2015-09-01

    Full Text Available N-dimensional transpose/permutation is a very important operation in many large-scale data-intensive and scientific applications. These applications include, but are not limited to, the oil industry (i.e. seismic data processing), nuclear medicine, media production, digital signal processing and business intelligence. This paper proposes an efficient in-place N-dimensional permutation algorithm. The algorithm is based on a novel 3D transpose algorithm that was published recently. The proposed algorithm has been tested on 3D, 4D, 5D, 6D and 7D data sets as a proof of concept. This is the first contribution to break the dimension limitation of the base algorithm. The suggested algorithm exploits the idea of mixing both logical and physical permutations together. In the logical permutation, the address map is transposed for each data unit access. In the physical permutation, actual data elements are swapped. Both permutation levels exploit the fast on-chip memory bandwidth by transferring large amounts of data and allowing for fine-grain SIMD (Single Instruction, Multiple Data) operations. Thus, the performance is improved, as evident from the experimental results section. The algorithm is implemented on an NVidia GeForce GTS 250 GPU (Graphics Processing Unit) containing 128 cores. The rapid increase in GPU performance coupled with the recent and continuous improvements in its programmability proves that GPUs are the right choice for computationally demanding tasks. The use of GPUs is the second contribution, which reflects how well they fit high-performance tasks. The third contribution is improving the proposed algorithm's performance to its peak, as discussed in the results section.
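    The "logical permutation" step — transposing the address map instead of moving data — can be sketched with a generic stride computation; this is plain Python for illustration, not the authors' GPU kernel.

        import numpy as np

        def strides(shape):
            """Row-major strides, in elements."""
            s = [1] * len(shape)
            for i in range(len(shape) - 2, -1, -1):
                s[i] = s[i + 1] * shape[i + 1]
            return s

        def logical_permute_index(flat_out, shape, perm):
            """Map a flat index of the permuted array back to the flat index of the original
            buffer, so elements can be read (or swapped) without materializing the transpose."""
            out_shape = [shape[p] for p in perm]
            out_strides, in_strides = strides(out_shape), strides(shape)
            rem, in_index = flat_out, 0
            for axis, stride in enumerate(out_strides):
                coord, rem = divmod(rem, stride)
                in_index += coord * in_strides[perm[axis]]
            return in_index

        shape, perm = (2, 3, 4, 5), (3, 0, 2, 1)        # a 4-D example
        x = np.arange(np.prod(shape)).reshape(shape)
        ref = np.transpose(x, perm).ravel()
        mapped = [x.ravel()[logical_permute_index(i, shape, perm)] for i in range(x.size)]
        print(np.array_equal(ref, np.array(mapped)))     # True: address-map transpose matches numpy

    The physical permutation stage described in the paper then swaps the actual elements in coarse blocks so that subsequent accesses become contiguous, which is where the on-chip memory bandwidth of the GPU is exploited.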

  20. Fresh Kills leachate treatment and minimization study: Volume 2, Modeling, monitoring and evaluation. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Fillos, J.; Khanbilvardi, R.

    1993-09-01

    The New York City Department of Sanitation is developing a comprehensive landfill leachate management plan for the Fresh Kills landfill, located on the western shore of Staten Island, New York. The 3000-acre facility, owned and operated by the City of New York, has been developed into four distinct mounds that correspond to areas designated as Sections 1/9, 2/8, 3/4 and 6/7. In developing a comprehensive leachate management plan, estimating leachate flow rates is important for designing appropriate treatment alternatives to reduce the offsite migration that pollutes both surface water and groundwater resources. Estimating the leachate flow rates from Sections 1/9 and 6/7 was given priority, using an available model, hydrologic evaluation of landfill performance (HELP), and a new model, flow investigation for landfill leachate (FILL). The field-scale analysis of leachate flow included collection of leachate mound-level data from piezometers and monitoring wells installed on-site over a six-month period. From the leachate mound-head contours and flow gradients, leachate flow rates were computed using Darcy's law.

  1. A variation reduction allocation model for quality improvement to minimize investment and quality costs by considering suppliers’ learning curve

    Science.gov (United States)

    Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.

    2016-02-01

    Quality improvement must be performed in a company to maintain the competitiveness of its products in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components used in the assembly of a final product; hence quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate variance reduction is developed. Variation reduction is an important concept in quality improvement for both the manufacturer and the suppliers. To improve the quality of the suppliers' components, the manufacturer must invest part of its financial resources in the suppliers' learning processes. The objective function of the model minimizes the total cost, which consists of the investment cost and both internal and external quality costs. The learning curve determines how the suppliers' employees respond to the learning processes in reducing the variance of the component.

  2. Non-standard charged Higgs decay at the LHC in Next-to-Minimal Supersymmetric Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Priyotosh [Dipartimento di Matematica e Fisica “Ennio De Giorgi”, Università del Salento and INFN-Lecce,Via Arnesano, 73100 Lecce (Italy); Huitu, Katri [Department of Physics, and Helsinki Institute of Physics,P.O.B 64 (Gustaf Hällströmin katu 2), FI-00014 University of Helsinki (Finland); Niyogi, Saurabh [The Institute of Mathematical Sciences,CIT Campus, Chennai (India)

    2016-07-04

    We consider the next-to-minimal supersymmetric standard model (NMSSM), which has a gauge singlet superfield. The scale-invariant superpotential contains no mass terms, and the whole Lagrangian has an additional Z_3 symmetry. This model can have a light scalar and/or pseudoscalar allowed by the recent data from the LHC and the older data from LEP. We investigate the situation where a relatively light charged Higgs can decay to such a singlet-like pseudoscalar and a W± boson, giving rise to a final state containing τ and/or b-jets and lepton(s). Such decays evade the recent bounds on the charged Higgs from the LHC and, according to our PYTHIA-FastJet based simulation, can be probed with 10 fb⁻¹ at LHC center-of-mass energies of 13 and 14 TeV.

  3. Radiative Corrections to e+e-→ Zh at Future Higgs Factory in the Minimal Dilaton Model

    International Nuclear Information System (INIS)

    Heng Zhao-Xia; Li Dong-Wei; Zhou Hai-Jing

    2015-01-01

    The minimal dilaton model (MDM) extends the Standard Model by one singlet scalar, called the dilaton, and one top quark partner, called t'. In this work we investigate the t'-induced radiative correction to the Higgs-strahlung production process e+e− → Zh at a future Higgs factory. We first present the analytical calculations in detail and show how to handle the ultraviolet divergence. Then we calculate the correction numerically, considering the constraints from precision electroweak data. We find that, for sin θ_L = 0.2 and m_t' = 1200 GeV, the correction is 0.26% and 2.1% for √s = 240 GeV and 1 TeV, respectively, and a larger value can be achieved as sin θ_L increases. (physics of elementary particles and fields)

  4. The Integration of Production-Distribution on Newspapers Supply Chain for Cost Minimization using Analytic Models: Case Study

    Science.gov (United States)

    Febriana Aqidawati, Era; Sutopo, Wahyudi; Hisjam, Muh.

    2018-03-01

    Newspapers are products with special characteristics: they are perishable, have a short window between production and distribution, carry zero inventory, and lose sales value as time passes. Generally, the production-distribution problem in the newspaper supply chain is to integrate production planning and distribution so as to minimize the total cost. The approach used in this article to solve the problem is an analytical model. Several parameters and constraints are considered in calculating the total cost of integrated newspaper production and distribution over the given time horizon. This model can be used by production and marketing managers as decision support in determining the optimal production and distribution quantities at minimum cost, so that the company's competitiveness can be increased.

  5. An enhanced model for minimizing fuel consumption under block-queuing in a drive-through service system

    Energy Technology Data Exchange (ETDEWEB)

    Reilly, C.H.; Berglin, J. [University of Central Florida, Orlando, FL (United States). Dept. of Industrial Engineering and Management Systems

    2004-05-01

    We present a new model for determining the optimal block-size under block-queuing in a simple, single-channel queue at a drive-through service facility. With block-queuing, a queue is partitioned into an active section and a passive section, where drivers are asked to turn off their engines until the active section clears. Our model prescribes a block-size, i.e., a maximum number of vehicles in the active section, which minimizes the expected amount of fuel consumed in the queue. It can assess the effects of the traffic intensity, the service-time variance, and the proportion of compliant drivers in the passive section on the optimal block-size and on fuel consumption in the queue. (author)

  6. All possible lightest supersymmetric particles in proton hexality violating minimal supergravity models and their signals at hadron colliders

    International Nuclear Information System (INIS)

    Grab, Sebastian

    2009-08-01

    The most widely studied supersymmetric scenario is the minimal supersymmetric standard model (MSSM) with more than a hundred free parameters. However, for detailed phenomenological studies, the minimal supergravity (mSUGRA) model, a restricted and well-motivated framework for the MSSM, is more convenient. In this model, lepton- and baryon-number violating interactions are suppressed by a discrete symmetry, R-parity or proton-hexality, to keep the proton stable. However, it is sufficient to forbid only lepton- or baryon-number violation. We thus extend mSUGRA models by adding a proton-hexality violating operator at the grand unification scale. This can change the supersymmetric spectrum, leading on the one hand to a sneutrino, smuon or squark as the lightest supersymmetric particle (LSP). On the other hand, a wide parameter region is reopened, where the scalar tau (stau) is the LSP. We investigate in detail the conditions leading to non-neutralino LSP scenarios. We take into account the restrictions from neutrino masses, the muon anomalous magnetic moment, b→sγ, and other precision measurements. We furthermore investigate existing restrictions from direct searches at LEP, the Tevatron, and the CERN p anti-p collider. It is vital to know the nature of the LSP, since supersymmetric particles normally cascade decay down to the LSP at collider experiments. We present typical LHC signatures for sneutrino LSP scenarios. Promising signatures are high-p_T muons and jets, like-sign muon events and detached vertices from long-lived taus. We also classify the stau LSP decays and describe their dependence on the mSUGRA parameters. We then exploit our results for resonant single slepton production at the LHC. We find novel signatures with like-sign muon and three- and four-muon final states. Finally, we perform a detailed analysis for single slepton production in association with a single top quark. We show that the signal can be distinguished from the background at the LHC.

  7. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    Science.gov (United States)

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm, which divides the whole scheduling problem into many subscheduling problems; the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with a certain probability in the pulse-emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem.
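    As a point of reference for the NEH building block mentioned in the abstract, the following minimal Python sketch (a generic NEH implementation with an invented toy processing-time matrix, not the authors' DBA code) computes the permutation flow shop makespan and builds a job sequence by NEH insertion:

    def makespan(seq, p):
        # p[j][m] is the processing time of job j on machine m (permutation flow shop).
        m = len(p[0])
        c = [0] * m                      # completion time of the last job on each machine
        for j in seq:
            c[0] += p[j][0]
            for k in range(1, m):
                c[k] = max(c[k], c[k - 1]) + p[j][k]
        return c[-1]

    def neh(p):
        # NEH: order jobs by decreasing total processing time, then insert each job
        # at the position of the partial sequence that minimizes the makespan.
        jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
        seq = []
        for j in jobs:
            _, best_pos = min((makespan(seq[:i] + [j] + seq[i:], p), i)
                              for i in range(len(seq) + 1))
            seq.insert(best_pos, j)
        return seq, makespan(seq, p)

    p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]   # 4 jobs x 3 machines (toy data)
    print(neh(p))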

  9. Self-tolerance in a minimal model of the idiotypic network.

    Science.gov (United States)

    Schulz, Robert; Werner, Benjamin; Behn, Ulrich

    2014-01-01

    We consider the problem of self-tolerance in the frame of a minimalistic model of the idiotypic network. A node of this network represents a population of B-lymphocytes of the same idiotype, which is encoded by a bit string. The links of the network connect nodes with (nearly) complementary strings. The population of a node survives if the number of occupied neighbors is not too small and not too large. There is an influx of lymphocytes with random idiotype from the bone marrow. Previous investigations have shown that this system evolves toward highly organized architectures, where the nodes can be classified into groups according to their statistical properties. The building principles of these architectures can be described analytically, and the statistical results of simulations agree very well with the results of a modular mean-field theory. In this paper, we present simulation results for the case that one or several nodes, playing the role of self, are permanently occupied. These self nodes influence their linked neighbors, the autoreactive clones, but are themselves not affected by idiotypic interactions. We observe that the group structure of the architecture is very similar to the case without self antigen, but organized such that the neighbors of the self are only weakly occupied, thus providing self-tolerance. We also treat this situation in mean-field theory, which gives results in good agreement with data from simulation. The model supports the view that autoreactive clones, which also occur naturally in healthy organisms, are controlled by anti-idiotypic interactions, and it could be helpful for understanding network aspects of autoimmune disorders.

  10. Transcutaneous Intraluminal Impedance Measurement for Minimally Invasive Monitoring of Gastric Motility: Validation in Acute Canine Models

    Directory of Open Access Journals (Sweden)

    Michael D. Poscente

    2014-01-01

    Full Text Available Transcutaneous intraluminal impedance measurement (TIIM) is a new method to cutaneously measure gastric contractions by assessing the attenuation dynamics of a small oscillating voltage emitted by a battery-powered ingestible capsule retained in the stomach. In the present study, we investigated whether TIIM can reliably assess gastric motility in acute canine models. Methods. Eight mongrel dogs were randomly divided into 2 groups: half received an active TIIM pill and half received an identically sized sham capsule. After 24-hour fasting and transoral administration of the pill (active or sham), two force transducers (FT) were sutured onto the antral serosa at laparotomy. After closure, three standard cutaneous electrodes were placed on the abdomen, registering the transluminally emitted voltage. Thirty-minute baseline recordings were followed by pharmacological induction of gastric contractions using neostigmine IV and another 30-minute recording. Normalized one-minute baseline and post-neostigmine gastric motility indices (GMIs) were calculated, and Pearson correlation coefficients (PCCs) between cutaneous and FT GMIs were obtained. Statistically significant GMI PCCs were seen in both baseline and post-neostigmine states. There were no significant GMI PCCs in the sham capsule test. Further chronic animal studies of this novel long-term gastric motility measurement technique are needed before testing it on humans.

  11. Minimal type-I seesaw model with maximally restricted texture zeros

    Science.gov (United States)

    Barreiros, D. M.; Felipe, R. G.; Joaquim, F. R.

    2018-06-01

    In the context of Standard Model (SM) extensions, the seesaw mechanism provides the most natural explanation for the smallness of neutrino masses. In this work we consider the most economical type-I seesaw realization, in which two right-handed neutrinos are added to the SM field content. For the sake of predictability, we impose the maximum number of texture zeros in the lepton Yukawa and mass matrices. All possible patterns are analyzed in the light of the most recent neutrino oscillation data, and predictions for leptonic CP violation are presented. We conclude that, in the charged-lepton mass basis, eight different texture combinations are compatible with neutrino data at 1σ, all of them for an inverted-hierarchical neutrino mass spectrum. Four of these cases predict a CP-violating Dirac phase close to 3π/2, which is around the current best-fit value from the global analysis of neutrino oscillation data. If one further reduces the number of free parameters by considering three equal elements in the Dirac neutrino Yukawa coupling matrix, several texture combinations are still compatible with data, but only at 3σ. For all viable textures, the baryon asymmetry of the Universe is computed in the context of thermal leptogenesis, assuming (mildly) hierarchical heavy Majorana neutrino masses M_1,2. It is shown that the flavored regime is ruled out, while the unflavored one requires M_1 ∼ 10^14 GeV.

  12. A minimal model of the Atlantic Multidecadal Variability: its genesis and predictability

    Energy Technology Data Exchange (ETDEWEB)

    Ou, Hsien-Wang [Lamont-Doherty Earth Observatory of Columbia University, Department of Earth and Environmental Sciences, Palisades, NY (United States)

    2012-02-15

    Through a box model of the subpolar North Atlantic, we examine the genesis and predictability of the Atlantic Multidecadal Variability (AMV), posited as a linear perturbation sustained by the stochastic atmosphere. Postulating a density-dependent thermohaline circulation (THC), the latter would strongly differentiate the thermal and saline damping, and facilitate a negative feedback between the two fields. This negative feedback preferentially suppresses the low-frequency thermal variance to render a broad multidecadal peak bounded by the thermal and saline damping time. We offer this "differential variance suppression" as an alternative paradigm of the AMV in place of the "damped oscillation" - the latter is generally not allowed by the deterministic dynamics and in any event bears no relation to the thermal peak. With the validated dynamics, we then assess the AMV predictability based on the relative entropy - a difference of the forecast and climatological probability distributions, which decays through both error growth and dynamical damping. Since the stochastic forcing is mainly in the surface heat flux, the thermal noise grows rapidly and, together with its climatological variance limited by the THC-aided thermal damping, strongly curtails the thermal predictability. The latter may be prolonged if the initial thermal and saline anomalies are of the same sign, but even rare events of less than 1% chance of occurrence yield a predictable time that is well short of a decade; we contend therefore that the AMV is in effect unpredictable. (orig.)

  13. Mechanisms of self-organization and finite size effects in a minimal agent based model

    International Nuclear Information System (INIS)

    Alfi, V; Cristelli, M; Pietronero, L; Zaccaria, A

    2009-01-01

    We present a detailed analysis of the self-organization phenomenon in which the stylized facts originate from finite size effects with respect to the number of agents considered and disappear in the limit of an infinite population. By introducing the possibility that agents can enter or leave the market depending on the behavior of the price, it is possible to show that the system self-organizes in a regime with a finite number of agents which corresponds to the stylized facts. The mechanism for entering or leaving the market is based on the idea that a too stable market is unappealing for traders, while the presence of price movements attracts agents to enter and speculate on the market. We show that this mechanism is also compatible with the idea that agents are scared by a noisy and risky market at shorter timescales. We also show that the mechanism for self-organization is robust with respect to variations of the exit/entry rules and that the attempt to trigger the system to self-organize in a region without stylized facts leads to an unrealistic dynamics. We study the self-organization in a specific agent based model but we believe that the basic ideas should be of general validity

  14. Minimizing EIT image artefacts from mesh variability in finite element models.

    Science.gov (United States)

    Adler, Andy; Lionheart, William R B

    2011-07-01

    Electrical impedance tomography (EIT) solves an inverse problem to estimate the conductivity distribution within a body from electrical stimulation and measurements at the body surface, where the inverse problem is based on a solution of Laplace's equation in the body. Most commonly, a finite element model (FEM) is used, largely because of its ability to describe irregular body shapes. In this paper, we show that simulated variations in the positions of internal nodes within a FEM can result in serious image artefacts in the reconstructed images. Such variations occur when designing FEM meshes to conform to conductivity targets, but the effects may also be seen in other applications of absolute and difference EIT. We explore the hypothesis that these artefacts result from changes in the projection of the anisotropic conductivity tensor onto the FEM system matrix, which introduces anisotropic components into the simulated voltages; these cannot be reconstructed onto an isotropic image and appear as artefacts. The magnitude of the anisotropic effect is analysed for a small regular FEM and shown to be proportional to the relative node movement as a fraction of element size. In order to address this problem, we show that it is possible to incorporate a FEM node movement component into the formulation of the inverse problem. These results suggest that it is important to consider artefacts due to FEM mesh geometry in EIT image reconstruction.

  15. Minimally-invasive Laser Ablation Inductively Coupled Plasma Mass Spectrometry analysis of model ancient copper alloys

    Energy Technology Data Exchange (ETDEWEB)

    Walaszek, Damian [University of Warsaw, Faculty of Chemistry, Biological and Chemical Research Centre, Żwirki i Wigury 101, 02-089 Warszawa (Poland); Laboratory for Analytical Chemistry, Swiss Federal Laboratories for Materials Science and Technology, Überlandstrasse 129, CH-8600 Dübendorf (Switzerland); Senn, Marianne; Wichser, Adrian [Laboratory for Analytical Chemistry, Swiss Federal Laboratories for Materials Science and Technology, Überlandstrasse 129, CH-8600 Dübendorf (Switzerland); Faller, Markus [Laboratory for Jointing Technology and Corrosion, Swiss Federal Laboratories for Materials Science and Technology, Überlandstrasse 129, CH-8600 Dübendorf (Switzerland); Wagner, Barbara; Bulska, Ewa [University of Warsaw, Faculty of Chemistry, Biological and Chemical Research Centre, Żwirki i Wigury 101, 02-089 Warszawa (Poland); Ulrich, Andrea [Laboratory for Analytical Chemistry, Swiss Federal Laboratories for Materials Science and Technology, Überlandstrasse 129, CH-8600 Dübendorf (Switzerland)

    2014-09-01

    This work describes the evaluation of a strategy for multi-elemental analysis of typical ancient bronzes (copper, lead bronze and tin bronze) by means of laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS). The samples, originating from archeological experiments on ancient metal smelting processes using direct reduction in a 'bloomery' furnace as well as historical casting techniques, were investigated using the previously proposed analytical procedure, including metallurgical observation and preliminary visual estimation of the homogeneity of the samples. The results of LA-ICPMS analysis were compared to the bulk composition obtained by X-ray fluorescence spectrometry (XRF) and by inductively coupled plasma mass spectrometry (ICPMS) after acid digestion. These results were consistent for most of the elements, confirming the usefulness of the proposed analytical procedure; however, the reliability of the quantitative information about the content of the most heterogeneously distributed elements is also discussed in more detail. - Highlights: • The previously proposed procedure was evaluated by analysis of model copper alloys. • The LA-ICPMS results were comparable to those obtained by means of XRF and ICPMS. • The LA-ICPMS results indicated the usefulness of the proposed analytical procedure.

  16. B_s^0–B̄_s^0 mixing within minimal flavor-violating two-Higgs-doublet models

    International Nuclear Information System (INIS)

    Chang, Qin; Li, Pei-Fu; Li, Xin-Qiang

    2015-01-01

    In the “Higgs basis” for a generic 2HDM, only one scalar doublet gets a nonzero vacuum expectation value and, under the criterion of minimal flavor violation, the other one is fixed to be either color-singlet or color-octet; these are named the type-III and type-C models, respectively. In this paper, the charged-Higgs effects of these two models on B_s^0–B̄_s^0 mixing are studied. First of all, we perform a complete one-loop computation of the electroweak corrections to the amplitudes of B_s^0–B̄_s^0 mixing. Together with the up-to-date experimental measurements, a detailed phenomenological analysis is then performed for both real and complex Yukawa couplings of the charged scalars to quarks. The regions of model parameters allowed by the current experimental data on B_s^0–B̄_s^0 mixing are obtained, and the differences between the type-III and type-C models are investigated, which is helpful for distinguishing between these two models.

  17. The eternal quest for optimal balance between maximizing pleasure and minimizing harm: the compensatory health beliefs model.

    Science.gov (United States)

    Rabia, Marjorie; Knäuper, Bärbel; Miquelon, Paule

    2006-02-01

    Particularly in the health domain, humans strive to reach an equilibrium between maximizing pleasure and minimizing harm. We propose that a cognitive strategy people employ to reach this equilibrium is the activation of Compensatory Health Beliefs (CHBs). CHBs are beliefs that the negative effects of an unhealthy behaviour can be compensated for, or "neutralized," by engaging in another, healthy behaviour. "I can eat this piece of cake now because I will exercise this evening" is an example of such a belief. Our theoretical framework aims at explaining why people create CHBs and how they employ CHBs to regulate their health behaviours. The model extends current health behaviour models by explicitly integrating the motivational conflict that emerges from the interplay between affective states (i.e., cravings or desires) and motivation (i.e., health goals). As predicted by the model, previous research has shown that holding CHBs hinders an individual's success at positive health behaviour change, and may explain why many people fail to adhere to behaviour change programs such as dieting or exercising. Moreover, future research using the model and implications for possible interventions are discussed.

  18. Solving the 3d Ising Model with the Conformal Bootstrap II. c-Minimization and Precise Critical Exponents

    CERN Document Server

    El-Showk, Sheer; Poland, David; Rychkov, Slava; Simmons-Duffin, David; Vichi, Alessandro

    2014-01-01

    We use the conformal bootstrap to perform a precision study of the operator spectrum of the critical 3d Ising model. We conjecture that the 3d Ising spectrum minimizes the central charge c in the space of unitary solutions to crossing symmetry. Because extremal solutions to crossing symmetry are uniquely determined, we are able to precisely reconstruct the first several Z2-even operator dimensions and their OPE coefficients. We observe that a sharp transition in the operator spectrum occurs at the 3d Ising dimension Delta_sigma=0.518154(15), and find strong numerical evidence that operators decouple from the spectrum as one approaches the 3d Ising point. We compare this behavior to the analogous situation in 2d, where the disappearance of operators can be understood in terms of degenerate Virasoro representations.

  19. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces R(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  20. Computational fitness landscape for all gene-order permutations of an RNA virus.

    Directory of Open Access Journals (Sweden)

    Kwang-il Lim

    2009-02-01

    Full Text Available How does the growth of a virus depend on the linear arrangement of genes in its genome? Answering this question may enhance our basic understanding of virus evolution and advance applications of viruses as live attenuated vaccines, gene-therapy vectors, or anti-tumor therapeutics. We used a mathematical model for vesicular stomatitis virus (VSV), a prototype RNA virus that encodes five genes (N-P-M-G-L), to simulate the intracellular growth of all 120 possible gene-order variants. Simulated yields of virus infection varied by 6,000-fold and were found to be most sensitive to gene-order permutations that increased levels of the L gene transcript or reduced levels of the N gene transcript, the lowest and highest expressed genes of the wild-type virus, respectively. Effects of gene order on virus growth also depended upon the host-cell environment, reflecting different resources for protein synthesis and different cell susceptibilities to infection. Moreover, by computationally deleting intergenic attenuations, which define a key mechanism of transcriptional regulation in VSV, the variation in growth associated with the 120 gene-order variants was drastically narrowed from 6,000- to 20-fold, and many variants produced higher progeny yields than wild-type. These results suggest that regulation by intergenic attenuation preceded or co-evolved with the fixation of the wild-type gene order in the evolution of VSV. In summary, our models have begun to reveal how gene functions, gene regulation, and genomic organization of viruses interact with their host environments to define processes of viral growth and evolution.
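    For orientation, the 120 gene-order variants referred to above are simply the permutations of the five VSV genes; the following illustrative Python snippet (the intracellular growth model itself is not reproduced here) enumerates them:

    from itertools import permutations

    genes = ("N", "P", "M", "G", "L")                    # wild-type order: N-P-M-G-L
    variants = ["-".join(order) for order in permutations(genes)]
    print(len(variants))                                  # 120 gene-order variants
    print(variants[:2])                                   # ['N-P-M-G-L', 'N-P-M-L-G']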

  1. The coupling analysis between stock market indices based on permutation measures

    Science.gov (United States)

    Shi, Wenbin; Shang, Pengjian; Xia, Jianan; Yeh, Chien-Hung

    2016-04-01

    Many information-theoretic methods have been proposed for analyzing the coupling dependence between time series, and it is important to quantify the correlation between financial sequences, since the financial market is a complex, evolving dynamical system. Recently, we developed a new permutation-based entropy, called cross-permutation entropy (CPE), to detect the coupling structures between two synchronous time series. In this paper, we extend the CPE method to weighted cross-permutation entropy (WCPE), to address some of CPE's limitations, mainly its inability to differentiate between distinct patterns of a certain motif and its sensitivity to patterns close to the noise floor. It shows more stable and reliable results than CPE does when applied to spiky data and AR(1) processes. Besides, we adapt the CPE method to infer the complexity of short-length time series by freely changing the time delay, and test it with Gaussian random series and random walks. The modified method shows advantages in reducing deviations of the entropy estimation compared with the conventional one. Finally, the weighted cross-permutation entropy of eight important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.

  2. A Symmetric Chaos-Based Image Cipher with an Improved Bit-Level Permutation Strategy

    Directory of Open Access Journals (Sweden)

    Chong Fu

    2014-02-01

    Full Text Available Very recently, several chaos-based image ciphers using a bit-level permutation have been suggested and have shown promising results. Due to the diffusion effect introduced in the permutation stage, the workload of the time-consuming diffusion stage is reduced, and hence the performance of the cryptosystem is improved. In this paper, a symmetric chaos-based image cipher with a 3D cat-map-based spatial bit-level permutation strategy is proposed. Compared with recently proposed bit-level permutation methods, the diffusion effect of the new method is superior, as the bits are shuffled among different bit-planes rather than within the same bit-plane. Moreover, the diffusion key stream extracted from the hyperchaotic system is related to both the secret key and the plain image, which enhances the security against known/chosen-plaintext attacks. Extensive security analysis has been performed on the proposed scheme, including the most important tests such as key space analysis, key sensitivity analysis, plaintext sensitivity analysis and various statistical analyses, which has demonstrated the satisfactory security of the proposed scheme.

  3. EPEPT: A web service for enhanced P-value estimation in permutation tests

    Directory of Open Access Journals (Sweden)

    Knijnenburg Theo A

    2011-10-01

    Full Text Available Background: In computational biology, permutation tests have become a widely used tool to assess the statistical significance of an event under investigation. However, the common way of computing the P-value, which expresses the statistical significance, requires a very large number of permutations when small (and thus interesting) P-values are to be accurately estimated. This is computationally expensive and often infeasible. Recently, we proposed an alternative estimator, which requires far fewer permutations compared to the standard empirical approach while still reliably estimating small P-values [1]. Results: The proposed P-value estimator has been enriched with additional functionalities and is made available to the general community through a public website and web service, called EPEPT. This means that the EPEPT routines can be accessed not only via a website, but also programmatically using any programming language that can interact with the web. Examples of web service clients in multiple programming languages can be downloaded. Additionally, EPEPT accepts data of various common experiment types used in computational biology. For these experiment types EPEPT first computes the permutation values and then performs the P-value estimation. Finally, the source code of EPEPT can be downloaded. Conclusions: Different types of users, such as biologists, bioinformaticians and software engineers, can use the method in an appropriate and simple way. Availability: http://informatics.systemsbiology.net/EPEPT/
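    For context, the standard empirical estimator that EPEPT's enhanced estimator is designed to improve upon can be sketched as follows (a generic two-sample permutation test on invented data; this is not EPEPT's own algorithm):

    import numpy as np

    def empirical_p_value(x, y, n_perm=10000, seed=None):
        # Two-sample permutation test on the absolute difference of means.
        # Standard empirical estimator: p = (b + 1) / (n_perm + 1), where b is the
        # number of permuted statistics at least as extreme as the observed one.
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([x, y])
        observed = abs(np.mean(x) - np.mean(y))
        b = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)
            stat = abs(np.mean(perm[:len(x)]) - np.mean(perm[len(x):]))
            b += int(stat >= observed)
        return (b + 1) / (n_perm + 1)

    x = np.random.default_rng(0).normal(0.5, 1.0, 30)
    y = np.random.default_rng(1).normal(0.0, 1.0, 30)
    print(empirical_p_value(x, y, seed=2))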

  4. A novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism

    Science.gov (United States)

    Ye, Ruisong

    2011-10-01

    This paper proposes a novel chaos-based image encryption scheme with an efficient permutation-diffusion mechanism, in which permuting the positions of image pixels is combined with changing their gray values to confuse the relationship between cipher-image and plain-image. In the permutation process, a generalized Arnold map is utilized to generate one chaotic orbit used to obtain two index order sequences for the permutation of image pixel positions; in the diffusion process, a generalized Arnold map and a generalized Bernoulli shift map are employed to yield two pseudo-random gray value sequences for a two-way diffusion of gray values. The generated gray value sequences are not only sensitive to the control parameters and initial conditions of the considered chaotic maps, but also strongly depend on the plain-image processed; therefore the proposed scheme can resist statistical attack, differential attack, and known-plaintext as well as chosen-plaintext attacks. Experimental results, together with detailed analysis, demonstrate that the proposed image encryption scheme possesses a large key space and can resist brute-force attack as well.
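    The permutation stage based on a generalized Arnold map can be illustrated with the following minimal Python sketch (the map parameters and image are invented, and the diffusion stage of the scheme is omitted):

    import numpy as np

    def arnold_permute(img, a=1, b=1, rounds=1):
        # Generalized Arnold map on an N x N image:
        #   x' = (x + a*y) mod N,  y' = (b*x + (a*b + 1)*y) mod N
        # The map matrix has determinant 1, so it is invertible mod N and defines
        # a pure permutation of pixel positions (gray values are not changed here).
        n = img.shape[0]
        out = img.copy()
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        for _ in range(rounds):
            xn = (x + a * y) % n
            yn = (b * x + (a * b + 1) * y) % n
            nxt = np.empty_like(out)
            nxt[xn, yn] = out[x, y]
            out = nxt
        return out

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)
    scrambled = arnold_permute(img, a=3, b=5, rounds=2)
    print(sorted(scrambled.ravel()) == sorted(img.ravel()))   # True: a pure permutation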

  5. Permutation entropy based time series analysis: Equalities in the input signal can lead to false conclusions

    Energy Technology Data Exchange (ETDEWEB)

    Zunino, Luciano, E-mail: lucianoz@ciop.unlp.edu.ar [Centro de Investigaciones Ópticas (CONICET La Plata – CIC), C.C. 3, 1897 Gonnet (Argentina); Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina); Olivares, Felipe, E-mail: olivaresfe@gmail.com [Instituto de Física, Pontificia Universidad Católica de Valparaíso (PUCV), 23-40025 Valparaíso (Chile); Scholkmann, Felix, E-mail: Felix.Scholkmann@gmail.com [Research Office for Complex Physical and Biological Systems (ROCoS), Mutschellenstr. 179, 8038 Zurich (Switzerland); Biomedical Optics Research Laboratory, Department of Neonatology, University Hospital Zurich, University of Zurich, 8091 Zurich (Switzerland); Rosso, Osvaldo A., E-mail: oarosso@gmail.com [Instituto de Física, Universidade Federal de Alagoas (UFAL), BR 104 Norte km 97, 57072-970, Maceió, Alagoas (Brazil); Instituto Tecnológico de Buenos Aires (ITBA) and CONICET, C1106ACD, Av. Eduardo Madero 399, Ciudad Autónoma de Buenos Aires (Argentina); Complex Systems Group, Facultad de Ingeniería y Ciencias Aplicadas, Universidad de los Andes, Av. Mons. Álvaro del Portillo 12.455, Las Condes, Santiago (Chile)

    2017-06-15

    A symbolic encoding scheme, based on the ordinal relation between the amplitude of neighboring values of a given data sequence, should be implemented before estimating the permutation entropy. Consequently, equalities in the analyzed signal, i.e. repeated equal values, deserve special attention and treatment. In this work, we carefully study the effect that the presence of equalities has on estimated permutation entropy values when these ties are symbolized, as is commonly done, according to their order of appearance. On the one hand, the analysis of computer-generated time series is initially developed to understand the incidence of repeated values on permutation entropy estimations in controlled scenarios. The presence of temporal correlations is erroneously concluded when true pseudorandom time series with low amplitude resolutions are considered. On the other hand, the analysis of real-world data is included to illustrate how the presence of a significant number of equal values can give rise to false conclusions regarding the underlying temporal structures in practical contexts. - Highlights: • The impact of repeated values in a signal when estimating permutation entropy is studied. • Numerical and experimental tests are included for characterizing this limitation. • Non-negligible temporal correlations can be spuriously concluded due to repeated values. • Data digitized with low amplitude resolutions could be especially affected. • Analysis with shuffled realizations can help to overcome this limitation.
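    A minimal Python sketch of permutation entropy estimation, with ties ranked by their order of appearance via a stable sort (the convention whose pitfalls are analyzed above), is given below; the data are illustrative only:

    import numpy as np
    from collections import Counter
    from math import factorial, log

    def permutation_entropy(x, order=3, delay=1, normalize=True):
        # Ordinal patterns via argsort; the stable sort ranks equal values by
        # their order of appearance, i.e. the tie convention discussed above.
        x = np.asarray(x, dtype=float)
        n = len(x) - (order - 1) * delay
        patterns = Counter(
            tuple(np.argsort(x[i:i + order * delay:delay], kind="stable"))
            for i in range(n)
        )
        probs = np.array(list(patterns.values()), dtype=float) / n
        h = -float(np.sum(probs * np.log(probs)))
        return h / log(factorial(order)) if normalize else h

    rng = np.random.default_rng(0)
    data = rng.normal(size=1000)
    print(permutation_entropy(data))             # close to 1 for a random series
    print(permutation_entropy(np.round(data)))   # repeated values bias the estimate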

  6. Transformative decision rules, permutability, and non-sequential framing of decision problems

    NARCIS (Netherlands)

    Peterson, M.B.

    2004-01-01

    The concept of transformative decision rules provides a useful tool for analyzing what is often referred to as the 'framing', 'problem specification', or 'editing' phase of decision making. In the present study we analyze a fundamental aspect of transformative decision rules, viz. permutability. A

  7. EXPLICIT SYMPLECTIC-LIKE INTEGRATORS WITH MIDPOINT PERMUTATIONS FOR SPINNING COMPACT BINARIES

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Junjie; Wu, Xin; Huang, Guoqing [Department of Physics and Institute of Astronomy, Nanchang University, Nanchang 330031 (China); Liu, Fuyao, E-mail: xwu@ncu.edu.cn [School of Fundamental Studies, Shanghai University of Engineering Science, Shanghai 201620 (China)

    2017-01-01

    We refine the recently developed fourth-order extended phase space explicit symplectic-like methods for inseparable Hamiltonians using Yoshida’s triple product combined with a midpoint permuted map. The midpoint between the original variables and their corresponding extended variables at every integration step is readjusted as the initial values of the original variables and their corresponding extended ones at the next step integration. The triple-product construction is apparently superior to the composition of two triple products in computational efficiency. Above all, the new midpoint permutations are more effective in restraining the equality of the original variables and their corresponding extended ones at each integration step than the existing sequent permutations of momenta and coordinates. As a result, our new construction shares the benefit of implicit symplectic integrators in the conservation of the second post-Newtonian Hamiltonian of spinning compact binaries. Especially for the chaotic case, it can work well, but the existing sequent permuted algorithm cannot. When dissipative effects from the gravitational radiation reaction are included, the new symplectic-like method has a secular drift in the energy error of the dissipative system for the orbits that are regular in the absence of radiation, as an implicit symplectic integrator does. In spite of this, it is superior to the same-order implicit symplectic integrator in accuracy and efficiency. The new method is particularly useful in discussing the long-term evolution of inseparable Hamiltonian problems.

  8. Multiple comparisons permutation test for image based data mining in radiotherapy

    NARCIS (Netherlands)

    Chen, Chun; Witte, Marnix; Heemsbergen, Wilma; van Herk, Marcel

    2013-01-01

    Comparing incidental dose distributions (i.e. images) of patients with different outcomes is a straightforward way to explore dose-response hypotheses in radiotherapy. In this paper, we introduced a permutation test that compares images, such as dose distributions from radiotherapy, while tackling
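    A generic max-statistic permutation test is one common way to handle the multiple-comparisons problem when images are compared voxel by voxel; the following Python sketch (synthetic data, not necessarily the exact procedure of the paper) illustrates the idea:

    import numpy as np

    def maxstat_permutation_test(images, labels, n_perm=1000, seed=0):
        # images: (n_patients, n_voxels) dose distributions, one row per patient;
        # labels: boolean outcome per patient. The maximum over voxels of each
        # permuted mean-difference map gives a null distribution that controls
        # the family-wise error rate across all voxels.
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels, dtype=bool)

        def diff_map(lab):
            return images[lab].mean(axis=0) - images[~lab].mean(axis=0)

        observed = diff_map(labels)
        null_max = np.array([np.max(np.abs(diff_map(rng.permutation(labels))))
                             for _ in range(n_perm)])
        # Voxel-wise FWER-corrected permutation p-values.
        p = (1 + (null_max[None, :] >= np.abs(observed)[:, None]).sum(axis=1)) / (n_perm + 1)
        return observed, p

    imgs = np.random.default_rng(1).normal(size=(40, 500))
    outcome = np.arange(40) < 20
    imgs[outcome, :10] += 1.5                    # hypothetical dose-response region
    _, pvals = maxstat_permutation_test(imgs, outcome)
    print((pvals < 0.05).sum())                  # number of voxels surviving correction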

  9. A method for generating permutation distribution of ranks in a k ...

    African Journals Online (AJOL)

    ... in a combinatorial sense the distribution of the ranks is obtained via its generating function. The formulas are defined recursively to speed up computations using the computer algebra system Mathematica. Key words: Partitions, generating functions, combinatorics, permutation test, exact tests, computer algebra, k-sample, ...
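    As an illustration of the generating-function idea in the two-sample special case (not the k-sample Mathematica implementation of the paper), the exact permutation distribution of a rank sum can be computed recursively from the coefficients of the generating function prod_{i=1..N}(1 + q*x^i):

    from collections import defaultdict
    from math import comb

    def rank_sum_distribution(n, m):
        # Exact permutation distribution of the rank sum of a sample of size n
        # drawn from the ranks 1..(n+m): the q^n coefficients of
        # prod_{i=1}^{n+m} (1 + q*x**i), built up recursively rank by rank.
        N = n + m
        table = [defaultdict(int) for _ in range(n + 1)]   # table[k][s]: number of k-subsets with sum s
        table[0][0] = 1
        for i in range(1, N + 1):
            for k in range(min(i, n), 0, -1):              # descending k: each rank used at most once
                for s, c in list(table[k - 1].items()):
                    table[k][s + i] += c
        total = comb(N, n)
        return {s: c / total for s, c in sorted(table[n].items())}

    # Right-tail permutation p-value of an observed rank sum w for n = 4, m = 5.
    dist = rank_sum_distribution(4, 5)
    w = 26
    print(sum(prob for s, prob in dist.items() if s >= w))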

  10. Natural Peccei-Quinn symmetry in the 3-3-1 model with a minimal scalar sector

    International Nuclear Information System (INIS)

    Montero, J. C.; Sanchez-Vega, B. L.

    2011-01-01

    In the framework of a 3-3-1 model with a minimal scalar sector we make a detailed study concerning the implementation of the Peccei-Quinn symmetry in order to solve the strong CP problem. For the original version of the model, with only two scalar triplets, we show that the entire Lagrangian is invariant under a Peccei-Quinn-like symmetry but no axion is produced since a U(1) subgroup remains unbroken. Although in this case the strong CP problem can still be solved, the solution is largely disfavored since three quark states are left massless to all orders in perturbation theory. The addition of a third scalar triplet removes the massless quark states but the resulting axion is visible. In order to become realistic the model must be extended to account for massive quarks and an invisible axion. We show that the addition of a scalar singlet together with a Z_N discrete gauge symmetry can successfully accomplish these tasks and protect the axion field against quantum gravitational effects. To make sure that the protecting discrete gauge symmetry is anomaly-free we use a discrete version of the Green-Schwarz mechanism.

  11. Make or buy analysis model based on tolerance allocation to minimize manufacturing cost and fuzzy quality loss

    Science.gov (United States)

    Rosyidi, C. N.; Puspitoingrum, W.; Jauhari, W. A.; Suhardi, B.; Hamada, K.

    2016-02-01

    The specification of tolerances has a significant impact on product quality and final production cost. A company should pay careful attention to component and product tolerances so that it can produce a good-quality product at the lowest cost. Tolerance allocation has been widely used to solve the problem of selecting a particular process or supplier. Before getting into the selection process, however, the company must first analyse whether each component should be made in house (make), purchased from a supplier (buy), or sourced through a combination of both. This paper discusses an optimization model of process and supplier selection that minimizes the manufacturing costs and the fuzzy quality loss. The model can also be used to determine the allocation of components to the selected processes or suppliers. Tolerance, process capability and production capacity are three important constraints that affect the decision. A fuzzy quality loss function is used to describe the semantics of quality, in which the product quality level is divided into several grades. The implementation of the proposed model is demonstrated by solving a numerical example problem based on a simple assembly product consisting of three components. A metaheuristic approach was implemented in the OptQuest software from Oracle Crystal Ball to obtain the optimal solution of the numerical example.
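    A toy version of the make-or-buy selection logic (with invented options, a crisp quadratic quality loss in place of the paper's fuzzy loss, and a single tolerance constraint) might look as follows:

    # Hypothetical alternatives: (name, manufacturing cost, achievable tolerance).
    options = [("in-house process A", 3.0, 0.05), ("in-house process B", 4.5, 0.02),
               ("supplier X", 2.5, 0.08), ("supplier Y", 3.8, 0.03)]
    k = 50.0       # quality-loss coefficient (assumed, Taguchi-style quadratic loss)
    t_req = 0.06   # required component tolerance (assumed)

    def total_cost(cost, tol):
        # Total cost = manufacturing cost + quadratic quality loss.
        return cost + k * tol ** 2

    feasible = [(total_cost(c, t), name) for name, c, t in options if t <= t_req]
    print(min(feasible))   # cheapest alternative that meets the tolerance constraint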

  12. Non-minimal flavored S_3 × Z_2 left-right symmetric model

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Izquierdo, Juan Carlos [Tecnologico de Monterrey, Campus Estado de Mexico, Estado de Mexico, Estado de Mexico (Mexico); Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Instituto de Fisica, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico)

    2017-08-15

    We propose a non-minimal left-right symmetric model with parity symmetry where the fermion mixings arise as a result of imposing an S_3 × Z_2 flavor symmetry, and an extra Z_2^e symmetry is considered in the lepton sector. The neutrino mass matrix then possesses an approximate μ-τ symmetry. The breaking of the μ-τ symmetry induces a sizable non-zero θ_13, and the deviation of θ_23 from 45° is strongly controlled by a free parameter ε and the neutrino masses. An analytic study of the CP parities in the neutrino masses is carried out to constrain the ε parameter and the lightest neutrino mass that accommodate the mixing angles. The results are: (a) the normal hierarchy is ruled out for any values of the Majorana phases; (b) for the inverted hierarchy the values of the reactor and atmospheric angles are compatible up to 2–3σ C.L.; (c) the degenerate ordering is the most favorable, such that the reactor and atmospheric angles are compatible with the experimental data for a large set of values of the free parameters. The model predicts definite regions for the effective neutrino mass, the neutrino mass scale and the sum of the neutrino masses in the favored cases. Therefore, this model may be testable by future experiments. (orig.)

  13. Conical Intersections, charge localization, and photoisomerization pathway selection in a minimal model of a degenerate monomethine dye

    International Nuclear Information System (INIS)

    Olsen, Seth; McKenzie, Ross H.

    2009-01-01

    We propose a minimal model Hamiltonian for the electronic structure of a monomethine dye, in order to describe the photoisomerization of such dyes. The model describes interactions between three diabatic electronic states, each of which can be associated with a valence bond structure. Monomethine dyes are characterized by a charge-transfer resonance; the indeterminacy of the single-double bonding structure dictated by the resonance is reflected in a duality of photoisomerization pathways corresponding to the different methine bonds. The possible multiplicity of decay channels complicates mechanistic models of the effect of the environment on fluorescent quantum yields, as well as coherent control strategies. We examine the extent and topology of intersection seams between the electronic states of the dye and how they relate to charge localization and selection between different decay pathways. We find that intersections between the S_1 and S_0 surfaces only occur for large twist angles. In contrast, S_2/S_1 intersections can occur near the Franck-Condon region. When the molecule has left-right symmetry, all intersections are associated with con- or disrotations and never with single bond twists. For asymmetric molecules (i.e., where the bridge couples more strongly to one end) the S_2 and S_1 surfaces bias torsion about different bonds. Charge localization and torsion pathway biasing are correlated. We relate our observations with several recent experimental and theoretical results, which have been obtained for dyes with similar structure.

  14. Probing Minimal 5D Extensions of the Standard Model From LEP to an $e^{+} e^{-}$ Linear Collider

    CERN Document Server

    Mück, A; Rückl, R; Mück, Alexander; Pilaftsis, Apostolos; Rückl, Reinhold

    2004-01-01

    We derive new improved constraints on the compactification scale of minimal 5-dimensional (5D) extensions of the Standard Model (SM) from electroweak and LEP2 data and estimate the reach of an e^+e^- linear collider such as TESLA. Our analysis is performed within the framework of non-universal 5D models, where some of the gauge and Higgs fields propagate in the extra dimension, while all fermions are localized on a S^1/Z_2 orbifold fixed point. Carrying out simultaneous multi-parameter fits of the compactification scale and the SM parameters to the data, we obtain lower bounds on this scale in the range between 4 and 6 TeV. These fits also yield the correlation of the compactification scale with the SM Higgs mass. Investigating the prospects at TESLA, we show that the so-called GigaZ option has the potential to improve these bounds by about a factor 2 in almost all 5D models. Furthermore, at the center of mass energy of 800 GeV and with an integrated luminosity of 10^3 fb^-1, linear collider experiments can p...

  15. All possible lightest supersymmetric particles in proton hexality violating minimal supergravity models and their signals at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Grab, Sebastian

    2009-08-15

    The most widely studied supersymmetric scenario is the minimal supersymmetric standard model (MSSM) with more than a hundred free parameters. However, for detailed phenomenological studies, the minimal supergravity (mSUGRA) model, a restricted and well-motivated framework for the MSSM, is more convenient. In this model, lepton- and baryon-number violating interactions are suppressed by a discrete symmetry, R-parity or proton-hexality, to keep the proton stable. However, it is sufficient to forbid only lepton- or baryon-number violation. We thus extend mSUGRA models by adding a proton-hexality violating operator at the grand unification scale. This can change the supersymmetric spectrum, leading on the one hand to a sneutrino, smuon or squark as the lightest supersymmetric particle (LSP). On the other hand, a wide parameter region is reopened, where the scalar tau (stau) is the LSP. We investigate in detail the conditions leading to non-neutralino LSP scenarios. We take into account the restrictions from neutrino masses, the muon anomalous magnetic moment, b→sγ, and other precision measurements. We furthermore investigate existing restrictions from direct searches at LEP, the Tevatron, and the CERN p anti-p collider. It is vital to know the nature of the LSP, since supersymmetric particles normally cascade decay down to the LSP at collider experiments. We present typical LHC signatures for sneutrino LSP scenarios. Promising signatures are high-p_T muons and jets, like-sign muon events and detached vertices from long-lived taus. We also classify the stau LSP decays and describe their dependence on the mSUGRA parameters. We then exploit our results for resonant single slepton production at the LHC. We find novel signatures with like-sign muon and three- and four-muon final states. Finally, we perform a detailed analysis for single slepton production in association with a single top quark. We show that the signal can be distinguished from the background at the LHC.

  16. MATHEMATICAL MODEL AND METHODOLOGY FOR CALCULATION OF MINIMIZATION ON TURNING RADIUS OF TRACTOR UNIT WITH REPLACEABLE SUPPORTING AND MANEUVERING DEVICE

    Directory of Open Access Journals (Sweden)

    P. V. Zeleniy

    2016-01-01

    Full Text Available Smooth plowing with the help of reversible plows has replaced the enclosure method of soil treatment, which may cause the formation of back ridges or open furrows. As a result, turns of the tractor unit with the minimum radius required to ensure shuttle movement into the furrow of the preceding operating stroke each time have become the dominant type of turn. Non-productive shift time depends directly on these turns; it averages 10–12 % and reaches up to 40 % in small contoured fields with short runs. Much of this non-productive time stems from the desire to reduce headland width at the field edges, in which case a turn is made in several stages using complicated maneuvering. Therefore, increasing the efficiency of a plowing unit by minimizing its turning radius and executing the turn in one stage in the shortest possible time are relevant objectives. It must be taken into account that the potential of universal tractors with established, time-proven designs for further reducing the turning radius is practically exhausted, so it is expedient to solve the problem with additional removable devices that transform the tractor's wheel formula at the end of the run in order to reorient its position. Finally, high-quality plowing ensured by future-oriented reversible plows will be accompanied not only by increased output per shift, but also by a decrease in headland width and in its compaction and abrasion due to suspension systems, and by an increase in productivity. The developed design, whose novelty is confirmed by an invention patent and which takes the form of an additional supporting and maneuvering device, significantly minimizes all the above-mentioned disadvantages and does not require any changes in the tractor's production design. Investigations have been carried out on the following topic: “Minimization of turning radius for universal tractors by transformation

  17. Spin–orbit coupling, minimal model and potential Cooper-pairing from repulsion in BiS2-superconductors

    Science.gov (United States)

    Cobo-Lopez, Sergio; Saeed Bahramy, Mohammad; Arita, Ryotaro; Akbari, Alireza; Eremin, Ilya

    2018-04-01

    We develop a realistic minimal electronic model for the recently discovered BiS2 superconductors, including the spin–orbit (SO) coupling, based on first-principles band structure calculations. Due to the strong SO coupling characteristic of Bi-based systems, the tight-binding low-energy model necessarily includes the p_x, p_y, and p_z orbitals. We analyze a potential Cooper-pairing instability from purely repulsive interaction for moderate electronic correlations using the so-called leading angular harmonics approximation. For small and intermediate doping concentrations we find the dominant instabilities to be of d_{x²−y²}-wave and s±-wave symmetry, respectively. At the same time, in the absence of sizable spin fluctuations the intra- and interband Coulomb repulsions are of the same strength, which yields strongly anisotropic behavior of the superconducting gaps on the Fermi surface. This agrees with recent angle-resolved photoemission spectroscopy findings. In addition, we find that the Fermi surface topology of the BiS2 layered systems at large electron doping can resemble that of the doped iron-based pnictide superconductors, with electron and hole Fermi surfaces maintaining sufficient nesting between them. This could provide a further boost to increasing T_c in these systems.

  18. An approach to normal forms of Kuramoto model with distributed delays and the effect of minimal delay

    Energy Technology Data Exchange (ETDEWEB)

    Niu, Ben, E-mail: niubenhit@163.com [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Guo, Yuxiao [Department of Mathematics, Harbin Institute of Technology, Weihai 264209 (China); Jiang, Weihua [Department of Mathematics, Harbin Institute of Technology, Harbin 150001 (China)

    2015-09-25

    Heterogeneous delays with a positive lower bound (gap) are taken into consideration in the Kuramoto model. On the Ott–Antonsen manifold, the dynamical transition from incoherence to coherence is mediated by a Hopf bifurcation. We establish a perturbation technique on the complex domain, by which the universal normal forms, stability and criticality of the Hopf bifurcation are obtained. Theoretically, a hysteresis loop is found near the subcritically bifurcated coherent state. For a Gamma-distributed delay with fixed mean and variance, we find that a large gap decreases the Hopf bifurcation value, induces supercritical bifurcations, avoids the hysteresis loop and significantly increases the number of coexisting coherent states. The effect of the gap is finally interpreted from the viewpoint of the excess kurtosis of the Gamma distribution. - Highlights: • A heterogeneously delay-coupled Kuramoto model with minimal delay is considered. • A perturbation technique on the complex domain is established for bifurcation analysis. • The hysteresis phenomenon is investigated theoretically. • The effect of the excess kurtosis of the distributed delays is discussed.
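    The paper's analysis is analytic (normal forms on the Ott–Antonsen manifold); purely as an illustration of the underlying setup, the sketch below numerically integrates a Kuramoto model whose pairwise coupling delays are Gamma-distributed above a minimal delay (the gap) and tracks the order parameter over time. The function name, parameter values, frequency distribution and the simple Euler scheme are assumptions made for illustration, not taken from the paper.

    # Hypothetical sketch (not the paper's code): Euler integration of a Kuramoto
    # model whose pairwise coupling delays are Gamma-distributed above a minimal
    # delay ("gap"), tracking the Kuramoto order parameter r(t).
    import numpy as np

    def simulate_delayed_kuramoto(N=100, K=1.5, gap=0.5, shape=2.0, scale=0.25,
                                  dt=0.01, T=50.0, seed=0):
        rng = np.random.default_rng(seed)
        omega = rng.normal(0.0, 0.3, N)                    # natural frequencies (illustrative spread)
        tau = gap + rng.gamma(shape, scale, size=(N, N))   # pairwise delays, all >= gap
        lag = np.maximum(1, np.round(tau / dt).astype(int))
        steps = int(T / dt)
        buf = np.zeros((lag.max() + 1 + steps, N))         # phase history followed by trajectory
        buf[:lag.max() + 1] = rng.uniform(0, 2 * np.pi, size=(lag.max() + 1, N))
        t0 = lag.max()                                     # buffer index corresponding to t = 0
        cols = np.arange(N)[None, :]
        order = np.empty(steps + 1)
        order[0] = np.abs(np.exp(1j * buf[t0]).mean())
        for n in range(1, steps + 1):
            cur = buf[t0 + n - 1]                          # theta_i(t)
            delayed = buf[t0 + n - 1 - lag, cols]          # theta_j(t - tau_ij)
            coupling = (K / N) * np.sin(delayed - cur[:, None]).sum(axis=1)
            buf[t0 + n] = cur + dt * (omega + coupling)
            order[n] = np.abs(np.exp(1j * buf[t0 + n]).mean())
        return order

    r = simulate_delayed_kuramoto()
    print("final order parameter:", round(float(r[-1]), 3))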

  19. Pore-forming activity of pestivirus p7 in a minimal model system supports genus-specific viroporin function.

    Science.gov (United States)

    Largo, Eneko; Gladue, Douglas P; Huarte, Nerea; Borca, Manuel V; Nieva, José L

    2014-01-01

    Viroporins are small integral membrane proteins that function in viral assembly and egress by promoting permeabilization. Blocking viroporin function therefore constitutes a target for antiviral development. Classical swine fever virus (CSFV) protein p7 has recently been regarded as a class II viroporin. Here, we sought to establish the determinants of the CSFV p7 permeabilizing activity in a minimal model system. Assessment of an overlapping peptide library mapped the porating domain to the C-terminal hydrophobic stretch (residues 39-67). The pore-opening dependence on pH and the sensitivity to channel blockers observed for the full protein required the inclusion of a preceding polar sequence (residues 33-38). Effects of lipid composition and structural data further support that the resulting peptide (residues 33-67) may comprise a bona fide surrogate for assaying p7 activity in model membranes. Our observations imply that CSFV p7 relies on genus-specific structures and mechanisms to perform its viroporin function. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Effective Lagrangian for the s̄bg and s̄bγ vertices in the minimal supergravity model

    International Nuclear Information System (INIS)

    Feng Taifu; Li Xueqian; Wang Guoli

    2002-01-01

    Complete expressions for the s̄bg and s̄bγ vertices are derived in the framework of supersymmetry with minimal flavor violation. As examples, the branching ratios of charmless B decays [B→K+X (no charm)] and of the exclusive process B_s → γγ are calculated under the minimal supergravity assumptions

  1. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the u...

  2. Origin of inflation in CFT driven cosmology. R2-gravity and non-minimally coupled inflaton models

    International Nuclear Information System (INIS)

    Barvinsky, A.O.; Kamenshchik, A.Yu.; Nesterov, D.V.

    2015-01-01

    We present a detailed derivation of the recently suggested new type of hill-top inflation [arXiv:1509.07270] originating from the microcanonical density matrix initial conditions in cosmology driven by conformal field theory (CFT). The cosmological instantons of topology S^1 × S^3, which set up these initial conditions, have the shape of a garland with multiple periodic oscillations of the scale factor of the spatial S^3-section. They describe underbarrier oscillations of the inflaton and scale factor in the vicinity of the inflaton potential maximum, which gives a sufficient amount of inflation required by the known CMB data. We build the approximation of two coupled harmonic oscillators for these garland instantons and show that they can generate inflation consistent with the parameters of the CMB primordial power spectrum in the non-minimal Higgs inflation model and in R^2 gravity. In particular, the instanton solutions provide smallness of the inflationary slow-roll parameters ε and η < 0 and their relation ε ∝ η^2 characteristic of these two models. We present the mechanism of formation of hill-like inflaton potentials, which is based on logarithmic loop corrections to the asymptotically shift-invariant tree-level potentials of these models in the Einstein frame. We also discuss the role of R^2-gravity as an indispensable finite renormalization tool in the CFT driven cosmology, which guarantees the nondynamical (ghost free) nature of its scale factor and special properties of its cosmological garland-type instantons. Finally, as a solution to the problem of hierarchy between the Planckian scale and the inflation scale we discuss the concept of a hidden sector of conformal higher spin fields. (orig.)

  3. Origin of inflation in CFT driven cosmology: R^2-gravity and non-minimally coupled inflaton models

    Energy Technology Data Exchange (ETDEWEB)

    Barvinsky, A. O., E-mail: barvin@td.lpi.ru [Theory Department, Lebedev Physics Institute, Leninsky Prospect 53, 119991, Moscow (Russian Federation); Department of Physics, Tomsk State University, Lenin Ave. 36, 634050, Tomsk (Russian Federation); Department of Physics and Astronomy, Pacific Institute for Theoretical Physics, UBC, 6224 Agricultural Road, V6T1Z1, Vancouver, BC (Canada); Kamenshchik, A. Yu., E-mail: kamenshchik@bo.infn.it [Dipartimento di Fisica e Astronomia, Università di Bologna and INFN, Via Irnerio 46, 40126, Bologna (Italy); L. D. Landau Institute for Theoretical Physics, 119334, Moscow (Russian Federation); Nesterov, D. V., E-mail: nesterov@td.lpi.it [Theory Department, Lebedev Physics Institute, Leninsky Prospect 53, 119991, Moscow (Russian Federation)

    2015-12-11

    We present a detailed derivation of the recently suggested new type of hill-top inflation originating from the microcanonical density matrix initial conditions in cosmology driven by conformal field theory (CFT). The cosmological instantons of topology S^1 × S^3, which set up these initial conditions, have the shape of a garland with multiple periodic oscillations of the scale factor of the spatial S^3-section. They describe underbarrier oscillations of the inflaton and scale factor in the vicinity of the inflaton potential maximum, which gives a sufficient amount of inflation required by the known CMB data. We build the approximation of two coupled harmonic oscillators for these garland instantons and show that they can generate inflation consistent with the parameters of the CMB primordial power spectrum in the non-minimal Higgs inflation model and in R^2 gravity. In particular, the instanton solutions provide smallness of the inflationary slow-roll parameters ϵ and η < 0 and their relation ϵ ∼ η^2 characteristic of these two models. We present the mechanism of formation of hill-like inflaton potentials, which is based on logarithmic loop corrections to the asymptotically shift-invariant tree-level potentials of these models in the Einstein frame. We also discuss the role of R^2-gravity as an indispensable finite renormalization tool in the CFT driven cosmology, which guarantees the non-dynamical (ghost free) nature of its scale factor and special properties of its cosmological garland-type instantons. Finally, as a solution to the problem of hierarchy between the Planckian scale and the inflation scale we discuss the concept of a hidden sector of conformal higher spin fields.

  4. Toward a minimal representation of aerosols in climate models: description and evaluation in the Community Atmosphere Model CAM5

    Directory of Open Access Journals (Sweden)

    X. Liu

    2012-05-01

    Full Text Available A modal aerosol module (MAM) has been developed for the Community Atmosphere Model version 5 (CAM5), the atmospheric component of the Community Earth System Model version 1 (CESM1). MAM is capable of simulating the aerosol size distribution and both internal and external mixing between aerosol components, treating numerous complicated aerosol processes and aerosol physical, chemical and optical properties in a physically-based manner. Two MAM versions were developed: a more complete version with seven lognormal modes (MAM7), and a version with three lognormal modes (MAM3) for the purpose of long-term (decades to centuries) simulations. In this paper a description and evaluation of the aerosol module and its two representations are provided. Sensitivity of the aerosol lifecycle to simplifications in the representation of aerosol is discussed.

    Simulated sulfate and secondary organic aerosol (SOA) mass concentrations are remarkably similar between MAM3 and MAM7. Differences in primary organic matter (POM) and black carbon (BC) concentrations between MAM3 and MAM7 are also small (mostly within 10%). The mineral dust global burden differs by 10% and the sea salt burden by 30–40% between MAM3 and MAM7, mainly due to the different size ranges of the dust and sea salt modes and the different standard deviations of the log-normal size distribution for the sea salt modes between MAM3 and MAM7. The model is able to qualitatively capture the observed geographical and temporal variations of aerosol mass and number concentrations, size distributions, and aerosol optical properties. However, there are noticeable biases; e.g., simulated BC concentrations are significantly lower than measurements in the Arctic. There is a low bias in modeled aerosol optical depth on the global scale, especially in the developing countries. These biases in aerosol simulations clearly indicate the need for improvements of aerosol processes (e.g., emission fluxes of anthropogenic aerosols and

  5. Toward a Minimal Representation of Aerosols in Climate Models: Description and Evaluation in the Community Atmosphere Model CAM5

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiaohong; Easter, Richard C.; Ghan, Steven J.; Zaveri, Rahul A.; Rasch, Philip J.; Shi, Xiangjun; Lamarque, J.-F.; Gettelman, A.; Morrison, H.; Vitt, Francis; Conley, Andrew; Park, S.; Neale, Richard; Hannay, Cecile; Ekman, A. M.; Hess, Peter; Mahowald, N.; Collins, William D.; Iacono, Michael J.; Bretherton, Christopher S.; Flanner, M. G.; Mitchell, David

    2012-05-21

    A modal aerosol module (MAM) has been developed for the Community Atmosphere Model version 5 (CAM5), the atmospheric component of the Community Earth System Model version 1 (CESM1). MAM is capable of simulating the aerosol size distribution and both internal and external mixing between aerosol components, treating numerous complicated aerosol processes and aerosol physical, chemical and optical properties in a physically based manner. Two MAM versions were developed: a more complete version with seven lognormal modes (MAM7), and a three-lognormal-mode version (MAM3) for the purpose of long-term (decades to centuries) simulations. Major approximations in MAM3 include assuming immediate mixing of primary organic matter (POM) and black carbon (BC) with other aerosol components, merging of the MAM7 fine dust and fine sea salt modes into the accumulation mode, merging of the MAM7 coarse dust and coarse sea salt modes into the single coarse mode, and neglecting the explicit treatment of ammonia and ammonium cycles. Simulated sulfate and secondary organic aerosol (SOA) mass concentrations are remarkably similar between MAM3 and MAM7 as most (~90%) of these aerosol species are in the accumulation mode. Differences of POM and BC concentrations between MAM3 and MAM7 are also small (mostly within 10%) because of the assumed hygroscopic nature of POM, so that freshly emitted POM and BC are wet-removed before mixing internally with soluble aerosol species. Sensitivity tests with the POM assumed to be hydrophobic and with a slower aging process increase the POM and BC concentrations, especially at high latitudes (by several times). The mineral dust global burden differs by 10% and the sea salt burden by 30-40% between MAM3 and MAM7, mainly due to the different size ranges of the dust and sea salt modes and the different standard deviations of the log-normal size distribution for the sea salt modes between MAM3 and MAM7. The model is able to qualitatively capture the observed geographical and
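    As background to the modal (lognormal) representation described above, the short sketch below evaluates the analytic moments of a single lognormal mode: for number concentration N, median diameter Dg and geometric standard deviation sigma_g, the k-th diameter moment is N · Dg^k · exp(k^2 ln^2(sigma_g)/2), and k = 3 (times π/6) gives the volume carried by the mode. This is a generic textbook relation, not MAM code, and the numerical values are illustrative only.

    # Background sketch (not CAM5/MAM code): moments of a lognormal aerosol mode.
    import math

    def lognormal_moment(n_tot, d_g, sigma_g, k):
        """k-th diameter moment of a lognormal mode (number n_tot, median d_g, GSD sigma_g)."""
        return n_tot * d_g**k * math.exp(0.5 * k**2 * math.log(sigma_g)**2)

    # Illustrative accumulation-like mode: 1e9 particles m^-3, Dg = 0.11 um, sigma_g = 1.8
    n, dg, sg = 1.0e9, 0.11e-6, 1.8
    volume = math.pi / 6.0 * lognormal_moment(n, dg, sg, 3)   # m^3 of aerosol per m^3 of air
    print(f"volume concentration ~ {volume:.2e} m3/m3")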

  6. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    Full Text Available The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and on the result of feature selection. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
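    A minimal sketch of the idea described above, assuming scikit-learn's RandomForestRegressor and permutation_importance. The synthetic data, model settings and stopping rule are placeholders rather than the authors' 243-feature STLF setup, and the backward search is simplified to dropping the least important feature at each step.

    # Illustrative sketch only (not the authors' code): permutation-importance-based
    # backward feature selection around a random forest.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=500, n_features=20, n_informative=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    features = list(range(X.shape[1]))
    best_err, best_feats = np.inf, features[:]

    while len(features) > 1:
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X_tr[:, features], y_tr)
        err = np.mean((rf.predict(X_te[:, features]) - y_te) ** 2)
        if err < best_err:
            best_err, best_feats = err, features[:]
        # permutation importance (PI) of each remaining feature on the held-out set
        pi = permutation_importance(rf, X_te[:, features], y_te,
                                    n_repeats=10, random_state=0)
        # drop the feature whose permutation hurts the score the least
        features.pop(int(np.argmin(pi.importances_mean)))

    print("selected features:", sorted(best_feats), "test MSE:", round(best_err, 2))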

  7. An innovative intermittent hypoxia model for cell cultures allowing fast Po2 oscillations with minimal gas consumption.

    Science.gov (United States)

    Minoves, Mélanie; Morand, Jessica; Perriot, Frédéric; Chatard, Morgane; Gonthier, Brigitte; Lemarié, Emeline; Menut, Jean-Baptiste; Polak, Jan; Pépin, Jean-Louis; Godin-Ribuot, Diane; Briançon-Marjollet, Anne

    2017-10-01

    Performing hypoxia-reoxygenation cycles in cell culture with a cycle duration accurately reflecting what occurs in obstructive sleep apnea (OSA) patients is a difficult but crucial technical challenge. Our goal was to develop a novel device to expose multiple cell culture dishes to intermittent hypoxia (IH) cycles relevant to OSA with limited gas consumption. With gas flows as low as 200 ml/min, our combination of plate holders with gas-permeable cultureware generates rapid normoxia-hypoxia cycles. Cycles alternating 1 min at 20% O2 followed by 1 min at 2% O2 resulted in Po2 values ranging from 124 to 44 mmHg. Extending the hypoxic and normoxic phases to 10 min allowed Po2 variations from 120 to 25 mmHg. The volume of culture medium or the presence of cells only modestly affected the Po2 variations. In contrast, the nadir of the hypoxia phase increased when measured at different heights above the membrane. We validated the physiological relevance of this model by showing that hypoxia-inducible factor-1α expression was significantly increased by IH exposure in human aortic endothelial cells, murine breast carcinoma (4T1) cells as well as in a blood-brain barrier model (2.5-, 1.5-, and 6-fold increases, respectively). In conclusion, we have established a new device to perform rapid intermittent hypoxia cycles in cell cultures, with minimal gas consumption and the possibility to expose several culture dishes simultaneously. This device will allow functional studies of the consequences of IH and deciphering of the molecular biology of IH at the cellular level using oxygen cycles that are clinically relevant to OSA. Copyright © 2017 the American Physiological Society.

  8. Analysis of the production of Higgs boson pairs at the one-loop level in the minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Philippov, Yu. P.

    2009-01-01

    Within the minimal supersymmetric standard model, the amplitudes and total cross sections for the processes e+e− → hh, e+e− → hH, e+e− → HH, and e+e− → AA are calculated in the first order of perturbation theory with allowance for a complete set of one-loop diagrams in the m_e → 0 approximation. Analytic expressions are obtained for the quantities under consideration; numerical results are presented in graphical form. It is shown that the cross section for the process e+e− → hh is larger than those for the other processes (and is of the same order of magnitude as the cross section for the corresponding processes in the Standard Model). For a collision energy of √s = 500 GeV, an integrated luminosity of ∫L ≥ 500 fb^−1, and longitudinally polarized e+e− beams, 520, 320, and 300 production events are possible in the processes e+e− → hh (at M_h = 115 GeV), e+e− → HH, and e+e− → AA (at M_{H,A} = 120 GeV), respectively. Even at M_{H,A} ∼ 500 GeV and √s = 1.5 TeV, not less than 200 events for each of the processes can be accumulated. The cross section for the process e+e− → hH is small (about 10^−2 fb), which significantly complicates the detection of the sought signal.
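    The quoted event counts follow from N = σ · ∫L dt; e.g., 520 events at ∫L = 500 fb^−1 corresponds to a cross section of roughly 1 fb. The tiny check below back-calculates the implied cross sections from the abstract's numbers; it is an arithmetic illustration, not a reproduction of the paper's calculation.

    # Simple cross-check of the quoted event counts: N = sigma * integrated luminosity.
    # The cross sections printed are back-calculated from the abstract, not taken
    # from the paper's tables.
    int_lumi = 500.0  # fb^-1
    for process, n_events in {"e+e- -> hh": 520, "e+e- -> HH": 320, "e+e- -> AA": 300}.items():
        sigma_fb = n_events / int_lumi
        print(f"{process}: ~{sigma_fb:.2f} fb implies {sigma_fb * int_lumi:.0f} events at 500 fb^-1")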

  9. Multivariate Multi-Scale Permutation Entropy for Complexity Analysis of Alzheimer’s Disease EEG

    Directory of Open Access Journals (Sweden)

    Isabella Palamara

    2012-07-01

    Full Text Available An original multivariate multi-scale methodology for assessing the complexity of physiological signals is proposed. The technique is able to incorporate the simultaneous analysis of multi-channel data as a unique block within a multi-scale framework. The basic complexity measure is Permutation Entropy, a methodology for time series processing based on ordinal analysis. Permutation Entropy is conceptually simple, structurally robust to noise and artifacts, and computationally very fast, which is relevant for designing portable diagnostics. Since time series derived from biological systems show structures on multiple spatial-temporal scales, the proposed technique can be useful for other types of biomedical signal analysis as well. In this work, the possibility of distinguishing the brain states of Alzheimer's disease patients and Mild Cognitive Impairment subjects from those of normal healthy elderly subjects is tested on a real, although quite limited, experimental database.
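    For readers unfamiliar with the quantifier, the sketch below computes single-channel permutation entropy from ordinal patterns together with the non-overlapping coarse-graining used in multi-scale analysis. It is a simplified illustration (the paper's method is multivariate), and the embedding parameters and test signals are arbitrary.

    # Sketch of the two ingredients named in the abstract: ordinal-pattern
    # (permutation) entropy and multi-scale coarse-graining, for one channel.
    from math import factorial, log
    import numpy as np

    def permutation_entropy(x, order=3, delay=1, normalize=True):
        """Shannon entropy of the ordinal-pattern distribution of series x."""
        x = np.asarray(x, dtype=float)
        n_windows = len(x) - (order - 1) * delay
        counts = {}
        for i in range(n_windows):
            pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
            counts[pattern] = counts.get(pattern, 0) + 1
        probs = np.array(list(counts.values()), dtype=float) / n_windows
        h = -np.sum(probs * np.log(probs))
        return h / log(factorial(order)) if normalize else h

    def coarse_grain(x, scale):
        """Non-overlapping averaging used in multi-scale analysis."""
        x = np.asarray(x, dtype=float)
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(5000)
    sine = np.sin(0.1 * np.arange(5000)) + 0.1 * rng.standard_normal(5000)
    for scale in (1, 2, 5):
        print(scale,
              round(permutation_entropy(coarse_grain(noise, scale)), 3),
              round(permutation_entropy(coarse_grain(sine, scale)), 3))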

  10. Analyzing Permutations for AES-like Ciphers: Understanding ShiftRows

    DEFF Research Database (Denmark)

    Beierle, Christof; Jovanovic, Philipp; Lauridsen, Martin Mehl

    2015-01-01

    Designing block ciphers and hash functions in a manner that resembles the AES in many aspects has been very popular since Rijndael was adopted as the Advanced Encryption Standard. However, in sharp contrast to the MixColumns operation, the security implications of the way the state is permuted...... by the operation resembling ShiftRows have never been studied in depth. Here, we provide the first structured study of the influence of ShiftRows-like operations, or more generally, word-wise permutations, in AES-like ciphers with respect to diffusion properties and resistance towards differential- and linear...... normal form. Using a mixed-integer linear programming approach, we obtain optimal parameters for a wide range of AES-like ciphers, and show improvements on parameters for Rijndael-192, Rijndael-256, PRIMATEs-80 and Prøst-128. As a separate result, we show for specific cases of the state geometry
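    As a toy illustration of why the choice of word-wise permutation matters for diffusion (and not a reproduction of the paper's MILP analysis), the sketch below propagates word-level dependencies through an AES-like 4×4 state for a given ShiftRows-like rotation vector and counts the rounds until full diffusion. Modelling MixColumns as a full column dependency and the round cap are simplifying assumptions.

    # Toy dependency-propagation check for AES-like 4x4 word states:
    # how many rounds until one starting word influences every word?
    def rounds_to_full_diffusion(rotations, rows=4, cols=4, start=(0, 0)):
        active = {start}                      # positions depending on the start word
        for rnd in range(1, 20):
            # ShiftRows-like step: row r rotated left by rotations[r]
            active = {(r, (c - rotations[r]) % cols) for (r, c) in active}
            # MixColumns-like step: an active word activates its whole column
            active_cols = {c for (_, c) in active}
            active = {(r, c) for r in range(rows) for c in active_cols}
            if len(active) == rows * cols:
                return rnd
        return None

    print("AES-style rotations (0,1,2,3):", rounds_to_full_diffusion((0, 1, 2, 3)))
    print("Identity permutation (0,0,0,0):", rounds_to_full_diffusion((0, 0, 0, 0)))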

  11. A permutation information theory tour through different interest rate maturities: the Libor case.

    Science.gov (United States)

    Bariviera, Aurelio Fernández; Guercio, María Belén; Martinez, Lisana B; Rosso, Osvaldo A

    2015-12-13

    This paper analyses Libor interest rates for seven different maturities, referring to operations in British pounds, euros, Swiss francs and Japanese yen, during the period 2001-2015. The analysis is performed by means of two quantifiers derived from information theory: the permutation Shannon entropy and the permutation Fisher information measure. An anomalous behaviour in the Libor is detected in all currencies except the euro during the years 2006-2012. The stochastic switch is more severe in the one-, two- and three-month maturities. Given the special mechanism of Libor setting, we conjecture that this behaviour could have been produced by the manipulation that was uncovered by financial authorities. We argue that our methodology is pertinent as a market-overseeing instrument. © 2015 The Author(s).

  12. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
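    A small, hedged illustration of the point about supported versus unsupported solutions: for a toy biobjective matrix permutation problem (the two criteria below are generic linear-arrangement-style indices, not those of the paper), all permutations of a 6×6 instance are enumerated, the Pareto-efficient points are extracted, and they are compared with the points reachable by minimizing weighted sums. Depending on the random instance, some efficient points may be unsupported and hence missed by the weighted-sum approach.

    # Toy illustration (not the authors' heuristic): exhaustive Pareto set versus
    # weighted-sum "supported" solutions for a small biobjective permutation problem.
    from itertools import permutations
    import numpy as np

    rng = np.random.default_rng(1)
    n = 6
    A = rng.random((n, n)); A = (A + A.T) / 2          # two symmetric proximity matrices
    B = rng.random((n, n)); B = (B + B.T) / 2

    def cost(perm, M):
        # linear-arrangement-style criterion: proximities weighted by positional distance
        return sum(abs(i - j) * M[perm[i], perm[j]] for i in range(n) for j in range(i + 1, n))

    points = [(cost(p, A), cost(p, B)) for p in permutations(range(n))]

    def is_dominated(x, pts):
        return any(q[0] <= x[0] and q[1] <= x[1] and q != x for q in pts)

    pareto = {x for x in points if not is_dominated(x, points)}

    supported = set()
    for lam in np.linspace(0, 1, 101):
        supported.add(min(points, key=lambda x: lam * x[0] + (1 - lam) * x[1]))

    print(f"Pareto-efficient points: {len(pareto)}, supported by weighted sums: {len(supported & pareto)}")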

  13. Rolling Bearing Fault Diagnosis Based on ELCD Permutation Entropy and RVM

    Directory of Open Access Journals (Sweden)

    Jiang Xingmeng

    2016-01-01

    Full Text Available Aiming at the nonstationary characteristics of gear fault vibration signals, a recognition method based on the permutation entropy of ensemble local characteristic-scale decomposition (ELCD) and a relevance vector machine (RVM) is proposed. First, the vibration signal was decomposed by ELCD; a series of intrinsic scale components (ISCs) was then obtained. Second, according to the kurtosis of the ISCs, the principal ISCs were selected, their permutation entropy was calculated, and the values were combined into a feature vector. Finally, the feature vectors were input to an RVM classifier for training and testing to identify the type of rolling bearing fault. Experimental results show that this method can effectively diagnose four kinds of working conditions, and its performance is better than that of the local characteristic-scale decomposition (LCD) method.

  14. Index of French nuclear literature: IBM 360 programmes for preparing the permuted index of French titles

    International Nuclear Information System (INIS)

    Chonez, Nicole

    1968-12-01

    This report contains the assembly listing, the flow chart and some comments for each of the IBM 360 assembler-language programmes used to prepare one of the subject indexes contained in the bibliographical bulletin 'Index de la Litterature nucleaire francaise', which has been produced by the French C.E.A. since 1968. Only the processing phases are considered here, from the magnetic-tape file of bibliographical references (assumed correct) to the printing of the permuted index obtained from the French titles of the documents on the tape. This permuted index has the peculiarity of automatically regrouping synonyms and certain grammatical variations of the words. (author) [fr
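    The original programmes are IBM 360 assembler and are listed in the report itself; purely as a modern illustration of what a permuted (KWIC-style) title index does, the sketch below rotates each title around every significant word and sorts the rotations. The stopword list and formatting are placeholders, and the report's synonym and grammatical-variant grouping is not reproduced.

    # Modern re-creation in Python of a simple permuted (KWIC-style) title index.
    STOPWORDS = {"de", "la", "le", "les", "des", "du", "et", "en", "pour", "sur", "a", "à"}

    def permuted_index(titles):
        entries = []
        for title in titles:
            words = title.split()
            for i, word in enumerate(words):
                if word.lower() in STOPWORDS:
                    continue
                # rotate the title so the keyword comes first, keep the wrapped tail after "/"
                rotated = " ".join(words[i:] + ["/"] + words[:i])
                entries.append((word.lower(), rotated))
        return sorted(entries)

    titles = [
        "Index de la litterature nucleaire francaise",
        "Etude des reacteurs a eau lourde",
    ]
    for key, rotated in permuted_index(titles):
        print(f"{key:<12} {rotated}")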

  15. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  16. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  17. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  18. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.; Spoto, F.; Scollo, Giuseppe; Nijholt, Antinus

    2003-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n\geq 1}$, satisfying $L(G_n)=L_n$ for $n\geq 1$, with
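    A hedged illustration of the object being studied (one simple Chomsky-normal-form grammar for $L_3$, not necessarily a member of the families analysed in these papers): the grammar below generates the $3! = 6$ permutations of $\{a, b, c\}$, and a brute-force enumeration checks that its language is exactly $L_3$.

    # Illustrative CNF grammar for L_3 (permutations of a, b, c) with a brute-force check.
    from itertools import permutations

    CNF = {
        "S":   [("A", "Tbc"), ("B", "Tac"), ("C", "Tab")],
        "Tbc": [("B", "C"), ("C", "B")],
        "Tac": [("A", "C"), ("C", "A")],
        "Tab": [("A", "B"), ("B", "A")],
        "A": ["a"], "B": ["b"], "C": ["c"],
    }

    def generate(symbol, max_len):
        """All terminal strings of length <= max_len derivable from `symbol`."""
        results = set()
        for rhs in CNF[symbol]:
            if isinstance(rhs, str):            # terminal rule, e.g. A -> a
                if max_len >= 1:
                    results.add(rhs)
            else:                               # binary rule X -> Y Z
                if max_len < 2:
                    continue
                for left in generate(rhs[0], max_len - 1):
                    for right in generate(rhs[1], max_len - 1):
                        if len(left) + len(right) <= max_len:
                            results.add(left + right)
        return results

    L3 = {"".join(p) for p in permutations("abc")}
    print(generate("S", 3) == L3)   # True: the grammar generates exactly L_3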

  19. Generating all permutations by context-free grammars in Chomsky normal form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2006-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n\geq 1}$, satisfying $L(G_n)=L_n$ for $n\geq 1$, with

  20. Generating All Permutations by Context-Free Grammars in Chomsky Normal Form

    NARCIS (Netherlands)

    Asveld, P.R.J.

    2004-01-01

    Let $L_n$ be the finite language of all $n!$ strings that are permutations of $n$ different symbols ($n\geq 1$). We consider context-free grammars $G_n$ in Chomsky normal form that generate $L_n$. In particular we study a few families $\{G_n\}_{n\geq 1}$, satisfying $L(G_n)=L_n$ for $n\geq 1$, with