WorldWideScience

Sample records for deterministic annealing variant

  1. Enhanced piecewise regression based on deterministic annealing

    Institute of Scientific and Technical Information of China (English)

    ZHANG JiangShe; YANG YuQian; CHEN XiaoWen; ZHOU ChengHu

    2008-01-01

    Regression is one of the important problems in statistical learning theory. This paper proves the global convergence of the piecewise regression algorithm based on deterministic annealing and the continuity of the global minimum of free energy w.r.t. temperature, and derives a new simplified formula to compute the initial critical temperature. A new enhanced piecewise regression algorithm using "migration of prototypes" is proposed to eliminate "empty cells" in the annealing process. Numerical experiments on several benchmark datasets show that the new algorithm can remove redundancy and improve generalization of the piecewise regression model.

  2. A Deterministic Annealing Approach to Clustering AIRS Data

    Science.gov (United States)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the Deterministic Annealing technique.

  3. Reduced-Complexity Deterministic Annealing for Vector Quantizer Design

    Directory of Open Access Journals (Sweden)

    Ortega Antonio

    2005-01-01

    Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
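
    The standard DA scheme this record builds on can be sketched compactly. The toy below uses the full Gibbs distribution (the very computation whose cost the paper's simplified soft measures aim to reduce); the fixed geometric cooling schedule and all parameter values are illustrative assumptions, not the paper's derived schedule:

```python
import numpy as np

def da_codebook(data, k, t_init=10.0, t_min=1e-3, alpha=0.9, seed=0):
    """Toy deterministic-annealing codebook design.

    At each temperature T the association of data vector x with codevector
    y_j follows the Gibbs distribution p(j|x) ~ exp(-||x - y_j||^2 / T);
    codevectors are then moved to the probability-weighted centroids, and
    T is lowered geometrically (a fixed, illustrative schedule).
    """
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    t = t_init
    while t > t_min:
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / t)   # Gibbs weights
        p = w / w.sum(axis=1, keepdims=True)                    # p(j|x)
        codebook = (p.T @ data) / (p.sum(axis=0)[:, None] + 1e-12)
        t *= alpha
    return codebook
```

    As T falls, the soft assignments harden into nearest-neighbour ones, so the final updates coincide with generalized Lloyd steps; starting hot is what lets the method evade poor local minima.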

  4. A deterministic annealing algorithm for a combinatorial optimization problem using replicator equations

    Science.gov (United States)

    Tsuchiya, Kazuo; Nishiyama, Takehiro; Tsujita, Katsuyoshi

    2001-02-01

    We have proposed an optimization method for a combinatorial optimization problem using replicator equations. To improve the solution further, a deterministic annealing algorithm may be applied. During the annealing process, bifurcations of equilibrium solutions will occur and affect the performance of the deterministic annealing algorithm. In this paper, the bifurcation structure of the proposed model is analyzed in detail. It is shown that only pitchfork bifurcations occur in the annealing process, and the solution obtained by the annealing is the branch uniquely connected with the uniform solution. It is also shown experimentally that in many cases, this solution corresponds to a good approximate solution of the optimization problem. Based on the results, a deterministic annealing algorithm is proposed and applied to the quadratic assignment problem to verify its performance.

  5. A deterministic annealing algorithm for the pre- and end-haulage of intermodal container terminals

    OpenAIRE

    CARIS, An; Janssens, Gerrit

    2010-01-01

    The drayage of containers in the service area of an intermodal barge terminal is modelled as a full truckload pickup and delivery problem with time windows (FTPDPTW). Initial solutions are generated with an insertion heuristic and improved with three local search operators. In a post-optimization phase the three search operators are integrated in a deterministic annealing (DA) framework. The mechanism of the heuristic procedures is demonstrated with a numerical example. A sensitivity analysis...

  6. Deterministic Annealing Optimization for Witsenhausen's and Related Decentralized Stochastic Control Problems

    OpenAIRE

    Mehmetoglu, Mustafa; Akyol, Emrah; Rose, Kenneth

    2016-01-01

    This note studies the global optimization of controller mappings in discrete-time stochastic control problems including Witsenhausen's celebrated 1968 counter-example. We propose a generally applicable non-convex numerical optimization method based on the concept of deterministic annealing, which is derived from information-theoretic principles and was successfully employed in several problems including vector quantization, classification, and regression. We present comparative numerical resul...

  7. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    Science.gov (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. The DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  9. Analysis of Trivium by a Simulated Annealing variant

    DEFF Research Database (Denmark)

    Borghoff, Julia; Knudsen, Lars Ramkilde; Matusiewicz, Krystian

    2010-01-01

    A characteristic of equation systems that may be efficiently solvable by means of such algorithms is provided. As an example, we investigate equation systems induced by the problem of recovering the internal state of the stream cipher Trivium. We propose an improved variant of the simulated annealing method...

  10. Comparing of the Deterministic Simulated Annealing Methods for Quadratic Assignment Problem

    Directory of Open Access Journals (Sweden)

    Mehmet Güray ÜNSAL

    2013-08-01

    Full Text Available In this study, the Threshold Accepting and Record-to-Record Travel methods, two deterministic variants of the meta-heuristic Simulated Annealing, are applied to the Quadratic Assignment Problem and statistically analyzed for significant differences in their objective function values and CPU times. No significant difference is found between the two algorithms in terms of CPU time or objective function values. Consequently, on the basis of the Quadratic Assignment Problem, the two algorithms compared in this study show the same performance with respect to CPU time and objective function values.
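
    Both methods differ from classical Simulated Annealing only in their acceptance rule, which is deterministic given the candidate. A minimal sketch of the two criteria on a generic objective (the thresholds, deviation, and iteration counts are illustrative assumptions):

```python
import random

def threshold_accepting(cost, x0, neighbor, thresholds, iters=200, seed=0):
    """Threshold Accepting: deterministically accept any move that worsens
    the objective by less than the current threshold; the threshold
    sequence shrinks toward zero, playing the role of a cooling schedule."""
    rng = random.Random(seed)
    x = x0
    for th in thresholds:
        for _ in range(iters):
            y = neighbor(x, rng)
            if cost(y) - cost(x) < th:
                x = y
    return x

def record_to_record(cost, x0, neighbor, deviation, iters=2000, seed=0):
    """Record-to-Record Travel: accept any move whose cost stays within a
    fixed deviation of the best ('record') cost found so far."""
    rng = random.Random(seed)
    x, best = x0, x0
    record = cost(x0)
    for _ in range(iters):
        y = neighbor(x, rng)
        if cost(y) < record + deviation:
            x = y
            if cost(y) < record:
                record, best = cost(y), y
    return best
```

    Neither rule draws a random acceptance probability, which is why these SA variants are often called deterministic simulated annealing.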

  11. Bayesian system identification of a nonlinear dynamical system using a novel variant of Simulated Annealing

    Science.gov (United States)

    Green, P. L.

    2015-02-01

    This work details the Bayesian identification of a nonlinear dynamical system using a novel MCMC algorithm: 'Data Annealing'. Data Annealing is similar to Simulated Annealing in that it allows the Markov chain to easily clear 'local traps' in the target distribution. To achieve this, training data is fed into the likelihood such that its influence over the posterior is introduced gradually - this allows the annealing procedure to be conducted with reduced computational expense. Additionally, Data Annealing uses a proposal distribution which allows it to conduct a local search accompanied by occasional long jumps, reducing the chance that it will become stuck in local traps. Here it is used to identify an experimental nonlinear system. The resulting Markov chains are used to approximate the covariance matrices of the parameters in a set of competing models before the issue of model selection is tackled using the Deviance Information Criterion.

  12. Re-fraction: a machine learning approach for deterministic identification of protein homologues and splice variants in large-scale MS-based proteomics.

    Science.gov (United States)

    Yang, Pengyi; Humphrey, Sean J; Fazakerley, Daniel J; Prior, Matthew J; Yang, Guang; James, David E; Yang, Jean Yee-Hwa

    2012-05-04

    A key step in the analysis of mass spectrometry (MS)-based proteomics data is the inference of proteins from identified peptide sequences. Here we describe Re-Fraction, a novel machine learning algorithm that enhances deterministic protein identification. Re-Fraction utilizes several protein physical properties to assign proteins to expected protein fractions that comprise large-scale MS-based proteomics data. This information is then used to appropriately assign peptides to specific proteins. This approach is sensitive, highly specific, and computationally efficient. We provide algorithms and source code for the current version of Re-Fraction, which accepts output tables from the MaxQuant environment. Nevertheless, the principles behind Re-Fraction can be applied to other protein identification pipelines where data are generated from samples fractionated at the protein level. We demonstrate the utility of this approach through reanalysis of data from a previously published study and generate lists of proteins deterministically identified by Re-Fraction that were previously only identified as members of a protein group. We find that this approach is particularly useful in resolving protein groups composed of splice variants and homologues, which are frequently expressed in a cell- or tissue-specific manner and may have important biological consequences.

  13. Deterministic indexing for packed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye

    2017-01-01

    Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time...... or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ......, we show how to preprocess S in O(n) (deterministic) time and space O(n) such that given a packed pattern string of length m we can support queries in (deterministic) time O(m/α + log m + log log σ), where α = w/log σ is the number of characters packed in a word of size w = θ(log n). Our query time...

  14. Variable neighbourhood simulated annealing algorithm for capacitated vehicle routing problems

    Science.gov (United States)

    Xiao, Yiyong; Zhao, Qiuhong; Kaku, Ikou; Mladenovic, Nenad

    2014-04-01

    This article presents the variable neighbourhood simulated annealing (VNSA) algorithm, a variant of the variable neighbourhood search (VNS) combined with simulated annealing (SA), for efficiently solving capacitated vehicle routing problems (CVRPs). In the new algorithm, the deterministic 'Move or not' criterion of the original VNS algorithm regarding the incumbent replacement is replaced by an SA probability, and the neighbourhood shifting of the original VNS (from near to far by k ← k+1) is replaced by a neighbourhood shaking procedure following a specified rule. The geographical neighbourhood structure is introduced in constructing the neighbourhood structures for the CVRP of the string model. The proposed algorithm is tested against 39 well-known benchmark CVRP instances of different scales (small/middle, large, very large). The results show that the VNSA algorithm outperforms most existing algorithms in terms of computational effectiveness and efficiency, showing good performance in solving large and very large CVRPs.
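
    The two modifications described, SA-probability acceptance and randomized neighbourhood shaking, can be sketched generically (the toy shaking operator and all parameters below are illustrative assumptions, not the paper's CVRP operators, and the local-search step is omitted):

```python
import math
import random

def vnsa(cost, x0, shake, k_max, t0=1.0, cooling=0.97, rounds=200, seed=0):
    """Skeleton of variable neighbourhood search with SA-style acceptance:
    the incumbent is replaced by a shaken candidate either when it is
    better or, unlike plain VNS's deterministic 'Move or not' test, with
    probability exp(-delta/T).  The neighbourhood index k is drawn at
    random each round (shaking) instead of the usual k <- k+1 sweep."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(rounds):
        k = rng.randint(1, k_max)
        y = shake(x, k, rng)                 # candidate from the k-th neighbourhood
        delta = cost(y) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = y                            # SA acceptance, not 'Move or not'
        if cost(x) < cost(best):
            best = x
        t *= cooling                         # geometric cooling
    return best
```

    In a CVRP setting `x` would be a set of routes and `shake` one of the paper's geographical-neighbourhood operators; here a continuous toy objective suffices to show the control flow.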

  15. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2008-01-01

    We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical...... games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem....

  16. Uniform deterministic dictionaries

    DEFF Research Database (Denmark)

    Ruzic, Milan

    2008-01-01

    We present a new analysis of the well-known family of multiplicative hash functions, and improved deterministic algorithms for selecting “good” hash functions. The main motivation is realization of deterministic dictionaries with fast lookups and reasonably fast updates. The model of computation...

  17. Deterministic Walks with Choice

    Energy Technology Data Exchange (ETDEWEB)

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
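
    The 'deterministic random walks' that motivate this work are commonly formalized as the rotor-router model; a minimal sketch on a toroidal grid follows (this illustrates only the motivating model, not the paper's choice-and-memory mechanism, and the N-E-S-W rotor order is an arbitrary assumption):

```python
def rotor_router_walk(n, steps, start=(0, 0)):
    """Rotor-router walk on an n-by-n torus: each node cycles through its
    four outgoing directions (N, E, S, W) in fixed order, sending the token
    along the next direction on every visit.  The walk is fully
    deterministic, yet its visit counts approximate a random walk's."""
    dirs = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # N, E, S, W
    rotor = {}                                  # per-node pointer into dirs
    visits = {}
    x, y = start
    for _ in range(steps):
        visits[(x, y)] = visits.get((x, y), 0) + 1
        r = rotor.get((x, y), 0)
        rotor[(x, y)] = (r + 1) % 4             # advance this node's rotor
        dx, dy = dirs[r]
        x, y = (x + dx) % n, (y + dy) % n       # wrap around the torus
    return visits
```

    Running the walk twice yields identical visit counts, the defining contrast with a stochastic token-passing scheme.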

  18. Deterministic Discrepancy Minimization

    NARCIS (Netherlands)

    Bansal, N.; Spencer, J.

    2013-01-01

    We derandomize a recent algorithmic approach due to Bansal (Foundations of Computer Science, FOCS, pp. 3–10, 2010) to efficiently compute low-discrepancy colorings for several problems, for which only existential results were previously known. In particular, we give an efficient deterministic algorithm...

  19. Spurious deterministic seasonality

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); S. Hylleberg; H.S. Lee (Hahn)

    1995-01-01

    It is sometimes assumed that the R2 of a regression of a first-order differenced time series on seasonal dummy variables reflects the amount of seasonal fluctuation that can be explained by deterministic variation in the series. In this paper we show that neglecting the presence of seasonal...

  20. Enhancement of cooperation in the spatial prisoner's dilemma with a coherence-resonance effect through annealed randomness at a cooperator-defector boundary; comparison of two variant models

    Science.gov (United States)

    Tanimoto, Jun

    2016-11-01

    Inspired by the commonly observed real-world fact that people tend to behave in a somewhat random manner after facing interim equilibrium to break a stalemate situation whilst seeking a higher output, we established two models of the spatial prisoner's dilemma. One presumes that an agent commits action errors, while the other assumes that an agent refers to a payoff matrix with an added random noise instead of an original payoff matrix. A numerical simulation revealed that mechanisms based on the annealing of randomness due to either the action error or the payoff noise could significantly enhance the cooperation fraction. In this study, we explain the detailed enhancement mechanism behind the two models by referring to the concepts that we previously presented with respect to evolutionary dynamic processes under the names of enduring and expanding periods.

  1. Deterministic Global Optimization

    CERN Document Server

    Scholz, Daniel

    2012-01-01

    This monograph deals with a general class of solution approaches in deterministic global optimization, namely the geometric branch-and-bound methods which are popular algorithms, for instance, in Lipschitzian optimization, d.c. programming, and interval analysis. It also introduces a new concept for the rate of convergence and analyzes several bounding operations reported in the literature, from the theoretical as well as from the empirical point of view. Furthermore, extensions of the prototype algorithm for multicriteria global optimization problems as well as mixed combinatorial optimization

  2. Generalized Deterministic Traffic Rules

    CERN Document Server

    Fuks, H; Fuks, Henryk; Boccara, Nino

    1997-01-01

    We study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parametrized by the speed limit $m$ and another parameter $k$ that represents a ``degree of aggressiveness'' in driving, strictly related to the distance between two consecutive cars. We compare two driving strategies with identical maximum throughput: ``conservative'' driving with high speed limit and ``aggressive'' driving with low speed limit. Those two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of generalized traffic rules and examine limitations of maximum achievable throughput. Possible modifications of the model are considered.

  3. Schemes for Deterministic Polynomial Factoring

    CERN Document Server

    Ivanyos, Gábor; Saxena, Nitin

    2008-01-01

    In this work we relate the deterministic complexity of factoring polynomials (over finite fields) to certain combinatorial objects we call m-schemes. We extend the known conditional deterministic subexponential time polynomial factoring algorithm for finite fields to get an underlying m-scheme. We demonstrate how the properties of m-schemes relate to improvements in the deterministic complexity of factoring polynomials over finite fields assuming the generalized Riemann Hypothesis (GRH). In particular, we give the first deterministic polynomial time algorithm (assuming GRH) to find a nontrivial factor of a polynomial of prime degree n where (n-1) is a smooth number.

  4. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2012-01-01

    Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them......, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...... for finding optimal strategies in such games. The existence of a linear time comparison-based algorithm remains an open problem....

  6. Inferring deterministic causal relations

    CERN Document Server

    Daniusis, Povilas; Mooij, Joris; Zscheischler, Jakob; Steudel, Bastian; Zhang, Kun; Schoelkopf, Bernhard

    2012-01-01

    We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function. We provide a theoretical analysis of this method, showing that it also works in the low noise regime, and link it to information geometry. We report strong empirical results on various real-world data sets from different domains.

  7. Deterministic behavioural models for concurrency

    DEFF Research Database (Denmark)

    Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn

    1993-01-01

    This paper offers three candidates for a deterministic, noninterleaving, behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled...

  8. Submicroscopic Deterministic Quantum Mechanics

    CERN Document Server

    Krasnoholovets, V

    2002-01-01

    So-called hidden variables introduced in quantum mechanics by de Broglie and Bohm have changed their initial enigmatic meanings and acquired quite reasonable outlines of real and measurable characteristics. The starting viewpoint was the following: All the phenomena, which we observe in the quantum world, should reflect structural properties of the real space. Thus the scale 10^{-28} cm at which three fundamental interactions (electromagnetic, weak, and strong) intersect has been treated as the size of a building block of the space. The appearance of a massive particle is associated with a local deformation of the cellular space, i.e. deformation of a cell. The mechanics of a moving particle that has been constructed is deterministic by its nature and shows that the particle interacts with cells of the space creating elementary excitations called "inertons". The further study has disclosed that inertons are a substructure of the matter waves which are described by the orthodox wave \psi-function formalism. The c...

  9. Modeling of deterministic chaotic systems

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Y. [Department of Physics and Astronomy and Department of Mathematics, The University of Kansas, Lawrence, Kansas 66045 (United States); Grebogi, C. [Institute for Plasma Research, University of Maryland, College Park, Maryland 20742 (United States); Grebogi, C.; Kurths, J. [Department of Physics and Astrophysics, Universitaet Potsdam, Postfach 601553, D-14415 Potsdam (Germany)

    1999-03-01

    The success of deterministic modeling of a physical system relies on whether the solution of the model would approximate the dynamics of the actual system. When the system is chaotic, situations can arise where periodic orbits embedded in the chaotic set have distinct number of unstable directions and, as a consequence, no model of the system produces reasonably long trajectories that are realized by nature. We argue and present physical examples indicating that, in such a case, though the model is deterministic and low dimensional, statistical quantities can still be reliably computed. © 1999 The American Physical Society.

  10. Interference Decoding for Deterministic Channels

    CERN Document Server

    Bandemer, Bernd

    2010-01-01

    An inner bound to the capacity region of a class of three user pair deterministic interference channels is presented. The key idea is to simultaneously decode the combined interference signal and the intended message at each receiver. It is shown that this interference decoding inner bound is strictly larger than the inner bound obtained by treating interference as noise, which includes interference alignment for deterministic channels. The gain comes from judicious analysis of the number of combined interference sequences in different regimes of input distributions and message rates.

  11. Deterministic joint remote state preparation

    Energy Technology Data Exchange (ETDEWEB)

    An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn [Center for Theoretical Physics, Institute of Physics, 10 Dao Tan, Ba Dinh, Hanoi (Viet Nam); Bich, Cao Thi [Center for Theoretical Physics, Institute of Physics, 10 Dao Tan, Ba Dinh, Hanoi (Viet Nam); Physics Department, University of Education No. 1, 136 Xuan Thuy, Cau Giay, Hanoi (Viet Nam); Don, Nung Van [Center for Theoretical Physics, Institute of Physics, 10 Dao Tan, Ba Dinh, Hanoi (Viet Nam); Physics Department, Hanoi National University, 334 Nguyen Trai, Thanh Xuan, Hanoi (Viet Nam)

    2011-09-26

    We put forward a new nontrivial three-step strategy to execute joint remote state preparation via Einstein-Podolsky-Rosen pairs deterministically. At variance with all existing protocols, in ours the receiver contributes actively in both preparation and reconstruction steps, although he knows nothing about the quantum state to be prepared. -- Highlights: → Deterministic joint remote state preparation via EPR pairs is proposed. → Both general single- and two-qubit states are studied. → Differently from all existing protocols, in ours the receiver participates actively. → This is the first time such a strategy has been adopted.

  12. Height-Deterministic Pushdown Automata

    DEFF Research Database (Denmark)

    Nowotka, Dirk; Srba, Jiri

    2007-01-01

    of regular languages and still closed under boolean language operations, are considered. Several of such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...

  13. Deterministic extraction from weak random sources

    CERN Document Server

    Gabizon, Ariel

    2011-01-01

    In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.

  14. Analysis of FBC deterministic chaos

    Energy Technology Data Exchange (ETDEWEB)

    Daw, C.S.

    1996-06-01

    It has recently been discovered that the performance of a number of fossil energy conversion devices such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines are affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes are expected to provide important competitive advantages for U.S. industry.

  15. Interference Alignment Using Variational Mean Field Annealing

    DEFF Research Database (Denmark)

    Badiu, Mihai Alin; Guillaud, Maxime; Fleury, Bernard Henri

    2014-01-01

    We study the problem of interference alignment in the multiple-input multiple-output interference channel. Aiming at minimizing the interference leakage power relative to the receiver noise level, we use the deterministic annealing approach to solve the optimization problem. In the corresponding...... for interference alignment. We also show that the iterative leakage minimization algorithm by Gomadam et al. and the alternating minimization algorithm by Peters and Heath, Jr. are instances of our method. Finally, we assess the performance of the proposed algorithm through computer simulations....

  16. Deterministic Circular Self Test Path

    Institute of Scientific and Technical Information of China (English)

    WEN Ke; HU Yu; LI Xiaowei

    2007-01-01

    Circular self test path (CSTP) is an attractive technique for testing digital integrated circuits (ICs) in the nanometer era, because it can easily provide at-speed test with small test data volume and short test application time. However, CSTP cannot reliably attain high fault coverage because of the difficulty of testing random-pattern-resistant faults. This paper presents a deterministic CSTP (DCSTP) structure that consists of a DCSTP chain and jumping logic, to attain high fault coverage with low area overhead. Experimental results on ISCAS'89 benchmarks show that 100% fault coverage can be obtained with low area overhead and CPU time, especially for large circuits.

  17. A deterministic width function model

    Directory of Open Access Journals (Sweden)

    C. E. Puente

    2003-01-01

    Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.

  18. Survivability of Deterministic Dynamical Systems

    Science.gov (United States)

    Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen

    2016-07-01

    The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures.
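
The survivability defined above lends itself to a straightforward Monte Carlo estimate. The sketch below is a generic illustration, not the paper's semi-analytic bound; the damped linear map, the desirable region, and the sampling range for initial conditions are all hypothetical choices.

```python
import random

def survivability(step, desirable, sample_x0, t_max, n_samples=1000):
    """Monte Carlo estimate of survivability: the fraction of random
    initial conditions whose transient stays inside the desirable
    region for t_max steps of the deterministic map `step`."""
    survived = 0
    for _ in range(n_samples):
        x = sample_x0()
        ok = True
        for _ in range(t_max):
            x = step(x)
            if not desirable(x):
                ok = False
                break
        if ok:
            survived += 1
    return survived / n_samples

# Toy system: damped map x -> 0.9 x, desirable region |x| < 1.5,
# initial conditions uniform on [-2, 2] (all values are illustrative).
random.seed(0)
p = survivability(step=lambda x: 0.9 * x,
                  desirable=lambda x: abs(x) < 1.5,
                  sample_x0=lambda x=None: random.uniform(-2.0, 2.0),
                  t_max=50)
```

For this toy case the exact answer is 5/6 ≈ 0.83 (survival requires 0.9·|x0| < 1.5), so the estimate can be sanity-checked directly.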

  19. The Deterministic Dendritic Cell Algorithm

    CERN Document Server

    Greensmith, Julie

    2010-01-01

    The Dendritic Cell Algorithm is an immune-inspired algorithm originally based on the function of natural dendritic cells. The original instantiation of the algorithm is highly stochastic. While the performance of the algorithm is good when applied to large real-time datasets, it is difficult to analyse due to the number of random-based elements. In this paper a deterministic version of the algorithm is proposed, implemented and tested using a port scan dataset to provide a controllable system. This version consists of a controllable number of parameters, which are experimented with in this paper. In addition, the effects of using time windows and of varying the number of cells are examined, both of which are shown to influence the algorithm. Finally, a novel metric for the assessment of the algorithm's output is introduced and proves to be a more sensitive metric than the one used with the original Dendritic Cell Algorithm.

  20. Deterministic Tripartite Controlled Remote State Preparation

    Science.gov (United States)

    Sang, Ming-huang; Nie, Yi-you

    2017-07-01

    We demonstrate that a seven-qubit entangled state can be used to realize the deterministic tripartite controlled remote state preparation by performing only Pauli operations and single-qubit measurements. In our scheme, three distant senders can simultaneously and deterministically exchange their quantum state with the other senders under the control of the supervisor.

  1. Piecewise deterministic Markov processes : an analytic approach

    NARCIS (Netherlands)

    Alkurdi, Taleb Salameh Odeh

    2013-01-01

    The subject of this thesis, piecewise deterministic Markov processes, an analytic approach, is on the border between analysis and probability theory. Such processes can either be viewed as random perturbations of deterministic dynamical systems in an impulsive fashion, or as a particular kind of

  2. Single Ion Implantation and Deterministic Doping

    Energy Technology Data Exchange (ETDEWEB)

    Schenkel, Thomas

    2010-06-11

    The presence of single atoms, e.g. dopant atoms, in sub-100 nm scale electronic devices can affect the device characteristics, such as the threshold voltage of transistors, or the sub-threshold currents. Fluctuations of the number of dopant atoms thus pose a complication for transistor scaling. In a complementary view, new opportunities emerge when novel functionality can be implemented in devices deterministically doped with single atoms. The grand prize of the latter might be a large scale quantum computer, where quantum bits (qubits) are encoded e.g. in the spin states of electrons and nuclei of single dopant atoms in silicon, or in color centers in diamond. Both the possible detrimental effects of dopant fluctuations and single atom device ideas motivate the development of reliable single atom doping techniques, which are the subject of this chapter. Single atom doping can be approached with top down and bottom up techniques. Top down refers to the placement of dopant atoms into a more or less structured matrix environment, like a transistor in silicon. Bottom up refers to approaches that introduce single dopant atoms during the growth of the host matrix, e.g. by directed self-assembly and scanning probe assisted lithography. Bottom up approaches are discussed in Chapter XYZ. Since the late 1960's, ion implantation has been a widely used technique to introduce dopant atoms into silicon and other materials in order to modify their electronic properties. It works particularly well in silicon since the damage to the crystal lattice that is induced by ion implantation can be repaired by thermal annealing. In addition, the introduced dopant atoms can be incorporated with high efficiency into lattice positions in the silicon host crystal, which makes them electrically active. This is not the case for e.g. diamond, which makes ion implantation doping to engineer the electrical properties of diamond, especially for n-type doping, much harder than for silicon. Ion

  3. Deterministic patterns in cell motility

    Science.gov (United States)

    Lavi, Ido; Piel, Matthieu; Lennon-Duménil, Ana-Maria; Voituriez, Raphaël; Gov, Nir S.

    2016-12-01

    Cell migration paths are generally described as random walks, associated with both intrinsic and extrinsic noise. However, complex cell locomotion is not merely related to such fluctuations, but is often determined by the underlying machinery. Cell motility is driven mechanically by actin and myosin, two molecular components that generate contractile forces. Other cell functions make use of the same components and, therefore, will compete with the migratory apparatus. Here, we propose a physical model of such a competitive system, namely dendritic cells whose antigen capture function and migratory ability are coupled by myosin II. The model predicts that this coupling gives rise to a dynamic instability, whereby cells switch from persistent migration to unidirectional self-oscillation, through a Hopf bifurcation. Cells can then switch to periodic polarity reversals through a homoclinic bifurcation. These predicted dynamic regimes are characterized by robust features that we identify through in vitro trajectories of dendritic cells over long timescales and distances. We expect that competition for limited resources in other migrating cell types can lead to similar deterministic migration modes.

  4. Accomplishing Deterministic XML Query Optimization

    Institute of Scientific and Technical Information of China (English)

    Dun-Ren Che

    2005-01-01

    As the popularity of XML (eXtensible Markup Language) keeps growing rapidly, the management of XML compliant structured-document databases has become a very interesting and compelling research area. Query optimization for XML structured-documents stands out as one of the most challenging research issues in this area because of the much enlarged optimization (search) space, which is a consequence of the intrinsic complexity of the underlying data model of XML data. We therefore propose to apply deterministic transformations on query expressions to aggressively prune the search space and quickly achieve a sufficiently improved alternative (if not the optimal) for each incoming query expression. This idea is not just exciting but practically attainable. This paper first provides an overview of our optimization strategy, and then focuses on the key implementation issues of our rule-based transformation system for XML query optimization in a database environment. The performance results we obtained from experimentation show that our approach is a valid and effective one.

  5. Deterministic quantitative risk assessment development

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)

    2009-07-01

    Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model based methodology. This approach has been found to be very flexible and provide useful results for identifying high risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)

  6. A Deterministic and Polynomial Modified Perceptron Algorithm

    Directory of Open Access Journals (Sweden)

    Olof Barr

    2006-01-01

    Full Text Available We construct a modified perceptron algorithm that is deterministic, polynomial and also as fast as previously known algorithms. The algorithm runs in time O(mn³ log n log(1/ρ)), where m is the number of examples, n the number of dimensions and ρ is approximately the size of the margin. We also construct a non-deterministic modified perceptron algorithm running in time O(mn² log n log(1/ρ)).
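
The paper's deterministic modification is not spelled out in the abstract, so the sketch below shows only the classical (Rosenblatt) perceptron update that such variants build on; the toy dataset and epoch cap are invented for illustration.

```python
def perceptron(X, y, max_epochs=100):
    """Classic perceptron: add y_i * x_i for any misclassified example,
    sweeping the data until a separating weight vector is found."""
    w = [0.0] * len(X[0])
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            if margin <= 0:                      # wrong side or on boundary
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                updated = True
        if not updated:                          # every example correct: done
            return w
    return w

# Linearly separable toy data (hypothetical)
X = [(2.0, 1.0), (1.5, 2.0), (-1.0, -2.0), (-2.0, -0.5)]
y = [1, 1, -1, -1]
w = perceptron(X, y)
```

The classical analysis bounds the number of updates in terms of 1/ρ², where ρ is the margin; the paper's contribution is a variant whose total running time is deterministic and polynomial.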

  7. Optimal Deterministic Auctions with Correlated Priors

    OpenAIRE

    Papadimitriou, Christos; Pierrakos, George

    2010-01-01

    We revisit the problem of designing the profit-maximizing single-item auction, solved by Myerson in his seminal paper for the case in which bidder valuations are independently distributed. We focus on general joint distributions, seeking the optimal deterministic incentive compatible auction. We give a geometric characterization of the optimal auction, resulting in a duality theorem and an efficient algorithm for finding the optimal deterministic auction in the two-bidder case and an NP-compl...

  8. Exploiting Deterministic TPG for Path Delay Testing

    Institute of Scientific and Technical Information of China (English)

    李晓维

    2000-01-01

    Detection of path delay faults requires two-pattern tests. BIST technique provides a low-cost test solution. This paper proposes an approach to designing a cost-effective deterministic test pattern generator (TPG) for path delay testing. Given a set of pre-generated test-pairs with pre-determined fault coverage, a deterministic TPG is synthesized to apply the given test-pair set in a limited test time. To achieve this objective, configurable linear feedback shift register (LFSR) structures are used. Techniques are developed to synthesize such a TPG, which is used to generate an unordered deterministic test-pair set. The resulting TPG is very efficient in terms of hardware size and speed performance. Simulation of academic benchmark circuits has given good results when compared to alternative solutions.

  9. Deterministic mediated superdense coding with linear optics

    Energy Technology Data Exchange (ETDEWEB)

    Pavičić, Mladen, E-mail: mpavicic@physik.hu-berlin.de [Department of Physics—Nanooptics, Faculty of Mathematics and Natural Sciences, Humboldt University of Berlin (Germany); Center of Excellence for Advanced Materials and Sensing Devices (CEMS), Photonics and Quantum Optics Unit, Ruđer Bošković Institute, Zagreb (Croatia)

    2016-02-22

    We present a scheme of deterministic mediated superdense coding of entangled photon states employing only linear-optics elements. Ideally, we are able to deterministically transfer four messages by manipulating just one of the photons. Two degrees of freedom, polarization and spatial, are used. A new kind of source of heralded down-converted photon pairs conditioned on detection of another pair with an efficiency of 92% is proposed. Realistic probabilistic experimental verification of the scheme with such a source of preselected pairs is feasible with today's technology. We obtain the channel capacity of 1.78 bits for a full-fledged implementation. - Highlights: • Deterministic linear optics mediated superdense coding is proposed. • Two degrees of freedom, polarization and spatial, are used. • Heralded source of conditioned entangled photon pairs, 92% efficient, is proposed.

  10. Optimal Deterministic Investment Strategies for Insurers

    Directory of Open Access Journals (Sweden)

    Ulrich Rieder

    2013-11-01

    Full Text Available We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.

  11. Neutron noise computation using panda deterministic code

    Energy Technology Data Exchange (ETDEWEB)

    Humbert, Ph. [CEA Bruyeres le Chatel (France)

    2003-07-01

    PANDA is a general purpose discrete ordinates neutron transport code with deterministic and non-deterministic applications. In this paper we consider the adaptation of PANDA to stochastic neutron counting problems. More specifically, we consider the first two moments of the count number probability distribution. In the first part we recall the equations for the single-neutron and source-induced count number moments, with the corresponding expression for the excess of relative variance, or Feynman function. In the second part we discuss the numerical solution of these inhomogeneous coupled adjoint time-dependent transport equations with discrete ordinates methods. Finally, numerical applications are presented in the third part. (author)
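
The Feynman function mentioned above (the excess of relative variance) is simple to compute from gated count data: Y = variance/mean − 1, which vanishes for Poisson counting and is positive for correlated (multiplying) sources. A minimal sketch with synthetic, purely illustrative counts:

```python
import random

def feynman_y(counts):
    """Feynman function Y = Var(N)/E(N) - 1 for a list of gate counts:
    zero for Poisson statistics, positive when counts are correlated."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean - 1.0

# Uncorrelated (near-Poisson) synthetic counts: Y should be close to 0.
random.seed(1)
counts = [sum(random.random() < 0.02 for _ in range(500))   # ~Poisson(10)
          for _ in range(5000)]
y_val = feynman_y(counts)
```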

  12. Stochastic versus deterministic systems of differential equations

    CERN Document Server

    Ladde, G S

    2003-01-01

    This peerless reference/text unfurls a unified and systematic study of the two types of mathematical models of dynamic processes-stochastic and deterministic-as placed in the context of systems of stochastic differential equations. Using the tools of variational comparison, generalized variation of constants, and probability distribution as its methodological backbone, Stochastic Versus Deterministic Systems of Differential Equations addresses questions relating to the need for a stochastic mathematical model and the between-model contrast that arises in the absence of random disturbances/flu

  13. Cellulase variants

    Energy Technology Data Exchange (ETDEWEB)

    Blazej, Robert; Toriello, Nicholas; Emrich, Charles; Cohen, Richard N.; Koppel, Nitzan

    2015-07-14

    This invention provides novel variant cellulolytic enzymes having improved activity and/or stability. In certain embodiments the variant cellulolytic enzymes comprise a glycoside hydrolase with or comprising a substitution at one or more positions corresponding to one or more of residues F64, A226, and/or E246 in Thermobifida fusca Cel9A enzyme. In certain embodiments the glycoside hydrolase is a variant of a family 9 glycoside hydrolase. In certain embodiments the glycoside hydrolase is a variant of a theme B family 9 glycoside hydrolase.

  14. The mathematical basis for deterministic quantum mechanics

    NARCIS (Netherlands)

    Hooft, G. 't

    2006-01-01

    If there exists a classical, i.e. deterministic theory underlying quantum mechanics, an explanation must be found of the fact that the Hamiltonian, which is defined to be the operator that generates evolution in time, is bounded from below. The mechanism that can produce exactly such a constraint

  15. The mathematical basis for deterministic quantum mechanics

    NARCIS (Netherlands)

    Hooft, G. 't

    2007-01-01

    If there exists a classical, i.e. deterministic theory underlying quantum mechanics, an explanation must be found of the fact that the Hamiltonian, which is defined to be the operator that generates evolution in time, is bounded from below. The mechanism that can produce exactly such a constraint is

  16. DETERMINISTIC HOMOGENIZATION OF QUASILINEAR DAMPED HYPERBOLIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Gabriel Nguetseng; Hubert Nnang; Nils Svanstedt

    2011-01-01

    Deterministic homogenization is studied for quasilinear monotone hyperbolic problems with a linear damping term. It is shown by the sigma-convergence method that the sequence of solutions to a class of multi-scale highly oscillatory hyperbolic problems converges to the solution to a homogenized quasilinear hyperbolic problem.

  17. Deterministic Kalman filtering in a behavioral framework

    NARCIS (Netherlands)

    Fagnani, F; Willems, JC

    1997-01-01

    The purpose of this paper is to obtain a deterministic version of the Kalman filtering equations. We will use a behavioral description of the plant, specifically, an image representation. The resulting algorithm requires a matrix spectral factorization. We also show that the filter can be implemented

  18. A Gap Property of Deterministic Tree Languages

    DEFF Research Database (Denmark)

    Niwinski, Damian; Walukiewicz, Igor

    2003-01-01

    We show that a tree language recognized by a deterministic parity automaton is either hard for the co-Büchi level and therefore cannot be recognized by a weak alternating automaton, or is on a very low level in the hierarchy of weak alternating automata. A topological counterpart of this property...

  19. From LTL and Limit-Deterministic Büchi Automata to Deterministic Parity Automata

    OpenAIRE

    Esparza, Javier; Křetínský, Jan; Raskin, Jean-François; Sickert, Salomon

    2017-01-01

    Controller synthesis for general linear temporal logic (LTL) objectives is a challenging task. The standard approach involves translating the LTL objective into a deterministic parity automaton (DPA) by means of the Safra-Piterman construction. One of the challenges is the size of the DPA, which often grows very fast in practice, and can reach double exponential size in the length of the LTL formula. In this paper we describe a single exponential translation from limit-deterministic Büchi a...

  20. Influence of Deterministic Attachments for Large Unifying Hybrid Network Model

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Large unifying hybrid network model (LUHPM) introduced the deterministic mixing ratio fd on the basis of the harmonious unification hybrid preferential model, to describe the influence of deterministic attachment to the network topology characteristics,

  1. A theoretical comparison of evolutionary algorithms and simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Hart, W.E.

    1995-08-28

    This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.

  2. Rayleigh wave inversion using heat-bath simulated annealing algorithm

    Science.gov (United States)

    Lu, Yongxu; Peng, Suping; Du, Wenfeng; Zhang, Xiaoyang; Ma, Zhenyuan; Lin, Peng

    2016-11-01

    The dispersion of Rayleigh waves can be used to obtain near-surface shear (S)-wave velocity profiles. This is performed mainly by inversion of the phase velocity dispersion curves, which has been proven to be a highly nonlinear and multimodal problem, and it is unsuitable to use local search methods (LSMs) as the inversion algorithm. In this study, a new strategy is proposed based on a variant of simulated annealing (SA) algorithm. SA, which simulates the annealing procedure of crystalline solids in nature, is one of the global search methods (GSMs). There are many variants of SA, most of which contain two steps: the perturbation of model and the Metropolis-criterion-based acceptance of the new model. In this paper we propose a one-step SA variant known as heat-bath SA. To test the performance of the heat-bath SA, two models are created. Both noise-free and noisy synthetic data are generated. Levenberg-Marquardt (LM) algorithm and a variant of SA, known as the fast simulated annealing (FSA) algorithm, are also adopted for comparison. The inverted results of the synthetic data show that the heat-bath SA algorithm is a reasonable choice for Rayleigh wave dispersion curve inversion. Finally, a real-world inversion example from a coal mine in northwestern China is shown, which proves that the scheme we propose is applicable.
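
The one-step heat-bath update can be sketched as follows: instead of perturbing the model and then applying the Metropolis criterion, each parameter is redrawn directly from its conditional Boltzmann distribution over a grid of candidate values. This is a generic illustration, not the authors' implementation; the separable toy misfit, the grids and the geometric cooling schedule are all hypothetical.

```python
import math
import random

def heat_bath_sa(energy, grids, n_sweeps=200, t0=2.0, cooling=0.97):
    """Heat-bath simulated annealing over discretized parameters: each
    parameter is resampled in one step from its conditional Boltzmann
    distribution (no separate accept/reject stage)."""
    random.seed(0)
    state = [random.choice(g) for g in grids]
    t = t0
    for _ in range(n_sweeps):
        for k, grid in enumerate(grids):
            # conditional energies of every candidate value for parameter k
            energies = [energy(state[:k] + [v] + state[k + 1:]) for v in grid]
            e_min = min(energies)
            weights = [math.exp(-(e - e_min) / t) for e in energies]
            state[k] = random.choices(grid, weights=weights)[0]
        t *= cooling                 # geometric cooling schedule
    return state

# Toy "misfit": modulated quadratic with global minimum near (1.0, -2.0).
def misfit(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.3 * math.sin(5 * x[0])

grid = [i / 10.0 for i in range(-40, 41)]   # candidate values -4.0 .. 4.0
best = heat_bath_sa(misfit, [grid, grid])
```

Because each update samples from the full conditional distribution, low-probability moves are still possible at high temperature, while at low temperature the update concentrates on the conditional minimum.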

  3. Cellular non-deterministic automata and partial differential equations

    Science.gov (United States)

    Kohler, D.; Müller, J.; Wever, U.

    2015-09-01

    We define cellular non-deterministic automata (CNDA) in the spirit of non-deterministic automata theory. They are different from the well-known stochastic automata. We propose the concept of deterministic superautomata to analyze the dynamical behavior of a CNDA and show especially that a CNDA can be embedded in a deterministic cellular automaton. As an application we discuss a connection between certain partial differential equations and CNDA.

  4. Deterministic Real-time Thread Scheduling

    CERN Document Server

    Yun, Heechul; Sha, Lui

    2011-01-01

    Race condition is a timing sensitive problem. A significant source of timing variation comes from nondeterministic hardware interactions such as cache misses. While data race detectors and model checkers can check races, the enormous state space of complex software makes it difficult to identify all of the races, and residual implementation errors remain a big challenge. In this paper, we propose deterministic real-time scheduling methods to address scheduling nondeterminism in uniprocessor systems. The main idea is to use timing insensitive deterministic events, e.g., an instruction counter, in conjunction with a real-time clock to schedule threads. By introducing the concept of Worst Case Executable Instructions (WCEI), we guarantee both determinism and real-time performance.

  5. Dynamic optimization deterministic and stochastic models

    CERN Document Server

    Hinderer, Karl; Stieglitz, Michael

    2016-01-01

    This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.

  6. [Deterministic and stochastic identification of neurophysiologic systems].

    Science.gov (United States)

    Piatigorskiĭ, B Ia; Kostiukov, A I; Chinarov, V A; Cherkasskiĭ, V L

    1984-01-01

    The paper deals with deterministic and stochastic identification methods applied to concrete neurophysiological systems. The deterministic identification was carried out for the system: efferent fibres-muscle. The obtained transition characteristics demonstrated dynamic nonlinearity of the system. Identification of the neuronal model and the "afferent fibres-synapses-neuron" system in the mollusc Planorbis corneus was carried out using the stochastic methods. For this purpose the Wiener method of stochastic identification was extended to the case of pulse trains as input and output signals. The weight of the nonlinear component in the Wiener model and the accuracy of the model prediction were quantitatively estimated. The results obtained prove the possibility of using these identification methods for various neurophysiological systems.

  7. Advances in stochastic and deterministic global optimization

    CERN Document Server

    Zhigljavsky, Anatoly; Žilinskas, Julius

    2016-01-01

    Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...

  8. Bayesian Uncertainty Analyses Via Deterministic Model

    Science.gov (United States)

    Krzysztofowicz, R.

    2001-05-01

    Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
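
For a minimal concrete instance of the BPO idea: if the predictand has a normal prior and the deterministic model's output is treated as a noisy observation with normal error, the posterior is available in closed form. The Gaussian assumptions and the numbers below are illustrative only, not taken from the paper.

```python
def bpo_normal(model_output, prior_mean, prior_var, error_var):
    """Normal-normal Bayesian processor of output: posterior mean and
    variance of the predictand given the model's (imperfect) estimate."""
    gain = prior_var / (prior_var + error_var)   # weight on the model output
    post_mean = prior_mean + gain * (model_output - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Prior N(10, 4); model outputs 14 with error variance 1: the posterior
# is pulled most of the way toward the (relatively precise) model output.
m, v = bpo_normal(model_output=14.0, prior_mean=10.0,
                  prior_var=4.0, error_var=1.0)  # -> (13.2, 0.8)
```

The posterior variance (0.8) is smaller than both the prior variance and the error variance, illustrating how conditioning on the model output reduces total uncertainty.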

  9. Microscopy with a Deterministic Single Ion Source

    CERN Document Server

    Jacob, Georg; Wolf, Sebastian; Ulm, Stefan; Couturier, Luc; Dawkins, Samuel T; Poschinger, Ulrich G; Schmidt-Kaler, Ferdinand; Singer, Kilian

    2015-01-01

    We realize a single particle microscope by using deterministically extracted laser cooled ⁴⁰Ca⁺ ions from a Paul trap as probe particles for transmission imaging. We demonstrate focusing of the ions with a resolution of 5.8 ± 1.0 nm and a minimum two-sample deviation of the beam position of 1.5 nm in the focal plane. The deterministic source, even when used in combination with an imperfect detector, gives rise to much higher signal to noise ratios as compared with conventional Poissonian sources. Gating of the detector signal by the extraction event suppresses dark counts by 6 orders of magnitude. We implement a Bayes experimental design approach to microscopy in order to maximize the gain in spatial information. We demonstrate this method by determining the position of a 1 μm circular hole structure to an accuracy of 2.7 nm using only 579 probe particles.

  10. Deterministic nonlinear systems a short course

    CERN Document Server

    Anishchenko, Vadim S; Strelkova, Galina I

    2014-01-01

    This text is a short yet complete course on nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures, it reflects the many years of teaching experience by the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincaré recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.

  11. Piecewise deterministic processes in biological models

    CERN Document Server

    Rudnicki, Ryszard

    2017-01-01

    This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...

  12. Deterministic Leader Election Among Disoriented Anonymous Sensors

    CERN Document Server

    Dieudonné, Yoann; Petit, Franck; Villain, Vincent

    2012-01-01

    We address the Leader Election (LE) problem in networks of anonymous sensors sharing no kind of common coordinate system. Leader Election is a fundamental symmetry breaking problem in distributed computing. Its goal is to assign value 1 (leader) to one of the entities and value 0 (non-leader) to all others. In this paper, assuming n > 1 disoriented anonymous sensors, we provide a complete characterization of the sensors' positions that allow a leader to be deterministically elected, provided that all the sensors' positions are known by every sensor. More precisely, our contribution is twofold: First, assuming n anonymous sensors agreeing on a common handedness (chirality) of their own coordinate system, we provide a complete characterization of the sensors' positions that allow a leader to be deterministically elected. Second, we also provide such a complete characterization for sensors devoid of a common handedness. Both characterizations rely on a particular object from combinatorics on words, namely the Lyndon words.
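    Both characterizations above hinge on Lyndon words, a standard object from combinatorics on words: a word that is strictly smaller, lexicographically, than every proper rotation of itself. A minimal sketch of that definitional check (only the combinatorial building block, not the paper's election procedure) might be:

    ```python
    def is_lyndon(w: str) -> bool:
        """True iff w is strictly smaller than every proper rotation of itself."""
        n = len(w)
        return n > 0 and all(w < w[i:] + w[:i] for i in range(1, n))

    # "aab" beats its rotations "aba" and "baa"; "aba" loses to "aab";
    # "abab" is periodic, so one rotation equals it and the strict test fails.
    print(is_lyndon("aab"), is_lyndon("aba"), is_lyndon("abab"))  # → True False False
    ```

    Aperiodicity comes for free: a periodic word has a proper rotation equal to itself, which is not strictly larger.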

  13. Deterministic nanoassembly: Neutral or plasma route?

    Science.gov (United States)

    Levchenko, I.; Ostrikov, K.; Keidar, M.; Xu, S.

    2006-07-01

    It is shown that, owing to selective delivery of ionic and neutral building blocks directly from the ionized gas phase and via surface migration, plasma environments offer a greater degree of determinism in the synthesis of ordered nanoassemblies than thermal chemical vapor deposition. The results of hybrid Monte Carlo (gas phase) and adatom self-organization (surface) simulations suggest that higher aspect ratios and better size and pattern uniformity of carbon nanotip microemitters can be achieved via the plasma route.

  14. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.;

    2006-01-01

    The paper addresses performance analysis for distributed real-time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading ... The suggested models are intended for incorporation into an existing analysis tool a.k.a. CyNC based on the MATLAB/SimuLink framework for graphical system analysis and design...

  15. Deterministic Pattern Classifier Based on Genetic Programming

    Institute of Scientific and Technical Information of China (English)

    LI Jian-wu; LI Min-qiang; KOU Ji-song

    2001-01-01

    This paper proposes a supervised training-test method with Genetic Programming (GP) for pattern classification. Compared and contrasted with traditional deterministic pattern classifiers, this method works for both linearly separable and linearly non-separable problems. For specific training samples, it can formulate the expression of the discriminant function well without any prior knowledge. Finally, an experiment is conducted, and the result reveals that this system is effective and practical.

  16. Deterministic definition of the capital risk

    OpenAIRE

    Anna Szczypinska; Piotrowski, Edward W.

    2008-01-01

    In this paper we propose a look at the capital risk problem inspired by the deterministic problem of juggling, known from classical mechanics. We propose capital equivalents to Newton's laws of motion and on this basis we determine the most secure form of credit repayment with regard to maximisation of profit. Then we extend Newton's laws to models in linear spaces of arbitrary dimension with the help of matrix rates of return. The matrix rates describe the evolution of multidimensional ...

  17. Schroedinger difference equation with deterministic ergodic potentials

    CERN Document Server

    Suto, Andras

    2012-01-01

    We review the recent developments in the theory of the one-dimensional tight-binding Schrödinger equation for a class of deterministic ergodic potentials. In the typical examples the potentials are generated by substitutional sequences, like the Fibonacci or the Thue-Morse sequence. We concentrate on rigorous results which will be explained rather than proved. The necessary mathematical background is provided in the text.

  18. Wireless Network Information Flow: A Deterministic Approach

    CERN Document Server

    Avestimehr, Salman; Tse, David

    2009-01-01

    In contrast to wireline networks, not much is known about the flow of information over wireless networks. The main barrier is the complexity of the signal interaction in wireless channels, in addition to the noise in the channel. A widely accepted model is the additive Gaussian channel model, and for this model the capacity of even a network with a single relay node has been open for 30 years. In this paper, we present a deterministic approach to this problem by focusing on the signal interaction rather than the noise. To this end, we propose a deterministic channel model which is analytically simpler than the Gaussian model but still captures two key wireless channel properties of broadcast and superposition. We consider a model for a wireless relay network with nodes connected by such deterministic channels, and present an exact characterization of the end-to-end capacity when there is a single source and one or more destinations (all interested in the same information) and an arbitrary number of relay nodes....
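    In the linear deterministic model usually associated with this line of work, each node transmits a vector of q bits, a link of gain n delivers only the top n bits (modeled by powers of a shift matrix), and simultaneous transmissions add modulo 2. A small sketch under that convention (the particular vectors and gains below are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def shift_matrix(q: int) -> np.ndarray:
        """q x q down-shift matrix S: multiplying by S moves every bit one level down."""
        S = np.zeros((q, q), dtype=int)
        for i in range(1, q):
            S[i, i - 1] = 1
        return S

    def received(signals, gains, q):
        """XOR-superposition of shifted inputs: y = sum_i S^(q - n_i) x_i (mod 2)."""
        S = shift_matrix(q)
        y = np.zeros(q, dtype=int)
        for x, n in zip(signals, gains):
            y = (y + np.linalg.matrix_power(S, q - n) @ x) % 2
        return y

    # Two senders with q = 3 bit levels: gain 3 (all bits arrive) and gain 1
    # (only the most significant bit survives, landing on the bottom level).
    y = received([np.array([1, 0, 1]), np.array([1, 1, 0])], gains=[3, 1], q=3)
    print(y)  # → [1 0 0]
    ```

    The deterministic abstraction replaces thermal noise by truncation: bits below the noise level are simply dropped, while broadcast and superposition are kept exactly.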

  19. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody J. H.

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  20. Simulated annealing to handle energy and ancillary services joint management considering electric vehicles

    DEFF Research Database (Denmark)

    Sousa, Tiago M; Soares, Tiago; Morais, Hugo

    2016-01-01

    The massive use of distributed generation and electric vehicles will lead to a more complex management of the power system, requiring new approaches to be used in the optimal resource scheduling field. Electric vehicles with vehicle-to-grid capability can be useful for the aggregator players... of the aggregator total operation costs. The case study considers a 33-bus distribution network, 66 distributed generation units and 2000 electric vehicles. The proposed simulated annealing approach is benchmarked against a deterministic approach, allowing an effective and efficient comparison. The simulated annealing presents...

  1. Deterministic properties of mine tremor aftershocks

    CSIR Research Space (South Africa)

    Kgarume, TE

    2010-10-01

    Full Text Available Presented at the 5th International Seminar on Deep and High Stress Mining, 6-8 October 2010, Santiago, Chile. T.E. Kgarume, CSIR Centre for Mining Innovation and University of the Witwatersrand, South Africa; S.M. Spottiswoode, Consultant, South Africa. [Figure 1: Simplified mine plan showing the main elements of stopes in South African gold mines. Table 1: Datasets used in the analysis.]

  2. Explicit Protocol for Deterministic Entanglement Concentration

    Institute of Scientific and Technical Information of China (English)

    GU Yong-Jian; GAO Peng; GUO Guang-Can

    2005-01-01

    We present an explicit protocol for the extraction of an EPR pair from two partially entangled pairs in a deterministic fashion via local operations and classical communication. This protocol consists of a local measurement described by a positive operator-valued measure (POVM), one-way classical communication, and a corresponding local unitary operation or a choice between the two pairs. We explicitly construct the required POVM by analysis of the doubly stochastic matrix connecting the initial and final states. Our scheme might be useful in future quantum communication.

  3. Deterministic Thinning of Finite Poisson Processes

    CERN Document Server

    Angel, Omer; Soo, Terry

    2009-01-01

    Let Pi and Gamma be homogeneous Poisson point processes on a fixed set of finite volume. We prove a necessary and sufficient condition on the two intensities for the existence of a coupling of Pi and Gamma such that Gamma is a deterministic function of Pi, and all points of Gamma are points of Pi. The condition exhibits a surprising lack of monotonicity. However, in the limit of large intensities, the coupling exists if and only if the expected number of points is at least one greater in Pi than in Gamma.

  4. Experimental Demonstration of Deterministic Entanglement Transformation

    Institute of Scientific and Technical Information of China (English)

    CHEN Geng; XU Jin-Shi; LI Chuan-Feng; GONG Ming; CHEN Lei; GUO Guang-Can

    2009-01-01

    According to Nielsen's theorem [Phys. Rev. Lett. 83 (1999) 436] and as a proof of principle, we demonstrate the deterministic transformation from a maximally entangled state to an arbitrary non-maximally entangled pure state with local operation and classical communication in an optical system. The output states are verified with a quantum tomography process. We further test the violation of a Bell-like inequality to demonstrate the quantum nonlocality of the state we generated. Our results may be useful in quantum information processing.

  5. multicast using Simulated Annealing

    Directory of Open Access Journals (Sweden)

    Yezid Donoso

    2005-01-01

    Full Text Available This article presents a multiobjective optimization method for solving the load-balancing problem in multicast transmission networks, based on the Simulated Annealing meta-heuristic. The method minimizes four basic parameters in order to guarantee quality of service in multicast transmissions: source-destination delay, maximum link utilization, consumed bandwidth, and number of hops. The results returned by the heuristic are compared with the results produced by the mathematical model proposed in previous research.
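    The meta-heuristic itself follows the usual accept/cool loop: propose a neighbour, always accept improvements, accept a worsening of delta with probability exp(-delta/T), and lower T geometrically. A generic sketch with a toy single-variable cost standing in for the paper's four weighted multicast objectives (which are not reproduced here):

    ```python
    import math
    import random

    def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=2000, seed=1):
        """Generic simulated annealing: accept worse moves with prob exp(-delta/T)."""
        rng = random.Random(seed)
        x, fx, t = x0, cost(x0), t0
        best, fbest = x, fx
        for _ in range(steps):
            y = neighbor(x, rng)
            fy = cost(y)
            if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling  # geometric cooling schedule
        return best, fbest

    # Toy convex cost with minimum at x = 3; neighbours are small random steps.
    best, fbest = simulated_annealing(
        cost=lambda x: (x - 3.0) ** 2,
        neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
        x0=10.0,
    )
    ```

    For the multicast problem, `cost` would aggregate delay, maximum link utilization, bandwidth and hop count, and `neighbor` would perturb the current multicast tree; those names are placeholders, not the paper's implementation.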

  6. Chemical ordering in magnetic FePd/Pd (001) epitaxial thin films induced by annealing

    Science.gov (United States)

    Halley, D.; Gilles, B.; Bayle-Guillemaud, P.; Arenal, R.; Marty, A.; Patrat, G.; Samson, Y.

    2004-11-01

    Chemically disordered FePd epitaxial layers are grown at room temperature by molecular beam epitaxy on a Pd(001) buffer layer and then annealed in order to induce the chemically ordered L1₀ (AuCu I) structure. Contrary to what is observed in the case of ordering during growth above room temperature, the ordered structure appears here with the three possible variants of the L1₀ phase. The ratio of the three variant volumes is set by the residual epitaxial strain in the layer before annealing. This explains why, for long annealing times, the long-range order parameter associated with the L1₀ variant with c along the (100) growth direction saturates at a value close to 0.65 and never reaches unity. Magnetic consequences of the ordering are studied.

  7. A mathematical theory for deterministic quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Hooft, Gerard ' t [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)

    2007-05-15

    Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.

  8. Design of deterministic OS for SPLC

    Energy Technology Data Exchange (ETDEWEB)

    Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    Existing safety PLCs for use in nuclear power plants operate on priority-based scheduling, in which the highest-priority task runs first. This type of scheduling scheme determines processing priorities when there are multiple requests for processing or a lack of resources available for processing, guaranteeing execution of higher-priority tasks. Such scheduling is prone to exhaustion of resources and continuous preemption by devices with high priorities, so there is uncertainty in every period as to the smooth running of the overall system. Hence, it is difficult to apply this type of scheme where deterministic operation is required, such as in a nuclear power plant. Also, existing PLCs either have no output logic for the redundant selection of devices or have it set in a fixed way, and as a result they are extremely inefficient for redundant systems such as those of a nuclear power plant, and their use is limited. Therefore, functional modules that can manage and control all devices need to be developed by improving the way priorities are assigned among the devices, making it more flexible. A management module should be able to schedule all devices of the system, manage resources, analyze the states of the devices, give warnings in abnormal situations, such as device failure or resource scarcity, and decide how to handle them. Also, the management module should have output logic for device redundancy, as well as deterministic processing capabilities, such as with regard to device interrupt events.

  9. Streamflow disaggregation: a nonlinear deterministic approach

    Directory of Open Access Journals (Sweden)

    B. Sivakumar

    2004-01-01

    Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreements for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
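    The two steps in this record, delay reconstruction followed by local nearest-neighbour approximation, can be sketched in the simpler one-step prediction setting; the embedding dimension, neighbour count and periodic test series below are illustrative choices, not the study's hydrological data:

    ```python
    import numpy as np

    def embed(series, dim):
        """Step (1): delay-embed a scalar series; row i is (s[i], ..., s[i+dim-1])."""
        return np.array([series[i : i + dim] for i in range(len(series) - dim + 1)])

    def local_predict(series, dim=3, k=5):
        """Step (2): predict the next value as the mean successor of the k delay
        vectors nearest to the current one."""
        X = embed(series, dim)
        query, history = X[-1], X[:-1]   # history rows have a known successor
        successors = series[dim:]        # successor of history[i] is series[i+dim]
        nearest = np.argsort(np.linalg.norm(history - query, axis=1))[:k]
        return successors[nearest].mean()

    # A periodic test signal: neighbours from earlier cycles predict the next value.
    t = np.linspace(0, 20 * np.pi, 2001)
    s = np.sin(t)
    print(abs(local_predict(s[:-1]) - s[-1]) < 1e-3)  # → True
    ```

    Disaggregation replaces the "next value" target with the finer-scale values associated with each neighbour, but the reconstruction-plus-local-approximation skeleton is the same.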

  10. Deterministic Polynomial Factoring and Association Schemes

    CERN Document Server

    Arora, Manuel; Karpinski, Marek; Saxena, Nitin

    2012-01-01

    The problem of finding a nontrivial factor of a polynomial f(x) over a finite field F_q has many known efficient, but randomized, algorithms. The deterministic complexity of this problem is a famous open question even assuming the generalized Riemann hypothesis (GRH). In this work we improve the state of the art by focusing on prime degree polynomials; let n be the degree. If (n-1) has a `large' r-smooth divisor s, then we find a nontrivial factor of f(x) in deterministic poly(n^r,log q) time; assuming GRH and that s > sqrt{n/(2^r)}. Thus, for r = O(1) our algorithm is polynomial time. Further, for r > loglog n there are infinitely many prime degrees n for which our algorithm is applicable and better than the best known; assuming GRH. Our methods build on the algebraic-combinatorial framework of m-schemes initiated by Ivanyos, Karpinski and Saxena (ISSAC 2009). We show that the m-scheme on n points, implicitly appearing in our factoring algorithm, has an exceptional structure; leading us to the improved time ...

  11. Deterministic prediction of surface wind speed variations

    Science.gov (United States)

    Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.

    2014-11-01

    Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within practically tolerable margin of errors.

  12. The human ECG nonlinear deterministic versus stochastic aspects

    CERN Document Server

    Kantz, H; Kantz, Holger; Schreiber, Thomas

    1998-01-01

    We discuss aspects of randomness and of determinism in electrocardiographic signals. In particular, we take a critical look at attempts to apply methods of nonlinear time series analysis derived from the theory of deterministic dynamical systems. We will argue that deterministic chaos is not a likely explanation for the short-time variability of the inter-beat interval times, except for certain pathologies. Conversely, densely sampled full ECG recordings possess properties typical of deterministic signals. In the latter case, methods of deterministic nonlinear time series analysis can yield new insights.

  13. Deterministic, Nanoscale Fabrication of Mesoscale Objects

    Energy Technology Data Exchange (ETDEWEB)

    Jr., R M; Gilmer, J; Rubenchik, A; Shirk, M

    2004-12-08

    Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, {micro}m-sized, 3-D features and with 100-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels, as well as materials such as diamond and vanadium. The motivation for this project was to investigate the physics and chemistry that control the interactions of solid surfaces with laser beams and ion beams, with a view towards their applicability to the desired deterministic fabrication processes. As part of this LDRD project, one of our goals was to advance the state of the art for experimental work, but, in order to create ultimately a deterministic capability for such precision micromachining, another goal was to form a new modeling/simulation capability that could also extend the state of the art in this field. We have achieved both goals. In this project, we have, for the first time, combined a 1-D hydrocode (''HYADES'') with a 3-D molecular dynamics simulator (''MDCASK'') in our modeling studies. In FY02 and FY03, we investigated the ablation/surface-modification processes that occur on copper, gold, and nickel substrates with the use of sub-ps laser pulses. In FY04, we investigated laser ablation of carbon, including laser-enhanced chemical reaction on the carbon surface for both vitreous carbon and carbon aerogels. Both experimental and modeling results will be presented in the report that follows. The immediate impact of our investigation was a much better understanding of the chemical and physical processes that ensue when solid materials are exposed to femtosecond laser pulses. More broadly, we have better positioned LLNL to design a cluster tool for fabricating mesoscale objects utilizing laser pulses and ion-beams as well as more traditional machining/manufacturing techniques for applications such as components in NIF

  14. Molten salt reactor: Deterministic safety evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Merle-Lucotte, Elsa; Heuer, Daniel; Mathieu, Ludovic; Le Brun, Christian [Laboratory for Subatomic Physics and Cosmology (LPSC), 53, Avenue des Marthyrs, F-38026 Grenoble (France)

    2006-07-01

    Molten Salt Reactors (MSRs) are one of the systems retained by Generation IV as a candidate for the next generation of nuclear reactors. This type of reactor is particularly well adapted to the thorium fuel cycle (Th-{sup 233}U) which has the advantage of producing less minor actinides than the uranium-plutonium fuel cycle ({sup 238}U-{sup 239}Pu). In the frame of a major re-evaluation of the MSR concept, concentrating on some major constraints such as feasibility, breeding capability and, above all, safety, we have considered a particular reactor configuration that we call the 'unique channel' configuration, in which there is no moderator in the core, leading to a quasi-fast neutron spectrum. This reactor is presented in the first section. MSRs benefit from several specific advantages which are listed in a second part of this work. Beyond these advantages of the MSR, the level of deterministic safety in such a reactor has to be assessed precisely. In a third section, we first draw up a list of the reactivity margins in our reactor configuration. We then define and quantify the parameters characterizing the deterministic safety of any reactor: the fraction of delayed neutrons, and the system's feedback coefficients, which are here negative. Finally, using a simple point-kinetic evaluation, we analyze how these safety parameters impact the system when the total reactivity margins are introduced in the MSR. The results of this last study are discussed, emphasizing the satisfactory behavior of the MSR and the excellent level of deterministic safety which can be achieved. This work is based on the coupling of a neutron transport code called MCNP with a materials evolution code. The former calculates the neutron flux and the reaction rates in all the cells while the latter solves the Bateman equations for the evolution of the materials composition within the cells. These calculations take into account the input parameters (power released

  15. Deterministic remote preparation via the Brown state

    Science.gov (United States)

    Ma, Song-Ya; Gao, Cong; Zhang, Pei; Qu, Zhi-Guo

    2017-04-01

    We propose two deterministic remote state preparation (DRSP) schemes by using the Brown state as the entangled channel. Firstly, the remote preparation of an arbitrary two-qubit state is considered. It is worth mentioning that the construction of measurement bases plays a key role in our scheme. Then, the remote preparation of an arbitrary three-qubit state is investigated. The proposed schemes can be extended to controlled remote state preparation (CRSP) with unit success probabilities. At variance with the existing CRSP schemes via the Brown state, the derived schemes have no restriction on the coefficients, while the success probabilities can reach 100%. It means the success probabilities are greatly improved. Moreover, we pay attention to the DRSP in noisy environments under two important decoherence models, the amplitude-damping noise and phase-damping noise.

  16. Deterministic phase slips in mesoscopic superconducting rings

    Science.gov (United States)

    Petković, I.; Lollo, A.; Glazman, L. I.; Harris, J. G. E.

    2016-11-01

    The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg-Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.

  17. Primality deterministic and primality probabilistic tests

    Directory of Open Access Journals (Sweden)

    Alfredo Rizzi

    2007-10-01

    Full Text Available In this paper the author comments on the importance of prime numbers in mathematics and in cryptography. He recalls the very important researches of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that give prime numbers; among them, Mersenne's primes have interesting properties. There are also many conjectures that still have to be proved or rejected. The primality deterministic tests are the algorithms that permit one to establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computing time would be very long. The primality probabilistic tests allow one to test the null hypothesis: the number is prime. In the paper there are comments about the most important statistical tests.
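    A standard example of such a probabilistic test is Miller-Rabin: it never rejects a true prime, and it declares a composite "prime" with probability at most 4^(-rounds). A compact sketch (the round count and seed are illustrative choices):

    ```python
    import random

    def miller_rabin(n: int, rounds: int = 20, seed: int = 0) -> bool:
        """Probabilistic primality test: a True answer may be wrong with
        probability at most 4**(-rounds); False answers are always correct."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):  # quick trial division by small primes
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:               # write n - 1 = d * 2**s with d odd
            d //= 2
            s += 1
        rng = random.Random(seed)
        for _ in range(rounds):
            a = rng.randrange(2, n - 1)
            x = pow(a, d, n)            # modular exponentiation
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False            # witness found: n is definitely composite
        return True                     # probably prime
    ```

    Unlike the plain Fermat test, Miller-Rabin also exposes Carmichael numbers, composites that pass Fermat's criterion for every coprime base.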

  18. Deterministic polarization chaos from a laser diode

    CERN Document Server

    Virte, Martin; Thienpont, Hugo; Sciamanna, Marc

    2014-01-01

    Fifty years after the invention of the laser diode and forty years after the report of the butterfly effect - i.e. the unpredictability of deterministic chaos - it is said that a laser diode behaves like a damped nonlinear oscillator, so that no chaos can be generated without additional forcing or parameter modulation. Here we report the first counter-example of a free-running laser diode generating chaos. The underlying physics is a nonlinear coupling between two elliptically polarized modes in a vertical-cavity surface-emitting laser. We identify chaos in experimental time series and show theoretically the bifurcations leading to single- and double-scroll attractors with characteristics similar to Lorenz chaos. The reported polarization chaos resembles at first sight a noise-driven mode hopping but shows opposite statistical properties. Our findings open up new research areas that combine the high-speed performance of microcavity lasers with controllable and integrated sources of optical chaos.

  19. Anisotropic permeability in deterministic lateral displacement arrays

    CERN Document Server

    Vernekar, Rohan; Loutherback, Kevin; Morton, Keith; Inglis, David

    2016-01-01

    We investigate anisotropic permeability of microfluidic deterministic lateral displacement (DLD) arrays. A DLD array can achieve high-resolution bimodal size-based separation of micro-particles, including bioparticles such as cells. Correct operation requires that the fluid flow remains at a fixed angle with respect to the periodic obstacle array. We show via experiments and lattice-Boltzmann simulations that subtle array design features cause anisotropic permeability. The anisotropy, which indicates the array's intrinsic tendency to induce an undesired lateral pressure gradient, can lead to off-axis flows and therefore local changes in the critical separation size. Thus, particle trajectories can become unpredictable and the device useless for the desired separation duty. We show that for circular posts the rotated-square layout, unlike the parallelogram layout, does not suffer from anisotropy and is the preferred geometry. Furthermore, anisotropy becomes severe for arrays with unequal axial and lateral gaps...

  20. Deterministic aspects of nonlinear modulation instability

    CERN Document Server

    van Groesen, E; Karjanto, N

    2011-01-01

    Different from statistical considerations on stochastic wave fields, this paper aims to contribute to the understanding of (some of) the underlying physical phenomena that may give rise to the occurrence of extreme, rogue, waves. To that end a specific deterministic wavefield is investigated that develops extreme waves from a uniform background. For this explicitly described nonlinear extension of the Benjamin-Feir instability, the soliton on finite background of the NLS equation, the global down-stream evolving distortions, the time signal of the extreme waves, and the local evolution near the extreme position are investigated. As part of the search for conditions to obtain extreme waves, we show that the extreme wave has a specific optimization property for the physical energy, and comment on the possible validity for more realistic situations.
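    The uniform background and its Benjamin-Feir instability can be made explicit. In one common integrable normalization of the NLS equation (signs and scalings differ across the literature, so treat this as one convention rather than the paper's exact one):

    ```latex
    i\,\psi_t + \psi_{xx} + 2\,|\psi|^2\psi = 0, \qquad
    \psi_0(x,t) = a\, e^{2ia^2 t}.
    ```

    Linearizing around $\psi_0$ with sideband perturbations $\propto e^{\pm iKx}$ gives the growth rate $\sigma(K) = K\sqrt{4a^2 - K^2}$: the background is modulationally unstable for $0 < K < 2a$, with maximal growth at $K = \sqrt{2}\,a$. The soliton on finite background studied here is the nonlinear continuation of exactly this linear instability.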

  1. Mechanics From Newton's Laws to Deterministic Chaos

    CERN Document Server

    Scheck, Florian

    2010-01-01

    This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present fifth edition is updated and revised with more explanations, additional examples and sections on Noether's theorem. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 120 problems with complete solutions, as well as some practical exa...

  2. Austenite formation during intercritical annealing

    OpenAIRE

    A. Lis; J. Lis

    2008-01-01

    Purpose: of this paper is to assess the effect of soft annealing of the initial microstructure of the 6Mn16 steel on the kinetics of austenite formation during subsequent intercritical annealing. Design/methodology/approach: Analytical TEM point analysis with an EDAX system attached to a Philips CM20 was used to evaluate the concentration of Mn, Ni and Cr in the microstructure constituents of the multiphase steel, mainly bainite-martensite islands. Findings: The increase in soft annealing time from 1-60 hou...

  3. Deterministic seismic hazard macrozonation of India

    Science.gov (United States)

    Kolathayar, Sreevalsa; Sitharam, T. G.; Vipin, K. S.

    2012-10-01

    Earthquakes are known to have occurred in Indian subcontinent from ancient times. This paper presents the results of seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach using latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations considering varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and the shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock level peak horizontal acceleration (PHA) and spectral accelerations for periods 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation without the logic-tree approach has also been done for comparison of the results. The contour maps showing the spatial variation of hazard values are presented in the paper.
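
    The deterministic grid computation described above can be sketched in a few lines: divide the area into cells, and at each cell keep the largest ground motion produced by any source within the search radius. The attenuation coefficients, source list and grid extent below are invented for illustration; the study itself uses 12 published attenuation relations over a 0.1° × 0.1° grid covering India.

```python
import math

# Hypothetical attenuation relation ln(PHA[g]) = a + b*M - c*ln(R + d);
# the coefficients are illustrative, not from any of the 12 relations used.
A, B, C, D = -3.5, 0.9, 1.2, 10.0

def pha(magnitude, distance_km):
    """Rock-level peak horizontal acceleration (in g) from one source."""
    return math.exp(A + B * magnitude - C * math.log(distance_km + D))

def grid_hazard(sources, lat0, lat1, lon0, lon1, step=0.1, radius_km=300.0):
    """Deterministic hazard map: at every grid cell, keep the largest
    PHA produced by any seismic source inside the search radius."""
    deg_km = 111.0  # rough conversion, km per degree
    n_lat = int(round((lat1 - lat0) / step)) + 1
    n_lon = int(round((lon1 - lon0) / step)) + 1
    grid = {}
    for a in range(n_lat):
        for b in range(n_lon):
            lat, lon = lat0 + a * step, lon0 + b * step
            best = 0.0
            for slat, slon, mag in sources:
                r = deg_km * math.hypot(lat - slat, lon - slon)
                if r <= radius_km:
                    best = max(best, pha(mag, r))
            grid[(round(lat, 1), round(lon, 1))] = best
    return grid

# Two hypothetical point sources: (lat, lon, moment magnitude)
sources = [(10.0, 70.0, 6.5), (10.5, 70.2, 5.0)]
hz = grid_hazard(sources, 9.8, 10.2, 69.8, 70.2)
```

    In the paper's workflow the same per-cell maximum would be taken over linear and point sources and over several attenuation relations combined in a logic tree.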

  4. Deterministic seismic hazard macrozonation of India

    Indian Academy of Sciences (India)

    Sreevalsa Kolathayar; T G Sitharam; K S Vipin

    2012-10-01

    Earthquakes are known to have occurred in Indian subcontinent from ancient times. This paper presents the results of seismic hazard analysis of India (6°–38°N and 68°–98°E) based on the deterministic approach using latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations considering varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and the shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock level peak horizontal acceleration (PHA) and spectral accelerations for periods 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation without the logic-tree approach has also been done for comparison of the results. The contour maps showing the spatial variation of hazard values are presented in the paper.

  5. Deterministic and risk-informed approaches for safety analysis of advanced reactors: Part I, deterministic approaches

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang Kyu [Korea Institute of Nuclear Safety, 19 Kusong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Kim, Inn Seock, E-mail: innseockkim@gmail.co [ISSA Technology, 21318 Seneca Crossing Drive, Germantown, MD 20876 (United States); Oh, Kyu Myung [Korea Institute of Nuclear Safety, 19 Kusong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of)

    2010-05-15

    The objective of this paper and a companion paper in this issue (part II, risk-informed approaches) is to derive technical insights from a critical review of deterministic and risk-informed safety analysis approaches that have been applied to develop licensing requirements for water-cooled reactors, or proposed for safety verification of the advanced reactor design. To this end, a review was made of a number of safety analysis approaches including those specified in regulatory guides and industry standards, as well as novel methodologies proposed for licensing of advanced reactors. This paper and the companion paper present the review insights on the deterministic and risk-informed safety analysis approaches, respectively. These insights could be used in making a safety case or developing a new licensing review infrastructure for advanced reactors including Generation IV reactors.

  6. Some variants of SAT and their properties

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new model for the well-known satisfiability problem of Boolean formulas (SAT) is introduced. Based on this model, some variants of SAT and their properties are presented. Denote by NP the class of all languages that can be decided by a non-deterministic polynomial-time Turing machine, and by P the class of all languages that can be decided by a deterministic polynomial-time Turing machine. This model also allows us to give another candidate for the natural problems in ((NP-NPC)-P), denoted NPI, under the assumption P≠NP, where NPC represents NP-complete. It is proven that this candidate is not in NPC under P≠NP, while it is indeed in NPI under a stronger but reasonable assumption, specifically the Exponential-Time Hypothesis (ETH). Thus we can partially solve this long-standing important open problem.

  7. The degree of irreversibility in deterministic finite automata

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Holzer, Markus; Kutrib, Martin

    2016-01-01

    Recently, Holzer et al. gave a method to decide whether the language accepted by a given deterministic finite automaton (DFA) can also be accepted by some reversible deterministic finite automaton (REV-DFA), and eventually proved NL-completeness. Here, we show that the corresponding problem for n...
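
    The notion of reversibility involved can be illustrated with a simple syntactic check: a DFA is reversible when no two states share a successor on the same symbol, so every computation step can be undone. A minimal sketch with an invented toy automaton (this ignores reachability and the finer degree of irreversibility that the paper quantifies):

```python
def is_reversible(states, alphabet, delta):
    """Syntactic reversibility check: a DFA is reversible when, for
    every symbol, no two states map to the same successor (i.e. the
    transition function is injective per symbol)."""
    for a in alphabet:
        targets = [delta[(q, a)] for q in states if (q, a) in delta]
        if len(targets) != len(set(targets)):
            return False
    return True

# Toy DFA over {0,1}: states 1 and 2 both move to state 0 on '0',
# so that step cannot be undone -- the DFA is irreversible.
delta = {(0, "0"): 1, (0, "1"): 2, (1, "0"): 0, (2, "0"): 0}
irreversible_example = is_reversible([0, 1, 2], ["0", "1"], delta)
# A pure cycle is reversible: every step can be run backwards.
cycle = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}
reversible_example = is_reversible([0, 1, 2], ["a"], cycle)
```

    The decision problem the paper studies is harder: whether the *language* of a given DFA is accepted by *some* reversible DFA, not whether this particular machine happens to be reversible.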

  8. Recognition of deterministic ETOL languages in logarithmic space

    DEFF Research Database (Denmark)

    Jones, Neil D.; Skyum, Sven

    1977-01-01

    It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian par...

  9. Safety Verification of Piecewise-Deterministic Markov Processes

    DEFF Research Database (Denmark)

    Wisniewski, Rafael; Sloth, Christoffer; Bujorianu, Manuela

    2016-01-01

    We consider the safety problem of piecewise-deterministic Markov processes (PDMP). These are systems that have deterministic dynamics and stochastic jumps, where both the time and the destination of the jumps are stochastic. Specifically, we solve a p-safety problem, where we identify the set...

  10. Use of deterministic models in sports and exercise biomechanics research.

    Science.gov (United States)

    Chow, John W; Knudson, Duane V

    2011-09-01

    A deterministic model is a modeling paradigm that determines the relationships between a movement outcome measure and the biomechanical factors that produce such a measure. This review provides an overview of the use of deterministic models in biomechanics research, a historical summary of this research, and an analysis of the advantages and disadvantages of using deterministic models. The deterministic model approach has been utilized in technique analysis over the last three decades, especially in swimming, athletics field events, and gymnastics. In addition to their applications in sports and exercise biomechanics, deterministic models have been applied successfully in research on selected motor skills. The advantage of the deterministic model approach is that it helps to avoid selecting performance or injury variables arbitrarily and to provide the necessary theoretical basis for examining the relative importance of various factors that influence the outcome of a movement task. Several disadvantages of deterministic models, such as the use of subjective measures for the performance outcome, are also discussed. It is recommended that exercise and sports biomechanics scholars consider using deterministic models to help identify meaningful dependent variables in their studies.

  11. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t)= Z(t) + Y(t), where Z(t) belongs to a large class...

  12. Radiation annealing in cuprous oxide

    DEFF Research Database (Denmark)

    Vajda, P.

    1966-01-01

    Experimental results from high-intensity gamma-irradiation of cuprous oxide are used to investigate the annealing of defects with increasing radiation dose. The results are analysed on the basis of the Balarin and Hauser (1965) statistical model of radiation annealing, giving a square-root relationship between the rate of change of resistivity and the resistivity change. The saturation defect density at room temperature is estimated on the basis of a model for defect creation in cuprous oxide.

  13. Quantum annealing with manufactured spins.

    Science.gov (United States)

    Johnson, M W; Amin, M H S; Gildert, S; Lanting, T; Hamze, F; Dickson, N; Harris, R; Berkley, A J; Johansson, J; Bunyk, P; Chapple, E M; Enderud, C; Hilton, J P; Karimi, K; Ladizinsky, E; Ladizinsky, N; Oh, T; Perminov, I; Rich, C; Thom, M C; Tolkacheva, E; Truncik, C J S; Uchaikin, S; Wang, J; Wilson, B; Rose, G

    2011-05-12

    Many interesting but practically intractable problems can be reduced to that of finding the ground state of a system of interacting spins; however, finding such a ground state remains computationally difficult. It is believed that the ground state of some naturally occurring spin systems can be effectively attained through a process called quantum annealing. If it could be harnessed, quantum annealing might improve on known methods for solving certain types of problem. However, physical investigation of quantum annealing has been largely confined to microscopic spins in condensed-matter systems. Here we use quantum annealing to find the ground state of an artificial Ising spin system comprising an array of eight superconducting flux quantum bits with programmable spin-spin couplings. We observe a clear signature of quantum annealing, distinguishable from classical thermal annealing through the temperature dependence of the time at which the system dynamics freezes. Our implementation can be configured in situ to realize a wide variety of different spin networks, each of which can be monitored as it moves towards a low-energy configuration. This programmable artificial spin network bridges the gap between the theoretical study of ideal isolated spin networks and the experimental investigation of bulk magnetic samples. Moreover, with an increased number of spins, such a system may provide a practical physical means to implement a quantum algorithm, possibly allowing more-effective approaches to solving certain classes of hard combinatorial optimization problems.
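
    For contrast with the quantum device, the classical thermal annealing it is benchmarked against can be sketched in a few lines for a small Ising ring. The couplings, cooling schedule and step count below are illustrative choices, not the experiment's parameters.

```python
import math, random

def anneal_ising_ring(J, steps=20000, t_hot=5.0, t_cold=0.01, seed=1):
    """Classical (thermal) simulated annealing on an Ising ring whose
    bond i couples spins i and i+1.  Returns the lowest energy seen."""
    rng = random.Random(seed)
    n = len(J)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    e = -sum(J[i] * s[i] * s[(i + 1) % n] for i in range(n))
    best = e
    for k in range(steps):
        t = t_hot * (t_cold / t_hot) ** (k / steps)   # geometric cooling
        i = rng.randrange(n)
        # energy change from flipping spin i (only its two bonds change)
        dE = 2 * s[i] * (J[i - 1] * s[i - 1] + J[i] * s[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-dE / t):
            s[i] = -s[i]
            e += dE
            best = min(best, e)
    return best

# Ferromagnetic ring of 8 spins: the ground state (all aligned) has energy -8
best_e = anneal_ising_ring([1.0] * 8)
```

    The quantum annealer explores the same energy landscape, but escapes local minima by tunnelling rather than by thermal fluctuations, which is the signature the experiment isolates.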

  14. Human gait recognition via deterministic learning.

    Science.gov (United States)

    Zeng, Wei; Wang, Cong

    2012-11-01

    Recognition of temporal/dynamical patterns is among the most difficult pattern recognition tasks. Human gait recognition is a typical difficulty in the area of dynamical pattern recognition. It classifies and identifies individuals by their time-varying gait signature data. Recently, a new dynamical pattern recognition method based on deterministic learning theory was presented, in which a time-varying dynamical pattern can be effectively represented in a time-invariant manner and can be rapidly recognized. In this paper, we present a new model-based approach for human gait recognition via the aforementioned method, specifically for recognizing people by gait. The approach consists of two phases: a training (learning) phase and a test (recognition) phase. In the training phase, side silhouette lower limb joint angles and angular velocities are selected as gait features. A five-link biped model for human gait locomotion is employed to demonstrate that functions containing joint angle and angular velocity state vectors characterize the gait system dynamics. Due to the quasi-periodic and symmetrical characteristics of human gait, the gait system dynamics can be simplified to be described by functions of joint angles and angular velocities of one side of the human body, thus the feature dimension is effectively reduced. Locally-accurate identification of the gait system dynamics is achieved by using radial basis function (RBF) neural networks (NNs) through deterministic learning. The obtained knowledge of the approximated gait system dynamics is stored in constant RBF networks. A gait signature is then derived from the extracted gait system dynamics along the phase portrait of joint angles versus angular velocities. A bank of estimators is constructed using constant RBF networks to represent the training gait patterns. In the test phase, by comparing the set of estimators with the test gait pattern, a set of recognition errors are generated, and the average L(1) norms...

  15. Relaxation of the EM Algorithm via Quantum Annealing

    CERN Document Server

    Miyahara, Hideyuki

    2016-01-01

    The EM algorithm is a numerical method for obtaining maximum likelihood estimates and is often used in practical calculations. However, many maximum likelihood estimation problems are nonconvex, and the EM algorithm is known to fail to reach the optimal estimate when it becomes trapped in local optima. To deal with this difficulty, we propose a deterministic quantum annealing EM algorithm that introduces the mathematical mechanism of quantum fluctuations into the conventional EM algorithm, because quantum fluctuations induce the tunnel effect and are expected to ease the difficulty of nonconvex optimization in maximum likelihood estimation problems. We show a theorem that guarantees its convergence and give numerical experiments to verify its efficiency.
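
    A purely classical deterministic-annealing EM sketch conveys the underlying idea: the E-step posterior is tempered by an inverse temperature that is gradually raised to 1, so early iterations see a smoothed likelihood surface with fewer local optima (the paper's variant replaces this thermal mechanism with quantum fluctuations). The mixture model, data and schedule below are invented for illustration.

```python
import math

def da_em(data, k=2, betas=(0.3, 0.6, 1.0), iters=40):
    """Deterministic-annealing EM for a 1-D Gaussian mixture with unit
    variance: E-step responsibilities are tempered by beta, and beta
    is raised toward 1 over the annealing schedule."""
    lo, hi = min(data), max(data)
    mu = [lo + i * (hi - lo) / (k - 1) for i in range(k)]  # spread initial means
    w = [1.0 / k] * k
    n = len(data)
    for beta in betas:
        for _ in range(iters):
            # E-step: r_ij proportional to (w_j * N(x_i | mu_j, 1))^beta
            resp = []
            for x in data:
                logs = [beta * (math.log(w[j]) - 0.5 * (x - mu[j]) ** 2)
                        for j in range(k)]
                m = max(logs)
                e = [math.exp(v - m) for v in logs]
                s = sum(e)
                resp.append([v / s for v in e])
            # M-step: weighted means and mixture weights
            for j in range(k):
                nj = sum(resp[i][j] for i in range(n))
                mu[j] = sum(resp[i][j] * data[i] for i in range(n)) / nj
                w[j] = nj / n
    return sorted(mu)

data = [-2.1, -1.9, -2.0, 2.0, 1.9, 2.1]   # two well-separated clusters
means = da_em(data)
```

    At low beta the responsibilities are nearly uniform and the free energy is almost convex; as beta reaches 1 the update coincides with ordinary EM, now started from a good basin.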

  16. A Deterministic Approach to Earthquake Prediction

    Directory of Open Access Journals (Sweden)

    Vittorio Sgrigna

    2012-01-01

    The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming to see the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, what is lacking up to now is the demonstration of a causal relationship (with explained physical processes) and a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a strong new theoretical scientific effort is necessary to try to understand the physics of the earthquake.

  17. Deterministically Driven Avalanche Models of Solar Flares

    Science.gov (United States)

    Strugarek, Antoine; Charbonneau, Paul; Joseph, Richard; Pirot, Dorian

    2014-08-01

    We develop and discuss the properties of a new class of lattice-based avalanche models of solar flares. These models are readily amenable to a relatively unambiguous physical interpretation in terms of slow twisting of a coronal loop. They share similarities with other avalanche models, such as the classical stick-slip self-organized critical model of earthquakes, in that they are driven globally by a fully deterministic energy-loading process. The model design leads to a systematic deficit of small-scale avalanches. In some portions of model space, mid-size and large avalanching behavior is scale-free, being characterized by event size distributions that have the form of power-laws with index values, which, in some parameter regimes, compare favorably to those inferred from solar EUV and X-ray flare data. For models using conservative or near-conservative redistribution rules, a population of large, quasiperiodic avalanches can also appear. Although without direct counterparts in the observational global statistics of flare energy release, this latter behavior may be relevant to recurrent flaring in individual coronal loops. This class of models could provide a basis for the prediction of large solar flares.
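
    A generic deterministically driven avalanche model in the spirit of this class can be sketched as a continuous sandpile with uniform slow loading: every site is loaded by the same small increment each step (the deterministic drive), and sites exceeding a threshold topple conservatively to their neighbors, with dissipation only at the boundaries. The lattice size, threshold and loading rate below are invented; this is not the authors' coronal-loop model.

```python
import random

def run_avalanches(n=50, steps=2000, zc=4.0, eps=0.01, seed=2):
    """1-D continuous sandpile with fully deterministic global driving.
    Returns the list of avalanche sizes and the final lattice state."""
    rng = random.Random(seed)
    z = [rng.uniform(0.0, zc) for _ in range(n)]  # random initial state
    sizes = []
    for _ in range(steps):
        for i in range(n):
            z[i] += eps                   # slow, fully deterministic loading
        stack = [i for i in range(n) if z[i] > zc]
        size = 0
        while stack:                      # relax until every site is stable
            i = stack.pop()
            if z[i] <= zc:
                continue
            z[i] -= zc                    # topple: shed one threshold unit
            size += 1
            if z[i] > zc:
                stack.append(i)
            for j in (i - 1, i + 1):      # conservative redistribution;
                if 0 <= j < n:            # boundary overflow is dissipated
                    z[j] += zc / 2.0
                    if z[j] > zc:
                        stack.append(j)
        if size:
            sizes.append(size)
    return sizes, z

sizes, z = run_avalanches()
```

    Recording `sizes` over long runs gives the event-size distribution whose power-law tail is compared against flare statistics in models of this kind.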

  18. Deterministically Driven Avalanche Models of Solar Flares

    CERN Document Server

    Strugarek, Antoine; Joseph, Richard; Pirot, Dorian

    2014-01-01

    We develop and discuss the properties of a new class of lattice-based avalanche models of solar flares. These models are readily amenable to a relatively unambiguous physical interpretation in terms of slow twisting of a coronal loop. They share similarities with other avalanche models, such as the classical stick-slip self-organized critical model of earthquakes, in that they are driven globally by a fully deterministic energy loading process. The model design leads to a systematic deficit of small-scale avalanches. In some portions of model space, mid-size and large avalanching behavior is scale-free, being characterized by event size distributions that have the form of power-laws with index values, which, in some parameter regimes, compare favorably to those inferred from solar EUV and X-ray flare data. For models using conservative or near-conservative redistribution rules, a population of large, quasiperiodic avalanches can also appear. Although without direct counterparts in the observational global st...

  19. Deterministic Secure Positioning in Wireless Sensor Networks

    CERN Document Server

    Delaët, Sylvie; Rokicki, Mariusz; Tixeuil, Sébastien

    2007-01-01

    Properly locating sensor nodes is an important building block for a large subset of wireless sensor networks (WSN) applications. As a result, the performance of the WSN degrades significantly when misbehaving nodes report false location and distance information in order to fake their actual location. In this paper we propose a general distributed deterministic protocol for accurate identification of faking sensors in a WSN. Our scheme does \\emph{not} rely on a subset of \\emph{trusted} nodes that are not allowed to misbehave and are known to every node in the network. Thus, any subset of nodes is allowed to try faking its position. As in previous approaches, our protocol is based on distance evaluation techniques developed for WSN. On the positive side, we show that when the received signal strength (RSS) technique is used, our protocol handles at most $\\lfloor \\frac{n}{2} \\rfloor-2$ faking sensors. Also, when the time of flight (ToF) technique is used, our protocol manages at most $\\lfloor \\frac{n}{2} \\rfloor...

  20. Deterministic Random Walks on Regular Trees

    CERN Document Server

    Cooper, Joshua; Friedrich, Tobias; Spencer, Joel; 10.1002/rsa.20314

    2010-01-01

    Jim Propp's rotor router model is a deterministic analogue of a random walk on a graph. Instead of distributing chips randomly, each vertex serves its neighbors in a fixed order. Cooper and Spencer (Comb. Probab. Comput. (2006)) show a remarkable similarity of both models. If an (almost) arbitrary population of chips is placed on the vertices of a grid $\Z^d$ and does a simultaneous walk in the Propp model, then at all times and on each vertex, the number of chips on this vertex deviates by at most a constant from the expected number the random walk would have placed there. This constant is independent of the starting configuration and the order in which each vertex serves its neighbors. This result raises the question of whether all graphs have this property. With quite some effort, we are now able to answer this question negatively. For the graph being an infinite $k$-ary tree ($k \ge 3$), we show that for any deviation $D$ there is an initial configuration of chips such that after running the Propp model for a ...
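
    The rotor-router mechanism is easy to state in code. The sketch below runs the Propp model on the integer line $\Z$ (each vertex alternately serves its right and left neighbor) and measures the maximum deviation from the expected chip counts of the simple random walk; the chip count and step count are arbitrary choices.

```python
from math import comb

def propp_walk(chips_at_origin, steps):
    """Rotor-router (Propp) walk on the integer line: every vertex
    deals its chips to its two neighbors in fixed alternating order
    instead of flipping coins."""
    chips = {0: chips_at_origin}
    rotor = {}                          # next direction (+1/-1) each vertex serves
    for _ in range(steps):
        nxt = {}
        for v, c in chips.items():
            d = rotor.get(v, 1)
            for _ in range(c):          # deal out the chips one by one
                nxt[v + d] = nxt.get(v + d, 0) + 1
                d = -d
            rotor[v] = d                # remember where to resume next time
        chips = nxt
    return chips

def expected_counts(n, t):
    """E[#chips at v] after t steps of the simple random walk (binomial)."""
    return {2 * k - t: n * comb(t, k) / 2 ** t for k in range(t + 1)}

chips = propp_walk(100, 8)
dev = max(abs(chips.get(v, 0) - e) for v, e in expected_counts(100, 8).items())
```

    On $\Z$ the deviation stays below a small universal constant for all times, which is the Cooper-Spencer similarity result the abstract refers to; the paper's contribution is that on $k$-ary trees no such constant exists.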

  1. Analysis of pinching in deterministic particle separation

    Science.gov (United States)

    Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German

    2011-11-01

    We investigate the problem of spherical particles settling vertically under gravity (parallel to the Y-axis) through a pinching gap created by an obstacle (spherical or cylindrical, centered at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately, (2) computationally, using the lattice Boltzmann method for particulate systems, and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that, for a given initial separation between the particle center and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than its absence does. Experimentally, this is expected to result in an earlier onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.

  2. Traffic chaotic dynamics modeling and analysis of deterministic network

    Science.gov (United States)

    Wu, Weiqiang; Huang, Ning; Wu, Zhitao

    2016-07-01

    Network traffic is an important and directly acting factor in network reliability and performance. To understand the behavior of network traffic, chaotic dynamics models have been proposed and have greatly aided the analysis of nondeterministic networks. Previous research held that chaotic dynamics behavior was caused by random factors, and that deterministic networks would not exhibit it because they lack such factors. In this paper, we first adopted chaos theory to analyze traffic data collected from a typical deterministic network testbed, avionics full duplex switched Ethernet (AFDX), and found that chaotic dynamics behavior also exists in deterministic networks. Then, to explore the chaos-generating mechanism, we applied mean field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling without any random network factors. Through studying the derived TDE, we propose that chaotic dynamics is one of the natural properties of network traffic and can also be viewed as an effect of the TDE control parameters. A network simulation was performed, and the results verified that network congestion produces the chaotic dynamics in a deterministic network, in agreement with the TDE. Our research will help in analyzing the complicated dynamic behavior of traffic in deterministic networks and contribute to network reliability design and analysis.

  3. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-03-28

    Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  4. Optical Realization of Deterministic Entanglement Concentration of Polarized Photons

    Institute of Scientific and Technical Information of China (English)

    GU Yong-Jian; XIAN Liang; LI Wen-Dong; MA Li-Zhen

    2008-01-01

    We propose a scheme for optical realization of deterministic entanglement concentration of polarized photons. To overcome the difficulty due to the lack of sufficiently strong interactions between photons, teleportation is employed to transfer the polarization states of two photons onto the path and polarization states of a third photon, which is made possible by the recent experimental realization of the deterministic and complete Bell state measurement. Then the required positive operator-valued measurement and further operations can be implemented deterministically by using a linear optical setup. All these are within the reach of current technology.

  5. Thermal Annealing of Exfoliated Graphene

    Directory of Open Access Journals (Sweden)

    Wang Xueshen

    2013-01-01

    Monolayer graphene is obtained by mechanical exfoliation using scotch tape. The effects of thermal annealing on the tape residues and the edges of graphene are investigated. Atomic force microscope images show that almost all the residues can be removed in N2/H2 at 400°C but only agglomerate in vacuum. Raman spectra of the annealed graphene show that both the 2D peak and the G peak blueshift. The full width at half maximum (FWHM) of the 2D peak becomes larger, and the intensity ratio of the 2D peak to the G peak decreases. The edges of graphene are completely attached to the surface of the substrate after annealing.

  6. Non deterministic finite automata for power systems fault diagnostics

    Directory of Open Access Journals (Sweden)

    LINDEN, R.

    2009-06-01

    This paper introduces an application based on non-deterministic finite automata for power system fault diagnosis. Automata for the simpler faults are presented, and the proposed system is compared with an established expert system.
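
    The diagnostic idea can be sketched as subset tracking over a non-deterministic automaton: the states reachable after the observed alarm sequence are exactly the fault hypotheses consistent with it. The states, alarms and transitions below are invented, not taken from the paper's power-system model.

```python
def nfa_diagnose(transitions, start, alarms):
    """Subset tracking over a non-deterministic automaton: after each
    alarm, keep every state reachable from the current hypothesis set.
    Whatever survives the whole alarm sequence is a consistent diagnosis."""
    current = {start}
    for a in alarms:
        current = {t for s in current for t in transitions.get((s, a), ())}
    return current

# Invented toy model of a protection system (not the paper's automaton)
T = {
    ("ok", "overcurrent"): {"line_fault", "transformer_fault"},
    ("ok", "undervoltage"): {"bus_fault"},
    ("line_fault", "breaker_trip"): {"line_fault_confirmed"},
}
hypotheses = nfa_diagnose(T, "ok", ["overcurrent", "breaker_trip"])
```

    Non-determinism is what makes this natural for diagnosis: an ambiguous alarm branches into several hypotheses, and later observations prune the set.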

  7. A Method to Separate Stochastic and Deterministic Information from Electrocardiograms

    CERN Document Server

    Gutíerrez, R M

    2004-01-01

In this work we present a new method to separate the stochastic and deterministic information contained in an electrocardiogram (ECG), which may provide new sources of information for diagnostic purposes. We assume that the ECG contains information corresponding to many different processes related to cardiac activity, as well as contamination from different sources related to the measurement procedure and the nature of the observed system itself. The method starts with the application of an improved archetypal analysis to separate the stochastic and deterministic information. From the stochastic point of view we analyze Rényi entropies, and from the deterministic perspective we calculate the autocorrelation function and the corresponding correlation time. We show that healthy and pathologic information, whether stochastic or deterministic, can be identified by different measures and located in different parts of the ECG.

  8. A proof system for asynchronously communicating deterministic processes

    NARCIS (Netherlands)

de Boer, F.S.; van Hulst, M.

    1994-01-01

We introduce in this paper new communication and synchronization constructs which allow deterministic processes, communicating asynchronously via unbounded FIFO buffers, to cope with an indeterminate environment. We develop for the resulting parallel programming language, which subsumes deterministic …

  10. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research …

  11. A Review of Deterministic Optimization Methods in Engineering and Management

    Directory of Open Access Journals (Sweden)

    Ming-Hua Lin

    2012-01-01

    Full Text Available With the increasing reliance on modeling optimization problems in practical applications, a number of theoretical and algorithmic contributions of optimization have been proposed. The approaches developed for treating optimization problems can be classified into deterministic and heuristic. This paper aims to introduce recent advances in deterministic methods for solving signomial programming problems and mixed-integer nonlinear programming problems. A number of important applications in engineering and management are also reviewed to reveal the usefulness of the optimization methods.

  12. Deterministic Consistency: A Programming Model for Shared Memory Parallelism

    OpenAIRE

    Aviram, Amittai; Ford, Bryan

    2009-01-01

    The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "...

  13. Annealing properties of rice starch.

    Science.gov (United States)

Thermal properties of starch can be modified by annealing, i.e., a pre-treatment in excessive amounts of water at temperatures below the gelatinization temperature. This treatment is known to improve the crystalline properties, and is a useful tool for gaining better control of the functional properties …

  14. Deterministic binary vectors for efficient automated indexing of MEDLINE/PubMed abstracts.

    Science.gov (United States)

    Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R; Bernstam, Elmer V; Cohen, Trevor

    2012-01-01

    The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.
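The deterministic variant described above can be illustrated with a minimal sketch: each term vector is derived from a hash of the term string itself, so no store of random term vectors needs to be maintained. This is an illustrative construction in Python (the dimensionality, hash choice and majority-vote superposition are assumptions, not the paper's exact design):

```python
import hashlib

import numpy as np

DIM = 1024  # illustrative dimensionality

def term_vector(term, dim=DIM):
    """Deterministic binary term vector: seed a PRNG from a hash of the
    term, so the vector can be regenerated on demand by any node."""
    seed = int.from_bytes(hashlib.sha256(term.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=dim, dtype=np.uint8)

def doc_vector(terms):
    """Superpose term vectors by an elementwise majority vote."""
    stack = np.stack([term_vector(t) for t in terms])
    return (2 * stack.sum(axis=0) >= len(terms)).astype(np.uint8)

def hamming_sim(a, b):
    """Similarity in [0, 1]: fraction of agreeing bits."""
    return 1.0 - np.mean(a != b)
```

Because the vectors are a pure function of the term string, future documents can be indexed without sharing state between machines, which is the property that makes distributed implementations straightforward.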

  15. ACTIVITY-BASED COSTING DAN SIMULATED ANNEALING UNTUK PENCARIAN RUTE PADA FLEXIBLE MANUFACTURING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Gregorius Satia Budhi

    2003-01-01

Full Text Available Flexible Manufacturing System (FMS) is a manufacturing system formed from several numerically controlled machines combined with a material handling system, so that different jobs can be processed by different machine sequences. FMS combines the high productivity of transfer-line manufacturing with the flexibility of job-shop manufacturing. In this research, an Activity-Based Costing (ABC) approach was used as the weight when searching for the route of each operation on the proper machine, so that the total production cost can be optimized. The search method used in this experiment is simulated annealing, a variant of hill climbing search. The ideal operation time to process a part was used as the annealing schedule. Empirical tests show that using the ABC approach and simulated annealing for route search (routing) can optimize the total production cost, and that using the ideal operation time as the annealing schedule keeps the processing time well under control.
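The routing search described above can be sketched as a generic simulated annealing loop over machine assignments. The toy cost table and geometric cooling schedule below are illustrative assumptions standing in for the ABC weights and the ideal-operation-time schedule:

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=50.0, cooling=0.95, steps=2000, seed=7):
    """Generic simulated annealing: always accept improvements, accept a
    worse route with probability exp(-delta/T), and cool T geometrically
    (a stand-in for the paper's ideal-operation-time schedule)."""
    rng = random.Random(seed)
    current, c_cur = initial, cost(initial)
    best, c_best = current, c_cur
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        c_cand = cost(cand)
        delta = c_cand - c_cur
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current, c_cur = cand, c_cand
            if c_cur < c_best:
                best, c_best = current, c_cur
        t *= cooling
    return best, c_best

# Toy FMS instance (made up): cost of running operation i on machine m.
COST = [[4, 2, 9], [3, 7, 1], [8, 2, 5], [6, 6, 2], [1, 9, 4]]

def route_cost(route):
    """Total production cost of assigning operation i to machine route[i]."""
    return sum(COST[i][m] for i, m in enumerate(route))

def flip_one(route, rng):
    """Neighbor move: reassign one randomly chosen operation."""
    r = list(route)
    r[rng.randrange(len(r))] = rng.randrange(3)
    return tuple(r)
```

For this separable toy instance the loop converges to the per-operation minimum-cost assignment; the real problem couples operations through the material handling system, which is where the annealing schedule matters.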

  16. Feasibility of Simulated Annealing Tomography

    CERN Document Server

    Vo, Nghia T; Moser, Herbert O

    2014-01-01

    Simulated annealing tomography (SAT) is a simple iterative image reconstruction technique which can yield a superior reconstruction compared with filtered back-projection (FBP). However, the very high computational cost of iteratively calculating discrete Radon transform (DRT) has limited the feasibility of this technique. In this paper, we propose an approach based on the pre-calculated intersection lengths array (PILA) which helps to remove the step of computing DRT in the simulated annealing procedure and speed up SAT by over 300 times. The enhancement of convergence speed of the reconstruction process using the best of multiple-estimate (BoME) strategy is introduced. The performance of SAT under different conditions and in comparison with other methods is demonstrated by numerical experiments.
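The PILA idea can be sketched as follows: with the ray/pixel intersection lengths precomputed in a matrix `A`, flipping a single pixel changes the projection residual by one column of `A`, so each annealing step avoids recomputing the full discrete Radon transform. A minimal Python sketch for a binary object (sizes, schedule and seed are illustrative, not the paper's settings):

```python
import numpy as np

def sat_reconstruct(A, p, steps=4000, t0=5.0, cooling=0.999, seed=1):
    """Simulated-annealing tomography sketch for a binary object.
    A holds precomputed intersection lengths (the PILA idea), so a
    single-pixel flip updates the projection residual incrementally."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=A.shape[1]).astype(float)  # random start
    r = A @ x - p                  # current projection residual
    e = float(r @ r)               # current squared error
    best_x, best_e = x.copy(), e
    t = t0
    for _ in range(steps):
        j = int(rng.integers(A.shape[1]))
        delta = 1.0 - 2.0 * x[j]           # flip pixel j: 0->1 or 1->0
        r_new = r + A[:, j] * delta        # incremental residual update
        e_new = float(r_new @ r_new)
        if e_new <= e or rng.random() < np.exp((e - e_new) / t):
            x[j] += delta
            r, e = r_new, e_new
            if e < best_e:
                best_x, best_e = x.copy(), e
        t *= cooling
    return best_x, best_e
```

Each step costs O(number of rays) instead of O(rays × pixels), which is the source of the reported speed-up.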

  17. A NEW DETERMINISTIC FORMULATION FOR DYNAMIC STOCHASTIC PROGRAMMING PROBLEMS AND ITS NUMERICAL COMPARISON WITH OTHERS

    Institute of Scientific and Technical Information of China (English)

    陈志平

    2003-01-01

A new deterministic formulation, called the conditional expectation formulation, is proposed for dynamic stochastic programming problems in order to overcome some disadvantages of existing deterministic formulations. We then check the impact of the new deterministic formulation and two other deterministic formulations on the corresponding problem size, nonzero elements and solution time by solving some typical dynamic stochastic programming problems with different interior point algorithms. Numerical results show the advantage and applicability of the new deterministic formulation.
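For a flavour of what a deterministic formulation of a stochastic program looks like, here is a toy two-stage newsvendor written as its scenario-based deterministic equivalent (this is the standard scenario-expansion formulation, not the paper's conditional expectation formulation; the prices and demand distribution are made up):

```python
# Two-stage stochastic program as a deterministic equivalent: choose the
# order quantity q before demand is known, then evaluate the expected
# profit over an explicit scenario set.
PRICE, COST = 9.0, 3.0
SCENARIOS = [(d, 0.1) for d in range(1, 11)]  # demand d with probability 0.1

def expected_profit(q):
    """Expected profit of first-stage decision q over all scenarios."""
    return sum(prob * (PRICE * min(q, d) - COST * q) for d, prob in SCENARIOS)

def solve():
    """Enumerate first-stage decisions (an LP/IP solver would be used at scale)."""
    return max(range(0, 11), key=expected_profit)
```

The optimum matches the critical-fractile solution: order the largest q with P(demand >= q) > COST/PRICE. In realistic dynamic problems the scenario tree explodes, which is why the size of the resulting deterministic formulation matters.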

  18. Recursive simulation of quantum annealing

    CERN Document Server

    Sowa, A P; Samson, J H; Savel'ev, S E; Zagoskin, A M; Heidel, S; Zúñiga-Anaya, J C

    2015-01-01

    The evaluation of the performance of adiabatic annealers is hindered by lack of efficient algorithms for simulating their behaviour. We exploit the analyticity of the standard model for the adiabatic quantum process to develop an efficient recursive method for its numerical simulation in case of both unitary and non-unitary evolution. Numerical simulations show distinctly different distributions for the most important figure of merit of adiabatic quantum computing --- the success probability --- in these two cases.

  19. Residual entropy and simulated annealing

    OpenAIRE

    Ettelaie, R.; Moore, M. A.

    1985-01-01

    Determining the residual entropy in the simulated annealing approach to optimization is shown to provide useful information on the true ground state energy. The one-dimensional Ising spin glass is studied to exemplify the procedure and in this case the residual entropy is related to the number of one-spin flip stable metastable states. The residual entropy decreases to zero only logarithmically slowly with the inverse cooling rate.

  20. Simulated annealing model of acupuncture

    Science.gov (United States)

    Shang, Charles; Szu, Harold

    2015-05-01

The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can perturb a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). This difference diminishes as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. A properly chosen single-acupoint treatment for a certain disorder can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the results can be highly repeatable if the patients are relatively healthy and young, but are usually mixed if the patients are old, frail and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity and patient age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.

  1. Deterministic dynamics of neural activity during absence seizures in rats

    Science.gov (United States)

    Ouyang, Gaoxiang; Li, Xiaoli; Dang, Chuangyin; Richards, Douglas A.

    2009-04-01

    The study of brain electrical activities in terms of deterministic nonlinear dynamics has recently received much attention. Forbidden ordinal patterns (FOP) is a recently proposed method to investigate the determinism of a dynamical system through the analysis of intrinsic ordinal properties of a nonstationary time series. The advantages of this method in comparison to others include simplicity and low complexity in computation without further model assumptions. In this paper, the FOP of the EEG series of genetic absence epilepsy rats from Strasbourg was examined to demonstrate evidence of deterministic dynamics during epileptic states. Experiments showed that the number of FOP of the EEG series grew significantly from an interictal to an ictal state via a preictal state. These findings indicated that the deterministic dynamics of neural networks increased significantly in the transition from the interictal to the ictal states and also suggested that the FOP measures of the EEG series could be considered as a predictor of absence seizures.
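The FOP count itself is straightforward to compute: slide a window of length d over the series, record each window's ordinal (rank) pattern, and count how many of the d! possible patterns never occur. A sketch, using the fully chaotic logistic map as a stand-in deterministic system (not the EEG data of the paper):

```python
import math

import numpy as np

def count_forbidden_patterns(x, d=3):
    """Number of ordinal patterns of order d that never occur in x.
    An i.i.d. (stochastic) series realises all d! patterns given enough
    data; deterministic dynamics typically leave some patterns forbidden."""
    seen = set()
    for i in range(len(x) - d + 1):
        # rank pattern of the window, e.g. a decreasing triple -> (2, 1, 0)
        seen.add(tuple(np.argsort(x[i:i + d])))
    return math.factorial(d) - len(seen)
```

For the logistic map x_{n+1} = 4x_n(1 - x_n) the strictly decreasing pattern of length 3 is forbidden, so the count is positive, while i.i.d. noise of the same length realises all six patterns.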

  2. DETERMINISTIC TRANSPORT METHODS AND CODES AT LOS ALAMOS

    Energy Technology Data Exchange (ETDEWEB)

    J. E. MOREL

    1999-06-01

The purposes of this paper are to: Present a brief history of deterministic transport methods development at Los Alamos National Laboratory from the 1950s to the present; Discuss the current status and capabilities of deterministic transport codes at Los Alamos; and Discuss future transport needs and possible future research directions. Our discussion of methods research necessarily includes only a small fraction of the total research actually done. The works that have been included represent a very subjective choice on the part of the author that was strongly influenced by his personal knowledge and experience. The remainder of this paper is organized in four sections: the first relates to deterministic methods research performed at Los Alamos, the second relates to production codes developed at Los Alamos, the third relates to the current status of transport codes at Los Alamos, and the fourth relates to future research directions at Los Alamos.

  3. Deterministic and stochastic features of rhythmic human movement.

    Science.gov (United States)

    van Mourik, Anke M; Daffertshofer, Andreas; Beek, Peter J

    2006-03-01

    The dynamics of rhythmic movement has both deterministic and stochastic features. We advocate a recently established analysis method that allows for an unbiased identification of both types of system components. The deterministic components are revealed in terms of drift coefficients and vector fields, while the stochastic components are assessed in terms of diffusion coefficients and ellipse fields. The general principles of the procedure and its application are explained and illustrated using simulated data from known dynamical systems. Subsequently, we exemplify the method's merits in extracting deterministic and stochastic aspects of various instances of rhythmic movement, including tapping, wrist cycling and forearm oscillations. In particular, it is shown how the extracted numerical forms can be analysed to gain insight into the dependence of dynamical properties on experimental conditions.
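The drift and diffusion coefficients mentioned above are conventionally estimated as the first two Kramers-Moyal conditional moments of the increments. A minimal sketch, validated here on a synthetic Ornstein-Uhlenbeck process rather than movement data (bin counts and simulation parameters are illustrative):

```python
import numpy as np

def drift_diffusion(x, dt, nbins=20, min_count=200):
    """Bin-wise estimates of the drift D1(x) = <dx|x>/dt and diffusion
    D2(x) = <dx^2|x>/(2 dt) from a single trajectory (Kramers-Moyal
    conditional moments, in sketch form)."""
    dx, xs = np.diff(x), x[:-1]
    edges = np.linspace(xs.min(), xs.max(), nbins + 1)
    idx = np.digitize(xs, edges[1:-1])
    centers, d1, d2 = [], [], []
    for b in range(nbins):
        m = idx == b
        if m.sum() >= min_count:          # keep only well-populated bins
            centers.append(xs[m].mean())
            d1.append(dx[m].mean() / dt)
            d2.append((dx[m] ** 2).mean() / (2.0 * dt))
    return np.array(centers), np.array(d1), np.array(d2)

def simulate_ou(n=200_000, dt=0.01, sigma=0.5, seed=3):
    """Euler simulation of dx = -x dt + sigma dW: the true coefficients
    are D1(x) = -x and D2(x) = sigma^2 / 2 = 0.125."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] - x[i] * dt + noise[i]
    return x
```

Fitting a line through the estimated D1(x) recovers the drift slope, and the estimated D2(x) is flat near sigma^2/2, mirroring how the method separates the deterministic vector field from the stochastic component in movement data.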

  4. Estimating the epidemic threshold on networks by deterministic connections

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu [School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004 (China); Fu, Xinchu [Department of Mathematics, Shanghai University, Shanghai 200444 (China); Small, Michael [School of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009 (Australia)

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
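A minimal version of the idea: in the mean-field SIS setting the epidemic threshold is the reciprocal of the largest adjacency eigenvalue, so computing it from the deterministic connections alone yields a bound, since adding stochastic edges to a nonnegative adjacency matrix can only increase the spectral radius. A sketch (the ring topology is an illustrative choice, not one of the paper's constructed models):

```python
import numpy as np

def sis_threshold(adj):
    """Mean-field SIS threshold: outbreaks die out when the effective
    infection rate beta/gamma is below 1/lambda_max(adjacency)."""
    return 1.0 / float(np.max(np.linalg.eigvalsh(adj)))

# Deterministic part only: an undirected ring of 8 nodes (lambda_max = 2).
n = 8
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
```

Adding any stochastic edge on top of the ring weakly increases lambda_max, so the threshold computed from the deterministic part is an upper bound on the true threshold.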

  5. The Deterministic Part of IPC-4: An Overview

    CERN Document Server

    Edelkamp, S; 10.1613/jair.1677

    2011-01-01

We provide an overview of the organization and results of the deterministic part of the 4th International Planning Competition, i.e., of the part concerned with evaluating systems doing deterministic planning. IPC-4 attracted even more competing systems than its already large predecessors, and the competition event was revised in several important respects. After giving an introduction to the IPC, we briefly explain the main differences between the deterministic part of IPC-4 and its predecessors. We then formally introduce the language used, called PDDL2.2, which extends PDDL2.1 with derived predicates and timed initial literals. We list the competing systems and overview the results of the competition. The entire set of data is far too large to be presented in full. We provide a detailed summary; the complete data is available in an online appendix. We explain how we awarded the competition prizes.

  6. Deterministic treatment of model error in geophysical data assimilation

    CERN Document Server

    Carrassi, Alberto

    2015-01-01

This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational cost …

  7. Deterministic sensing matrices in compressive sensing: a survey.

    Science.gov (United States)

    Nguyen, Thu L N; Shin, Yoan

    2013-01-01

Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. Such matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
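As a concrete example of a deterministic construction with a provable coherence bound, the sketch below builds a DeVore-style binary sensing matrix from polynomials over Z_p (parameters are illustrative; this is one classical construction, not necessarily one of those surveyed in the paper):

```python
from itertools import product

import numpy as np

def devore_matrix(p, r=1):
    """DeVore-style deterministic binary sensing matrix: each column is
    the 0/1 indicator of the graph {(x, f(x))} of a polynomial f of
    degree <= r over Z_p, normalised to unit norm.  Two distinct such
    polynomials agree in at most r points, so the coherence is <= r/p."""
    cols = []
    for coeffs in product(range(p), repeat=r + 1):
        col = np.zeros((p, p))
        for x in range(p):
            y = sum(c * x ** k for k, c in enumerate(coeffs)) % p
            col[x, y] = 1.0
        cols.append(col.ravel())
    return np.array(cols).T / np.sqrt(p)   # shape p^2 x p^(r+1)
```

The matrix is fully structured: it needs no stored randomness, each column has exactly p nonzeros, and the coherence bound r/p follows directly from polynomial root counting.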

  8. Deterministic and stochastic error bounds in numerical analysis

    CERN Document Server

    Novak, Erich

    1988-01-01

    In these notes different deterministic and stochastic error bounds of numerical analysis are investigated. For many computational problems we have only partial information (such as n function values) and consequently they can only be solved with uncertainty in the answer. Optimal methods and optimal error bounds are sought if only the type of information is indicated. First, worst case error bounds and their relation to the theory of n-widths are considered; special problems such approximation, optimization, and integration for different function classes are studied and adaptive and nonadaptive methods are compared. Deterministic (worst case) error bounds are often unrealistic and should be complemented by different average error bounds. The error of Monte Carlo methods and the average error of deterministic methods are discussed as are the conceptual difficulties of different average errors. An appendix deals with the existence and uniqueness of optimal methods. This book is an introduction to the area and a...

  9. Deterministic Quantum Key Distribution Using Gaussian-Modulated Squeezed States

    Institute of Scientific and Technical Information of China (English)

    何广强; 朱俊; 曾贵华

    2011-01-01

A continuous variable ping-pong scheme, which is utilized to generate a deterministic private key, is proposed. The proposed scheme is implemented physically by using Gaussian-modulated squeezed states. The deterministic characteristic, i.e., no basis reconciliation between the two parties, leads to nearly double the efficiency of standard quantum key distribution schemes. In particular, the separate control mode is not needed in the proposed scheme, so it is simpler and more practical than previous ping-pong schemes. The attacker may be detected easily through the fidelity of the transmitted signal, and may not succeed with the beam-splitter attack strategy.

  10. MIMO capacity for deterministic channel models: sublinear growth

    DEFF Research Database (Denmark)

    Bentosela, Francois; Cornean, Horia; Marchetti, Nicola

    2013-01-01

This is the second paper by the authors in a series concerned with the development of a deterministic model for the transfer matrix of a MIMO system. In our previous paper, we started from the Maxwell equations and described the generic structure of such a deterministic transfer matrix. … Under some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict sublinear behavior.

  11. Structural and Spectral Properties of Deterministic Aperiodic Optical Structures

    Directory of Open Access Journals (Sweden)

    Luca Dal Negro

    2016-12-01

Full Text Available In this comprehensive paper we address structure-property relationships in a number of representative systems with periodic, random, quasi-periodic and deterministic aperiodic geometry. We use the interdisciplinary methods of spatial point pattern analysis and spectral graph theory, as well as the rigorous Green's matrix method, which for the first time provides access to the electromagnetic scattering behavior and spectral fluctuations (distributions of complex eigenvalues as well as of their level spacings) of deterministic aperiodic optical media.

  12. Deterministic approaches for noncoherent communications with chaotic carriers

    Institute of Scientific and Technical Information of China (English)

Liu Xiongying; Qiu Shuisheng; Francis C. M. Lau

    2005-01-01

Two problems are addressed. The first is the noise decontamination of chaotic carriers using a deterministic approach to reconstruct pseudo-trajectories; the second is the design of communication schemes with chaotic carriers. After presenting our deterministic noise decontamination algorithm, we apply it to a conventional chaos shift keying (CSK) communication system. The difference in Euclidean distance between the noisy trajectory and the decontaminated trajectory in phase space can be used to detect the transmitted symbol non-coherently, simply and effectively. It is shown that this detection method can achieve bit error rate performance comparable to other non-coherent systems.

  13. Deterministic extinction by mixing in cyclically competing species

    Science.gov (United States)

    Feldager, Cilie W.; Mitarai, Namiko; Ohta, Hiroki

    2017-03-01

    We consider a cyclically competing species model on a ring with global mixing at finite rate, which corresponds to the well-known Lotka-Volterra equation in the limit of infinite mixing rate. Within a perturbation analysis of the model from the infinite mixing rate, we provide analytical evidence that extinction occurs deterministically at sufficiently large but finite values of the mixing rate for any species number N ≥3 . Further, by focusing on the cases of rather small species numbers, we discuss numerical results concerning the trajectories toward such deterministic extinction, including global bifurcations caused by changing the mixing rate.

  14. Strong white photoluminescence from annealed zeolites

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Zhenhua, E-mail: baizh46@gmail.com [School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637457 (Singapore); Fujii, Minoru; Imakita, Kenji; Hayashi, Shinji [Department of Electrical and Electronic Engineering, Graduate School of Engineering, Kobe University, Rokkodai, Nada, Kobe 657-8501 (Japan)

    2014-01-15

The optical properties of zeolites annealed at various temperatures are investigated for the first time. The annealed zeolites exhibit strong white photoluminescence (PL) under ultraviolet light excitation. With increasing annealing temperature, the emission intensity of the annealed zeolites first increases and then decreases. At the same time, the PL peak red-shifts from 495 nm to 530 nm, and then returns to 500 nm. The strongest emission appears when the annealing temperature is 500 °C. The quantum yield of the sample is measured to be ∼10%. The PL lifetime monotonically increases from 223 μs to 251 μs with increasing annealing temperature. The origin of the white PL is ascribed to oxygen vacancies formed during the annealing process. -- Highlights: • The optical properties of zeolites annealed at various temperatures are investigated. • The annealed zeolites exhibit strong white photoluminescence. • The maximum PL enhancement reaches as large as 62 times. • The lifetime shows little dependence on annealing temperature. • The origin of the white emission is ascribed to oxygen vacancies.

  15. An Efficient and Flexible Deterministic Framework for Multithreaded Programs

    Institute of Scientific and Technical Information of China (English)

    卢凯; 周旭; 王小平; 陈沉

    2015-01-01

Determinism is very useful to multithreaded programs in debugging, testing, etc. Many deterministic approaches have been proposed, such as deterministic multithreading (DMT) and deterministic replay. However, these systems are either inefficient or target a single purpose, which is not flexible. In this paper, we propose an efficient and flexible deterministic framework for multithreaded programs. Our framework implements determinism in two steps: relaxed determinism and strong determinism. Relaxed determinism solves data races efficiently by using a proper weak memory consistency model. After that, we implement strong determinism by solving lock contentions deterministically. Since we can apply different approaches for these two steps independently, our framework provides a spectrum of deterministic choices, including a nondeterministic system (fast), a weak deterministic system (fast and conditionally deterministic), a DMT system, and a deterministic replay system. Our evaluation shows that the DMT configuration of this framework can even outperform a state-of-the-art DMT system.
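The flavour of enforcing a deterministic schedule can be shown with a toy Python sketch: if actions on shared state are only allowed on a fixed round-robin turn, the interleaving (and thus the output) is identical on every run, regardless of OS scheduling. This is a didactic illustration of the DMT idea, not the framework's actual mechanism:

```python
import threading

class DeterministicTurns:
    """Toy determinism enforcer: threads may only act on their fixed
    round-robin turn, so lock-acquisition order and shared-state
    interleavings are reproduced exactly on every run."""
    def __init__(self, nthreads):
        self.n = nthreads
        self.turn = 0
        self.cv = threading.Condition()

    def step(self, tid, action):
        with self.cv:
            while self.turn != tid:       # wait for this thread's turn
                self.cv.wait()
            action()                      # runs at a deterministic point
            self.turn = (self.turn + 1) % self.n
            self.cv.notify_all()

def run(nthreads=2, rounds=3):
    """Two workers append their id in lockstep; the log is always the same."""
    sched, log = DeterministicTurns(nthreads), []
    def worker(tid):
        for _ in range(rounds):
            sched.step(tid, lambda: log.append(tid))
    threads = [threading.Thread(target=worker, args=(t,)) for t in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log
```

Serialising every step is of course the slow extreme of the spectrum; real DMT systems recover parallelism by only ordering conflicting operations.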

  16. Loviisa Unit One: Annealing - healing

    Energy Technology Data Exchange (ETDEWEB)

    Kohopaeae, J.; Virsu, R. [ed.; Henriksson, A. [ed.

    1997-11-01

Unit 1 of the Loviisa nuclear power plant was annealed during the refuelling outage in the summer of 1996. This heat treatment restored the toughness properties of the pressure vessel weld, which had been embrittled by neutron radiation, to nearly those of a new weld. The treatment itself was an ordinary metallurgical procedure that took only a few days, but the material studies that preceded it began over fifteen years ago and have put IVO at the forefront of worldwide expertise in radiation embrittlement.

  17. Enhanced deterministic phase retrieval using a partially developed speckle field

    DEFF Research Database (Denmark)

    Almoro, Percival F.; Waller, Laura; Agour, Mostafa;

    2012-01-01

    A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle in...

  18. Deterministic combination of numerical and physical coastal wave models

    DEFF Research Database (Denmark)

    Zhang, H.W.; Schäffer, Hemming Andreas; Jakobsen, K.P.

    2007-01-01

    A deterministic combination of numerical and physical models for coastal waves is developed. In the combined model, a Boussinesq model MIKE 21 BW is applied for the numerical wave computations. A piston-type 2D or 3D wavemaker and the associated control system with active wave absorption provides...

  19. Simulation of quantum computation : A deterministic event-based approach

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, K; De Raedt, H

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  20. Reasoning against a deterministic conception of the world

    NARCIS (Netherlands)

    Huppes-Cluysenaer, L.

    2013-01-01

    Aristotle situates freedom in nature and slavery in reason. His concept of freedom is inherently connected with the indeterminist belief in a double impulse of the body. The deterministic conception of nature - introduced during Enlightenment - has brought a reversal of this relation: nature is

  1. Line and lattice networks under deterministic interference models

    NARCIS (Netherlands)

    Goseling, Jasper; Gastpar, Michael; Weber, Jos H.

    2011-01-01

    Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of re

  2. Deterministic event-based simulation of quantum phenomena

    NARCIS (Netherlands)

    De Raedt, K; De Raedt, H; Michielsen, K

    2005-01-01

    We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the inform

  3. Exponential Regret Bounds for Gaussian Process Bandits with Deterministic Observations

    NARCIS (Netherlands)

    de Freitas, N.; Smola, A.J.; Zoghi, M.; Langford, J.; Pineau, J.

    2012-01-01

    This paper analyzes the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al, 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, Srinivas

  4. Limiting Shapes for Deterministic Centrally Seeded Growth Models

    NARCIS (Netherlands)

    Fey-den Boer, Anne; Redig, Frank

    2007-01-01

We study the rotor router model and two deterministic sandpile models. For the rotor router model in ℤ^d, Levine and Peres proved that the limiting shape of the growth cluster is a sphere. For the other two models, only bounds in dimension 2 are known. A unified approach for these models with a
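The rotor-router growth model described in this record is simple to simulate. The sketch below is an illustration of the standard model (not the authors' code): each site of ℤ² carries a rotor cycling through the four neighbour directions, and each particle walks from the origin, turning rotors, until it reaches an unoccupied site.

```python
def rotor_router_cluster(n_particles):
    """Grow a rotor-router aggregation cluster in Z^2 from the origin.

    Each visited site advances its rotor one step (N, E, S, W cycle) and
    sends the particle in the rotor's previous direction; the particle
    settles at the first unoccupied site it reaches.
    """
    DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W
    occupied = set()
    rotors = {}  # site -> index into DIRS
    for _ in range(n_particles):
        x, y = 0, 0
        while (x, y) in occupied:
            r = rotors.get((x, y), 0)
            rotors[(x, y)] = (r + 1) % 4  # advance the rotor
            dx, dy = DIRS[r]
            x, y = x + dx, y + dy
        occupied.add((x, y))
    return occupied

cluster = rotor_router_cluster(100)
```

For growing particle counts the occupied set approaches a disk, consistent with the Levine-Peres limiting-shape result cited in the abstract.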

  5. Deterministic or stochastic choices in retinal neuron specification

    OpenAIRE

    Chen, Zhenqing; LI Xin; DESPLAN, CLAUDE

    2012-01-01

There are two views on vertebrate retinogenesis: a deterministic model dependent on fixed lineages, and a stochastic model in which choices of division modes and cell fates cannot be predicted. In this issue of Neuron, He et al. (2012) address this question in zebrafish using live imaging and mathematical modeling.

  7. Multidirectional sorting modes in deterministic lateral displacement devices

    DEFF Research Database (Denmark)

    Long, B.R.; Heller, Martin; Beech, J.P.

    2008-01-01

    Deterministic lateral displacement (DLD) devices separate micrometer-scale particles in solution based on their size using a laminar microfluidic flow in an array of obstacles. We investigate array geometries with rational row-shift fractions in DLD devices by use of a simple model including both...

  8. Deterministic control of ferroelastic switching in multiferroic materials

    NARCIS (Netherlands)

    Balke, N.; Choudhury, S.; Jesse, S.; Huijben, M.; Chu, Y.-H.; Baddorf, A.P.; Chen, L.Q.; Ramesh, R.; Kalinin, S.V.

    2009-01-01

    Multiferroic materials showing coupled electric, magnetic and elastic orderings provide a platform to explore complexity and new paradigms for memory and logic devices. Until now, the deterministic control of non-ferroelectric order parameters in multiferroics has been elusive. Here, we demonstrate

  9. Space-Bounded Complexity Classes and Iterated Deterministic Substitution

    NARCIS (Netherlands)

    Asveld, P.R.J.

    1979-01-01

We investigate the effect on the space complexity when a language family $K$ is extended by means of iterated $\lambda$-free deterministic substitution to a family $\eta(K)$. If each language in $K$ is accepted by a one-way nondeterministic multi-tape Turing machine within space $S(n)$ for some mono

  10. Space-Bounded Complexity Classes and Iterated Deterministic Substitution

    NARCIS (Netherlands)

    Asveld, P.R.J.

    1980-01-01

We investigate the effect on the space complexity when a language family $K$ is extended by means of $\lambda$-free deterministic substitution to the family $\eta(K)$. If each language in $K$ is accepted by a one-way nondeterministic multi-tape Turing-machine within space $S(n)$ for some monotonic s

  11. Controllability of deterministic networks with the identical degree sequence.

    Directory of Open Access Journals (Sweden)

    Xiujuan Ma

Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on its topology. Liu, Barabási et al. speculated that the degree distribution is one of the most important factors affecting controllability for an arbitrary complex directed network with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two of these deterministic networks ((1,3)-flower and (2,2)-flower) in detail by exact controllability theory and give exact results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the networks in the (x,y)-flower family have the same degree sequence but totally different controllability, so the degree distribution by itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks.

  12. Controllability of deterministic networks with the identical degree sequence.

    Science.gov (United States)

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on its topology. Liu, Barabási et al. speculated that the degree distribution is one of the most important factors affecting controllability for an arbitrary complex directed network with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two of these deterministic networks ((1, 3)-flower and (2, 2)-flower) in detail by exact controllability theory and give exact results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the networks in the (x,y)-flower family have the same degree sequence but totally different controllability, so the degree distribution by itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks.
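For undirected, unweighted networks, the exact controllability criterion applied in this record reduces to the maximum multiplicity among the adjacency-matrix eigenvalues (equivalently, the maximum of N − rank(λI − A) over eigenvalues λ). A small NumPy sketch of that criterion (my illustration, not the paper's code):

```python
import numpy as np

def min_driver_nodes(adj):
    """Minimum number of driver nodes of an undirected, unweighted network
    under exact controllability theory: the maximum multiplicity of the
    adjacency-matrix eigenvalues (for symmetric A this equals
    N - rank(lambda*I - A))."""
    A = np.asarray(adj, dtype=float)
    eigvals = np.sort(np.linalg.eigvalsh(A))
    best, run = 1, 1
    for prev, cur in zip(eigvals, eigvals[1:]):
        run = run + 1 if cur - prev < 1e-8 else 1  # group equal eigenvalues
        best = max(best, run)
    return best
```

For example, a 3-node path (eigenvalues ±√2, 0, all simple) needs one driver node, while a star with three leaves (eigenvalue 0 with multiplicity 2) needs two, even though both are small trees.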

  13. Calculating Certified Compilers for Non-deterministic Languages

    DEFF Research Database (Denmark)

    Bahr, Patrick

    2015-01-01

    Reasoning about programming languages with non-deterministic semantics entails many difficulties. For instance, to prove correctness of a compiler for such a language, one typically has to split the correctness property into a soundness and a completeness part, and then prove these two parts...

  14. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  15. Deterministic retrieval of complex Green's functions using hard X rays.

    Science.gov (United States)

    Vine, D J; Paganin, D M; Pavlov, K M; Uesugi, K; Takeuchi, A; Suzuki, Y; Yagi, N; Kämpfe, T; Kley, E-B; Förster, E

    2009-01-30

    A massively parallel deterministic method is described for reconstructing shift-invariant complex Green's functions. As a first experimental implementation, we use a single phase contrast x-ray image to reconstruct the complex Green's function associated with Bragg reflection from a thick perfect crystal. The reconstruction is in excellent agreement with a classic prediction of dynamical diffraction theory.

  16. A Unit on Deterministic Chaos for Student Teachers

    Science.gov (United States)

    Stavrou, D.; Assimopoulos, S.; Skordoulis, C.

    2013-01-01

    A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…

  17. Scheme for deterministic Bell-state-measurement-free quantum teleportation

    CERN Document Server

Yang, Ming; Cao, Zhuo-Liang

    2004-01-01

    A deterministic teleportation scheme for unknown atomic states is proposed in cavity QED. The Bell state measurement is not needed in the teleportation process, and the success probability can reach 1.0. In addition, the current scheme is insensitive to the cavity decay and thermal field.

  18. Using a satisfiability solver to identify deterministic finite state automata

    NARCIS (Netherlands)

    Heule, M.J.H.; Verwer, S.

    2009-01-01

    We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we p
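The search problem behind DFA identification can be conveyed with a naive brute-force sketch: find the smallest automaton consistent with labeled sample strings. This toy version is my illustration of the problem statement only; the paper instead encodes the search into SAT and hands it to a solver.

```python
from itertools import product

def smallest_dfa(accept, reject, alphabet="ab", max_states=3):
    """Brute-force search for the smallest DFA consistent with the samples.
    States are 0..n-1 with start state 0; returns (n, transitions, finals),
    or None if no DFA with at most max_states states is consistent."""
    for n in range(1, max_states + 1):
        states = range(n)
        keys = [(q, a) for q in states for a in alphabet]
        # enumerate every possible transition function delta
        for trans in product(states, repeat=len(keys)):
            delta = dict(zip(keys, trans))

            def run(w):
                q = 0
                for c in w:
                    q = delta[(q, c)]
                return q

            finals = {run(w) for w in accept}
            if finals & {run(w) for w in reject}:
                continue  # a rejected string ends in an accepting state
            return n, delta, finals
    return None
```

Even for tiny alphabets the number of transition functions grows as n^(n·|Σ|), which is exactly why the SAT encoding studied in this record is attractive.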

  19. Empirical and deterministic accuracies of across-population genomic prediction

    NARCIS (Netherlands)

    Wientjes, Y.C.J.; Veerkamp, R.F.; Bijma, P.; Bovenhuis, H.; Schrooten, C.; Calus, M.P.L.

    2015-01-01

    Background: Differences in linkage disequilibrium and in allele substitution effects of QTL (quantitative trait loci) may hinder genomic prediction across populations. Our objective was to develop a deterministic formula to estimate the accuracy of across-population genomic prediction, for which

  20. Control rod worth calculations using deterministic and stochastic methods

    Energy Technology Data Exchange (ETDEWEB)

Varvayanni, M. [NCSR 'DEMOKRITOS', PO Box 60228, 15310 Aghia Paraskevi (Greece)]; Savva, P., E-mail: melina@ipta.demokritos.g [NCSR 'DEMOKRITOS', PO Box 60228, 15310 Aghia Paraskevi (Greece)]; Catsaros, N. [NCSR 'DEMOKRITOS', PO Box 60228, 15310 Aghia Paraskevi (Greece)]

    2009-11-15

Knowledge of the efficiency of a control rod to absorb excess reactivity in a nuclear reactor, i.e. knowledge of its reactivity worth, is very important from many points of view. These include the analysis and assessment of the shutdown margin of new core configurations (upgrade, conversion, refuelling, etc.) as well as several operational needs, such as calibration of the control rods, e.g. in case reactivity insertion experiments are planned. The control rod worth can be assessed either experimentally or theoretically, mainly through the use of neutronic codes. In the present work two different theoretical approaches, a deterministic and a stochastic one, are used to estimate the integral and the differential worth of two control rods used in the Greek Research Reactor (GRR-1). The deterministic approach uses the neutronics code system SCALE (modules NITAWL/XSDRNPM) and CITATION, while the stochastic one uses the Monte Carlo code TRIPOLI. Both approaches follow the procedure of reactivity insertion steps, and their results are tested against measurements conducted in the reactor. The goal of this work is to examine the capability of a deterministic code system to reliably simulate the worth of a control rod, based also on comparisons with the detailed Monte Carlo simulation, while various options are tested with respect to the reliability of the deterministic results.
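The reactivity-insertion procedure behind both approaches amounts to converting k_eff values at successive rod positions into reactivity and differencing. A minimal sketch with hypothetical numbers (the codes named in the record compute k_eff itself; positions and k_eff values below are invented for illustration):

```python
def reactivity(k_eff):
    """Reactivity in pcm from an effective multiplication factor:
    rho = (k - 1) / k."""
    return (k_eff - 1.0) / k_eff * 1e5

def rod_worth_curves(positions, k_effs):
    """Integral and differential rod worth from k_eff values computed
    (or measured) at a series of rod withdrawal steps."""
    rho = [reactivity(k) for k in k_effs]
    integral = [r - rho[0] for r in rho]                 # worth vs. first step
    differential = [(rho[i + 1] - rho[i]) / (positions[i + 1] - positions[i])
                    for i in range(len(rho) - 1)]        # pcm per cm
    return integral, differential

# hypothetical k_eff values at rod withdrawal positions (cm)
positions = [0.0, 10.0, 20.0, 30.0]
k_effs = [0.995, 0.999, 1.003, 1.006]
integral, differential = rod_worth_curves(positions, k_effs)
```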

  1. Deterministic Entanglement via Molecular Dissociation in Integrated Atom Optics

    OpenAIRE

    Zhao, Bo; Chen, Zeng-Bing; Pan, Jian-Wei; Schmiedmayer, J.; Recati, Alessio; Astrakharchik, Grigory E.; Calarco, Tommaso

    2005-01-01

Deterministic entanglement of neutral cold atoms can be achieved by combining several already available techniques like the creation/dissociation of neutral diatomic molecules, manipulating atoms with microfabricated structures (atom chips) and detecting single atoms with almost 100% efficiency. Manipulating this entanglement with integrated/linear atom optics will open a new perspective for quantum information processing with neutral atoms.

  2. Deterministic teleportation using single-photon entanglement as a resource

    DEFF Research Database (Denmark)

    Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.

    2012-01-01

    We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...

  3. Demonstration of deterministic and high fidelity squeezing of quantum information

    DEFF Research Database (Denmark)

    Yoshikawa, J-I.; Hayashi, T-; Akiyama, T.

    2007-01-01

    By employing a recent proposal [R. Filip, P. Marek, and U.L. Andersen, Phys. Rev. A 71, 042308 (2005)] we experimentally demonstrate a universal, deterministic, and high-fidelity squeezing transformation of an optical field. It relies only on linear optics, homodyne detection, feedforward, and an...

  4. Linear-Time Recognizable Classes of Tree Languages by Deterministic Linear Pushdown Tree Automata

    Science.gov (United States)

    Fujiyoshi, Akio

In this paper, we study deterministic linear pushdown tree automata (deterministic L-PDTAs) and some variations. Since recognition of an input tree by a deterministic L-PDTA can be done in linear time, deterministic L-PDTAs are suitable for many kinds of applications. A strict hierarchy will be shown among the classes of tree languages defined by a variety of deterministic L-PDTAs. It will also be shown that deterministic L-PDTAs are weakly equivalent to nondeterministic L-PDTAs.

  5. Keystream Generator Based On Simulated Annealing

    Directory of Open Access Journals (Sweden)

    Ayad A. Abdulsalam

    2011-01-01

Advances in the design of keystream generators using heuristic techniques are reported. A simulated annealing algorithm for generating random keystreams with large complexity is presented, and the simulated annealing technique is adapted to meet these requirements. The definitions of some cryptographic properties are generalized, providing a measure suitable for use as an objective function in a simulated annealing algorithm that seeks keystreams satisfying both correlation immunity and large linear complexity. Results are presented demonstrating the effectiveness of the method.
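As an illustration of the general approach (not the authors' algorithm; the cost function below is a toy stand-in for their cryptographic objective), a simulated annealing search over binary sequences might look like:

```python
import math
import random

def keystream_cost(bits):
    """Toy objective: imbalance plus off-peak aperiodic autocorrelation
    magnitudes (a stand-in for the paper's cryptographic criteria)."""
    s = [2 * b - 1 for b in bits]              # map {0,1} -> {-1,+1}
    cost = abs(sum(s))                         # balance term
    n = len(s)
    for shift in range(1, n):
        cost += abs(sum(s[i] * s[i + shift] for i in range(n - shift)))
    return cost

def anneal_keystream(n=32, steps=2000, t0=5.0, alpha=0.995, seed=1):
    """Single-bit-flip simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    cost = keystream_cost(bits)
    best_bits, best_cost = bits[:], cost
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)
        bits[i] ^= 1                           # propose a one-bit flip
        new = keystream_cost(bits)
        if new <= cost or rng.random() < math.exp((cost - new) / t):
            cost = new                         # accept the move
            if new < best_cost:
                best_bits, best_cost = bits[:], new
        else:
            bits[i] ^= 1                       # reject: undo the flip
        t *= alpha
    return best_bits, best_cost

stream, final_cost = anneal_keystream()
```

The Metropolis acceptance rule lets the search escape local minima early on, while the geometric cooling schedule makes late moves effectively greedy.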

  6. Constructing stochastic models from deterministic process equations by propensity adjustment

    Directory of Open Access Journals (Sweden)

    Wu Jialiang

    2011-11-01

Background: Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass-action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy in stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of the CME are almost always complicated, and successes have been limited to relatively simple cases. Results: We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions: The construction of a stochastic
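For reference, the elementary SSA that this record builds on is short. The sketch below (stdlib Python, illustrative only) simulates a system given stoichiometry vectors and a propensity function, here applied to a first-order decay A → ∅ with an invented rate constant:

```python
import math
import random

def gillespie(stoich, propensity, x0, t_max, seed=0):
    """Minimal Gillespie SSA. `stoich` lists the state-change vector of each
    reaction; `propensity(x)` returns the current propensity of each reaction."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    trajectory = [(t, tuple(x))]
    while t < t_max:
        a = propensity(x)
        a0 = sum(a)
        if a0 == 0.0:
            break                                  # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        if t >= t_max:
            break
        # pick reaction j with probability a_j / a0
        r, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):
            acc += aj
            if r < acc:
                break
        x = [xi + d for xi, d in zip(x, stoich[j])]
        trajectory.append((t, tuple(x)))
    return trajectory

# first-order decay A -> 0 with rate constant c = 0.5 (illustrative numbers)
traj = gillespie(stoich=[(-1,)], propensity=lambda x: [0.5 * x[0]],
                 x0=(50,), t_max=1e9)
```

The paper's contribution concerns what `propensity` should be when the deterministic model is a reduced, generalized mass action system rather than elementary reactions.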

  7. Deterministic quantum teleportation with feed-forward in a solid state system.

    Science.gov (United States)

    Steffen, L; Salathe, Y; Oppliger, M; Kurpiers, P; Baur, M; Lang, C; Eichler, C; Puebla-Hellmann, G; Fedorov, A; Wallraff, A

    2013-08-15

Engineered macroscopic quantum systems based on superconducting electronic circuits are attractive for experimentally exploring diverse questions in quantum information science. At the current state of the art, quantum bits (qubits) are fabricated, initialized, controlled, read out and coupled to each other in simple circuits. This enables the realization of basic logic gates, the creation of complex entangled states and the demonstration of algorithms or error correction. Using different variants of low-noise parametric amplifiers, dispersive quantum non-demolition single-shot readout of single-qubit states with high fidelity has enabled continuous and discrete feedback control of single qubits. Here we realize full deterministic quantum teleportation with feed-forward in a chip-based superconducting circuit architecture. We use a set of two parametric amplifiers for both joint two-qubit and individual qubit single-shot readout, combined with flexible real-time digital electronics. Our device uses a crossed quantum bus technology that allows us to create complex networks with arbitrary connecting topology in a planar architecture. The deterministic teleportation process succeeds with order unit probability for any input state, as we prepare maximally entangled two-qubit states as a resource and distinguish all Bell states in a single two-qubit measurement with high efficiency and high fidelity. We teleport quantum states between two macroscopic systems separated by 6 mm at a rate of 10^4 s^-1, exceeding other reported implementations. The low transmission loss of superconducting waveguides is likely to enable the range of this and other schemes to be extended to significantly larger distances, enabling tests of non-locality and the realization of elements for quantum communication at microwave frequencies. The demonstrated feed-forward may also find application in error correction schemes.

8. DOE's annealing prototype demonstration projects

    Energy Technology Data Exchange (ETDEWEB)

    Warren, J.; Nakos, J.; Rochau, G.

    1997-02-01

One of the challenges U.S. utilities face in addressing technical issues associated with the aging of nuclear power plants is the long-term effect of plant operation on reactor pressure vessels (RPVs). As a nuclear plant operates, its RPV is exposed to neutrons. For certain plants, this neutron exposure can cause embrittlement of some of the RPV welds which can shorten the useful life of the RPV. This RPV embrittlement issue has the potential to affect the continued operation of a number of operating U.S. pressurized water reactor (PWR) plants. However, RPV material properties affected by long-term irradiation are recoverable through a thermal annealing treatment of the RPV. Although a dozen Russian-designed RPVs and several U.S. military vessels have been successfully annealed, U.S. utilities have stated that a successful annealing demonstration of a U.S. RPV is a prerequisite for annealing a licensed U.S. nuclear power plant. In May 1995, the Department of Energy's Sandia National Laboratories awarded two cost-shared contracts to evaluate the feasibility of annealing U.S. licensed plants by conducting an anneal of an installed RPV using two different heating technologies. The contracts were awarded to the American Society of Mechanical Engineers (ASME) Center for Research and Technology Development (CRTD) and MPR Associates (MPR). The ASME team completed its annealing prototype demonstration in July 1996, using an indirect gas furnace at the uncompleted Public Service of Indiana's Marble Hill nuclear power plant. The MPR team's annealing prototype demonstration was scheduled to be completed in early 1997, using a direct heat electrical furnace at the uncompleted Consumers Power Company's nuclear power plant at Midland, Michigan. This paper describes the Department's annealing prototype demonstration goals and objectives; the tasks, deliverables, and results to date for each annealing prototype demonstration; and the remaining annealing technology challenges.

  9. Modernizing quantum annealing using local searches

    Science.gov (United States)

    Chancellor, Nicholas

    2017-02-01

    I describe how real quantum annealers may be used to perform local (in state space) searches around specified states, rather than the global searches traditionally implemented in the quantum annealing algorithm (QAA). Such protocols will have numerous advantages over simple quantum annealing. By using such searches the effect of problem mis-specification can be reduced, as only energy differences between the searched states will be relevant. The QAA is an analogue of simulated annealing, a classical numerical technique which has now been superseded. Hence, I explore two strategies to use an annealer in a way which takes advantage of modern classical optimization algorithms. Specifically, I show how sequential calls to quantum annealers can be used to construct analogues of population annealing and parallel tempering which use quantum searches as subroutines. The techniques given here can be applied not only to optimization, but also to sampling. I examine the feasibility of these protocols on real devices and note that implementing such protocols should require minimal if any change to the current design of the flux qubit-based annealers by D-Wave Systems Inc. I further provide proof-of-principle numerical experiments based on quantum Monte Carlo that demonstrate simple examples of the discussed techniques.
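A purely classical toy analogue of the proposed protocols, with a local search around a seed state standing in for the reverse-anneal call and a population-annealing-flavoured outer loop standing in for the sequential annealer calls, might be structured as follows (illustrative stdlib Python; all names and parameters are my own, not from the paper):

```python
import random

def local_search(energy, seed_state, radius, tries, rng):
    """Classical stand-in for a local (in state space) annealer call:
    sample states within a Hamming ball of the seed and keep the best."""
    best = seed_state
    for _ in range(tries):
        s = list(seed_state)
        for i in rng.sample(range(len(s)), rng.randint(1, radius)):
            s[i] ^= 1                      # flip 1..radius random bits
        s = tuple(s)
        if energy(s) < energy(best):
            best = s
    return best

def population_search(energy, n_bits, pop_size=8, rounds=20, radius=3, seed=0):
    """Population loop using local searches as the subroutine, mimicking
    the paper's idea of building population-annealing analogues from
    sequential local-search calls."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_bits))
           for _ in range(pop_size)]
    for _ in range(rounds):
        pop = [local_search(energy, s, radius, tries=10, rng=rng) for s in pop]
        pop.sort(key=energy)
        pop = pop[:pop_size // 2] * 2      # resample around the best half
    return min(pop, key=energy)

best = population_search(lambda s: sum(s), n_bits=8)
```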

  10. Understanding the microwave annealing of silicon

    Directory of Open Access Journals (Sweden)

    Chaochao Fu

    2017-03-01

Though microwave annealing appears very appealing due to its unique features, the lack of an in-depth understanding and of an accurate model hinders its application in semiconductor processing. In this paper, a physics-based model and an accurate calculation for the microwave annealing of silicon are presented. Both thermal effects, including ohmic conduction loss and dielectric polarization loss, and non-thermal effects are thoroughly analyzed. We designed unique experiments to verify the mechanism and extract the relevant parameters. We also explicitly illustrate the dynamic interaction processes of the microwave annealing of silicon. This work provides an in-depth understanding that can expedite the application of microwave annealing in semiconductor processing and open the door to implementing microwave annealing in future research and applications.

  11. Deterministic and Probabilistic Approach in Primality Checking for RSA Algorithm

    Directory of Open Access Journals (Sweden)

    Sanjoy Das

    2013-04-01

The RSA cryptosystem, invented by Ron Rivest, Adi Shamir and Len Adleman, was first publicized in the August 1977 issue of Scientific American [1]. The security level of this algorithm very much depends on two large prime numbers [2]. In this paper two distinct approaches to primality checking are considered: a deterministic approach and a probabilistic approach. For the deterministic approach, modified trial division is chosen; for the probabilistic approach, the Miller-Rabin algorithm is considered. The different kinds of attacks on RSA and their remedies are also discussed. These include chosen-ciphertext attacks, the short private key exponent attack and the frequency attack. Apart from these attacks, how to choose the primes for the RSA algorithm is also discussed. The time complexity of the various algorithms implemented is demonstrated and compared with others. Finally, future modifications and expectations arising out of the current limitations are stated at the end.
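The two primality-checking approaches named in this record can be sketched side by side: trial division modified to skip even divisors is deterministic but slow for RSA-sized inputs, while Miller-Rabin is fast with error probability at most 4^-rounds per composite. These are standard textbook implementations, not the paper's code:

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

def trial_division(n):
    """Deterministic check by modified trial division: test 2, then only
    odd divisors up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def miller_rabin(n, rounds=20, seed=None):
    """Probabilistic primality test; a composite survives all rounds with
    probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:              # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a witnesses that n is composite
    return True                    # probably prime
```

Carmichael numbers such as 561, which fool the plain Fermat test, are correctly rejected by Miller-Rabin.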

  12. Deterministic chaos at the ocean surface: applications and interpretations

    Directory of Open Access Journals (Sweden)

    A. J. Palmer

    1998-01-01

Ocean surface, grazing-angle radar backscatter data from two separate experiments, one of which provided coincident time series of measured surface winds, were found to exhibit signatures of deterministic chaos. Evidence is presented that the lowest-dimensional underlying dynamical system responsible for the radar backscatter chaos is the one that governs the surface wind turbulence. Block-averaging time was found to be an important parameter for determining the degree of determinism in the data, as measured by the correlation dimension, by the performance of an artificial neural network in retrieving wind and stress from the radar returns, and by radar detection of an ocean internal wave. The correlation dimensions are lowered and the performance of the deterministic retrieval and detection algorithms is improved by averaging out the higher-dimensional surface wave variability in the radar returns.
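The correlation dimension used as a diagnostic here is the Grassberger-Procaccia estimate: count point pairs of the delay-embedded series within distance r and take the slope of log C(r) against log r. A minimal sketch (my illustration, with arbitrary embedding parameters), applied to a chaotic logistic-map series:

```python
import math

def correlation_sum(series, m, tau, r):
    """Fraction of delay-embedded point pairs within Chebyshev distance r
    (Grassberger-Procaccia correlation sum), for embedding dimension m
    and delay tau."""
    pts = [series[i:i + m * tau:tau]
           for i in range(len(series) - (m - 1) * tau)]
    n, count = len(pts), 0
    for i in range(n):
        for j in range(i + 1, n):
            if max(abs(a - b) for a, b in zip(pts[i], pts[j])) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(series, m=3, tau=1, r1=0.05, r2=0.2):
    """Crude two-point slope estimate of log C(r) vs log r."""
    c1 = correlation_sum(series, m, tau, r1)
    c2 = correlation_sum(series, m, tau, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# example: chaotic logistic map x -> 4x(1 - x), a one-dimensional attractor
x, series = 0.1, []
for _ in range(500):
    x = 4.0 * x * (1.0 - x)
    series.append(x)
d_est = correlation_dimension(series)
```

For a low-dimensional deterministic signal the estimate saturates at a small value as m grows, whereas for noise it keeps increasing with m; that saturation is the signature the record refers to.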

  13. Deterministic chaos, fractals and quantumlike mechanics in atmospheric flows

    CERN Document Server

    Selvam, A M

    1990-01-01

The complex spatiotemporal patterns of atmospheric flows that result from the cooperative existence of fluctuations ranging in size from millimetres to thousands of kilometres are found to exhibit long-range spatial and temporal correlations. These correlations are manifested as the self-similar fractal geometry of the global cloud cover pattern and the inverse power-law form for the atmospheric eddy energy spectrum. Such long-range spatiotemporal correlations are ubiquitous in extended natural dynamical systems and are signatures of deterministic chaos or self-organized criticality. In this paper, a cell dynamical system model for atmospheric flows is developed by consideration of microscopic domain eddy dynamical processes. This nondeterministic model enables formulation of a simple closed set of governing equations for the prediction and description of observed atmospheric flow structure characteristics as follows. The strange-attractor design of the field of deterministic chaos in atmospheric flows consis...

  14. Deterministic error correction for nonlocal spatial-polarization hyperentanglement.

    Science.gov (United States)

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-02-10

Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its unusual ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during the transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication.

  15. The road to deterministic matrices with the restricted isometry property

    CERN Document Server

    Bandeira, Afonso S; Mixon, Dustin G; Wong, Percy

    2012-01-01

    The restricted isometry property (RIP) is a well-known matrix condition that provides state-of-the-art reconstruction guarantees for compressed sensing. While random matrices are known to satisfy this property with high probability, deterministic constructions have found less success. In this paper, we consider various techniques for demonstrating RIP deterministically, some popular and some novel, and we evaluate their performance. In evaluating some techniques, we apply random matrix theory and inadvertently find a simple alternative proof that certain random matrices are RIP. Later, we propose a particular class of matrices as candidates for being RIP, namely, equiangular tight frames (ETFs). Using the known correspondence between real ETFs and strongly regular graphs, we investigate certain combinatorial implications of a real ETF being RIP. Specifically, we give probabilistic intuition for a new bound on the clique number of Paley graphs of prime order, and we conjecture that the corresponding ETFs are R...

  16. On the secure obfuscation of deterministic finite automata.

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, William Erik

    2008-06-01

    In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.

  17. Evaluation of Deterministic and Stochastic Components of Traffic Counts

    Directory of Open Access Journals (Sweden)

    Ivan Bošnjak

    2012-10-01

    Full Text Available Traffic counts, or statistical evidence of the traffic process, are often a characteristic of time-series data. In this paper the fundamental problem of estimating the deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
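
    The elimination of trend and seasonal components mentioned above is commonly done by differencing. A minimal sketch, with a synthetic count series and an invented period-4 seasonal pattern (not data from the paper):

```python
def difference(x, lag=1):
    """Difference a series at the given lag: lag=s removes an additive
    seasonal component of period s, lag=1 removes a remaining linear trend."""
    return [x[i] - x[i - lag] for i in range(lag, len(x))]

# Synthetic counts: linear trend plus an additive period-4 seasonal pattern.
counts = [10 + 2 * t + [5, 0, -3, -2][t % 4] for t in range(12)]

deseasoned = difference(counts, lag=4)     # constant 8 = trend slope * period
detrended = difference(deseasoned, lag=1)  # all zeros: series was purely deterministic
```

    On real counts, whatever remains after both steps is the stochastic component, to which an ARIMA model can then be fitted.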

  18. Deterministic and Stochastic Study of Wind Farm Harmonic Currents

    DEFF Research Database (Denmark)

    Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus;

    2010-01-01

    Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in the literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic characterization of wind farm harmonic currents is analyzed. Specific issues addressed in the paper include the harmonic variation with the wind farm operating point and the random characteristics of their magnitude and phase angle.

  19. An Architecture of Deterministic Quantum Central Processing Unit

    OpenAIRE

    Xue, Fei; Chen, Zeng-Bing; Shi, Mingjun; Zhou, Xianyi; Du, Jiangfeng; Han, Rongdian

    2002-01-01

    We present an architecture of QCPU(Quantum Central Processing Unit), based on the discrete quantum gate set, that can be programmed to approximate any n-qubit computation in a deterministic fashion. It can be built efficiently to implement computations with any required accuracy. QCPU makes it possible to implement universal quantum computation with a fixed, general purpose hardware. Thus the complexity of the quantum computation can be put into the software rather than the hardware.

  20. Deterministic teleportation using single-photon entanglement as a resource

    CERN Document Server

    Björk, Gunnar; Andersen, Ulrik L

    2011-01-01

    We outline a proof that teleportation with a single particle is in principle just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell state analyzer is proposed which uses only classical resources, namely coherent states, a Kerr non-linearity, and a two-level atom.

  1. Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates

    Science.gov (United States)

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2012-03-27

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.

  2. Testing for deterministic monetary chaos: Metric and topological diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Barkoulas, John T. [Department of Finance and Quantitative Analysis, Georgia Southern University, Statesboro, GA 30460 (United States)], E-mail: jbarkoul@georgiasouthern.edu

    2008-11-15

    The evidence of deterministic chaos in monetary aggregates tends to be contradictory in the literature. We revisit the issue of monetary chaos by applying tools based on both the metric (correlation dimension and Lyapunov exponents) and topological (recurrence plots) approaches to chaos. For simple-sum and divisia monetary aggregates over an expanded sample period, the empirical evidence from both approaches is negative for monetary chaotic dynamics.

  3. Deterministically – Probabilistic Approach for Determining the Steels Elasticity Modules

    Directory of Open Access Journals (Sweden)

    Popov Alexander

    2015-03-01

    Full Text Available The known deterministic relationships for estimating the elastic characteristics of materials do not adequately account for the significant variability of these parameters in solids. A probabilistic approach is therefore given for determining the modules of elasticity, treating them as random values, which increases the accuracy of the obtained results. By ultrasonic testing, a non-destructive evaluation of the structure and properties of the investigated steels has been made.

  4. Uniform Deterministic Discrete Method for Three Dimensional Systems

    Institute of Scientific and Technical Information of China (English)

    1997-01-01

    For radiative direct exchange areas in three-dimensional systems, the Uniform Deterministic Discrete Method (UDDM) was adopted. The spherical-surface dividing method for a sending area element and the regular icosahedron for a sending volume element can handle the direct exchange area computation for any kind of zone pair. Numerical examples of direct exchange areas in three-dimensional systems with nonhomogeneous attenuation coefficients indicated that the UDDM can give very high numerical accuracy.

  5. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    Science.gov (United States)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

    Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code redaction, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach is that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches in selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused liquefaction phenomena unusually widespread for magnitudes less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions: the looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long since been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions

  6. Deterministic chaos control in neural networks on various topologies

    Science.gov (United States)

    Neto, A. J. F.; Lima, F. W. S.

    2017-01-01

    Using numerical simulations, we study the control of deterministic chaos in neural networks on various topologies like Voronoi-Delaunay, Barabási-Albert, Small-World networks and Erdös-Rényi random graphs by "pinning" the state of a "special" neuron. We show that the chaotic activity of the networks or graphs, when control is on, can become constant or periodic.
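
    A toy version of such pinning control can be sketched with logistic-map "neurons" on a star topology, each coupled to one special neuron. The map, coupling strength, and topology here are invented for illustration and are not the network models used in the paper:

```python
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def run(pin, n=10, eps=0.7, steps=500, r=4.0):
    """Each neuron couples with strength eps to one 'special' neuron.
    Control on: the special neuron is pinned to the fixed point
    x* = 1 - 1/r of the map; control off: it runs free and chaotic."""
    xstar = 1.0 - 1.0 / r
    xs = [0.1 + 0.08 * i for i in range(n)]   # distinct initial states in (0, 1)
    s = 0.3                                    # state of the free special neuron
    for _ in range(steps):
        drive = xstar if pin else s
        xs = [(1 - eps) * logistic(x, r) + eps * drive for x in xs]
        s = logistic(s, r)
    return xs
```

    With the pin on, every neuron converges to the constant state x* = 0.75, because the effective map g(x) = 0.3 f(x) + 0.525 is a contraction on its own range; with the pin off, the activity remains chaotic but bounded in [0, 1].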

  7. Deterministic Identity Testing of Read-Once Algebraic Branching Programs

    CERN Document Server

    Jansen, Maurice; Sarma, Jayalal

    2009-01-01

    In this paper we study polynomial identity testing of sums of $k$ read-once algebraic branching programs ($\Sigma_k$-RO-ABPs), generalizing the work in (Shpilka and Volkovich 2008, 2009), who considered sums of $k$ read-once formulas ($\Sigma_k$-RO-formulas). We show that $\Sigma_k$-RO-ABPs are strictly more powerful than $\Sigma_k$-RO-formulas, for any $k \leq \lfloor n/2 \rfloor$, where $n$ is the number of variables. We obtain the following results: 1) Given free access to the RO-ABPs in the sum, we get a deterministic algorithm that runs in time $O(k^2 n^7 s) + n^{O(k)}$, where $s$ bounds the size of the largest RO-ABP given on the input. This implies we have a deterministic polynomial time algorithm for testing whether the sum of a constant number of RO-ABPs computes the zero polynomial. 2) Given black-box access to the RO-ABPs computing the individual polynomials in the sum, we get a deterministic algorithm that runs in time $k^2 n^{O(\log n)} + n^{O(k)}$. 3) Finally, given only black-box access to the polyn...

  8. How Does Quantum Uncertainty Emerge from Deterministic Bohmian Mechanics?

    Science.gov (United States)

    Solé, A.; Oriols, X.; Marian, D.; Zanghì, N.

    2016-10-01

    Bohmian mechanics is a theory that provides a consistent explanation of quantum phenomena in terms of point particles whose motion is guided by the wave function. In this theory, the state of a system of particles is defined by the actual positions of the particles and the wave function of the system; and the state of the system evolves deterministically. Thus, the Bohmian state can be compared with the state in classical mechanics, which is given by the positions and momenta of all the particles, and which also evolves deterministically. However, while in classical mechanics it is usually taken for granted and considered unproblematic that the state is, at least in principle, measurable, this is not the case in Bohmian mechanics. Due to the linearity of the quantum dynamical laws, one essential component of the Bohmian state, the wave function, is not directly measurable. Moreover, it turns out that the measurement of the other component of the state — the positions of the particles — must be mediated by the wave function; a fact that in turn implies that the positions of the particles, though measurable, are constrained by absolute uncertainty. This is the key to understanding how Bohmian mechanics, despite being deterministic, can account for all quantum predictions, including quantum randomness and uncertainty.

  9. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    Science.gov (United States)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-08-01

    Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as season of the year, day of the week or time of the day. For deterministic radial distribution load flow studies, load is taken as constant; in reality, load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are:
    - finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow;
    - finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow;
    - comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
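
    The Monte Carlo procedure described can be sketched end to end if the deterministic load-flow step is replaced by a stand-in; here a single-branch loss formula S^2 R / V^2 plays that role, and all the numbers are invented:

```python
import random
import statistics

def branch_loss(p, q, r_ohm=0.05, v=1.0):
    """Stand-in for the deterministic radial load flow: per-unit loss
    on a single branch, (P^2 + Q^2) * R / V^2."""
    return (p ** 2 + q ** 2) * r_ohm / v ** 2

def probabilistic_loss(mu_p, sd_p, mu_q, sd_q, n=20000, seed=1):
    """Sample active/reactive load from its mean and standard deviation,
    run the deterministic step once per sample, and aggregate."""
    rng = random.Random(seed)
    losses = [branch_loss(rng.gauss(mu_p, sd_p), rng.gauss(mu_q, sd_q))
              for _ in range(n)]
    return statistics.mean(losses), statistics.stdev(losses)

mean_loss, sd_loss = probabilistic_loss(1.0, 0.1, 0.5, 0.05)
deterministic = branch_loss(1.0, 0.5)   # load flow evaluated at the mean load only
```

    Because loss is convex in load, the probabilistic mean exceeds the deterministic value computed at the mean load; differences of exactly this kind are what the comparison above measures.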

  10. Universal quantification for deterministic chaos in dynamical systems

    CERN Document Server

    Selvam, A M

    1993-01-01

    A cell dynamical system model for deterministic chaos enables precise quantification of the round-off error growth, i.e., deterministic chaos in digital computer realizations of mathematical models of continuum dynamical systems. The model predicts the following: (a) The phase space trajectory (strange attractor), when resolved as a function of the computer accuracy, has intrinsic logarithmic spiral curvature with the quasiperiodic Penrose tiling pattern for the internal structure. (b) The universal constant for deterministic chaos is identified as the steady-state fractional round-off error k for each computational step and is equal to 1/τ² (=0.382), where τ is the golden mean. (c) Feigenbaum's universal constants a and d are functions of k and, further, the expression 2a² = πd quantifies the steady-state ordered emergence of the fractal geometry of the strange attractor. (d) The power spectra of chaotic dynamical systems follow the universal and unique inverse power law form of the statist...
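
    Reading the quoted "1/sqr(tau)" as 1/τ², with τ the golden mean, the value 0.382 checks out numerically; it also equals 2 - τ, since τ² = τ + 1:

```python
tau = (1 + 5 ** 0.5) / 2      # golden mean, ~1.6180
k = 1 / tau ** 2              # claimed steady-state fractional round-off error
# identity: 1/tau^2 = 2 - tau, because tau^2 = tau + 1 for the golden mean
```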

  11. Deterministic gathering of anonymous agents in arbitrary networks

    CERN Document Server

    Dieudonné, Yoann

    2011-01-01

    A team consisting of an unknown number of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to meet at the same node. Agents are anonymous (identical), execute the same deterministic algorithm and move in synchronous rounds along links of the network. Which configurations are gatherable and how to gather all of them deterministically by the same algorithm? We give a complete solution of this gathering problem in arbitrary networks. We characterize all gatherable configurations and give two universal deterministic gathering algorithms, i.e., algorithms that gather all gatherable configurations. The first algorithm works under the assumption that an upper bound n on the size of the network is known. In this case our algorithm guarantees gathering with detection, i.e., the existence of a round for any gatherable configuration, such that all agents are at the same node and all declare that gathering is accomplished. If no upper bound on the size of the network i...

  12. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
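
    The fission matrix idea can be illustrated on a toy problem. Monte Carlo tallies estimate a small matrix F whose entry F[i][j] is the expected number of fission neutrons born in region i per source neutron started in region j; the dominant eigenpair of F then gives the accelerated estimates of k-effective and the source shape. The 3-region matrix below is simply made up for illustration:

```python
def dominant_eigenpair(F, tol=1e-12, max_iter=10000):
    """Power iteration on the (small) fission matrix; far cheaper than
    iterating the full transport problem until the source converges."""
    n = len(F)
    s = [1.0 / n] * n
    k = 1.0
    for _ in range(max_iter):
        t = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k_new = sum(t) / sum(s)
        s = [x / sum(t) for x in t]
        if abs(k_new - k) < tol:
            break
        k = k_new
    return k_new, s

# Hypothetical symmetric 3-region fission matrix.
F = [[0.9, 0.2, 0.0],
     [0.2, 0.9, 0.2],
     [0.0, 0.2, 0.9]]
k_eff, source = dominant_eigenpair(F)
```

    For this tridiagonal example the exact dominant eigenvalue is 0.9 + 0.2*sqrt(2), and the converged source is symmetric about the middle region.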

  13. Non-equilibrium Thermodynamics of Piecewise Deterministic Markov Processes

    Science.gov (United States)

    Faggionato, A.; Gabrielli, D.; Ribezzi Crivellari, M.

    2009-10-01

    We consider a class of stochastic dynamical systems, called piecewise deterministic Markov processes, with states (x, σ) ∈ Ω × Γ, Ω being a region in ℝ^d or the d-dimensional torus and Γ being a finite set. The continuous variable x follows a piecewise deterministic dynamics, the discrete variable σ evolves by a stochastic jump dynamics, and the two resulting evolutions are fully coupled. We study stationarity, reversibility and time-reversal symmetries of the process. Increasing the frequency of the σ-jumps, the system behaves asymptotically as deterministic and we investigate the structure of its fluctuations (i.e. deviations from the asymptotic behavior), recovering in a non-Markovian frame results obtained by Bertini et al. (Phys. Rev. Lett. 87(4):040601, 2001; J. Stat. Phys. 107(3-4):635-675, 2002; J. Stat. Mech. P07014, 2007; preprint available online at http://www.arxiv.org/abs/0807.4457, 2008) in the context of Markovian stochastic interacting particle systems. Finally, we discuss a Gallavotti-Cohen-type symmetry relation with an involution map different from time-reversal.

  14. Histone Variants and Epigenetics

    Science.gov (United States)

    Henikoff, Steven; Smith, M. Mitchell

    2015-01-01

    Histones package and compact DNA by assembling into nucleosome core particles. Most histones are synthesized at S phase for rapid deposition behind replication forks. In addition, the replacement of histones deposited during S phase by variants that can be deposited independently of replication provide the most fundamental level of chromatin differentiation. Alternative mechanisms for depositing different variants can potentially establish and maintain epigenetic states. Variants have also evolved crucial roles in chromosome segregation, transcriptional regulation, DNA repair, and other processes. Investigations into the evolution, structure, and metabolism of histone variants provide a foundation for understanding the participation of chromatin in important cellular processes and in epigenetic memory. PMID:25561719

  15. Cylinder packing by simulated annealing

    Directory of Open Access Journals (Sweden)

    M. Helena Correia

    2000-12-01

    Full Text Available This paper is motivated by the problem of loading identical items of circular base (tubes, rolls, ...) into a rectangular base (the pallet). For practical reasons, all the loaded items are considered to have the same height. The resolution of this problem consists in determining the positioning pattern of the circular bases of the items on the rectangular pallet, while maximizing the number of items. This pattern is repeated for each layer stacked on the pallet. Two algorithms based on the meta-heuristic Simulated Annealing have been developed and implemented, and the tuning of their parameters required running intensive tests in order to improve their efficiency. The algorithms developed were easily extended to the case of non-identical circles.
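
    A minimal sketch of the simulated-annealing approach: the state is the set of circle centers, and the cost penalizes pairwise overlap and excursions outside the pallet. The geometry, move size, and cooling schedule are invented here; the paper's tuned algorithms are considerably more elaborate:

```python
import math
import random

def penalty(centers, r, w, h):
    """Infeasibility of a layout: pairwise circle overlap plus the
    distance by which any circle leaves the w-by-h pallet."""
    p = 0.0
    for i, (x, y) in enumerate(centers):
        p += max(0.0, r - x) + max(0.0, x - (w - r))
        p += max(0.0, r - y) + max(0.0, y - (h - r))
        for x2, y2 in centers[i + 1:]:
            p += max(0.0, 2 * r - math.hypot(x - x2, y - y2))
    return p

def anneal(n, r, w, h, steps=5000, t0=1.0, seed=0):
    """Metropolis moves on one circle at a time under a linear cooling
    schedule; a zero final penalty means n circles fit on the pallet."""
    rng = random.Random(seed)
    pts = [(rng.uniform(r, w - r), rng.uniform(r, h - r)) for _ in range(n)]
    e = penalty(pts, r, w, h)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9
        cand = list(pts)
        i = rng.randrange(n)
        x, y = cand[i]
        cand[i] = (x + rng.gauss(0, 0.1), y + rng.gauss(0, 0.1))
        e2 = penalty(cand, r, w, h)
        if e2 <= e or rng.random() < math.exp(-(e2 - e) / temp):
            pts, e = cand, e2
    return pts, e

layout, infeasibility = anneal(4, 1.0, 4.5, 4.5)
```

    For example, four unit circles on a 4.5 x 4.5 pallet admit the corner layout (1, 1), (1, 3.5), (3.5, 1), (3.5, 3.5), which has penalty zero.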

  16. Precision Laser Annealing of Focal Plane Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeRose, Christopher [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Starbuck, Andrew Lea [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verley, Jason C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jenkins, Mark W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    We present results from laser annealing experiments in Si using a passively Q-switched Nd:YAG microlaser. Exposure with laser at fluence values above the damage threshold of commercially available photodiodes results in electrical damage (as measured by an increase in photodiode dark current). We show that increasing the laser fluence to values in excess of the damage threshold can result in annealing of a damage site and a reduction in detector dark current by as much as 100x in some cases. A still further increase in fluence results in irreparable damage. Thus we demonstrate the presence of a laser annealing window over which performance of damaged detectors can be at least partially reconstituted. Moreover dark current reduction is observed over the entire operating range of the diode indicating that device performance has been improved for all values of reverse bias voltage. Additionally, we will present results of laser annealing in Si waveguides. By exposing a small (<10 um) length of a Si waveguide to an annealing laser pulse, the longitudinal phase of light acquired in propagating through the waveguide can be modified with high precision, <15 milliradian per laser pulse. Phase tuning by 180 degrees is exhibited with multiple exposures to one arm of a Mach-Zehnder interferometer at fluence values below the morphological damage threshold of an etched Si waveguide. No reduction in optical transmission at 1550 nm was found after 220 annealing laser shots. Modeling results for laser annealing in Si are also presented.
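
    The quoted numbers are self-consistent: at under 15 mrad of acquired phase per pulse, a full 180-degree shift needs on the order of 210 pulses, in line with the 220 annealing shots reported. A quick check, treating 15 mrad as the per-pulse step (the text gives it only as an upper bound):

```python
import math

phase_per_pulse = 15e-3                               # radians, quoted upper bound
pulses_for_pi = math.ceil(math.pi / phase_per_pulse)  # pulses for a 180-degree shift
```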

  17. Probabilistic one-player Ramsey games via deterministic two-player games

    CERN Document Server

    Belfrage, Michael; Spöhel, Reto

    2009-01-01

    Consider the following probabilistic one-player game: The board is a graph with $n$ vertices, which initially contains no edges. In each step, a new edge is drawn uniformly at random from all non-edges and is presented to the player, henceforth called Painter. Painter must assign one of $r$ available colors to each edge immediately, where $r \\geq 2$ is a fixed integer. The game is over as soon as a monochromatic copy of some fixed graph $F$ has been created, and Painter's goal is to 'survive' for as many steps as possible before this happens. We present a new technique for deriving upper bounds on the threshold of this game, i.e., on the typical number of steps Painter will survive with an optimal strategy. More specifically, we consider a deterministic two-player variant of the game where the edges are not chosen randomly, but by a second player Builder. However, Builder has to adhere to the restriction that, for some real number $d$, the ratio of edges to vertices in all subgraphs of the evolving board neve...

  18. Global warming: Temperature estimation in annealers

    Directory of Open Access Journals (Sweden)

    Jack Raymond

    2016-11-01

    Full Text Available Sampling from a Boltzmann distribution is NP-hard and so requires heuristic approaches. Quantum annealing is one promising candidate. The failure of annealing dynamics to equilibrate on practical time scales is a well understood limitation, but does not always prevent a heuristically useful distribution from being generated. In this paper we evaluate several methods for determining a useful operational temperature range for annealers. We show that, even where distributions deviate from the Boltzmann distribution due to ergodicity breaking, these estimates can be useful. We introduce the concepts of local and global temperatures that are captured by different estimation methods. We argue that for practical application it often makes sense to analyze annealers that are subject to post-processing in order to isolate the macroscopic distribution deviations that are a practical barrier to their application.
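
    The simplest such temperature estimator matches the sample mean energy against the exact Boltzmann expectation of a small model. The sketch below does this for a 4-spin Ising ring by bisection; the model and the "observed" mean energy are invented, and the paper's local and global estimators are more sophisticated than this moment matching:

```python
import itertools
import math

def ring_energies(n=4, j=1.0):
    """Energies E = -J * sum_i s_i s_{i+1} over all states of an n-spin ring."""
    return [-j * sum(s[i] * s[(i + 1) % n] for i in range(n))
            for s in itertools.product([-1, 1], repeat=n)]

def mean_energy(beta, es):
    w = [math.exp(-beta * e) for e in es]
    return sum(e * we for e, we in zip(es, w)) / sum(w)

def estimate_beta(observed, es, lo=0.0, hi=10.0):
    """Mean energy is monotone decreasing in beta, so bisection applies."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if mean_energy(mid, es) > observed:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

es = ring_energies()
beta_hat = estimate_beta(mean_energy(0.5, es), es)   # recovers beta = 0.5
```

    Ergodicity breaking in a real annealer shows up precisely as the failure of one such global fit to describe all energy scales at once, which is what motivates the paper's local temperatures.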

  19. A haplotype inference algorithm for trios based on deterministic sampling

    Directory of Open Access Journals (Sweden)

    Iliadis Alexandros

    2010-08-01

    Full Text Available Abstract. Background: In genome-wide association studies, thousands of individuals are genotyped in hundreds of thousands of single nucleotide polymorphisms (SNPs). Statistical power can be increased when haplotypes, rather than three-valued genotypes, are used in analysis, so the problem of haplotype phase inference (phasing) is particularly relevant. Several phasing algorithms have been developed for data from unrelated individuals, based on different models, some of which have been extended to father-mother-child "trio" data. Results: We introduce a technique for phasing trio datasets using a tree-based deterministic sampling scheme. We have compared our method with the publicly available algorithms PHASE v2.1, BEAGLE v3.0.2 and 2SNP v1.7 on datasets of varying numbers of markers and trios. We have found that the computational complexity of PHASE makes it prohibitive for routine use; on the other hand 2SNP, though the fastest method for small datasets, was significantly inaccurate. We have shown that our method outperforms BEAGLE in terms of speed and accuracy for small to intermediate dataset sizes, in terms of number of trios, for all marker sizes examined. Our method is implemented in the "Tree-Based Deterministic Sampling" (TDS) package, available for download at http://www.ee.columbia.edu/~anastas/tds Conclusions: Using a tree-based deterministic sampling technique, we present an intuitive and conceptually simple phasing algorithm for trio data. The trade-off between speed and accuracy achieved by our algorithm makes it a strong candidate for routine use on trio datasets.
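
    Much of trio phasing is forced by Mendelian constraints alone; sampling methods like TDS are needed for the sites where all three family members are heterozygous. A sketch of the deterministic part (the genotype coding and function name are mine, not from the TDS package):

```python
def phase_child(father, mother, child):
    """Per SNP (genotypes coded 0/1/2 = count of the alt allele), resolve
    which allele the child inherited from each parent; None marks sites
    where all three are heterozygous and phase is truly ambiguous."""
    paternal, maternal = [], []
    for f, m, c in zip(father, mother, child):
        if c == 0:
            paternal.append(0); maternal.append(0)
        elif c == 2:
            paternal.append(1); maternal.append(1)
        elif f in (0, 2):                # homozygous father decides
            paternal.append(f // 2); maternal.append(1 - f // 2)
        elif m in (0, 2):                # homozygous mother decides
            maternal.append(m // 2); paternal.append(1 - m // 2)
        else:                            # triple heterozygote: ambiguous
            paternal.append(None); maternal.append(None)
    return paternal, maternal

pat, mat = phase_child([0, 1, 2, 1], [1, 1, 0, 1], [0, 1, 1, 1])
```

    In this four-SNP example, sites 1 and 4 are triple heterozygotes and stay unresolved; those are exactly the positions a statistical phaser must fill in.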

  20. Quantum Adiabatic Evolution Algorithms versus Simulated Annealing

    CERN Document Server

    Farhi, E; Gutmann, S; Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam

    2002-01-01

    We explain why quantum adiabatic evolution and simulated annealing perform similarly in certain examples of searching for the minimum of a cost function of n bits. In these examples each bit is treated symmetrically so the cost function depends only on the Hamming weight of the n bits. We also give two examples, closely related to these, where the similarity breaks down in that the quantum adiabatic algorithm succeeds in polynomial time whereas simulated annealing requires exponential time.
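
    A Hamming-weight cost of the kind discussed can be minimized by classical simulated annealing as follows. The specific cost |w - n/4|, the linear schedule, and the final greedy quench are invented for illustration, not taken from the paper:

```python
import math
import random

def anneal_hamming(n=20, steps=2000, t0=2.0, seed=0):
    """Simulated annealing on a cost depending only on the Hamming weight w
    of an n-bit string: cost(w) = |w - n//4|, followed by a greedy quench."""
    target = n // 4
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    w = sum(bits)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9     # linear cooling schedule
        i = rng.randrange(n)
        w2 = w + (1 - 2 * bits[i])                # weight after flipping bit i
        delta = abs(w2 - target) - abs(w - target)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            bits[i] ^= 1
            w = w2
    # greedy quench: keep taking strictly improving single-bit flips
    improved = True
    while improved:
        improved = False
        for i in range(n):
            w2 = w + (1 - 2 * bits[i])
            if abs(w2 - target) < abs(w - target):
                bits[i] ^= 1
                w = w2
                improved = True
    return abs(w - target)

residual = anneal_hamming()   # 0: ground state reached
```

    For this single-well cost the quench alone already suffices; the annealing stage matters for the double-well Hamming-weight costs the paper constructs, where greedy descent gets trapped behind a barrier.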

  1. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    Science.gov (United States)

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.

  2. Deterministic generation of entangled coherent states for two atomic samples

    Institute of Scientific and Technical Information of China (English)

    Lu Dao-Ming; Zheng Shi-Biao

    2009-01-01

    This paper proposes an efficient scheme for the deterministic generation of entangled coherent states for two atomic samples. In the scheme, two collections of atoms are trapped in an optical cavity and driven by a classical field. Under certain conditions the two atomic samples evolve from a coherent state to an entangled coherent state. During the interaction the cavity mode is always in the vacuum state and the atoms have no probability of being populated in the excited state. Thus, the scheme is insensitive to both cavity decay and atomic spontaneous emission.

  3. Deterministic Smoluchowski-Feynman ratchets driven by chaotic noise.

    Science.gov (United States)

    Chew, Lock Yue

    2012-01-01

    We have elucidated the effect of statistical asymmetry on the directed current in Smoluchowski-Feynman ratchets driven by chaotic noise. Based on the inhomogeneous Smoluchowski equation and its generalized version, we arrive at analytical expressions for the directed current that include a source term. The source term indicates that statistical asymmetry can drive the system further away from thermodynamic equilibrium, as exemplified by the constant flashing, the state-dependent, and the tilted deterministic Smoluchowski-Feynman ratchets, with the consequence of an enhancement in the directed current.

  4. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  5. Two-particle correlations via quasi-deterministic analyzer model

    CERN Document Server

    Dalton, B J

    2001-01-01

    We introduce a quasi-deterministic eigenstate transition model of analyzers in which the final eigenstate is selected by initial conditions. We combine this analyzer model with causal spin coupling to calculate both proton-proton and photon-photon correlations, one particle pair at a time. The calculated correlations exceed the Bell limits and show excellent agreement with the measured correlations of [M. Lamehi-Rachti and W. Mittig, Phys. Rev. D 14 (10), 2543 (1976)] and [A. Aspect, P. Grangier and G. Roger, Phys. Rev. Lett. 49, 91 (1982)], respectively. We discuss why this model exceeds the Bell-type limits.

  6. Lasing in an optimized deterministic aperiodic nanobeam cavity

    Science.gov (United States)

    Moon, Seul-Ki; Jeong, Kwang-Yong; Noh, Heeso; Yang, Jin-Kyu

    2016-12-01

    We have demonstrated lasing action from partially extended modes in deterministic aperiodic nanobeam cavities defined by the Rudin-Shapiro sequence with two different air holes, at room temperature. By varying the size ratio of the holes, and hence the structural aperiodicity, different optical lasing modes were obtained with maximized quality factors. The lasing characteristics of the partially extended modes were confirmed by numerical simulations based on scanning microscope images of the fabricated samples. We believe that these partially extended nanobeam modes will be useful for label-free optical biosensors.

  7. CALTRANS: A parallel, deterministic, 3D neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  8. Deterministic ants in labyrinth -- information gained by map sharing

    CERN Document Server

    Malinowski, Janusz

    2014-01-01

    A few ant robots are dropped into a labyrinth formed by a square lattice with a small number of nodes removed. Ants move according to a deterministic algorithm designed to explore all corridors. Each ant remembers the shape of the corridors it has visited. Once two ants meet, they share the information acquired. We evaluate how the time needed for an ant to obtain complete information depends on the number of ants, and how the length of corridors known to an ant depends on time. Numerical results are presented in the form of scaling relations.
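The exploration-with-map-sharing idea can be made concrete with a small sketch. The following is a hypothetical toy version, not the paper's exact algorithm: each ant performs deterministic depth-first exploration of a lattice, and ants standing on the same node merge their maps.

```python
def explore(lattice, starts, steps):
    """Each ant repeatedly moves to the smallest unvisited neighbour in its
    own map (depth-first, backtracking at dead ends); ants standing on the
    same node pool their maps.  `lattice` maps node -> list of neighbours."""
    pos = list(starts)
    known = [{p} for p in pos]               # nodes each ant has seen
    stacks = [[p] for p in pos]              # DFS backtrack stacks
    for _ in range(steps):
        # map sharing: ants meeting at a node exchange everything they know
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                if pos[i] == pos[j]:
                    merged = known[i] | known[j]
                    known[i], known[j] = set(merged), set(merged)
        for i, p in enumerate(pos):
            unvisited = sorted(n for n in lattice[p] if n not in known[i])
            if unvisited:                    # deterministic: smallest first
                pos[i] = unvisited[0]
                known[i].add(unvisited[0])
                stacks[i].append(unvisited[0])
            elif len(stacks[i]) > 1:         # dead end: backtrack one step
                stacks[i].pop()
                pos[i] = stacks[i][-1]
    return known

# demo: a 3x3 lattice with no nodes removed, two ants at opposite corners
grid = {(x, y): [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < 3 and 0 <= y + dy < 3]
        for x in range(3) for y in range(3)}
maps = explore(grid, [(0, 0), (2, 2)], 200)
coverage = len(maps[0] | maps[1])            # nodes known to at least one ant
```

Timing how `coverage` grows with the number of ants reproduces the kind of scaling question the paper studies.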

  9. Unambiguous Tree Languages Are Topologically Harder Than Deterministic Ones

    Directory of Open Access Journals (Sweden)

    Szczepan Hummel

    2012-10-01

    Full Text Available The paper gives an example of a tree language G that is recognised by an unambiguous parity automaton and is analytic-complete as a set in Cantor space. This already shows that the unambiguous languages are topologically more complex than the deterministic ones, which are all coanalytic. Using the set G as a building block we construct an unambiguous language that is topologically harder than any countable boolean combination of analytic and coanalytic sets. In particular, the language is harder than any set in the difference hierarchy of analytic sets considered by O. Finkel and P. Simonnet in the context of nondeterministic automata.

  10. Fully fault tolerant quantum computation with non-deterministic gates

    CERN Document Server

    Li, Ying; Stace, Thomas M; Benjamin, Simon C

    2010-01-01

    In certain approaches to quantum computing the operations between qubits are non-deterministic and likely to fail. For example, a distributed quantum processor would achieve scalability by networking together many small components; operations between components should be assumed to be failure-prone. In the logical limit of this architecture each component contains only one qubit. Here we derive thresholds for fault-tolerant quantum computation under such extreme paradigms. We find that computation is supported for remarkably high failure rates (exceeding 90%) provided that failures are heralded; meanwhile the rate of unknown errors should not exceed 2 in 10^4 operations.

  11. A Deterministic Transport Code for Space Environment Electrons

    Science.gov (United States)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  12. Noise-based deterministic logic and computing: a brief survey

    CERN Document Server

    Kish, Laszlo B; Bezrukov, Sergey M; Peper, Ferdinand; Gingl, Zoltan; Horvath, Tamas

    2010-01-01

    A short survey is provided about our recent explorations of the young topic of noise-based logic. After outlining the motivation behind noise-based computation schemes, we present a short summary of our ongoing efforts in the introduction, development and design of several noise-based deterministic multivalued logic schemes and elements. In particular, we describe classical, instantaneous, continuum, spike and random-telegraph-signal based schemes with applications such as circuits that emulate the brain's functioning and string verification via a slow communication channel.

  13. Deterministic secure quantum communication over a collective-noise channel

    Institute of Scientific and Technical Information of China (English)

    GU Bin; PEI ShiXin; SONG Biao; ZHONG Kun

    2009-01-01

    We present two deterministic secure quantum communication schemes over a collective-noise channel. One is used to complete secure quantum communication against collective-rotation noise and the other against collective-dephasing noise. The two parties of quantum communication can exploit the correlation of their subsystems to check eavesdropping efficiently. Although the sender should prepare a sequence of three-photon entangled states to accomplish secure communication against a collective noise, the two parties need only single-photon measurements, rather than Bell-state measurements, which will make our schemes convenient in practical applications.

  14. Deterministic Single-Phonon Source Triggered by a Single Photon

    CERN Document Server

    Söllner, Immo; Lodahl, Peter

    2016-01-01

    We propose a scheme that enables the deterministic generation of single phonons at GHz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on-chip in an opto-mechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new opto-mechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nano-fabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus.

  15. Deterministic Dynamics and Chaos: Epistemology and Interdisciplinary Methodology

    CERN Document Server

    Catsigeras, Eleonora

    2011-01-01

    We analyze, from a theoretical viewpoint, the bidirectional interdisciplinary relation between mathematics and psychology, focused on the mathematical theory of deterministic dynamical systems, and in particular, on the theory of chaos. On one hand, there is the direct classic relation: the application of mathematics to psychology. On the other hand, we propose the converse relation which consists in the formulation of new abstract mathematical problems appearing from processes and structures under research of psychology. The bidirectional multidisciplinary relation from-to pure mathematics, largely holds with the "hard" sciences, typically physics and astronomy. But it is rather new, from the social and human sciences, towards pure mathematics.

  16. Deterministic versus stochastic aspects of superexponential population growth models

    Science.gov (United States)

    Grosjean, Nicolas; Huillet, Thierry

    2016-08-01

    Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic and exponential to hyperexponential (finite-time explosion). In this setup, self-similarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of self-similar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
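The algebraic/exponential/hyperexponential trichotomy can be illustrated with the simplest deterministic member of this family, dN/dt = N^b (a simplified normalization chosen here for illustration, not the paper's general model). For b != 1 the closed form is N(t) = (N0^(1-b) + (1-b)t)^(1/(1-b)): b < 1 gives algebraic growth, b = 1 exponential growth, and b > 1 a finite-time explosion at t* = N0^(1-b)/(b-1).

```python
import math

def N(t, b, N0=1.0):
    """Solution of dN/dt = N**b with N(0) = N0 (illustrative normalization)."""
    if b == 1.0:
        return N0 * math.exp(t)          # exponential borderline case
    base = N0 ** (1.0 - b) + (1.0 - b) * t
    if base <= 0.0:
        return math.inf                  # past the finite-time explosion
    return base ** (1.0 / (1.0 - b))

def blowup_time(b, N0=1.0):
    """Explosion time t* for the superexponential regime b > 1."""
    assert b > 1.0
    return N0 ** (1.0 - b) / (b - 1.0)
```

For example, b = 0.5 gives the algebraic N(t) = (1 + t/2)^2, while b = 2 gives N(t) = 1/(1 - t), which explodes at t* = 1.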

  17. Deterministic Remote State Preparation via the χ State

    Science.gov (United States)

    Zhang, Pei; Li, Xian; Ma, Song-Ya; Qu, Zhi-Guo

    2017-05-01

    Two deterministic schemes using the χ state as the entangled channel are put forward to realize the remote preparation of arbitrary two- and three-qubit states. To design the schemes, we construct sets of ingenious measurement bases, which have no restrictions on the coefficients of the prepared state. At variance with the existing schemes via the χ state, the success probabilities of the proposed schemes are greatly improved. Supported by the National Natural Science Foundation of China under Grant Nos. 61201253, 61373131, 61572246, Priority Academic Program Development of Jiangsu Higher Education Institutions, and Collaborative Innovation Center of Atmospheric Environment and Equipment Technology

  18. Deterministic multimode photonic device for quantum-information processing

    DEFF Research Database (Denmark)

    Nielsen, Anne Ersbak Bang; Mølmer, Klaus

    2010-01-01

    We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states...... by excitation to optically excited levels followed by cooperative spontaneous emission. Among our examples of applications, we demonstrate how two-photon-entangled states can be prepared and implemented in a protocol for a reference-frame-free quantum key distribution and how one-dimensional as well as higher...

  19. Deterministic entanglement of Rydberg ensembles by engineered dissipation

    DEFF Research Database (Denmark)

    Dasari, Durga; Mølmer, Klaus

    2014-01-01

    We propose a scheme that employs dissipation to deterministically generate entanglement in an ensemble of strongly interacting Rydberg atoms. With a combination of microwave driving between different Rydberg levels and a resonant laser coupling to a short lived atomic state, the ensemble can...... be driven towards a dark steady state that entangles all atoms. The long-range resonant dipole-dipole interaction between different Rydberg states extends the entanglement beyond the van der Waals interaction range with perspectives for entangling large and distant ensembles....

  20. Steering Multiple Reverse Current into Unidirectional Current in Deterministic Ratchets

    Institute of Scientific and Technical Information of China (English)

    韦笃取; 罗晓曙; 覃英华

    2011-01-01

    Recent investigations have shown that, as the amplitude of the external force is varied, deterministic ratchets exhibit multiple current reversals, which are undesirable in certain circumstances. To steer the multiple reverse currents into a unidirectional current, an adaptive control law is presented, inspired by the relation between multiple current reversals and the chaos-periodic/quasiperiodic transition of the transport velocity. The designed controller can stabilize the transport velocity of ratchets to a steady state and suppress any chaos-periodic/quasiperiodic transition; namely, stable transport in ratchets is achieved, which keeps the sign of the current unchanged.

  1. Deterministic multidimensional growth model for small-world networks

    CERN Document Server

    Peng, Aoyuan

    2011-01-01

    We propose a deterministic multidimensional growth model for small-world networks. The model can characterize the distinguishing properties of many real-life networks with a geometric space structure. Our results show the model possesses the small-world effect: a large clustering coefficient and a small characteristic path length. We also obtain accurate results for its properties, including the degree distribution, clustering coefficient and network diameter, and discuss them. It is also worth noting that we obtain an accurate analytical expression for the characteristic path length. We verify these main features numerically and experimentally.
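The two quantities a small-world model is judged on can be computed directly. The sketch below uses generic graph code on a ring lattice plus a few hypothetical shortcut edges (not the paper's construction) to show how shortcuts shrink the characteristic path length while leaving the clustering coefficient high:

```python
from collections import deque

def clustering(adj):
    """Average local clustering coefficient of an undirected graph."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Characteristic path length via BFS from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring(n, k=2):
    """Ring lattice: each node linked to its k nearest neighbours per side."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d} for i in range(n)}

lattice = ring(20)
shortcut = {v: set(nb) for v, nb in lattice.items()}
for a, b in ((0, 10), (5, 15)):           # a few deterministic shortcuts
    shortcut[a].add(b); shortcut[b].add(a)
L_lat, L_sw = avg_path_length(lattice), avg_path_length(shortcut)
C_lat, C_sw = clustering(lattice), clustering(shortcut)
```

Here `L_sw < L_lat` while `C_sw` stays close to the lattice value, which is the small-world signature the abstract refers to.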

  2. Deterministic homogenization of parabolic monotone operators with time dependent coefficients

    Directory of Open Access Journals (Sweden)

    Gabriel Nguetseng

    2004-06-01

    Full Text Available We study, beyond the classical periodic setting, the homogenization of linear and nonlinear parabolic differential equations associated with monotone operators. The usual periodicity hypothesis is here substituted by an abstract deterministic assumption characterized by a great relaxation of the time behaviour. Our main tool is the recent theory of homogenization structures by the first author, and our homogenization approach falls under the two-scale convergence method. Various concrete examples are worked out with a view to pointing out the wide scope of our approach and bringing the role of homogenization structures to light.

  3. Influence of alloying and secondary annealing on anneal hardening effect at sintered copper alloys

    Indian Academy of Sciences (India)

    Svetlana Nestorovic

    2005-08-01

    This paper reports results of an investigation carried out on sintered copper alloys (Cu with 8 at% of Zn, Ni or Al, and a Cu–Au alloy with 4 at% Au). The alloys were subjected to cold rolling (30, 50 and 70%) and annealed isochronally up to the recrystallization temperature. Changes in hardness and electrical conductivity were followed in order to investigate the anneal hardening effect. The effect was also observed after secondary annealing. Au and Al have been found to be more effective in inducing the anneal hardening effect.

  4. Comparative study of the performance of quantum annealing and simulated annealing.

    Science.gov (United States)

    Nishimori, Hidetoshi; Tsuda, Junichi; Knysh, Sergey

    2015-01-01

    Relations of simulated annealing and quantum annealing are studied by a mapping from the transition matrix of classical Markovian dynamics of the Ising model to a quantum Hamiltonian and vice versa. It is shown that these two operators, the transition matrix and the Hamiltonian, share the eigenvalue spectrum. Thus, if simulated annealing with slow temperature change does not encounter a difficulty caused by an exponentially long relaxation time at a first-order phase transition, the same is true for the corresponding process of quantum annealing in the adiabatic limit. One of the important differences between the classical-to-quantum mapping and the converse quantum-to-classical mapping is that the Markovian dynamics of a short-range Ising model is mapped to a short-range quantum system, but the converse mapping from a short-range quantum system to a classical one results in long-range interactions. This leads to an asymmetry in efficiency: simulated annealing can be efficiently simulated by quantum annealing, but the converse is not necessarily true. We conclude that quantum annealing is easier to implement and is more flexible than simulated annealing. We also point out that the present mapping can be extended to accommodate explicit time dependence of temperature, which is used to justify the quantum-mechanical analysis of simulated annealing by Somma, Batista, and Ortiz. Additionally, an alternative method to solve the nonequilibrium dynamics of the one-dimensional Ising model is provided through the classical-to-quantum mapping.
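For reference, the classical Markovian dynamics discussed above can be sketched as minimal Metropolis simulated annealing on a 1D ferromagnetic Ising chain. The parameters and geometric cooling schedule below are arbitrary illustrative choices, not taken from the paper:

```python
import math, random

def energy(s):
    """Open-boundary 1D Ising energy E = -sum_i s_i * s_{i+1}."""
    return -sum(s[i] * s[i + 1] for i in range(len(s) - 1))

def flip_cost(s, i):
    """Energy change from flipping spin i (local computation)."""
    h = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < len(s) - 1 else 0)
    return 2 * s[i] * h

def simulated_annealing(n=20, t_hot=2.0, t_cold=0.01, sweeps=1500, seed=1):
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    temp = t_hot
    cool = (t_cold / t_hot) ** (1.0 / sweeps)   # geometric cooling schedule
    for _ in range(sweeps):
        for i in range(n):                      # Metropolis single-spin flips
            d_e = flip_cost(s, i)
            if d_e <= 0 or rng.random() < math.exp(-d_e / temp):
                s[i] = -s[i]
        temp *= cool
    return s

s = simulated_annealing()
```

With a slow enough schedule the chain ends near its ferromagnetic ground state of energy -(n-1); quantum annealing replaces the thermal fluctuations with a transverse-field term annealed in the adiabatic limit.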

  5. Nanocrystalline magnetic materials obtained by flash annealing

    Directory of Open Access Journals (Sweden)

    R.K. Murakami

    1999-04-01

    Full Text Available The aim of the present work was to produce enhanced-remanence nanocrystalline magnetic material by crystallizing amorphous or partially amorphous Pr4.5Fe77B18.5 alloys by the flash annealing process, also known as the dc-Joule heating process, and to determine the optimal conditions for obtaining good magnetic coupling between the magnetic phases present in this material. Ribbons of Pr4.5Fe77B18.5 were produced by melt spinning and then annealed for 10-30 s at temperatures of 500-640 °C by passing current through the sample to develop the enhanced-remanence nanocrystalline magnetic material. These materials were studied by X-ray diffraction, differential thermal analysis and magnetic measurements. Coercivity increases of up to 15% were systematically observed in relation to furnace-annealed material. Two different samples were carefully examined: (i) a sample annealed at 600 °C which showed the highest coercive field Hc and remanence ratio Mr/Ms and (ii) a sample annealed at 520 °C which showed phase separation in the second quadrant demagnetization curve. Our results are in agreement with other studies which show that flash annealing improves the magnetic properties of some amorphous ferromagnetic ribbons.

  6. An Application of Simulated Annealing to Scheduling Army Unit Training

    Science.gov (United States)

    1986-10-01

    Simulated annealing operates by analogy to the metallurgical process which strengthens metals through successive heating and cooling. The method is highly...diminishing returns is observed. The simulated annealing heuristic operates by analogy to annealing in physical systems.

  7. Strongly Deterministic Population Dynamics in Closed Microbial Communities

    Directory of Open Access Journals (Sweden)

    Zak Frentz

    2015-10-01

    Full Text Available Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards the possibility of simple macroscopic laws governing microbial systems despite the numerous stochastic events present at microscopic levels.

  8. On the deterministic and stochastic use of hydrologic models

    Science.gov (United States)

    Farmer, William H.; Vogel, Richard M.

    2016-07-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
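The central point, that deterministic use of a calibrated model under-disperses the simulated response and that reintroducing resampled residuals restores the spread, can be sketched on hypothetical data (a toy linear trend plus noise, not the paper's watershed models):

```python
import random, statistics

rng = random.Random(42)
x = [i / 10.0 for i in range(200)]
obs = [0.1 * xi + rng.gauss(0.0, 1.0) for xi in x]   # "observed" response

# deterministic use: the calibrated model reproduces only the trend
sim_det = [0.1 * xi for xi in x]
residuals = [o - s for o, s in zip(obs, sim_det)]    # calibration residuals

# stochastic use: reintroduce resampled residuals into the simulation
sim_stoch = [s + rng.choice(residuals) for s in sim_det]

var_obs = statistics.variance(obs)
var_det = statistics.variance(sim_det)
var_stoch = statistics.variance(sim_stoch)
```

The deterministic simulation has markedly lower variance than the observations, while the residual-augmented simulation recovers distributional properties close to the observed ones, which is the bias the abstract describes.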

  9. Deterministic direct reprogramming of somatic cells to pluripotency.

    Science.gov (United States)

    Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H

    2013-10-03

    Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to establishing pluripotency at unprecedented flexibility and resolution.

  10. Deterministic Chaos in the X-ray Sources

    Science.gov (United States)

    Grzedzielski, M.; Sukova, P.; Janiuk, A.

    2015-12-01

    Hardly any of the observed black hole accretion disks in X-ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, there appear the quasi-periodic oscillations (QPOs). If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to solve the problem of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We use the recurrence analysis and we study the occurrence of long diagonal lines in the recurrence plot of observed data series and compare it to the surrogate series. We analyze here the data of two X-ray binaries - XTE J1550-564 and GX 339-4 observed by Rossi X-ray Timing Explorer. In these sources, the non-linear variability is expected because of the global conditions (such as the mean accretion rate) leading to the possible instability of an accretion disk. The thermal-viscous instability and fluctuations around the fixed-point solution occur at high accretion rate, when the radiation pressure gives dominant contribution to the stress tensor.
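The recurrence-analysis step can be sketched in a generic toy form (not the authors' pipeline): build the recurrence matrix R_ij = 1 iff |x_i - x_j| < eps and measure the longest diagonal line, which is long for deterministic signals and short for stochastic surrogates:

```python
import math, random

def longest_diagonal(x, eps):
    """Length of the longest diagonal line (off the main diagonal) in the
    recurrence matrix R[i][j] = 1 iff |x_i - x_j| < eps."""
    n = len(x)
    best = 0
    for k in range(1, n):            # scan each off-main diagonal
        run = 0
        for i in range(n - k):
            run = run + 1 if abs(x[i] - x[i + k]) < eps else 0
            best = max(best, run)
    return best

rng = random.Random(0)
t = [0.1 * i for i in range(200)]
deterministic = [math.sin(ti) for ti in t]       # deterministic test signal
surrogate = [rng.uniform(-1.0, 1.0) for _ in t]  # stochastic surrogate
det_line = longest_diagonal(deterministic, 0.1)
sur_line = longest_diagonal(surrogate, 0.1)
```

The deterministic signal produces diagonal lines spanning a large fraction of the series, while the surrogate yields only short accidental runs; full recurrence analysis of an embedded phase-space trajectory follows the same principle with a vector norm in place of `abs`.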

  11. Deterministic chaos in the X-Ray sources

    CERN Document Server

    Grzedzielski, M; Janiuk, A

    2015-01-01

    Hardly any of the observed black hole accretion disks in X-Ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, there appear the Quasi-Periodic Oscillations (QPOs). If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to solve the problem of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We use the recurrence analysis and we study the occurrence of long diagonal lines in the recurrence plot of observed data series and compare it to the surrogate series. We analyze here the data of two X-Ray binaries - XTE J1550-564, and GX 339-4 observed by Rossi X-ray Timing Explorer. In these sources, the non-linear variability is expected because of the global conditions (such as the mean accretion rate) leading...

  12. Deterministic approach to microscopic three-phase traffic theory

    CERN Document Server

    Kerner, B S; Kerner, Boris S.; Klenov, Sergey L.

    2005-01-01

    A deterministic approach to three-phase traffic theory is presented. Two different deterministic microscopic traffic flow models are introduced. In an acceleration time delay model (ATD-model), different time delays in driver acceleration associated with driver behavior in various local driving situations are explicitly incorporated into the model. Vehicle acceleration depends on the local traffic situation, i.e., whether a driver is within the free flow, synchronized flow, or wide moving jam traffic phase. In a speed adaptation model (SA-model), driver time delays are simulated as a model effect: rather than driver acceleration, vehicle speed adaptation occurs with different time delays depending on which of the three traffic phases the vehicle is in. It is found that the ATD- and SA-models show spatiotemporal congested traffic patterns that are consistent with empirical results. It is shown that in accordance with empirical results in the ATD- and SA-models the onset of congestion in free flow at a...

  13. Quantum secure direct communication and deterministic secure quantum communication

    Institute of Scientific and Technical Information of China (English)

    LONG Gui-lu; DENG Fu-guo; WANG Chuan; LI Xi-han; WEN Kai; WANG Wan-ying

    2007-01-01

    In this review article, we review the recent development of quantum secure direct communication (QSDC) and deterministic secure quantum communication (DSQC), both of which are used to transmit secret messages, including the criteria for QSDC, some interesting QSDC protocols, the DSQC protocols and QSDC networks, etc. The difference between these two branches of quantum communication is that DSQC requires the two parties to exchange at least one bit of classical information for reading out the message in each qubit, whereas QSDC does not. They are attractive because they are deterministic; in particular, the QSDC protocol is fully quantum mechanical. With sophisticated quantum technology in the future, QSDC may become more and more popular. For ensuring the safety of QSDC with single photons and quantum information sharing of a single qubit in a noisy channel, a quantum privacy amplification protocol has been proposed. It involves very simple CHC operations and reduces the information leakage to a negligibly small level. Moreover, with the one-party quantum error correction, a relation has been established between classical linear codes and quantum one-party codes, hence it is convenient to transfer many good classical error correction codes to the quantum world. The one-party quantum error correction codes are especially designed for quantum dense coding and related QSDC protocols based on dense coding.

  14. Forecasting project schedule performance using probabilistic and deterministic models

    Directory of Open Access Journals (Sweden)

    S.A. Abdel Azeem

    2014-04-01

    Full Text Available Earned value management (EVM) was originally developed for cost management and has not widely been used for forecasting project duration. In addition, EVM-based formulas for cost or schedule forecasting are still deterministic and do not provide any information about the range of possible outcomes and the probability of meeting the project objectives. The objective of this paper is to develop three models to forecast the estimated duration at completion. Two of these models are deterministic: the earned value (EV) and earned schedule (ES) models. The third model is probabilistic and is developed based on the Kalman filter algorithm and earned schedule management. Hence, the accuracies of the EV, ES and Kalman Filter Forecasting Model (KFFM) through the different project periods will be assessed and compared with other forecasting methods such as the Critical Path Method (CPM), which makes the time forecast at activity level by revising the actual reporting data for each activity at a certain data date. A case study project is used to validate the results of the three models, and the best model is selected based on the lowest average percentage of error. The results showed that the KFFM developed in this study provides probabilistic prediction bounds of project duration at completion and can be applied through the different project periods with smaller errors than those observed in the EV and ES forecasting models.
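The deterministic earned-schedule forecast follows the standard ES formulas (the paper's Kalman-filter model is not reproduced here). A minimal sketch with hypothetical numbers, where PD is the planned duration, AT the actual time, and EAC(t) = PD / SPI_t:

```python
def earned_schedule(pv_curve, ev):
    """ES = time at which the cumulative planned value equalled today's
    earned value, found by linear interpolation on the PV curve."""
    for t in range(1, len(pv_curve)):
        if pv_curve[t] >= ev:
            frac = (ev - pv_curve[t - 1]) / (pv_curve[t] - pv_curve[t - 1])
            return (t - 1) + frac
    return float(len(pv_curve) - 1)

def forecast_duration_es(pv_curve, ev, at, pd):
    es = earned_schedule(pv_curve, ev)
    spi_t = es / at                        # schedule performance index (time)
    return pd / spi_t                      # EAC(t) = PD / SPI_t

# demo: a 10-month project with a linear plan, 20% behind schedule at month 4
pv = [10.0 * t for t in range(11)]         # cumulative planned value
eac = forecast_duration_es(pv, ev=32.0, at=4.0, pd=10.0)
```

Here ES = 3.2 months of planned work have been earned after AT = 4 months, so SPI_t = 0.8 and the forecast completion is 10 / 0.8 = 12.5 months; the probabilistic KFFM wraps such point forecasts in prediction bounds.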

  15. Deterministic entanglement of superconducting qubits by parity measurement and feedback.

    Science.gov (United States)

    Ristè, D; Dukalski, M; Watson, C A; de Lange, G; Tiggelman, M J; Blanter, Ya M; Lehnert, K W; Schouten, R N; DiCarlo, L

    2013-10-17

    The stochastic evolution of quantum systems during measurement is arguably the most enigmatic feature of quantum mechanics. Measuring a quantum system typically steers it towards a classical state, destroying the coherence of an initial quantum superposition and the entanglement with other quantum systems. Remarkably, the measurement of a shared property between non-interacting quantum systems can generate entanglement, starting from an uncorrelated state. Of special interest in quantum computing is the parity measurement, which projects the state of multiple qubits (quantum bits) to a state with an even or odd number of excited qubits. A parity meter must discern the two qubit-excitation parities with high fidelity while preserving coherence between same-parity states. Despite numerous proposals for atomic, semiconducting and superconducting qubits, realizing a parity meter that creates entanglement for both even and odd measurement results has remained an outstanding challenge. Here we perform a time-resolved, continuous parity measurement of two superconducting qubits using the cavity in a three-dimensional circuit quantum electrodynamics architecture and phase-sensitive parametric amplification. Using postselection, we produce entanglement by parity measurement reaching 88 per cent fidelity to the closest Bell state. Incorporating the parity meter in a feedback-control loop, we transform the entanglement generation from probabilistic to fully deterministic, achieving 66 per cent fidelity to a target Bell state on demand. These realizations of a parity meter and a feedback-enabled deterministic measurement protocol provide key ingredients for active quantum error correction in the solid state.

  16. Deterministic Chaos in the X-ray Sources

    Indian Academy of Sciences (India)

    M. Grzedzielski; P. Sukova; A. Janiuk

    2015-12-01

Hardly any of the observed black hole accretion disks in X-ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, quasi-periodic oscillations (QPOs) appear. If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to resolve the problem of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We apply recurrence analysis and study the occurrence of long diagonal lines in the recurrence plots of observed data series, comparing them to surrogate series. We analyze here the data of two X-ray binaries, XTE J1550-564 and GX 339-4, observed by the Rossi X-ray Timing Explorer. In these sources, non-linear variability is expected because of the global conditions (such as the mean accretion rate) leading to a possible instability of the accretion disk. The thermal-viscous instability and fluctuations around the fixed-point solution occur at high accretion rate, when the radiation pressure gives the dominant contribution to the stress tensor.
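The recurrence-analysis step described in this record can be sketched in a few lines: build a binary recurrence matrix from a scalar series and collect the lengths of its diagonal line segments, whose abundance in observed data versus surrogate data is the signature of deterministic dynamics. This is a generic illustration, not the authors' pipeline; a real analysis would embed the series in phase space first, and the threshold `eps` and minimum line length `lmin` below are arbitrary choices.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if |x[i] - x[j]| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def diagonal_line_lengths(R, lmin=2):
    """Lengths (>= lmin) of diagonal line segments off the main diagonal."""
    n = R.shape[0]
    lengths = []
    for k in range(1, n):              # upper-triangle diagonals only
        run = 0
        for v in np.diagonal(R, offset=k):
            if v:
                run += 1
            else:
                if run >= lmin:
                    lengths.append(run)
                run = 0
        if run >= lmin:
            lengths.append(run)
    return lengths
```

A deterministic signal (e.g. a sampled sine wave) produces long diagonal lines, while a shuffled surrogate of the same values produces only short ones; comparing the two distributions is the essence of the test.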

  17. Electrocardiogram (ECG) pattern modeling and recognition via deterministic learning

    Institute of Scientific and Technical Information of China (English)

    Xunde DONG; Cong WANG; Junmin HU; Shanxing OU

    2014-01-01

A method for electrocardiogram (ECG) pattern modeling and recognition via deterministic learning theory is presented in this paper. Instead of recognizing ECG signals beat-to-beat, each ECG signal, which contains a number of heartbeats, is recognized as a whole. The method is based entirely on the temporal features (i.e., the dynamics) of ECG patterns, which contain complete information about ECG patterns. A dynamical model capable of generating synthetic ECG signals is employed to demonstrate the method. Based on the dynamical model, the method proceeds in two phases: the identification (training) phase and the recognition (test) phase. In the identification phase, the dynamics of ECG patterns is accurately modeled and expressed as constant RBF neural weights through deterministic learning. In the recognition phase, the modeling results are used for ECG pattern recognition. The main feature of the proposed method is that the dynamics of ECG patterns is accurately modeled and then used for recognition. Experimental studies using the Physikalisch-Technische Bundesanstalt (PTB) database demonstrate the effectiveness of the approach.

  18. On the deterministic and stochastic use of hydrologic models

    Science.gov (United States)

    Farmer, William H.; Vogel, Richard M.

    2016-01-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
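The remedy described above, systematically reintroducing residuals so that simulated output becomes stochastic and recovers the observed distributional properties, can be illustrated with a toy linear model standing in for a watershed model. All data below are synthetic, and the bootstrap resampling of residuals is one simple choice among several:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=n)                       # synthetic forcing (e.g. rainfall)
y = 2.0 * x + rng.normal(scale=1.0, size=n)  # synthetic "observed" response

# Calibrate a simple linear model to the data.
b = np.polyfit(x, y, 1)
y_det = np.polyval(b, x)                     # deterministic simulated response
resid = y - y_det                            # calibration residuals

# Deterministic output underestimates observed variability; reintroducing
# resampled residuals produces stochastic output whose distribution
# better mimics the observations.
y_stoch = y_det + rng.choice(resid, size=n, replace=True)
```

Here `np.var(y_det)` falls short of `np.var(y)` by the residual variance, while `y_stoch` restores it, the distributional bias the record describes.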

  19. Deterministic doping and the exploration of spin qubits

    Energy Technology Data Exchange (ETDEWEB)

    Schenkel, T.; Weis, C. D.; Persaud, A. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lo, C. C. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States); London Centre for Nanotechnology (United Kingdom); Chakarov, I. [Global Foundries, Malta, NY 12020 (United States); Schneider, D. H. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Bokor, J. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States)

    2015-01-09

    Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.

  20. Deterministic nature of the underlying dynamics of surface wind fluctuations

    Directory of Open Access Journals (Sweden)

    R. C. Sreelekshmi

    2012-10-01

    Full Text Available Modelling the fluctuations of the Earth's surface wind has a significant role in understanding the dynamics of atmosphere besides its impact on various fields ranging from agriculture to structural engineering. Most of the studies on the modelling and prediction of wind speed and power reported in the literature are based on statistical methods or the probabilistic distribution of the wind speed data. In this paper we investigate the suitability of a deterministic model to represent the wind speed fluctuations by employing tools of nonlinear dynamics. We have carried out a detailed nonlinear time series analysis of the daily mean wind speed data measured at Thiruvananthapuram (8.483° N,76.950° E from 2000 to 2010. The results of the analysis strongly suggest that the underlying dynamics is deterministic, low-dimensional and chaotic suggesting the possibility of accurate short-term prediction. As most of the chaotic systems are confined to laboratories, this is another example of a naturally occurring time series showing chaotic behaviour.

  1. Deterministic nature of the underlying dynamics of surface wind fluctuations

    Science.gov (United States)

    Sreelekshmi, R. C.; Asokan, K.; Satheesh Kumar, K.

    2012-10-01

    Modelling the fluctuations of the Earth's surface wind has a significant role in understanding the dynamics of atmosphere besides its impact on various fields ranging from agriculture to structural engineering. Most of the studies on the modelling and prediction of wind speed and power reported in the literature are based on statistical methods or the probabilistic distribution of the wind speed data. In this paper we investigate the suitability of a deterministic model to represent the wind speed fluctuations by employing tools of nonlinear dynamics. We have carried out a detailed nonlinear time series analysis of the daily mean wind speed data measured at Thiruvananthapuram (8.483° N,76.950° E) from 2000 to 2010. The results of the analysis strongly suggest that the underlying dynamics is deterministic, low-dimensional and chaotic suggesting the possibility of accurate short-term prediction. As most of the chaotic systems are confined to laboratories, this is another example of a naturally occurring time series showing chaotic behaviour.

  2. DNF Sparsification and a Faster Deterministic Counting Algorithm

    CERN Document Server

Gopalan, Parikshit; Reingold, Omer

    2012-01-01

Given a DNF formula on n variables, the two natural size measures are the number of terms, or size s(f), and the maximum width of a term, w(f). It is folklore that short DNF formulas can be made narrow. We prove a converse, showing that narrow formulas can be sparsified. More precisely, any width w DNF, irrespective of its size, can be $\\epsilon$-approximated by a width $w$ DNF with at most $(w\\log(1/\\epsilon))^{O(w)}$ terms. We combine our sparsification result with the work of Luby and Velickovic to give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF. Given a formula on n variables with poly(n) terms, we give a deterministic $n^{\\tilde{O}(\\log \\log(n))}$ time algorithm that computes an additive $\\epsilon$ approximation to the fraction of satisfying assignments of f for $\\epsilon = 1/\\poly(\\log n)$. The previous best result due to Luby and Velickovic from nearly two decades ago had a run-time of $n^{\\exp(O(\\sqrt{\\log \\log n}))}$.
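The quantity the algorithm approximates, the fraction of satisfying assignments of a DNF, can be computed exactly by brute-force enumeration for small n, which is a useful reference when experimenting with approximate counters. The signed-literal encoding below is our own convention, not the paper's:

```python
from itertools import product

def dnf_fraction(terms, n):
    """Exact fraction of the 2**n assignments satisfying a DNF.

    Each term is a list of signed literals over 1-indexed variables:
    +i stands for x_i, -i for NOT x_i.  A term is satisfied when all of
    its literals are; the formula is satisfied when any term is.
    """
    count = 0
    for bits in product([False, True], repeat=n):
        if any(all(bits[abs(l) - 1] == (l > 0) for l in term)
               for term in terms):
            count += 1
    return count / 2 ** n
```

For example, the formula x1 OR (x2 AND NOT x3) over three variables is satisfied by 5 of the 8 assignments; a single term of width w is satisfied by exactly a 2^(-w) fraction, the starting point for sparsification arguments.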

  3. Using MCBEND for neutron or gamma-ray deterministic calculations

    Science.gov (United States)

Dobson, Geoff; Bird, Adam; Tollit, Brendan; Smith, Paul

    2017-09-01

MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with splitting/Russian roulette. MCBEND has a well-established automated tool to generate this importance map, commonly referred to as the MAGIC module, which uses a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and the new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.

  4. Bayesian analysis of deterministic and stochastic prisoner's dilemma games

    Directory of Open Access Journals (Sweden)

    Howard Kunreuther

    2009-08-01

Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.

  5. Evaluating strong measurement noise in data series with simulated annealing method

    CERN Document Server

    Carvalho, J; Haase, M; Lind, P G

    2013-01-01

    Many stochastic time series can be described by a Langevin equation composed of a deterministic and a stochastic dynamical part. Such a stochastic process can be reconstructed by means of a recently introduced nonparametric method, thus increasing the predictability, i.e. knowledge of the macroscopic drift and the microscopic diffusion functions. If the measurement of a stochastic process is affected by additional strong measurement noise, the reconstruction process cannot be applied. Here, we present a method for the reconstruction of stochastic processes in the presence of strong measurement noise, based on a suitably parametrized ansatz. At the core of the process is the minimization of the functional distance between terms containing the conditional moments taken from measurement data, and the corresponding ansatz functions. It is shown that a minimization of the distance by means of a simulated annealing procedure yields better results than a previously used Levenberg-Marquardt algorithm, which permits a...
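The minimization step described above, fitting parametrized ansatz functions to conditional moments by simulated annealing, rests on a generic annealing loop like the following sketch. The cost function, step size and cooling schedule here are illustrative placeholders, not the authors' settings:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=1):
    """Minimize `cost` over a parameter vector by simulated annealing."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # Propose a Gaussian perturbation of the current parameters.
        cand = [xi + rng.gauss(0, step) for xi in x]
        fc = cost(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

For instance, minimizing the quadratic cost (p0 - 2)^2 + (p1 + 1)^2 drives the parameters toward (2, -1); in the reconstruction setting of the record, the cost would instead be the functional distance between the measured conditional moments and the ansatz functions.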

  6. Origin of reverse annealing effect in hydrogen-implanted silicon

    Energy Technology Data Exchange (ETDEWEB)

    Di, Zengfeng [Los Alamos National Laboratory; Nastasi, Michael A [Los Alamos National Laboratory; Wang, Yongqiang [Los Alamos National Laboratory

    2009-01-01

In contradiction to conventional damage annealing, thermally annealed H-implanted Si exhibits an increase in damage, or reverse annealing behavior, whose mechanism has remained elusive. On the basis of quantitative high-resolution transmission electron microscopy combined with channeling Rutherford backscattering analysis, we conclusively elucidate that the reverse annealing effect is due to the nucleation and growth of hydrogen-induced platelets. Platelets are responsible for an increase in the height and width of the channeling damage peak following increased isochronal anneals.

  7. Desmoplastic variant of ameloblastoma

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, Jeong Ick; Kim, Dong Youn; Choi, Karp Shik [Dept. of Dental Radiology, College of Dentistry, Kyungpook National University, Daegu (Korea, Republic of)

    1995-02-15

Desmoplastic variant of ameloblastoma is a new and unusual variant of ameloblastoma with extensive stromal desmoplastic proliferation. The authors experienced a case of desmoplastic variant of ameloblastoma with moderately defined radiolucency in the right maxillary anterior area of a 62-year-old female. As a result of careful analysis of the clinical and radiological examinations, we diagnosed it as a desmoplastic variant of ameloblastoma. The following results were obtained; 1. The main clinical symptom was a nontender bony swelling with normal intact overlying mucosa on the right maxillary anterior area. 2. Radiographically, a moderately defined, multilocular radiolucency on the right maxillary anterior area was seen, and severe cortical bony thinning and expansion to the labial and palatal sides were also observed. The lesion was shown to extend to the right nasal cavity. 3. Histopathologically, follicle-like epithelial islands with densely abundant collagenous stroma were morphologically compressed.

  8. Stochastic annealing simulation of cascades in metals

    Energy Technology Data Exchange (ETDEWEB)

    Heinisch, H.L.

    1996-04-01

The stochastic annealing simulation code ALSOME is used to investigate quantitatively the differential production of mobile vacancy and SIA defects as a function of temperature for isolated 25 keV cascades in copper generated by MD simulations. The ALSOME code and cascade annealing simulations are described. The annealing simulations indicate that above Stage V, where the cascade vacancy clusters are unstable, nearly 80% of the post-quench vacancies escape the cascade volume, while about half of the post-quench SIAs remain in clusters. The results are sensitive to the relative fractions of SIAs that occur in small, highly mobile clusters and large stable clusters, respectively, which may be dependent on the cascade energy.

  9. Structural relaxation in annealed hyperquenched basaltic glasses

    DEFF Research Database (Denmark)

    Guo, Xiaoju; Mauro, John C.; Potuzak, M.

    2012-01-01

The enthalpy relaxation behavior of hyperquenched (HQ) and annealed hyperquenched (AHQ) basaltic glass is investigated through calorimetric measurements. The results reveal a common onset temperature of the glass transition for all the HQ and AHQ glasses under study, indicating that the primary relaxation is activated at the same temperature regardless of the initial departure from equilibrium. The analysis of secondary relaxation at different annealing temperatures provides insights into the enthalpy recovery of HQ glasses.

  10. Deterministic single-file dynamics in collisional representation.

    Science.gov (United States)

    Marchesoni, F; Taloni, A

    2007-12-01

We re-examine numerically the diffusion of a deterministic, or ballistic, single file with preassigned velocity distribution (Jepsen's gas) from a collisional viewpoint. For a two-modal velocity distribution, where half the particles have velocity +/-c, the collisional statistics is analytically proven to reproduce the continuous time representation. For a three-modal velocity distribution with equal fractions, where less than 1/2 of the particles have velocity +/-c, with the remaining particles at rest, the collisional process is shown to be inhomogeneous; its stationary properties are discussed here by combining exact and phenomenological arguments. Collisional memory effects are then related to the negative power-law tails in the velocity autocorrelation functions, predicted earlier in the continuous time formalism. Numerical and analytical results for Gaussian and four-modal Jepsen's gases are also reported for the sake of comparison.
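A Jepsen gas is easy to simulate because equal-mass elastic collisions merely exchange velocities: the set of trajectories coincides with the free-flight straight lines, and a tagged particle simply follows their order statistics. A minimal sketch of this construction (our own illustration, not the authors' code):

```python
import numpy as np

def tagged_trajectory(x0, v0, times, k):
    """Position of the k-th particle (by spatial order) in a Jepsen gas.

    Equal-mass elastic collisions exchange velocities, so the single-file
    trajectories are the sorted free-flight lines: the k-th particle's
    position at time t is the k-th order statistic of x0 + v0 * t.
    """
    pos = x0[None, :] + np.outer(times, v0)   # free-flight positions
    return np.sort(pos, axis=1)[:, k]
```

For two particles at 0 and 1 with velocities +c and -c, the pair collides, exchanges velocities, and each particle returns to its starting point after one crossing time, exactly what the sorted free-flight lines give.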

  11. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko Dimitrov

    2011-03-17

We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution, the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.

  12. Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes

    DEFF Research Database (Denmark)

    Starke, Jens; Reichert, Christian; Eiswirth, Markus;

    2007-01-01

Three levels of modeling, microscopic, mesoscopic and macroscopic, are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic, while the model on the macroscopic level is deterministic. It is shown how effects of stochastic origin can be observed in experiments. The models include a new approach to the platinum phase transition, which allows for a unification of existing models for Pt(100) and Pt(110). The rich nonlinear dynamical behavior of the macroscopic reaction kinetics is investigated and shows good agreement with low-pressure experiments. Furthermore, for intermediate pressures, noise-induced pattern formation, which has not been captured by earlier models, can be reproduced in stochastic simulations with the mesoscopic model.

  13. Sensitivity analysis in a Lassa fever deterministic mathematical model

    Science.gov (United States)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
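Sensitivity analyses of this kind commonly rank parameters by the normalized forward sensitivity index, (p / R0) * dR0/dp. A generic finite-difference sketch follows; the R0 expression used here is a deliberately simple placeholder, not the five-compartment Lassa model of the record:

```python
def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index (p / R0) * dR0/dp,
    with the derivative taken by central finite differences."""
    p = dict(params)
    base = f(**p)
    step = h * params[name]
    p[name] = params[name] + step
    up = f(**p)
    p[name] = params[name] - step
    down = f(**p)
    return params[name] / base * (up - down) / (2 * step)

# Illustrative basic reproduction number, NOT the paper's Lassa model:
def R0(beta, gamma, mu):
    return beta / (gamma + mu)
```

For this toy R0, the index with respect to beta is exactly +1 (a 10% rise in transmission raises R0 by 10%), while the index with respect to gamma is -gamma / (gamma + mu), which is how such analyses identify the most influential control targets.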

  14. Deterministic Squeezed States with Joint Measurements and Feedback

    CERN Document Server

    Cox, Kevin C; Weiner, Joshua M; Thompson, James K

    2015-01-01

    We demonstrate the creation of entangled or spin-squeezed states using a joint measurement and real-time feedback. The pseudo-spin state of an ensemble of $N= 5\\times 10^4$ laser-cooled $^{87}$Rb atoms is deterministically driven to a specified population state with angular resolution that is a factor of 5.5(8) (7.4(6) dB) in variance below the standard quantum limit for unentangled atoms -- comparable to the best enhancements using only unitary evolution. Without feedback, conditioning on the outcome of the joint pre-measurement, we directly observe up to 59(8) times (17.7(6) dB) improvement in quantum phase variance relative to the standard quantum limit for $N=4\\times 10^5$ atoms. This is the largest reported entanglement enhancement to date in any system.

  15. Deterministic oscillatory search: a new meta-heuristic optimization algorithm

    Indian Academy of Sciences (India)

    N ARCHANA; R VIDHYAPRIYA; ANTONY BENEDICT; KARTHIK CHANDRAN

    2017-06-01

The paper proposes a new optimization algorithm that is extremely robust in solving mathematical and engineering problems. The algorithm combines the deterministic nature of classical methods of optimization with the global converging characteristics of meta-heuristic algorithms. Common traits of nature-inspired algorithms like randomness and tuning parameters (other than population size) are eliminated. The proposed algorithm is tested with mathematical benchmark functions and compared to other popular optimization algorithms. The results show that the proposed algorithm is superior in terms of robustness and problem-solving capabilities to other algorithms. The paradigm is also applied to an engineering problem to prove its practicality: it is applied to find the optimal location of multi-type FACTS devices in a power system and tested on the IEEE 39 bus system and the UPSEB 75 bus system. Results show better performance over other standard algorithms in terms of voltage stability, real power loss, and sizing and cost of FACTS devices.

  16. Scattering of electromagnetic light waves from a deterministic anisotropic medium

    Science.gov (United States)

    Li, Jia; Chang, Liping; Wu, Pinghui

    2015-11-01

Based on the weak-scattering theory of electromagnetic waves, analytical expressions are derived for the spectral densities and degrees of polarization of an electromagnetic plane wave scattered from a deterministic anisotropic medium. It is shown that the normalized spectral density of the scattered field depends strongly on the scattering angle and on the degree of polarization of the incident plane wave. The degree of polarization of the scattered field is likewise subject to variations of these parameters. In addition, the anisotropic effective radii of the dielectric susceptibility strongly influence both the spectral density and the degree of polarization of the scattered field. The obtained results may be applicable for determining anisotropic parameters of a medium by quantitatively measuring the statistics of the far-zone scattered field.

  17. Connection between stochastic and deterministic modelling of microbial growth.

    Science.gov (United States)

    Kutalik, Zoltán; Razaz, Moe; Baranyi, József

    2005-01-21

    We present in this paper various links between individual and population cell growth. Deterministic models of the lag and subsequent growth of a bacterial population and their connection with stochastic models for the lag and subsequent generation times of individual cells are analysed. We derived the individual lag time distribution inherent in population growth models, which shows that the Baranyi model allows a wide range of shapes for individual lag time distribution. We demonstrate that individual cell lag time distributions cannot be retrieved from population growth data. We also present the results of our investigation on the effect of the mean and variance of the individual lag time and the initial cell number on the mean and variance of the population lag time. These relationships are analysed theoretically, and their consequence for predictive microbiology research is discussed.

  18. Deterministic Computational Complexity of the Quantum Separability Problem

    CERN Document Server

    Ioannou, L M

    2006-01-01

    Ever since entanglement was identified as a computational and cryptographic resource, effort has been made to find an efficient way to tell whether a given density matrix represents an unentangled, or separable, state. Essentially, this is the quantum separability problem. In Section 1, I begin with a brief introduction to bipartite separability and entanglement, and a basic formal definition of the quantum separability problem. I conclude with a summary of one-sided tests for separability, including those involving semidefinite programming. In Section 2, I treat the separability problem as a computational decision problem and motivate its approximate formulations. After a review of basic complexity-theoretic notions, I discuss the computational complexity of the separability problem (including a Turing-NP-complete formulation of the problem and a proof of "strong NP-hardness" (based on a new NP-hardness proof by Gurvits)). In Section 3, I give a comprehensive survey and complexity analysis of deterministic a...

  19. Deterministic Aided STAP for Target Detection in Heterogeneous Situations

    Directory of Open Access Journals (Sweden)

    J.-F. Degurse

    2013-01-01

    Full Text Available Classical space-time adaptive processing (STAP detectors are strongly limited when facing highly heterogeneous environments. Indeed, in this case, representative target free data are no longer available. Single dataset algorithms, such as the MLED algorithm, have proved their efficiency in overcoming this problem by only working on primary data. These methods are based on the APES algorithm which removes the useful signal from the covariance matrix. However, a small part of the clutter signal is also removed from the covariance matrix in this operation. Consequently, a degradation of clutter rejection performance is observed. We propose two algorithms that use deterministic aided STAP to overcome this issue of the single dataset APES method. The results on realistic simulated data and real data show that these methods outperform traditional single dataset methods in detection and in clutter rejection.

  20. Deterministic global optimization an introduction to the diagonal approach

    CERN Document Server

    Sergeyev, Yaroslav D

    2017-01-01

    This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...

  1. Simple deterministic dynamical systems with fractal diffusion coefficients

    CERN Document Server

    Klages, R

    1999-01-01

We analyze a simple model of deterministic diffusion. The model consists of a one-dimensional periodic array of scatterers in which point particles move from cell to cell as defined by a piecewise linear map. The microscopic chaotic scattering process of the map can be changed by a control parameter. This induces a parameter dependence for the macroscopic diffusion coefficient. We calculate the diffusion coefficient and the largest eigenmodes of the system by using Markov partitions and by solving the eigenvalue problems of the respective topological transition matrices. For different boundary conditions we find that the largest eigenmodes of the map match those of the simple phenomenological diffusion equation. Our main result is that the diffusion coefficient exhibits a fractal structure as the system parameter is varied. To understand the origin of this fractal structure, we give qualitative and quantitative arguments. These arguments relate the sequence of oscillations in the strength of the parameter-dep...
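The macroscopic diffusion coefficient of such a lifted piecewise linear map can also be estimated directly from an ensemble via D = <(x_n - x_0)^2> / (2n). The sketch below uses a common two-branch lift of slope a, an illustration in the spirit of the model, not the paper's exact map or its Markov-partition calculation:

```python
import numpy as np

def lifted_map(x, a):
    """Piecewise linear map of slope a on [0, 1), lifted so that
    M(x + 1) = M(x) + 1: the integer cell index is carried along."""
    n = np.floor(x)
    f = x - n                                  # fractional (in-cell) part
    y = np.where(f < 0.5, a * f, a * f + 1.0 - a)
    return n + y

def diffusion_coefficient(a, n_particles=10000, n_steps=300, seed=0):
    """Ensemble estimate D = <(x_n - x_0)^2> / (2 n)."""
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(0.0, 1.0, n_particles)
    x = x0.copy()
    for _ in range(n_steps):
        x = lifted_map(x, a)
    return np.mean((x - x0) ** 2) / (2.0 * n_steps)
```

For slope a = 2 both branches map a cell into itself, so particles never escape and D vanishes; for a = 3 part of each branch spills into neighbouring cells and D is positive, the kind of parameter dependence whose fine structure the record shows to be fractal.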

  2. Analysis of deterministic cyclic gene regulatory network models with delays

    CERN Document Server

    Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian

    2015-01-01

This brief examines a deterministic, ODE-based model for gene regulatory networks (GRNs) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays, and a special case of a homogeneous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.

  3. Molecular dynamics with deterministic and stochastic numerical methods

    CERN Document Server

    Leimkuhler, Ben

    2015-01-01

    This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications.  Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...

  4. Testing for chaos in deterministic systems with noise

    Science.gov (United States)

    Gottwald, Georg A.; Melbourne, Ian

    2005-12-01

    Recently, we introduced a new test for distinguishing regular from chaotic dynamics in deterministic dynamical systems and argued that the test had certain advantages over the traditional test for chaos using the maximal Lyapunov exponent. In this paper, we investigate the capability of the test to cope with moderate amounts of noisy data. Comparisons are made between an improved version of our test and both the “tangent space method” and “direct method” for computing the maximal Lyapunov exponent. The evidence of numerical experiments, ranging from the logistic map to an eight-dimensional Lorenz system of differential equations (the Lorenz 96 system), suggests that our method is superior to tangent space methods and that it compares very favourably with direct methods.
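
    The test in question is the Gottwald-Melbourne 0-1 test for chaos; a minimal sketch of its correlation variant (translation variables p, q; mean-square displacement M(n); growth-rate statistic K), applied here to logistic-map time series as a stand-in for the systems mentioned, is:

```python
import numpy as np

def zero_one_test(phi, n_c=10, seed=0):
    """0-1 test for chaos, correlation variant: K ~ 1 for chaotic data,
    K ~ 0 for regular data."""
    phi = np.asarray(phi, float)
    phi = phi - phi.mean()              # center to suppress oscillatory bias
    N = len(phi)
    ncut = N // 10
    j = np.arange(1, N + 1)
    Ks = []
    # median over several random frequencies c guards against resonances
    rng = np.random.default_rng(seed)
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(phi * np.cos(j * c))   # translation variables
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                      for n in range(1, ncut + 1)])
        Ks.append(np.corrcoef(np.arange(1, ncut + 1), M)[0, 1])
    return float(np.median(Ks))

def logistic_series(r, n=2000, x0=0.4, transient=200):
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out
```

    K near 1 indicates chaos (the translation variables diffuse like a random walk), while K near 0 indicates regular dynamics (they stay bounded).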

  5. Capillary-mediated interface perturbations: Deterministic pattern formation

    Science.gov (United States)

    Glicksman, Martin E.

    2016-09-01

    Leibniz-Reynolds analysis identifies a 4th-order capillary-mediated energy field that is responsible for shape changes observed during melting, and for interface speed perturbations during crystal growth. Field-theoretic principles also show that capillary-mediated energy distributions cancel over large length scales, but modulate the interface shape on smaller mesoscopic scales. Speed perturbations reverse direction at specific locations where they initiate inflection and branching on unstable interfaces, thereby enhancing pattern complexity. Simulations of pattern formation by several independent groups of investigators using a variety of numerical techniques confirm that shape changes during both melting and growth initiate at locations predicted from interface field theory. Finally, limit cycles occur as an interface and its capillary energy field co-evolve, leading to synchronized branching. Synchronous perturbations produce classical dendritic structures, whereas asynchronous perturbations observed in isotropic and weakly anisotropic systems lead to chaotic-looking patterns that remain nevertheless deterministic.

  6. Deterministic simulation of thermal neutron radiography and tomography

    Science.gov (United States)

    Pal Chowdhury, Rajarshi; Liu, Xin

    2016-05-01

    In recent years, thermal neutron radiography and tomography have gained much attention as one of the nondestructive testing methods. However, the application of thermal neutron radiography and tomography is hindered by their technical complexity, radiation shielding, and time-consuming data collection processes. Monte Carlo simulations have been developed in the past to improve the neutron imaging facility's ability. In this paper, a new deterministic simulation approach has been proposed and demonstrated to simulate neutron radiographs numerically using a ray tracing algorithm. This approach has made the simulation of neutron radiographs much faster than by previously used stochastic methods (i.e., Monte Carlo methods). The major problem with neutron radiography and tomography simulation is finding a suitable scatter model. In this paper, an analytic scatter model has been proposed that is validated by a Monte Carlo simulation.
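
    Neither the paper's ray-tracing algorithm nor its analytic scatter model is reproduced in the abstract; a minimal attenuation-only sketch of the underlying idea, Beer-Lambert transmission along parallel rays through a voxel grid, might look like:

```python
import numpy as np

def simulate_radiograph(mu, dx=0.1):
    """Attenuation-only radiograph of a voxel volume mu[z, y, x] (cm^-1):
    parallel rays travel along z; scatter is ignored in this sketch."""
    path_integral = mu.sum(axis=0) * dx   # line integral of mu along each ray
    return np.exp(-path_integral)         # Beer-Lambert transmission per pixel

# Hypothetical example: an absorbing column in an otherwise empty volume.
mu = np.zeros((10, 4, 4))                 # 10 voxels deep, 4 x 4 detector
mu[:, 1, 1] = 2.0                         # mu = 2 cm^-1 along one ray
image = simulate_radiograph(mu, dx=0.1)   # 1 cm total path through absorber
```

    A full simulation would add the scatter contribution on top of this direct-transmission term.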

  7. Entanglement and deterministic quantum computing with one qubit

    Science.gov (United States)

    Boyer, Michel; Brodutch, Aharon; Mor, Tal

    2017-02-01

    The role of entanglement and quantum correlations in complex physical systems and quantum information processing devices has become a topic of intense study in the past two decades. In this work we present tools for learning about entanglement and quantum correlations in dynamical systems where the quantum states are mixed and the eigenvalue spectrum is highly degenerate. We apply these results to the deterministic quantum computing with one qubit (DQC1) computation model and show that the states generated in a DQC1 circuit have an eigenvalue structure that makes them difficult to entangle, even when they are relatively far from the completely mixed state. Our results strengthen the conjecture that it may be possible to find quantum algorithms that do not generate entanglement and yet still have an exponential advantage over their classical counterparts.
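
    The DQC1 model itself is easy to simulate classically for small registers: one pure control qubit plus a maximally mixed target register yields the normalized trace of a unitary U in the control qubit's expectation values. A minimal density-matrix sketch (using the standard conventions, not anything specific to this paper):

```python
import numpy as np

def dqc1_control_expectations(U):
    """Simulate the DQC1 circuit: control in |+>, target maximally mixed,
    then controlled-U. Returns (<X>, <Y>) of the control qubit, which equal
    Re tr(U)/d and Im tr(U)/d for d = dim(U)."""
    d = U.shape[0]
    plus = 0.5 * np.ones((2, 2), dtype=complex)          # |+><+|
    rho = np.kron(plus, np.eye(d) / d)                   # control (x) target
    C = np.block([[np.eye(d), np.zeros((d, d))],
                  [np.zeros((d, d)), U]])                # controlled-U
    rho = C @ rho @ C.conj().T
    # partial trace over the target register
    rho_c = np.trace(rho.reshape(2, d, 2, d), axis1=1, axis2=3)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    return (float(np.real(np.trace(X @ rho_c))),
            float(np.real(np.trace(Y @ rho_c))))
```

    The quantum advantage claimed for DQC1 lies in estimating this trace for exponentially large U, where the classical density-matrix simulation above becomes infeasible.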

  8. Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, L D

    2004-10-26

    This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high-fidelity finite element and discrete element codes on massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when the intelligence on which the modeling is based is uncertain. The technique presented here accounts for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability-of-damage curves that account for uncertainty within the sample are computed, enabling the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.

  9. Distributed Design of a Central Service to Ensure Deterministic Behavior

    Directory of Open Access Journals (Sweden)

    Imran Ali Jokhio

    2012-10-01

    Full Text Available A central authentication service for the EPC (Electronic Product Code) system architecture was proposed in our previous work. A challenge for any central service is how to guarantee a bounded delay while processing emergent data. The growing data in the EPC system architecture are tag data. Therefore, authenticating an increasing number of tags in the central authentication service with a deterministic time response is investigated, and a distributed authentication service is designed in a layered approach. A distributed design of tag searching services in the SOA (Service Oriented Architecture) style is also presented. Using the SOA architectural style, a self-adaptive authentication service over the Cloud is also proposed for the central authentication service, which may also be extended to other applications.

  10. Deterministic superresolution with coherent states at the shot noise limit

    DEFF Research Database (Denmark)

    Distante, Emanuele; Jezek, Miroslav; Andersen, Ulrik L.

    2013-01-01

    Interference of light fields plays an important role in various high-precision measurement schemes. It has been shown that superresolving phase measurements beyond the standard coherent state limit can be obtained either by using maximally entangled multiparticle states of light or using complex detection approaches. Here we show that superresolving phase measurements at the shot noise limit can be achieved without resorting to nonclassical optical states or to low-efficiency detection processes. Using robust coherent states of light, high-efficiency homodyne detection, and a deterministic binarization processing technique, we show a narrowing of the interference fringes that scales with 1/√N, where N is the mean number of photons of the coherent state. Experimentally we demonstrate a 12-fold narrowing at the shot noise limit.

  11. Turning Indium Oxide into a Superior Electrocatalyst: Deterministic Heteroatoms

    Science.gov (United States)

    Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P.; Zhao, Hui Jun; Yang, Hua Gui

    2013-10-01

    Efficient electrocatalysts for many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount for importing surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning an inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than DSCs fabricated with Pt. This successful strategy provides a rational design for transforming abundant materials into highly efficient electrocatalysts. More importantly, the discovery that the transparent conductive oxide (TCO) commonly used in DSCs can be turned into a counter electrode material means that, besides decreasing cost, the device structure and processing techniques of DSCs can be simplified in the future.

  12. Quantum annealing correction with minor embedding

    Science.gov (United States)

    Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.

    2015-10-01

    Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.
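
    The energy-minimization decoding itself depends on the structure of the encoded graph; as a simpler, commonly used alternative in quantum annealing correction, a majority-vote decoder over the physical copies of each logical spin can be sketched as follows (illustrative only, not the decoding scheme of this paper):

```python
import numpy as np

def majority_decode(physical, copies):
    """Majority-vote decoding: each logical spin is represented by `copies`
    physical spins (values +1/-1); the logical value is the sign of their sum.
    Assumes an odd number of copies so that ties cannot occur."""
    grouped = np.asarray(physical).reshape(-1, copies)
    return np.sign(grouped.sum(axis=1)).astype(int)
```

    Energy-minimization decoding improves on this by choosing, among the candidate logical states, the one with lowest energy on the encoded problem Hamiltonian.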

  13. Hemoglobin Variants in Mice

    Energy Technology Data Exchange (ETDEWEB)

    Popp, Raymond A.

    1965-04-22

    Variability among mammalian hemoglobins was observed many years ago (35). The chemical basis for differences among hemoglobins from different species of mammals has been studied by several investigators (5, 11, 18, 48). As well as interspecies differences, hemoglobin variants are frequently found within a species of mammals (2, 3, 7, 16). The inheritance of these intraspecies variants can be studied, and pedigrees indicate that the type of hemoglobin synthesized in an individual is genetically controlled (20). Several of the variant human hemoglobins are functionally deficient (7, 16). Such hemoglobin anomalies are of basic interest to man because of the vital role of hemoglobin for transporting oxygen to all tissues of the body.

  14. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    Science.gov (United States)

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…

  15. Annealing study of a bistable cluster defect

    Energy Technology Data Exchange (ETDEWEB)

    Junkes, Alexandra, E-mail: alexandra.junkes@desy.d [Institute for Experimental Physics, University of Hamburg, 22761 Hamburg (Germany); Eckstein, Doris [Institute for Experimental Physics, University of Hamburg, 22761 Hamburg (Germany); Pintilie, Ioana [Institute for Experimental Physics, University of Hamburg, 22761 Hamburg (Germany); NIMP Bucharest-Margurele (Romania); Makarenko, Leonid F. [Belarusian State University, Minsk (Belarus); Fretwurst, Eckhart [Institute for Experimental Physics, University of Hamburg, 22761 Hamburg (Germany)

    2010-01-11

    This work deals with the influence of neutron and proton induced cluster related defects on the properties of n-type silicon detectors. Defect concentrations were obtained by means of Deep Level Transient Spectroscopy (DLTS) and Thermally Stimulated Current (TSC) technique, while the full depletion voltage and the reverse current were extracted from capacitance-voltage (C-V) and current-voltage (I-V) characteristics. The annealing behaviour of the reverse current can be correlated with the annealing of the cluster related defect levels labeled E4a and E4b by making use of their bistability. This bistability was characterised by isochronal and isothermal annealing studies and it was found that the development with increasing annealing temperature is similar to that of divacancies. This supports the assumption that E4a and E4b are vacancy related defects. In addition we observe an influence of the disordered regions on the shape and height of the DLTS or TSC signals corresponding to point defects like the vacancy-oxygen complex.

  16. Thermal annealing in neutron-irradiated tribromobenzenes

    DEFF Research Database (Denmark)

    Siekierska, K.E.; Halpern, A.; Maddock, A. G.

    1968-01-01

    The distribution of 82Br among various products in neutron-irradiated isomers of tribromobenzene has been investigated, and the effect of thermal annealing examined. Reversed-phase partition chromatography was employed for the determination of radioactive organic products, and atomic bromine...

  17. Fast Algorithm for Finding Unicast Capacity of Linear Deterministic Wireless Relay Networks

    CERN Document Server

    Shi, Cuizhu

    2009-01-01

    The deterministic channel model for wireless relay networks proposed by Avestimehr, Diggavi and Tse '07 has captured the broadcast and interference nature of wireless communications and has been widely used in approximating the capacity of wireless relay networks. The authors generalized the max-flow min-cut theorem to linear deterministic wireless relay networks and characterized the unicast capacity of such a deterministic network as the minimum rank of all the binary adjacency matrices describing source-destination cuts, whose number grows exponentially with the size of the network. In this paper, we develop a fast algorithm for finding the unicast capacity of a linear deterministic wireless relay network by finding the maximum number of linearly independent paths using the idea of path augmentation. We develop a modified depth-first search algorithm tailored for linear deterministic relay networks for finding linearly independent paths, whose total number is proved to equal the unicast capacity of the u...

  18. Chaos theory as a bridge between deterministic and stochastic views for hydrologic modeling

    Science.gov (United States)

    Sivakumar, B.

    2009-04-01

    Two modeling approaches are prevalent in hydrology: deterministic and stochastic. The deterministic approach may be supported on the basis of the 'permanent' nature of the ocean-earth-atmosphere structure and the 'cyclical' nature of mechanisms that take place within it. The stochastic approach may be favored because of the 'highly irregular and complex nature' of hydrologic phenomena and our 'limited ability to observe' the detailed variations. With these two contrasting concepts, asking whether hydrologic phenomena are better modeled using a deterministic approach or a stochastic approach is meaningless. In fact, for most (if not all) hydrologic phenomena, the deterministic approach and the stochastic approach are complementary to each other. This may be supported by our observation of both 'deterministic' and 'random' nature of hydrologic phenomena at 'one or more scales' in time and/or space; for instance, there exists a significant deterministic nature in river flow in the form of seasonality and annual cycle, whereas the interactions of the various mechanisms involved in the river flow phenomenon and their various degrees of nonlinearity bring randomness. It is reasonable, therefore, to argue that use of an integrated modeling approach that incorporates both the deterministic and the stochastic components will produce greater success compared to either a deterministic approach or a stochastic approach independently. This study discusses the role of chaos theory as a potential avenue to the formulation of an integrated deterministic-stochastic approach. Through presentation of its fundamental principles (nonlinear interdependence, hidden determinism and order, sensitivity to initial conditions) and their relevance in hydrologic systems, the study contends that chaos theory can serve as a bridge between the deterministic and stochastic 'extreme' views and offer a 'middle-ground' approach. Specific examples of chaos theory

  19. Influence of time of annealing on anneal hardening effect of a cast CuZn alloy

    Directory of Open Access Journals (Sweden)

    Nestorović Svetlana

    2003-01-01

    Full Text Available The investigated cast copper alloy contained 8 at% Zn as solute. For comparison, parallel specimens were made from cast pure copper. The copper and copper alloy were subjected to cold rolling with final reductions of 30, 50 and 70%. The cold rolled copper and copper alloy samples were isochronally and isothermally annealed up to the recrystallization temperature. After that, the values of hardness, strength and electrical conductivity were measured and X-ray analysis was performed. These investigations show that the anneal hardening effect in the alloy was attained below the recrystallization temperature, in the range of 180-300°C, accompanied by an increase in hardness. The amount of strengthening increases with increasing degree of prior cold work. The X-ray analysis also shows a change of lattice parameter during annealing when the anneal hardening effect was attained.

  20. Simulation of Broadband Time Histories Combining Deterministic and Stochastic Methodologies

    Science.gov (United States)

    Graves, R. W.; Pitarka, A.

    2003-12-01

    We present a methodology for generating broadband (0 - 10 Hz) ground motion time histories using a hybrid technique that combines a stochastic approach at high frequencies with a deterministic approach at low frequencies. Currently, the methodology is being developed for moderate and larger crustal earthquakes, although the technique can theoretically be applied to other classes of events as well. The broadband response is obtained by summing the separate responses in the time domain using matched Butterworth filters centered at 1 Hz. We use a kinematic description of fault rupture, incorporating spatial heterogeneity in slip, rupture velocity and rise time by discretizing an extended finite fault into a number of smaller subfaults. The stochastic approach sums the response for each subfault assuming a random phase, an omega-squared source spectrum and simplified Green's functions (Boore, 1983). Gross impedance effects are incorporated using quarter wavelength theory (Boore and Joyner, 1997) to bring the response to a generic baserock level (e.g., Vs = 1000 m/s). The deterministic approach sums the response for many point sources distributed across each subfault. Wave propagation is modeled using a 3D viscoelastic finite difference algorithm with the minimum shear wave velocity set at 620 m/s. Short- and mid-period amplification factors provided by Borcherdt (1994) are used to develop frequency dependent site amplification functions. The amplification functions are applied to the stochastic and deterministic responses separately since these may have different (computational) reference site velocities. The site velocity is taken as the measured or estimated value of VS30. The use of these amplification factors is attractive because they account for non-linear response by considering the input acceleration level. We note that although these design factors are strictly defined for response spectra, we have applied them to the Fourier amplitude spectra of our
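
    A minimal sketch of the matched-filter summation step: the abstract specifies matched Butterworth filters centered at 1 Hz, and here we assume a frequency-domain implementation with power-complementary Butterworth amplitude responses (the authors' exact filter realization may differ):

```python
import numpy as np

def hybrid_broadband(low_det, high_sto, dt, fc=1.0, order=4):
    """Combine a deterministic low-frequency trace with a stochastic
    high-frequency trace using matched (power-complementary) Butterworth
    amplitude filters centered at fc, applied in the frequency domain."""
    n = len(low_det)
    f = np.fft.rfftfreq(n, dt)
    lp = 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))  # low-pass amplitude
    hp = np.sqrt(1.0 - lp ** 2)                        # matched high-pass
    low = np.fft.irfft(np.fft.rfft(low_det) * lp, n)
    high = np.fft.irfft(np.fft.rfft(high_sto) * hp, n)
    return low + high
```

    Because the two amplitude responses are power complementary, each input is passed essentially unchanged within its own band and attenuated in the other band.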

  1. An Effect of Annealing on Shielding Properties of Shungite

    Science.gov (United States)

    Belousova, E. S.; Mahmoud, M. Sh.; Lynkou, L. M.

    2013-05-01

    Annealing of shungite is studied in oxidizing conditions in a chamber with NH4Cl, and in vacuum at 900 °C for 2 h. Frequency dependences of the transmission and reflection coefficients of annealed shungite are measured in the frequency range of 8-12 GHz. The minimum reflection at 8-10 GHz was shown for shungite annealed in the oxidizing atmosphere.

  2. Stochastic versus Deterministic Approach to Coordinated Supply Chain Scheduling

    Directory of Open Access Journals (Sweden)

    Tadeusz Sawik

    2017-01-01

    Full Text Available The purpose of this paper is to consider coordinated selection of supply portfolio and scheduling of production and distribution in supply chains under regional and local disruption risks. Unlike many papers that assume the all-or-nothing supply disruption pattern, in this paper, only the regional disruptions belong to the all-or-nothing disruption category, while for the local disruptions all disruption levels can be considered. Two biobjective decision-making models, stochastic, based on the wait-and-see approach, and deterministic, based on the expected value approach, are proposed and compared to optimize the trade-off between expected cost and expected service. The main findings indicate that the stochastic programming wait-and-see approach with its ability to handle uncertainty by probabilistic scenarios of disruption events and the much simpler expected value problem, in which the random parameters are replaced by their expected values, lead to similar expected performance of a supply chain under multilevel disruptions. However, the stochastic approach, which accounts for all potential disruption scenarios, leads to a more diversified supply portfolio that will hedge against a variety of scenarios.

  3. Deterministic quantum nonlinear optics with single atoms and virtual photons

    Science.gov (United States)

    Kockum, Anton Frisk; Miranowicz, Adam; Macrì, Vincenzo; Savasta, Salvatore; Nori, Franco

    2017-06-01

    We show how analogs of a large number of well-known nonlinear-optics phenomena can be realized with one or more two-level atoms coupled to one or more resonator modes. Through higher-order processes, where virtual photons are created and annihilated, an effective deterministic coupling between two states of such a system can be created. In this way, analogs of three-wave mixing, four-wave mixing, higher-harmonic and -subharmonic generation (i.e., up- and down-conversion), multiphoton absorption, parametric amplification, Raman and hyper-Raman scattering, the Kerr effect, and other nonlinear processes can be realized. In contrast to most conventional implementations of nonlinear optics, these analogs can reach unit efficiency, only use a minimal number of photons (they do not require any strong external drive), and do not require more than two atomic levels. The strength of the effective coupling in our proposed setups becomes weaker the more intermediate transition steps are needed. However, given the recent experimental progress in ultrastrong light-matter coupling and improvement of coherence times for engineered quantum systems, especially in the field of circuit quantum electrodynamics, we estimate that many of these nonlinear-optics analogs can be realized with currently available technology.

  4. Deterministic Polynomial-Time Algorithms for Designing Short DNA Words

    CERN Document Server

    Kao, Ming-Yang; Sun, He; Zhang, Yong

    2012-01-01

    Designing short DNA words is a problem of constructing a set (i.e., code) of n DNA strings (i.e., words) with the minimum length such that the Hamming distance between each pair of words is at least k and the n words satisfy a set of additional constraints. This problem has applications in, e.g., DNA self-assembly and DNA arrays. Previous works include those that extended results from coding theory to obtain bounds on code and word sizes for biologically motivated constraints and those that applied heuristic local searches, genetic algorithms, and randomized algorithms. In particular, Kao, Sanghi, and Schweller (2009) developed polynomial-time randomized algorithms to construct n DNA words of length within a multiplicative constant of the smallest possible word length (e.g., 9 max{log n, k}) that satisfy various sets of constraints with high probability. In this paper, we give deterministic polynomial-time algorithms to construct DNA words based on derandomization techniques. Our algorithms can construct n DN...
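
    The derandomized algorithms themselves are not reproduced in the abstract; as a hypothetical illustration of the constraint being satisfied, a naive deterministic greedy (lexicographic first-fit) construction of DNA words with pairwise Hamming distance at least k is:

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def greedy_dna_code(length, k):
    """Deterministic greedy construction: scan all words of the given length
    over {A, C, G, T} in lexicographic order, keeping each word whose Hamming
    distance to every kept word is at least k. (Illustrative only; not the
    paper's derandomized algorithm, and exponential in the word length.)"""
    code = []
    for word in product("ACGT", repeat=length):
        if all(hamming(word, c) >= k for c in code):
            code.append(word)
    return ["".join(w) for w in code]
```

    Since the greedy code is maximal, its size is at least the Gilbert-Varshamov bound 4^L / V(L, k-1); the paper's contribution is achieving near-optimal word length in polynomial time under additional biological constraints.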

  5. Deterministic and Probabilistic Analysis against Anticipated Transient Without Scram

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sun Mi; Kim, Ji Hwan [KHNP Central Research Institute, Daejeon (Korea, Republic of); Seok, Ho [KEPCO Engineering and Construction, Daejeon (Korea, Republic of)

    2016-10-15

    An Anticipated Transient Without Scram (ATWS) is an Anticipated Operational Occurrence (AOO) accompanied by a failure of the reactor trip when required. By a suitable combination of inherent characteristics and diverse systems, the reactor design needs to reduce the probability of an ATWS, limit any core damage, and prevent loss of integrity of the reactor coolant pressure boundary if one occurs. This study focuses on the deterministic analysis of ATWS events with respect to Reactor Coolant System (RCS) overpressure and fuel integrity for the EU-APR. Additionally, this report presents the Probabilistic Safety Assessment (PSA) reflecting those diverse systems. The analysis performed for the ATWS event indicates that the NSSS reaches a controlled, safe state owing to the addition of boron into the core via the EBS pump flow upon the EBAS actuation by the DPS. Decay heat is removed through the MSADVs and the auxiliary feedwater. During the ATWS event, the RCS pressure boundary is maintained by the operation of the primary and secondary safety valves. Consequently, the acceptance criteria are satisfied by installing the DPS and EBS in addition to the inherent safety characteristics.

  6. A Modified Deterministic Model for Reverse Supply Chain in Manufacturing

    Directory of Open Access Journals (Sweden)

    R. N. Mahapatra

    2013-01-01

    Full Text Available Technology is becoming pervasive across all facets of our lives today. Technology innovation leading to development of new products and enhancement of features in existing products is happening at a faster pace than ever. It is becoming difficult for customers to keep up with the deluge of new technology. This trend has resulted in a gross increase in the use of new materials and decreased customers' interest in relatively older products. This paper deals with a novel model in which the stationary demand is fulfilled by remanufactured products along with newly manufactured products. The current model is based on the assumption that the returned items from the customers can be remanufactured at a fixed rate. The remanufactured products are assumed to be as good as the new ones in terms of features, quality, and worth. A methodology is used for the simultaneous calculation of the optimum level of newly manufactured items and the optimum level of remanufactured products. The model is formulated depending on the relationship between different parameters. An interpretive-modelling-based approach has been employed to model the reverse logistics variables typically found in supply chains (SCs). For simplicity of calculation, a deterministic approach is implemented for the proposed model.

  7. The dynamical system of weathering: deterministic and stochastic analysis

    Science.gov (United States)

    Calabrese, S.; Parolari, A.; Porporato, A. M.

    2016-12-01

    The critical zone is fundamental to human society as it provides most ecosystem services, such as food and fresh water. However, climate change and intense land use are threatening the critical zone, so theoretical frameworks to predict its future response are needed. In this talk, a new modeling approach to evaluate the effect of hydrologic fluctuations on soil water chemistry and weathering reactions is analyzed by means of a dynamical system approach. In this model, equilibrium is assumed for the aqueous carbonate system while a kinetic law is adopted for the weathering reaction. Also, through an algebraic manipulation, we eliminate the equilibrium reactions and reduce the order of the system. We first analyze the deterministic temporal evolution, and study the stability of the nonlinear system and its trajectories as a function of the hydro-climatic parameters. By introducing a stochastic rainfall forcing, we then analyze the system probabilistically, and through averaging techniques determine the inter-annual response of the nonlinear stochastic system to the climatic regime and hydrologic parameters (e.g., ET, soil texture). Some fundamental thermodynamic aspects of the chemical reactions are also discussed. By introducing the weathering reaction into the system, any mineral, such as calcium carbonate or a silicate mineral, can be considered.

  8. Fractionation by shape in deterministic lateral displacement microfluidic devices

    CERN Document Server

    Jiang, Mingliang; Drazer, German

    2014-01-01

    We investigate the migration of particles of different geometrical shapes and sizes in a scaled-up model of a gravity-driven deterministic lateral displacement (g-DLD) device. Specifically, particles move through a square array of cylindrical posts as they settle under the action of gravity. We performed experiments that cover a broad range of orientations of the driving force (gravity) with respect to the columns (or rows) in the square array of posts. We observe that, as the forcing angle increases, particles initially locked to move parallel to the columns in the array begin to move across the columns of obstacles and migrate at angles different from zero. We measure the probability that a particle moves across a column of obstacles, and define the critical angle θc as the forcing angle at which this probability is 1/2. We show that the critical angle depends on both particle size and shape, thus enabling both size- and shape-based separations. Finally, we show that using the diameter of the inscribe...

  9. Three-dimensional gravity-driven deterministic lateral displacement

    CERN Document Server

    Du, Siqi

    2016-01-01

    We present a simple solution to enhance the separation ability of deterministic lateral displacement (DLD) systems by expanding the two-dimensional nature of these devices and driving the particles into size-dependent, fully three-dimensional trajectories. Specifically, we drive the particles through an array of long cylindrical posts, such that they not only move in the plane perpendicular to the posts as in traditional two-dimensional DLD systems (in-plane motion), but also along the axial direction of the solid posts (out-of-plane motion). We show that the (projected) in-plane motion of the particles is completely analogous to that observed in 2D-DLD systems. In fact, a theoretical model originally developed for force-driven, two-dimensional DLD systems accurately describes the experimental results. More importantly, we analyze the particles' out-of-plane motion and observe that, for certain orientations of the driving force, significant differences in the out-of-plane displacement depending on particle siz...

  10. Deterministic versus evidence-based attitude towards clinical diagnosis.

    Science.gov (United States)

    Soltani, Akbar; Moayyeri, Alireza

    2007-08-01

    Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability that the event is related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances of medical decision making. While 'probabilistic or evidence-based' reasoning seems to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics compared with the 'deterministic or mathematical attitude'. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and use of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests to refine probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
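    The probability refinement with likelihood ratios described above is sequential Bayesian updating on the odds scale. A minimal sketch, with purely illustrative pretest probability and likelihood-ratio values (not taken from the record):

```python
def update_probability(pretest_prob, likelihood_ratio):
    """Post-test probability via the odds form of Bayes' rule:
    posttest_odds = pretest_odds * likelihood_ratio."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# illustrative numbers: 10% pretest probability, then two positive
# test results with assumed positive likelihood ratios of 5 and 3
p = 0.10
for lr in (5.0, 3.0):
    p = update_probability(p, lr)
    print(f"after a positive test with LR+ = {lr}: probability = {p:.3f}")
```

    Each test multiplies the current odds by its likelihood ratio, which is why a series of tests refines the probability step by step.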

  11. Entrepreneurs, chance, and the deterministic concentration of wealth.

    Directory of Open Access Journals (Sweden)

    Joseph E Fargione

    Full Text Available In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels.
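    The mechanism described above, chance combined with compounding returns, can be illustrated with a minimal individual-based simulation (an illustrative sketch, not the authors' model; the population size, horizon, and return volatility are assumed values):

```python
import random

random.seed(1)
n_entrepreneurs, years = 1000, 200
wealth = [1.0] * n_entrepreneurs  # everyone starts with equal wealth

for _ in range(years):
    # identical expected return for all; variation across entrepreneurs
    # and across years is pure chance (lognormal multiplicative returns)
    wealth = [w * random.lognormvariate(0.0, 0.3) for w in wealth]

total = sum(wealth)
top_1pct_share = sum(sorted(wealth)[-10:]) / total
print(f"share of total wealth held by the top 1%: {top_1pct_share:.2f}")
```

    With equal wealth the top 1% would hold exactly 1%; random multiplicative returns alone drive the share far above that.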

  12. Histone variants and lipid metabolism

    NARCIS (Netherlands)

    Borghesan, Michela; Mazzoccoli, Gianluigi; Sheedfar, Fareeba; Oben, Jude; Pazienza, Valerio; Vinciguerra, Manlio

    2014-01-01

    Within nucleosomes, canonical histones package the genome, but they can be opportunely replaced with histone variants. The incorporation of histone variants into the nucleosome is a chief cellular strategy to regulate transcription and cellular metabolism. In pathological terms, cellular steatosis

  13. An image reconstruction algorithm for electrical capacitance tomography based on simulated annealing particle swarm optimization

    Directory of Open Access Journals (Sweden)

    P. Wang

    2015-04-01

    Full Text Available In this paper, we introduce a novel image reconstruction algorithm, named SAP, based on Least Squares Support Vector Machines (LS-SVM) and Simulated Annealing Particle Swarm Optimization (APSO). The algorithm introduces simulated annealing ideas into Particle Swarm Optimization (PSO): it adopts a cooling-process function in place of the usual inertia weight function, constructing a time-variant inertia weight that features an annealing mechanism. It then employs the APSO procedure to search for the optimized solution for Electrical Capacitance Tomography (ECT) image reconstruction. In order to overcome the soft-field characteristics of the ECT sensitivity field, image samples with typical flow patterns are chosen for training with LS-SVM. During training, the capacitance error caused by the soft-field characteristics is predicted and then used to construct the fitness function of the particle swarm optimization. Experimental results demonstrate that the proposed SAP algorithm has a quick convergence rate and outperforms the classic Landweber and Newton-Raphson algorithms on image reconstruction.
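    The annealing-style time-variant inertia weight can be sketched in a generic PSO loop (an illustrative sketch on a toy sphere objective with an assumed exponential cooling-type inertia function; the paper's LS-SVM capacitance-error fitness for ECT is not reproduced):

```python
import math
import random

random.seed(42)

def objective(x):
    """Toy sphere objective; the paper instead uses an LS-SVM-based
    capacitance-error fitness, which is not reproduced here."""
    return sum(v * v for v in x)

dim, n_particles, iters, v_max = 5, 20, 300, 4.0
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)

for t in range(iters):
    # annealing-inspired inertia weight: decays like a cooling-process
    # function instead of the usual linear ramp (illustrative form)
    w = 0.4 + 0.5 * math.exp(-t / 100.0)
    for k in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            v = (w * vel[k][d]
                 + 2.0 * r1 * (pbest[k][d] - pos[k][d])
                 + 2.0 * r2 * (gbest[d] - pos[k][d]))
            vel[k][d] = max(-v_max, min(v_max, v))  # clamp velocity
            pos[k][d] += vel[k][d]
        if objective(pos[k]) < objective(pbest[k]):
            pbest[k] = pos[k][:]
    gbest = min(pbest, key=objective)

print(f"best objective value found: {objective(gbest):.6f}")
```

    A high inertia weight early on favours exploration, and the cooling-style decay gradually shifts the swarm towards exploitation, mirroring an annealing schedule.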

  14. DESIGNING A DIFFRACTIVE OPTICAL ELEMENT FOR CONTROLLING THE BEAM PROFILE IN A THREE-DIMENSIONAL SPACE USING THE SIMULATED ANNEALING ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    LIANG WEN-XI; ZHANG JING-JUAN; LÜ JUN-FENG; LIAO RUI

    2001-01-01

    We have designed a spatially quantized diffractive optical element (DOE) for controlling the beam profile in a three-dimensional space with the help of the simulated annealing (SA) algorithm. In this paper, we investigate the annealing schedule and the neighbourhood, the two deterministic parameters of the process that govern the quality of the SA algorithm. The algorithm is employed to solve the discrete stochastic optimization problem of the design of a DOE. The objective function which constrains the optimization is also studied. The computed results demonstrate that the algorithm converges stably to an optimal solution close to the global optimum within an acceptable computing time. The results meet the design requirement well and are applicable.

  15. Migraine Variants And Beyond

    Directory of Open Access Journals (Sweden)

    Chakravarty A

    2002-01-01

    Full Text Available The classic presenting features of migraine, both with and without aura, have been clearly defined. Occasionally, however, migrainous headaches are accompanied by the abrupt appearance of focal and ominous neurological signs. Such attacks can be labelled migraine variants, and the diagnosis in reality is one made by exclusion of other CNS diseases. Some but not all such conditions are mentioned in the International Headache Society (IHS) classification under the general heading of migraine with aura. Rarely, the focal neurological deficit may outlast the migraine attack by days, occasionally with the appearance of structural brain lesions on neuroimaging. Such attacks have been labelled complicated migraine by the IHS. The present review deals with the clinical, radiologic and pathophysiologic aspects of both these conditions - migraine variants and complicated migraine.

  16. A unified controllability/observability theory for some stochastic and deterministic partial differential equations

    CERN Document Server

    Zhang, Xu

    2010-01-01

    The purpose of this paper is to present a universal approach to the study of controllability/observability problems for infinite dimensional systems governed by some stochastic/deterministic partial differential equations. The crucial analytic tool is a class of fundamental weighted identities for stochastic/deterministic partial differential operators, via which one can derive the desired global Carleman estimates. This method can also give a unified treatment of the stabilization, global unique continuation, and inverse problems for some stochastic/deterministic partial differential equations.

  17. Deterministic chaos in government debt dynamics with mechanistic primary balance rules

    CERN Document Server

    Lindgren, Jussi Ilmari

    2011-01-01

    This paper shows that with mechanistic primary budget rules and some simple assumptions on interest rates, the well-known debt dynamics equation transforms into the infamous logistic map. The logistic map has peculiar and rich nonlinear behaviour and can exhibit deterministic chaos in certain parameter regimes. Deterministic chaos implies the butterfly effect, which is qualitatively important: because chaotic systems are extremely sensitive to initial conditions, even deterministic budget rules produce unpredictable behaviour of the debt-to-GDP ratio.
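    The sensitivity to initial conditions invoked above is easy to demonstrate with the logistic map itself (a generic numerical sketch; the parameter r = 4 and the initial values are illustrative and not taken from the paper):

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

# two nearly identical initial "debt ratios", differing by one part in 10^9
x, y = 0.2, 0.2 + 1e-9
diffs = []
for step in range(60):
    x, y = logistic(x), logistic(y)
    diffs.append(abs(x - y))

# the tiny initial gap is amplified until the two trajectories decorrelate
print(f"initial gap: 1e-09, largest gap over 60 steps: {max(diffs):.3f}")
```

    The gap between the two trajectories roughly doubles per iteration at r = 4, so a perturbation of one part in a billion grows to order one within a few dozen steps.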

  18. Deterministic Joint Remote Preparation of an Arbitrary Two-Qubit State Using the Cluster State

    Institute of Scientific and Technical Information of China (English)

    WANG Ming-Ming; CHEN Xiu-Bo; YANG Yi-Xian

    2013-01-01

    Recently, deterministic joint remote state preparation (JRSP) schemes have been proposed to achieve 100% success probability. In this paper, we propose a new deterministic JRSP scheme for an arbitrary two-qubit state, using the six-qubit cluster state as the shared quantum resource. Compared with previous schemes, our scheme has high efficiency, since less quantum resource is required and some additional unitary operations and measurements are unnecessary. We point out that the existing two types of deterministic JRSP schemes, based on GHZ states and EPR pairs, are equivalent.

  19. Optimization Via Open System Quantum Annealing

    Science.gov (United States)

    2016-01-07

    Error corrected quantum annealing with hundreds of qubits (07/2013), Siddhartha Santra, Gregory Quiroz, Greg Ver Steeg, Daniel Lidar. MAX 2-SAT... Zhihui Wang, Tameem Albash, Iman Marvian, and Walter Vinci. Graduate Students: Gregory Quiroz, Kristen Pudenz, Anurag Mishra, Milad Marvian... Quiroz, G. Ver Steeg, D.A. Lidar, "MAX 2-SAT with up to 108 qubits", New J. Phys. 16, 045006 (2014). 20. S. Boixo, T. Ronnow, S. Isakov, Z. Wang

  20. Simulated annealing algorithm for optimal capital growth

    Science.gov (United States)

    Luo, Yong; Zhu, Bo; Tang, Yong

    2014-08-01

    We investigate the problem of dynamic optimal capital growth of a portfolio. We develop a general framework in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.

  1. Variants of Uncertainty

    Science.gov (United States)

    1981-05-15

    Variants of Uncertainty. Daniel Kahneman, University of British Columbia; Amos Tversky, Stanford University. May 15, 1981... (Dennett, 1979) in which different parts have access to different data, assign them different weights and hold different views of the situation... probable and the provable. Oxford: Clarendon Press, 1977. Dennett, D.C. Brainstorms. Hassocks: Harvester, 1979. Donchin, E., Ritter, W. & McCallum, W.C

  2. Annealing free magnetic tunnel junction sensors

    Science.gov (United States)

    Knudde, S.; Leitao, D. C.; Cardoso, S.; Freitas, P. P.

    2017-04-01

    Annealing is a major step in the fabrication of magnetic tunnel junctions (MTJs). It sets the exchange bias between the pinned and antiferromagnetic layers, and helps to increase the tunnel magnetoresistance (TMR) in both amorphous and crystalline junctions. Recent research on MTJs has focused on MgO-based structures due to their high TMR. However, the strict process control and mandatory annealing step can limit the scope of the application of these structures as sensors. In this paper, we present AlOx-based MTJs that are produced by ion beam sputtering and remote plasma oxidation and show optimum transport properties with no annealing. The microfabricated devices show TMR values of up to 35% and using NiFe/CoFeB free layers provides tunable linear ranges, leading to coercivity-free linear responses with sensitivities of up to 5.5%/mT. The top-pinned synthetic antiferromagnetic reference shows a stability of about 30 mT in the microfabricated devices. Sensors with linear ranges of up to 60 mT are demonstrated. This paves the way for the integration of MTJ sensors in heat-sensitive applications such as flexible substrates, or for the design of low-footprint on-chip multiaxial sensing devices.

  3. Variants of glycoside hydrolases

    Energy Technology Data Exchange (ETDEWEB)

    Teter, Sarah; Ward, Connie; Cherry, Joel; Jones, Aubrey; Harris, Paul; Yi, Jung

    2017-07-11

    The present invention relates to variants of a parent glycoside hydrolase, comprising a substitution at one or more positions corresponding to positions 21, 94, 157, 205, 206, 247, 337, 350, 373, 383, 438, 455, 467, and 486 of amino acids 1 to 513 of SEQ ID NO: 2, and optionally further comprising a substitution at one or more positions corresponding to positions 8, 22, 41, 49, 57, 113, 193, 196, 226, 227, 246, 251, 255, 259, 301, 356, 371, 411, and 462 of amino acids 1 to 513 of SEQ ID NO: 2, wherein the variants have glycoside hydrolase activity. The present invention also relates to nucleotide sequences encoding the variant glycoside hydrolases and to nucleic acid constructs, vectors, and host cells comprising the nucleotide sequences.

  4. Variants of glycoside hydrolases

    Energy Technology Data Exchange (ETDEWEB)

    Teter, Sarah (Davis, CA); Ward, Connie (Hamilton, MT); Cherry, Joel (Davis, CA); Jones, Aubrey (Davis, CA); Harris, Paul (Carnation, WA); Yi, Jung (Sacramento, CA)

    2011-04-26

    The present invention relates to variants of a parent glycoside hydrolase, comprising a substitution at one or more positions corresponding to positions 21, 94, 157, 205, 206, 247, 337, 350, 373, 383, 438, 455, 467, and 486 of amino acids 1 to 513 of SEQ ID NO: 2, and optionally further comprising a substitution at one or more positions corresponding to positions 8, 22, 41, 49, 57, 113, 193, 196, 226, 227, 246, 251, 255, 259, 301, 356, 371, 411, and 462 of amino acids 1 to 513 of SEQ ID NO: 2, wherein the variants have glycoside hydrolase activity. The present invention also relates to nucleotide sequences encoding the variant glycoside hydrolases and to nucleic acid constructs, vectors, and host cells comprising the nucleotide sequences.

  5. Simulated annealing with probabilistic analysis for solving traveling salesman problems

    Science.gov (United States)

    Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Simulated Annealing (SA) is a widely used meta-heuristic inspired by the annealing process in the recrystallization of metals. The efficiency of SA is highly affected by its annealing schedule. In this paper, we therefore present an empirical study to identify a suitable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA, and we propose the best annealing schedule found, based on the Post Hoc test. SA with the proposed annealing schedule was tested on seven selected benchmark instances of the symmetric TSP. The performance of SA was evaluated empirically against benchmark solutions, with a simple analysis to validate solution quality. Computational results show that the proposed annealing schedule provides good-quality solutions.
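    The role of the annealing schedule can be seen in a minimal SA solver for a small random symmetric TSP instance (a textbook-style sketch with an illustrative geometric cooling schedule and 2-opt moves, not the schedule proposed in the paper):

```python
import math
import random

random.seed(0)
# small random symmetric TSP instance: cities in the unit square
cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(t0=1.0, alpha=0.995, steps=20000):
    """SA with 2-opt reversal moves and geometric cooling t <- alpha * t."""
    tour = list(range(len(cities)))
    best = tour_length(tour)
    t = t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(cities)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cand) - tour_length(tour)
        # accept improvements always, worsenings with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour = cand
            best = min(best, tour_length(tour))
        t *= alpha  # the annealing schedule: geometric cooling
    return best

print(f"best tour length found: {anneal():.3f}")
```

    The initial temperature t0, the cooling rate alpha, and the step budget together form the annealing schedule; cooling too fast freezes the search in a local optimum, cooling too slowly wastes evaluations, which is exactly the trade-off the study examines empirically.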

  6. Deterministic and heuristic models of forecasting spare parts demand

    Directory of Open Access Journals (Sweden)

    Ivan S. Milojević

    2012-04-01

    Full Text Available Knowing the demand for spare parts is the basis for successful spare parts inventory management. Inventory management has two aspects. The first is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or encompassed by models. The second aspect is optimization of the model parameters by means of inventory management. Supply item demand (asset demand) expresses customers' needs, in units, at the desired time, and it is one of the most important parameters in inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through a requisition or request. Depending on the conditions in which inventory management is considered, demand can be: - deterministic or stochastic, - stationary or nonstationary, - continuous or discrete, - satisfied or unsatisfied. The application of a maintenance concept is determined by the technological level of development of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance can be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to vehicles that only have indicators of engine temperature - those that react only when the engine is overheated. This means that the maintenance concepts that can be applied are traditional preventive maintenance and corrective maintenance. In order to be applied to a real system, modeling and simulation methods require a completely regulated system, and that is not the case with this spare parts supply system. Therefore, this method, which also enables model development, cannot be applied. Deterministic models of forecasting are almost exclusively related to the concept of preventive maintenance. Maintenance procedures are planned in advance, in accordance with exploitation and time resources. 
Since the timing

  7. Accurate deterministic solutions for the classic Boltzmann shock profile

    Science.gov (United States)

    Yue, Yubei

    The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  8. Activity modes selection for project crashing through deterministic simulation

    Directory of Open Access Journals (Sweden)

    Ashok Mohanty

    2011-12-01

    Full Text Available Purpose: CPM-based analytical approaches to the time-cost trade-off problem assume unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, activities can often be crashed only stepwise. Activity crashing for a discrete time-cost function is also known as the activity modes selection problem in project management. This problem is known to be NP-hard. Sophisticated optimization techniques such as Dynamic Programming, Integer Programming, Genetic Algorithms, and Ant Colony Optimization have been used to find efficient solutions to the activity modes selection problem. This paper presents a simple method that can provide efficient solutions to the activity modes selection problem for project crashing. Design/methodology/approach: A simulation-based method implemented on an electronic spreadsheet is used to determine activity modes for project crashing. The method is illustrated with the help of an example. Findings: The paper shows that a simple approach based on a simple heuristic and deterministic simulation can give good results, comparable to sophisticated optimization techniques. Research limitations/implications: The simulation-based crashing method presented in this paper is developed to return satisfactory solutions but not necessarily an optimal solution. Practical implications: The use of spreadsheets for solving Management Science and Operations Research problems makes the techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions and report generation, and allow users to perform what-if analysis. Originality/value: The paper presents the application of simulation implemented on a spreadsheet to determine efficient solutions to the discrete time-cost tradeoff problem.

  9. Graphics development of DCOR: Deterministic combat model of Oak Ridge

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, G. [Georgia Inst. of Tech., Atlanta, GA (United States); Azmy, Y.Y. [Oak Ridge National Lab., TN (United States)

    1992-10-01

    DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time-interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.

  10. Comparing Monte Carlo methods for finding ground states of Ising spin glasses: Population annealing, simulated annealing, and parallel tempering

    Science.gov (United States)

    Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.

    2015-07-01

    Population annealing is a Monte Carlo algorithm that marries features from simulated-annealing and parallel-tempering Monte Carlo. As such, it is ideal to overcome large energy barriers in the free-energy landscape while minimizing a Hamiltonian. Thus, population-annealing Monte Carlo can be used as a heuristic to solve combinatorial optimization problems. We illustrate the capabilities of population-annealing Monte Carlo by computing ground states of the three-dimensional Ising spin glass with Gaussian disorder, while comparing to simulated-annealing and parallel-tempering Monte Carlo. Our results suggest that population annealing Monte Carlo is significantly more efficient than simulated annealing but comparable to parallel-tempering Monte Carlo for finding spin-glass ground states.

  12. Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics

    Science.gov (United States)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and the material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.

  13. A new procedure for characterizing textured surfaces with a deterministic pattern of valley features

    DEFF Research Database (Denmark)

    Godi, Alessandro; Kühle, A; De Chiffre, Leonardo

    2013-01-01

    In recent years there has been the development of a high number of manufacturing methods for creating textured surfaces which often present deterministic patterns of valley features. Unfortunately, suitable methodologies for characterizing them are lacking. Existing standards cannot in fact...

  14. Deterministic sensitivity analysis for first-order Monte Carlo simulations: a technical note.

    Science.gov (United States)

    Geisler, Benjamin P; Siebert, Uwe; Gazelle, G Scott; Cohen, David J; Göhler, Alexander

    2009-01-01

    Monte Carlo microsimulations have gained increasing popularity in decision-analytic modeling because they can incorporate discrete events. Although deterministic sensitivity analyses are essential for interpretation of results, it remains difficult to combine these alongside Monte Carlo simulations in standard modeling packages without enormous time investment. Our purpose was to facilitate one-way deterministic sensitivity analysis of TreeAge Markov state-transition models requiring first-order Monte Carlo simulations. Using TreeAge Pro Suite 2007 and Microsoft Visual Basic for EXCEL, we constructed a generic script that enables one to perform automated deterministic one-way sensitivity analyses in EXCEL employing microsimulation models. In addition, we constructed a generic EXCEL-worksheet that allows for use of the script with little programming knowledge. Linking TreeAge Pro Suite 2007 and Visual Basic enables the performance of deterministic sensitivity analyses of first-order Monte Carlo simulations. There are other potentially interesting applications for automated analysis.

  15. Handbook of EOQ inventory problems stochastic and deterministic models and applications

    CERN Document Server

    Choi, Tsan-Ming

    2013-01-01

    This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.

  16. Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset documents the source of the data analyzed in the manuscript " Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII...

  17. Analysis of the deterministic and stochastic SIRS epidemic models with nonlinear incidence

    Science.gov (United States)

    Liu, Qun; Chen, Qingmei

    2015-06-01

    In this paper, the deterministic and stochastic SIRS epidemic models with nonlinear incidence are introduced and investigated. For the deterministic system, the basic reproductive number R0 is obtained. Furthermore, if R0 ≤ 1, then the disease-free equilibrium is globally asymptotically stable, and if R0 > 1, then there is a unique endemic equilibrium which is globally asymptotically stable. For the stochastic system, we first verify that there is a unique global positive solution starting from any positive initial value. Then, when R0 > 1, we prove that stochastic perturbations may lead the disease to extinction in scenarios where the deterministic system is persistent. When R0 ≤ 1, a result on the fluctuation of the solution around the disease-free equilibrium of the deterministic model is obtained under appropriate conditions. Finally, if the intensity of the white noise is sufficiently small and R0 > 1, then there is a unique stationary distribution for the stochastic system.
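    The threshold behaviour of the deterministic system can be illustrated by a simple Euler integration of a basic SIRS model (a generic sketch with illustrative parameters and bilinear incidence; the paper's nonlinear incidence and stochastic terms are not reproduced):

```python
def simulate_sirs(beta, gamma=0.2, xi=0.05, dt=0.01, t_end=400.0):
    """Forward-Euler integration of the basic SIRS model (population fractions).
    S' = -beta*S*I + xi*R,  I' = beta*S*I - gamma*I,  R' = gamma*I - xi*R."""
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(t_end / dt)):
        ds = -beta * s * i + xi * r
        di = beta * s * i - gamma * i
        dr = gamma * i - xi * r
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return i

# with bilinear incidence and S(0) ~ 1, R0 = beta / gamma
print(f"I(400) for R0 = 0.5: {simulate_sirs(beta=0.1):.6f}")  # dies out
print(f"I(400) for R0 = 2.5: {simulate_sirs(beta=0.5):.6f}")  # endemic level
```

    Below the threshold the infected fraction decays to zero; above it, the system settles at the endemic equilibrium I* = xi*(1 - gamma/beta)/(gamma + xi), which is 0.12 for the illustrative parameters.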

  18. Deterministic chaos in RL-diode circuits and its application in metrology

    Science.gov (United States)

    Kucheruk, Volodymyr; Katsyv, Samuil; Glushko, Mykhailo; Wójcik, Waldemar; Zyska, Tomasz; Taissariyeva, Kyrmyzy; Mussabekov, Kanat

    2016-09-01

    The paper investigates the possibility of measuring resistive physical quantities using a deterministic-chaos signal generator based on an RL-diode circuit. A generalized structure of the measuring device using a deterministic-chaos signal generator is presented. An amplitude detector is proposed to separate the useful component of the measurement signal. Mathematical modeling of the RL-diode circuit showed a significant effect of the barrier and diffusion capacitances of the diode on the occurrence of deterministic chaotic oscillations in this circuit. It is shown that this type of deterministic-chaos signal generator has a high sensitivity of the output voltage to resistance changes in the range of 250 Ohms, which can be used to create measuring devices based on it.

  19. A unified controllability/observability theory for some stochastic and deterministic partial differential equations

    OpenAIRE

    2010-01-01

    The purpose of this paper is to present a universal approach to the study of controllability/observability problems for infinite dimensional systems governed by some stochastic/deterministic partial differential equations. The crucial analytic tool is a class of fundamental weighted identities for stochastic/deterministic partial differential operators, via which one can derive the desired global Carleman estimates. This method can also give a unified treatment of the stabilization, global un...

  20. Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992

    Energy Technology Data Exchange (ETDEWEB)

    Rice, A. F.; Roussin, R. W. [eds.

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  2. Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2014-12-01

    Full Text Available This paper presents experiences from the deterministic and probabilistic analysis of the reliability of a communication bridge structure's resistance to extreme loads: wind and earthquake. The efficiency of the bracing systems is considered using the example of the steel bridge between two NPP buildings. The advantages and disadvantages of the deterministic and probabilistic analyses of the structure's resistance are discussed. The advantages of utilizing the LHS method to analyze the safety and reliability of the structures are presented.

  3. Graphics development of DCOR: Deterministic combat model of Oak Ridge. [Deterministic Combat model of Oak Ridge (DCOR)

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, G. (Georgia Inst. of Tech., Atlanta, GA (United States)); Azmy, Y.Y. (Oak Ridge National Lab., TN (United States))

    1992-10-01

    DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the positions of the combative units over the continuous (and also pixelated) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified, and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area while taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time-interpolated solutions must be generated to produce a sufficient number of frames for a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
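The first allocation step described above, assigning discrete unit markers one by one to the most under-served gridpoint, can be sketched as follows. This is an illustrative reading of the abstract, not the DCOR code; the normalization of the density field and the first-index tie-breaking are assumptions.

```python
import numpy as np

def allocate_units(density, total_units):
    """Greedy one-by-one allocation of `total_units` discrete markers to
    gridpoints: each marker goes to the gridpoint whose calculated
    (fractional) unit count currently exceeds its allocated integer count
    by the largest amount."""
    # fractional number of units each gridpoint "should" receive
    expected = density / density.sum() * total_units
    allocated = np.zeros_like(expected, dtype=int)
    for _ in range(total_units):
        i = int(np.argmax(expected - allocated))  # most under-served gridpoint
        allocated[i] += 1
    return allocated
```

By construction the integer counts sum exactly to the requested total, and gridpoints with zero density receive no markers.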

  4. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-11-01

    The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called the analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, and the result is then used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles and often poorly known model-error statistics, leading to a crude approximation of the forecast covariance. This strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information to the estimation process in order to mitigate the sub-optimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background. In this work, we propose a deterministic variant of the EnKF-OSA, based on the Singular Evolutive Interpolated Ensemble Kalman (SEIK) filter. The motivation is to avoid the observation perturbations of the EnKF in order to improve the scheme's behavior when assimilating big data sets with small ensembles. The new SEIK-OSA scheme is implemented and its efficiency is demonstrated.
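For orientation, the textbook stochastic EnKF analysis step, the variant whose observation perturbations the proposed deterministic SEIK-OSA scheme is designed to avoid, can be sketched as below. This is a generic illustration, not the paper's implementation.

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """One stochastic EnKF analysis step.
    Xf: (n, N) forecast ensemble, y: (m,) observation,
    H: (m, n) linear observation operator, R: (m, m) obs-error covariance."""
    n, N = Xf.shape
    Xm = Xf.mean(axis=1, keepdims=True)
    A = Xf - Xm                                  # ensemble anomalies
    Pf = A @ A.T / (N - 1)                       # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    # perturbed observations -- the stochastic ingredient deterministic
    # (square-root) schemes such as SEIK do without
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return Xf + K @ (Y - H @ Xf)                 # analysis ensemble
```

With small ensembles these observation perturbations add sampling noise to the analysis, which is precisely the motivation the abstract gives for a deterministic variant.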

  5. A New Heuristic Providing an Effective Initial Solution for a Simulated Annealing approach to Energy Resource Scheduling in Smart Grids

    DEFF Research Database (Denmark)

    Sousa, Tiago M; Morais, Hugo; Castro, R.

    2014-01-01

    An intensive use of dispersed energy resources is expected for future power systems, including distributed generation, especially based on renewable sources, and electric vehicles. The system operation methods and tools must be adapted to the increased complexity, especially the optimal resource scheduling problem. Therefore, the use of metaheuristics is required to obtain good solutions in a reasonable amount of time. This paper proposes two new heuristics, called naive electric vehicles charge and discharge allocation and generation tournament based on cost, developed to obtain an initial solution to be used in the energy resource scheduling methodology based on simulated annealing previously developed by the authors. The case study considers two scenarios with 1000 and 2000 electric vehicles connected in a distribution network. The proposed heuristics are compared with a deterministic approach.

  6. Order and Chaos in Some Deterministic Infinite Trigonometric Products

    Science.gov (United States)

    Albert, Leif; Kiessling, Michael K.-H.

    2017-08-01

    It is shown that the deterministic infinite trigonometric products ∏_{n∈ℕ} [1 − p + p cos(n^{−s} t)] =: Cl_{p;s}(t), with parameters p ∈ (0,1] and s > 1/2 and variable t ∈ ℝ, are inverse Fourier transforms of the probability distributions for certain random series Ω_p^ζ(s) taking values in the real ω line; i.e., the Cl_{p;s}(t) are characteristic functions of the Ω_p^ζ(s). The special case p = 1 = s yields the familiar random harmonic series, while in general Ω_p^ζ(s) is a "random Riemann-ζ function," a notion which will be explained and illustrated, and connected to the Riemann hypothesis. It will be shown that Ω_p^ζ(s) is a very regular random variable, having a probability density function (PDF) on the ω line which is a Schwartz function. More precisely, an elementary proof is given that there exist some K_{p;s} > 0, a function F_{p;s}(|t|) bounded by |F_{p;s}(|t|)| ≤ exp(K_{p;s} |t|^{1/(s+1)}), and C_{p;s} = −(1/s) ∫_0^∞ ln|1 − p + p cos ξ| ξ^{−1−1/s} dξ, such that for all t ∈ ℝ: Cl_{p;s}(t) = exp(−C_{p;s} |t|^{1/s}) F_{p;s}(|t|); the regularity of Ω_p^ζ(s) follows. Incidentally, this theorem confirms a surmise by Benoit Cloitre that ln Cl_{1/3;2}(t) ~ −C√t (t → ∞) for some C > 0. Graphical evidence suggests that Cl_{1/3;2}(t) is an empirically unpredictable (chaotic) function of t. This is reflected in the rich structure of the pertinent PDF (the Fourier transform of Cl_{1/3;2}), and illustrated by random sampling of the Riemann-ζ walks, whose branching rules allow the build-up of fractal-like structures.
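The product is straightforward to evaluate numerically by truncation, which gives a quick empirical check on the statements above. The choice of truncation point n_max below is an assumption, adequate for s > 1/2 where the tail factors approach 1 quickly.

```python
import numpy as np

def cl(p, s, t, n_max=100000):
    """Truncated evaluation of the infinite product
    Cl_{p;s}(t) = prod_{n>=1} [1 - p + p*cos(t / n**s)]."""
    n = np.arange(1, n_max + 1, dtype=float)
    return float(np.prod(1.0 - p + p * np.cos(t / n ** s)))
```

At t = 0 every factor equals 1, so Cl_{p;s}(0) = 1, as a characteristic function must; for p = 1/3, s = 2 one can plot ln cl(1/3, 2, t) against √t to eyeball the decay in Cloitre's surmise.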

  7. Confined Crystal Growth in Space. Deterministic vs Stochastic Vibroconvective Effects

    Science.gov (United States)

    Ruiz, Xavier; Bitlloch, Pau; Ramirez-Piscina, Laureano; Casademunt, Jaume

    The analysis of the correlations between characteristics of the acceleration environment and the quality of the crystalline materials grown in microgravity remains an open and interesting question. Acceleration disturbances in space environments usually give rise to effective gravity pulses, gravity pulse trains of finite duration, quasi-steady accelerations or g-jitters. To quantify these disturbances, deterministic translational plane-polarized signals have largely been used in the literature [1]. In the present work, we take an alternative approach which models g-jitters in terms of a stochastic process in the form of the so-called narrow-band noise, which is designed to capture the main statistical properties of realistic g-jitters. In particular, we compare their effects to those of single-frequency disturbances. The crystalline quality has been characterized, following previous analyses, in terms of two parameters, the longitudinal and the radial segregation coefficients. The first one averages the dopant distribution transversally, providing continuous longitudinal information on the degree of segregation along the growth process. The radial segregation characterizes the degree of lateral non-uniformity of the dopant in the solid-liquid interface at each instant of growth. To complete the description, and because heat flux fluctuations at the interface have a direct impact on crystal growth quality (growth striations), the time dependence of a Nusselt number associated with the growing interface has also been monitored. For realistic g-jitters acting orthogonally to the thermal gradient, the longitudinal segregation remains practically unperturbed in all simulated cases. Also, the Nusselt number is not significantly affected by the noise. On the other hand, radial segregation, despite its low magnitude, exhibits a peculiar low-frequency response in all realizations. [1] X. Ruiz, "Modelling of the influence of residual gravity on the segregation in

  8. Deterministic Modeling of the High Temperature Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green’s function solution of the transverse-integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the

  9. Shortcuts to adiabaticity for quantum annealing

    Science.gov (United States)

    Takahashi, Kazutaka

    2017-01-01

    We study the Ising Hamiltonian with a transverse-field term to simulate quantum annealing. Using shortcuts to adiabaticity, we design the time dependence of the Hamiltonian. The dynamical invariant is obtained by a mean-field ansatz, and the Hamiltonian is designed by inverse engineering. We show that the time dependence of physical quantities such as the magnetization is independent of the speed of the Hamiltonian variation in the infinite-range model. We also show that rotating transverse magnetic fields are useful for achieving the ideal time evolution.

  10. Computational Multiqubit Tunnelling in Programmable Quantum Annealers

    Science.gov (United States)

    2016-08-25


  11. Binary Sparse Phase Retrieval via Simulated Annealing

    Directory of Open Access Journals (Sweden)

    Wei Peng

    2016-01-01

    Full Text Available This paper presents the Simulated Annealing Sparse PhAse Recovery (SASPAR) algorithm for reconstructing sparse binary signals from the phaseless magnitudes of their Fourier transforms. A greedy-strategy version, which is a parameter-free algorithm, is also proposed for comparison. Numerical simulations indicate that the method is quite effective and suggest that the binary model is robust. The SASPAR algorithm is competitive with existing methods in its efficiency and high recovery rate, even with fewer Fourier measurements.
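The basic idea, simulated annealing over k-sparse binary supports with the Fourier-magnitude misfit as the energy, can be sketched as below. This is an illustrative reading of the abstract, not the authors' SASPAR implementation; the swap move, geometric cooling schedule, and least-squares cost are assumptions.

```python
import numpy as np

def saspar_sketch(mag, n, k, n_iter=5000, T0=1.0, cooling=0.999, seed=0):
    """Simulated-annealing recovery of a k-sparse binary signal of length n
    from its Fourier magnitudes `mag`. A move swaps one active position
    with an inactive one, so the sparsity level k is a hard constraint."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = 1.0
    cost = lambda v: np.linalg.norm(np.abs(np.fft.fft(v)) - mag)
    c = cost(x)
    x_best, c_best = x.copy(), c
    T = T0
    for _ in range(n_iter):
        i = rng.choice(np.flatnonzero(x == 1.0))   # active site to vacate
        j = rng.choice(np.flatnonzero(x == 0.0))   # inactive site to fill
        x[i], x[j] = 0.0, 1.0
        c_new = cost(x)
        if c_new < c_best:                         # track best state seen
            x_best, c_best = x.copy(), c_new
        if c_new <= c or rng.random() < np.exp((c - c_new) / T):
            c = c_new                              # accept the move
        else:
            x[i], x[j] = 1.0, 0.0                  # reject: revert the swap
        T *= cooling
    return x_best, c_best
```

Note that Fourier magnitudes are invariant under circular shifts and reflection, so "recovery" means reaching any member of the equivalence class with near-zero misfit.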

  12. Preparation and Thermal Characterization of Annealed Gold Coated Porous Silicon

    Directory of Open Access Journals (Sweden)

    Afarin Bahrami

    2012-01-01

    Full Text Available Porous silicon (PSi) layers were formed on a p-type Si wafer. Six samples were anodised electrically at a fixed current density of 30 mA/cm2 for different etching times. The samples were coated with a 50-60 nm gold layer and annealed at different temperatures under Ar flow. The morphology of the layers formed by this method was investigated by scanning electron microscopy (SEM) before and after annealing. Photoacoustic spectroscopy (PAS) measurements were carried out to measure the thermal diffusivity (TD) of the PSi and Au/PSi samples. For the Au/PSi samples, the thermal diffusivity was measured before and after annealing to study the effect of annealing. To study the aging effect, a comparison was also made between freshly annealed samples and samples 30 days after annealing.

  13. Study on thermal annealing of cadmium zinc telluride (CZT) crystals

    Energy Technology Data Exchange (ETDEWEB)

    Yang, G.; Bolotnikov, A.E.; Fochuk, P.M.; Camarda, G.S.; Cui, Y.; Hossain, A.; Kim, K.; Horace, J.; McCall, B.; Gul, R.; Xu, L.; Kopach, O.V.; and James, R.B.

    2010-08-01

    Cadmium Zinc Telluride (CZT) has attracted increasing interest for its promising potential as a room-temperature nuclear-radiation-detector material. However, various defects in CZT crystals, especially Te inclusions and dislocations, can degrade the performance of CZT detectors. Post-growth annealing is a promising approach to eliminating the deleterious influence of these defects. At Brookhaven National Laboratory (BNL), we built several facilities for investigating post-growth annealing of CZT. Here, we report our latest experimental results. Cd-vapor annealing reduces the density of Te inclusions, while a large temperature gradient promotes the migration of small Te inclusions. Simultaneously, the annealing lowers the density of dislocations. However, Cd-vapor annealing alone decreases the resistivity, possibly reflecting the introduction of extra Cd into the lattice. Subsequent Te-vapor annealing is needed to ensure recovery of the resistivity after removing the Te inclusions.

  14. Effect of current annealing on electronic properties of multilayer graphene

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, S; Goto, H; Tomori, H; Kanda, A [Institute of Physics, University of Tsukuba, Tsukuba 305-8571 (Japan); Ootuka, Y [Tsukuba Research Center for Interdisciplinary Materials Science (TIMS), University of Tsukuba, Tsukuba 305-8571 (Japan); Tsukagoshi, K, E-mail: tanaka@lt.px.tsukuba.ac.j [MANA, NIMS, Namiki, Tsukuba, Ibaraki 305-0047 (Japan)

    2010-06-01

    While ideal graphene has high mobility due to the relativistic nature of its carriers, it is known that carrier transport in actual graphene samples is dominated by scattering from charged impurities, which almost conceals the splendid intrinsic properties of this novel material. Common techniques to improve graphene mobility include annealing in a hydrogen atmosphere and local annealing by imposing a large bias current. Although annealing is quite an important technique for the experimental study of graphene, a detailed evaluation of the annealing effect is lacking at present. In this paper, we study the effect of current annealing in multilayer graphene devices quantitatively by investigating the change in the mobility and the carrier density at the charge neutrality point. We find that current annealing sometimes causes degradation of the transport properties.

  15. The Role of Auxiliary Variables in Deterministic and Deterministic-Stochastic Spatial Models of Air Temperature in Poland

    Science.gov (United States)

    Szymanowski, Mariusz; Kryza, Maciej

    2017-02-01

    Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for spatialization of air temperature, and many studies have shown their results to be better than those obtained by various one-dimensional techniques. In most previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of spatial interpolation. The main goal of the paper was to examine both of the above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated at different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models, and the cross-validation method was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to the rejection of both assumptions considered. Usually, including more than two or three of the most significantly

  16. Annealing effect and stability of carbon nanotubes in hydrogen flame

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Annealing of carbon nanotubes (CNTs) by a hydrogen flame in air was investigated in this study. Raman spectroscopy and scanning electron microscopy were used to characterize the products. The peak width of the Raman spectra decreased with increasing annealing time. The CNTs were not stable in the hydrogen flame, and the etching rate of the CNTs by the hydrogen flame was very high. The hydrogen flame annealing had some effect on improving the crystallinity of the CNTs.

  17. Annealing-induced Grain Refinement in a Nanostructured Ferritic Steel

    Institute of Scientific and Technical Information of China (English)

    Limin Wang; Zhenbo Wang; Sheng Guo; Ke Lu

    2012-01-01

    A nanostructured surface layer with a mean ferrite grain size of ~8 nm was produced on an Fe-9Cr steel by means of surface mechanical attrition treatment. Upon annealing, ferrite grains coarsen with increasing temperature, and their sizes increase to ~40 nm at 973 K. Further increasing the annealing temperature leads to an obvious reduction of ferrite grain sizes, to ~14 nm at 1173 K. The annealing-induced grain refinement is analyzed in terms of phase transformations in the nanostructured steel.

  18. Human pose tracking by parametric annealing

    CERN Document Server

    Kaliamoorthi, Prabhu

    2012-01-01

    Model-based methods for marker-free motion capture have a very high computational overhead that makes them unattractive. In this paper we describe a method that improves on existing global optimization techniques for tracking articulated objects. Our method improves on the state-of-the-art Annealed Particle Filter (APF) by reusing samples across annealing layers and by using an adaptive parametric density for diffusion. We compare the proposed method with the APF on a scalable problem and study how the two methods scale with the dimensionality, multi-modality and range of the search. We then perform a sensitivity analysis on the parameters of our algorithm and show that it tolerates a wide range of parameter settings. We also show results on tracking human pose from the widely-used Human Eva I dataset. Our results show that the proposed method reduces the tracking error despite using less than 50% of the computational resources of the APF. The tracked output also shows a significant qualitative improvement over the APF, as dem...

  19. MEDICAL STAFF SCHEDULING USING SIMULATED ANNEALING

    Directory of Open Access Journals (Sweden)

    Ladislav Rosocha

    2015-07-01

    Full Text Available Purpose: The efficiency of medical staff is a fundamental feature of healthcare facility quality. Therefore, better implementation of their preferences into the scheduling problem might not only raise the work-life balance of doctors and nurses, but may also result in better patient care. This paper focuses on the optimization of medical staff preferences in the scheduling problem. Methodology/Approach: We propose a medical staff scheduling algorithm based on simulated annealing, a well-known method from statistical thermodynamics. We define hard constraints, which are linked to legal and working regulations, and minimize the violations of soft constraints, which are related to the quality of work, psyche, and work-life balance of staff. Findings: On a sample of 60 physicians and nurses from a gynecology department, we generated monthly schedules and optimized their preferences in terms of soft constraints. Our results indicate that the final value of the objective function optimized by the proposed algorithm is more than 18 times better, in terms of violations of soft constraints, than the initially generated random schedule that satisfied the hard constraints. Research Limitation/Implication: Even though the global optimality of the final outcome is not guaranteed, a desirable solution was obtained in reasonable time. Originality/Value of paper: We show that the designed algorithm is able to successfully generate schedules regarding hard and soft constraints. Moreover, the presented method is significantly faster than standard schedule generation and is able to reschedule effectively due to the local neighborhood search characteristics of simulated annealing.
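The hard/soft constraint split described above can be illustrated with a toy simulated-annealing scheduler. Everything here is a simplified assumption rather than the paper's algorithm: the only hard constraint modeled is shift coverage, and the soft cost is simply the number of assignments clashing with a set of requested days off.

```python
import math
import random

def schedule_sa(n_staff, n_days, shifts_per_day, prefs,
                n_iter=20000, T0=2.0, cooling=0.9997, seed=1):
    """Toy simulated-annealing staff scheduler.
    Hard constraint (never violated by a move): every shift on every day is
    assigned to exactly one person. Soft cost: number of assignments that
    clash with `prefs`, a set of (person, day) pairs the person wants free."""
    rng = random.Random(seed)
    # initial schedule already satisfies the hard (coverage) constraint
    sched = [[rng.randrange(n_staff) for _ in range(shifts_per_day)]
             for _ in range(n_days)]
    cost = lambda s: sum((p, d) in prefs for d, day in enumerate(s) for p in day)
    c = cost(sched)
    best, c_best = [day[:] for day in sched], c
    T = T0
    for _ in range(n_iter):
        d = rng.randrange(n_days)
        sh = rng.randrange(shifts_per_day)
        old = sched[d][sh]
        sched[d][sh] = rng.randrange(n_staff)  # reassign one shift; coverage kept
        c_new = cost(sched)
        if c_new <= c or rng.random() < math.exp((c - c_new) / T):
            c = c_new                          # accept (possibly uphill) move
            if c < c_best:
                best, c_best = [day[:] for day in sched], c
        else:
            sched[d][sh] = old                 # reject: restore previous person
        T *= cooling
    return best, c_best
```

Because the moves only ever reassign a covered shift, hard constraints hold throughout, while the annealing loop drives down soft-constraint violations, mirroring the structure described in the abstract.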

  20. Energy conservation in cupolas and annealing furnaces

    Energy Technology Data Exchange (ETDEWEB)

    Takeno, S.; Kumagaya, M.; Azuma, T.

    1984-01-01

    Successive reductions in the amount of coke and fuel oil used in cupolas and annealing furnaces are reported. In the cupolas, 2% oxygen enrichment resulted in a 0.9% drop in coke ratio and a 13.3% increase in output of pig iron. Coke ratios of 9.3-9.5% were obtained by tuyere blow-in of inexpensive carbon materials instead of expensive coke, by the use of formed coke, and by employing a dehumidified blast. In the case of the fuel oil-fired annealing furnaces, fuel oil consumption rates were reduced by treating two charges per heat instead of one. Energy consumption was successively reduced by 25-71% by 1) adopting a ceramic fibre heat-insulating material, 2) changing to low-oxygen combustion by increasing the number of burners, 3) lengthening the time during which the furnace high-temperature zone is maintained, 4) raising the combustion chamber load by using ceramic fibres in the furnace casing. 3 references.

  1. A coherent quantum annealer with Rydberg atoms

    Science.gov (United States)

    Glaetzle, A. W.; van Bijnen, R. M. W.; Zoller, P.; Lechner, W.

    2017-06-01

    There is a significant ongoing effort in realizing quantum annealing with different physical platforms. The challenge is to achieve a fully programmable quantum device featuring coherent adiabatic quantum dynamics. Here we show that combining the well-developed quantum simulation toolbox for Rydberg atoms with the recently proposed Lechner-Hauke-Zoller (LHZ) architecture allows one to build a prototype for a coherent adiabatic quantum computer with all-to-all Ising interactions and, therefore, a platform for quantum annealing. In LHZ an infinite-range spin-glass is mapped onto the low energy subspace of a spin-1/2 lattice gauge model with quasi-local four-body parity constraints. This spin model can be emulated in a natural way with Rubidium and Caesium atoms in a bipartite optical lattice involving laser-dressed Rydberg-Rydberg interactions, which are several orders of magnitude larger than the relevant decoherence rates. This makes the exploration of coherent quantum enhanced optimization protocols accessible with state-of-the-art atomic physics experiments.

  3. Magnetic field annealing for improved creep resistance

    Energy Technology Data Exchange (ETDEWEB)

    Brady, Michael P.; Ludtka, Gail M.; Ludtka, Gerard M.; Muralidharan, Govindarajan; Nicholson, Don M.; Rios, Orlando; Yamamoto, Yukinori

    2015-12-22

    The method provides heat-resistant chromia- or alumina-forming Fe-, Fe(Ni)-, Ni(Fe)-, or Ni-based alloys having improved creep resistance. A precursor is provided containing preselected constituents of a chromia- or alumina-forming Fe-, Fe(Ni)-, Ni(Fe)-, or Ni-based alloy, at least one of the constituents for forming a nanoscale precipitate M_aX_b, where M is Cr, Nb, Ti, V, Zr, or Hf, individually and in combination, and X is C, N, O, or B, individually and in combination, with a = 1 to 23 and b = 1 to 6. The precursor is annealed at a temperature of 1000-1500 °C for 1-48 h in the presence of a magnetic field of at least 5 Tesla to enhance supersaturation of the M_aX_b constituents in the annealed precursor. This forms nanoscale M_aX_b precipitates for improved creep resistance when the alloy is used at service temperatures of 500-1000 °C. Alloys having improved creep resistance are also disclosed.

  4. Annealing effects on deuterium retention behavior in damaged tungsten

    Directory of Open Access Journals (Sweden)

    S. Sakurada

    2016-12-01

    Full Text Available Effects of annealing after/under iron (Fe) ion irradiation on deuterium (D) retention behavior in tungsten (W) were studied. The D2 TDS spectra as a function of heating temperature for 0.1 dpa damaged W showed that the D retention clearly decreased as the annealing temperature was increased. In particular, the desorption of D trapped by voids was largely reduced by annealing at 1173 K. The TEM observation indicated that the dislocation loops clearly grew in size while their density decreased on annealing above 573 K. After annealing at 1173 K, almost all the dislocation loops were recovered. The results of positron annihilation spectroscopy suggested that the density of vacancy-type defects such as voids decreased as the annealing temperature was increased, while their size increased, indicating that the D retention was reduced by the recovery of the voids. Furthermore, it was found that the desorption temperature of D trapped by the voids for W damaged above 0.3 dpa was shifted toward the higher temperature side. These results lead to the conclusion that the D retention behavior is controlled by defect density. The D retention in the samples annealed during irradiation was less than that in the samples annealed after irradiation. This result shows that defects would be quickly annihilated, before stabilization, by annealing during irradiation.

  5. Structure and magnetism of Ni/Ti multilayers on annealing

    Indian Academy of Sciences (India)

    Surendra Singh; Saibal Basu; P Bhatt

    2008-11-01

    A neutron reflectometry study has been carried out in unpolarized (NR) and polarized (PNR) modes to understand the structural and magnetic properties of alloy formation at the interfaces of Ni/Ti multilayers on annealing. The PNR data from the annealed sample show a noticeable change with respect to the as-deposited sample: a prominent shift of the multilayer Bragg peak to a higher angle and a decrease in the intensity of the Bragg peak. The PNR data from the annealed sample revealed the formation of magnetically dead alloy layers at the interfaces. Changes in the roughness parameters of the interfaces on annealing were also observed in the PNR data.

  6. A Parallel Genetic Simulated Annealing Hybrid Algorithm for Task Scheduling

    Institute of Scientific and Technical Information of China (English)

    SHU Wanneng; ZHENG Shijue

    2006-01-01

    Combining the advantages of genetic algorithms and simulated annealing, this paper puts forward a parallel genetic simulated annealing hybrid algorithm (PGSAHA) and applies it to solve the task scheduling problem in grid computing. It first generates a new group of individuals through genetic operations such as reproduction, crossover and mutation, and then simulated-anneals all the generated individuals independently. When the temperature in the cooling process no longer falls, the result is the overall optimal solution. From the analysis and experimental results, it is concluded that this algorithm is superior to both the genetic algorithm and simulated annealing alone.
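    The hybrid scheme described above (GA operators breed a population, then an independent annealing pass refines every individual) can be sketched on a toy task-scheduling instance. This is an illustrative reconstruction, not the authors' PGSAHA code: the makespan objective, the operators and all parameters are assumptions.

```python
import math
import random

def makespan(assign, times, m):
    """Cost of a schedule: the load of the busiest machine."""
    loads = [0.0] * m
    for task, machine in enumerate(assign):
        loads[machine] += times[task]
    return max(loads)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(assign, m):
    child = assign[:]
    child[random.randrange(len(child))] = random.randrange(m)
    return child

def anneal(assign, times, m, t0=2.0, cooling=0.8, steps=20):
    """Independently anneal one individual (Metropolis acceptance)."""
    cur, cur_cost = assign, makespan(assign, times, m)
    t = t0
    while t > 1e-3:
        for _ in range(steps):
            cand = mutate(cur, m)
            delta = makespan(cand, times, m) - cur_cost
            if delta < 0 or random.random() < math.exp(-delta / t):
                cur, cur_cost = cand, cur_cost + delta
        t *= cooling
    return cur

def pgsaha(times, m, pop_size=10, generations=5):
    """GA step (selection, crossover, mutation), then anneal everyone."""
    pop = [[random.randrange(m) for _ in times] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, times, m))
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)), m)
                    for _ in range(pop_size - len(parents))]
        pop = [anneal(ind, times, m) for ind in parents + children]
    return min(pop, key=lambda a: makespan(a, times, m))
```

    For example, `pgsaha([3, 3, 3, 3, 2], 2)` returns an assignment of five tasks to two machines; the independent per-individual annealing is what distinguishes the hybrid from a plain GA.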

  7. Temperature distribution study in flash-annealed amorphous ribbons

    Energy Technology Data Exchange (ETDEWEB)

    Moron, C. E-mail: cmoron@eui.upm.es; Garcia, A.; Carracedo, M.T

    2003-01-01

    Negative magnetostrictive amorphous ribbons have been locally current annealed with currents from 1 to 8 A and annealing times from 14 ms to 200 s. In order to obtain information about the sample temperature during flash (current) annealing, a study of the temperature dispersion during annealing in amorphous ribbons was made. The local temperature variation was obtained by measuring the local intensity of the infrared emission of the sample with a liquid-nitrogen-cooled CCD camera. A distribution of local temperatures was found in spite of the small dimensions of the sample.

  8. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    Science.gov (United States)

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. 
The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent
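    Deterministic deconvolution of the kind described, using a source wavelet recorded in air as the known operator, amounts to a regularized spectral division. The sketch below is a generic illustration with a water-level stabilizer; the paper's actual processing parameters are not given in the abstract, and the synthetic at the end is purely illustrative.

```python
import numpy as np

def deterministic_deconvolve(trace, wavelet, water_level=0.01):
    """Regularized spectral division of a trace by a measured source wavelet.

    The water level keeps the division stable at frequencies where the
    wavelet carries little energy (an assumed, common stabilization)."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    power = np.abs(W) ** 2
    floor = water_level * power.max()
    # T * conj(W) / |W|^2, with |W|^2 clamped from below
    return np.fft.irfft(T * np.conj(W) / np.maximum(power, floor), n)

# Synthetic check: two reflectors blurred by a Gaussian source wavelet
n = 128
reflectivity = np.zeros(n)
reflectivity[20], reflectivity[60] = 1.0, 0.5
wavelet = np.exp(-0.5 * ((np.arange(16) - 8) / 2.0) ** 2)
trace = np.convolve(reflectivity, wavelet)[:n]
sharp = deterministic_deconvolve(trace, wavelet)
```

    In this synthetic, the raw trace peaks at the wavelet delay while `np.argmax(sharp)` falls back on the first reflector, illustrating the compression of the source wavelet that underlies the reported resolution gain.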

  9. Solute Transport in a Heterogeneous Aquifer: A Nonlinear Deterministic Dynamical Analysis

    Science.gov (United States)

    Sivakumar, B.; Harter, T.; Zhang, H.

    2003-04-01

    Stochastic approaches are widely used for modeling and prediction of uncertainty in groundwater flow and transport processes. An important reason for this is our belief that the dynamics of the seemingly complex and highly irregular subsurface processes are essentially random in nature. However, the discovery of nonlinear deterministic dynamical theory has revealed that random-looking behavior could also be the result of simple deterministic mechanisms influenced by only a few nonlinear interdependent variables. The purpose of the present study is to introduce this theory to subsurface solute transport process, in an attempt to investigate the possibility of understanding the transport dynamics in a much simpler, deterministic, manner. To this effect, salt transport process in a heterogeneous aquifer medium is studied. Specifically, time series of arrival time of salt particles are analyzed. These time series are obtained by integrating a geostatistical (transition probability/Markov chain) model with a groundwater flow model (MODFLOW) and a salt transport (Random Walk Particle) model. The (dynamical) behavior of the transport process (nonlinear deterministic or stochastic) is identified using standard statistical techniques (e.g. autocorrelation function, power spectrum) as well as specific nonlinear deterministic dynamical techniques (e.g. phase-space diagram, correlation dimension method). The sensitivity of the salt transport dynamical behavior to the hydrostratigraphic parameters (i.e. number, volume proportions, mean lengths, and juxtapositional tendencies of facies) used in the transition probability/Markov chain model is also studied. The results indicate that the salt transport process may exhibit very simple (i.e. deterministic) to very complex (i.e. stochastic) dynamical behavior, depending upon the above parameters (i.e. characteristics of the aquifer medium). 
Efforts towards verification and strengthening of the present results and prediction of salt

  10. Product Variant Master as a Means to Handle Variant Design

    DEFF Research Database (Denmark)

    Hildre, Hans Petter; Mortensen, Niels Henrik; Andreasen, Mogens Myrup

    1996-01-01

    The overall time required to design a new product variant relies on two factors: how good the methods to design the new variant are, and how well these methods are supported by computers. It has been estimated that 80% of all design tasks are variational, in that the goal of the design is to adapt an...

  11. Wildfire susceptibility mapping: comparing deterministic and stochastic approaches

    Science.gov (United States)

    Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj

    2016-04-01

    Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and the size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of favorability scores for each variable and the fire occurrence probability, as well as the validation of each model resulting from the integration of the different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.

  12. Variants of windmill nystagmus.

    Science.gov (United States)

    Choi, Kwang-Dong; Shin, Hae Kyung; Kim, Ji-Soo; Kim, Sung-Hee; Choi, Jae-Hwan; Kim, Hyo-Jung; Zee, David S

    2016-07-01

    Windmill nystagmus is characterized by a clock-like rotation of the beating direction of a jerk nystagmus suggesting separate horizontal and vertical oscillators, usually 90° out of phase. We report oculographic characteristics in three patients with variants of windmill nystagmus in whom the common denominator was profound visual loss due to retinal diseases. Two patients showed a clock-like pattern, while in the third, the nystagmus was largely diagonal (in phase or 180° out of phase) but also periodically changed direction by 180°. We hypothesize that windmill nystagmus is a unique manifestation of "eye movements of the blind." It emerges when the central structures, including the cerebellum, that normally keep eye movements calibrated and gaze steady can no longer perform their task, because they are deprived of the retinal image motion that signals a need for adaptive recalibration.

  13. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    Science.gov (United States)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail, the infection persists, and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infection disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.

  14. Histone variants and lipid metabolism

    NARCIS (Netherlands)

    Borghesan, Michela; Mazzoccoli, Gianluigi; Sheedfar, Fareeba; Oben, Jude; Pazienza, Valerio; Vinciguerra, Manlio

    2014-01-01

    Within nucleosomes, canonical histones package the genome, but they can be opportunely replaced with histone variants. The incorporation of histone variants into the nucleosome is a chief cellular strategy to regulate transcription and cellular metabolism. In pathological terms, cellular steatosis i

  15. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    The hierarchical network problem is the problem of finding the least cost network, with nodes divided into groups, edges connecting nodes in each group and groups ordered in a hierarchy. The idea of hierarchical networks comes from telecommunication networks where hierarchies exist. Hierarchical networks are described and a mathematical model is proposed for a two level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance for different versions of the algorithm are reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.

  16. Thermoelectric properties by high temperature annealing

    Science.gov (United States)

    Ren, Zhifeng (Inventor); Chen, Gang (Inventor); Kumar, Shankar (Inventor); Lee, Hohyun (Inventor)

    2009-01-01

    The present invention generally provides methods of improving thermoelectric properties of alloys by subjecting them to one or more high temperature annealing steps, performed at temperatures at which the alloys exhibit a mixed solid/liquid phase, followed by cooling steps. For example, in one aspect, such a method of the invention can include subjecting an alloy sample to a temperature that is sufficiently elevated to cause partial melting of at least some of the grains. The sample can then be cooled so as to solidify the melted grain portions such that each solidified grain portion exhibits an average chemical composition, characterized by a relative concentration of elements forming the alloy, that is different than that of the remainder of the grain.

  17. Coupled Quantum Fluctuations and Quantum Annealing

    Science.gov (United States)

    Hormozi, Layla; Kerman, Jamie

    We study the relative effectiveness of coupled quantum fluctuations, compared to single spin fluctuations, in the performance of quantum annealing. We focus on problem Hamiltonians resembling the Sherrington-Kirkpatrick model of Ising spin glass and compare the effectiveness of different types of fluctuations by numerically calculating the relative success probabilities and residual energies in fully-connected spin systems. We find that for a small class of instances coupled fluctuations can provide improvement over single spin fluctuations, and we analyze the properties of the corresponding class. Disclaimer: This research was funded by ODNI, IARPA via MIT Lincoln Laboratory under Air Force Contract No. FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the US Government.

  18. Rapid thermal anneal of arsenic implanted silicon

    Energy Technology Data Exchange (ETDEWEB)

    Feygenson, A.

    1985-01-01

    The distribution of arsenic implanted into silicon during rapid thermal anneal (RTA) was investigated. Secondary ion mass spectrometry, Rutherford backscattering spectrometry, and channeling techniques were used for the measurement of the total (chemical) dopant profile. The electrically active dopant profiles were measured with sheet resistance, sheet-resistance maps, spreading resistance and pinch resistors. It was found that the arsenic profile after RTA is influenced by many parameters, including the crystallographic orientation of the sample, the temperature gradient, and the defect structure in the surface region affected by the heavy arsenic implant. A diffusion model based on an inhomogeneous medium was examined. Exact solutions of the diffusion equation were obtained for rectangular and Gaussian initial dopant profiles. Calculated results are compared to the measured profiles. It is concluded that the model satisfactorily predicts the major features of arsenic diffusion into silicon during RTA.

  19. Rapid Thermal Anneal of Arsenic Implanted Silicon.

    Science.gov (United States)

    Feygenson, Anatoly

    1985-12-01

    The distribution of arsenic implanted into silicon during rapid thermal anneal (RTA) has been investigated. Secondary ion mass spectrometry (SIMS), Rutherford backscattering spectrometry (RBS) and channeling techniques were used for the measurement of the total (chemical) dopant profile. The electrically active dopant profiles were measured with sheet resistance, sheet resistance maps, spreading resistance, and pinch resistors. It has been found that the arsenic profile after RTA is influenced by many parameters, including the crystallographic orientation of the sample, the temperature gradient, and the defect structure in the surface region affected by the heavy arsenic implant. A diffusion model based on an inhomogeneous medium was examined. Exact solutions of the diffusion equation were obtained for rectangular and Gaussian initial dopant profiles. Calculated results are compared to the measured profiles. It is concluded that the model satisfactorily predicts the major features of arsenic diffusion into silicon during RTA.

  20. Study on Multi-stream Heat Exchanger Network Synthesis with Parallel Genetic/Simulated Annealing Algorithm

    Institute of Scientific and Technical Information of China (English)

    魏关锋; 姚平经; LUOXing; ROETZELWilfried

    2004-01-01

    The multi-stream heat exchanger network synthesis (HENS) problem can be formulated as a mixed integer nonlinear programming model according to Yee et al. Its nonconvex nature leads to the existence of more than one optimum and to computational difficulty for traditional algorithms in finding the global optimum. Compared with deterministic algorithms, evolutionary computation provides a promising approach to tackle this problem. In this paper, a mathematical model of the multi-stream heat exchanger network synthesis problem is set up. Different from the assumption of isothermal mixing of stream splits, and thus the linearity constraints, of Yee et al., non-isothermal mixing is supported. As a consequence, nonlinear constraints result and nonconvexity is added to the objective function. To solve the mathematical model, an algorithm named GA/SA (parallel genetic/simulated annealing algorithm) is detailed for application to the multi-stream heat exchanger network synthesis problem. The performance of the proposed approach is demonstrated with three examples, and the obtained solutions indicate that the presented approach is effective for multi-stream HENS.

  1. Influence of phosphate esters on the annealing properties of starch

    DEFF Research Database (Denmark)

    Wischmann, Bente; Muhrbeck, Per

    1998-01-01

    The effects of annealing on native potato, waxy maize, and phosphorylated waxy maize starches were compared. Phosphorylated waxy maize starch responded to annealing in a manner between that of the naturally phosphorylated potato starch and that of the native waxy maize starch. The gelatinisation ...

  2. Annealing Behavior of Si1-xGex/Si Heterostructures

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The behavior of Si1-xGex/Si heterostructures under different annealing conditions has been studied. It is found that while RTA treatment diminishes the point defects, it introduces misfit dislocations into the Si1-xGex layers at the same time. Higher annealing temperatures will result in the propagation of misfit dislocations and then the total destruction of the crystal quality.

  3. Parameters Optimization of Low Carbon Low Alloy Steel Annealing Process

    Institute of Scientific and Technical Information of China (English)

    Maoyu ZHAO; Qianwang CHEN

    2013-01-01

    A suitable match of annealing process parameters is critical for obtaining a fine microstructure. Low carbon low alloy steel (20CrMnTi) was heated for various durations near the Ac temperature to obtain fine pearlite and ferrite grains. Annealing temperature and time were used as independent variables, and material property data were acquired by orthogonal experiment design under an intercritical process followed by a subcritical annealing process (IPSAP). The weights of the plasticity measures (hardness, yield strength, section shrinkage and elongation) of the annealed material were calculated by the analytic hierarchy process, and then the process parameters were optimized by grey system theory. SEM images show that the microstructure of the optimally annealed material consists of smaller lamellar pearlites (ferrite-cementite) and refined ferrites which are distributed uniformly. Morphologies of the tension fracture surface of the optimally annealed material show noticeably finer dimples, indicating better toughness compared with the other annealed materials. Moreover, the yield strength of the optimally annealed material decreases appreciably in the tensile test. Thus, the new optimization strategy is accurate and feasible.

  4. Remote sensing of atmospheric duct parameters using simulated annealing

    Institute of Scientific and Technical Information of China (English)

    Zhao Xiao-Feng; Huang Si-Xun; Xiang Jie; Shi Wei-Lai

    2011-01-01

    Simulated annealing is one of the most robust optimization schemes. It mimics the annealing process in which a heated metal is cooled slowly to reach a stable minimum-energy state. In this paper, we adopt simulated annealing to study the problem of remote sensing of atmospheric duct parameters for two different geometries of propagation measurement: one from a single emitter to an array of radio receivers (vertical measurements), and the other from radar clutter returns (horizontal measurements). Basic principles of simulated annealing and its application to refractivity estimation are introduced. The performance of this method is validated using numerical experiments and field measurements collected at the East China Sea. The retrieved results demonstrate the feasibility of simulated annealing for near real-time atmospheric refractivity estimation. For comparison, the retrievals of a genetic algorithm are also presented. The comparisons indicate that the convergence speed of simulated annealing is faster than that of the genetic algorithm, while the anti-noise ability of the genetic algorithm is better than that of simulated annealing.
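    The retrieval loop described, a slow geometric cooling with Metropolis acceptance over the duct parameters, can be sketched generically. The objective below stands in for the misfit between modeled and observed propagation; the cooling schedule, step sizes and the toy misfit are illustrative assumptions, not those of the paper.

```python
import math
import random

def anneal_retrieve(objective, bounds, t0=1.0, cooling=0.95,
                    steps=40, t_min=1e-3, seed=1):
    """Minimize `objective` over box-bounded parameters by simulated annealing."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = objective(x)
    best, fbest = x[:], fx
    t = t0
    while t > t_min:
        for _ in range(steps):
            i = rng.randrange(len(x))          # perturb one parameter at a time
            lo, hi = bounds[i]
            cand = x[:]
            cand[i] = min(hi, max(lo, x[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            fc = objective(cand)
            # Metropolis rule: always accept downhill, sometimes uphill
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x[:], fx
        t *= cooling                           # geometric cooling
    return best, fbest

# Stand-in misfit with a known minimum at (0.3, 0.7), playing the role of
# the model-versus-observation mismatch over two normalized duct parameters
misfit = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2
params, residual = anneal_retrieve(misfit, [(0.0, 1.0), (0.0, 1.0)])
```

    The slow cooling is what buys the global-search behavior the abstract contrasts with the genetic algorithm's faster but noisier convergence.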

  5. Proposition of a full deterministic medium access method for wireless network in a robotic application

    CERN Document Server

    Bossche, Adrien Van Den; Campo, Eric

    2008-01-01

    Today, many network applications require shorter reaction times. The robotic field is an excellent example of these needs: a robot's reaction time has a direct effect on the complexity of its tasks. Here, we propose a fully deterministic medium access method for a wireless robotic application. This contribution is based on low-power wireless personal area networks, like the ZigBee standard. Indeed, ZigBee has identified limits with Quality of Service due to non-deterministic medium access and probable collisions during medium reservation requests. In this paper, two major improvements are proposed: an efficient polling of the star nodes and a temporally deterministic distribution of peer-to-peer messages. This new collision-free MAC protocol offers some QoS faculties.

  6. Analysis of Photonic Quantum Nodes Based on Deterministic Single-Photon Raman Passage

    CERN Document Server

    Rosenblum, Serge

    2014-01-01

    The long-standing goal of deterministically controlling a single photon using another was recently realized in various experimental settings. Among these, a particularly attractive demonstration relied on deterministic single-photon Raman passage in a three-level Lambda system coupled to a single-mode waveguide. Beyond the ability to control the direction of propagation of one photon by the direction of another photon, this scheme can also perform as a passive quantum memory and a universal quantum gate. Relying on interference, this all-optical, coherent scheme requires no additional control fields, and can therefore form the basis for scalable quantum networks composed of passive quantum nodes that interact with each other only with single photon pulses. Here we present an analytical and numerical study of deterministic single-photon Raman passage, and characterise its limitations and the parameters for optimal operation. Specifically, we study the effect of losses and the presence of multiple excited state...

  7. A single-loop deterministic method for reliability-based design optimization

    Science.gov (United States)

    Li, Fan; Wu, Teresa; Badiru, Adedeji; Hu, Mengqi; Soni, Som

    2013-04-01

    Reliability-based design optimization (RBDO) is a technique used for engineering design when uncertainty is being considered. A typical RBDO problem can be formulated as a stochastic optimization model where the performance of a system is optimized and the reliability requirements are treated as constraints. One major challenge of RBDO research has been the prohibitive computational expenses. In this research, a new approximation approach, termed the single-loop deterministic method for RBDO (SLDM_RBDO), is proposed to reduce the computational effort of RBDO without sacrificing much accuracy. Based on the first order reliability method, the SLDM_RBDO method converts the probabilistic constraints to approximate deterministic constraints so that the RBDO problems can be transformed to deterministic optimization problems in one step. Three comparison experiments are conducted to show the performance of the SLDM_RBDO. In addition, a reliable forearm crutch design is studied to demonstrate the applicability of SLDM_RBDO to a real industry case.
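    The core idea of converting a probabilistic constraint into an approximate deterministic one at a target reliability index can be illustrated with a first-order (FORM) shift for independent normal variables. This is a simplified sketch of the single-loop concept, not the authors' SLDM_RBDO formulation; the function names and the example limit state are hypothetical.

```python
import numpy as np

def shifted_point(grad_at_mean, mu, sigma, beta_t):
    """Approximate most-probable failure point for independent normal
    variables, from a first-order (FORM) linearization at the mean."""
    sens = np.asarray(grad_at_mean, float) * np.asarray(sigma, float)
    alpha = sens / np.linalg.norm(sens)   # unit direction in standard space
    return np.asarray(mu, float) - beta_t * np.asarray(sigma, float) * alpha

def deterministic_constraint(g, grad_g, mu, sigma, beta_t):
    """Deterministic surrogate for the probabilistic constraint
    P[g(X) < 0] <= Phi(-beta_t): require g >= 0 at the shifted point."""
    return g(shifted_point(grad_g(mu), mu, sigma, beta_t))
```

    For a linear limit state g(x) = x1 + x2 - 5 with means (3, 3) and standard deviations 0.5, the surrogate is positive at a target index beta_t = 1 but negative at beta_t = 3, flagging the mean design as insufficiently reliable at the stricter target, all without any sampling, which is the source of the computational savings the abstract describes.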

  8. Experimental demonstration on the deterministic quantum key distribution based on entangled photons

    Science.gov (United States)

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-02-01

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications.

  9. Deterministic Methods for Filtering, part I: Mean-field Ensemble Kalman Filtering

    CERN Document Server

    Law, Kody J H; Tempone, Raul

    2014-01-01

    This paper provides a proof of convergence of the standard EnKF generalized to non-Gaussian state space models, based on the indistinguishability property of the joint distribution on the ensemble. A density-based deterministic approximation of the mean-field EnKF (MFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for d<2k. The fidelity of approximation of the true distribution is also established using an extension of total variation metric to random measures. This is limited by a Gaussian bias term arising from non-linearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  10. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    Directory of Open Access Journals (Sweden)

    James C Schaff

    2016-12-01

    Full Text Available Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  11. Optimization of structures subjected to dynamic load: deterministic and probabilistic methods

    Directory of Open Access Journals (Sweden)

    Élcio Cassimiro Alves

    Full Text Available Abstract This paper deals with the deterministic and probabilistic optimization of structures against bending when submitted to dynamic loads. The deterministic optimization problem considers the plate submitted to a time varying load while the probabilistic one takes into account a random loading defined by a power spectral density function. The correlation between the two problems is made by one Fourier Transformed. The finite element method is used to model the structures. The sensitivity analysis is performed through the analytical method and the optimization problem is dealt with by the method of interior points. A comparison between the deterministic optimisation and the probabilistic one with a power spectral density function compatible with the time varying load shows very good results.

  12. Effects of random and deterministic discrete scale invariance on the critical behavior of the Potts model.

    Science.gov (United States)

    Monceau, Pascal

    2012-12-01

The effects of disorder on the critical behavior of the q-state Potts model in noninteger dimensions are studied by comparing deterministic and random fractals sharing the same dimensions in the framework of a discrete scale invariance. We carried out intensive Monte Carlo simulations. In the case of a fractal dimension slightly smaller than two, d_f ≈ 1.974636, we give evidence that the disorder structured by discrete scale invariance does not change the first-order transition associated with the deterministic case when q = 7. Furthermore, the study of the high value q = 14 shows that the transition is a second-order one both for deterministic and random scale invariance, but that their behaviors belong to different universality classes.

  13. Deterministic and stochastic trends in the Lee-Carter mortality model

    DEFF Research Database (Denmark)

    Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene

    2015-01-01

    mortality data. We find empirical evidence that this feature of the Lee–Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee......) factor model where one factor is deterministic and the other factors are stochastic. This feature generalizes to the range of models that extend the Lee–Carter model in various directions.......The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age...
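The random-walk-with-drift backbone that the abstract says the classical Lee-Carter model reduces to can be sketched in a few lines. The index values below are made up for illustration, not the paper's data:

```python
def rw_drift_forecast(k, horizon):
    """Point forecasts from a random walk with drift, the stochastic
    backbone the Lee-Carter mortality index reduces to: the drift is the
    mean first difference of the fitted index k_t."""
    diffs = [k[i + 1] - k[i] for i in range(len(k) - 1)]
    drift = sum(diffs) / len(diffs)
    return [k[-1] + h * drift for h in range(1, horizon + 1)]

# Illustrative (made-up) mortality index values, declining over time:
forecast = rw_drift_forecast([10.0, 9.0, 8.0, 7.0], horizon=3)  # → [6.0, 5.0, 4.0]
```

Separating a deterministic trend from this stochastic component, as the paper proposes, replaces the single drift term with an explicit deterministic factor plus stochastic factors.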

  14. Deterministic and stochastic trends in the Lee-Carter mortality model

    DEFF Research Database (Denmark)

    Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene

    that characterizes mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find...... as a two (or several)-factor model where one factor is deterministic and the other factors are stochastic. This feature generalizes to the range of models that extend the Lee-Carter model in various directions.......The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics loads with identical weights when describing the development of age specific mortality rates. Effectively this means that the main characteristics of the model simplifies to a random walk model...

  15. Mechanism of Annealing Softening of Rolled or Forged Tool Steel

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

In order to reduce the hardness of rolled or forged steels after annealing and improve their processability, the diameter and dispersity of carbides were measured by SEM and quantitative metallography. The microstructure of the annealed steel was analyzed by TEM. The effects of factors such as solute atoms, carbides, grain boundaries and interphase boundaries were studied. The mechanism of annealing softening of steels was analyzed using the examples of steels H13, S5, S7 and X45CrNiMo4 treated with the new technology. The results showed that softening of H13, S7 and S5 is more easily obtained by isothermal or slow-cooling annealing from slightly below A1, whereas the hardness of X45CrNiMo4 after annealing is reduced effectively by obtaining coarse lamellar pearlite. Economic benefits can be obtained from good processability.

  16. Quantum annealing with all-to-all connected nonlinear oscillators

    DEFF Research Database (Denmark)

    Puri, Shruti; Andersen, Christian Kraglund; Grimsmo, Arne L.

    2017-01-01

    Quantum annealing aims at solving combinatorial optimization problems mapped to Ising interactions between quantum spins. Here, with the objective of developing a noise-resilient annealer, we propose a paradigm for quantum annealing with a scalable network of two-photon-driven Kerr......-nonlinear resonators. Each resonator encodes an Ising spin in a robust degenerate subspace formed by two coherent states of opposite phases. A fully connected optimization problem is mapped to local fields driving the resonators, which are connected with only local four-body interactions. We describe an adiabatic...... annealing protocol in this system and analyse its performance in the presence of photon loss. Numerical simulations indicate substantial resilience to this noise channel, leading to a high success probability for quantum annealing. Finally, we propose a realistic circuit QED implementation of this promising...

  17. Annealing of ion-implanted GaN

    CERN Document Server

    Burchard, A; Stötzler, A; Weissenborn, R; Deicher, M

    1999-01-01

$^{111m}$Cd and $^{112}$Cd ions have been implanted into GaN. With photoluminescence spectroscopy and perturbed $\gamma-\gamma$-angular correlation spectroscopy (PAC), the reduction of implantation damage and the optical activation of the implants have been observed as a function of annealing temperature using different annealing methods. The use of an N$_{2}$ or NH$_{3}$ atmosphere during annealing allows temperatures up to 1323 K and 1373 K, respectively, but above 1200 K a strong loss of Cd from the GaN has been observed. Annealing GaN together with elementary Al forms a protective layer on the GaN surface, allowing annealing temperatures up to 1570 K for 10 min. (11 refs).

  18. Influence of time of annealing on anneal hardening effect of a cast CuZn alloy

    OpenAIRE

    Nestorović Svetlana; Ivanić Lj.; Marković Desimir

    2003-01-01

The investigated cast copper alloy contained 8 at% Zn as solute. For comparison, parallel specimens were made from cast pure copper. The copper and copper alloy were subjected to cold rolling with different final reductions of 30, 50 and 70%. The cold-rolled copper and copper alloy samples were isochronally and isothermally annealed up to the recrystallization temperature. After that, the values of hardness, strength and electrical conductivity were measured and X-ray analysis was performed. These investigatio...

  19. Cellobiohydrolase variants and polynucleotides encoding same

    Energy Technology Data Exchange (ETDEWEB)

    Wogulis, Mark

    2017-04-04

    The present invention relates to variants of a parent cellobiohydrolase II. The present invention also relates to polynucleotides encoding the variants; nucleic acid constructs, vectors, and host cells comprising the polynucleotides; and methods of using the variants.

  20. Investigation of intensity-modulated radiotherapy optimization with gEUD-based objectives by means of simulated annealing.

    Science.gov (United States)

    Hartmann, Matthias; Bogner, Ludwig

    2008-05-01

Inverse treatment planning of intensity-modulated radiation therapy (IMRT) is complicated by several sources of error, which can cause deviations of optimized plans from the true optimal solution. These errors include the systematic and convergence error, the local minima error, and the optimizer convergence error. We minimize these errors by developing an inverse IMRT treatment planning system with a Monte Carlo based dose engine and both a simulated annealing and a deterministic search engine. In addition, different generalized equivalent uniform dose (gEUD)-based and hybrid objective functions were implemented and investigated with simulated annealing. By means of a head-and-neck IMRT case we have analyzed the properties of these gEUD-based objective functions, including their search space and the existence of local optima errors. We found evidence that the gEUD-based objective function of a previously published investigation results in an uncommon search space with a golf-hole structure. This special search space structure leads to trapping in local minima, making it extremely difficult to identify the true global minimum, even when using stochastic search engines. Moreover, for the same IMRT case several local optima have been detected by comparing the solutions of 100 different trials using a gradient optimization algorithm with the global optimum computed by simulated annealing. We have demonstrated that the hybrid objective function, which includes dose-based objectives for the target and gEUD-based objectives for normal tissue, results in equally good sparing of the critical structures as the pure gEUD objective function, with lower target dose maxima.
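For context, the kind of stochastic search engine referred to above accepts occasional uphill moves so it can escape narrow local minima. A bare-bones sketch on a toy one-dimensional objective follows; the objective, cooling schedule and all parameters are illustrative, not those of the planning system:

```python
import math
import random

def simulated_annealing(objective, x0, step, t0=1.0, cooling=0.999,
                        n_iter=20000, seed=0):
    """Minimise `objective` by annealing: worse moves are accepted with
    probability exp(-delta/T), which lets the search climb out of narrow
    'golf hole' local minima."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy multimodal objective with many local minima and a global one near x = 2.2
f = lambda x: (x - 2.0) ** 2 + 1.5 * math.sin(5.0 * x) + 1.5
x, fx = simulated_annealing(f, x0=-5.0, step=0.5)
```

A gradient optimizer started at x0 = -5 would stop at the first local minimum it reaches; the annealer's temperature-dependent acceptance rule is what allows it to keep moving toward the global basin.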

  1. Law of Malus and Photon-Photon Correlations A Quasi-Deterministic Analyzer Model

    CERN Document Server

    Dalton, B J

    2001-01-01

For polarization experiments involving photon counting we introduce a quasi-deterministic eigenstate transition model of the analyzer process. Distributions accumulated one photon at a time provide a deterministic explanation for the law of Malus. We combine this analyzer model with causal polarization coupling to calculate photon-photon correlations, one photon pair at a time. The calculated correlations exceed the Bell limits and show excellent agreement with the measured correlations of [A. Aspect, P. Grangier and G. Roger, Phys. Rev. Lett. 49, 91 (1982)]. We discuss why this model exceeds the Bell-type limits.

  2. VISCO-ELASTIC SYSTEMS UNDER BOTH DETERMINISTIC AND BOUND RANDOM PARAMETRIC EXCITATION

    Institute of Scientific and Technical Information of China (English)

    徐伟; 戎海武; 方同

    2003-01-01

The principal resonance of a visco-elastic system under both deterministic and random parametric excitation was investigated. The method of multiple scales was used to determine the equations of modulation of amplitude and phase. The behavior, stability and bifurcation of the steady-state response were studied by means of qualitative analysis. The contributions of the visco-elastic force to both damping and stiffness can be taken into account. The effects of damping, detuning, bandwidth, and magnitudes of deterministic and random excitations were analyzed. The theoretical analysis is verified by numerical results.

  3. Solving difficult problems creatively: A role for energy optimised deterministic/stochastic hybrid computing

    Directory of Open Access Journals (Sweden)

    Tim ePalmer

    2015-10-01

    Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  4. Certain variants of multipermutohedron ideals

    Indian Academy of Sciences (India)

    AJAY KUMAR; CHANCHAL KUMAR

    2016-10-01

    Multipermutohedron ideals have rich combinatorial properties. An explicit combinatorial formula for the multigraded Betti numbers of a multipermutohedron ideal and their Alexander duals are known. Also, the dimension of the Artinian quotient of an Alexander dual of a multipermutohedron ideal is the number of generalized parking functions. In this paper, monomial ideals which are certain variants of multipermutohedron ideals are studied. Multigraded Betti numbers of these variant monomial ideals and their Alexander duals are obtained. Further, many interesting combinatorial properties of multipermutohedron ideals are extended to these variant monomial ideals.

  5. INTRODUCCIÓN DE ELEMENTOS DE MEMORIA EN EL MÉTODO SIMULATED ANNEALING PARA RESOLVER PROBLEMAS DE PROGRAMACIÓN MULTIOBJETIVO DE MÁQUINAS PARALELAS INTRODUCTION OF MEMORY ELEMENTS IN SIMULATED ANNEALING METHOD TO SOLVE MULTIOBJECTIVE PARALLEL MACHINE SCHEDULING PROBLEMS

    Directory of Open Access Journals (Sweden)

    Felipe Baesler

    2008-12-01

Full Text Available This paper introduces a variant of the simulated annealing metaheuristic oriented to solving multiobjective optimization problems, called MultiObjective Simulated Annealing with Random Trajectory Search (MOSARTS). The technique adds short- and long-term memory elements to the simulated annealing algorithm in order to balance the search effort among all the objectives involved in the problem. The algorithm was tested against three other techniques on a real-life parallel machine scheduling problem composed of 24 jobs and two identical machines, a case study from the local sawmill industry. In the experiments performed, MOSARTS behaved better than the other methods, finding better solutions in terms of dominance and frontier dispersion.

  6. Gene Variants Reduce Opioid Risks

    Science.gov (United States)

... variant of the gene for the μ-opioid receptor (OPRM1) with a decreased risk for addiction to ...

  7. Annealed Scaling for a Charged Polymer

    Science.gov (United States)

    Caravenna, F.; den Hollander, F.; Pétrélis, N.; Poisat, J.

    2016-03-01

    This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems

  8. Annealed Scaling for a Charged Polymer

    Energy Technology Data Exchange (ETDEWEB)

    Caravenna, F., E-mail: francesco.caravenna@unimib.it [Università degli Studi di Milano-Bicocca, Dipartimento di Matematica e Applicazioni (Italy); Hollander, F. den, E-mail: denholla@math.leidenuniv.nl [Leiden University, Mathematical Institute (Netherlands); Pétrélis, N., E-mail: nicolas.petrelis@univ-nantes.fr [Université de Nantes, Laboratoire de Mathématiques Jean Leray UMR 6629 (France); Poisat, J., E-mail: poisat@ceremade.dauphine.fr [Université Paris-Dauphine, PSL Research University, CEREMADE, UMR 7534 (France)

    2016-03-15

    This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems

  9. Surface Morphology of Annealed Lead Phthalocyanine Thin Films

    Directory of Open Access Journals (Sweden)

    P.Kalugasalam,

    2010-06-01

Full Text Available Thin films of lead phthalocyanine (PbPc) on glass substrates were prepared by vacuum deposition. The thickness of the films was 450 nm. The samples were annealed in high vacuum at 373 K. The samples were analysed by X-ray diffraction, scanning electron microscopy and atomic force microscopy in order to determine the structure and surface morphology of the PbPc thin films. The XRD patterns of PbPc show triclinic (T) grains along with monoclinic (M) forms. For the sample annealed at 373 K, the film shows peaks assigned to the triclinic phase. SEM and AFM are the best tools to investigate surface smoothness and to find the grain size of the particles. The grain size was calculated for films of different thicknesses. The AFM micrograph of the annealed film shows that the surface consists of large holes but is otherwise smooth. It is clear that the grain size decreases with increasing annealing temperature, and the roughness also decreases with increasing annealing temperature. Annealing leads to oxidation of the phthalocyanine by absorbed or diffused oxygen; the heat is therefore responsible for the increase in film thickness, and since the films expand, it is believed that the porosity is increased.

  10. The changes of ADI structure during high temperature annealing

    Directory of Open Access Journals (Sweden)

    A. Krzyńska

    2010-01-01

Full Text Available The results of structure investigations of ADI during annealing at elevated temperatures are presented. Ductile iron austempered at 325°C was then isothermally annealed for 360 minutes at 400, 450, 500 and 550°C. The structure investigations showed that annealing at these temperatures caused substantial structure changes and thus an essential decrease in hardness, which is the most useful property of ADI from the point of view of its practical application. The advance of structure degradation depends mainly on the annealing temperature and less on the time of the heat treatment. It was concluded that high-temperature annealing caused precipitation of Fe3C-type carbides, whose morphology and distribution depend on temperature. In the case of annealing at 400°C, the carbides precipitate inside the bainitic ferrite laths in specific crystallographic planes and partly at the grain boundaries. Annealing at 550°C caused the needle- or lath-like morphology characteristic of ADI to disappear, being replaced by equiaxed grains. In this case the Fe3C carbides take the form of very fine precipitates with spheroidal geometry.

  11. Grain coarsening mechanism of Cu thin films by rapid annealing

    Energy Technology Data Exchange (ETDEWEB)

    Sasajima, Yasushi, E-mail: sasajima@mx.ibaraki.ac.jp; Kageyama, Junpei; Khoo, Khyoupin; Onuki, Jin

    2010-09-30

Cu thin films have been produced by an electroplating method using a nominal 9N anode and a nominal 6N CuSO₄·5H₂O electrolyte. Film samples were heat-treated by two procedures: conventional isothermal annealing in hydrogen atmosphere (abbreviated as H₂ annealing) and rapid thermal annealing with an infrared lamp (abbreviated as RTA). After heat treatment, the average grain diameters and the grain orientation distributions were examined by electron backscattering pattern analysis. The RTA samples (400 °C for 5 min) have a larger average grain diameter, a more uniform grain distribution and a higher ratio of (111) orientation than the H₂-annealed samples (400 °C for 30 min). This means that RTA can produce films with coarser and more uniformly distributed grains than H₂ annealing within a short time, i.e. only a few minutes. To clarify the grain coarsening mechanism, grain growth by RTA was simulated using the phase field method. The simulated grain diameter reaches its maximum at a heating rate of the same order as that in the actual RTA experiment. The maximum grain diameter is larger than that obtained by H₂ annealing with the same annealing time at the isothermal stage as in RTA. The distribution of the misorientation was analyzed, which led to a proposed grain growth model for the RTA method.

  12. Embrittlement recovery due to annealing of reactor pressure vessel steels

    Energy Technology Data Exchange (ETDEWEB)

    Eason, E.D.; Wright, J.E.; Nelson, E.E. [Modeling and Computing Services, Boulder, CO (United States); Odette, G.R.; Mader, E.V. [Univ. of California, Santa Barbara, CA (United States)

    1996-03-01

    Embrittlement of reactor pressure vessels (RPVs) can be reduced by thermal annealing at temperatures higher than the normal operating conditions. Although such an annealing process has not been applied to any commercial plants in the United States, one US Army reactor, the BR3 plant in Belgium, and several plants in eastern Europe have been successfully annealed. All available Charpy annealing data were collected and analyzed in this project to develop quantitative models for estimating the recovery in 30 ft-lb (41 J) Charpy transition temperature and Charpy upper shelf energy over a range of potential annealing conditions. Pattern recognition, transformation analysis, residual studies, and the current understanding of the mechanisms involved in the annealing process were used to guide the selection of the most sensitive variables and correlating parameters and to determine the optimal functional forms for fitting the data. The resulting models were fitted by nonlinear least squares. The use of advanced tools, the larger data base now available, and insight from surrogate hardness data produced improved models for quantitative evaluation of the effects of annealing. The quality of models fitted in this project was evaluated by considering both the Charpy annealing data used for fitting and the surrogate hardness data base. The standard errors of the resulting recovery models relative to calibration data are comparable to the uncertainty in unirradiated Charpy data. This work also demonstrates that microhardness recovery is a good surrogate for transition temperature shift recovery and that there is a high level of consistency between the observed annealing trends and fundamental models of embrittlement and recovery processes.

  13. From Ordinary Differential Equations to Structural Causal Models: the deterministic case

    NARCIS (Netherlands)

    Mooij, J.M.; Janzing, D.; Schölkopf, B.; Nicholson, A.; Smyth, P.

    2013-01-01

We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM). Our exposition sheds more light on the concept of causality as expressed within the framework of Structural Causal Models.

  14. Deterministic slope failure hazard assessment in a model catchment and its replication in neighbourhood terrain

    Directory of Open Access Journals (Sweden)

    Kiran Prasad Acharya

    2016-01-01

Full Text Available In this work, we prepare and replicate a deterministic slope failure hazard model in small-scale catchments of the tertiary sedimentary terrain of Niihama city in western Japan. It is generally difficult to replicate a deterministic model from one catchment to another due to the lack of exactly similar geo-mechanical and hydrological parameters. To overcome this problem, discriminant function modelling was done with the deterministic slope failure hazard model and the DEM-based causal factors of slope failure, which yielded an empirical parametric relationship, or discriminant function equation. This parametric relationship was used to predict the slope failure hazard index in a total of 40 target catchments in the study area. From ROC plots, prediction rates between 0.719-0.814 and 0.704-0.805 were obtained with inventories of September and October slope failures, respectively; that is, September slope failures were better predicted than October slope failures by approximately 1%. The results show that prediction of the slope failure hazard index is possible even at a small catchment scale in similar geophysical settings. Moreover, the replication of the deterministic model through discriminant function modelling was found to be successful in predicting typhoon rainfall-induced slope failures with moderate to good accuracy, without any use of geo-mechanical and hydrological parameters.

  15. Nonterminals, homomorphisms and codings in different variations of OL-systems. I. Deterministic systems

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto

    1974-01-01

    The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...

  16. Deterministic Price Setting Rules to Guarantee Profitability of Unbundling in the Airline Industry

    NARCIS (Netherlands)

    Van Diepen, G.; Curran, R.

    2011-01-01

Unbundling the traditional airfare is one of the airline industry's practices to generate ancillary revenue in its struggle for profitability. However, unbundling might just as well negatively affect profit. In this paper deterministic price setting rules are established to guarantee the profitability of unbundling.

  17. Bounds for right tails of deterministic and stochastic sums of random variables

    NARCIS (Netherlands)

    G. Darkiewicz; G. Deelstra; J. Dhaene; T. Hoedemakers; M. Vanmaele

    2009-01-01

We investigate lower and upper bounds for right tails (stop-loss premiums) of deterministic and stochastic sums of nonindependent random variables. The bounds are derived using the concepts of comonotonicity, convex order, and conditioning. The performance of the presented approximations is investigated.

  18. Deterministic Chaos in Open Well-stirred Bray-Liebhafsky Reaction System

    Science.gov (United States)

    Kolar-Anić, Ljiljana; Vukojević, Vladana; Pejić, Nataša; Grozdić, Tomislav; Anić, Slobodan

    2004-12-01

Dynamics of the Bray-Liebhafsky (BL) oscillatory reaction is analyzed in a continuously fed well-stirred tank reactor (CSTR). Deterministic chaos is found under different conditions when temperature and acidity are chosen as control parameters. Dynamic patterns observed in real experiments are also numerically simulated.

  19. On a two-server finite queuing system with ordered entry and deterministic arrivals

    NARCIS (Netherlands)

    Nawijn, W.M.

    1984-01-01

    Consider a two-server, ordered entry, queuing system with heterogeneous servers and finite waiting rooms in front of the servers. Service times are negative exponentially distributed. The arrival process is deterministic. A matrix solution for the steady state probabilities of the number of

  20. Hybrid stochastic-deterministic calculation of the second-order perturbative contribution of multireference perturbation theory

    Science.gov (United States)

    Garniron, Yann; Scemama, Anthony; Loos, Pierre-François; Caffarel, Michel

    2017-07-01

A hybrid stochastic-deterministic approach for computing the second-order perturbative contribution E(2) within multireference perturbation theory (MRPT) is presented. The idea at the heart of our hybrid scheme, based on a reformulation of E(2) as a sum of elementary contributions associated with each determinant of the MR wave function, is to split E(2) into a stochastic and a deterministic part. During the simulation, the stochastic part is gradually reduced by dynamically increasing the deterministic part until one reaches the desired accuracy. In sharp contrast with a purely stochastic Monte Carlo scheme, where the error decreases indefinitely as t^(-1/2) (where t is the computational time), the statistical error in our hybrid algorithm displays a polynomial decay ~t^(-n) with n = 3-4 in the examples considered here. If desired, the calculation can be carried on until the stochastic part entirely vanishes. In that case, the exact result is obtained with no error bar and no noticeable computational overhead compared to the fully deterministic calculation. The method is illustrated on the F2 and Cr2 molecules. Even for the largest case, corresponding to the Cr2 molecule treated with the cc-pVQZ basis set, very accurate results are obtained for E(2) for an active space of (28e, 176o) and a MR wave function including up to 2 × 10^7 determinants.
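The splitting idea above can be illustrated on a plain numerical sum: evaluate the largest terms exactly and estimate the remainder by Monte Carlo sampling. This is a generic sketch of the deterministic/stochastic split, not the authors' MRPT implementation; all names and values are illustrative:

```python
import random

def hybrid_sum(terms, det_fraction, n_samples, seed=0):
    """Evaluate sum(terms) by computing the largest-magnitude terms exactly
    (deterministic part) and estimating the remainder by uniform Monte Carlo
    sampling; raising det_fraction drives the statistical error to zero."""
    rng = random.Random(seed)
    order = sorted(range(len(terms)), key=lambda i: -abs(terms[i]))
    n_det = int(det_fraction * len(terms))
    det_part = sum(terms[i] for i in order[:n_det])
    rest = [terms[i] for i in order[n_det:]]
    if not rest:
        return det_part          # fully deterministic: exact, no error bar
    draws = [rng.choice(rest) for _ in range(n_samples)]
    estimate = sum(draws) * len(rest) / n_samples
    return det_part + estimate

terms = [5.0, 4.0, 3.0, 2.0, 1.0]
exact = hybrid_sum(terms, det_fraction=1.0, n_samples=1)
mixed = hybrid_sum(terms, det_fraction=0.6, n_samples=200)
```

When det_fraction reaches 1 the stochastic part vanishes and the exact result is returned with no error bar, mirroring the limiting behaviour described in the abstract.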

  1. A small-world network derived from the deterministic uniform recursive tree by line graph operation

    Science.gov (United States)

    Hou, Pengfeng; Zhao, Haixing; Mao, Yaping; Wang, Zhao

    2016-03-01

The deterministic uniform recursive tree ({DURT}) is one of the deterministic versions of the uniform recursive tree ({URT}). Zhang et al (2008 Eur. Phys. J. B 63 507-13) studied the properties of DURT, including its topological characteristics and spectral properties. Although DURT shows a logarithmic scaling with the size of the network, DURT is not a small-world network since its clustering coefficient is zero. Lu et al (2012 Physica A 391 87-92) proposed a deterministic small-world network by adding some edges with a simple rule in each DURT iteration. In this paper, we introduce a method for constructing a new deterministic small-world network by the line graph operation in each DURT iteration. The line graph operation brings about cliques at each node of the previous given graph, and the resulting line graph possesses larger clustering coefficients. On the other hand, this operation decreases the diameter by almost one, allowing analytic solutions for several topological characteristics of the proposed model. Supported by The Ministry of Science and Technology 973 project (No. 2010C B334708); National Science Foundation of China (Nos. 61164005, 11161037, 11101232, 11461054, 11551001); The Ministry of education scholars and innovation team support plan of Yangtze River (No. IRT1068); Qinghai Province Nature Science Foundation Project (Nos. 2012-Z-943, 2014-ZJ-907).
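The clustering effect of the line-graph operation can be checked with a small pure-Python sketch (a star tree stands in for DURT here, an illustrative assumption; any tree has clustering coefficient zero, while the line graph turns the edges around each node into a clique):

```python
from itertools import combinations

def line_graph(edges):
    """Line graph: one node per edge; two nodes are adjacent iff the
    original edges share an endpoint."""
    nodes = [tuple(sorted(e)) for e in edges]
    adj = {v: set() for v in nodes}
    for a, b in combinations(nodes, 2):
        if set(a) & set(b):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def avg_clustering(adj):
    """Average local clustering coefficient (0 for nodes of degree < 2)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

# A star K_{1,5} is a tree, so clustering is 0; its line graph is the clique K_5.
star = [(0, i) for i in range(1, 6)]
adj_star = {v: set() for e in star for v in e}
for a, b in star:
    adj_star[a].add(b); adj_star[b].add(a)
lg = line_graph(star)
print(avg_clustering(adj_star), avg_clustering(lg))  # 0.0 1.0
```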

  2. Fast Deterministic Distributed Maximal Independent Set Computation on Growth-Bounded Graphs

    NARCIS (Netherlands)

    Kuhn, Fabian; Moscibroda, Thomas; Nieberg, Tim; Wattenhofer, Roger; Fraigniaud, Pierre

    2005-01-01

    The distributed complexity of computing a maximal independent set in a graph is of both practical and theoretical importance. While there exists an elegant O(log n) time randomized algorithm for general graphs, no deterministic polylogarithmic algorithm is known. In this paper, we study the problem

  3. Deterministic Price Setting Rules to Guarantee Profitability of Unbundling in the Airline Industry

    NARCIS (Netherlands)

    Van Diepen, G.; Curran, R.

    2011-01-01

    Unbundling the traditional airfare is one of the airline industry’s practices to generate ancillary revenue in its struggle for profitability. However, unbundling might just as well negatively affect profit. In this paper deterministic price setting rules are established to guarantee profitability

  4. Calculation of photon pulse height distribution using deterministic and Monte Carlo methods

    Science.gov (United States)

    Akhavan, Azadeh; Vosoughi, Naser

    2015-12-01

Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
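The core of the first method, iterating on the scattering source, can be reduced to a zero-dimensional sketch (an assumed simplification; the actual codes solve the full discrete-ordinates equations in space and angle): for a scattering ratio c < 1, the iteration phi_{k+1} = S + c*phi_k converges geometrically to the infinite-medium flux S/(1 - c).

```python
def source_iteration(S, c, tol=1e-10, max_iter=1000):
    """Iterate on the scattering source: phi_{k+1} = S + c*phi_k.
    For scattering ratio c < 1 this converges to S / (1 - c), the
    infinite-medium scalar flux."""
    phi = 0.0
    for k in range(1, max_iter + 1):
        new = S + c * phi
        if abs(new - phi) < tol:
            return new, k
        phi = new
    return phi, max_iter

phi, n_iter = source_iteration(S=1.0, c=0.5)
print(phi, n_iter)   # phi -> 2.0; the error shrinks by a factor c per sweep
```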

  5. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    Science.gov (United States)

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2016-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  6. Controlling influenza disease: Comparison between discrete time Markov chain and deterministic model

    Science.gov (United States)

    Novkaniza, F.; Ivana, Aldila, D.

    2016-04-01

Mathematical models of respiratory disease spread using a Discrete Time Markov Chain (DTMC) and a deterministic approach, for a constant total population size, are analyzed and compared in this article. Intervention by medical treatment and the use of medical masks are included in the model as constant parameters for controlling influenza spread. Equilibrium points and the basic reproductive ratio as the endemic criterion, together with its level set as a function of several parameters, are given analytically and numerically as results of the deterministic model analysis. Assuming the total human population is constant, as in the deterministic model, the number of infected people is also analyzed with the DTMC model. For sufficiently small Δt, we may assume that the total number of infected people changes only from i to i + 1, i - 1, or i. An approximation of the probability of an outbreak via the gambler's ruin problem is presented. We find that no matter the value of the basic reproductive ratio ℛ0, whether larger or smaller than one, the number of infected people in the DTMC model always tends to 0 as t → ∞. Some numerical simulations comparing the deterministic and DTMC approaches are given to provide a better interpretation and understanding of the model results.
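A minimal sketch of the comparison, assuming a simple SIS-type model with illustrative parameters in place of the article's full influenza model: the deterministic recursion tracks the expected number of infected, while in the DTMC the count i moves only to i + 1, i - 1, or stays put, with probabilities proportional to Δt.

```python
import random

N, beta, gamma, dt = 100, 0.3, 0.2, 0.02   # R0 = beta/gamma = 1.5

def deterministic(i0, steps):
    """Discrete-time deterministic SIS: di = (beta*i*(N-i)/N - gamma*i) dt."""
    i = float(i0)
    for _ in range(steps):
        i += (beta * i * (N - i) / N - gamma * i) * dt
    return i

def dtmc(i0, steps, rng):
    """DTMC: for small dt, i changes only to i+1, i-1, or stays at i."""
    i = i0
    for _ in range(steps):
        p_up = beta * i * (N - i) / N * dt     # one new infection
        p_down = gamma * i * dt                # one recovery
        u = rng.random()
        if u < p_up:
            i += 1
        elif u < p_up + p_down:
            i -= 1
    return i

rng = random.Random(1)
runs = [dtmc(5, 5000, rng) for _ in range(100)]
print(deterministic(5, 5000), sum(runs) / len(runs))
```

With these rates ℛ0 = 1.5, so both settle near the endemic level N(1 - gamma/beta) ≈ 33 on this time scale, even though the DTMC has an absorbing state at 0 and every stochastic run eventually goes extinct, as the abstract notes.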

  7. FP/FIFO scheduling: coexistence of deterministic and probabilistic QoS guarantees

    Directory of Open Access Journals (Sweden)

    Pascale Minet

    2007-01-01

Full Text Available In this paper, we focus on applications having quantitative QoS (Quality of Service requirements on their end-to-end response time (or jitter. We propose a solution allowing the coexistence of two types of quantitative QoS guarantees, deterministic and probabilistic, while providing a high resource utilization. Our solution combines the advantages of the deterministic approach and the probabilistic one. The deterministic approach is based on a worst case analysis. The probabilistic approach uses a mathematical model to obtain the probability that the response time exceeds a given value. We assume that flows are scheduled according to non-preemptive FP/FIFO. The packet with the highest fixed priority is scheduled first. If two packets share the same priority, the packet arrived first is scheduled first. We make no particular assumption concerning the flow priority and the nature of the QoS guarantee requested by the flow. An admission control derived from these results is then proposed, allowing each flow to receive a quantitative QoS guarantee adapted to its QoS requirements. An example illustrates the merits of the coexistence of deterministic and probabilistic QoS guarantees.
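The scheduling policy itself (though not the paper's worst-case or probabilistic analysis) is easy to make concrete; the following single-server sketch uses illustrative packet tuples (arrival, priority, service), with a lower number meaning higher fixed priority:

```python
import heapq

def fp_fifo_response_times(packets):
    """Non-preemptive FP/FIFO on one server. packets: (arrival, priority,
    service); lower priority value = higher priority; FIFO within a
    priority level. Returns per-packet response times in input order."""
    order = sorted(range(len(packets)), key=lambda i: packets[i][0])
    heap, resp = [], [0.0] * len(packets)
    t, k, seq = 0.0, 0, 0
    for _ in range(len(packets)):
        if not heap and k < len(order) and packets[order[k]][0] > t:
            t = packets[order[k]][0]           # server idles until next arrival
        while k < len(order) and packets[order[k]][0] <= t:
            i = order[k]                        # admit everything arrived by t
            heapq.heappush(heap, (packets[i][1], packets[i][0], seq, i))
            seq += 1; k += 1
        prio, arr, _, i = heapq.heappop(heap)   # highest priority, FIFO ties
        t += packets[i][2]                      # non-preemptive: serve fully
        resp[i] = t - arr
    return resp

# one low-priority packet (priority 1) already in service delays both
# high-priority packets (priority 0) because service is non-preemptive
pkts = [(0.0, 1, 4.0), (1.0, 0, 2.0), (2.0, 0, 2.0)]
print(fp_fifo_response_times(pkts))  # [4.0, 5.0, 6.0]
```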

  8. Deterministic and stochastic evolution equations for fully dispersive and weakly nonlinear waves

    DEFF Research Database (Denmark)

    Eldeberky, Y.; Madsen, Per A.

    1999-01-01

    This paper presents a new and more accurate set of deterministic evolution equations for the propagation of fully dispersive, weakly nonlinear, irregular, multidirectional waves. The equations are derived directly from the Laplace equation with leading order nonlinearity in the surface boundary c...

  9. Deterministic-statistical model coupling in a DSS for river-basin management

    NARCIS (Netherlands)

    de Kok, Jean-Luc; Booij, Martijn J.

    2009-01-01

    This paper presents a method for appropriate coupling of deterministic and statistical models. In the decision-support system for the Elbe river, a conceptual rainfall-runoff model is used to obtain the discharge statistics and corresponding average number of flood days, which is a key input

  10. Using EFDD as a Robust Technique for Deterministic Excitation in Operational Modal Analysis

    DEFF Research Database (Denmark)

    Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune

    2007-01-01

    The algorithms used in Operational Modal Analysis assume that the input forces are stochastic in nature. While this is often the case for civil engineering structures, mechanical structures, in contrast, are subject inherently to deterministic forces due to the rotating parts in the machinery. Th...

  11. On competition in a Stackelberg location-design model with deterministic supplier choice

    NARCIS (Netherlands)

    Hendrix, E.M.T.

    2016-01-01

    We study a market situation where two firms maximize market capture by deciding on the location in the plane and investing in a competing quality against investment cost. Clients choose one of the suppliers; i.e. deterministic supplier choice. To study this situation, a game theoretic model is formu

  12. Car Accidents in the Deterministic and Nondeterministic Nagel-Schreckenberg Models

    Science.gov (United States)

    Yang, Xian-Qing; Ma, Yu-Qiang

    In this paper, we study further the probability for the occurrence of car accidents in the Nagel-Schreckenberg model. By considering the braking probability, the conditions for car accidents to occur are modified to obtain accurate results. A universal phenomenological theory will also be presented to describe the probability for car accidents to occur in the deterministic and nondeterministic models, respectively.
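A compact simulation in the spirit of the model (the accident criterion used here, a small gap behind a leader that was moving and suddenly stops, follows the Boccara-Fuks-Zeng conditions and is an assumption on our part; all parameter values are illustrative):

```python
import random

def nasch_step(pos, vel, L, vmax, p, rng):
    """One parallel update of the Nagel-Schreckenberg ring model."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % L   # empty cells to the car ahead
        v = min(vel[i] + 1, vmax)                   # 1. accelerate
        v = min(v, gap)                             # 2. brake to avoid collision
        if v > 0 and rng.random() < p:              # 3. random braking
            v -= 1
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % L for i in range(n)]  # 4. move
    return new_pos, new_vel

def dangerous_fraction(L=100, n=30, vmax=5, p=0.3, steps=500, seed=2):
    """Fraction of (car, step) pairs meeting the assumed accident
    conditions: gap <= vmax and a moving leader that stops abruptly."""
    rng = random.Random(seed)
    pos = sorted(rng.sample(range(L), n))
    vel = [0] * n
    danger = 0
    for _ in range(steps):
        new_pos, new_vel = nasch_step(pos, vel, L, vmax, p, rng)
        for i in range(n):
            j = (i + 1) % n
            gap = (pos[j] - pos[i] - 1) % L
            if gap <= vmax and vel[j] > 0 and new_vel[j] == 0:
                danger += 1
        pos, vel = new_pos, new_vel
    return danger / (n * steps)

print(dangerous_fraction())
```

At this density the system is in the congested regime, so stop-and-go waves make such dangerous configurations frequent; varying p probes the deterministic (p = 0) versus nondeterministic cases contrasted in the abstract.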

  13. Composition dependent thermal annealing behaviour of ion tracks in apatite

    Energy Technology Data Exchange (ETDEWEB)

    Nadzri, A., E-mail: allina.nadzri@anu.edu.au [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia); Schauries, D.; Mota-Santiago, P.; Muradoglu, S. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia); Trautmann, C. [GSI Helmholtz Centre for Heavy Ion Research, Planckstrasse 1, 64291 Darmstadt (Germany); Technische Universität Darmstadt, 64287 Darmstadt (Germany); Gleadow, A.J.W. [School of Earth Science, University of Melbourne, Melbourne, VIC 3010 (Australia); Hawley, A. [Australian Synchrotron, 800 Blackburn Road, Clayton, VIC 3168 (Australia); Kluth, P. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia)

    2016-07-15

Natural apatite samples with different F/Cl content from a variety of geological locations (Durango, Mexico; Mud Tank, Australia; and Snarum, Norway) were irradiated with swift heavy ions to simulate fission tracks. The annealing kinetics of the resulting ion tracks was investigated using synchrotron-based small-angle X-ray scattering (SAXS) combined with ex situ annealing. The activation energies for track recrystallization were extracted and are consistent with previous studies using track etching; tracks in the chlorine-rich Snarum apatite are more resistant to annealing than those in the other compositions.

  14. Excimer laser annealing for low-voltage power MOSFET

    Science.gov (United States)

    Chen, Yi; Okada, Tatsuya; Noguchi, Takashi; Mazzamuto, Fulvio; Huet, Karim

    2016-08-01

    Excimer laser annealing of lumped beam was performed to form the P-base junction for high-performance low-voltage-power MOSFET. An equivalent shallow-junction structure for the P-base junction with a uniform impurity distribution is realized by adopting excimer laser annealing (ELA). The impurity distribution in the P-base junction can be controlled precisely by the irradiated pulse energy density and the number of shots of excimer laser. High impurity activation for the shallow junction has been confirmed in the melted phase. The application of the laser annealing technology in the fabrication process of a practical low-voltage trench gate MOSFET was also examined.

  15. A NEW GENETIC SIMULATED ANNEALING ALGORITHM FOR FLOOD ROUTING MODEL

    Institute of Scientific and Technical Information of China (English)

    KANG Ling; WANG Cheng; JIANG Tie-bing

    2004-01-01

In this paper, a new approach, the Genetic Simulated Annealing (GSA), was proposed for optimizing the parameters in the Muskingum routing model. By integrating the simulated annealing method into the genetic algorithm, the hybrid method could avoid some troubles of traditional methods, such as the arduous trial-and-error procedure, premature convergence in the genetic algorithm and search blindness in simulated annealing. The principle and implementing procedure of this algorithm were described. Numerical experiments show that the GSA can adjust the optimization population, prevent premature convergence and seek the global optimal result. Applications to the Nanyunhe River and Qingjiang River show that the proposed approach offers higher forecast accuracy and practicability.
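The hybrid principle, genetic operators for global exploration combined with Metropolis acceptance under a cooling schedule, can be sketched on a toy objective (the one-dimensional function, population size, cooling rate and mutation scale are illustrative assumptions; the paper applies the idea to Muskingum parameter calibration):

```python
import math, random

def gsa_minimize(f, lo, hi, pop_size=30, iters=300, t0=1.0, cooling=0.98, seed=3):
    """Genetic simulated annealing sketch: arithmetic crossover and
    Gaussian mutation propose children; a Metropolis test at a slowly
    cooled temperature decides replacement, keeping diversity early
    (against premature convergence) and turning greedy late."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=f)
    temp = t0
    for _ in range(iters):
        for i in range(pop_size):
            mate = rng.choice(pop)
            child = 0.5 * (pop[i] + mate)              # crossover
            child += rng.gauss(0, 0.05 * (hi - lo))    # mutation
            child = min(max(child, lo), hi)
            delta = f(child) - f(pop[i])
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                pop[i] = child                         # SA-style acceptance
            if f(child) < f(best):
                best = child
        temp *= cooling                                # annealing schedule
    return best

# multimodal test objective with its global minimum at x = 0
f = lambda x: x * x + 10.0 * (1.0 - math.cos(x))
best = gsa_minimize(f, -10.0, 10.0)
print(best)
```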

  16. Kriging-approximation simulated annealing algorithm for groundwater modeling

    Science.gov (United States)

    Shen, C. H.

    2015-12-01

Optimization algorithms are often applied to search for the best parameters of complex groundwater models. Running a complex groundwater model to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial statistics method used to interpolate unknown variables based on surrounding given data. In the algorithm, the Kriging method is used to estimate the complicated objective function and is incorporated into simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
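A rough sketch of the idea, with two loudly labeled simplifications: the "kriging" below is a plain Gaussian-weighted interpolation (no covariance solve), and a one-dimensional quadratic stands in for the expensive groundwater model.

```python
import math, random

def expensive_model(x):
    """Stand-in for a time-consuming groundwater model evaluation."""
    return (x - 3.0) ** 2

def surrogate(x, xs, ys, length=1.0):
    """Gaussian-weighted interpolation of sampled points, a stand-in
    for a full kriging predictor."""
    ws = [math.exp(-((x - xi) / length) ** 2) for xi in xs]
    s = sum(ws)
    if s < 1e-12:
        return sum(ys) / len(ys)        # far from all data: global mean
    return sum(w * y for w, y in zip(ws, ys)) / s

def ka_sa(lo=-10.0, hi=10.0, iters=600, t0=5.0, cooling=0.99, refresh=10, seed=4):
    """Simulated annealing that mostly queries the cheap surrogate and
    only occasionally runs the true model, adding each true result to
    the interpolation data set."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    xs, ys = [x], [expensive_model(x)]
    best_x, best_y = x, ys[0]
    cur, temp, calls = ys[0], t0, 1
    for k in range(iters):
        cand = min(max(x + rng.gauss(0, 1.0), lo), hi)
        if k % refresh == 0:                    # true model, sparingly
            val = expensive_model(cand)
            xs.append(cand); ys.append(val); calls += 1
            if val < best_y:
                best_x, best_y = cand, val
        else:                                   # cheap surrogate otherwise
            val = surrogate(cand, xs, ys)
        if val < cur or rng.random() < math.exp(-(val - cur) / temp):
            x, cur = cand, val
        temp *= cooling
    return best_x, calls

best_x, calls = ka_sa()
print(best_x, calls)   # near the optimum at 3 with only ~61 true model runs
```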

  17. Kinetics of the austenite formation during intercritical annealing

    OpenAIRE

    J. Lis; A. Lis

    2008-01-01

Purpose: of this paper is to assess the effect of the microstructure of the 6Mn16 steel after soft annealing on the kinetics of austenite formation during subsequent intercritical annealing. Design/methodology/approach: Analytical TEM point analysis with the EDAX system attached to a Philips CM20 was used to evaluate the concentration of Mn in the microstructure constituents of the multiphase steel. Findings: The increase in soft annealing time from 1-60 hours at 625 °C increases Mn partitioning between ferrite ...

  18. Exploring first-order phase transitions with population annealing

    Science.gov (United States)

    Barash, Lev Yu.; Weigel, Martin; Shchur, Lev N.; Janke, Wolfhard

    2017-03-01

    Population annealing is a hybrid of sequential and Markov chain Monte Carlo methods geared towards the efficient parallel simulation of systems with complex free-energy landscapes. Systems with first-order phase transitions are among the problems in computational physics that are difficult to tackle with standard methods such as local-update simulations in the canonical ensemble, for example with the Metropolis algorithm. It is hence interesting to see whether such transitions can be more easily studied using population annealing. We report here our preliminary observations from population annealing runs for the two-dimensional Potts model with q > 4, where it undergoes a first-order transition.

  19. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks.

    Science.gov (United States)

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.

  20. An adaptive approach to the physical annealing strategy for simulated annealing

    Science.gov (United States)

    Hasegawa, M.

    2013-02-01

A new and reasonable method for adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on the previous finding on the search characteristics of the threshold algorithms, that is, the primary role of the relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior, analogous to the stabilization phenomenon occurring in the heating process starting from a quenched solution. The subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in a more straightforward manner than the conventional adaptive approach.

  1. Thermal annealing effects on vanadium pentoxide xerogel films

    National Research Council Canada - National Science Library

    G. N. Barbosa; C. F.O. Graeff; H. P. Oliveira

    2005-01-01

    The effect of water molecules on the conductivity and electrochemical properties of vanadium pentoxide xerogel was studied in connection with changes of morphology upon thermal annealing at different temperatures...

  2. Solvent vapor annealing of an insoluble molecular semiconductor

    KAUST Repository

    Amassian, Aram

    2010-01-01

    Solvent vapor annealing has been proposed as a low-cost, highly versatile, and room-temperature alternative to thermal annealing of organic semiconductors and devices. In this article, we investigate the solvent vapor annealing process of a model insoluble molecular semiconductor thin film - pentacene on SiO 2 exposed to acetone vapor - using a combination of optical reflectance and two-dimensional grazing incidence X-ray diffraction measurements performed in situ, during processing. These measurements provide valuable and new insight into the solvent vapor annealing process; they demonstrate that solvent molecules interact mainly with the surface of the film to induce a solid-solid transition without noticeable swelling, dissolving or melting of the molecular material. © 2010 The Royal Society of Chemistry.

  3. Evidence for quantum annealing with more than one hundred qubits

    Science.gov (United States)

    Boixo, Sergio; Rønnow, Troels F.; Isakov, Sergei V.; Wang, Zhihui; Wecker, David; Lidar, Daniel A.; Martinis, John M.; Troyer, Matthias

    2014-03-01

Quantum technology is maturing to the point where quantum devices, such as quantum communication systems, quantum random number generators and quantum simulators may be built with capabilities exceeding classical computers. A quantum annealer, in particular, solves optimization problems by evolving a known initial configuration at non-zero temperature towards the ground state of a Hamiltonian encoding a given problem. Here, we present results from tests on a 108 qubit D-Wave One device based on superconducting flux qubits. By studying correlations we find that the device performance is inconsistent with classical annealing or with the device being governed by classical spin dynamics. In contrast, we find that the device correlates well with simulated quantum annealing. We find further evidence for quantum annealing in the form of small-gap avoided level crossings characterizing the hard problems. To assess the computational power of the device we compare it against optimized classical algorithms.

  4. Improved mapping of the travelling salesman problem for quantum annealing

    Science.gov (United States)

    Troyer, Matthias; Heim, Bettina; Brown, Ethan; Wecker, David

    2015-03-01

    We consider the quantum adiabatic algorithm as applied to the travelling salesman problem (TSP). We introduce a novel mapping of TSP to an Ising spin glass Hamiltonian and compare it to previous known mappings. Through direct perturbative analysis, unitary evolution, and simulated quantum annealing, we show this new mapping to be significantly superior. We discuss how this advantage can translate to actual physical implementations of TSP on quantum annealers.
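For reference, the standard one-hot mapping that such improvements compete against can be sketched as a QUBO energy function (this is the textbook baseline mapping, an illustrative assumption, not the improved mapping of the abstract): binary variable x[c][t] = 1 means city c is visited at time step t; a penalty term enforces a permutation and a quadratic term accumulates the tour length.

```python
def tsp_qubo_energy(x, dist, A=10.0):
    """QUBO energy for the one-hot TSP encoding: x[c][t] = 1 iff city c
    is visited at time t. Penalty weight A enforces 'each city once'
    and 'each time slot once'; the distance term sums consecutive legs."""
    n = len(dist)
    pen = sum((1 - sum(x[c][t] for t in range(n))) ** 2 for c in range(n))
    pen += sum((1 - sum(x[c][t] for c in range(n))) ** 2 for t in range(n))
    length = sum(dist[a][b] * x[a][t] * x[b][(t + 1) % n]
                 for a in range(n) for b in range(n) if a != b
                 for t in range(n))
    return A * pen + length

dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
n = 4
tour = [[1 if c == t else 0 for t in range(n)] for c in range(n)]  # tour 0-1-2-3
print(tsp_qubo_energy(tour, dist))   # zero penalty + tour length 4 -> 4.0
```

An annealer then minimizes this energy over all binary assignments; A must exceed the largest distance scale so that constraint violations are never energetically favorable.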

  5. Structural and magnetic changes on annealing permalloy/copper multilayers

    Energy Technology Data Exchange (ETDEWEB)

    Fulthorpe, B.D.; Hase, T.P.A. E-mail: t.p.a.hase@dur.ac.uk; Tanner, B.K.; Marrows, C.H.; Hickey, B.J

    2001-05-01

    Thin-film powder diffraction and in situ grazing incidence X-ray scattering have been used to determine the structural changes that occur during annealing of permalloy/copper multilayers. We show that the enhanced stability in the magnetotransport properties of multilayers doped with cobalt at the interfaces correlates with reduced interdiffusion. The development of a long correlation length conformal roughness during annealing is observed.

  6. Precise annealing of focal plane arrays for optical detection

    Energy Technology Data Exchange (ETDEWEB)

    Bender, Daniel A.

    2015-09-22

    Precise annealing of identified defective regions of a Focal Plane Array ("FPA") (e.g., exclusive of non-defective regions of the FPA) facilitates removal of defects from an FPA that has been hybridized and/or packaged with readout electronics. Radiation is optionally applied under operating conditions, such as under cryogenic temperatures, such that performance of an FPA can be evaluated before, during, and after annealing without requiring thermal cycling.

  7. A simulated annealing technique for multi-objective simulation optimization

    OpenAIRE

    Mahmoud H. Alrefaei; Diabat, Ali H.

    2009-01-01

    In this paper, we present a simulated annealing algorithm for solving multi-objective simulation optimization problems. The algorithm is based on the idea of simulated annealing with constant temperature, and uses a rule for accepting a candidate solution that depends on the individual estimated objective function values. The algorithm is shown to converge almost surely to an optimal solution. It is applied to a multi-objective inventory problem; the numerical results show that the algorithm ...

  8. Stored energy and annealing behavior of heavily deformed aluminium

    DEFF Research Database (Denmark)

    Kamikawa, Naoya; Huang, Xiaoxu; Kondo, Yuka

    2012-01-01

followed by 0.5 h annealing at 200-600°C, where the former treatment leads to discontinuous recrystallization and the latter to uniform structural coarsening. This behavior has been analyzed in terms of the relative change during annealing of energy stored as elastic energy in the dislocation structure and as boundary energy in the high-angle boundaries. © (2012) Trans Tech Publications, Switzerland.

  9. Population annealing simulations of a binary hard-sphere mixture

    Science.gov (United States)

    Callaham, Jared; Machta, Jonathan

    2017-06-01

    Population annealing is a sequential Monte Carlo scheme well suited to simulating equilibrium states of systems with rough free energy landscapes. Here we use population annealing to study a binary mixture of hard spheres. Population annealing is a parallel version of simulated annealing with an extra resampling step that ensures that a population of replicas of the system represents the equilibrium ensemble at every packing fraction in an annealing schedule. The algorithm and its equilibration properties are described, and results are presented for a glass-forming fluid composed of a 50/50 mixture of hard spheres with diameter ratio of 1.4:1. For this system, we obtain precise results for the equation of state in the glassy regime up to packing fractions φ ≈0.60 and study deviations from the Boublik-Mansoori-Carnahan-Starling-Leland equation of state. For higher packing fractions, the algorithm falls out of equilibrium and a free volume fit predicts jamming at packing fraction φ ≈0.667 . We conclude that population annealing is an effective tool for studying equilibrium glassy fluids and the jamming transition.
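The algorithm's resampling-plus-Metropolis structure can be sketched on a toy continuous system (a double-well potential stands in for the hard-sphere mixture; population size and schedule are illustrative assumptions):

```python
import math, random

def population_annealing(energy, propose, n_pop=200, sweeps=5, seed=5):
    """Population annealing sketch: at each inverse temperature beta in
    the schedule, replicas are resampled with weights exp(-dbeta * E),
    then each replica makes a few Metropolis moves at the new beta."""
    rng = random.Random(seed)
    betas = [0.1 * k for k in range(1, 31)]          # beta: 0.1 ... 3.0
    pop = [rng.uniform(-3.0, 3.0) for _ in range(n_pop)]
    beta_old = 0.0
    for beta in betas:
        dbeta = beta - beta_old
        weights = [math.exp(-dbeta * energy(x)) for x in pop]
        pop = rng.choices(pop, weights=weights, k=n_pop)   # resampling step
        beta_old = beta
        for i in range(n_pop):
            for _ in range(sweeps):                  # equilibration moves
                cand = propose(pop[i], rng)
                dE = energy(cand) - energy(pop[i])
                if dE < 0 or rng.random() < math.exp(-beta * dE):
                    pop[i] = cand
    return pop

E = lambda x: (x * x - 1.0) ** 2          # double well, minima at x = +/-1
step = lambda x, rng: x + rng.gauss(0, 0.3)
final = population_annealing(E, step)
mean_E = sum(E(x) for x in final) / len(final)
print(mean_E)
```

The resampling step is what distinguishes the method from plain parallel simulated annealing: it keeps the population representative of the equilibrium ensemble at every step of the schedule.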

  10. Synthesis and characterization of Ar-annealed zinc oxide nanostructures

    Directory of Open Access Journals (Sweden)

    Narayanan Kuthirummal

    2016-09-01

Full Text Available Nanostructured zinc oxide samples were synthesized through CVD and annealed in argon. The samples were investigated using SEM, TEM, XRD, and UV/VIS/FTIR photoacoustic spectroscopy. The SEM/TEM images show relatively spherical particles that form elongated, connected domains after annealing. XRD measurements indicate a typical wurtzite structure and reveal an increase in average grain size from 16.3 nm to 21.2 nm in Ar-annealed samples over pristine samples. Visible photoacoustic spectra reveal the contribution of defect levels to the absorption edge of the fundamental gap of zinc oxide. The steepness parameter of the absorption edge, which is inversely proportional to the width of the absorption edge, decreased from 0.1582 (pristine) to 0.1539 (annealed for 90 minutes), revealing an increased density of defect states upon annealing. The FTIR photoacoustic spectra show an intense peak at 412 cm-1 and a shoulder at 504 cm-1 corresponding to the two transverse optical stretching modes of ZnO. These results may indicate a self-assembly mechanism upon annealing under an Ar atmosphere leading to early-stage nanorod growth.

  11. A Low Density Microarray Method for the Identification of Human Papillomavirus Type 18 Variants

    Directory of Open Access Journals (Sweden)

    Aracely López-Monteon

    2013-09-01

Full Text Available We describe a novel microarray based-method for the screening of oncogenic human papillomavirus 18 (HPV-18 molecular variants. Due to the fact that sequencing methodology may underestimate samples containing more than one variant, we designed a specific and sensitive stacking DNA hybridization assay. This technology can be used to discriminate between three possible phylogenetic branches of HPV-18. Probes were attached covalently on glass slides and hybridized with single-stranded DNA targets. Prior to hybridization with the probes, the target strands were pre-annealed with the three auxiliary contiguous oligonucleotides flanking the target sequences. HPV-18-positive cell lines and cervical samples were used to evaluate the performance of this HPV DNA microarray. Our results demonstrate that the HPV-18 variants hybridized specifically to the probes, with no detection of unspecific signals. Specific probes successfully reveal detectable point mutations in these variants. The present DNA oligoarray system can be used as a reliable, sensitive and specific method for HPV-18 variant screening. Furthermore, this simple assay allows the use of inexpensive equipment, making it accessible in resource-poor settings.

  12. Rapid hardening induced by electric pulse annealing in nanostructured pure aluminum

    DEFF Research Database (Denmark)

    Zeng, Wei; Shen, Yao; Zhang, Ning

    2012-01-01

    Nanostructured pure aluminum was fabricated by heavy cold-rolling and then subjected to recovery annealing either by applying electric pulse annealing or by traditional air furnace annealing. Both annealing treatments resulted in an increase in yield strength due to the occurrence of a “dislocation...

  13. Deterministic sensitivity analysis for the numerical simulation of contaminants transport; Analyse de sensibilite deterministe pour la simulation numerique du transfert de contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Marchand, E

    2007-12-15

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
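The deterministic ingredient, a singular value decomposition of the model derivative, can be sketched for a toy two-parameter, two-output model (the linear "model", parameter names and finite-difference Jacobian below are illustrative assumptions; the study differentiates a Darcy-flow/transport code): the singular values rank the parameter combinations to which the outlet fluxes are most and least sensitive.

```python
import math

def jacobian(model, p, eps=1e-6):
    """Forward-difference Jacobian: rows = outputs, columns = parameters."""
    y0 = model(p)
    cols = []
    for j in range(len(p)):
        q = list(p)
        q[j] += eps
        yj = model(q)
        cols.append([(a - b) / eps for a, b in zip(yj, y0)])
    return [[cols[j][i] for j in range(len(p))] for i in range(len(y0))]

def singular_values_2x2(J):
    """Singular values of a 2x2 matrix via the eigenvalues of J^T J."""
    (a, b), (c, d) = J
    g11 = a * a + c * c
    g12 = a * b + c * d
    g22 = b * b + d * d
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (math.sqrt((tr + disc) / 2), math.sqrt(max((tr - disc) / 2, 0.0)))

def model(p):
    """Toy stand-in: two outlet fluxes as functions of two parameters."""
    k, d = p
    return [3.0 * k + 0.5 * d, 0.5 * k + 0.1 * d]

s1, s2 = singular_values_2x2(jacobian(model, [1.0, 1.0]))
print(s1, s2)
```

The large gap between s1 and s2 here says the fluxes respond strongly to one parameter combination and are nearly insensitive to the orthogonal one, which is exactly the local information the deterministic method delivers cheaply.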

  14. Annealing effects on strain and stress sensitivity of polymer optical fibre based sensors

    DEFF Research Database (Denmark)

    Pospori, A.; Marques, C. A. F.; Zubel, M. G.

    2016-01-01

The annealing effects on strain and stress sensitivity of polymer optical fibre Bragg grating sensors after their photoinscription are investigated. PMMA optical fibre based Bragg grating sensors are first photo-inscribed and then placed into hot water for annealing. Strain, stress and force sensitivity measurements are taken before and after annealing. Parameters such as annealing time and annealing temperature are investigated. The change of the fibre diameter due to water absorption and the annealing process is also considered. The results show that annealing the polymer optical fibre tends to increase the strain, stress and force sensitivity of the photo-inscribed sensor.

  15. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  16. Deterministic Coherent Writing of a Long-Lived Semiconductor Spin Qubit Using One Ultrafast Optical Pulse

    CERN Document Server

    Schwartz, I; Schmidgall, E R; Gantz, L; Don, Y; Zielinski, M; Gershoni, D

    2015-01-01

    We use one single, few-picosecond-long, variably polarized laser pulse to deterministically write any selected spin state of a quantum dot confined dark exciton whose life and coherence time are six and five orders of magnitude longer than the laser pulse duration, respectively. The pulse is tuned to an absorption resonance of an excited dark exciton state, which acquires non-negligible oscillator strength due to residual mixing with bright exciton states. We obtain a high fidelity one-to-one mapping from any point on the Poincaré sphere of the pulse polarization to a corresponding point on the Bloch sphere of the spin of the deterministically photogenerated dark exciton.

  17. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods

    Energy Technology Data Exchange (ETDEWEB)

    Kucza, Witold, E-mail: witek@agh.edu.pl

    2013-07-25

    Graphical abstract: -- Highlights: •Former random walk approach for FIA simulations has been improved. •Random walk and uniform dispersion models have been used for FIA simulations. •Diffusivities have been optimized by genetic and the Levenberg–Marquardt methods. •Both approaches have given similar results in agreement with experimental ones. -- Abstract: Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg–Marquardt optimization methods, respectively, have been applied for determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches.
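The random-walk half of the approach can be sketched in a few lines: particles are advected axially by the parabolic Poiseuille velocity profile, take Gaussian radial diffusion steps, and the detector response is the histogram of their transit times. The sketch below is a minimal illustration; the function name and all parameter values are ours, not the paper's.

```python
import math
import random

def fia_random_walk(n_particles=200, D=7e-10, R=0.25e-3, L=0.05,
                    u_mean=0.01, dt=0.005, seed=1):
    """Toy random-walk model of solute dispersion in Poiseuille flow.

    Each particle is advected by the local parabolic velocity and takes
    a Gaussian radial diffusion step per time increment; its transit
    time to x = L is recorded (the FIA response is the histogram of
    these times).  All parameter values are illustrative.
    """
    rng = random.Random(seed)
    step = math.sqrt(2 * D * dt)          # RMS radial diffusive step
    arrivals = []
    for _ in range(n_particles):
        r = R * math.sqrt(rng.random())   # uniform over the cross-section
        x = t = 0.0
        while x < L:
            u = 2.0 * u_mean * (1.0 - (r / R) ** 2)  # parabolic profile
            x += u * dt
            r = abs(r + rng.gauss(0.0, step))        # reflect at the axis
            if r > R:                                # reflect at the wall
                r = 2.0 * R - r
            t += dt
        arrivals.append(t)
    return arrivals
```

Fitting such simulated responses to measured ones, e.g. by letting a genetic algorithm search over D, is then what yields the diffusion coefficient.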

  18. Spirally polarized beams for polarimetry measurements of deterministic and homogeneous samples

    Science.gov (United States)

    de Sande, Juan Carlos González; Santarsiero, Massimo; Piquero, Gemma

    2017-04-01

    The use of spirally polarized beams (SPBs) in polarimetric measurements of homogeneous and deterministic samples is proposed. Since across any transverse plane such beams present all possible linearly polarized states at once, the complete Mueller matrix of deterministic samples can be recovered with a reduced number of measurements and small errors. Furthermore, SPBs present the same polarization pattern across any transverse plane during propagation, and the same happens for the field propagated after the sample, so that both the sample plane and the plane where the polarization of the field is measured can be chosen at will. Experimental results are presented for the particular case of an azimuthally polarized beam and samples consisting of rotated retardation plates and linear polarizers.

  19. Analysis of deterministic and statistical approaches to fatigue crack growth in pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Francisco, Alexandre S.; Melo, P.F. Frutuoso e [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear. E-mail: frutuoso@lmn.con.ufrj.br

    2000-07-01

    This work presents three approaches to the fatigue crack growth process in steel pressure vessels as applied to failure probability calculation. In Thomson's methodology, crack growth is the term that represents the mechanical behavior which, over time, leads the pressure vessel to structural failure. The first failure probability result is obtained with a deterministic approach, since crack growth laws are of a deterministic nature; this approach provides a reference value. Next, two statistical approaches are performed, based on the fact that fatigue crack growth is a random phenomenon. One of them takes into account only the variability of experimental data, proposing a distribution function to represent the failure process. The other, the stochastic approach, considers the random nature of crack growth along time by randomizing a crack growth law. The solution of this stochastic equation is a transition distribution function fitted to experimental data. (author)
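The deterministic reference calculation amounts to integrating a crack growth law up to a critical crack size. As an illustration we use the classic Paris law (our choice for the sketch; the abstract does not say which law the authors randomize), with made-up steel-like parameter values:

```python
import math

def cycles_to_failure(a0, a_crit, C, m, dsigma, Y=1.0, da=1e-5):
    """Deterministic fatigue life by integrating the Paris law
    da/dN = C * (dK)^m with dK = Y * dsigma * sqrt(pi * a).
    All parameter values used below are illustrative only."""
    a, N = a0, 0.0
    while a < a_crit:
        dK = Y * dsigma * math.sqrt(math.pi * a)   # stress intensity range
        N += da / (C * dK ** m)                    # cycles to grow crack by da
        a += da
    return N

# illustrative numbers: crack sizes in metres, stress range in MPa
life = cycles_to_failure(a0=1e-3, a_crit=1e-2, C=1e-12, m=3.0, dsigma=100.0)
```

Randomizing C (or the growth law as a whole) and repeating this integration over sampled values is, in essence, what the statistical approaches above build on.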

  20. Deterministic coupling of delta-doped nitrogen vacancy centers to a nanobeam photonic crystal cavity

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jonathan C.; Cui, Shanying; Zhang, Xingyu; Russell, Kasey J.; Magyar, Andrew P.; Hu, Evelyn L., E-mail: ehu@seas.harvard.edu [School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138 (United States); Bracher, David O. [Department of Physics, Harvard University, Cambridge, Massachusetts 02138 (United States); Ohno, Kenichi; McLellan, Claire A.; Alemán, Benjamin; Bleszynski Jayich, Ania [Department of Physics, University of California, Santa Barbara, Santa Barbara, California 93106 (United States); Andrich, Paolo; Awschalom, David [Department of Physics, University of California, Santa Barbara, Santa Barbara, California 93106 (United States); Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Aharonovich, Igor [School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138 (United States); School of Physics and Advanced Materials, University of Technology Sydney, Ultimo, New South Wales 2007 (Australia)

    2014-12-29

    The negatively charged nitrogen vacancy center (NV) in diamond has generated significant interest as a platform for quantum information processing and sensing in the solid state. For most applications, high quality optical cavities are required to enhance the NV zero-phonon line (ZPL) emission. An outstanding challenge in maximizing the degree of NV-cavity coupling is the deterministic placement of NVs within the cavity. Here, we report photonic crystal nanobeam cavities coupled to NVs incorporated by a delta-doping technique that allows nanometer-scale vertical positioning of the emitters. We demonstrate cavities with Q up to ∼24 000 and mode volume V ∼ 0.47(λ/n)^3 as well as resonant enhancement of the ZPL of an NV ensemble with Purcell factor of ∼20. Our fabrication technique provides a first step towards deterministic NV-cavity coupling using spatial control of the emitters.

  1. Deterministic schedules for robust and reproducible non-uniform sampling in multidimensional NMR.

    Science.gov (United States)

    Eddy, Matthew T; Ruben, David; Griffin, Robert G; Herzfeld, Judith

    2012-01-01

    We show that a simple, general, and easily reproducible method for generating non-uniform sampling (NUS) schedules preserves the benefits of random sampling, including inherently reduced sampling artifacts, while removing the pitfalls associated with choosing an arbitrary seed. Sampling schedules are generated from a discrete cumulative distribution function (CDF) that closely fits the continuous CDF of the desired probability density function. We compare random and deterministic sampling using a Gaussian probability density function applied to 2D HSQC spectra. Data are processed using the previously published method of Spectroscopy by Integration of Frequency and Time domain data (SIFT). NUS spectra from deterministic sampling schedules were found to be at least as good as those from random schedules at the SIFT critical sampling density, and significantly better at half that sampling density. The method can be applied to any probability density function and generalized to greater than two dimensions. Copyright © 2011 Elsevier Inc. All rights reserved.
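The schedule construction described above can be sketched directly: tabulate the discrete CDF of the target density over the grid of possible increments, then take the first grid point at which the CDF reaches each of n equally spaced target levels. No random seed is involved, so the schedule is exactly reproducible. A minimal sketch for a half-Gaussian density; the function and parameter names are ours:

```python
import math

def deterministic_nus_schedule(grid_size, n_samples, sigma):
    """Deterministic NUS schedule: pick n_samples of grid_size increments
    so that their density follows a half-Gaussian, by inverting a
    discrete CDF fitted to the continuous one (illustrative sketch)."""
    # discrete PDF on the grid (half-Gaussian decay)
    pdf = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(grid_size)]
    total = sum(pdf)
    cdf, run = [], 0.0
    for p in pdf:
        run += p / total
        cdf.append(run)
    # deterministic inversion: first grid index whose CDF reaches the
    # k-th target level (k + 0.5) / n_samples
    picks = []
    for k in range(n_samples):
        target = (k + 0.5) / n_samples
        idx = next(i for i, c in enumerate(cdf) if c >= target)
        if idx not in picks:
            picks.append(idx)
    return picks
```

Any other probability density function can be substituted for the Gaussian line, and the same inversion generalizes dimension by dimension.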

  2. Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?

    Science.gov (United States)

    Choustova, Olga

    2007-02-01

    We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of shares. The main distinguishing feature of the financial Bohmian model is the possibility to take into account market psychology by describing the expectations of traders by the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model, in particular the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of modern market ideology. Another objection is of a purely mathematical nature: it is related to the quadratic variation of price trajectories. One possibility to reply to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.

  3. A Deterministic Equivalent Approach to the Performance Analysis of Isometric Random Precoded Systems

    CERN Document Server

    Couillet, Romain; Debbah, Merouane

    2010-01-01

    In this work, a general wireless channel model for different types of code-division multiple access (CDMA) and space-division multiple-access (SDMA) systems with isometric random signature or precoding matrices over frequency-selective and flat fading channels is considered. For such models, deterministic approximations of the mutual information and the signal-to-interference-plus-noise ratio (SINR) at the output of the minimum-mean-square-error (MMSE) receiver are derived. Also, a simple fixed-point algorithm for their computation is provided, which is proved to converge. The deterministic approximations are asymptotically exact, almost surely, but shown by simulations to be very accurate even for small system dimensions. Our analysis is based on the Stieltjes transform method which enables the derivation of spectral limits of the large dimensional random matrices under study but requires neither arguments from free probability theory nor the asymptotic freeness or the convergence of the spectral distributio...

  4. Deterministic strain-induced arrays of quantum emitters in a two-dimensional semiconductor

    Science.gov (United States)

    Branny, Artur; Kumar, Santosh; Proux, Raphaël; Gerardot, Brian D

    2017-01-01

    An outstanding challenge in quantum photonics is scalability, which requires positioning of single quantum emitters in a deterministic fashion. Site positioning progress has been made in established platforms including defects in diamond and self-assembled quantum dots, albeit often with compromised coherence and optical quality. The emergence of single quantum emitters in layered transition metal dichalcogenide semiconductors offers new opportunities to construct a scalable quantum architecture. Here, using nanoscale strain engineering, we deterministically achieve a two-dimensional lattice of quantum emitters in an atomically thin semiconductor. We create point-like strain perturbations in mono- and bi-layer WSe2 which locally modify the band-gap, leading to efficient funnelling of excitons towards isolated strain-tuned quantum emitters that exhibit high-purity single photon emission. We achieve near unity emitter creation probability and a mean positioning accuracy of 120±32 nm, which may be improved with further optimization of the nanopillar dimensions. PMID:28530219

  5. Deterministic continuation of stochastic metastable equilibria via Lyapunov equations and ellipsoids

    CERN Document Server

    Kuehn, Christian

    2011-01-01

    Numerical continuation methods for deterministic dynamical systems have been among the most successful tools in applied dynamical systems theory. Continuation techniques have been employed in all branches of the natural sciences as well as in engineering to analyze ordinary, partial and delay differential equations. Here we show that the deterministic continuation algorithm for equilibrium points can be extended easily to also track information about metastable equilibrium points of stochastic differential equations (SDEs). We stress that we do not develop a new technical tool but that we combine results and methods from probability theory, dynamical systems, numerical analysis, optimization and control theory into an algorithm that augments classical equilibrium continuation methods. In particular, we use ellipsoids defining regions of high concentration of sample paths. It is shown that these ellipsoids and the distances between them can be efficiently calculated using iterative methods that take advantage of...

  6. Stochastic Simulation of Integrated Circuits with Nonlinear Black-Box Components via Augmented Deterministic Equivalents

    Directory of Open Access Journals (Sweden)

    MANFREDI, P.

    2014-11-01

    Full Text Available This paper extends recent literature results concerning the statistical simulation of circuits affected by random electrical parameters by means of the polynomial chaos framework. With respect to previous implementations, based on the generation and simulation of augmented and deterministic circuit equivalents, the modeling is extended to generic "black-box" multi-terminal nonlinear subcircuits describing complex devices, like those found in integrated circuits. Moreover, based on recently published works in this field, a more effective approach to generate the deterministic circuit equivalents is implemented, thus yielding more compact and efficient models for nonlinear components. The approach is fully compatible with commercial (e.g., SPICE-type) circuit simulators and is thoroughly validated through the statistical analysis of a realistic interconnect structure with a 16-bit memory chip. The accuracy and the comparison against previous approaches are also carefully established.

  7. Deterministic Bidirectional Remote State Preparation of Asymmetric and Symmetric Quantum States with a Proper Quantum Channel

    Science.gov (United States)

    Song, Yi; Ni, Jiang-Li; Wang, Zhang-Yin; Lu, Yan; Han, Lian-Fang

    2017-10-01

    We present a new scheme for deterministically realizing the mutual interchange of quantum information between two distant parties via selected quantum states as the shared entangled resource. We first show the symmetric bidirectional remote state preparation (BRSP), where two single-qubit quantum states are simultaneously exchanged in a deterministic manner, provided that each of the users performs single-qubit von Neumann measurements in proper measurement bases as well as appropriate unitary operations, depending essentially on the outcomes of the prior measurements. Then we extend the symmetric protocol to an asymmetric case, in which BRSP of a general single-qubit state and an arbitrary two-qubit state is successfully investigated. The necessary quantum operations and the employed quantum resources are feasible with present technology, so this protocol may be realizable in current physical experiments.

  8. Scaling of weighted spectral distribution in deterministic scale-free networks

    Science.gov (United States)

    Jiao, Bo; Nie, Yuan-ping; Shi, Jian-mai; Huang, Cheng-dong; Zhou, Ying; Du, Jing; Guo, Rong-hua; Tao, Ye-rong

    2016-06-01

    Scale-free networks are abundant in the real world. In this paper, we investigate the scaling properties of the weighted spectral distribution in several deterministic and stochastic models of evolving scale-free networks. First, we construct a new deterministic scale-free model whose node degrees have a unified format. Using graph structure features, we derive a precise formula for the spectral metric in this model. This formula verifies that the spectral metric grows sublinearly as the network size (i.e., the number of nodes) grows. Additionally, the mathematical reasoning behind the precise formula theoretically provides detailed explanations for this scaling property. Finally, we validate the scaling properties of the spectral metric using several stochastic models. The experimental results show that this scaling property is retained under local-world evolution, node deletion and assortativity adjustment.
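For reference, the weighted spectral distribution itself is computed from the normalized-Laplacian spectrum: bin the eigenvalues, which lie in [0, 2], and weight each bin's share of the spectrum by (1 − λ)^N. A minimal sketch, where the binning choices (20 bins, bin-centre evaluation) are ours:

```python
def weighted_spectral_distribution(eigenvalues, N=4, n_bins=20):
    """Weighted spectral distribution of a graph, given the eigenvalues
    of its normalized Laplacian (all in [0, 2]).  Each bin's share of
    the spectrum is weighted by (1 - lambda)^N at the bin centre."""
    width = 2.0 / n_bins
    hist = [0] * n_bins
    for lam in eigenvalues:
        hist[min(int(lam / width), n_bins - 1)] += 1   # clamp lam == 2
    total = len(eigenvalues)
    return sum((count / total) * (1.0 - (b + 0.5) * width) ** N
               for b, count in enumerate(hist))
```

Because eigenvalues near 1 contribute almost nothing, the metric emphasizes the extremes of the spectrum, which reflect the clustering structure the scaling analysis above tracks.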

  9. Deterministic strain-induced arrays of quantum emitters in a two-dimensional semiconductor

    Science.gov (United States)

    Branny, Artur; Kumar, Santosh; Proux, Raphaël; Gerardot, Brian D.

    2017-05-01

    An outstanding challenge in quantum photonics is scalability, which requires positioning of single quantum emitters in a deterministic fashion. Site positioning progress has been made in established platforms including defects in diamond and self-assembled quantum dots, albeit often with compromised coherence and optical quality. The emergence of single quantum emitters in layered transition metal dichalcogenide semiconductors offers new opportunities to construct a scalable quantum architecture. Here, using nanoscale strain engineering, we deterministically achieve a two-dimensional lattice of quantum emitters in an atomically thin semiconductor. We create point-like strain perturbations in mono- and bi-layer WSe2 which locally modify the band-gap, leading to efficient funnelling of excitons towards isolated strain-tuned quantum emitters that exhibit high-purity single photon emission. We achieve near unity emitter creation probability and a mean positioning accuracy of 120±32 nm, which may be improved with further optimization of the nanopillar dimensions.

  10. Seismic hazard in Romania associated to Vrancea subcrustal source Deterministic evaluation

    CERN Document Server

    Radulian, M; Moldoveanu, C L; Panza, G F; Vaccari, F

    2002-01-01

    Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes to show how efficient the numerical synthesis is in predicting realistic ground motion, and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful to compute seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the World because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal and the mo...

  11. Identification of the FitzHugh-Nagumo Model Dynamics via Deterministic Learning

    Science.gov (United States)

    Dong, Xunde; Wang, Cong

    In this paper, a new method is proposed for the identification of the FitzHugh-Nagumo (FHN) model dynamics via deterministic learning. The FHN model is a classic and simple model for studying spiral waves in excitable media, such as cardiac tissue and biological neural networks. Firstly, the FHN model, described by partial differential equations (PDEs), is transformed into a set of ordinary differential equations (ODEs) by using the finite difference method. Secondly, the dynamics of the ODEs is identified using deterministic learning theory. It is shown that, for the spiral waves generated by the FHN model, the dynamics underlying the recurrent trajectory corresponding to any spatial point can be accurately identified by using the proposed approach. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
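The PDE-to-ODE reduction in the first step is a standard finite-difference discretization. A minimal explicit sketch of one time step on a 1D periodic grid; the particular FHN variant and all coefficient values are illustrative, not necessarily the paper's:

```python
def fhn_step(v, w, dt=0.05, dx=1.0, D=1.0, a=0.1, eps=0.01, b=0.5):
    """One explicit Euler step of a FitzHugh-Nagumo reaction-diffusion
    system, dv/dt = D*v_xx + v(v - a)(1 - v) - w, dw/dt = eps*(v - b*w),
    discretized on a 1D periodic grid (illustrative coefficients)."""
    n = len(v)
    v_new, w_new = [0.0] * n, [0.0] * n
    for i in range(n):
        # second difference with periodic boundary conditions
        lap = (v[(i - 1) % n] - 2.0 * v[i] + v[(i + 1) % n]) / dx ** 2
        v_new[i] = v[i] + dt * (D * lap + v[i] * (v[i] - a) * (1.0 - v[i]) - w[i])
        w_new[i] = w[i] + dt * eps * (v[i] - b * w[i])
    return v_new, w_new
```

Each grid point i then contributes two ODE states (v_i, w_i), and it is the dynamics of this coupled ODE system that deterministic learning identifies along the recurrent trajectory.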

  12. Decision Making Agent Searching for Markov Models in Near-Deterministic World

    CERN Document Server

    Matuz, Gabor

    2011-01-01

    Reinforcement learning has solid foundations, but becomes inefficient in partially observed (non-Markovian) environments. Thus, a learning agent, born with a representation and a policy, might wish to investigate to what extent the Markov property holds. We propose a learning architecture that utilizes combinatorial policy optimization to overcome non-Markovity and to develop efficient behaviors which are easy to inherit, tests the Markov property of the behavioral states, and corrects against non-Markovity by running a deterministic factored Finite State Model, which can be learned. We illustrate the properties of the architecture in the near-deterministic Ms. Pac-Man game. We analyze the architecture from the point of view of evolutionary, individual, and social learning.

  13. Robust state estimation for uncertain linear systems with deterministic input signals

    Institute of Scientific and Technical Information of China (English)

    Huabo LIU; Tong ZHOU

    2014-01-01

    In this paper, we investigate state estimation of a dynamical system in which not only process and measurement noise, but also parameter uncertainties and deterministic input signals are involved. The sensitivity-penalization-based robust state estimation is extended to uncertain linear systems with deterministic input signals and parametric uncertainties which may nonlinearly affect a state-space plant model. The form of the derived robust estimator is similar to that of the well-known Kalman filter, with a comparable computational complexity. Under a few weak assumptions, it is proved that though the derived state estimator is biased, the bound of the estimation errors is finite and the covariance matrix of the estimation errors is bounded. Numerical simulations show that the obtained robust filter has relatively nice estimation performance.
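For reference, the structure the robust estimator resembles is the classical Kalman filter with a known deterministic input in the prediction step. A scalar sketch of that baseline (the robust version additionally penalizes sensitivity to the parametric uncertainties, which is not shown here):

```python
def kalman_step(x, P, u, y, A, B, C, Q, R):
    """One predict/correct step of the scalar Kalman filter with a known
    deterministic input u (the baseline whose form the robust estimator
    resembles).  Q and R are process/measurement noise variances."""
    x_pred = A * x + B * u                    # predict with the input signal
    P_pred = A * P * A + Q
    K = P_pred * C / (C * P_pred * C + R)     # Kalman gain
    x_new = x_pred + K * (y - C * x_pred)     # measurement update
    P_new = (1.0 - K * C) * P_pred
    return x_new, P_new
```

In the robust extension the gain is computed from a modified Riccati-type recursion, but the per-step cost stays comparable to this baseline.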

  14. Ag-dependent (in silico) approach implies a deterministic kinetics for homeostatic memory cell turnover

    CERN Document Server

    de Castro, Alexandre; Herai, Roberto

    2011-01-01

    Verhulst-like mathematical modeling has been used to investigate several complex biological issues, such as immune memory equilibrium and cell-mediated immunity in mammals. The regulation mechanisms of both these processes are still not sufficiently understood. In a recent paper, Choo et al. [J. Immunol., v. 185, pp. 3436-44, 2010] used an Ag-independent approach to quantitatively analyze memory cell turnover from some empirical data, and concluded that immune homeostasis behaves stochastically, rather than deterministically. In the present paper, we use an in silico Ag-dependent approach to simulate the process of antigenic mutation and study its implications for memory dynamics. Our results suggest a deterministic kinetics for homeostatic equilibrium, which contradicts the findings of Choo et al. Accordingly, our calculations indicate that a more extensive empirical protocol for studying the homeostatic turnover should be considered.

  15. How the growth rate of host cells affects cancer risk in a deterministic way

    Science.gov (United States)

    Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe

    2017-09-01

    It is well known that cancers are significantly more often encountered in some tissues than in others. In this paper, by using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependency of tumor growth on parameter values characterizing the type as well as the state of the tissue considered, due to the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells or "bad luck". Strategies to limit cancer-induced mortality could therefore be based on improving the way of life, that is, on better preserving the tissue where mutant cells randomly arise.
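A deterministic toy of this model class, three coupled populations with the host growth rate as the swept parameter, can be integrated with plain Euler steps. The equations and every coefficient below are a generic illustration, not the authors' calibrated system:

```python
def simulate_tissue(host_rate, steps=20000, dt=0.01):
    """Euler integration of a generic host (h) / effector immune (e) /
    tumor (t) competition model at the tissue level.  All coefficients
    are illustrative placeholders, not the paper's values."""
    h, e, t = 0.9, 0.1, 0.01
    for _ in range(steps):
        dh = host_rate * h * (1.0 - h) - 0.2 * h * t   # logistic host, hurt by tumor
        de = 0.3 * e * t / (0.5 + t) - 0.1 * e         # immune recruitment and decay
        dtm = 0.4 * t * (1.0 - t) - 0.3 * e * t - 0.1 * h * t
        h, e, t = h + dt * dh, e + dt * de, t + dt * dtm
    return h, e, t
```

Sweeping `host_rate` over tissue-dependent values and recording the final tumor burden is the kind of purely deterministic experiment from which a growth-rate/tumor-count correlation can be read off without any random mutation step.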

  16. Nonlinear Boltzmann equation for the homogeneous isotropic case: Minimal deterministic Matlab program

    CERN Document Server

    Asinari, Pietro

    2010-01-01

    The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards the equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by the DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both ...
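As a much-simplified stand-in for the relaxation dynamics studied here, a BGK-type model already shows the structure of a deterministic (noise-free) computation: a distribution over discrete kinetic energies relaxes toward its Maxwellian while particle number is conserved exactly. The reformulation in the abstract also conserves energy exactly, which this toy does not; names and values below are illustrative:

```python
import math

def bgk_relax(f, energies, T=1.0, nu=1.0, dt=0.01, steps=200):
    """Deterministic BGK-style relaxation of a distribution f over a
    discrete kinetic-energy grid toward the Maxwellian with the same
    particle number.  (Unlike the direct Boltzmann method discussed in
    the abstract, this toy does not conserve energy.)"""
    n = sum(f)                                    # particle number
    feq = [math.exp(-e / T) for e in energies]
    scale = n / sum(feq)
    feq = [scale * x for x in feq]                # equilibrium, same number
    for _ in range(steps):
        # explicit Euler step of df/dt = nu * (feq - f)
        f = [fi + dt * nu * (fe - fi) for fi, fe in zip(f, feq)]
    return f
```

The late dynamics the paper focuses on corresponds to the slow tail of exactly this kind of approach to equilibrium, where discretization errors in the conserved quantities become most visible.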

  17. Note: Improving long-term stability of hot-wire anemometer sensors by means of annealing

    Science.gov (United States)

    Lundström, H.

    2015-08-01

    Annealing procedures for hot-wire sensors of platinum and platinum-plated tungsten have been investigated experimentally. It was discovered that the two investigated sensor metals behave quite differently during the annealing process, but for both types annealing may improve long-term stability considerably. Measured drift of sensors both without and with prior annealing is presented. Suggestions for suitable annealing temperatures and times are given.

  18. Note: Improving long-term stability of hot-wire anemometer sensors by means of annealing.

    Science.gov (United States)

    Lundström, H

    2015-08-01

    Annealing procedures for hot-wire sensors of platinum and platinum-plated tungsten have been investigated experimentally. It was discovered that the two investigated sensor metals behave quite differently during the annealing process, but for both types annealing may improve long-term stability considerably. Measured drift of sensors both without and with prior annealing is presented. Suggestions for suitable annealing temperatures and times are given.

  19. Note: Improving long-term stability of hot-wire anemometer sensors by means of annealing

    Energy Technology Data Exchange (ETDEWEB)

    Lundström, H., E-mail: hans.lundstrom@hig.se [Department of Building, Energy and Environmental Engineering, University of Gävle, SE-801 76 Gävle (Sweden)

    2015-08-15

    Annealing procedures for hot-wire sensors of platinum and platinum-plated tungsten have been investigated experimentally. It was discovered that the two investigated sensor metals behave quite differently during the annealing process, but for both types annealing may improve long-term stability considerably. Measured drift of sensors both without and with prior annealing is presented. Suggestions for suitable annealing temperatures and times are given.

  20. Annealing behaviors of vacancy in varied neutron irradiated Czochralski silicon

    Institute of Scientific and Technical Information of China (English)

    CHEN Gui-feng; LI Yang-xian; LIU Li-li; NIU Ping-juan; NIU Sheng-li; CHEN Dong-feng

    2006-01-01

    The annealing behaviors of the vacancy-oxygen complex (VO) in Czochralski silicon irradiated with two different neutron doses (S1: 5×10^17 n/cm^3 and S2: 1.07×10^19 n/cm^3) were studied. The results show that VO is one of the main defects formed in neutron-irradiated Czochralski silicon (CZ-Si). In this defect, an oxygen atom shares a vacancy and is bonded to two silicon neighbors. After annealing at 200 ℃, divacancies are trapped by interstitial oxygen (Oi) to form V2O (840 cm⁻¹). As the 829 cm⁻¹ (VO) band decreases, three infrared absorption bands at 825 cm⁻¹ (V2O2), 834 cm⁻¹ (V2O3) and 840 cm⁻¹ (V2O) rise after annealing in the temperature range 200-500 ℃. After annealing at 450-500 ℃, the main absorption bands in sample S1 are 834 cm⁻¹, 825 cm⁻¹ and 889 cm⁻¹ (VO2); in S2 it is 825 cm⁻¹. Annealing of the A-center in neutron-irradiated CZ-Si is suggested to consist of two processes: the first is due to trapping of VO by Oi in the low-dose sample (S1), and the second is due to capture of wandering vacancies by VO in the high-dose sample (S2), where VO2 plays an important role in the annealing of the A-center. With increasing irradiation dose, the annealing behavior of the A-center changes.

  1. Development of a hybrid deterministic/stochastic method for 1D nuclear reactor kinetics

    Science.gov (United States)

    Terlizzi, Stefano; Rahnema, Farzad; Zhang, Dingkang; Dulla, Sandra; Ravetto, Piero

    2015-12-01

    A new method has been implemented for solving the time-dependent neutron transport equation efficiently and accurately. This is accomplished by coupling the hybrid stochastic-deterministic steady-state coarse-mesh radiation transport (COMET) method [1,2] with the new predictor-corrector quasi-static method (PCQM) developed at Politecnico di Torino [3]. In this paper, the coupled method is implemented and tested in 1D slab geometry.

  2. Occurrence of HIV eradication for preexposure prophylaxis treatment with a deterministic HIV model

    OpenAIRE

    Chang, H.; Moog, C; Astolfi, A

    2016-01-01

    The authors examine the human immunodeficiency virus (HIV) eradication in this study using a mathematical model and analyse the occurrence of virus eradication during the early stage of infection. To this end they use a deterministic HIV-infection model, modify it to describe the pharmacological dynamics of antiretroviral HIV drugs, and consider the clinical experimental results of preexposure prophylaxis HIV treatment. They also use numerical simulation to model the experimental scenario, th...

  3. A combined deterministic and probabilistic procedure for safety assessment of components with cracks - Handbook.

    Energy Technology Data Exchange (ETDEWEB)

    Dillstroem, Peter; Bergman, Mats; Brickstad, Bjoern; Weilin Zang; Sattari-Far, Iradj; Andersson, Peder; Sund, Goeran; Dahlberg, Lars; Nilsson, Fred (Inspecta Technology AB, Stockholm (Sweden))

    2008-07-01

    SSM has supported research work for the further development of a previously developed procedure/handbook (SKI Report 99:49) for assessment of detected cracks and tolerance for defect analysis. During operational use of the handbook, needs were identified to update the deterministic part of the procedure and to introduce a new probabilistic flaw evaluation procedure. Another identified need was a better description of the theoretical basis of the computer program. The principal aim of the project has been to update the deterministic part of the recently developed procedure and to introduce a new probabilistic flaw evaluation procedure. Other objectives of the project have been to validate the conservatism of the procedure, make the procedure well defined and easy to use, and make the handbook that documents the procedure as complete as possible. The procedure/handbook and computer program ProSACC, Probabilistic Safety Assessment of Components with Cracks, have been extensively revised within this project. The major differences compared to the last revision are within the following areas: It is now possible to deal with a combination of deterministic and probabilistic data. It is possible to include J-controlled stable crack growth. The appendices on material data to be used for nuclear applications and on residual stresses are revised. A new deterministic safety evaluation system is included. The conservatism in the method for evaluation of the secondary stresses for ductile materials is reduced. A new geometry, a circular bar with a circumferential surface crack, has been introduced. The results of this project will be of use to SSM in safety assessments of components with cracks and in assessments of the interval between inspections of components in nuclear power plants.

  4. Development of a hybrid deterministic/stochastic method for 1D nuclear reactor kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Terlizzi, Stefano; Dulla, Sandra; Ravetto, Piero [Politecnico di Torino, Corso Duca degli Abruzzi, 24 10129, Torino (Italy); Rahnema, Farzad, E-mail: farzad@gatech.edu [Nuclear & Radiological Engineering and Medical Physics Programs, Georgia Institute of Technology, 770 State Street NW, Atlanta, Ga, 30332-0745 (United States); Zhang, Dingkang [Nuclear & Radiological Engineering and Medical Physics Programs, Georgia Institute of Technology, 770 State Street NW, Atlanta, Ga, 30332-0745 (United States)

    2015-12-31

    A new method has been implemented for solving the time-dependent neutron transport equation efficiently and accurately. This is accomplished by coupling the hybrid stochastic-deterministic steady-state coarse-mesh radiation transport (COMET) method [1,2] with the new predictor-corrector quasi-static method (PCQM) developed at Politecnico di Torino [3]. In this paper, the coupled method is implemented and tested in 1D slab geometry.

  5. PRINCIPAL RESPONSE OF VAN DER POL-DUFFING OSCILLATOR UNDER COMBINED DETERMINISTIC AND RANDOM PARAMETRIC EXCITATION

    Institute of Scientific and Technical Information of China (English)

    戎海武; 徐伟; 王向东; 孟光; 方同

    2002-01-01

    The principal resonance of the Van der Pol-Duffing oscillator under combined deterministic and random parametric excitation is investigated. The method of multiple scales was used to derive the equations for the modulation of amplitude and phase. The behavior, stability, and bifurcation of the steady-state response were studied, and jumps were shown to occur under some conditions. The effects of damping, detuning, bandwidth, and the magnitudes of the deterministic and random excitations are analyzed. The theoretical analysis was verified by numerical results.

  6. Deterministic fabrication of dielectric loaded waveguides coupled to single nitrogen vacancy centers in nanodiamonds

    DEFF Research Database (Denmark)

    Siampour, Hamidreza; Kumar, Shailesh; Bozhevolnyi, Sergey I.

    We report on the fabrication of dielectric-loaded waveguides excited by single nitrogen-vacancy (NV) centers in nanodiamonds. The waveguides are deterministically written onto pre-characterized nanodiamonds by electron beam lithography of hydrogen silsesquioxane (HSQ) resist... on a silver-coated silicon substrate. A change in lifetime of the NV centers is observed after fabrication of the waveguides, and antibunching in correlation measurements confirms that the nanodiamonds contain single NV centers...

  7. Accuracy of probabilistic and deterministic record linkage: the case of tuberculosis

    Directory of Open Access Journals (Sweden)

    Gisele Pinto de Oliveira

    2016-01-01

    Full Text Available ABSTRACT OBJECTIVE To analyze the accuracy of deterministic and probabilistic record linkage to identify TB duplicate records, as well as the characteristics of discordant pairs. METHODS The study analyzed all TB records from 2009 to 2011 in the state of Rio de Janeiro. A deterministic record linkage algorithm was developed using a set of 70 rules, based on combinations of fragments of the key variables with or without modification (Soundex or substring). Each rule was formed by three or more fragments. The probabilistic approach required a cutoff point for the score, above which links would be automatically classified as belonging to the same individual. The cutoff point was obtained by linking the Notifiable Diseases Information System – Tuberculosis database with itself, followed by manual review and by ROC and precision-recall curves. Sensitivity and specificity were calculated for the accuracy analysis. RESULTS Accuracy ranged from 87.2% to 95.2% for sensitivity and from 99.8% to 99.9% for specificity for probabilistic and deterministic record linkage, respectively. The occurrence of missing values for the key variables and low similarity scores for name and date of birth were mainly responsible for the failure to identify records of the same individual with the techniques used. CONCLUSIONS The two techniques showed a high level of agreement in pair classification. Although deterministic linkage identified more duplicate records than probabilistic linkage, the latter retrieved records not identified by the former. User need and experience should be considered when choosing the best technique to be used.
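
    The fragment-based deterministic rules described above can be sketched in Python. The two rules and the field names below are hypothetical illustrations, not the study's actual 70-rule set; each rule is a conjunction of three or more key-variable fragments, raw or Soundex-coded.

    ```python
    # Illustrative deterministic record-linkage sketch (hypothetical rules).

    def soundex(name):
        """Simplified American Soundex code (first letter + three digits).
        The h/w separator rule is omitted for brevity."""
        mapping = {}
        for letters, digit in (("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                               ("l", "4"), ("mn", "5"), ("r", "6")):
            for ch in letters:
                mapping[ch] = digit
        name = name.lower()
        out, prev = [], mapping.get(name[0], "")
        for ch in name[1:]:
            digit = mapping.get(ch, "")
            if digit and digit != prev:
                out.append(digit)
            prev = digit  # a vowel resets the run of repeated codes
        return (name[0].upper() + "".join(out) + "000")[:4]

    def linked(a, b):
        """Link two records if any deterministic rule agrees on both sides."""
        rules = [
            # phonetic first name + exact birth date + first 4 letters of surname
            lambda a, b: (soundex(a["first"]) == soundex(b["first"])
                          and a["dob"] == b["dob"]
                          and a["last"][:4] == b["last"][:4]),
            # exact full name + birth year
            lambda a, b: (a["first"] == b["first"] and a["last"] == b["last"]
                          and a["dob"][:4] == b["dob"][:4]),
        ]
        return any(rule(a, b) for rule in rules)
    ```

    With such rules, "Jon"/"John" with matching date of birth and surname link via the Soundex rule even though the raw first names differ.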

  8. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, T; Finlay, J; Mesina, C; Liu, H [University Pennsylvania, Philadelphia, PA (United States)

    2014-06-01

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equation directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6–15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSDs between 80 and 110 cm, depths between 5 and 20 cm, and lateral offsets of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the buildup regions of the PDD and the penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparisons in the heterogeneous phantom are in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.

  9. An Efficient Deterministic Quantum Algorithm for the Integer Square-free Decomposition Problem

    OpenAIRE

    Li, Jun; Peng, Xinhua; Du, Jiangfeng; Suter, Dieter

    2011-01-01

    Quantum computers are known to be qualitatively more powerful than classical computers, but so far only a small number of different algorithms have been discovered that actually use this potential. It would therefore be highly desirable to develop other types of quantum algorithms that widen the range of possible applications. Here we propose an efficient and deterministic quantum algorithm for finding the square-free part of a large integer - a problem for which no efficient classical algori...

  10. Internal Structure of Elementary Particle and Possible Deterministic Mechanism of Biological Evolution

    Directory of Open Access Journals (Sweden)

    Alexei V. Melkikh

    2004-03-01

    Full Text Available The possibility of a complicated internal structure of an elementary particle was analyzed. In this case a particle may represent a quantum computer with many degrees of freedom. It was shown that the probability of new species formation by means of random mutations is negligibly small. Deterministic model of evolution is considered. According to this model DNA nucleotides can change their state under the control of elementary particle internal degrees of freedom.

  11. Theory and applications of a deterministic approximation to the coalescent model.

    Science.gov (United States)

    Jewett, Ethan M; Rosenberg, Noah A

    2014-05-01

    Under the coalescent model, the random number nt of lineages ancestral to a sample is nearly deterministic as a function of time when nt is moderate to large in value, and it is well approximated by its expectation E[nt]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[nt] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the resulting approximate formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation nt≈E[nt] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[nt] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation nt≈E[nt] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios.
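
    One common deterministic approximation of E[nt] (illustrative, not necessarily the authors' exact formula) treats the lineage count as continuous and solves dn/dt = -n(n-1)/2 in coalescent time units, giving a closed form that can be checked against a Monte Carlo average of the pure-death coalescent:

    ```python
    import math
    import random

    def n_deterministic(n0, t):
        """Closed-form solution of dn/dt = -n(n-1)/2 with n(0) = n0
        (time measured in coalescent units of 2N generations)."""
        return 1.0 / (1.0 - (1.0 - 1.0 / n0) * math.exp(-t / 2.0))

    def n_simulated(n0, t, reps=2000, seed=1):
        """Monte Carlo estimate of E[nt]: while n lineages remain, the next
        coalescence occurs after an Exp(n*(n-1)/2) waiting time."""
        rng = random.Random(seed)
        total = 0
        for _ in range(reps):
            n, clock = n0, 0.0
            while n > 1:
                clock += rng.expovariate(n * (n - 1) / 2.0)
                if clock > t:
                    break
                n -= 1
            total += n
        return total / reps
    ```

    The deterministic curve matches the sample size at t = 0, decays toward a single lineage, and tracks the simulated mean closely for moderate n0, which is what makes the approximation nt ≈ E[nt] useful for replacing sums over random lineage counts.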

  12. A deterministic polynomial time algorithm for non-commutative rational identity testing with applications

    OpenAIRE

    2015-01-01

    In this paper we present a deterministic polynomial time algorithm for testing if a symbolic matrix in non-commuting variables over $\\mathbb{Q}$ is invertible or not. The analogous question for commuting variables is the celebrated polynomial identity testing (PIT) for symbolic determinants. In contrast to the commutative case, which has an efficient probabilistic algorithm, the best previous algorithm for the non-commutative setting required exponential time (whether or not randomization is ...

  13. Deterministic Schedules for Robust and Reproducible Non-uniform Sampling in Multidimensional NMR

    OpenAIRE

    Eddy, Matthew T.; Ruben, David; Griffin, Robert G.; Herzfeld, Judith

    2011-01-01

    We show that a simple, general, and easily reproducible method for generating non-uniform sampling (NUS) schedules preserves the benefits of random sampling, including inherently reduced sampling artifacts, while removing the pitfalls associated with choosing an arbitrary seed. Sampling schedules are generated from a discrete cumulative distribution function (CDF) that closely fits the continuous CDF of the desired probability density function. We compare random and deterministic sampling usi...
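
    The construction can be sketched as follows; the exponentially decaying sampling density and the quantile-crossing placement rule are illustrative assumptions, not the authors' exact recipe. A grid index is taken each time the discrete CDF crosses the next quantile, so the schedule is fully determined by the density with no random seed.

    ```python
    import math

    def nus_schedule(n_points, n_samples, tau):
        """Deterministically pick ~n_samples of n_points grid indices so the
        schedule's empirical CDF tracks the CDF of p(i) ~ exp(-i / tau)."""
        weights = [math.exp(-i / tau) for i in range(n_points)]
        total = sum(weights)
        cdf, acc = [], 0.0
        for w in weights:
            acc += w / total
            cdf.append(acc)
        schedule, k = [], 0
        for i in range(n_points):
            # take index i whenever the CDF crosses the next quantile (k+0.5)/m
            while k < n_samples and cdf[i] >= (k + 0.5) / n_samples:
                if not schedule or schedule[-1] != i:
                    schedule.append(i)
                k += 1
        return schedule
    ```

    Running the generator twice yields the identical schedule, which is the reproducibility property the abstract emphasizes over seed-dependent random sampling.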

  14. Deterministic quantum key distribution based on Gaussian-modulated EPR correlations

    Institute of Scientific and Technical Information of China (English)

    He Guang-Qiang; Zeng Gui-Hua

    2006-01-01

    This paper proposes a deterministic quantum key distribution scheme based on Gaussian-modulated continuous-variable EPR correlations. The scheme can implement fast and efficient key distribution. Security is guaranteed by the continuous-variable EPR entanglement correlations produced by a nondegenerate optical parametric amplifier. For the general beam-splitter eavesdropping strategy, the secret information rate ΔI = I(α,β) - I(α,ε) is calculated using Shannon information theory. Finally, the security analysis is presented.
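
    For Gaussian channels, the mutual informations entering ΔI = I(α,β) - I(α,ε) take the Shannon form ½·log2(1 + SNR), so the sign of ΔI decides whether a secret key is distillable. The signal-to-noise ratios below are assumed toy values, not the paper's covariance-matrix analysis:

    ```python
    import math

    def gaussian_mi(snr):
        """Shannon mutual information (bits/symbol) of a Gaussian channel."""
        return 0.5 * math.log2(1.0 + snr)

    def secret_rate(snr_bob, snr_eve):
        """Delta-I = I(alpha, beta) - I(alpha, eps); positive means the
        legitimate channel carries more information than the eavesdropper's."""
        return gaussian_mi(snr_bob) - gaussian_mi(snr_eve)
    ```

    For example, SNRs of 15 for Bob and 3 for Eve give ΔI = ½·log2(16) - ½·log2(4) = 1 bit/symbol; swapping the two makes ΔI negative and no secure key can be extracted by one-way processing.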

  15. Deterministic and stochastic control of chimera states in delayed feedback oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Semenov, V. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Zakharova, A.; Schöll, E. [Institut für Theoretische Physik, TU Berlin, Hardenbergstraße 36, 10623 Berlin (Germany); Maistrenko, Y. [Institute of Mathematics and Center for Medical and Biotechnical Research, NAS of Ukraine, Tereschenkivska Str. 3, 01601 Kyiv (Ukraine)

    2016-06-08

    Chimera states, characterized by the coexistence of regular and chaotic dynamics, are found in a nonlinear oscillator model with negative time-delayed feedback. The control of these chimera states by external periodic forcing is demonstrated by numerical simulations. Both deterministic and stochastic external periodic forcing are considered. It is shown that multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. The constructive role of noise in the formation of chimera states is shown.

  16. Deterministic Performance Assessment and Retuning of Industrial Controllers Based on Routine Operating Data: Applications

    Directory of Open Access Journals (Sweden)

    Massimiliano Veronesi

    2015-02-01

    Full Text Available Performance assessment and retuning techniques for proportional-integral-derivative (PID) controllers are reviewed in this paper. In particular, we focus on techniques that consider deterministic performance and that use routine operating data (that is, set-point and load disturbance step signals). Simulation and experimental results show that integrals of predefined signals can be effectively employed for the estimation of the process parameters and, therefore, for the comparison of the current controller with a selected benchmark.
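
    As a minimal illustration of estimating process parameters from signal integrals, the sketch below uses the classical area method on a step response, assuming a first-order-plus-dead-time process (the paper's derivations may differ): the gain is K = Δy/Δu, and the average residence time T + L equals the area between the response and its final value, divided by Δy.

    ```python
    import math

    def estimate_fopd(t, y, du, y0=0.0):
        """Estimate process gain K and average residence time T_ar = T + L
        from a step response using signal integrals (area method)."""
        y_ss = y[-1]
        K = (y_ss - y0) / du
        # trapezoidal integral of (y_ss - y) over the recorded transient
        area = 0.0
        for i in range(1, len(t)):
            area += 0.5 * (t[i] - t[i - 1]) * ((y_ss - y[i]) + (y_ss - y[i - 1]))
        T_ar = area / (y_ss - y0)
        return K, T_ar

    # synthetic FOPDT response y(t) = K*du*(1 - exp(-(t-L)/T)) for t >= L
    K_true, T_true, L_true, du = 2.0, 5.0, 1.0, 1.0
    ts = [i * 0.01 for i in range(6001)]  # 0..60 s at 10 ms
    ys = [0.0 if s < L_true
          else K_true * du * (1 - math.exp(-(s - L_true) / T_true))
          for s in ts]
    K_est, Tar_est = estimate_fopd(ts, ys, du)
    ```

    On this synthetic response the integrals recover K ≈ 2 and T + L ≈ 6 without any derivative computation, which is why integral-based estimates are robust to measurement noise on routine operating data.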

  17. Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Juan-Mariano de Goyeneche

    2009-05-01

    Full Text Available Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios.
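
    A toy sketch of how a reputation system can drive non-deterministic routing (the trust model, update factors, and class names are hypothetical, not the paper's framework): the next hop is drawn with probability proportional to each neighbor's reputation, and bad-behavior feedback decays a neighbor's score so misbehaving nodes are progressively confined.

    ```python
    import random

    class TrustRouter:
        """Toy non-deterministic routing: next hop drawn in proportion to
        neighbor reputation; negative feedback decays a neighbor's score."""

        def __init__(self, neighbors, seed=None):
            self.trust = {n: 1.0 for n in neighbors}
            self.rng = random.Random(seed)

        def next_hop(self):
            nodes = list(self.trust)
            weights = [self.trust[n] for n in nodes]
            return self.rng.choices(nodes, weights=weights, k=1)[0]

        def feedback(self, node, ok):
            # multiplicative update: mild reward, strong punishment (capped)
            self.trust[node] *= 1.1 if ok else 0.5
            self.trust[node] = min(self.trust[node], 10.0)
    ```

    Because route choice stays randomized, a compromised neighbor cannot predict or monopolize traffic, while repeated negative feedback drives its selection probability toward zero.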

  18. Ground motion following selection of SRS design basis earthquake and associated deterministic approach

    Energy Technology Data Exchange (ETDEWEB)

    1991-03-01

    This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.

  19. AN HYBRID STOCHASTIC-DETERMINISTIC OPTIMIZATION ALGORITHM FOR STRUCTURAL DAMAGE IDENTIFICATION

    OpenAIRE

    Nhamage, Idilson António; Lopez, Rafael Holdorf; Miguel, Leandro Fleck Fadel; Miguel, Letícia Fleck Fadel; Torii, André Jacomel

    2017-01-01

    Abstract. This paper presents a hybrid stochastic/deterministic optimization algorithm to solve the target optimization problem of vibration-based damage detection. We propose using a numerical solution of the representation formula to locate the region of the global solution, i.e., to provide a starting point for the local optimizer, which is chosen to be the Nelder-Mead algorithm (NMA). A series of numerical examples with different damage scenarios and noise levels was performed unde...
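
    The two-stage idea can be sketched as follows. This is a simplified stand-in, not the paper's method: uniform random sampling plays the role of the stochastic global stage, a deterministic pattern search substitutes for the Nelder-Mead polish, and the objective is a toy multimodal function rather than a damage-detection residual.

    ```python
    import random

    def hybrid_minimize(f, bounds, n_samples=300, seed=0, tol=1e-6):
        """Hybrid sketch: a stochastic stage locates the basin of the global
        minimum; a deterministic local stage refines the solution."""
        rng = random.Random(seed)
        lo, hi = bounds
        # stage 1: stochastic exploration provides the local starting point
        x = min((rng.uniform(lo, hi) for _ in range(n_samples)), key=f)
        # stage 2: deterministic refinement by step-halving pattern search
        step = (hi - lo) / 10.0
        while step > tol:
            for cand in (x - step, x + step):
                if f(cand) < f(x):
                    x = cand
                    break
            else:
                step /= 2.0
        return x
    ```

    On f(x) = (x² - 1)² + 0.2x, which has local minima near ±1, the stochastic stage reliably lands in the global basin near x ≈ -1, after which the local stage alone would suffice; this division of labor is what lets hybrid schemes avoid the local-minimum traps of purely deterministic optimizers.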

  20. Deterministic Agent-Based Path Optimization by Mimicking the Spreading of Ripples.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Di Paolo, Ezequiel A; Liu, Hao

    2016-01-01

    Inspirations from nature have contributed fundamentally to the development of evolutionary computation. Learning from the natural ripple-spreading phenomenon, this article proposes a novel ripple-spreading algorithm (RSA) for the path optimization problem (POP). In nature, a ripple spreads at a constant speed in all directions, and the node closest to the source is the first to be reached. This very simple principle forms the foundation of the proposed RSA. In contrast to most deterministic top-down centralized path optimization methods, such as Dijkstra's algorithm, the RSA is a bottom-up decentralized agent-based simulation model. Moreover, it is distinguished from other agent-based algorithms, such as genetic algorithms and ant colony optimization, by being a deterministic method that can always guarantee the global optimal solution with very good scalability. Here, the RSA is specifically applied to four different POPs. The comparative simulation results illustrate the advantages of the RSA in terms of effectiveness and efficiency. Thanks to the agent-based and deterministic features, the RSA opens new opportunities to attack some problems, such as calculating the exact complete Pareto front in multiobjective optimization and determining the kth shortest project time in project management, which are very difficult, if not impossible, for existing methods to resolve. The ripple-spreading optimization principle and the new distinguishing features and capacities of the RSA enrich the theoretical foundations of evolutionary computation.
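
    A minimal event-driven sketch of the ripple principle (an illustration, not the authors' implementation): every reached node launches a ripple that travels outward at constant unit speed, so a ripple's arrival time at a neighbor equals its launch time plus the edge length, and the first ripple to reach a node fixes that node's arrival time. Processing ripple-arrival events in time order makes the simulation a deterministic wavefront that always yields the optimal arrival times.

    ```python
    import heapq

    def ripple_spread(graph, source):
        """Simulate constant-speed ripples on a weighted graph and return
        the first-arrival time at every reachable node."""
        arrival = {source: 0.0}
        events = [(0.0, source)]  # (ripple arrival time, node)
        while events:
            t, node = heapq.heappop(events)
            if t > arrival.get(node, float("inf")):
                continue  # a faster ripple already reached this node
            for nbr, length in graph[node].items():
                t_nbr = t + length  # constant speed: travel time = distance
                if t_nbr < arrival.get(nbr, float("inf")):
                    arrival[nbr] = t_nbr
                    heapq.heappush(events, (t_nbr, nbr))
        return arrival

    g = {"s": {"a": 2, "b": 5}, "a": {"b": 1, "t": 6}, "b": {"t": 2}, "t": {}}
    times = ripple_spread(g, "s")
    ```

    On this small graph the first ripple reaches t at time 5 via s→a→b→t, illustrating the guarantee that the earliest arrival corresponds to the shortest path.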