Inferring hierarchical clustering structures by deterministic annealing
International Nuclear Information System (INIS)
Hofmann, T.; Buhmann, J.M.
1996-01-01
The detection of hierarchical structures is a major topic in unsupervised learning and one of the key questions in data analysis and representation. We propose a novel algorithm for the problem of learning decision trees for data clustering and related problems. In contrast to many other methods based on successive tree growing and pruning, we propose an objective function for tree evaluation and derive a non-greedy technique for tree growing. Applying the principles of maximum entropy and minimum cross entropy, a deterministic annealing algorithm is derived in a mean-field approximation. This technique allows us to canonically superimpose tree structures and to fit parameters to averaged or 'fuzzified' trees.
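The mean-field annealing idea above, soft cluster assignments that harden as a temperature parameter is lowered, can be sketched for plain (non-hierarchical) clustering. This is an illustrative sketch, not the authors' tree-structured algorithm; the schedule and function names are assumptions.

```python
import numpy as np

def da_cluster(X, k, T0=10.0, Tmin=1e-3, cool=0.9, iters=50):
    """Deterministic-annealing clustering: Gibbs (soft) assignments are
    annealed from high temperature (fuzzy) to low temperature (hard)."""
    rng = np.random.default_rng(0)
    # start all centers at the data mean, plus tiny symmetry-breaking noise
    centers = X.mean(axis=0) + 1e-3 * rng.standard_normal((k, X.shape[1]))
    T = T0
    while T > Tmin:
        for _ in range(iters):
            # squared distances, shape (n, k)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            # mean-field (Gibbs) responsibilities at temperature T
            logits = -d2 / T
            logits -= logits.max(axis=1, keepdims=True)
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)
            # re-estimate centers from the soft assignments
            centers = (p.T @ X) / p.sum(axis=0)[:, None]
        T *= cool
    return centers, p.argmax(axis=1)
```

At high temperature every point belongs to every cluster almost equally; as T falls, clusters split at phase transitions and the assignments become effectively hard.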
A Deterministic Annealing Approach to Clustering AIRS Data
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the Deterministic Annealing technique.
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
Directory of Open Access Journals (Sweden)
Ortega Antonio
2005-01-01
Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade the local minima and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. The DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Analysis of Trivium by a Simulated Annealing variant
DEFF Research Database (Denmark)
Borghoff, Julia; Knudsen, Lars Ramkilde; Matusiewicz, Krystian
2010-01-01
This paper proposes a new method of solving certain classes of systems of multivariate equations over the binary field and its cryptanalytical applications. We show how heuristic optimization methods such as hill climbing algorithms can be relevant to solving systems of multivariate equations. A characteristic of equation systems that may be efficiently solvable by the means of such algorithms is provided. As an example, we investigate equation systems induced by the problem of recovering the internal state of the stream cipher Trivium. We propose an improved variant of the simulated annealing method…
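The general idea, heuristic search over 0/1 assignments with a cost counting violated equations, can be sketched in a few lines. This is a generic simulated-annealing toy, not the authors' improved variant; the cooling schedule and parameters are assumptions.

```python
import math
import random

def solve_gf2_sa(equations, n, steps=20000, T0=2.0):
    """Simulated-annealing search for a 0/1 assignment minimizing the
    number of violated XOR equations over GF(2). Each equation is a pair
    (indices, rhs): the XOR of the variables at `indices` should equal rhs."""
    def cost(x):
        return sum((sum(x[i] for i in idx) % 2) != rhs for idx, rhs in equations)
    x = [random.randrange(2) for _ in range(n)]
    c = cost(x)
    best_x, best_c = x[:], c
    for step in range(steps):
        T = T0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i = random.randrange(n)
        x[i] ^= 1                            # propose a single-bit flip
        c2 = cost(x)
        # accept improvements and plateau moves; accept uphill moves
        # with the Metropolis probability exp(-delta / T)
        if c2 <= c or random.random() < math.exp(-(c2 - c) / T):
            c = c2
            if c < best_c:
                best_c, best_x = c, x[:]
        else:
            x[i] ^= 1                        # revert rejected flip
    return best_x, best_c
```

A few independent restarts are usually enough on small planted instances; the structural property the paper identifies is what makes much larger cipher-derived systems tractable.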
Energy Technology Data Exchange (ETDEWEB)
Sabat, R.K. [Department of Materials Engineering, IISc, Bangalore 560012 (India); Panda, D. [Department of Metallurgical & Materials Engineering, NIT, Rourkela 769008 (India); Sahoo, S.K., E-mail: sursahoo@gmail.com [Department of Metallurgical & Materials Engineering, NIT, Rourkela 769008 (India)
2017-04-15
Pure magnesium was subjected to plastic deformation through CSM (continuous stiffness measurement) indentation followed by annealing at 200 °C for 30 min. No nucleation of new grains was observed after annealing, either at twin–twin intersections or at the multiple twin variants of a grain. Significant growth of off-basal twin orientation compared to basal twin orientation was observed in the sample after annealing and is attributed to the partially coherent nature of the twin boundary in the latter case. Further, growth of twins was independent of the strain distribution between parent and twinned grains. - Highlights: • An 'ex situ' EBSD study of pure Mg during annealing was performed. • No nucleation of new grains was observed. • Significant growth of off-basal twin orientation was observed. • Growth of twins may be attributed to the partially coherent nature of the twin boundary.
Deterministic indexing for packed strings
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye
2017-01-01
Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ…
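The packed setting can be illustrated directly: store w characters per machine word, so a single integer comparison covers w characters at once. This is a toy sketch of the representation only, not the paper's index; the word width and helper names are assumptions.

```python
def pack(s, w=8):
    """Pack a byte string into integers holding w characters each,
    zero-padding the last word, so w characters can be compared
    with one word comparison."""
    words = []
    for i in range(0, len(s), w):
        chunk = s[i:i + w].ljust(w, b"\x00")
        words.append(int.from_bytes(chunk, "big"))
    return words

def packed_equal(a_words, b_words):
    # one integer comparison per word checks w characters at a time
    return a_words == b_words
```

On real hardware the same trick is word-level RAM: an 8-character comparison is a single 64-bit compare instead of eight byte compares.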
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an…
Deterministic Compressed Sensing
2011-11-01
[Table-of-contents and list-of-algorithms residue: 4.3 Digital Communications; 4.4 Group Testing; deterministic design matrices; Iterative Hard Thresholding Algorithm.] Compressed sensing is information theoretically possible using any (2k, )-RIP sensing matrix. The following celebrated results of Candès, Romberg and Tao [54
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
International Nuclear Information System (INIS)
1990-01-01
In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether, for specific tissues, the present dose limits or annual limits of intake based on Q values are adequate to prevent deterministic effects. (author)
International Nuclear Information System (INIS)
Young, J.M.; Scovell, P.D.
1982-01-01
A process for annealing crystal damage in ion implanted semiconductor devices in which the device is rapidly heated to a temperature between 450 and 900 °C and allowed to cool. It has been found that such heating of the device to these relatively low temperatures results in rapid annealing. In one application the device may be heated on a graphite element mounted between electrodes in an inert atmosphere in a chamber. (author)
Deterministic behavioural models for concurrency
DEFF Research Database (Denmark)
Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn
1993-01-01
This paper offers three candidates for a deterministic, noninterleaving, behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled...... event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets....
Track Reconstruction in the ATLAS Experiment The Deterministic Annealing Filter
Fleischmann, S
2006-01-01
The reconstruction of the trajectories of charged particles is essential for experiments at the LHC. The experiments contain precise tracking systems structured in layers around the collision point which measure the positions where particle trajectories intersect those layers. The physics analysis, on the other hand, mainly needs the momentum and direction of the particle at the estimated creation or reaction point. It is therefore necessary to determine these parameters from the initial measurements. At the LHC one has to deal with high backgrounds, while even small deficits or artifacts can reduce the signal or may produce additional background after event selection. Track reconstruction does not only comprise the estimation of the track parameters, but also a pattern recognition deciding which measurements belong to a track and how many particle tracks can be found. Track reconstruction at the ATLAS experiment suffers from the high event rate at the LHC, resulting in a high occupancy of the tracking devices. A…
International Nuclear Information System (INIS)
Young, J.M.; Scovell, P.D.
1981-01-01
A process for annealing crystal damage in ion implanted semiconductor devices is described in which the device is rapidly heated to a temperature between 450 and 600 °C and allowed to cool. It has been found that such heating of the device to these relatively low temperatures results in rapid annealing. In one application the device may be heated on a graphite element mounted between electrodes in an inert atmosphere in a chamber. The process may be enhanced by the application of optical radiation from a Xenon lamp. (author)
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo's classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn's deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm…
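Retrograde analysis, mentioned above, labels positions backwards from the terminal ones. A compact sketch for win/lose/draw games follows; the graph encoding and the convention that a position with no moves is a loss for the player to move are illustrative assumptions, not the paper's formalism.

```python
from collections import deque

def retrograde(succ):
    """Retrograde analysis of a two-player game graph.
    succ[p] lists the positions reachable from p (the player to move at p
    chooses one); every position must appear as a key. A position with no
    moves is a LOSS for the player to move; positions never labelled lie
    on infinite plays and are DRAWs."""
    preds = {p: [] for p in succ}
    out = {}
    for p, ss in succ.items():
        out[p] = len(ss)
        for q in ss:
            preds[q].append(p)
    value = {}
    dq = deque(p for p, ss in succ.items() if not ss)
    for p in dq:
        value[p] = "LOSS"
    while dq:
        q = dq.popleft()
        for p in preds[q]:
            if p in value:
                continue
            if value[q] == "LOSS":
                value[p] = "WIN"      # some move reaches a lost position
                dq.append(p)
            else:
                out[p] -= 1
                if out[p] == 0:       # every move reaches a won position
                    value[p] = "LOSS"
                    dq.append(p)
    return {p: value.get(p, "DRAW") for p in succ}
```

The backward pass runs in time linear in the number of edges, which is why the draw-detection part of solving such games is cheap; the hard part studied in the paper is handling arbitrary real payoffs.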
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-12-01
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
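The core of the DUA idea, replacing many sampled model runs with a derivative-based response surface, can be sketched with first-order propagation. This is a generic illustration under stated assumptions (finite-difference sensitivities, independent Gaussian inputs); it does not reproduce the paper's borehole model or its adjoint techniques.

```python
import numpy as np

def propagate(f, x0, sig, h=1e-6):
    """First-order deterministic uncertainty propagation.
    Build a linear response surface y ~ y0 + g.(x - x0) from one reference
    run plus one finite-difference run per input, then propagate independent
    input standard deviations analytically: var(y) = sum_i (g_i sig_i)^2."""
    x0 = np.asarray(x0, float)
    y0 = f(x0)
    g = np.zeros_like(x0)
    for i in range(len(x0)):
        xp = x0.copy()
        xp[i] += h
        g[i] = (f(xp) - y0) / h   # sensitivity of the output to input i
    var = np.sum((g * np.asarray(sig, float)) ** 2)
    return y0, np.sqrt(var)
```

With d inputs this costs d + 1 model runs, versus the hundreds or thousands a sampling approach needs, which is exactly the economy the abstract describes.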
Height-Deterministic Pushdown Automata
DEFF Research Database (Denmark)
Nowotka, Dirk; Srba, Jiri
2007-01-01
We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several of such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata.
Deterministic methods in radiation transport
International Nuclear Information System (INIS)
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community
Cascade annealing: an overview
International Nuclear Information System (INIS)
Doran, D.G.; Schiffgens, J.O.
1976-04-01
Concepts and an overview of radiation displacement damage modeling and annealing kinetics are presented. Short-term annealing methodology is described and results of annealing simulations performed on damage cascades generated using the Marlowe and Cascade programs are included. Observations concerning the inconsistencies and inadequacies of current methods are presented along with simulation of high energy cascades and simulation of longer-term annealing
Nonlinear Markov processes: Deterministic case
International Nuclear Information System (INIS)
Frank, T.D.
2008-01-01
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: the Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basins of attraction of stationary distributions.
Deterministic extraction from weak random sources
Gabizon, Ariel
2011-01-01
In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.
International Nuclear Information System (INIS)
Miller, M.G.; Koehn, B.W.; Chaplin, R.L.
1976-01-01
Characteristics of recovery processes have been investigated for cases of heating a sample to successively higher temperatures by means of isochronal annealing or by using a rapid pulse annealing. The recovery spectra show the same features independent of which annealing procedure is used. In order to determine which technique provides the best resolution, a study was made of how two independent first-order processes are separated for different heating rates and time increments of the annealing pulses. It is shown that the pulse anneal method offers definite advantages over isochronal annealing when annealing for short time increments. Experimental data obtained by means of the pulse anneal technique are given for the various substages of stage I of aluminium. (author)
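The separation of two first-order recovery processes under an isochronal schedule can be illustrated numerically: each process with activation energy E decays at rate ν·exp(−E/kT) during every temperature step. The activation energies, attempt frequency, and schedule below are illustrative values, not data from the study.

```python
import math

def isochronal_recovery(E_list, temps, dt, nu=1e12, kB=8.617e-5):
    """Average surviving defect fraction along an isochronal anneal.
    Each first-order process with activation energy E (eV) decays as
    exp(-nu * exp(-E / (kB * T)) * dt) during the hold at temperature T (K)."""
    N = [1.0 for _ in E_list]
    history = []
    for T in temps:
        for i, E in enumerate(E_list):
            rate = nu * math.exp(-E / (kB * T))   # Arrhenius rate at T
            N[i] *= math.exp(-rate * dt)          # first-order decay over dt
        history.append(sum(N) / len(N))
    return history
```

With well-separated activation energies (here 0.5 and 0.7 eV) the recovery curve shows two distinct stages; how sharply the stages resolve depends on the hold time dt, which is the resolution question the abstract investigates.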
Deterministic hydrodynamics: Taking blood apart
Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.
2006-10-01
We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. Using this technology, which we developed, deterministic arrays separate white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. cells | plasma | separation | microfabrication
ICRP (1991) and deterministic effects
International Nuclear Information System (INIS)
Mole, R.H.
1992-01-01
A critical review of ICRP Publication 60 (1991) shows that considerable revisions are needed in both language and thinking about deterministic effects (DE). ICRP (1991) makes a welcome and clear distinction between change, caused by irradiation; damage, some degree of deleterious change, for example to cells, but not necessarily deleterious to the exposed individual; harm, clinically observable deleterious effects expressed in individuals or their descendants; and detriment, a complex concept combining the probability, severity and time of expression of harm (para 42). (All added emphases come from the author.) Unfortunately these distinctions are not carried through into the discussion of deterministic effects (DE), and two important terms are left undefined. Presumably effect may refer to change, damage, harm or detriment, according to context. Clinically observable is also undefined, although its meaning is crucial to any consideration of DE since DE are defined as causing observable harm (para 20). (Author)
Deterministic chaos in entangled eigenstates
Schlegel, K. G.; Förster, S.
2008-05-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in the case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop, for large times, into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.
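A positive maximum Lyapunov exponent as the signature of deterministic chaos can be illustrated on a far simpler system than the entangled Bohmian trajectories studied here: the logistic map at r = 4, whose exponent is known analytically to be ln 2. This one-dimensional stand-in is only meant to show how the exponent is estimated from a trajectory.

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100000, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the trajectory average of log|f'(x)| = log|r (1 - 2x)|."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        s += math.log(abs(r * (1 - 2 * x)) + 1e-300)  # guard against log(0)
    return s / n
```

A positive value means nearby trajectories separate exponentially; in the paper the analogous quantity is computed for an ensemble of Bohmian trajectories, and its narrow positive distribution is the chaos diagnostic.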
Deterministic chaos in entangled eigenstates
Energy Technology Data Exchange (ETDEWEB)
Schlegel, K.G. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)], E-mail: guenter.schlegel@arcor.de; Foerster, S. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)
2008-05-12
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in the case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop, for large times, into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.
Deterministic chaos in entangled eigenstates
International Nuclear Information System (INIS)
Schlegel, K.G.; Foerster, S.
2008-01-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in the case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop, for large times, into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.
A deterministic width function model
Directory of Open Access Journals (Sweden)
C. E. Puente
2003-01-01
Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features, like their overall shape and texture and the observed power-law scaling of their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
Energy Technology Data Exchange (ETDEWEB)
Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.
2014-02-01
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address respective sources of uncertainties, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.
Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R
2018-05-01
A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
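A minimal deterministic selection model of the kind discussed, not the authors' delay-deterministic correction, updates an allele frequency each generation and can be inverted for the selection coefficient s by grid search. The update rule (haploid selection), grid range, and least-squares criterion are illustrative assumptions.

```python
def evolve(q0, s, gens):
    """Deterministic haploid selection: q' = q (1 + s) / (1 + q s)."""
    q, traj = q0, [q0]
    for _ in range(gens):
        q = q * (1 + s) / (1 + q * s)
        traj.append(q)
    return traj

def infer_s(traj, grid=None):
    """Grid-search the selection coefficient minimizing the squared error
    between the observed trajectory and the deterministic model."""
    grid = grid or [i / 1000 for i in range(-100, 101)]
    def sse(s):
        model = evolve(traj[0], s, len(traj) - 1)
        return sum((m - o) ** 2 for m, o in zip(model, traj))
    return min(grid, key=sse)
```

On noise-free deterministic data this recovers s exactly; the paper's point is that on finite-population data, where a new mutant's frequency is dominated by stochastic establishment, the plain deterministic fit can be badly biased, motivating the delayed variant.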
Optimization using quantum mechanics: quantum annealing through adiabatic evolution
International Nuclear Information System (INIS)
Santoro, Giuseppe E; Tosatti, Erio
2006-01-01
We review here some recent work in the field of quantum annealing, alias adiabatic quantum computation. The idea of quantum annealing is to perform optimization by a quantum adiabatic evolution which tracks the ground state of a suitable time-dependent Hamiltonian, where ħ is slowly switched off. We illustrate several applications of quantum annealing strategies, starting from textbook toy models (double-well potentials and other one-dimensional examples), with and without disorder. These examples display in a clear way the crucial differences between classical and quantum annealing. We then discuss applications of quantum annealing to challenging hard optimization problems, such as the random Ising model, the travelling salesman problem and Boolean satisfiability problems. The techniques used to implement quantum annealing are either deterministic Schroedinger evolutions, for the toy models, or path-integral Monte Carlo and Green's function Monte Carlo approaches, for the hard optimization problems. The crucial role played by disorder and the associated non-trivial Landau-Zener tunnelling phenomena is discussed and emphasized. (topical review)
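The deterministic Schroedinger evolutions mentioned for the toy models can be sketched for a single spin with a linear annealing schedule. The Hamiltonian H(s) = −(1−s)σx − sσz and the schedule are illustrative choices, not a system from the review.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def anneal(T=50.0, steps=5000):
    """Deterministic Schroedinger evolution of one spin under the annealing
    Hamiltonian H(s) = -(1-s) sx - s sz with s = t/T. The state starts in
    the ground state of -sx (uniform superposition); for T large compared
    to the inverse squared minimum gap, the adiabatic theorem keeps it in
    the instantaneous ground state, ending near |0>. Returns |<0|psi>|^2."""
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps
        H = -(1 - s) * sx - s * sz
        # exact propagator for this small time step via eigendecomposition
        w, V = np.linalg.eigh(H)
        psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
    return abs(psi[0]) ** 2
```

Here the minimum gap (at s = 1/2) is about 1.41, so T = 50 is deep in the adiabatic regime; shrinking T reintroduces the Landau-Zener transitions emphasized in the review.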
Infrared thermal annealing device
International Nuclear Information System (INIS)
Gladys, M.J.; Clarke, I.; O'Connor, D.J.
2003-01-01
A device for annealing samples within an ultrahigh vacuum (UHV) scanning tunneling microscopy system was designed, constructed, and tested. The device is based on illuminating the sample with infrared radiation from outside the UHV chamber with a tungsten projector bulb. The apparatus uses an elliptical mirror to focus the beam through a sapphire viewport for low absorption. Experiments were conducted on clean Pd(100) and annealing temperatures in excess of 1000 K were easily reached
Deterministic and unambiguous dense coding
International Nuclear Information System (INIS)
Wu Shengjun; Cohen, Scott M.; Sun Yuqing; Griffiths, Robert B.
2006-01-01
Optimal dense coding using a partially entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case, where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case, where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1 − τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D̄ ≤ D a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. For D̄ > D it is shown that L_d is strictly less than D² unless D̄ is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D̄ ≤ D, assuming τ_x > 0 for a set of D̄D messages, and a bound is obtained for the average ⟨τ⟩. The bound on the average requires an additional assumption of encoding by isometries (unitaries when D̄ = D) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D̄ > D it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially entangled states, including noisy (mixed) states
Deterministic quantitative risk assessment development
Energy Technology Data Exchange (ETDEWEB)
Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)
2009-07-01
Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)
Deterministic computation of functional integrals
International Nuclear Information System (INIS)
Lobanov, Yu.Yu.
1995-09-01
A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, and no simplifying assumptions such as semi-classical or mean-field approximations, collective excitations, or the introduction of ''short-time'' propagators are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by computation of an ''ordinary'' (Riemannian) integral of low dimension, thus allowing the use of the more preferable deterministic algorithms (normally Gaussian quadratures) in computations, rather than the traditional stochastic (Monte Carlo) methods which are commonly used for solution of the problem under consideration. The results of applying the method to computation of the Green function of the Schroedinger equation in imaginary time, as well as the study of some models of Euclidean quantum mechanics, are presented. The comparison with results of other authors shows that our method gives significant (by an order of magnitude) economy of computer time and memory versus other known methods while providing results with the same or better accuracy. The functional measure of the Gaussian type is considered and some of its particular cases, namely the conditional Wiener measure in quantum statistical mechanics and the functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the
Blazej, Robert; Toriello, Nicholas; Emrich, Charles; Cohen, Richard N.; Koppel, Nitzan
2015-07-14
This invention provides novel variant cellulolytic enzymes having improved activity and/or stability. In certain embodiments the variant cellulolytic enzymes comprise a glycoside hydrolase with a substitution at one or more positions corresponding to one or more of residues F64, A226, and/or E246 in the Thermobifida fusca Cel9A enzyme. In certain embodiments the glycoside hydrolase is a variant of a family 9 glycoside hydrolase. In certain embodiments the glycoside hydrolase is a variant of a theme B family 9 glycoside hydrolase.
Deterministic secure communication protocol without using entanglement
Cai, Qing-yu
2003-01-01
We show a deterministic secure direct communication protocol using a single qubit in a mixed state. The security of this protocol is based on the security proof of the BB84 protocol. It can be realized with current technologies.
Deterministic chaos in the processor load
International Nuclear Information System (INIS)
Halbiniak, Zbigniew; Jozwiak, Ireneusz J.
2007-01-01
In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case.
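The kind of time-series test used to distinguish deterministic from random dynamics can be illustrated with a small sketch (Python, not the study's actual tooling): a Takens delay embedding plus a nearest-neighbour forecast. Deterministic data are predictable from their embedded neighbours, while the same values shuffled are not. The embedding dimension, delay, and the logistic-map test signal are illustrative assumptions, not details from the paper.

```python
import random

def delay_embed(series, dim, tau):
    """Takens delay embedding: vectors (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return [[series[t + k * tau] for k in range(dim)] for t in range(n)]

def nn_forecast_error(series, dim=3, tau=1):
    """Crude determinism test: forecast each point one step ahead from its
    nearest neighbour in embedding space; deterministic dynamics give a
    much smaller mean error than randomly shuffled data."""
    emb = delay_embed(series, dim, tau)
    horizon = (dim - 1) * tau + 1          # index offset of the "next" value
    total = 0.0
    for i in range(len(emb) - 1):
        best, best_d = None, float("inf")
        for j in range(len(emb) - 1):
            if j != i:
                d = sum((a - b) ** 2 for a, b in zip(emb[i], emb[j]))
                if d < best_d:
                    best, best_d = j, d
        total += abs(series[best + horizon] - series[i + horizon])
    return total / (len(emb) - 1)

# Deterministic but chaotic test signal: the logistic map.
x, xs = 0.4, []
for _ in range(300):
    x = 3.9 * x * (1 - x)
    xs.append(x)

shuffled = xs[:]
random.Random(0).shuffle(shuffled)
err_det, err_rand = nn_forecast_error(xs), nn_forecast_error(shuffled)
```

On the chaotic signal the forecast error is small; on the shuffled copy it is roughly the mean spread of the data, which is the signature of determinism this kind of analysis exploits.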
Risk-based and deterministic regulation
International Nuclear Information System (INIS)
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.
Use of deterministic methods in survey calculations for criticality problems
International Nuclear Information System (INIS)
Hutton, J.L.; Phenix, J.; Course, A.F.
1991-01-01
A code package using deterministic methods for solving the Boltzmann transport equation is the WIMS suite. This has been very successful for a range of situations. In particular, it has been used with great success to analyse trends in reactivity with a range of changes in state. The WIMS suite of codes offers a range of methods and is very flexible in the way these can be combined. A wide variety of situations can be modelled, ranging through all the current thermal reactor variants to storage systems and items of chemical plant. These methods have recently been enhanced by the introduction of the CACTUS method. This is based on a characteristics technique for solving the transport equation and has the advantage that complex geometrical situations can be treated. In this paper the basis of the method is outlined and examples of its use are illustrated. In parallel with these developments, the validation for out-of-pile situations has been extended to include experiments with relevance to criticality situations. The paper summarises this evidence and shows how these results point to a partial re-adoption of deterministic methods for some areas of criticality. The paper also presents results to illustrate the use of WIMS in criticality situations and, in particular, shows how it can complement codes such as MONK when used for surveying the reactivity effect due to changes in geometry or materials. (Author)
Design of deterministic interleaver for turbo codes
International Nuclear Information System (INIS)
Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.
2008-01-01
The choice of a suitable interleaver for turbo codes can improve the performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short-frame turbo codes are considered in this paper. The main characteristic of this class of deterministic interleaver is that its algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that an interleaver designed in this manner improves the minimum distance or reduces the multiplicity of the first few spectral lines of the minimum distance spectrum. Finally, we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames, to improve the decoding capability of the MAP (maximum a posteriori) probability decoder. Our deterministic interleaver design outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
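The flavour of such an algebraic design can be sketched with a quadratic permutation polynomial (QPP) interleaver, one well-known deterministic construction (not necessarily the one proposed in this paper): pi(i) = (f1*i + f2*i^2 + shift) mod N. For N a power of two this is a permutation whenever f1 is odd and f2 is even, and the constant shift term loosely mirrors the circular shift the abstract describes. All parameter values below are illustrative.

```python
def qpp_interleaver(n, f1, f2, shift=0):
    """Deterministic interleaver from a quadratic permutation polynomial:
    pi(i) = (f1*i + f2*i^2 + shift) mod n.  For n a power of 2 this is a
    permutation when f1 is odd and f2 is even."""
    return [(f1 * i + f2 * i * i + shift) % n for i in range(n)]

def interleave(frame, perm):
    """Read the frame in the permuted order."""
    return [frame[p] for p in perm]

def deinterleave(frame, perm):
    """Invert the permutation to recover the original frame order."""
    out = [None] * len(perm)
    for i, p in enumerate(perm):
        out[p] = frame[i]
    return out

perm = qpp_interleaver(16, f1=3, f2=4, shift=5)   # illustrative parameters
frame = list(range(16))
scrambled = interleave(frame, perm)
```

Because the design is purely algebraic, both ends of the link can regenerate the same permutation from (n, f1, f2, shift) with no stored table, which is the practical appeal of deterministic interleavers for short frames.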
Proving Non-Deterministic Computations in Agda
Directory of Open Access Journals (Sweden)
Sergio Antoy
2017-01-01
We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, are in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct and competitive approaches incorporating the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.
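The first approach, eliminating non-determinism by collecting the set of all values an application can produce, can be mimicked outside Agda. Here is a minimal Python sketch of the set-of-values semantics; the names `choice` and `nd_apply` are ours, not from Curry or the paper.

```python
from itertools import product

def choice(*alternatives):
    """A non-deterministic choice denotes the set of all its alternatives."""
    return set(alternatives)

def nd_apply(f, nd_args):
    """Set-of-values semantics: apply f to every combination of possible
    argument values and collect the set of all possible results."""
    return {f(*args) for args in product(*nd_args)}

coin = choice(0, 1)
sums = nd_apply(lambda x, y: x + y, [coin, coin])   # {0, 1, 2}
```

The second approach in the abstract would instead keep a record of each individual choice made, so that distinct computation paths yielding the same value remain distinguishable.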
Deterministic dense coding with partially entangled states
Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni
2005-01-01
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d>2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d²-1. We also find that states with less entanglement can have a greater deterministic communication capacity than other, more entangled states.
DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS
Directory of Open Access Journals (Sweden)
MICULEAC Melania Elena
2014-06-01
The deterministic methods are those quantitative methods that aim to quantify numerically the mechanisms by which factorial and causal influences are created, expressed and propagated, in cases where the phenomenon can be expressed through a direct functional cause-effect relation. Functional, deterministic relations are causal relations in which a given value of the characteristic corresponds to a well-defined value of the resulting phenomenon. They can directly express the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real-time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented, and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models.
Optimal Deterministic Investment Strategies for Insurers
Directory of Open Access Journals (Sweden)
Ulrich Rieder
2013-11-01
We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.
A Theory of Deterministic Event Structures
Lee, I.; Rensink, Arend; Smolka, S.A.
1995-01-01
We present an ω-complete algebra of a class of deterministic event structures, which are labelled prime event structures where the labelling function satisfies a certain distinctness condition. The operators of the algebra are summation, sequential composition and join. Each of these gives rise to a
A Numerical Simulation for a Deterministic Compartmental ...
African Journals Online (AJOL)
In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Euler's numerical method. Using hypothetical values for the parameters, a program was written in the VISUAL BASIC programming language to generate series for the system of difference equations from the ...
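The numerical scheme described, a forward-Euler iteration of a compartmental model, can be sketched as follows (in Python rather than Visual Basic). The two-compartment susceptible-infected model and all parameter values here are hypothetical stand-ins, not the model or values from the paper.

```python
def euler_si(s0, i0, beta, mu, gamma, h, steps):
    """Forward-Euler iteration of a minimal susceptible-infected model:
    dS/dt = -beta*S*I - mu*S,   dI/dt = beta*S*I - (mu + gamma)*I,
    turning the ODEs into the system of difference equations
    S_{k+1} = S_k + h*dS, I_{k+1} = I_k + h*dI."""
    s, i = s0, i0
    series = [(s, i)]
    for _ in range(steps):
        ds = -beta * s * i - mu * s
        di = beta * s * i - (mu + gamma) * i
        s, i = s + h * ds, i + h * di
        series.append((s, i))
    return series

# Hypothetical parameters: transmission 0.5, mortality 0.01, removal 0.1.
traj = euler_si(s0=0.99, i0=0.01, beta=0.5, mu=0.01, gamma=0.1, h=0.1, steps=100)
```

The step size h trades accuracy for speed; halving h and checking that the trajectory barely changes is the usual sanity check for an Euler scheme like this.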
Reactor pressure vessel thermal annealing
International Nuclear Information System (INIS)
Lee, A.D.
1997-01-01
The steel plates and/or forgings and welds in the beltline region of a reactor pressure vessel (RPV) are subject to embrittlement from neutron irradiation. This embrittlement causes the fracture toughness of the beltline materials to be less than the fracture toughness of the unirradiated material. Material properties of RPVs that have been irradiated and embrittled are recoverable through thermal annealing of the vessel. The amount of recovery primarily depends on the level of the irradiation embrittlement, the chemical composition of the steel, and the annealing temperature and time. Since annealing is an option for extending the service lives of RPVs or establishing less restrictive pressure-temperature (P-T) limits, the industry, the Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC) have assisted in efforts to determine the viability of thermal annealing for embrittlement recovery. General guidance for in-service annealing is provided in American Society for Testing and Materials (ASTM) Standard E 509-86. In addition, the American Society of Mechanical Engineers (ASME) Code Case N-557 addresses annealing conditions (temperature and duration), temperature monitoring, evaluation of loadings, and non-destructive examination techniques. The NRC thermal annealing rule (10 CFR 50.66) was approved by the Commission and published in the Federal Register on December 19, 1995. The Regulatory Guide on thermal annealing (RG 1.162) was processed in parallel with the rule package and was published on February 15, 1996. RG 1.162 contains a listing of issues that need to be addressed for thermal annealing of an RPV. The RG also provides alternatives for predicting re-embrittlement trends after the thermal anneal has been completed. This paper gives an overview of methodology and recent technical references that are associated with thermal annealing. Results from the DOE annealing prototype demonstration project, as well as NRC activities related to the
Piecewise deterministic processes in biological models
Rudnicki, Ryszard
2017-01-01
This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...
Deterministic nonlinear systems a short course
Anishchenko, Vadim S; Strelkova, Galina I
2014-01-01
This text is a short yet complete course on nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures it reflects the many years of teaching experience by the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincare recurrences.Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.
Deterministic nanoparticle assemblies: from substrate to solution
International Nuclear Information System (INIS)
Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J
2014-01-01
The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)
Deterministic dynamics of plasma focus discharges
International Nuclear Information System (INIS)
Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.
1992-04-01
The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed with fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic ''internal'' dynamics involving degrees of freedom not controlled by the operator, possibly related to adsorption and desorption of impurities from the electrodes. According to these dynamics the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs
Advances in stochastic and deterministic global optimization
Zhigljavsky, Anatoly; Žilinskas, Julius
2016-01-01
Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...
Understanding deterministic diffusion by correlated random walks
International Nuclear Information System (INIS)
Klages, R.; Korabel, N.
2002-01-01
Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme of how to approximate deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples which are a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
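The idea of expressing a diffusion coefficient through step correlations can be illustrated with a persistent (correlated) random walk in Python; this textbook walk is a stand-in for the deterministic maps studied in the paper, not one of them. Its step autocorrelation is C(k) = (2p-1)^k, so the discrete-time Taylor-Green-Kubo sum D = C(0)/2 + sum_{k>=1} C(k) has the closed form used below, which a direct mean-squared-displacement estimate should reproduce.

```python
import random

def persistent_walk_D(p, n_steps=200, n_walkers=2000, seed=1):
    """Estimate D = <x_n^2> / (2 n) for a 1D persistent random walk:
    each step repeats the previous step with probability p, else reverses."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x, v = 0, 1
        for _ in range(n_steps):
            if rng.random() > p:
                v = -v          # reverse direction with probability 1 - p
            x += v
        msd += x * x
    return msd / n_walkers / (2 * n_steps)

# Green-Kubo prediction: C(k) = (2p-1)^k gives D = 1/2 + (2p-1)/(2-2p).
p = 0.7
d_sim = persistent_walk_D(p)
d_gk = 0.5 + (2 * p - 1) / (2 - 2 * p)
```

For deterministic maps the correlations C(k) are not a simple geometric series, and the hierarchy of approximations in the paper amounts to truncating and refining exactly this kind of sum.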
Dynamic optimization deterministic and stochastic models
Hinderer, Karl; Stieglitz, Michael
2016-01-01
This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
Deterministic geologic processes and stochastic modeling
International Nuclear Information System (INIS)
Rautman, C.A.; Flint, A.L.
1992-01-01
This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling.
Deterministic analyses of severe accident issues
International Nuclear Information System (INIS)
Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.
2004-01-01
Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside of probability methods to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications, and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not defined by the subset construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
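The AND-operator case rests on the classical product construction for automata: run both machines in lockstep and accept exactly when both accept. Below is a minimal Python sketch of that construction for two complete DFAs; the 5-tuple encoding of a DFA is our own convention, not the paper's data structures.

```python
from itertools import product

def dfa_intersection(dfa1, dfa2):
    """Product construction: a product state (a, b) steps both component
    automata on each symbol and accepts iff both components accept.
    A DFA is encoded as (states, alphabet, delta, start, accepting)."""
    s1, alpha, d1, q1, f1 = dfa1
    s2, _, d2, q2, f2 = dfa2
    states = set(product(s1, s2))
    delta = {((a, b), c): (d1[(a, c)], d2[(b, c)])
             for (a, b) in states for c in alpha}
    accepting = {(a, b) for (a, b) in states if a in f1 and b in f2}
    return states, alpha, delta, (q1, q2), accepting

def accepts(dfa, word):
    _, _, delta, q, f = dfa
    for c in word:
        q = delta[(q, c)]
    return q in f

# DFA 1: even number of 'a's.  DFA 2: word ends in 'b'.
even_a = ({0, 1}, {'a', 'b'},
          {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0})
ends_b = ({0, 1}, {'a', 'b'},
          {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}, 0, {1})
both = dfa_intersection(even_a, ends_b)
```

Subtraction and complement follow the same pattern: complement a complete DFA by inverting its accepting set, and build L1 minus L2 as the product of dfa1 with the complement of dfa2.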
Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.
Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker
2009-02-21
Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.
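The general shape of a piecewise-deterministic Markov process, deterministic flow between random jump times with a probability distribution selecting which event fires, can be sketched in a few lines of Python. Everything below (the scalar precursor-pool ODE, the rates, and the branching probability) is a deliberately simplified stand-in for the paper's spatially resolved model.

```python
import math
import random

def simulate_pdmp(t_end, rate, p_branch, seed=0):
    """PDMP sketch: a soluble precursor pool evolves deterministically
    (dP/dt = alpha - beta*P, solved exactly between jumps) while network
    events occur at the jump times of a Poisson process with intensity
    `rate`; each event is branching with probability p_branch, otherwise
    elongation, and consumes one unit of the pool."""
    alpha, beta = 5.0, 0.1              # hypothetical production/decay rates
    rng = random.Random(seed)
    t, pool = 0.0, 0.0
    events = {"elongation": 0, "branching": 0}
    while True:
        dt = rng.expovariate(rate)      # waiting time to next stochastic jump
        if t + dt > t_end:
            break
        # deterministic evolution between jumps (closed-form ODE solution)
        pool = alpha / beta + (pool - alpha / beta) * math.exp(-beta * dt)
        t += dt
        if pool >= 1.0:                 # an event needs one precursor unit
            kind = "branching" if rng.random() < p_branch else "elongation"
            events[kind] += 1
            pool -= 1.0
    return events

counts = simulate_pdmp(t_end=100.0, rate=2.0, p_branch=0.25)
```

Shifting `p_branch` relative to the elongation share is the lever the simulation studies in the paper turn when they report that network architecture mostly depends on the balance between elongation and branching.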
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
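For reference, the standard (stochastic) EnKF analysis step that the deterministic mean-field variant is measured against can be sketched for a scalar Gaussian toy problem. The ensemble size, prior, and observation model below are illustrative choices, not the paper's experiments.

```python
import random

def enkf_update(ensemble, y_obs, h, obs_var, rng):
    """One stochastic-EnKF analysis step for a scalar state with linear
    observation y = h*x + noise: the Kalman gain is formed from ensemble
    statistics and each member assimilates a perturbed observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var * h / (h * h * var + obs_var)
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - h * x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(0.0, 2.0) for _ in range(500)]     # prior ~ N(0, 4)
post = enkf_update(prior, y_obs=3.0, h=1.0, obs_var=1.0, rng=rng)
# Exact Gaussian posterior mean for this setup: 4/(4+1) * 3 = 2.4.
```

The deterministic mean-field variant in the paper replaces this sampled update with a PDE solve for the filtering density plus a quadrature rule, which is what removes the sampling noise visible in the ensemble version.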
Directory of Open Access Journals (Sweden)
J Gordon Millichap
2003-01-01
The clinical manifestations in 15 patients (6 boys and 9 girls with middle interhemispheric variant (MIH of holoprosencephaly (HPE were compared with classic subtypes (alobar, semilobar, and lobar of HPE in a multicenter study at Stanford University School of Medicine and Lucile Packard Children’s Hospital; Children’s Hospital of Philadelphia; University of California at San Francisco; Texas Scottish Rite Hospital, Dallas; and Kennedy Krieger Institute, Baltimore, MD.
Placement by thermodynamic simulated annealing
International Nuclear Information System (INIS)
Vicente, Juan de; Lanchares, Juan; Hermida, Roman
2003-01-01
Combinatorial optimization problems arise in different fields of science and engineering. There exist some general techniques for coping with these problems, such as simulated annealing (SA). In spite of SA's success, it usually requires costly experimental studies to fine-tune the most suitable annealing schedule. In this Letter, the classical integrated circuit placement problem is faced by thermodynamic simulated annealing (TSA). TSA provides a new annealing schedule derived from thermodynamic laws. Unlike SA, the temperature in TSA is free to evolve, and its value is continuously updated from the variation of state functions such as the internal energy and entropy. Thereby, TSA achieves the high-quality results of SA while providing interesting adaptive features.
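A rough sketch of how TSA differs from classical SA: rather than imposing a cooling schedule, the temperature is recomputed from accumulated energy and entropy variations. The sketch below is a loose reading of that idea (entropy accrued as delta_E/T on accepted uphill moves, then T = lambda * dE/dS); the Letter's exact update rule may differ, and the toy objective and all parameters are illustrative, not a placement benchmark.

```python
import math
import random

def tsa_minimize(f, x0, step, t0=10.0, lam=0.9, iters=5000, seed=0):
    """Thermodynamic-simulated-annealing sketch: accepted uphill moves
    contribute delta_e to the energy variation and delta_e / T to the
    entropy variation; the temperature is then re-derived as
    T = lam * dE_total / dS_total instead of following a fixed schedule.
    This is an assumed reading of the scheme, not the Letter's exact rule."""
    rng = random.Random(seed)
    x, e = x0, f(x0)
    best_x, best_e = x, e
    t, de_sum, ds_sum = t0, 0.0, 0.0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        delta = f(cand) - e
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            if delta > 0:               # uphill move: thermodynamic bookkeeping
                de_sum += delta
                ds_sum += delta / t
            x, e = cand, e + delta
            if e < best_e:
                best_x, best_e = x, e
        if ds_sum > 0:
            t = lam * de_sum / ds_sum   # temperature from state-function ratios
    return best_x, best_e

x_min, e_min = tsa_minimize(lambda x: (x - 3.0) ** 2, x0=-10.0, step=1.0)
```

Because the ratio dE/dS is a weighted average over the run's own history, cooling automatically slows when uphill moves become rare, which is the adaptive behaviour the abstract attributes to TSA.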
Influence of alloying and secondary annealing on anneal hardening ...
Indian Academy of Sciences (India)
Nestorovic, Svetlana (Technical Faculty Bor, University of Belgrade, Bor, Yugoslavia)
MS received 11 February 2004; revised 29 October 2004. This paper reports results of an investigation carried out on sintered ...
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives consistent, reliable and clear-cut answers to this question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results fully confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations on a mountain river in southern Poland, the Raba River.
Special Features of Induction Annealing of Friction Stir Welded Joints of Medium-Alloy Steels
Priymak, E. Yu.; Stepanchukova, A. V.; Bashirova, E. V.; Fot, A. P.; Firsova, N. V.
2018-01-01
Welded joints of medium-alloy steels XJY750 and 40KhN2MA are studied in the initial condition and after different variants of annealing. Special features of the phase transformations occurring in the welded steels are determined. Optimum modes of annealing are recommended for the studied welded joints of drill pipes, which provide a high level of mechanical properties including the case of impact loading.
DOE's annealing prototype demonstration projects
International Nuclear Information System (INIS)
Warren, J.; Nakos, J.; Rochau, G.
1997-01-01
One of the challenges U.S. utilities face in addressing technical issues associated with the aging of nuclear power plants is the long-term effect of plant operation on reactor pressure vessels (RPVs). As a nuclear plant operates, its RPV is exposed to neutrons. For certain plants, this neutron exposure can cause embrittlement of some of the RPV welds which can shorten the useful life of the RPV. This RPV embrittlement issue has the potential to affect the continued operation of a number of operating U.S. pressurized water reactor (PWR) plants. However, RPV material properties affected by long-term irradiation are recoverable through a thermal annealing treatment of the RPV. Although a dozen Russian-designed RPVs and several U.S. military vessels have been successfully annealed, U.S. utilities have stated that a successful annealing demonstration of a U.S. RPV is a prerequisite for annealing a licensed U.S. nuclear power plant. In May 1995, the Department of Energy's Sandia National Laboratories awarded two cost-shared contracts to evaluate the feasibility of annealing U.S. licensed plants by conducting an anneal of an installed RPV using two different heating technologies. The contracts were awarded to the American Society of Mechanical Engineers (ASME) Center for Research and Technology Development (CRTD) and MPR Associates (MPR). The ASME team completed its annealing prototype demonstration in July 1996, using an indirect gas furnace at the uncompleted Public Service of Indiana's Marble Hill nuclear power plant. The MPR team's annealing prototype demonstration was scheduled to be completed in early 1997, using a direct heat electrical furnace at the uncompleted Consumers Power Company's nuclear power plant at Midland, Michigan. This paper describes the Department's annealing prototype demonstration goals and objectives; the tasks, deliverables, and results to date for each annealing prototype demonstration; and the remaining annealing technology challenges
Simulated annealing and circuit layout
Aarts, E.H.L.; Laarhoven, van P.J.M.
1991-01-01
We discuss the problem of approximately solving circuit layout problems by simulated annealing. For this we first summarize the theoretical concepts of the simulated annealing algorithm using the theory of homogeneous and inhomogeneous Markov chains. Next we briefly review general aspects of the
Radiation annealing in cuprous oxide
DEFF Research Database (Denmark)
Vajda, P.
1966-01-01
Experimental results from high-intensity gamma-irradiation of cuprous oxide are used to investigate the annealing of defects with increasing radiation dose. The results are analysed on the basis of the Balarin and Hauser (1965) statistical model of radiation annealing, giving a square...
Deterministic and probabilistic approach to safety analysis
International Nuclear Information System (INIS)
Heuser, F.W.
1980-01-01
The examples discussed in this paper show that reliability analysis methods fairly well can be applied in order to interpret deterministic safety criteria in quantitative terms. For further improved extension of applied reliability analysis it has turned out that the influence of operational and control systems and of component protection devices should be considered with the aid of reliability analysis methods in detail. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)
Diffusion in Deterministic Interacting Lattice Systems
Medenjak, Marko; Klobas, Katja; Prosen, Tomaž
2017-09-01
We study reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining exact expressions for the current time-autocorrelation function we are able to calculate the linear response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. Exact analytical results are corroborated by Monte Carlo simulations.
Safety margins in deterministic safety analysis
International Nuclear Information System (INIS)
Viktorov, A.
2011-01-01
The concept of safety margins has acquired certain prominence in the attempts to demonstrate quantitatively the level of nuclear power plant safety by means of deterministic analysis, especially when considering impacts from plant ageing and discovery issues. A number of international or industry publications exist that discuss various applications and interpretations of safety margins. The objective of this presentation is to bring together and examine in some detail, from the regulatory point of view, the safety margins that relate to deterministic safety analysis. In this paper, definitions of various safety margins are presented and discussed along with the regulatory expectations for them. Interrelationships of analysis input and output parameters with corresponding limits are explored. It is shown that the overall safety margin is composed of several components, each having different origins and potential uses; in particular, margins associated with analysis output parameters are contrasted with margins linked to the analysis input. While these are separate, it is possible to influence output margins through the analysis input and the analysis method. Preserving safety margins is tantamount to maintaining safety. At the same time, efficiency of operation requires optimization of safety margins taking into account various technical and regulatory considerations. For this, basic definitions and rules for safety margins must first be established. (author)
Streamflow disaggregation: a nonlinear deterministic approach
Directory of Open Access Journals (Sweden)
B. Sivakumar
2004-01-01
Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
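The two steps of the record above (phase-space reconstruction, then nearest-neighbour local approximation) can be sketched for one-step prediction; the embedding dimension, neighbour count and the logistic-map test signal are illustrative choices, not the study's streamflow setup.

```python
import numpy as np

def embed(series, m, tau=1):
    """Delay-embed a scalar series: row i is (x[i], x[i+tau], ..., x[i+(m-1)tau])."""
    n = len(series) - (m - 1) * tau
    return np.array([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def local_predict(series, m=3, k=5):
    """Nearest-neighbour local approximation: find the k past delay vectors
    closest to the latest one and average their observed successors."""
    X = embed(series, m)
    current = X[-1]                      # latest state in phase space
    history = X[:-1]                     # X[i] ends at series[i+m-1] ...
    targets = np.asarray(series)[m:]     # ... so its successor is series[i+m]
    d = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(d)[:k]
    return targets[nearest].mean()
```

Disaggregation replaces "successor" with the finer-scale values attached to each neighbour, but the reconstruction-plus-neighbours machinery is the same.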
A mathematical theory for deterministic quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
Hooft, Gerard ' t [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)
2007-05-15
Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.
Design of deterministic OS for SPLC
International Nuclear Information System (INIS)
Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop
2012-01-01
Existing safety PLCs for use in nuclear power plants operate on priority-based scheduling, in which the highest-priority task runs first. This type of scheduling scheme determines processing priorities when there are multiple requests for processing or when there is a lack of resources available for processing, guaranteeing execution of higher-priority tasks. Such scheduling is prone to exhaustion of resources and continuous preemption by devices with high priorities, and therefore there is uncertainty in every period in terms of smooth running of the overall system. Hence, it is difficult to apply this type of scheme where deterministic operation is required, such as in a nuclear power plant. Also, existing PLCs either have no output logic with regard to devices' redundant selection or have it set in a fixed way; as a result they are extremely inefficient for redundant systems such as that of a nuclear power plant, and their use is limited. Therefore, functional modules that can manage and control all devices need to be developed by improving the way priorities are assigned among the devices, making it more flexible. A management module should be able to schedule all devices of the system, manage resources, analyze the states of the devices, give warnings in abnormal situations such as device failure or resource scarcity, and decide how to handle them. Also, the management module should have output logic for device redundancy, as well as deterministic processing capabilities, such as with regard to device interrupt events
Deterministic prediction of surface wind speed variations
Directory of Open Access Journals (Sweden)
G. V. Drisya
2014-11-01
Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within practically tolerable margin of errors.
A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations
Energy Technology Data Exchange (ETDEWEB)
Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-12-12
Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
DEFF Research Database (Denmark)
Sousa, Tiago M; Soares, Tiago; Morais, Hugo
2016-01-01
The massive use of distributed generation and electric vehicles will lead to a more complex management of the power system, requiring new approaches to be used in the optimal resource scheduling field. Electric vehicles with vehicle-to-grid capability can be useful for the aggregator players...... in the mitigation of renewable sources intermittency and in the ancillary services procurement. In this paper, an energy and ancillary services joint management model is proposed. A simulated annealing approach is used to solve the joint management for the following day, considering the minimization...... of the aggregator total operation costs. The case study considers a distribution network with 33-bus, 66 distributed generation and 2000 electric vehicles. The proposed simulated annealing is matched with a deterministic approach allowing an effective and efficient comparison. The simulated annealing presents...
Mechanics from Newton's laws to deterministic chaos
Scheck, Florian
2018-01-01
This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present 6th edition is updated and revised with more explanations, additional examples and problems with solutions, together with new sections on applications in science. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 150 problems ...
Deterministic Diffusion in Delayed Coupled Maps
International Nuclear Information System (INIS)
Sozanski, M.
2005-01-01
Coupled Map Lattices (CML) are discrete time and discrete space dynamical systems used for modeling phenomena arising in nonlinear systems with many degrees of freedom. In this work, the dynamical and statistical properties of a modified version of the CML with global coupling are considered. The main modification of the model is the extension of the coupling over a set of local map states corresponding to different time iterations. The model with both stochastic and chaotic one-dimensional local maps is studied. Deterministic diffusion in the CML under variation of a control parameter is analyzed for unimodal maps. As a main result, simple relations between statistical and dynamical measures are found for the model and the cases where substituting nonlinear lattices with simpler processes is possible are presented. (author)
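A minimal globally coupled map of the kind the abstract builds on (without its delayed-coupling extension over past iterations) is a mean-field-coupled lattice of logistic maps; the coupling strength and map parameter below are illustrative.

```python
import numpy as np

def gcm_step(x, eps=0.1, a=4.0):
    """One iteration of a globally coupled map lattice: each logistic unit
    is pulled toward the instantaneous mean field of all units."""
    fx = a * x * (1.0 - x)                 # local chaotic logistic map
    return (1.0 - eps) * fx + eps * fx.mean()
```

With a = 4 each local map stays in [0, 1], so the convex combination does too; the delayed version in the paper would average `fx` over several stored past iterations instead of the current one only.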
Deterministic effects of interventional radiology procedures
International Nuclear Information System (INIS)
Shope, Thomas B.
1997-01-01
The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure or equipment-related factors that may have contributed to the injury. Reports submitted to the FDA under both mandatory and voluntary reporting requirements which described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissue necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures which have required extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)
Deterministic Chaos in Radon Time Variation
International Nuclear Information System (INIS)
Planinic, J.; Vukovic, B.; Radolic, V.; Faj, Z.; Stanic, D.
2003-01-01
Radon concentrations were continuously measured outdoors, in a living room and in a basement at 10-minute intervals for a month. The radon time series were analyzed by comparing algorithms to extract phase-space dynamical information. The application of fractal methods made it possible to explore the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, but the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos appearing in radon time variations. The calculated fractal dimensions of attractors indicated more influencing (meteorological) parameters on radon in the atmosphere. (author)
Radon time variations and deterministic chaos
International Nuclear Information System (INIS)
Planinic, J.; Vukovic, B.; Radolic, V.
2004-01-01
Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase-space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, but the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos that appeared in the radon time variations. The calculated fractal dimensions of attractors indicated more influencing (meteorological) parameters on radon in the atmosphere
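The rescaled-range (R/S) estimate of the Hurst exponent used in the radon study can be sketched as follows; the window sizes and the regression of log(R/S) against log(window size) follow the standard recipe rather than the authors' exact implementation.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Hurst exponent by rescaled-range analysis: slope of log(R/S) vs
    log(window size). H < 0.5 suggests anti-persistence, H ~ 0.5 randomness,
    H > 0.5 persistence."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            dev = np.cumsum(w - w.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()       # range of the deviation
            s = w.std()                     # standard deviation of the window
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```

Finite-sample R/S estimates are biased toward ~0.55-0.6 for white noise, so values well below 0.5, as reported for the radon series, do indicate anti-persistence.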
Deterministic SLIR model for tuberculosis disease mapping
Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat
2017-11-01
Tuberculosis (TB) occurs worldwide. It can be transmitted directly through the air when persons with active TB sneeze, cough or spit. In Malaysia, TB has been recognized as one of the most infectious diseases leading to death. Disease mapping is one of the methods that can be used in prevention strategies, since it displays a clear picture of high- and low-risk areas. An important consideration when studying disease occurrence is relative risk estimation. The transmission of TB is studied through a mathematical model. Therefore, in this study, deterministic SLIR models are used to estimate the relative risk of TB transmission.
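A deterministic SLIR (Susceptible-Latent-Infectious-Recovered) model of the type mentioned can be sketched as a forward-Euler compartmental update; all parameter names and values here are illustrative assumptions, not taken from the paper.

```python
def slir_step(s, l, i, r, beta=0.4, delta=0.1, gamma=0.05, dt=0.1):
    """One Euler step of a deterministic SLIR model:
    S -> L on contact with infectious (rate beta),
    L -> I when latency ends (rate delta),
    I -> R on recovery (rate gamma).
    The total population s + l + i + r is conserved exactly."""
    n = s + l + i + r
    new_inf = beta * s * i / n
    ds = -new_inf
    dl = new_inf - delta * l
    di = delta * l - gamma * i
    dr = gamma * i
    return s + dt * ds, l + dt * dl, i + dt * di, r + dt * dr
```

Relative-risk mapping would fit such a model per district and compare the implied incidence against the overall expectation; the sketch only shows the transmission dynamics.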
Primality deterministic and primality probabilistic tests
Directory of Open Access Journals (Sweden)
Alfredo Rizzi
2007-10-01
Full Text Available In this paper the author comments on the importance of prime numbers in mathematics and in cryptography. He recalls the very important research of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that yield prime numbers; among them, Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or rejected. Deterministic primality tests are algorithms that establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computing time would be too long. Probabilistic primality tests allow one to verify the null hypothesis: the number is prime. The paper comments on the most important statistical tests.
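A standard example of the probabilistic tests discussed is the Miller-Rabin test: each round either proves the number composite or supports the null hypothesis "the number is prime" (a composite survives a round with probability at most 1/4, so many passed rounds make compositeness overwhelmingly unlikely). A sketch:

```python
import random

def miller_rabin(n, rounds=20, rng=None):
    """Probabilistic primality test. Returns False only for proven
    composites; True means n passed every round (prime with high
    probability: a composite passes a round with probability <= 1/4)."""
    rng = rng or random.Random(0)
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # witness found: n is composite
    return True
```

The Mersenne prime 2^31 − 1 mentioned in connection with Mersenne's primes passes the test, while Carmichael numbers such as 561, which fool the simpler Fermat test, are correctly rejected.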
Mathematical foundation of quantum annealing
International Nuclear Information System (INIS)
Morita, Satoshi; Nishimori, Hidetoshi
2008-01-01
Quantum annealing is a generic name of quantum algorithms that use quantum-mechanical fluctuations to search for the solution of an optimization problem. It shares the basic idea with quantum adiabatic evolution studied actively in quantum computation. The present paper reviews the mathematical and theoretical foundations of quantum annealing. In particular, theorems are presented for convergence conditions of quantum annealing to the target optimal state after an infinite-time evolution following the Schroedinger or stochastic (Monte Carlo) dynamics. It is proved that the same asymptotic behavior of the control parameter guarantees convergence for both the Schroedinger dynamics and the stochastic dynamics in spite of the essential difference of these two types of dynamics. Also described are the prescriptions to reduce errors in the final approximate solution obtained after a long but finite dynamical evolution of quantum annealing. It is shown there that we can reduce errors significantly by an ingenious choice of annealing schedule (time dependence of the control parameter) without compromising computational complexity qualitatively. A review is given on the derivation of the convergence condition for classical simulated annealing from the view point of quantum adiabaticity using a classical-quantum mapping
CSL model checking of deterministic and stochastic Petri nets
Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.
2006-01-01
Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under
Recognition of deterministic ETOL languages in logarithmic space
DEFF Research Database (Denmark)
Jones, Neil D.; Skyum, Sven
1977-01-01
It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian...
International Nuclear Information System (INIS)
Milickovic, N.; Lahanas, M.; Papagiannopoulou, M.; Zamboglou, N.; Baltas, D.
2002-01-01
In high dose rate (HDR) brachytherapy, conventional dose optimization algorithms consider multiple objectives in the form of an aggregate function that transforms the multiobjective problem into a single-objective problem. As a result, there is a loss of information on the available alternative possible solutions. This method assumes that the treatment planner exactly understands the correlation between competing objectives and knows the physical constraints. This knowledge is provided by the Pareto trade-off set, obtained by single-objective optimization algorithms through repeated optimization with different importance vectors. A mapping technique avoids non-feasible solutions with negative dwell weights and allows the use of constraint-free gradient-based deterministic algorithms. We compare various such algorithms and methods which could improve their performance. This finally allows us to generate a large number of solutions in a few minutes. We use objectives expressed in terms of dose variances obtained from a few hundred sampling points in the planning target volume (PTV) and in organs at risk (OAR). We compare two- to four-dimensional Pareto fronts obtained with the deterministic algorithms and with a fast simulated annealing algorithm. For PTV-based objectives, due to the convex objective functions, the obtained solutions are globally optimal. If OARs are included, then the solutions found are also globally optimal, although local minima may be present, as suggested. (author)
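The repeated single-objective strategy for tracing a Pareto trade-off set can be shown on a toy pair of convex objectives; the weighted-sum scalarization and plain gradient descent below stand in for the paper's dose-variance objectives and deterministic optimizers.

```python
def weighted_sum_front(w_list, steps=500, lr=0.1):
    """Trace a Pareto trade-off set by repeated single-objective optimization
    with different importance vectors (w1, w2), on the toy convex objectives
    f1(x) = x**2 and f2(x) = (x - 1)**2. Each weight vector yields one
    Pareto-optimal point (f1, f2)."""
    front = []
    for w1, w2 in w_list:
        x = 0.5
        for _ in range(steps):
            grad = w1 * 2.0 * x + w2 * 2.0 * (x - 1.0)  # gradient of w1*f1 + w2*f2
            x -= lr * grad
        front.append((x * x, (x - 1.0) ** 2))
    return front
```

Because both toy objectives are convex, every weighted-sum minimum is globally optimal, mirroring the PTV-only case in the abstract; non-convex OAR objectives are where local minima can appear.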
Experimental aspects of deterministic secure quantum key distribution
Energy Technology Data Exchange (ETDEWEB)
Walenta, Nino; Korn, Dietmar; Puhlmann, Dirk; Felbinger, Timo; Hoffmann, Holger; Ostermeyer, Martin [Universitaet Potsdam (Germany). Institut fuer Physik; Bostroem, Kim [Universitaet Muenster (Germany)
2008-07-01
Most common protocols for quantum key distribution (QKD) use non-deterministic algorithms to establish a shared key. But deterministic implementations can allow for higher net key transfer rates and eavesdropping detection rates. The Ping-Pong coding scheme by Bostroem and Felbinger [1] employs deterministic information encoding in entangled states, with its characteristic quantum channel from Bob to Alice and back to Bob. Based on a table-top implementation of this protocol with polarization-entangled photons, fundamental advantages as well as practical issues like transmission losses, photon storage and requirements for progress towards longer transmission distances are discussed and compared to non-deterministic protocols. Modifications of common protocols towards deterministic quantum key distribution are addressed.
Deterministic models for energy-loss straggling
International Nuclear Information System (INIS)
Prinja, A.K.; Gleicher, F.; Dunham, G.; Morel, J.E.
1999-01-01
Inelastic ion interactions with target electrons are dominated by extremely small energy transfers that are difficult to resolve numerically. The continuous-slowing-down (CSD) approximation is then commonly employed, which, however, only preserves the mean energy loss per collision through the stopping power, S(E) = ∫_0^∞ dE′ (E − E′) σ_s(E → E′). To accommodate energy loss straggling, a Gaussian distribution with the correct mean-squared energy loss (akin to a Fokker-Planck approximation in energy) is commonly used in continuous-energy Monte Carlo codes. Although this model has the unphysical feature that ions can be upscattered, it nevertheless yields accurate results. A multigroup model for energy loss straggling was recently presented for use in multigroup Monte Carlo codes or in deterministic codes that use multigroup data. The method has the advantage that the mean and mean-squared energy loss are preserved without unphysical upscatter and hence is computationally efficient. Results for energy spectra compared extremely well with Gaussian distributions under the idealized conditions for which the Gaussian may be considered to be exact. Here, the authors present more consistent comparisons by extending the method to accommodate upscatter and, further, compare both methods with exact solutions obtained from an analog Monte Carlo simulation, for a straight-ahead transport problem
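The Gaussian straggling model described above can be sketched as a per-step sampling rule: the mean loss over a path length dx comes from the stopping power, and a Gaussian of the matching variance adds the straggling. The function names and the clamp at zero energy are illustrative; note that a negative sampled loss is exactly the unphysical "upscatter" the abstract acknowledges.

```python
import random

def csd_straggle_step(e, stopping_power, straggling_var, dx, rng):
    """One continuous-slowing-down step with Gaussian straggling.
    stopping_power(E): mean energy loss per unit path length, S(E).
    straggling_var(E): straggling variance per unit path length.
    A negative sampled loss is the model's unphysical upscatter."""
    mean_loss = stopping_power(e) * dx
    sigma = (straggling_var(e) * dx) ** 0.5
    loss = rng.gauss(mean_loss, sigma)     # Gaussian about the CSD mean loss
    return max(0.0, e - loss)
```

The multigroup alternative discussed in the abstract instead spreads the loss over discrete energy groups while matching the same first two moments, avoiding upscatter by construction.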
A Deterministic Approach to Earthquake Prediction
Directory of Open Access Journals (Sweden)
Vittorio Sgrigna
2012-01-01
Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at seeing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, what is lacking up to now is the demonstration of a causal relationship (with explained physical processes) based on correlations between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a strong new theoretical scientific effort is necessary to try to understand the physics of the earthquake.
Deterministic Approach to Detect Heart Sound Irregularities
Directory of Open Access Journals (Sweden)
Richard Mengko
2017-07-01
Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound's S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where detection failed; this can be improved by adding more heuristics, such as setting initial parameters like the noise threshold accurately, and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
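As an illustration of the kind of deterministic, rule-based labelling the abstract describes, the sketch below assigns S1/S2 labels to detected envelope peaks purely from timing; the peak list, the threshold, and the systole-shorter-than-diastole rule are illustrative assumptions, not the authors' actual procedure.

```python
def label_heart_sounds(peak_times, max_systole=0.35):
    """Label alternating peaks as S1/S2 using a deterministic timing rule:
    the S1->S2 interval (systole) is shorter than the S2->S1 interval
    (diastole). peak_times: sorted times in seconds of detected peaks."""
    labels = []
    for i, t in enumerate(peak_times):
        if i + 1 < len(peak_times) and peak_times[i + 1] - t < max_systole:
            labels.append("S1")  # short gap to the next peak -> systole follows
        else:
            labels.append("S2")  # long gap (or last peak) -> diastole follows
    return labels

# Peaks at 0.0, 0.3, 0.8, 1.1 s: 0.3 s systole, 0.5 s diastole.
labels = label_heart_sounds([0.0, 0.3, 0.8, 1.1])
```

A real system would first compute the signal envelope (e.g. from an S-transform) and apply amplitude and frequency rules as well; timing alone is only one of the properties the abstract lists.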
Deterministic dense coding and entanglement entropy
International Nuclear Information System (INIS)
Bourdon, P. S.; Gerjuoy, E.; McDonald, J. P.; Williams, H. T.
2008-01-01
We present an analytical study of the standard two-party deterministic dense-coding protocol, under which communication of perfectly distinguishable messages takes place via a qudit from a pair of nonmaximally entangled qudits in a pure state |ψ>. Our results include the following: (i) We prove that it is possible for a state |ψ> with lower entanglement entropy to support the sending of a greater number of perfectly distinguishable messages than one with higher entanglement entropy, confirming a result suggested via numerical analysis in Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. (ii) By explicit construction of families of local unitary operators, we verify, for dimensions d=3 and d=4, a conjecture of Mozes et al. about the minimum entanglement entropy that supports the sending of d+j messages, 2 ≤ j ≤ d−1; moreover, we show that the j=2 and j=d−1 cases of the conjecture are valid in all dimensions. (iii) Given that |ψ> allows the sending of K messages and has √λ₀ as its largest Schmidt coefficient, we show that the inequality λ₀ ≤ d/K, established by Wu et al. [Phys. Rev. A 73, 042311 (2006)], must actually take the form λ₀ < d/K if K=d+1, while our constructions of local unitaries show that equality can be realized if K=d+2 or K=2d−1
Analysis of pinching in deterministic particle separation
Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German
2011-11-01
We investigate the problem of spherical particles settling vertically under gravity (parallel to the Y-axis) through a pinching gap created by an obstacle (spherical or cylindrical, centered at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately; (2) computationally, using the lattice Boltzmann method for particulate systems; and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that, for a given initial separation between the particle centre and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than does its absence. Experimentally, this is expected to result in an early onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.
Management of the Bohunice RPVs annealing procedures
International Nuclear Information System (INIS)
Repka, M.
1994-01-01
This paper describes the realization in 1993 of the annealing regeneration programme for the RPVs of units 1 and 2 of NPP V-1 (EBO). The following steps are described in detail: the preparatory work, the schedule of the annealing procedure and its safety management, starting from zero conditions, assembly of the annealing apparatus, the annealing procedure itself, cooling down, and disassembly of the annealing apparatus. Finally, the annealing programmes of both RPVs, including the dosimetry measurements, are discussed and evaluated. (author). 3 figs
Energy Technology Data Exchange (ETDEWEB)
Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA
2017-03-28
Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.
Equivalence relations between deterministic and quantum mechanical systems
International Nuclear Information System (INIS)
Hooft, G.
1988-01-01
Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that the apparently indeterministic behavior typical of a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian such that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet without a mass term. These observations are suggested to be useful for building theories at the Planck scale
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can......, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...
Operational State Complexity of Deterministic Unranked Tree Automata
Directory of Open Access Journals (Sweden)
Xiaoxue Piao
2010-08-01
Full Text Available We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n − 2^(n−1) − 1) vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.
ZERODUR: deterministic approach for strength design
Hartmann, Peter
2012-12-01
There is an increasing request for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure-probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e. a minimum breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design-strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution.
Global optimization and simulated annealing
Dekkers, A.; Aarts, E.H.L.
1988-01-01
In this paper we are concerned with global optimization, which can be defined as the problem of finding points on a bounded subset of R^n at which some real-valued function f assumes its optimal (i.e. maximal or minimal) value. We present a stochastic approach which is based on the simulated annealing algorithm.
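A minimal simulated-annealing loop of the kind analyzed in this line of work might look as follows; the quadratic test function, geometric cooling schedule and step size are illustrative choices rather than anything prescribed by the paper.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, alpha=0.95, steps=2000, step=0.5, seed=0):
    """Minimize f over R^n: propose a uniform random perturbation and accept
    it with the Metropolis probability exp(-delta/T); the temperature T is
    cooled geometrically, T <- alpha * T, after every step."""
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    for _ in range(steps):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        delta = fc - fx
        # Always accept improvements; accept uphill moves with probability
        # exp(-delta/T), which shrinks as the temperature decreases.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fc
        t *= alpha
    return x, fx

# Illustrative cost function: a 2-D quadratic bowl with minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best, best_val = simulated_annealing(f, [5.0, 5.0])
```

Early on, the high temperature lets the walk escape local minima; as T decays, the loop degenerates into greedy local search around the best basin found.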
Deterministic Echo State Networks Based Stock Price Forecasting
Directory of Open Access Journals (Sweden)
Jingpei Dan
2014-01-01
Full Text Available Echo state networks (ESNs), as efficient and powerful computational models for approximating nonlinear dynamical systems, have been successfully applied to financial time-series forecasting. Reservoir construction in standard ESNs relies on trial and error in real applications, due to a series of randomized model-building stages. A novel form of ESN with a deterministically constructed reservoir is competitive with the standard ESN, offering minimal complexity and the possibility of optimizing ESN specifications. In this paper, the forecasting performance of deterministic ESNs is investigated in stock price prediction applications. The experimental results on two benchmark datasets (Shanghai Composite Index and S&P500) demonstrate that deterministic ESNs outperform the standard ESN in both accuracy and efficiency, indicating the promise of deterministic ESNs for financial prediction.
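A deterministically constructed reservoir of the sort referred to here, for example a simple cycle of identical weights as used in minimum-complexity ESNs, can be sketched as below; the reservoir size, weight values and input-sign pattern are illustrative assumptions, not the paper's configuration.

```python
import math

def cycle_reservoir_step(state, u, r=0.5, v=0.9):
    """One update of a deterministic simple-cycle reservoir: unit i receives
    the previous unit's state (state[i-1]; index -1 closes the cycle) scaled
    by a single recurrent weight r, plus the input u scaled by an input
    weight of fixed magnitude v with a fixed deterministic sign pattern."""
    return [math.tanh(r * state[i - 1] + (v if i % 2 == 0 else -v) * u)
            for i in range(len(state))]

# Drive an 8-unit reservoir with a short input sequence.
state = [0.0] * 8
for u in [0.1, -0.2, 0.3]:
    state = cycle_reservoir_step(state, u)
```

Because every weight is fixed by construction, two runs of this reservoir on the same input are identical, which removes the trial-and-error over random reservoir draws that the abstract criticizes; only the linear readout would remain to be trained.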
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
2017-01-01
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are chi-squared distributed....
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are chi-squared distributed....
Method to deterministically study photonic nanostructures in different experimental instruments
Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the
Pseudo-random number generator based on asymptotic deterministic randomness
Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming
2008-06-01
A novel approach to generating a pseudorandom-bit sequence from an asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic multi-value correspondence of the asymptotic deterministic randomness constructed from a piecewise linear map and a noninvertible nonlinear transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness are investigated numerically, such as the stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
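A toy pseudorandom-bit generator driven by a piecewise linear (tent) map, loosely in the spirit of the construction described, could look like the sketch below; the map parameter, seed and thresholding are illustrative, and the noninvertible transform and security analysis of the Letter are omitted.

```python
def tent_map(x, mu=1.9999):
    """Piecewise linear tent map on [0, 1]; mu just below 2 avoids the
    finite-precision collapse to 0 that mu = 2 exhibits in floating point."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def prbg_bits(seed, n):
    """Generate n pseudorandom bits by thresholding the tent-map orbit."""
    x, bits = seed, []
    for _ in range(n):
        x = tent_map(x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

bits = prbg_bits(0.123456789, 1000)
```

A bare thresholded chaotic orbit like this is known to be cryptographically weak, which is precisely the kind of weakness the Letter's noninvertible transform is meant to address; the sketch only shows the chaotic source, not the hardening step.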
Pseudo-random number generator based on asymptotic deterministic randomness
International Nuclear Information System (INIS)
Wang Kai; Pei Wenjiang; Xia Haishan; Cheung Yiuming
2008-01-01
A novel approach to generating a pseudorandom-bit sequence from an asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic multi-value correspondence of the asymptotic deterministic randomness constructed from a piecewise linear map and a noninvertible nonlinear transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness are investigated numerically, such as the stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks
Non deterministic finite automata for power systems fault diagnostics
Directory of Open Access Journals (Sweden)
LINDEN, R.
2009-06-01
Full Text Available This paper introduces an application based on finite non-deterministic automata for power systems diagnosis. Automata for the simpler faults are presented and the proposed system is compared with an established expert system.
Transmission power control in WSNs : from deterministic to cognitive methods
Chincoli, M.; Liotta, A.; Gravina, R.; Palau, C.E.; Manso, M.; Liotta, A.; Fortino, G.
2018-01-01
Communications in Wireless Sensor Networks (WSNs) are affected by dynamic environments, variable signal fluctuations and interference. Thus, prompt actions are necessary to achieve dependable communications and meet Quality of Service (QoS) requirements. To this end, the deterministic algorithms
The probabilistic approach and the deterministic licensing procedure
International Nuclear Information System (INIS)
Fabian, H.; Feigel, A.; Gremm, O.
1984-01-01
If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks ''What does a safe plant look like?'' The answer cannot be given by a probabilistic procedure, but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are incomplete, not detailed enough or inconsistent, and where additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But also in these cases the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)
Ensemble annealing of complex physical systems
Habeck, Michael
2015-01-01
Algorithms for simulating complex physical systems or solving difficult optimization problems often resort to an annealing process. Rather than simulating the system at the temperature of interest, an annealing algorithm starts at a temperature that is high enough to ensure ergodicity and gradually decreases it until the destination temperature is reached. This idea is used in popular algorithms such as parallel tempering and simulated annealing. A general problem with annealing methods is th...
Pattern Laser Annealing by a Pulsed Laser
Komiya, Yoshio; Hoh, Koichiro; Murakami, Koichi; Takahashi, Tetsuo; Tarui, Yasuo
1981-10-01
Preliminary experiments with contact-type pattern laser annealing were made for local polycrystallization of a-Si, local evaporation of a-Si and local formation of Ni-Si alloy. These experiments showed that the mask patterns can be replicated as annealed regions with a resolution of a few microns on substrates. To overcome shortcomings due to the contact type pattern annealing, a projection type reduction pattern laser annealing system is proposed for resistless low temperature pattern forming processes.
Local deterministic theory surviving the violation of Bell's inequalities
International Nuclear Information System (INIS)
Cormier-Delanoue, C.
1984-01-01
Bell's theorem, which asserts that no deterministic theory with hidden variables can give the same predictions as quantum theory, is questioned. Such a deterministic theory is presented and carefully applied to real experiments performed on pairs of correlated photons, derived from the EPR thought experiment. The ensuing predictions violate Bell's inequalities just as quantum mechanics does, and it is further shown that this discrepancy originates in the very nature of radiation. Complete locality is therefore restored, while separability remains more limited [fr
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research
Rapid thermal annealing of phosphorus implanted silicon
International Nuclear Information System (INIS)
Lee, Y.H.; Pogany, A.; Harrison, H.B.; Williams, J.S.
1985-01-01
Rapid thermal annealing (RTA) of phosphorus-implanted silicon has been investigated by four point probe, Van der Pauw methods and transmission electron microscopy. The results have been compared to furnace annealing. Experiments show that RTA, even at temperatures as low as 605 deg C, results in good electrical properties with little remnant damage and compares favourably with furnace annealing
Computational Multiqubit Tunnelling in Programmable Quantum Annealers
2016-08-25
Received 3 Jun 2015 | Accepted 26 Nov 2015 | Published 7 Jan 2016. Quantum tunnelling has been hypothesized as an advantageous physical resource for optimization in quantum annealing. We show that multiqubit tunnelling plays a computational role in a currently available programmable quantum annealer. We devise a probe for tunnelling, a computational...
Deterministic chaos in the pitting phenomena of passivable alloys
International Nuclear Information System (INIS)
Hoerle, Stephane
1998-01-01
It was shown that electrochemical noise recorded under stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the material/solution severity. Thus, electrolyte composition (the [Cl⁻]/[NO₃⁻] ratio, pH), passive film thickness or alloy composition can change the deterministic features. A single pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which hints at a change of the pit behavior with time (propagation speed or mean). Modifications of the electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors with different features (bifurcations) are also evidenced. Such bifurcations illuminate various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effect of the various parameters quite well. The analysis of the chaotic behaviors of a pit leads to a better understanding of the propagation mechanisms and provides tools for pit monitoring. (author) [fr
Directory of Open Access Journals (Sweden)
Gregorius Satia Budhi
2003-01-01
Full Text Available Flexible Manufacturing System (FMS) is a manufacturing system formed from several numerically controlled machines combined with a material handling system, so that different jobs can be processed by different machine sequences. FMS combines the high productivity and flexibility of the Transfer Line and Job Shop manufacturing systems. In this research, an Activity-Based Costing (ABC) approach was used as the weight for searching the operation route on the proper machine, so that the total production cost can be optimized. The search method used in this experiment is Simulated Annealing, a variant of the Hill Climbing search method. An ideal operation time to process a part was used as the annealing schedule. The empirical tests proved that the use of the ABC approach and Simulated Annealing to search the routes (the routing process) can optimize the total production cost, and that the use of the ideal part-processing time as the annealing schedule controls the processing time well.
Very fast simulated re-annealing
L. Ingber
1989-01-01
An algorithm is developed to statistically find the best global fit of a nonlinear non-convex cost function over a D-dimensional space. It is argued that this algorithm permits an annealing schedule for the ''temperature'' T decreasing exponentially in annealing-time k, T = T₀ exp(−c k^(1/D)). The introduction of re-annealing also permits adaptation to changing sensitivities in the multidimensional parameter space. This annealing schedule is faster than fast Cauchy annealing, ...
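The quoted schedule T = T0 exp(−c k^(1/D)) can be computed directly; the constants below are illustrative.

```python
import math

def vfsr_temperature(k, d, t0=1.0, c=1.0):
    """Very fast re-annealing temperature at annealing-time k in a
    D-dimensional parameter space: T = T0 * exp(-c * k**(1/D))."""
    return t0 * math.exp(-c * k ** (1.0 / d))

# The k**(1/D) exponent slows the decay as the dimension grows.
low_d = vfsr_temperature(100, d=1)   # decays like exp(-100)
high_d = vfsr_temperature(100, d=4)  # decays like exp(-100**0.25)
```

The point of the k^(1/D) exponent is that the schedule remains exponential in annealing-time yet slows down enough in high dimension to keep the search effective, which is what distinguishes it from the power-law decay of fast Cauchy annealing.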
Computational algorithm for molybdenite concentrate annealing
International Nuclear Information System (INIS)
Alkatseva, V.M.
1995-01-01
A computational algorithm is presented for the annealing of molybdenite concentrate with granulated return dust, and for that of granulated molybdenite concentrate. The algorithm differs from known analogues for annealing sulphide raw materials by including the calculation of the return dust mass in stationary annealing; the latter quantity varies from the return dust mass obtained in the first iteration step. The masses of solid products are determined by the distribution of the concentrate annealing products, including return dust and bentonite. The algorithm is applicable to computations for the annealing of other sulphide materials. 3 refs
Plasma assisted heat treatment: annealing
International Nuclear Information System (INIS)
Brunatto, S F; Guimaraes, N V
2009-01-01
This work describes a new dc plasma application in the metallurgical-mechanical field, called plasma assisted heat treatment, and presents the first results for annealing. Annealing treatments were performed on 90%-reduction cold-rolled niobium samples at 900 °C for 60 min, with two different heating arrangements: (a) a hollow cathode discharge (HCD) configuration and (b) a plasma oven configuration. The evolution of the samples' recrystallization was determined by means of microstructure, microhardness and softening-rate characterization. The results indicate that bombardment by plasma species (ions and neutrals) in the HCD plays an important role in activating the recrystallization process and could lead to technological and economical advantages for the heat treatment of metallic materials. (fast track communication)
Deterministic effects of the ionizing radiation
International Nuclear Information System (INIS)
Raslawski, Elsa C.
2001-01-01
Full text: A deterministic effect is somatic damage that appears when the radiation dose exceeds a minimum value, the 'threshold dose'. Above this threshold dose, the frequency and seriousness of the damage increase with the dose given. Sixteen percent of patients younger than 15 years of age with a diagnosis of cancer have the possibility of a cure. The consequences of cancer treatment in children are very serious, as they are still developing physically and emotionally. The seriousness of the delayed effects of radiation therapy depends on three factors: a) the treatment (dose of radiation, schedule of treatment, time of treatment, beam energy, treatment volume, distribution of the dose, simultaneous chemotherapy, etc.); b) the patient (state of development, patient predisposition, inherent sensitivity of tissue, the presence of other alterations, etc.); c) the tumor (degree of extension or infiltration, mechanical effects, etc.). The effect of radiation on normal tissue is related to cellular activity and the maturity of the irradiated tissue. Children have a mosaic of tissues in different stages of maturity at different moments in time. On the other hand, each tissue has a different pattern of development, so that sequelae differ among the irradiated tissues of the same patient. We should keep in mind that all tissues are affected to some degree. Bone tissue shows damage through growth delay and altered calcification. Damage is small at 10 Gy; between 10 and 20 Gy growth arrest is partial, whereas at doses larger than 20 Gy growth arrest is complete. The central nervous system is the most affected, because radiation injuries produce demyelination, with or without focal or diffuse areas of necrosis in the white matter, causing character alterations, lower IQ and functional level, neurocognitive impairment, etc. The skin is also affected, showing different degrees of erythema such as ulceration and necrosis, different degrees of
Simulated annealing model of acupuncture
Shang, Charles; Szu, Harold
2015-05-01
The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in embryogenesis. Organizers are singular points in growth control. Acupuncture can cause perturbation of a system with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder, which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing effect of the system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger local excitation (analogous to a higher annealing temperature) compared to perturbation at non-singular points (placebo control points). Such difference diminishes as the number of perturbed points increases, due to the wider distribution of the limited self-organizing activity. This model explains the following facts from systematic reviews of acupuncture trials: 1. Properly chosen single-acupoint treatment for a certain disorder can lead to highly repeatable efficacy above placebo. 2. When multiple acupoints are used, the result can be highly repeatable if the patients are relatively healthy and young, but is usually mixed if the patients are old, frail and have multiple disorders at the same time, as the number of local optima or comorbidities increases. 3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often diminishes. The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, severity and patient age. This is the first biological-physical model of acupuncture that can predict and guide clinical acupuncture research.
Annealing of ion implanted silicon
International Nuclear Information System (INIS)
Chivers, D.; Smith, B.J.; Stephen, J.; Fisher, M.
1980-09-01
The newer uses of ion implantation require a higher dose rate. This has led to the introduction of high-beam-current implanters, in which the wafers move in front of a stationary beam to give a scanning effect. This can lead to non-uniform heating of the wafer, and the sheet resistance of the layers can be very non-uniform following thermal annealing. Non-uniformity in the effective doping, both over a single wafer and from one wafer to another, can affect the usefulness of ion implantation in high dose rate applications. Experiments have been carried out to determine the extent of non-uniformity in sheet resistance, and to see if it is correlated with the annealing scheme. Details of the implantation parameters are given. It was found that the best results were obtained when layers were annealed at the maximum possible temperature. For arsenic, phosphorus and antimony layers, improvements were observed up to 1200 °C, and for boron up to 950 °C. Usually it is best to heat the layer directly to the maximum temperature to produce the most uniform layer; with phosphorus layers, however, it is better to pre-heat to 1050 °C. (U.K.)
The dialectical thinking about deterministic and probabilistic safety analysis
International Nuclear Information System (INIS)
Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong
2005-01-01
There are two methods for designing and analysing the safety performance of a nuclear power plant: the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants has been based on the deterministic method, and it has been proved in practice that the deterministic method is effective for current nuclear power plants. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the systems and constructions of the plant. PSA can be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the above two methods are briefly reviewed and summarized. Based on the discussion of two application cases - one the changes to specific design provisions of the general design criteria (GDC), and the other the risk-informed categorization of structures, systems and components - it can be concluded that the deterministic and probabilistic methods are dialectical and unified, that they are gradually merging into each other, and that they are being used in coordination. (authors)
Strong white photoluminescence from annealed zeolites
International Nuclear Information System (INIS)
Bai, Zhenhua; Fujii, Minoru; Imakita, Kenji; Hayashi, Shinji
2014-01-01
The optical properties of zeolites annealed at various temperatures are investigated for the first time. The annealed zeolites exhibit strong white photoluminescence (PL) under ultraviolet light excitation. With increasing annealing temperature, the emission intensity of annealed zeolites first increases and then decreases. At the same time, the PL peak red-shifts from 495 nm to 530 nm, and then returns to 500 nm. The strongest emission appears when the annealing temperature is 500 °C. The quantum yield of the sample is measured to be ∼10%. The PL lifetime monotonically increases from 223 μs to 251 μs with increasing annealing temperature. The origin of white PL is ascribed to oxygen vacancies formed during the annealing process. -- Highlights: • The optical properties of zeolites annealed at various temperatures are investigated. • The annealed zeolites exhibit strong white photoluminescence. • The maximum PL enhancement reaches as large as 62 times. • The lifetime shows little dependence on annealing temperature. • The origin of white emission is ascribed to the oxygen vacancies
Kalscheuer, Vera M.; Hennig, Friederike; Leonard, Helen; Downs, Jenny; Clarke, Angus; Benke, Tim A.; Armstrong, Judith; Pineda, Mercedes; Bailey, Mark E.S.; Cobb, Stuart R.
2017-01-01
Objective: To provide new insights into the interpretation of genetic variants in a rare neurologic disorder, CDKL5 deficiency, in the contexts of population sequencing data and an updated characterization of the CDKL5 gene. Methods: We analyzed all known potentially pathogenic CDKL5 variants by combining data from large-scale population sequencing studies with CDKL5 variants from new and all available clinical cohorts and combined this with computational methods to predict pathogenicity. Results: The study has identified several variants that can be reclassified as benign or likely benign. With the addition of novel CDKL5 variants, we confirm that pathogenic missense variants cluster in the catalytic domain of CDKL5 and reclassify a purported missense variant as having a splicing consequence. We provide further evidence that missense variants in the final 3 exons are likely to be benign and not important to disease pathology. We also describe benign splicing and nonsense variants within these exons, suggesting that isoform hCDKL5_5 is likely to have little or no neurologic significance. We also use the available data to make a preliminary estimate of minimum incidence of CDKL5 deficiency. Conclusions: These findings have implications for genetic diagnosis, providing evidence for the reclassification of specific variants previously thought to result in CDKL5 deficiency. Together, these analyses support the view that the predominant brain isoform in humans (hCDKL5_1) is crucial for normal neurodevelopment and that the catalytic domain is the primary functional domain. PMID:29264392
Learning to Act: Qualitative Learning of Deterministic Action Models
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2017-01-01
In this article we study learnability of fully observable, universally applicable action models of dynamic epistemic logic. We introduce a framework for actions seen as sets of transitions between propositional states and we relate them to their dynamic epistemic logic representations as action...... in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update......, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the aforementioned classes of actions and analyse
Deterministic and stochastic CTMC models from Zika disease transmission
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
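The contrast between the two modelling approaches can be illustrated on a much simpler SIR-type system than the Zika model of the paper. The sketch below (all parameter values are illustrative choices, not taken from the abstract) integrates the deterministic equations with forward Euler and estimates, from Gillespie realisations of the corresponding CTMC, the probability that the infection dies out without a major outbreak; for a single initial infective, branching-process theory predicts this probability to be roughly γ/β.

```python
import math
import random

def sir_deterministic(beta, gamma, S, I, R, dt=0.01, t_end=50.0):
    """Forward-Euler integration of the deterministic SIR equations."""
    N = S + I + R
    t = 0.0
    while t < t_end:
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        t += dt
    return S, I, R

def minor_outbreak(beta, gamma, S, I, rng):
    """One Gillespie realisation of the SIR CTMC; True if the infection
    goes extinct after fewer than 20 total new infections."""
    N = S + I
    infections = 0
    while I > 0:
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        rng.expovariate(rate_inf + rate_rec)  # exponential waiting time (clock unused here)
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S, I, infections = S - 1, I + 1, infections + 1
        else:
            I -= 1
    return infections < 20

S_f, I_f, R_f = sir_deterministic(beta=0.5, gamma=0.25, S=99, I=1, R=0)
rng = random.Random(1)
p_ext = sum(minor_outbreak(0.5, 0.25, 99, 1, rng) for _ in range(500)) / 500
```

With β/γ = R0 = 2, the deterministic model always predicts one major epidemic, while roughly half of the stochastic realisations die out early — exactly the qualitative difference between the two approaches that the abstract emphasises.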
Efficient Integrative Multi-SNP Association Analysis via Deterministic Approximation of Posteriors.
Wen, Xiaoquan; Lee, Yeji; Luca, Francesca; Pique-Regi, Roger
2016-06-02
With the increasing availability of functional genomic data, incorporating genomic annotations into genetic association analysis has become a standard procedure. However, the existing methods often lack rigor and/or computational efficiency and consequently do not maximize the utility of functional annotations. In this paper, we propose a rigorous inference procedure to perform integrative association analysis incorporating genomic annotations for both traditional GWASs and emerging molecular QTL mapping studies. In particular, we propose an algorithm, named deterministic approximation of posteriors (DAP), which enables highly efficient and accurate joint enrichment analysis and identification of multiple causal variants. We use a series of simulation studies to highlight the power and computational efficiency of our proposed approach and further demonstrate it by analyzing the cross-population eQTL data from the GEUVADIS project and the multi-tissue eQTL data from the GTEx project. In particular, we find that genetic variants predicted to disrupt transcription factor binding sites are enriched in cis-eQTLs across all tissues. Moreover, the enrichment estimates obtained across the tissues are correlated with the cell types for which the annotations are derived. Copyright © 2016 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Quantum annealing for combinatorial clustering
Kumar, Vaibhaw; Bass, Gideon; Tomlin, Casey; Dulny, Joseph
2018-02-01
Clustering is a powerful machine learning technique that groups "similar" data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-the-cluster distances between points. The straightforward approach involves examining all the possible assignments of points to each of the clusters. This approach guarantees the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time-consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic binary optimization problem and discuss two clustering algorithms which are then implemented on commercially available quantum annealing hardware, as well as on a purely classical solver "qbsolv." The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
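The mapping described above can be made concrete on a toy instance. The sketch below is illustrative only (the paper's exact encoding and the annealer/qbsolv interfaces are not reproduced): it builds the one-hot QUBO for assigning N points to K clusters and minimises it by brute force, which is the same binary objective a quantum annealer would be handed.

```python
import itertools
import math

def qubo_cluster(points, K, lam=10.0):
    """Brute-force minimiser of the clustering QUBO
    E(x) = sum_{i<j,k} d_ij * x_ik * x_jk + lam * sum_i (sum_k x_ik - 1)^2,
    where x_ik = 1 iff point i is assigned to cluster k.  The quadratic
    penalty enforces the one-hot (each point in exactly one cluster) constraint."""
    N = len(points)
    d = [[math.dist(p, q) for q in points] for p in points]
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=N * K):
        x = [bits[i * K:(i + 1) * K] for i in range(N)]
        e = sum(d[i][j] * x[i][k] * x[j][k]
                for i in range(N) for j in range(i + 1, N) for k in range(K))
        e += lam * sum((sum(row) - 1) ** 2 for row in x)
        if e < best_e:
            best_x, best_e = x, e
    return [row.index(1) for row in best_x]

labels = qubo_cluster([(0, 0), (0, 1), (5, 0), (5, 1)], K=2)
```

On this instance the minimum-energy assignment groups the two left points together and the two right points together; the exhaustive search over 2^(N·K) states is exactly what becomes intractable and motivates annealing-based solvers.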
Loviisa Unit One: Annealing - healing
Energy Technology Data Exchange (ETDEWEB)
Kohopaeae, J.; Virsu, R. [ed.]; Henriksson, A. [ed.]
1997-11-01
Unit 1 of the Loviisa nuclear power plant was annealed in connection with the refuelling outage in the summer of 1996. This type of heat treatment restored the toughness properties of the pressure vessel weld, which had been embrittled by neutron radiation, so that it is almost equivalent to a new weld. The treatment itself was an ordinary metallurgical procedure that took only a few days, but the material studies that preceded it began over fifteen years ago and have put IVO at the forefront of world-wide expertise in the area of radiation embrittlement
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
Towards deterministic optical quantum computation with coherently driven atomic ensembles
International Nuclear Information System (INIS)
Petrosyan, David
2005-01-01
Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons
Deterministic and efficient quantum cryptography based on Bell's theorem
International Nuclear Information System (INIS)
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-01-01
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology
Simulated annealing with constant thermodynamic speed
International Nuclear Information System (INIS)
Salamon, P.; Ruppeiner, G.; Liao, L.; Pedersen, J.
1987-01-01
Arguments are presented to the effect that the optimal annealing schedule for simulated annealing proceeds with constant thermodynamic speed, i.e., with dT/dt = -vT/(ε√C), where T is the temperature, ε is the relaxation time, C is the heat capacity, t is the time, and v is the thermodynamic speed. Experimental results consistent with this conjecture are presented from simulated annealing on graph partitioning problems. (orig.)
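A minimal sketch of such a schedule, assuming unit time steps and a toy two-level system whose heat capacity is estimated from sampled energy fluctuations (C = Var(E)/T²). The half-step clamp is a numerical safeguard against a noisy, near-zero C estimate, not part of the original prescription:

```python
import math
import random

def heat_capacity(energies, T):
    """C = Var(E) / T^2, estimated from sampled energies."""
    m = sum(energies) / len(energies)
    return sum((e - m) ** 2 for e in energies) / len(energies) / T ** 2

def constant_speed_schedule(T0, v, eps, steps, sample_energies):
    """Forward-Euler integration of dT/dt = -v*T/(eps*sqrt(C)) with dt = 1.
    The max() clamp never lets T drop by more than half in one step."""
    T, schedule = T0, [T0]
    for _ in range(steps):
        C = max(heat_capacity(sample_energies(T), T), 1e-12)
        T = max(T - v * T / (eps * math.sqrt(C)), 0.5 * T)
        schedule.append(T)
    return schedule

# Toy system: a two-level system with energies 0 and 1.
rng = random.Random(0)
def sampler(T, n=2000):
    p1 = math.exp(-1.0 / T) / (1.0 + math.exp(-1.0 / T))
    return [1.0 if rng.random() < p1 else 0.0 for _ in range(n)]

sched = constant_speed_schedule(T0=2.0, v=0.05, eps=1.0, steps=50,
                                sample_energies=sampler)
```

Because the cooling rate is proportional to 1/√C, the schedule automatically slows down where the heat capacity (and hence the risk of falling out of equilibrium) is large.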
Temperature Scaling Law for Quantum Annealing Optimizers.
Albash, Tameem; Martin-Mayor, Victor; Hen, Itay
2017-09-15
Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite-temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that, to serve as optimizers, annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner, but possibly also as a power law, with problem size. We corroborate our results by experiment and simulations and discuss their implications for practical annealers.
Deterministic Predictions of Vessel Responses Based on Past Measurements
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam; Jensen, Jørgen Juncher
2017-01-01
The paper deals with a prediction procedure from which global wave-induced responses can be deterministically predicted a short time, 10-50 s, ahead of current time. The procedure relies on the autocorrelation function and takes into account prior measurements only; i.e. knowledge about wave...
About the Possibility of Creation of a Deterministic Unified Mechanics
International Nuclear Information System (INIS)
Khomyakov, G.K.
2005-01-01
The possibility of creating a unified deterministic scheme of classical and quantum mechanics, allowing their achievements to be preserved, is discussed. It is shown that the canonical system of ordinary differential equations of Hamiltonian classical mechanics can be supplemented with a vector system of ordinary differential equations for the variables of the equations. The interpretational problems of quantum mechanics are considered
Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Niels Jacob
1994-01-01
An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated and has to be considered for engineering application. Stochastic models have a potential for ...
The State of Deterministic Thinking among Mothers of Autistic Children
Directory of Open Access Journals (Sweden)
Mehrnoush Esbati
2011-10-01
Full Text Available Objectives: The purpose of the present study was to investigate the effectiveness of cognitive-behavior education on decreasing deterministic thinking in mothers of children with autism spectrum disorders. Methods: Participants were 24 mothers of autistic children who were referred to counseling centers of Tehran and whose children's disorder had been diagnosed by at least a psychiatrist and a counselor. They were randomly selected and assigned into control and experimental groups. The measurement tool was the Deterministic Thinking Questionnaire; both groups answered it before and after education, and the answers were analyzed by analysis of covariance. Results: The results indicated that cognitive-behavior education decreased deterministic thinking among mothers of autistic children; it also decreased four subscales of deterministic thinking: interaction with others, absolute thinking, prediction of the future, and negative events (P<0.05). Discussion: By learning cognitive and behavioral techniques, parents of children with autism can reach a higher level of psychological well-being, and it is likely that these cognitive-behavioral skills would have a positive impact on the general life satisfaction of mothers of children with autism.
Deterministic multimode photonic device for quantum-information processing
DEFF Research Database (Denmark)
Nielsen, Anne E. B.; Mølmer, Klaus
2010-01-01
We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...
Deterministic Chaos - Complex Chance out of Simple Necessity ...
Indian Academy of Sciences (India)
This is a very lucid and lively book on deterministic chaos. Chaos is very common in nature. However, the understanding and realisation of its potential applications is very recent. Thus this book is a timely addition to the subject. There are several books on chaos and several more are being added every day. In spite of this ...
Nonlinear deterministic structures and the randomness of protein sequences
Huang Yan Zhao
2003-01-01
To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences representing each structural class by using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also give an explanation for the controversial results obtained in previous investigations.
Line and lattice networks under deterministic interference models
Goseling, Jasper; Gastpar, Michael; Weber, Jos H.
Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-by-day shielding calculation using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
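The kind of day-by-day deterministic estimate compared here reduces, for a point source behind a slab, to an exponential attenuation term times a build-up factor. A hedged sketch follows; the linear build-up form and all numerical values are illustrative choices, not MicroShield's tabulated data:

```python
import math

def point_source_flux(S, mu, x, r, buildup=lambda mux: 1.0 + mux):
    """Deterministic point-kernel estimate for a point isotropic source
    behind a slab: phi = S * B(mu*x) * exp(-mu*x) / (4*pi*r^2).
    The linear build-up B = 1 + mu*x is an illustrative stand-in for
    tabulated (Taylor, Berger, geometric-progression, ...) factors."""
    mux = mu * x
    return S * buildup(mux) * math.exp(-mux) / (4.0 * math.pi * r ** 2)

# Illustrative numbers: 1e6 photons/s, mu = 0.06 /cm, 10 cm slab, r = 100 cm
phi_uncollided = point_source_flux(1e6, 0.06, 10.0, 100.0, buildup=lambda m: 1.0)
phi_total = point_source_flux(1e6, 0.06, 10.0, 100.0)
ratio = phi_total / phi_uncollided  # equals B(0.6) = 1.6 for the linear form
```

The sensitivity the abstract mentions is visible directly: with μx = 0.6 the build-up correction already changes the answer by 60%, and an extrapolated or mismatched B(μx) propagates straight into the shielding result.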
Deterministic teleportation using single-photon entanglement as a resource
DEFF Research Database (Denmark)
Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.
2012-01-01
We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...
Empirical and deterministic accuracies of across-population genomic prediction
Wientjes, Y.C.J.; Veerkamp, R.F.; Bijma, P.; Bovenhuis, H.; Schrooten, C.; Calus, M.P.L.
2015-01-01
Background: Differences in linkage disequilibrium and in allele substitution effects of QTL (quantitative trait loci) may hinder genomic prediction across populations. Our objective was to develop a deterministic formula to estimate the accuracy of across-population genomic prediction, for which
A Deterministic Approach to the Synchronization of Cellular Automata
Garcia, J.; Garcia, P.
2011-01-01
In this work we introduce a deterministic scheme of synchronization of linear and nonlinear cellular automata (CA) with complex behavior, connected through a master-slave coupling. Using a definition of the Boolean derivative, we employ the linear approximation of the automata to determine a coupling function that promotes synchronization without perturbing all the sites of the slave system.
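For a linear CA the effect of such a master-slave coupling is easy to demonstrate: because rule 90 is additive over GF(2), the difference field between master and slave obeys the same rule, and pinning every second cell drives it to zero. A minimal sketch (the rule choice and coupling density are illustrative, not the paper's general scheme):

```python
import random

def rule90_step(state):
    """One synchronous update of elementary CA rule 90 on a ring:
    x_i <- x_{i-1} XOR x_{i+1} (linear over GF(2))."""
    n = len(state)
    return [state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n)]

def master_slave(n=32, couple_every=2, steps=10, seed=3):
    """After each update the slave copies the master at every
    couple_every-th cell (n should be even for couple_every=2)."""
    rng = random.Random(seed)
    master = [rng.randint(0, 1) for _ in range(n)]
    slave = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        master, slave = rule90_step(master), rule90_step(slave)
        for i in range(0, n, couple_every):
            slave[i] = master[i]
    return master, slave

m, s = master_slave()
```

After the first update-and-copy the difference vanishes on the pinned (even) cells; on the next step each unpinned cell is the XOR of two already-synchronized neighbours, so the two automata coincide exactly from then on.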
Deterministic and Stochastic Study of Wind Farm Harmonic Currents
DEFF Research Database (Denmark)
Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus
2010-01-01
Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic char...
Mixed motion in deterministic ratchets due to anisotropic permeability
Kulrattanarak, T.; Sman, van der R.G.M.; Lubbersen, Y.S.; Schroën, C.G.P.H.; Pham, H.T.M.; Sarro, P.M.; Boom, R.M.
2011-01-01
Nowadays microfluidic devices are becoming popular for cell/DNA sorting and fractionation. One class of these devices, namely deterministic ratchets, seems most promising for continuous fractionation applications of suspensions (Kulrattanarak et al., 2008 [1]). Next to the two main types of particle
Simulation of Quantum Computation : A Deterministic Event-Based Approach
Michielsen, K.; Raedt, K. De; Raedt, H. De
2005-01-01
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and
Using a satisfiability solver to identify deterministic finite state automata
Heule, M.J.H.; Verwer, S.
2009-01-01
We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we
Deterministic mean-variance-optimal consumption and investment
DEFF Research Database (Denmark)
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...
Simulation of photonic waveguides with deterministic aperiodic nanostructures for biosensing
DEFF Research Database (Denmark)
Neustock, Lars Thorben; Paulsen, Moritz; Jahns, Sabrina
2016-01-01
Photonic waveguides with deterministic aperiodic corrugations offer rich spectral characteristics under surface-normal illumination. The finite-element method (FEM), the finite-difference time-domain (FDTD) method and a rigorous coupled wave algorithm (RCWA) are compared for computing the near...
Langevin equation with the deterministic algebraically correlated noise
International Nuclear Information System (INIS)
Ploszajczak, M.; Srokowski, T.
1995-01-01
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author)
Deterministic dense coding and faithful teleportation with multipartite graph states
International Nuclear Information System (INIS)
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-01-01
We propose schemes to perform the deterministic dense coding and faithful teleportation with multipartite graph states. We also find the sufficient and necessary condition of a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
Deterministic algorithms for multi-criteria Max-TSP
Manthey, Bodo
2012-01-01
We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of
Deterministic Role of Collision Cascade Density in Radiation Defect Dynamics in Si
Wallace, J. B.; Aji, L. B. Bayu; Shao, L.; Kucheyev, S. O.
2018-05-01
The formation of stable radiation damage in solids often proceeds via complex dynamic annealing (DA) processes, involving point defect migration and interaction. The dependence of DA on irradiation conditions remains poorly understood even for Si. Here, we use a pulsed ion beam method to study defect interaction dynamics in Si bombarded in the temperature range from ~−30 °C to 210 °C with ions in a wide range of masses, from Ne to Xe, creating collision cascades with different densities. We demonstrate that the complexity of the influence of irradiation conditions on defect dynamics can be reduced to a deterministic effect of a single parameter, the average cascade density, calculated by taking into account the fractal nature of collision cascades. For each ion species, the DA rate exhibits two well-defined Arrhenius regions where different DA mechanisms dominate. These two regions intersect at a critical temperature, which depends linearly on the cascade density. The low-temperature DA regime is characterized by an activation energy of ~0.1 eV, independent of the cascade density. The high-temperature regime, however, exhibits a change in the dominant DA process for cascade densities above ~0.04 at.%, evidenced by an increase in the activation energy. These results clearly demonstrate a crucial role of the collision cascade density and can be used to predict radiation defect dynamics in Si.
Variants of cellobiohydrolases
Energy Technology Data Exchange (ETDEWEB)
Bott, Richard R.; Foukaraki, Maria; Hommes, Ronaldus Wilhelmus; Kaper, Thijs; Kelemen, Bradley R.; Kralj, Slavko; Nikolaev, Igor; Sandgren, Mats; Van Lieshout, Johannes Franciscus Thomas; Van Stigt Thans, Sander
2018-04-10
Disclosed are a number of homologs and variants of Hypocrea jecorina Cel7A (formerly Trichoderma reesei cellobiohydrolase I or CBH1), nucleic acids encoding the same and methods for producing the same. The homologs and variant cellulases have the amino acid sequence of a glycosyl hydrolase of family 7A wherein one or more amino acid residues are substituted and/or deleted.
GPU accelerated population annealing algorithm
Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.
2017-11-01
Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures.
Program files doi: http://dx.doi.org/10.17632/sgzt4b7b3m.1
Licensing provisions: Creative Commons Attribution license (CC BY 4.0)
Programming language: C, CUDA
External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer
Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β.
Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding and adaptive temperature steps.
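The essentials of population annealing (reweighting-based resampling followed by Metropolis updates at each temperature step) can be sketched in plain Python for a tiny 2D Ising lattice. This is an illustrative toy, not the GPU code described above; the lattice size, population size and temperature grid are arbitrary choices for the sketch.

```python
import numpy as np

def energy(s):
    # Total 2D Ising energy with periodic boundaries (J = 1):
    # each nearest-neighbour bond counted once via rolled arrays.
    return -np.sum(s * np.roll(s, 1, axis=0)) - np.sum(s * np.roll(s, 1, axis=1))

def metropolis_sweep(s, beta, rng):
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        h = (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
             s[i, (j + 1) % L] + s[i, (j - 1) % L])
        dE = 2 * s[i, j] * h           # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

def population_annealing(L=8, R=50, n_steps=20, beta_max=1.0, seed=1):
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, beta_max, n_steps + 1)
    pop = [rng.choice([-1, 1], size=(L, L)) for _ in range(R)]
    for b_prev, b in zip(betas[:-1], betas[1:]):
        E = np.array([energy(s) for s in pop])
        w = np.exp(-(b - b_prev) * (E - E.min()))   # reweighting factors
        idx = rng.choice(R, size=R, p=w / w.sum())  # population control
        pop = [pop[k].copy() for k in idx]
        for s in pop:                               # Markov-chain updates
            metropolis_sweep(s, b, rng)
    return np.mean([energy(s) for s in pop]) / L**2

e_per_spin = population_annealing()
```

At β = 1, deep in the ordered phase, the population's mean energy per spin should lie well below the high-temperature value of zero, approaching −2 with more sweeps per step.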
Laser annealing of ion implanted silicon
International Nuclear Information System (INIS)
White, C.W.; Narayan, J.; Young, R.T.
1978-11-01
The physical and electrical properties of ion implanted silicon annealed with high powered ruby laser radiation are summarized. Results show that pulsed laser annealing can lead to a complete removal of extended defects in the implanted region accompanied by incorporation of dopants into lattice sites even when their concentration far exceeds the solid solubility limit
Modernizing quantum annealing using local searches
International Nuclear Information System (INIS)
Chancellor, Nicholas
2017-01-01
I describe how real quantum annealers may be used to perform local (in state space) searches around specified states, rather than the global searches traditionally implemented in the quantum annealing algorithm (QAA). Such protocols will have numerous advantages over simple quantum annealing. By using such searches the effect of problem mis-specification can be reduced, as only energy differences between the searched states will be relevant. The QAA is an analogue of simulated annealing, a classical numerical technique which has now been superseded. Hence, I explore two strategies to use an annealer in a way which takes advantage of modern classical optimization algorithms. Specifically, I show how sequential calls to quantum annealers can be used to construct analogues of population annealing and parallel tempering which use quantum searches as subroutines. The techniques given here can be applied not only to optimization, but also to sampling. I examine the feasibility of these protocols on real devices and note that implementing such protocols should require minimal if any change to the current design of the flux qubit-based annealers by D-Wave Systems Inc. I further provide proof-of-principle numerical experiments based on quantum Monte Carlo that demonstrate simple examples of the discussed techniques. (paper)
Annealed star-branched polyelectrolytes in solution
Klein Wolterink, J.; Male, van J.; Cohen Stuart, M.A.; Koopal, L.K.; Zhulina, E.B.; Borisov, O.V.
2002-01-01
Equilibrium conformations of annealed star-branched polyelectrolytes (polyacids) are calculated with a numerical self-consistent-field (SCF) model. From the calculations we also obtain the size and charge of annealed polyelectrolyte stars as a function of the number of arms, pH, and the ionic
Understanding the microwave annealing of silicon
Directory of Open Access Journals (Sweden)
Chaochao Fu
2017-03-01
Though microwave annealing appears very appealing due to its unique features, the lack of an in-depth understanding and of an accurate model hinders its application in semiconductor processing. In this paper, a physics-based model and an accurate calculation for the microwave annealing of silicon are presented. Both thermal effects, including ohmic conduction loss and dielectric polarization loss, and non-thermal effects are thoroughly analyzed. We designed unique experiments to verify the mechanism and extract relevant parameters. We also explicitly illustrate the dynamic interaction processes of the microwave annealing of silicon. This work provides an in-depth understanding that can expedite the application of microwave annealing in semiconductor processing and open the door to implementing microwave annealing in future research and applications.
Reduced annealing temperatures in silicon solar cells
Weinberg, I.; Swartz, C. K.
1981-01-01
Cells irradiated to a fluence of 5×10¹³/cm² showed short-circuit current recovery on annealing at 200 °C, with complete annealing occurring at 275 °C. Cells irradiated to 10¹⁴/cm² showed a reduction in annealing temperature from the usual 500 °C to 300 °C. Annealing kinetics studies yield an activation energy of (1.5 ± 0.2) eV for the low-fluence, low-temperature anneal. Comparison with previously obtained activation energies indicates that the present value is consistent with the presence of either the divacancy or the carbon-interstitial carbon-substitutional pair, a result which agrees with the conclusion based on defect behavior in boron-doped silicon.
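The annealing-kinetics step rests on the Arrhenius relation, rate ∝ exp(−Ea/kBT). A small sketch (with synthetic rates constructed to be consistent with a 1.5 eV activation energy; all numbers illustrative) shows how an activation energy is extracted from anneal rates at two temperatures:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(T1, T2, rate1, rate2):
    # Arrhenius: rate = A * exp(-Ea / (kB * T)); eliminate the prefactor A
    # by taking the ratio of the rates at the two temperatures.
    return K_B * math.log(rate1 / rate2) / (1.0 / T2 - 1.0 / T1)

# Synthetic anneal rates at 200 C and 275 C consistent with Ea = 1.5 eV
T1, T2 = 473.15, 548.15         # kelvin
r1 = math.exp(-1.5 / (K_B * T1))
r2 = math.exp(-1.5 / (K_B * T2))
Ea = activation_energy(T1, T2, r1, r2)
```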
Electrical properties and annealing kinetics study of laser-annealed ion-implanted silicon
International Nuclear Information System (INIS)
Wang, K.L.; Liu, Y.S.; Kirkpatrick, C.G.; Possin, G.E.
1979-01-01
This paper describes measurements of electrical properties and the regrowth behavior of ion-implanted silicon annealed with an 80-ns (FWHM) laser pulse at 1.06 μm. The experimental results include: (1) a determination of threshold energy density required for melting using a transient optical reflectivity technique, (2) measurements of dopant distribution using Rutherford backscattering spectroscopy, (3) characterization of electrical properties by measuring reverse leakage current densities of laser-annealed and thermal-annealed mesa diodes, (4) determination of annealed junction depth using an electron-beam-induced-current technique, and (5) a deep-level-transient spectroscopic study of residual defects. In particular, by measuring these properties of a diode annealed at a condition near the threshold energy density for liquid phase epitaxial regrowth, we have found certain correlations among these various annealing behaviors and electrical properties of laser-annealed ion-implanted silicon diodes
A deterministic-probabilistic model for contaminant transport. User manual
Energy Technology Data Exchange (ETDEWEB)
Schwartz, F W; Crowe, A
1980-08-01
This manual describes a deterministic-probabilistic contaminant transport (DPCT) computer model designed to simulate mass transfer by ground-water movement in a vertical section of the earth's crust. The model can account for convection, dispersion, radioactive decay, and cation exchange for a single component. A velocity is calculated from the convective transport of the ground water for each reference particle in the modeled region; dispersion is accounted for in the particle motion by adding a random component to the deterministic motion. The model is sufficiently general to enable the user to specify virtually any type of water table or geologic configuration, and a variety of boundary conditions. A major emphasis in the model development has been placed on making the model simple to use, and the information provided in the User Manual will permit changes to the computer code to be made relatively easily where required for specific applications. (author)
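The deterministic-probabilistic idea — a deterministic convective displacement plus a random dispersive displacement per reference particle — can be sketched as a one-dimensional random-walk particle-tracking toy (not the DPCT code itself; all parameter values are illustrative):

```python
import numpy as np

def transport(n_particles=2000, n_steps=100, v=1.0, D=0.5, dt=0.1, seed=0):
    """Random-walk particle tracking in 1D: each step adds a deterministic
    convective displacement v*dt and a random dispersive displacement
    drawn from N(0, 2*D*dt) for every reference particle."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)          # all particles released at x = 0
    disp = np.sqrt(2.0 * D * dt)       # dispersive step scale
    for _ in range(n_steps):
        x += v * dt + rng.normal(0.0, disp, n_particles)
    return x

x = transport()
# after t = n_steps*dt = 10: mean position ~ v*t = 10, variance ~ 2*D*t = 10
```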
Deterministic chaos at the ocean surface: applications and interpretations
Directory of Open Access Journals (Sweden)
A. J. Palmer
1998-01-01
Ocean surface, grazing-angle radar backscatter data from two separate experiments, one of which provided coincident time series of measured surface winds, were found to exhibit signatures of deterministic chaos. Evidence is presented that the lowest dimensional underlying dynamical system responsible for the radar backscatter chaos is that which governs the surface wind turbulence. Block-averaging time was found to be an important parameter for determining the degree of determinism in the data as measured by the correlation dimension, and by the performance of an artificial neural network in retrieving wind and stress from the radar returns, and in radar detection of an ocean internal wave. The correlation dimensions are lowered and the performance of the deterministic retrieval and detection algorithms is improved by averaging out the higher dimensional surface wave variability in the radar returns.
Deterministic Properties of Serially Connected Distributed Lag Models
Directory of Open Access Journals (Sweden)
Piotr Nowak
2013-01-01
Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is the dependent variable of the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory, such as probability distributions and the central limit theorem.
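Serial composition of distributed lag models corresponds to convolving the component models' lag distributions, which is also why central-limit-type tools apply. A small numeric sketch with an arbitrary four-lag weight vector:

```python
import numpy as np

w = np.array([0.1, 0.4, 0.3, 0.2])   # lag weights of one component model

# Connecting 10 identical models in series convolves their lag
# distributions; the weights stay normalized and the mean lags add.
composite = w.copy()
for _ in range(9):
    composite = np.convolve(composite, w)

lags = np.arange(composite.size)
mean_lag = float((lags * composite).sum())   # 10 x single-model mean of 1.6
```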
Deterministic Brownian motion generated from differential delay equations.
Lei, Jinzhi; Mackey, Michael C
2011-10-01
This paper addresses the question of how Brownian-like motion can arise from the solution of a deterministic differential delay equation. To study this we analytically study the bifurcation properties of an apparently simple differential delay equation and then numerically investigate the probabilistic properties of chaotic solutions of the same equation. Our results show that solutions of the deterministic equation with randomly selected initial conditions display a Gaussian-like density for long time, but the densities are supported on an interval of finite measure. Using these chaotic solutions as velocities, we are able to produce Brownian-like motions, which show statistical properties akin to those of a classical Brownian motion over both short and long time scales. Several conjectures are formulated for the probabilistic properties of the solution of the differential delay equation. Numerical studies suggest that these conjectures could be "universal" for similar types of "chaotic" dynamics, but we have been unable to prove this.
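A minimal numerical sketch of the idea, using an illustrative delay equation x′(t) = −x(t) + β sin(x(t−τ)) (an assumption for the sketch, not necessarily the authors' exact equation): integrate the DDE with Euler's method and use the irregular, bounded solution as a velocity whose time integral gives the Brownian-like motion.

```python
import numpy as np

def simulate_dde(beta=10.0, tau=1.0, dt=0.01, t_end=200.0, x0=0.5):
    # Euler scheme for x'(t) = -x(t) + beta*sin(x(t - tau)),
    # with a constant history x(t) = x0 on [-tau, 0].
    lag = int(round(tau / dt))
    n = int(round(t_end / dt))
    x = np.empty(n + lag)
    x[:lag + 1] = x0                  # history segment plus x(0)
    for k in range(lag, n + lag - 1):
        x[k + 1] = x[k] + dt * (-x[k] + beta * np.sin(x[k - lag]))
    return x[lag:]

v = simulate_dde()          # bounded, irregular "velocity" signal
pos = np.cumsum(v) * 0.01   # Brownian-like displacement from the velocity
```

The solution stays inside the band |x| ≲ β set by the bounded feedback term, mirroring the finite-measure support of the densities discussed above.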
Progress in nuclear well logging modeling using deterministic transport codes
International Nuclear Information System (INIS)
Kodeli, I.; Aldama, D.L.; Maucec, M.; Trkov, A.
2002-01-01
Further studies, in continuation of the work presented in 2001 in Portoroz, were performed in order to study and improve the performance, precision and domain of application of deterministic transport codes with respect to oil well logging analysis. These codes are in particular expected to complement the Monte Carlo solutions, since they can provide a detailed particle flux distribution in the whole geometry in a very reasonable CPU time; real-time calculation can be envisaged. The performance of deterministic transport methods was compared to that of the Monte Carlo method. The IRTMBA generic benchmark was analysed using the codes MCNP-4C and DORT/TORT. Centric as well as eccentric casings were considered using a 14 MeV point neutron source and NaI scintillation detectors. Neutron and gamma spectra were compared at two detector positions. (author)
Deterministic blade row interactions in a centrifugal compressor stage
Kirtley, K. R.; Beach, T. A.
1991-01-01
The three-dimensional viscous flow in a low speed centrifugal compressor stage is simulated using an average passage Navier-Stokes analysis. The impeller discharge flow is of the jet/wake type with low momentum fluid in the shroud-pressure side corner coincident with the tip leakage vortex. This nonuniformity introduces periodic unsteadiness in the vane frame of reference. The effect of such deterministic unsteadiness on the time-mean is included in the analysis through the average passage stress, which allows the analysis of blade row interactions. The magnitude of the divergence of the deterministic unsteady stress is of the order of the divergence of the Reynolds stress over most of the span, from the impeller trailing edge to the vane throat. Although the potential effects on the blade trailing edge from the diffuser vane are small, strong secondary flows generated by the impeller degrade the performance of the diffuser vanes.
One-step deterministic multipartite entanglement purification with linear optics
Energy Technology Data Exchange (ETDEWEB)
Sheng, Yu-Bo [Department of Physics, Tsinghua University, Beijing 100084 (China); Long, Gui Lu, E-mail: gllong@tsinghua.edu.cn [Department of Physics, Tsinghua University, Beijing 100084 (China); Center for Atomic and Molecular NanoSciences, Tsinghua University, Beijing 100084 (China); Key Laboratory for Quantum Information and Measurements, Beijing 100084 (China); Deng, Fu-Guo [Department of Physics, Applied Optics Beijing Area Major Laboratory, Beijing Normal University, Beijing 100875 (China)
2012-01-09
We present a one-step deterministic multipartite entanglement purification scheme for an N-photon system in a Greenberger–Horne–Zeilinger state with linear optical elements. The parties in quantum communication can in principle obtain a maximally entangled state from each N-photon system with a success probability of 100%. That is, it does not largely consume the less-entangled photon systems, which is far different from other multipartite entanglement purification schemes. This feature may make this scheme more feasible in practical applications. -- Highlights: ► We propose a deterministic entanglement purification scheme for GHZ states. ► The scheme uses only linear optical elements and has a success probability of 100%. ► The scheme gives a purified GHZ state in just one step.
Cylinder packing by simulated annealing
Directory of Open Access Journals (Sweden)
M. Helena Correia
2000-12-01
This paper is motivated by the problem of loading identical items of circular base (tubes, rolls, ...) onto a rectangular base (the pallet). For practical reasons, all the loaded items are considered to have the same height. The resolution of this problem consists in determining the positioning pattern of the circular bases of the items on the rectangular pallet, while maximizing the number of items. This pattern will be repeated for each layer stacked on the pallet. Two algorithms based on the meta-heuristic Simulated Annealing have been developed and implemented. The tuning of these algorithms' parameters required running intensive tests in order to improve their efficiency. The algorithms developed were easily extended to the case of non-identical circles.
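A stripped-down sketch of the approach (not the authors' algorithms): simulated annealing that perturbs circle centres to drive a packing-violation penalty to zero for a fixed number of identical circles. All dimensions and cooling parameters are arbitrary choices for the sketch.

```python
import math, random

def overlap_penalty(centers, r, W, H):
    """Total constraint violation: pairwise circle overlap plus the
    amount by which any circle sticks out of the W x H rectangle."""
    p = 0.0
    for i, (x, y) in enumerate(centers):
        p += max(0.0, r - x) + max(0.0, x - (W - r))
        p += max(0.0, r - y) + max(0.0, y - (H - r))
        for x2, y2 in centers[i + 1:]:
            p += max(0.0, 2 * r - math.hypot(x - x2, y - y2))
    return p

def anneal_packing(n=4, r=1.0, W=6.0, H=4.0, seed=3):
    rng = random.Random(seed)
    centers = [(rng.uniform(r, W - r), rng.uniform(r, H - r)) for _ in range(n)]
    cost = overlap_penalty(centers, r, W, H)
    T = 1.0
    while T > 1e-4 and cost > 0.0:
        i = rng.randrange(n)
        old = centers[i]
        centers[i] = (old[0] + rng.gauss(0, 0.3), old[1] + rng.gauss(0, 0.3))
        new_cost = overlap_penalty(centers, r, W, H)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / T):
            cost = new_cost                 # accept the move
        else:
            centers[i] = old                # reject and restore
        T *= 0.999                          # geometric cooling schedule
    return cost

final_cost = anneal_packing()   # zero penalty means a feasible packing
```

Maximizing the item count, as in the paper, would wrap this inner loop in a search over n; the sketch only shows the annealing core.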
Relationship of Deterministic Thinking With Loneliness and Depression in the Elderly
Directory of Open Access Journals (Sweden)
Mehdi Sharifi
2017-12-01
Conclusion According to the results, it can be said that deterministic thinking has a significant relationship with depression and sense of loneliness in older adults, and acts as a predictor of both. Therefore, psychological interventions for challenging the cognitive distortion of deterministic thinking, and attention to the mental health of older adults, are very important.
Ordinal optimization and its application to complex deterministic problems
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structure and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
Evaluation of Deterministic and Stochastic Components of Traffic Counts
Directory of Open Access Journals (Sweden)
Ivan Bošnjak
2012-10-01
Traffic counts, or statistical evidence of the traffic process, are often a characteristic of time-series data. In this paper the fundamental problem of estimating deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
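As an illustration of trend and seasonal-component elimination on synthetic traffic counts (all numbers invented for the sketch): seasonal differencing at the weekly lag removes the cycle and turns a linear trend into a constant, and one further difference removes that constant, leaving only the stochastic component.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 364
t = np.arange(n, dtype=float)
trend = 0.5 * t                             # deterministic linear trend
season = 50.0 * np.sin(2 * np.pi * t / 7)   # deterministic weekly cycle
noise = rng.normal(0.0, 5.0, n)             # stochastic component
counts = 1000.0 + trend + season + noise

d7 = counts[7:] - counts[:-7]   # seasonal differencing (lag 7)
d = d7[1:] - d7[:-1]            # first differencing of the remainder
```

The doubly differenced series contains only filtered noise (standard deviation 2σ = 10 here), regardless of the trend slope or seasonal amplitude.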
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code redaction, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach lies in the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors like site effects and source characteristics, such as the duration of the strong motion and directivity, that could significantly influence the expected motion at the site are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches in selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long since been recognized as relevant to inducing liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, the economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
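The Poissonian occurrence model underlying PSHA reduces, for a given annual exceedance rate, to a one-line hazard formula:

```python
import math

def prob_exceedance(annual_rate, years):
    # Poisson occurrences: P(at least one exceedance in `years` years)
    return 1.0 - math.exp(-annual_rate * years)

# The conventional 475-year return period corresponds to roughly a
# 10% probability of exceedance over a 50-year exposure time.
p = prob_exceedance(1.0 / 475.0, 50.0)
```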
Langevin equation with the deterministic algebraically correlated noise
Energy Technology Data Exchange (ETDEWEB)
Ploszajczak, M. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]; Srokowski, T. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Institute of Nuclear Physics, Cracow (Poland)]
1995-12-31
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author). 58 refs.
Beeping a Deterministic Time-Optimal Leader Election
Dufoulon, Fabien; Burman, Janna; Beauquier, Joffroy
2018-01-01
The beeping model is an extremely restrictive broadcast communication model that relies only on carrier sensing. In this model, we solve the leader election problem with an asymptotically optimal round complexity of O(D + log n), for a network of unknown size n and unknown diameter D (but with unique identifiers). Contrary to the best previously known algorithms in the same setting, the proposed one is deterministic. The techniques we introduce give a new insight as to how local constraints o...
Are deterministic methods suitable for short term reserve planning?
International Nuclear Information System (INIS)
Voorspools, Kris R.; D'haeseleer, William D.
2005-01-01
Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks if the N-1 reserve is a logical fixed value for minutes reserve. The second test procedure investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, resulting in low reliability for reserve margins lower than the largest unit and high reliability above, is not observed. The second test shows that both the N-1 reserve and the percentage reserve methods do not provide a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that the probability based methods are to be preferred over the deterministic methods
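The loss-of-load criterion used in these tests can be reproduced on a toy system by brute-force enumeration of unit outage states (capacities and outage rates below are purely illustrative). Holding the N−1 reserve rule fixed while demand drops changes the resulting loss-of-load probability, which is exactly the demand dependence reported above.

```python
from itertools import product

def lolp(capacities, q, demand):
    """Loss-of-load probability: chance that available capacity falls
    below demand, with each unit independently down with probability q."""
    p_short = 0.0
    for states in product((0, 1), repeat=len(capacities)):
        p = 1.0
        available = 0.0
        for cap, up in zip(capacities, states):
            p *= (1.0 - q) if up else q     # probability of this state
            available += cap * up
        if available < demand:
            p_short += p
    return p_short

units = [100.0, 100.0, 100.0, 50.0]      # MW, illustrative fleet
q = 0.05                                  # forced outage rate per unit
# N-1 rule: keep the largest unit (100 MW) as reserve in both cases.
p_high = lolp(units, q, 250.0)            # demand = 350 - 100
p_low = lolp(units, q, 150.0)             # lower demand, same reserve rule
```

The same deterministic rule yields different reliabilities at the two demand levels, with reliability improving as demand falls.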
Deterministic hazard quotients (HQs): Heading down the wrong road
International Nuclear Information System (INIS)
Wilde, L.; Hunter, C.; Simpson, J.
1995-01-01
The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates further risk evaluation is needed, but an HQ < 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify the uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measurements of associated uncertainty. Sensitivity analyses were used to identify the factors most significantly influencing risk determination. Understanding the uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches
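A compact sketch of the contrast (all distributions and parameters are hypothetical): a conservative deterministic HQ versus a Monte Carlo estimate of the probability that the hazard quotient actually exceeds 1.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical lognormal exposure point concentrations (EPC) and
# ecological benchmark values (EBV).
epc = rng.lognormal(mean=0.0, sigma=0.5, size=n)
ebv = rng.lognormal(mean=0.7, sigma=0.3, size=n)

# Deterministic screening HQ: conservative (95th percentile) exposure
# over a conservative (5th percentile) benchmark.
hq_det = np.percentile(epc, 95) / np.percentile(ebv, 5)

# Probabilistic view: how often does the quotient actually exceed 1?
p_exceed = float(np.mean(epc / ebv > 1.0))
```

The point estimate flags the site (HQ ≈ 1.9 > 1) while the probabilistic view shows roughly an 11% chance of exceedance, illustrating how much information a single quotient hides.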
Distinguishing deterministic and noise components in ELM time series
International Nuclear Information System (INIS)
Zvejnieks, G.; Kuzovkov, V.N
2004-01-01
One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles which allows us to distinguish deterministic and noise components. We extended the linear auto-regression (AR) method by including non-linearity (the NAR method). As a starting point we have chosen non-linearity in polynomial form; however, the NAR method can be extended to any other type of non-linear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we have analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The obtained results indicate that a linear AR model can describe the ELM behavior; in turn, this means that type I ELM behavior is of a relaxation or random type.
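The AR-versus-NAR comparison with BIC selection can be sketched as follows (a single-lag polynomial autoregression on synthetic data; an illustration of the principle, not the authors' implementation):

```python
import numpy as np

def fit_nar_bic(x, degree):
    """Least-squares fit of x[t] = sum_k a_k * x[t-1]**k and its Gaussian
    BIC; degree = 1 is the linear AR(1) model (plus intercept)."""
    X = np.vander(x[:-1], degree + 1)       # columns x^d, ..., x, 1
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    rss = float(np.sum((x[1:] - X @ coef) ** 2))
    n, k = len(x) - 1, degree + 1
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):                     # truly linear AR(1) data
    x[t] = 0.8 * x[t - 1] + rng.normal()

bics = {d: fit_nar_bic(x, d) for d in (1, 2, 3)}
best_degree = min(bics, key=bics.get)
```

On linear data the log(n) penalty per extra coefficient should usually favour the linear model over the cubic one, mirroring the paper's conclusion that a linear AR model suffices.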
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
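The fission-matrix idea can be illustrated with a toy power iteration: given a small (here invented) 3-region fission matrix, the dominant eigenpair gives k-effective and the converged fission source. This is only a sketch of the principle, not the MCNP implementation described above.

```python
import numpy as np

def dominant_mode(F, tol=1e-10, max_iter=1000):
    """Power iteration on a fission matrix F: returns (k_eff, source),
    where the source is normalized to unit total strength."""
    s = np.ones(F.shape[0]) / F.shape[0]
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum()      # eigenvalue estimate for a sum-1 source
        s_new /= k_new
        if np.abs(s_new - s).max() < tol:
            return k_new, s_new
        s, k = s_new, k_new
    return k, s

# Toy symmetric 3-region fission matrix (illustrative numbers only)
F = np.array([[0.6, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.6]])
k_eff, source = dominant_mode(F)
```

For this tridiagonal Toeplitz matrix the dominant eigenvalue is 0.6 + 0.4·cos(π/4) ≈ 0.8828, with the largest source in the middle region.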
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and on temporal factors such as the season of the year, the day of the week or the time of day. For deterministic radial distribution load flow studies the load is taken as constant, but load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
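The Monte Carlo procedure described in the abstract can be sketched on a one-branch toy feeder: sample active and reactive loads from their means and standard deviations, run a (heavily simplified) deterministic sweep with a ZIP voltage dependence for each draw, and summarize the resulting voltages. All numbers below are illustrative assumptions, not the paper's test system.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 2-bus radial feeder (per-unit); all values are made up
r, x = 0.02, 0.04            # branch resistance and reactance
p_mean, p_std = 0.8, 0.08    # active load: mean and standard deviation
q_mean, q_std = 0.4, 0.04    # reactive load
zip_coef = (0.3, 0.4, 0.3)   # constant-Z, constant-I, constant-P shares

def radial_load_flow(p0, q0, tol=1e-8):
    """Simplified backward/forward sweep on one branch with a ZIP load."""
    v = 1.0
    for _ in range(100):
        # ZIP model: the drawn load scales with the voltage magnitude
        z, i, p = zip_coef
        scale = z * v**2 + i * v + p
        pl, ql = p0 * scale, q0 * scale
        # approximate voltage drop along the branch (sending end at 1.0 pu)
        v_new = 1.0 - (r * pl + x * ql)
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v

# Monte Carlo: sample loads, run the deterministic load flow for each draw
voltages = [radial_load_flow(rng.normal(p_mean, p_std), rng.normal(q_mean, q_std))
            for _ in range(2000)]
print(f"V2: mean {np.mean(voltages):.4f} pu, std {np.std(voltages):.4f} pu")
```

The probabilistic voltage profile is then reconstructed from the distribution of these deterministic solutions, exactly as the abstract describes.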
Convergence studies of deterministic methods for LWR explicit reflector methodology
International Nuclear Information System (INIS)
Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.
2013-01-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of error for core analyses of the Swiss operating LWRs, all of which belong to GII designs. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
Precision production: enabling deterministic throughput for precision aspheres with MRF
Maloney, Chris; Entezarian, Navid; Dumas, Paul
2017-10-01
Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak of MERS outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs either directly, through contact between infected and non-infected individuals, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of the free virus in the environment. Mathematical modeling is used to illustrate the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic approach, using a Continuous Time Markov Chain (CTMC), is used to predict future states by means of random variables. From the models that were built, the threshold value for the deterministic and stochastic models is obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models using several different parameter sets are shown, and the probability of disease extinction is compared for several initial conditions.
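A minimal sketch of the deterministic/stochastic pairing described above, using a plain SIR simplification with hypothetical (not fitted MERS) parameters: the ODE model gives the deterministic final size, while repeated Gillespie runs of the matching CTMC estimate the probability of a minor outbreak, which branching-process theory puts near 1/R0.

```python
import numpy as np

# Hypothetical parameters for illustration only (not fitted MERS values)
beta, gamma = 0.3, 0.1   # transmission and recovery rates
N = 500                  # population size
R0 = beta / gamma        # basic reproduction number (the threshold value)

def deterministic_sir(days=160, dt=0.01):
    """Euler integration of the deterministic S-I-R system."""
    s, i = N - 1.0, 1.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / N
        s -= new_inf * dt
        i += (new_inf - gamma * i) * dt
    return N - s  # final epidemic size

def ctmc_sir(rng):
    """Gillespie simulation of the corresponding CTMC (final size only)."""
    s, i = N - 1, 1
    while i > 0:
        rate_inf = beta * s * i / N
        rate_rec = gamma * i
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s, i = s - 1, i + 1   # infection event
        else:
            i -= 1                # recovery event
    return N - s

rng = np.random.default_rng(1)
extinct = sum(ctmc_sir(rng) < 10 for _ in range(500)) / 500
print(f"R0 = {R0:.1f}, deterministic final size = {deterministic_sir():.0f}")
print(f"CTMC minor-outbreak fraction = {extinct:.2f} (theory: 1/R0 = {1/R0:.2f})")
```

The deterministic model always produces a major epidemic when R0 > 1, whereas the CTMC captures the nonzero probability of early disease extinction, which is the contrast the abstract highlights.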
Applicability of deterministic methods in seismic site effects modeling
International Nuclear Information System (INIS)
Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.
2005-01-01
The up-to-date information on the local geological structure in the Bucharest urban area has been integrated into complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations over the whole city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong motion records in the frequency range 0.05-1 Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique (applied to model the seismic wave propagation between the seismic source and the studied sites) with the mode coupling approach (used to model the seismic wave propagation through the local sedimentary structure of the target site), allow the modelling to be extended to the higher frequencies of earthquake engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last three Vrancea strong events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)
DEFF Research Database (Denmark)
Sousa, Tiago M; Morais, Hugo; Castro, R.
2014-01-01
scheduling problem. Therefore, the use of metaheuristics is required to obtain good solutions in a reasonable amount of time. This paper proposes two new heuristics, called naive electric vehicles charge and discharge allocation and generation tournament based on cost, developed to obtain an initial solution...... to be used in the energy resource scheduling methodology based on simulated annealing previously developed by the authors. The case study considers two scenarios with 1000 and 2000 electric vehicles connected in a distribution network. The proposed heuristics are compared with a deterministic approach...
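A generic sketch of the approach this (truncated) record describes: a naive initial allocation heuristic refined by simulated annealing. The toy cost function, prices, penalty weight, and fleet size below are invented stand-ins, not the authors' scheduling model.

```python
import math
import random

random.seed(0)

# Toy stand-in for the scheduling problem: assign each of 20 EVs a charging
# hour (0-23) to minimize cost; prices and penalty are made-up assumptions
price = [0.30 if 8 <= h < 20 else 0.10 for h in range(24)]  # peak vs off-peak
n_ev = 20

def cost(schedule):
    load = [schedule.count(h) for h in range(24)]
    # energy price plus a quadratic congestion penalty on simultaneous charging
    return sum(price[h] for h in schedule) + 0.05 * sum(l * l for l in load)

# naive initial allocation heuristic: put every vehicle in the cheapest hour
schedule = [price.index(min(price))] * n_ev
cur = cost(schedule)

# simulated annealing over single-vehicle moves with geometric cooling
temp = 1.0
for _ in range(5000):
    ev, new_h = random.randrange(n_ev), random.randrange(24)
    old_h = schedule[ev]
    schedule[ev] = new_h
    c = cost(schedule)
    if c <= cur or random.random() < math.exp((cur - c) / temp):
        cur = c               # accept the move
    else:
        schedule[ev] = old_h  # revert
    temp *= 0.999

print(f"initial cost 22.00, annealed cost {cur:.2f}")
```

The point of the heuristics proposed in the paper is precisely that the quality of this kind of initial solution strongly influences how good a schedule the annealing reaches in limited time.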
Global warming: Temperature estimation in annealers
Directory of Open Access Journals (Sweden)
Jack Raymond
2016-11-01
Full Text Available Sampling from a Boltzmann distribution is NP-hard and so requires heuristic approaches. Quantum annealing is one promising candidate. The failure of annealing dynamics to equilibrate on practical time scales is a well understood limitation, but does not always prevent a heuristically useful distribution from being generated. In this paper we evaluate several methods for determining a useful operational temperature range for annealers. We show that, even where distributions deviate from the Boltzmann distribution due to ergodicity breaking, these estimates can be useful. We introduce the concepts of local and global temperatures that are captured by different estimation methods. We argue that for practical application it often makes sense to analyze annealers that are subject to post-processing in order to isolate the macroscopic distribution deviations that are a practical barrier to their application.
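A simple global-temperature estimate of the kind evaluated in the paper can be obtained by matching the empirical mean energy of the samples to that of an exact Boltzmann distribution; the sketch below does this on a toy Ising instance small enough to enumerate. This illustrates the estimation principle only, not the paper's specific estimators.

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)

# Small random Ising problem: all 2^8 states can be enumerated exactly
n = 8
J = {(i, j): rng.normal() for i in range(n) for j in range(i + 1, n)}
states = np.array(list(itertools.product([-1, 1], repeat=n)))
energies = np.array([sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
                     for s in states])

def boltzmann_mean_energy(beta):
    w = np.exp(-beta * (energies - energies.min()))
    return np.sum(w * energies) / np.sum(w)

# Draw samples at a "true" beta, standing in for annealer output
beta_true = 0.8
p = np.exp(-beta_true * (energies - energies.min()))
sample_idx = rng.choice(len(states), size=5000, p=p / p.sum())
e_mean = energies[sample_idx].mean()

# Estimate beta by matching the empirical mean energy (bisection; the
# Boltzmann mean energy is monotone decreasing in beta)
lo, hi = 0.01, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if boltzmann_mean_energy(mid) > e_mean else (lo, mid)
print(f"estimated beta = {0.5 * (lo + hi):.2f} (true {beta_true})")
```

For realistic problem sizes the partition function cannot be enumerated, which is why the paper studies practical estimators and distinguishes local from global temperatures when ergodicity is broken.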
Coherent Coupled Qubits for Quantum Annealing
Weber, Steven J.; Samach, Gabriel O.; Hover, David; Gustavsson, Simon; Kim, David K.; Melville, Alexander; Rosenberg, Danna; Sears, Adam P.; Yan, Fei; Yoder, Jonilyn L.; Oliver, William D.; Kerman, Andrew J.
2017-07-01
Quantum annealing is an optimization technique which potentially leverages quantum tunneling to enhance computational performance. Existing quantum annealers use superconducting flux qubits with short coherence times limited primarily by the use of large persistent currents Ip. Here, we examine an alternative approach using qubits with smaller Ip and longer coherence times. We demonstrate tunable coupling, a basic building block for quantum annealing, between two flux qubits with small (approximately 50-nA) persistent currents. Furthermore, we characterize qubit coherence as a function of coupler setting and investigate the effect of flux noise in the coupler loop on qubit coherence. Our results provide insight into the available design space for next-generation quantum annealers with improved coherence.
Annealing behavior of high permeability amorphous alloys
International Nuclear Information System (INIS)
Rabenberg, L.
1980-06-01
Effects of low-temperature annealing on the magnetic properties of the amorphous alloy Co71.4Fe4.6Si9.6B14.4 were investigated. Annealing this alloy below 400 °C results in magnetic hardening; annealing above 400 °C but below the crystallization temperature results in magnetic softening. Above the crystallization temperature the alloy hardens drastically and irreversibly. Conventional and high-resolution transmission electron microscopy were used to show that the magnetic property changes at low temperatures occur while the alloy is truly amorphous. By imaging the magnetic microstructures, Lorentz electron microscopy has been able to detect the presence of microscopic inhomogeneities in this alloy. The low-temperature annealing behavior of this alloy has been explained in terms of atomic pair ordering in the presence of the internal molecular field. Lorentz electron microscopy has been used to confirm this explanation
Irradiation embrittlement and optimisation of annealing
International Nuclear Information System (INIS)
1993-01-01
This conference is composed of 30 papers grouped in 6 sessions related to the following themes: neutron irradiation effects in pressure vessel steels and weldments used in PWR, WWER and BWR nuclear plants; results from surveillance programmes (irradiation induced damage and annealing processes); studies on the influence of variations in irradiation conditions and mechanisms, and modelling; mitigation of irradiation effects, especially through thermal annealing; mechanical test procedures and specimen size effects
Structural relaxation in annealed hyperquenched basaltic glasses
DEFF Research Database (Denmark)
Guo, Xiaoju; Mauro, John C.; Potuzak, M.
2012-01-01
The enthalpy relaxation behavior of hyperquenched (HQ) and annealed hyperquenched (AHQ) basaltic glass is investigated through calorimetric measurements. The results reveal a common onset temperature of the glass transition for all the HQ and AHQ glasses under study, indicating that the primary...... relaxation is activated at the same temperature regardless of the initial departure from equilibrium. The analysis of secondary relaxation at different annealing temperatures provides insights into the enthalpy recovery of HQ glasses....
Irradiation embrittlement and optimisation of annealing
Energy Technology Data Exchange (ETDEWEB)
NONE
1994-12-31
This conference is composed of 30 papers grouped in 6 sessions related to the following themes: neutron irradiation effects in pressure vessel steels and weldments used in PWR, WWER and BWR nuclear plants; results from surveillance programmes (irradiation induced damage and annealing processes); studies on the influence of variations in irradiation conditions and mechanisms, and modelling; mitigation of irradiation effects, especially through thermal annealing; mechanical test procedures and specimen size effects.
Boosting quantum annealer performance via sample persistence
Karimi, Hamed; Rosenberg, Gili
2017-07-01
We propose a novel method for reducing the number of variables in quadratic unconstrained binary optimization problems, using a quantum annealer (or any sampler) to fix the values of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are usually much easier for the quantum annealer to solve, since they are smaller and consist of disconnected components. This approach significantly increases the success rate and the number of observations of the best known energy value in samples obtained from the quantum annealer, compared with calling the quantum annealer directly, even when fewer annealing cycles are used. The method yields a considerable improvement in success metrics even for problems with high-precision couplers and biases, which are more challenging for the quantum annealer to solve. The results are further enhanced by applying the method iteratively and combining it with classical pre-processing. We present results for both Chimera graph-structured problems and embedded problems from a real-world application.
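The variable-fixing idea can be sketched as follows: take the lowest-energy fraction of the samples, fix every variable whose value persists across nearly all of them, and keep the rest for a reduced problem. The agreement threshold, elite fraction, and planted toy QUBO below are invented for illustration and are not the authors' exact rule.

```python
import numpy as np

rng = np.random.default_rng(3)

# Planted toy QUBO: x_star is the (approximate) optimum by construction
n = 20
x_star = rng.integers(0, 2, n)
d = np.where(x_star == 1, -1.0, 1.0)          # diagonal favors x_star
Q = np.diag(d) + 0.1 * np.triu(rng.normal(size=(n, n)), k=1)

def qubo_energy(x):
    return x @ Q @ x

def fix_persistent_variables(samples, energies, elite_frac=0.2, agree=0.9):
    """Fix variables whose value persists across the lowest-energy samples.

    Returns ({index: value}, remaining free indices). Illustrative only.
    """
    order = np.argsort(energies)
    elite = samples[order[:max(1, int(elite_frac * len(samples)))]]
    ones = elite.mean(axis=0)  # fraction of elite samples with x_i = 1
    fixed = {i: int(round(f)) for i, f in enumerate(ones)
             if f >= agree or f <= 1 - agree}
    free = [i for i in range(n) if i not in fixed]
    return fixed, free

# Noisy "sampler" output standing in for quantum annealer reads:
# each read is x_star with every bit independently flipped with prob 0.15
samples = np.array([np.where(rng.random(n) < 0.15, 1 - x_star, x_star)
                    for _ in range(300)])
energies = np.array([qubo_energy(x) for x in samples])

fixed, free = fix_persistent_variables(samples, energies)
print(f"fixed {len(fixed)} of {n} variables; reduced problem keeps {len(free)}")
```

The reduced problem over the free variables is then resubmitted to the sampler, and in the paper the procedure is applied iteratively and combined with classical pre-processing.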
Energy Saving in Industrial Annealing Furnaces
Directory of Open Access Journals (Sweden)
Fatma ÇANKA KILIÇ
2018-03-01
Full Text Available In this study, an energy efficiency study has been carried out on a natural gas-fired rolling mill annealing furnace of an industrial establishment. In this context, the exhaust gas from the furnace has been examined in terms of waste heat potential. In the detailed examinations, the waste heat potential was found to be 3,630.31 kW. Technical and feasibility studies have been carried out on generating electricity through an Organic Rankine Cycle (ORC) system to exploit the waste heat potential of the annealing furnace. It has been calculated that 1,626,378.88 kWh/year of electricity can be generated from the exhaust gas waste heat of the annealing furnace through an ORC system with a net efficiency of 16%. The financial value of this energy was determined to be 436,032.18 TL/year, and the simple payback period of the investment was 8.12 years. Since the annealing furnace operates for only 2,800 hours/year, the investment was not found to be feasible in the feasibility studies. However, the investment can become viable if the annealing furnace operates at full capacity for 8,000 hours or more annually.
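The reported figures are straightforward to reproduce: annual generation is waste-heat power times net efficiency times operating hours, and the stated payback period implies the investment cost. A quick check (the investment cost is backed out from the abstract's numbers, not stated there directly):

```python
# Reproducing the annealing-furnace ORC arithmetic from the abstract
waste_heat_kw = 3630.31
net_efficiency = 0.16
hours_per_year = 2800          # actual annealing hours per year

annual_kwh = waste_heat_kw * net_efficiency * hours_per_year
print(f"annual generation: {annual_kwh:,.2f} kWh/year")  # 1,626,378.88

annual_revenue_tl = 436_032.18  # reported financial value
payback_years = 8.12            # reported simple payback period
investment_tl = annual_revenue_tl * payback_years  # implied investment cost
print(f"implied investment: {investment_tl:,.0f} TL")

# At full capacity (8,000 h/year) the same plant generates far more,
# which is why the abstract concludes it would then become viable
annual_kwh_full = waste_heat_kw * net_efficiency * 8000
print(f"full-capacity generation: {annual_kwh_full:,.0f} kWh/year")
```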
Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter
2015-01-20
While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for the analysis of these data rely upon parallelization strategies that have limited scalability and complex implementations and that lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through the implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
A deterministic model for the growth of non-conducting electrical tree structures
International Nuclear Information System (INIS)
Dodd, S J
2003-01-01
Electrical treeing is of interest to the electrical generation, transmission and distribution industries as it is one of the causes of insulation failure in electrical machines, switchgear and transformer bushings. In this paper a deterministic electrical tree growth model is described. The model is based on electrostatics and local electron avalanches to model partial discharge activity within the growing tree structure. Damage to the resin surrounding the tree structure depends on the local electrostatic energy dissipated by partial discharges within the tree structure, weighted by the magnitudes of the local electric fields in the surrounding resin. The model succeeds in simulating the formation of branched structures without the need for a random variable, a requirement of previous stochastic models. Instability in the spatial development of partial discharges within the tree structure takes the role of the stochastic element used in previous models to produce branched tree structures. The simulated electrical trees conform to the experimentally observed behaviour: tree length versus time, and electrical tree growth rate as a function of applied voltage, for non-conducting electrical trees. The phase-synchronous partial discharge activity and the spatial distribution of emitted light from the tree structure are also in agreement with experimental data for non-conducting trees as grown in a flexible epoxy resin and in polyethylene. The fact that similar tree growth behaviour is found using purely amorphous (epoxy resin) and semicrystalline (polyethylene) materials demonstrates that neither annealed nor quenched noise, representing material inhomogeneity, is required for the formation of irregular branched structures (electrical trees). Instead, as shown in this paper, branched growth can occur due to the instability of individual discharges within the tree structure
Towards deterministically controlled InGaAs/GaAs lateral quantum dot molecules
International Nuclear Information System (INIS)
Wang, L; Rastelli, A; Kiravittaya, S; Atkinson, P; Schmidt, O G; Ding, F; Bufon, C C Bof; Hermannstaedter, C; Witzany, M; Beirne, G J; Michler, P
2008-01-01
We report on the fabrication, detailed characterization and modeling of lateral InGaAs quantum dot molecules (QDMs) embedded in a GaAs matrix and we discuss strategies to fully control their spatial configuration and electronic properties. The three-dimensional morphology of encapsulated QDMs was revealed by selective wet chemical etching of the GaAs top capping layer and subsequent imaging by atomic force microscopy (AFM). The AFM investigation showed that different overgrowth procedures have a profound consequence on the QDM height and shape. QDMs partially capped and annealed in situ for micro-photoluminescence spectroscopy consist of shallow but well-defined quantum dots (QDs) in contrast to misleading results usually provided by surface morphology measurements when they are buried by a thin GaAs layer. This uncapping approach is crucial for determining the QDM structural parameters, which are required for modeling the system. A single-band effective-mass approximation is employed to calculate the confined electron and heavy-hole energy levels, taking the geometry and structural information extracted from the uncapping experiments as inputs. The calculated transition energy of the single QDM shows good agreement with the experimentally observed values. By decreasing the edge-to-edge distance between the two QDs within a QDM, a splitting of the electron (hole) wavefunction into symmetric and antisymmetric states is observed, indicating the presence of lateral coupling. Site control of such lateral QDMs obtained by growth on a pre-patterned substrate, combined with a technology to fabricate gate structures at well-defined positions with respect to the QDMs, could lead to deterministically controlled devices based on QDMs
Deterministic quantum state transfer and remote entanglement using microwave photons.
Kurpiers, P; Magnard, P; Walter, T; Royer, B; Pechal, M; Heinsoo, J; Salathé, Y; Akin, A; Storz, S; Besse, J-C; Gasparinetti, S; Blais, A; Wallraff, A
2018-06-01
Sharing information coherently between nodes of a quantum network is fundamental to distributed quantum information processing. In this scheme, the computation is divided into subroutines and performed on several smaller quantum registers that are connected by classical and quantum channels [1]. A direct quantum channel, which connects nodes deterministically rather than probabilistically, achieves larger entanglement rates between nodes and is advantageous for distributed fault-tolerant quantum computation [2]. Here we implement deterministic state-transfer and entanglement protocols between two superconducting qubits fabricated on separate chips. Superconducting circuits [3] constitute a universal quantum node [4] that is capable of sending, receiving, storing and processing quantum information [5-8]. Our implementation is based on an all-microwave cavity-assisted Raman process [9], which entangles or transfers the qubit state of a transmon-type artificial atom [10] with a time-symmetric itinerant single photon. We transfer qubit states by absorbing these itinerant photons at the receiving node, with a probability of 98.1 ± 0.1 per cent, achieving a transfer-process fidelity of 80.02 ± 0.07 per cent for a protocol duration of only 180 nanoseconds. We also prepare remote entanglement on demand with a fidelity as high as 78.9 ± 0.1 per cent at a rate of 50 kilohertz. Our results are in excellent agreement with numerical simulations based on a master-equation description of the system. This deterministic protocol has the potential to be used for quantum computing distributed across different nodes of a cryogenic network.
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for parameter estimation) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler than and with smaller bias than, the “multiple shooting” method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for estimating the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not just numerically).
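For contrast with the paper's piece-wise ML method, even a naive least-squares regression of y_{n+1} on y_n(1 - y_n) recovers the logistic-map parameter well at low noise; this sketch (arbitrary r, noise level, and series length) is not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate the logistic map x_{n+1} = r * x_n * (1 - x_n), then add
# observational noise to produce the observed series y
r_true, n_obs, noise = 3.8, 500, 1e-3
x = np.empty(n_obs)
x[0] = 0.3
for t in range(1, n_obs):
    x[t] = r_true * x[t - 1] * (1 - x[t - 1])
y = x + rng.normal(scale=noise, size=n_obs)

# Naive least-squares estimate of r: regress y_{n+1} on f = y_n (1 - y_n)
f = y[:-1] * (1 - y[:-1])
r_hat = np.sum(f * y[1:]) / np.sum(f * f)
print(f"r estimate: {r_hat:.4f} (true {r_true})")
```

At larger noise levels this naive estimator becomes biased, which is exactly the regime where the segmentation-fitting ML approach discussed in the paper pays off.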
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2017-09-01
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided with this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p cranial nerves. Probabilistic tracking with a gradual
Deterministic nonlinear phase gates induced by a single qubit
Park, Kimin; Marek, Petr; Filip, Radim
2018-05-01
We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated in the harmonic oscillator faithfully by our method. We numerically analyzed the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
CALTRANS: A parallel, deterministic, 3D neutronics code
Energy Technology Data Exchange (ETDEWEB)
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS, using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface), are provided in appendices.
MIMO capacity for deterministic channel models: sublinear growth
DEFF Research Database (Denmark)
Bentosela, Francois; Cornean, Horia; Marchetti, Nicola
2013-01-01
. In the current paper, we apply those results in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment and the number of transmitting and receiving antennas. The antennas are assumed to fill in a given fixed volume. Under...... some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict a sublinear behavior....
Deterministic Single-Photon Source for Distributed Quantum Networking
International Nuclear Information System (INIS)
Kuhn, Axel; Hennrich, Markus; Rempe, Gerhard
2002-01-01
A sequence of single photons is emitted on demand from a single three-level atom strongly coupled to a high-finesse optical cavity. The photons are generated by an adiabatically driven stimulated Raman transition between two atomic ground states, with the vacuum field of the cavity stimulating one branch of the transition, and laser pulses deterministically driving the other branch. This process is unitary and therefore intrinsically reversible, which is essential for quantum communication and networking, and the photons should be appropriate for all-optical quantum information processing
On the progress towards probabilistic basis for deterministic codes
International Nuclear Information System (INIS)
Ellyin, F.
1975-01-01
Fundamental arguments for a probabilistic basis of codes are presented. A class of code formats is outlined in which explicit statistical measures of uncertainty of design variables are incorporated. The format looks very much like that of present (deterministic) codes, except for having a probabilistic background. An example is provided whereby the design factors are plotted against the safety index, the probability of failure, and the risk of mortality. The safety level of the present codes is also indicated. A decision regarding the new probabilistically based code parameters could thus be made with full knowledge of the implied consequences
The deterministic optical alignment of the HERMES spectrograph
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Enhanced deterministic phase retrieval using a partially developed speckle field
DEFF Research Database (Denmark)
Almoro, Percival F.; Waller, Laura; Agour, Mostafa
2012-01-01
A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle intensity measurements are recorded at the output plane, corresponding to axially-propagated representations of the PDSF in the input plane. The speckle intensity measurements are then used in a conventional transport of intensity equation (TIE) to reconstruct directly the test wavefront. The PDSF in our …
Deterministic and efficient quantum cryptography based on Bell's theorem
International Nuclear Information System (INIS)
Chen, Z.-B.; Zhang, Q.; Bao, X.-H.; Schmiedmayer, J.; Pan, J.-W.
2005-01-01
Full text: We propose a novel double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish a key bit with the help of classical communications. Eavesdropping can be detected by checking the violation of local realism for the detected events. We also show that our protocol allows a robust implementation under current technology. (author)
Origin of reverse annealing effect in hydrogen-implanted silicon
Energy Technology Data Exchange (ETDEWEB)
Di, Zengfeng [Los Alamos National Laboratory; Nastasi, Michael A [Los Alamos National Laboratory; Wang, Yongqiang [Los Alamos National Laboratory
2009-01-01
In contrast to conventional damage annealing, thermally annealed H-implanted Si exhibits an increase in damage, or reverse annealing behavior, whose mechanism has remained elusive. On the basis of quantitative high resolution transmission electron microscopy combined with channeling Rutherford backscattering analysis, we conclusively elucidate that the reverse annealing effect is due to the nucleation and growth of hydrogen-induced platelets. Platelets are responsible for an increase in the height and width of the channeling damage peak following increased isochronal anneals.
Quantum Annealing and Quantum Fluctuation Effect in Frustrated Ising Systems
Tanaka, Shu; Tamura, Ryo
2012-01-01
The quantum annealing method has attracted wide attention in statistical physics and information science, since, like simulated annealing, it is expected to be a powerful method for obtaining the best solution of an optimization problem. The quantum annealing method was incubated in quantum statistical physics. It is an alternative to simulated annealing, which is well adapted to many optimization problems. In simulated annealing, we obtain a solution of an optimization problem b...
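As a companion to this abstract, the classical simulated annealing baseline it refers to can be sketched in a few lines of Python. This is a generic illustration only; the 4-spin ferromagnetic Ising chain, the exponential cooling schedule, and the step counts are arbitrary choices, not taken from the paper:

```python
import math
import random

def simulated_annealing_ising(J, h, n_steps=20000, t_start=5.0, t_end=0.01, seed=0):
    """Minimise E(s) = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i, s_i in {-1,+1}."""
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(s):
        e = -sum(h[i] * s[i] for i in range(n))
        e -= sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        return e

    e = energy(spins)
    for step in range(n_steps):
        # Exponential cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / (n_steps - 1))
        i = rng.randrange(n)
        spins[i] = -spins[i]                  # propose a single spin flip
        e_new = energy(spins)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new                         # accept (Metropolis criterion)
        else:
            spins[i] = -spins[i]              # reject: undo the flip
    return spins, e

# Ferromagnetic 4-spin chain: ground states are all-up / all-down with E = -3.
J = [[0.0] * 4 for _ in range(4)]
for i in range(3):
    J[i][i + 1] = 1.0
h = [0.0] * 4
spins, e = simulated_annealing_ising(J, h)
```

Quantum annealing replaces the thermal fluctuations driving the Metropolis step with quantum fluctuations (a transverse field), which is the distinction the abstract draws.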
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
International Nuclear Information System (INIS)
S. GOLUOGLU, C. BENTLEY, R. DEMEGLIO, M. DUNN, K. NORTON, R. PEVEY, I. SUSLOV AND H.L. DODDS
1998-01-01
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. Columnwise rod movement can also be modeled. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems
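The amplitude half of an improved quasi-static (IQS) factorisation, flux ≈ amplitude(t) × shape, can be illustrated with one-group point kinetics. In this toy sketch the shape function is held fixed (in TDTORT it would be recomputed by the transport solver on a coarser time grid), and all numerical values are hypothetical:

```python
def point_kinetics(rho, beta, gen_time, decay, t_end, dt=1e-4):
    """One-group point-kinetics amplitude equations with delayed neutrons --
    a toy stand-in for the 'amplitude' part of an IQS factorisation."""
    n = 1.0                              # normalised neutron population
    c = beta / (gen_time * decay)        # precursors in equilibrium with n = 1
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / gen_time) * n + decay * c
        dc = (beta / gen_time) * n - decay * c
        n, c = n + dt * dn, c + dt * dc  # explicit Euler step
        t += dt
    return n

# Zero reactivity keeps the population critical; a small positive insertion
# (rho < beta) gives a slow, delayed-neutron-controlled rise.
n_crit = point_kinetics(0.0, 0.007, 1e-4, 0.08, 1.0)
n_super = point_kinetics(0.001, 0.007, 1e-4, 0.08, 1.0)
```

The point of the IQS method is that the fast-varying amplitude above is cheap to integrate, while the expensive angular-flux shape changes slowly and needs far fewer transport solves.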
Strongly Deterministic Population Dynamics in Closed Microbial Communities
Directory of Open Access Journals (Sweden)
Zak Frentz
2015-10-01
Full Text Available Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.
Bayesian analysis of deterministic and stochastic prisoner's dilemma games
Directory of Open Access Journals (Sweden)
Howard Kunreuther
2009-08-01
Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. So the distribution of translocation times of a given monomer is controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation. We refer to this concept as ``fingerprinting''. The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt_m ∼ m^1.5, is stronger than the thermal broadening, δt_m ∼ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations
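The loop-by-loop deterministic picture can be caricatured in a few lines; the loop lengths and the unit unraveling velocity below are invented for illustration and are not taken from the paper:

```python
def translocation_times(loop_lengths, v=1.0):
    """Deterministic loop-by-loop unraveling: each loop of length L takes time
    L/v to pass through the pore, so every monomer in that loop has exited by
    the cumulative time of all loops up to and including its own."""
    times, t = [], 0.0
    for L in loop_lengths:
        t += L / v
        times.extend([t] * L)
    return times

# Two chains of equal length but different initial loop structure translocate
# on different schedules -- the conformational "fingerprint" of the abstract.
a = translocation_times([5, 5, 5])
b = translocation_times([10, 4, 1])
```

In this caricature the total translocation time depends only on chain length, but the per-monomer schedule encodes the initial conformation, which is the fingerprinting idea.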
Stochastic and deterministic causes of streamer branching in liquid dielectrics
International Nuclear Information System (INIS)
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-01-01
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching, such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations, is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make the branching inevitable, depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the propagating streamer head, even in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the born branches agree qualitatively with experimental images of streamer branching
Deterministic sensitivity analysis for the numerical simulation of contaminants transport
International Nuclear Information System (INIS)
Marchand, E.
2007-12-01
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
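The SVD-of-the-derivative idea described above can be sketched with a finite-difference Jacobian. The toy two-parameter model below is hypothetical and merely stands in for the Darcy-flow/transport model; the paper itself uses manual and automatic differentiation rather than finite differences:

```python
import numpy as np

def local_sensitivity(model, p0, eps=1e-6):
    """Finite-difference Jacobian of the model outputs w.r.t. the parameters,
    followed by an SVD: the leading right-singular vectors identify the
    parameter combinations the outputs are most sensitive to near p0."""
    p0 = np.asarray(p0, dtype=float)
    y0 = np.asarray(model(p0), dtype=float)
    J = np.empty((y0.size, p0.size))
    for j in range(p0.size):
        p = p0.copy()
        p[j] += eps
        J[:, j] = (np.asarray(model(p), dtype=float) - y0) / eps
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return J, s, Vt

# Hypothetical toy model standing in for the flow/transport code: the outputs
# depend strongly on p[0] and only weakly on p[1].
model = lambda p: [10.0 * p[0] + 0.1 * p[1], 10.0 * p[0] - 0.1 * p[1]]
J, s, Vt = local_sensitivity(model, [1.0, 1.0])
```

The spread of the singular values is what makes this deterministic approach cheap: a few dominant directions summarize the local sensitivity that a Monte Carlo study would need many simulations to resolve.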
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
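The two frameworks the paper implements in MATLAB can be contrasted on the simplest possible reaction. This Python sketch (irreversible decay A → ∅, with an arbitrary rate and molecule count) is an illustration of the ODE-vs-CME distinction, not the paper's code:

```python
import math
import random

# Irreversible decay A -> 0 at rate k: the deterministic rate equation
# dA/dt = -k*A versus one Gillespie (SSA) realisation of the master equation.

def ode_decay(a0, k, t_end, dt=1e-3):
    a, t = float(a0), 0.0
    while t < t_end:
        a += dt * (-k * a)                 # explicit Euler step
        t += dt
    return a

def gillespie_decay(a0, k, t_end, seed=1):
    rng = random.Random(seed)
    n, t = a0, 0.0
    while n > 0:
        t += rng.expovariate(k * n)        # waiting time to the next event
        if t > t_end:
            break
        n -= 1                             # one A molecule decays
    return n

a_det = ode_decay(1000, k=1.0, t_end=1.0)       # close to 1000*exp(-1)
a_sto = gillespie_decay(1000, k=1.0, t_end=1.0) # fluctuates around the same mean
```

For large copy numbers the SSA trajectory hugs the ODE solution; the stochastic description matters when counts are small and fluctuations are comparable to the mean.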
A study of deterministic models for quantum mechanics
International Nuclear Information System (INIS)
Sutherland, R.
1980-01-01
A theoretical investigation is made into the difficulties encountered in constructing a deterministic model for quantum mechanics and into the restrictions that can be placed on the form of such a model. The various implications of the known impossibility proofs are examined. A possible explanation for the non-locality required by Bell's proof is suggested in terms of backward-in-time causality. The efficacy of the Kochen and Specker proof is brought into doubt by showing that there is a possible way of avoiding its implications in the only known physically realizable situation to which it applies. A new thought experiment is put forward to show that a particle's predetermined momentum and energy values cannot satisfy the laws of momentum and energy conservation without conflicting with the predictions of quantum mechanics. Attention is paid to a class of deterministic models for which the individual outcomes of measurements are not dependent on hidden variables associated with the measuring apparatus and for which the hidden variables of a particle do not need to be randomized after each measurement
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-03
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of the molecular dynamics leading to the establishment of pluripotency, at unprecedented flexibility and resolution.
Using MCBEND for neutron or gamma-ray deterministic calculations
Directory of Open Access Journals (Sweden)
Geoff Dobson
2017-01-01
Full Text Available MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler’s ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with Splitting/Russian Roulette. MCBEND has a well established automated tool to generate this importance map, commonly referred to as the MAGIC module using a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
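The reintroduction of residuals into simulated responses can be sketched as follows. Plain resampling with replacement is a simplification (the systematic method the paper advocates would also respect the residuals' distributional and dependence structure), and all numbers are invented:

```python
import random

def stochastic_ensemble(simulated, residuals, n_reps=500, seed=42):
    """Turn a deterministic simulated series into a stochastic ensemble by
    adding calibration residuals resampled with replacement -- a simplified,
    bootstrap-style sketch of reintroducing model error."""
    rng = random.Random(seed)
    return [[y + rng.choice(residuals) for y in simulated]
            for _ in range(n_reps)]

# Hypothetical calibration residuals (observed minus simulated). Adding them
# back widens the distribution of simulated responses toward the observed one.
simulated = [10.0, 12.0, 11.0, 13.0, 12.5]
residuals = [-2.0, -0.5, 0.0, 0.5, 2.0]
reps = stochastic_ensemble(simulated, residuals)
```

The ensemble spread, absent from the raw deterministic output, is what restores the distributional properties of the observed responses.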
Shock-induced explosive chemistry in a deterministic sample configuration.
Energy Technology Data Exchange (ETDEWEB)
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Melting phenomenon and laser annealing in semiconductors
International Nuclear Information System (INIS)
Narayan, J.
1981-03-01
The work on annealing of displacement damage, dissolution of boron precipitates, and the broadening of dopant profiles in semiconductors after treating with ruby and dye laser pulses is reviewed in order to provide convincing evidence for the melting phenomenon and illustrate the mechanism associated with laser annealing. The nature of the solid-liquid interface and the interface instability during rapid solidification is considered in detail. It is shown that solute concentrations after pulsed laser annealing can far exceed retrograde maxima values. However, there is a critical solute concentration above which a planar solid-liquid interface becomes unstable and breaks into a cellular structure. The solute concentrations and cell sizes associated with this instability are calculated using a perturbation theory, and compared with experimental results
Hydrogen Annealing Of Single-Crystal Superalloys
Smialek, James L.; Schaeffer, John C.; Murphy, Wendy
1995-01-01
Annealing at temperature equal to or greater than 2,200 degrees F in atmosphere of hydrogen found to increase ability of single-crystal superalloys to resist oxidation when subsequently exposed to oxidizing atmospheres at temperatures almost as high. Superalloys in question are principal constituents of hot-stage airfoils (blades) in aircraft and ground-based turbine engines; also used in other high-temperature applications like chemical-processing plants, coal-gasification plants, petrochemical refineries, and boilers. Hydrogen anneal provides resistance to oxidation without decreasing fatigue strength and without need for coating or reactive sulfur-gettering constituents. In comparison with coating, hydrogen annealing costs less. Benefits extend to stainless steels, nickel/chromium, and nickel-base alloys, subject to same scale-adhesion and oxidation-resistance considerations, except that scale is chromia instead of alumina.
Traffic Flow Optimization Using a Quantum Annealer
Directory of Open Access Journals (Sweden)
Florian Neukart
2017-12-01
Full Text Available Quantum annealing algorithms belong to the class of metaheuristic tools, applicable for solving binary optimization problems. Hardware implementations of quantum annealing, such as the quantum processing units (QPUs produced by D-Wave Systems, have been subject to multiple analyses in research, with the aim of characterizing the technology’s usefulness for optimization and sampling tasks. In this paper, we present a real-world application that uses quantum technologies. Specifically, we show how to map certain parts of a real-world traffic flow optimization problem to be suitable for quantum annealing. We show that time-critical optimization tasks, such as continuous redistribution of position data for cars in dense road networks, are suitable candidates for quantum computing. Due to the limited size and connectivity of current-generation D-Wave QPUs, we use a hybrid quantum and classical approach to solve the traffic flow problem.
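The QUBO form that a quantum annealer samples from can be illustrated with a brute-force classical solver. The two-car, two-route matrix below is a hypothetical toy, vastly smaller than the real traffic problem mapped in the paper:

```python
import itertools

def solve_qubo_bruteforce(Q):
    """Exhaustively minimise x^T Q x over binary vectors x. Feasible only for
    tiny problems, but it shows the objective a quantum annealer samples."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Hypothetical 2-car toy: x_i = 1 means car i takes route A. The diagonal
# rewards taking route A; the off-diagonal penalises both cars sharing it,
# which is the "congestion" term of the traffic formulation.
Q = [[-1.0, 2.0],
     [0.0, -1.0]]
x, e = solve_qubo_bruteforce(Q)
```

On a D-Wave QPU the same Q would be minor-embedded onto the hardware graph and sampled; the hybrid approach in the paper exists precisely because real instances exceed what either brute force or current QPU connectivity can handle directly.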
Histone variants and lipid metabolism
Borghesan, Michela; Mazzoccoli, Gianluigi; Sheedfar, Fareeba; Oben, Jude; Pazienza, Valerio; Vinciguerra, Manlio
2014-01-01
Within nucleosomes, canonical histones package the genome, but they can be opportunely replaced with histone variants. The incorporation of histone variants into the nucleosome is a chief cellular strategy to regulate transcription and cellular metabolism. In pathological terms, cellular steatosis
Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide (Russian Edition)
International Nuclear Information System (INIS)
2014-01-01
The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References
Strain of laser annealed silicon surfaces
Nemanich, R. J.; Haneman, D.
1982-05-01
High resolution Raman scattering measurements have been carried out on pulse and continuous-wave laser annealed silicon samples with various surface preparations. These included polished and ion-bombarded wafers, and saw-cut crystals. The pulse annealing treatments were carried out in ultrahigh vacuum and in air. The residual strain was inferred from the frequency shift of the first-order Raman active mode of Si, and was detectable in the range 10^-2 to 10^-3 in all except the polished samples.
Thermal annealing of tilted fiber Bragg gratings
González-Vila, Á.; Rodríguez-Cobo, L.; Mégret, P.; Caucheteur, C.; López-Higuera, J. M.
2016-05-01
We report a practical study of the thermal decay of cladding mode resonances in tilted fiber Bragg gratings, establishing an analogy with the "power law" evolution previously observed on uniform gratings. We examine how this process contributes to a great thermal stability, even improving it by means of a second cycle slightly increasing the annealing temperature. In addition, we show an improvement of the grating spectrum after annealing, with respect to the one just after inscription, which suggests the application of this method to be employed to improve saturation issues during the photo-inscription process.
Implantation annealing by scanning electron beam
International Nuclear Information System (INIS)
Jaussaud, C.; Biasse, B.; Cartier, A.M.; Bontemps, A.
1983-11-01
Samples of ion implanted silicon (BF2, 30 keV, 10^15 ions cm^-2) have been annealed with a multiple scan electron beam, at temperatures ranging from 1000 to 1200 °C. The curves of sheet resistance versus time show a minimum. Nuclear reaction measurements of the amount of boron remaining after annealing show that the increase in sheet resistance is due to a loss of boron. The increase in junction depths, measured by spreading resistance on bevels, is between a few hundred Å and 1000 Å [fr
On the implementation of a deterministic secure coding protocol using polarization entangled photons
Ostermeyer, Martin; Walenta, Nino
2007-01-01
We demonstrate a prototype-implementation of deterministic information encoding for quantum key distribution (QKD) following the ping-pong coding protocol [K. Bostroem, T. Felbinger, Phys. Rev. Lett. 89 (2002) 187902-1]. Due to the deterministic nature of this protocol the need for post-processing the key is distinctly reduced compared to non-deterministic protocols. In the course of our implementation we analyze the practicability of the protocol and discuss some security aspects of informat...
Vinci, Walter; Lidar, Daniel A.
2018-02-01
Nested quantum annealing correction (NQAC) is an error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. The encoding replaces each logical qubit by a complete graph of degree C. The nesting level C represents the distance of the error-correcting code and controls the amount of protection against thermal and control errors. Theoretical mean-field analyses and empirical data obtained with a D-Wave Two quantum annealer (supporting up to 512 qubits) showed that NQAC has the potential to achieve a scalable reduction, T_eff ~ C^(-eta) with eta > 0, of the effective temperature of a quantum annealer. Such an effective-temperature reduction is relevant for machine-learning applications. Since we demonstrate that NQAC achieves error correction via a reduction of the effective temperature of the quantum annealing device, our results address the problem of the "temperature scaling law for quantum annealers," which requires the temperature of quantum annealers to be reduced as larger problems are attempted.
International Nuclear Information System (INIS)
Sajnar, P.; Fiala, J.
1983-01-01
The problems of the mathematical description and simulation of temperature fields during annealing of the closing weld of the steam generator jacket of the WWER 440 nuclear power plant are discussed. The basic principles of induction annealing are given, the method of calculating temperature fields is outlined, and the boundary conditions on the outer and inner surfaces of the steam generator jacket are described mathematically for the computation of the temperature fields arising during annealing. Also described are the methods of determining the temperature of exposed parts of heat exchange tubes inside the steam generator, and the technical possibilities of the annealing equipment are assessed from the point of view of its computer simulation. Five alternatives are given for the computation of temperature fields in the area around the weld for different boundary conditions. The maximum differences in metal temperature within the annealed part of the steam generator jacket are reported; these allow the individual computation variants to be assessed, mainly with respect to maintaining the annealing temperature over the required width of the jacket on both sides of the closing weld. (B.S.)
Classification and unification of the microscopic deterministic traffic models.
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model, which can be important both for understanding human driving behavior and for designing algorithms for autonomous vehicles.
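The ground states the expansion is built around can be illustrated with a plain optimal velocity (OV) car-following model on a ring road. This is a minimal sketch, not the authors' master model; the OV function and all parameters are illustrative:

```python
# A plain optimal velocity (OV) car-following model on a ring road, the kind
# of deterministic model the classification covers. The OV function and all
# parameters are illustrative, not taken from the paper.
import math

def ov(h, v_max=30.0, h_c=20.0):
    """Illustrative optimal-velocity function: desired speed given headway h."""
    return 0.5 * v_max * (math.tanh((h - h_c) / 10.0) + math.tanh(h_c / 10.0))

def step(positions, velocities, sens=1.0, dt=0.1, road=1000.0):
    """One Euler step of dv_i/dt = sens * (V(headway_i) - v_i) on a ring road."""
    n = len(positions)
    new_v = []
    for i in range(n):
        headway = (positions[(i + 1) % n] - positions[i]) % road
        new_v.append(velocities[i] + dt * sens * (ov(headway) - velocities[i]))
    new_p = [(p + dt * v) % road for p, v in zip(positions, new_v)]
    return new_p, new_v

# Equally spaced cars relax to the uniform-flow ground state v = V(headway)
n = 10
pos = [i * 100.0 for i in range(n)]
vel = [0.0] * n
for _ in range(2000):
    pos, vel = step(pos, vel)
```

The equally spaced configuration is exactly the kind of ground state around which a model can be expanded; perturbing the headways would probe its (in)stability.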
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution, the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
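The transport component the model accounts for is governed by the advection equation. As a purely illustrative sketch (the grid, wind speed, and concentration field are made up, not from the paper), a first-order upwind step for dc/dt + u dc/dx = 0 looks like:

```python
# First-order upwind step for the advection equation dc/dt + u dc/dx = 0,
# the transport mechanism the model accounts for. Grid, wind, and field
# values are illustrative.

def upwind_advect(c, u, dx, dt):
    """One explicit upwind step on a periodic 1-D grid."""
    n = len(c)
    cr = u * dt / dx  # Courant number; stability requires |cr| <= 1
    out = []
    for i in range(n):
        if u >= 0:
            out.append(c[i] - cr * (c[i] - c[i - 1]))        # upstream is i-1
        else:
            out.append(c[i] - cr * (c[(i + 1) % n] - c[i]))  # upstream is i+1
    return out

# Advect a unit pulse; with cr = 1 the upwind step is an exact one-cell shift
c = [0.0] * 10
c[2] = 1.0
c = upwind_advect(c, u=1.0, dx=1.0, dt=1.0)
```

In a combined deterministic-statistical model, a step like this would supply the deterministic drift, with the statistical layers fitted on top of it.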
International Nuclear Information System (INIS)
Zio, Enrico
2014-01-01
Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system process. • Also, IDPSA considers time-dependent interactions among control and operator actions. • Computational efficiency by advanced Monte Carlo and meta-modelling simulations. • Efficient post-processing of IDPSA output by clustering and data mining. - Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, like nuclear, aerospace and process ones, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these and discuss the related implications in terms of research perspectives
Minaret, a deterministic neutron transport solver for nuclear core calculations
International Nuclear Information System (INIS)
Moller, J-Y.; Lautard, J-J.
2011-01-01
We present here MINARET, a deterministic transport solver for nuclear core calculations that solves the steady-state Boltzmann equation. The code follows the multi-group formalism to discretize the energy variable. It uses the discrete ordinates method for the angular variable and a discontinuous Galerkin finite element method (DGFEM) for the spatial discretization of the Boltzmann equation. The mesh is unstructured in 2D and semi-unstructured in 3D (cylindrical). Curved triangles can be used to fit the exact geometry; for the curved elements, two different sets of basis functions can be used. The transport solver is accelerated with a DSA method. Diffusion and SPN calculations are made possible by skipping the transport sweep in the source iteration. The transport calculations are parallelized with respect to the angular directions. Numerical results are presented for simple geometries and for the C5G7 benchmark, the JHR reactor and the ESFR (in 2D and 3D). Straight and curved finite element results are compared. (author)
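The source iteration that MINARET accelerates with DSA can be sketched in one dimension. The following toy one-group, S2 slab solver with diamond differencing uses made-up cross sections; it illustrates the general scheme, not MINARET's actual DGFEM discretization:

```python
# Toy one-group S2 transport solver for a 1-D slab with vacuum boundaries,
# diamond-difference sweeps, and unaccelerated source iteration.
# Cross sections and mesh are illustrative, not from MINARET.
import math

def source_iteration(nx=50, width=10.0, sig_t=1.0, sig_s=0.5, q=1.0, tol=1e-8):
    dx = width / nx
    mus = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]  # S2 quadrature nodes
    wts = [1.0, 1.0]                                     # weights sum to 2
    phi = [0.0] * nx
    for _ in range(500):
        phi_new = [0.0] * nx
        for mu, w in zip(mus, wts):
            psi_in = 0.0  # vacuum boundary condition
            cells = range(nx) if mu > 0 else range(nx - 1, -1, -1)
            for i in cells:
                s = 0.5 * (sig_s * phi[i] + q)   # isotropic scattering + source
                a = 2.0 * abs(mu) / dx
                psi_cell = (s + a * psi_in) / (sig_t + a)  # diamond difference
                psi_in = 2.0 * psi_cell - psi_in           # outgoing edge flux
                phi_new[i] += w * psi_cell
        if max(abs(p - pn) for p, pn in zip(phi, phi_new)) < tol:
            phi = phi_new
            break
        phi = phi_new
    return phi

phi = source_iteration()  # scalar flux; infinite-medium limit is q/(sig_t - sig_s) = 2
```

With a scattering ratio of 0.5 the plain iteration converges quickly; DSA matters when scattering dominates and each sweep removes little error.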
Analysis of deterministic cyclic gene regulatory network models with delays
Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian
2015-01-01
This brief examines a deterministic, ODE-based model for gene regulatory networks (GRNs) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays, and a special case of a homogeneous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
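The ingredients named above, a Hill nonlinearity plus delayed negative feedback, can be sketched for a single gene. This is a minimal illustration with made-up parameters (chosen so the steady state is x* = 1 and the delay sits below the oscillation threshold), not a model from the book:

```python
# A single-gene negative-feedback loop with transcriptional delay and a
# repressive Hill function: dx/dt = beta * H(x(t - tau)) - alpha * x(t).
# Parameters are illustrative, not values from the book.

def hill_repress(x, K=1.0, n=4):
    """Repressive Hill function H(x) = 1 / (1 + (x/K)^n)."""
    return 1.0 / (1.0 + (x / K) ** n)

def simulate(beta=2.0, alpha=1.0, tau=0.5, dt=0.001, t_end=50.0, x0=0.1):
    steps = int(t_end / dt)
    lag = int(tau / dt)
    xs = [x0]
    for k in range(steps):
        x_delayed = xs[k - lag] if k >= lag else x0  # constant pre-history
        x = xs[-1]
        xs.append(x + dt * (beta * hill_repress(x_delayed) - alpha * x))
    return xs

xs = simulate()  # converges to the fixed point alpha*x = beta*H(x), i.e. x* = 1
```

Increasing tau past the critical delay (about 1.2 for these coefficients, from the linearization dδ/dt = -δ - 2δ(t - τ)) would turn the damped approach into sustained oscillations, which is exactly the kind of behavior the stability analysis characterizes.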
Distributed Design of a Central Service to Ensure Deterministic Behavior
Directory of Open Access Journals (Sweden)
Imran Ali Jokhio
2012-10-01
A central authentication service for the EPC (Electronic Product Code) system architecture was proposed in our previous work. A challenge for any central service is how to ensure a bounded delay while processing emergent data. The growing data in the EPC system architecture are tag data. Therefore, authenticating an increasing number of tags in the central authentication service with a deterministic response time is investigated, and a distributed authentication service is designed in a layered approach. A distributed design of tag searching services in the SOA (Service Oriented Architecture) style is also presented. Using the SOA architectural style, a self-adaptive authentication service over the Cloud is also proposed for the central authentication service, which may also be extended to other applications.
Deterministic Evolutionary Trajectories Influence Primary Tumor Growth: TRACERx Renal
DEFF Research Database (Denmark)
Turajlic, Samra; Xu, Hang; Litchfield, Kevin
2018-01-01
The evolutionary features of clear-cell renal cell carcinoma (ccRCC) have not been systematically studied to date. We analyzed 1,206 primary tumor regions from 101 patients recruited into the multi-center prospective study, TRACERx Renal. We observe up to 30 driver events per tumor and show that subclonal diversification is associated with known prognostic parameters. By resolving the patterns of driver event ordering, co-occurrence, and mutual exclusivity at clone level, we show the deterministic nature of clonal evolution. ccRCC can be grouped into seven evolutionary subtypes, ranging from tumors ... outcome. Our insights reconcile the variable clinical behavior of ccRCC and suggest evolutionary potential as a biomarker for both intervention and surveillance.
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
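Among the topics listed, the efficient treatment of Langevin dynamics can be illustrated with the BAOAB splitting method the book discusses. This is a minimal sketch for a harmonic oscillator with illustrative parameters:

```python
# A BAOAB integrator for Langevin dynamics of a 1-D harmonic oscillator,
# one of the splitting methods the book treats; all parameters illustrative.
import math
import random

def baoab_mean_q2(steps=500000, dt=0.05, gamma=1.0, kT=1.0, m=1.0, k=1.0, seed=1):
    rng = random.Random(seed)
    c1 = math.exp(-gamma * dt)                # O-step damping factor
    c2 = math.sqrt(m * kT * (1.0 - c1 * c1))  # O-step noise amplitude
    q, p = 1.0, 0.0
    q2_sum = 0.0
    for _ in range(steps):
        p -= 0.5 * dt * k * q                    # B: half kick
        q += 0.5 * dt * p / m                    # A: half drift
        p = c1 * p + c2 * rng.gauss(0.0, 1.0)    # O: exact Ornstein-Uhlenbeck step
        q += 0.5 * dt * p / m                    # A: half drift
        p -= 0.5 * dt * k * q                    # B: half kick
        q2_sum += q * q
    return q2_sum / steps

mean_q2 = baoab_mean_q2()  # equipartition predicts <q^2> = kT/k = 1
```

The B-A-O-A-B ordering matters: it is what gives the method its notably small bias in configurational averages such as the mean of q².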
HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks
Directory of Open Access Journals (Sweden)
Luca Marchetti
2017-01-01
HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
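The exact-simulation building block that hybrid schemes such as HRSSA combine with deterministic solvers is the Gillespie direct method. A minimal sketch for a single decay reaction (in Python rather than the simulator's Java, with illustrative numbers):

```python
# Gillespie direct-method SSA for the single reaction A -> B with rate c.
# This is the exact stochastic kernel that hybrid simulators build on;
# species counts and rates here are illustrative.
import math
import random

def ssa_decay(n0=1000, c=1.0, t_end=1.0, seed=0):
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        a = c * n                 # propensity of A -> B
        t += rng.expovariate(a)   # exponential waiting time to the next firing
        if t > t_end:
            break
        n -= 1                    # fire the reaction
    return n

# Average over runs; the mean of N(t_end) is n0 * exp(-c * t_end)
remaining = [ssa_decay(seed=s) for s in range(50)]
avg = sum(remaining) / len(remaining)
```

A hybrid scheme would reserve this event-by-event treatment for the slow reactions and integrate the fast ones deterministically, which is where the speed-ups come from.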
Deterministic global optimization an introduction to the diagonal approach
Sergeyev, Yaroslav D
2017-01-01
This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...
Deterministic secure communications using two-mode squeezed states
International Nuclear Information System (INIS)
Marino, Alberto M.; Stroud, C. R. Jr.
2006-01-01
We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state
Deterministically entangling multiple remote quantum memories inside an optical cavity
Yan, Zhihui; Liu, Yanhong; Yan, Jieli; Jia, Xiaojun
2018-01-01
Quantum memory for the nonclassical state of light and entanglement among multiple remote quantum nodes hold promise for a large-scale quantum network, however, continuous-variable (CV) memory efficiency and entangled degree are limited due to imperfect implementation. Here we propose a scheme to deterministically entangle multiple distant atomic ensembles based on CV cavity-enhanced quantum memory. The memory efficiency can be improved with the help of cavity-enhanced electromagnetically induced transparency dynamics. A high degree of entanglement among multiple atomic ensembles can be obtained by mapping the quantum state from multiple entangled optical modes into a collection of atomic spin waves inside optical cavities. Besides being of interest in terms of unconditional entanglement among multiple macroscopic objects, our scheme paves the way towards the practical application of quantum networks.
A deterministic model of nettle caterpillar life cycle
Syukriyah, Y.; Nuraini, N.; Handayani, D.
2018-03-01
Palm oil is a flagship product of the plantation sector in Indonesia, and its productivity has the potential to increase every year. However, actual productivity remains below this potential. Pests and diseases are the main factors that can reduce production levels by up to 40%. Because pest outbreaks in plants can be triggered by various factors, measures to control pest attacks should be prepared as early as possible. Caterpillars are the main pests in oil palm; nettle caterpillars are leaf eaters that can significantly decrease palm productivity. We construct a deterministic model that describes the life cycle of the caterpillar and its mitigation by a caterpillar predator. The equilibrium points of the model are analyzed, and numerical simulations are constructed to show how the predator, as a natural enemy, affects the nettle caterpillar life cycle.
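A deterministic pest-predator interaction of the kind described can be sketched with a Lotka-Volterra-type system with logistic pest growth. The structure and all rates below are hypothetical, chosen only to illustrate how a predator shifts the pest equilibrium; they are not the model of the paper:

```python
# Illustrative deterministic pest-predator sketch: logistic caterpillar growth
# with predation. Structure and rates are hypothetical, not from the paper.

def simulate(pest0=50.0, pred0=5.0, r=0.5, K=200.0, a=0.02, b=0.01, m=0.2,
             dt=0.001, t_end=400.0):
    x, y = pest0, pred0
    for _ in range(int(t_end / dt)):
        dx = r * x * (1.0 - x / K) - a * x * y  # pest growth minus predation
        dy = b * x * y - m * y                  # predator growth minus mortality
        x += dt * dx
        y += dt * dy
    return x, y

# Coexistence equilibrium: x* = m/b = 20, y* = r(1 - x*/K)/a = 22.5
x_eq, y_eq = simulate()
```

Without the predator the pest would settle at its carrying capacity K = 200; with it, the system spirals into the much lower coexistence equilibrium, which is the qualitative effect biological control aims for.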
Location deterministic biosensing from quantum-dot-nanowire assemblies
International Nuclear Information System (INIS)
Liu, Chao; Kim, Kwanoh; Fan, D. L.
2014-01-01
Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location-deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating-current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes onto the QDs on the tips of the nanowires before detection, offering much enhanced efficiency and sensitivity in addition to position-predictable detection. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.
Variants of glycoside hydrolases
Teter, Sarah [Davis, CA; Ward, Connie [Hamilton, MT; Cherry, Joel [Davis, CA; Jones, Aubrey [Davis, CA; Harris, Paul [Carnation, WA; Yi, Jung [Sacramento, CA
2011-04-26
The present invention relates to variants of a parent glycoside hydrolase, comprising a substitution at one or more positions corresponding to positions 21, 94, 157, 205, 206, 247, 337, 350, 373, 383, 438, 455, 467, and 486 of amino acids 1 to 513 of SEQ ID NO: 2, and optionally further comprising a substitution at one or more positions corresponding to positions 8, 22, 41, 49, 57, 113, 193, 196, 226, 227, 246, 251, 255, 259, 301, 356, 371, 411, and 462 of amino acids 1 to 513 of SEQ ID NO: 2, wherein the variants have glycoside hydrolase activity. The present invention also relates to nucleotide sequences encoding the variant glycoside hydrolases and to nucleic acid constructs, vectors, and host cells comprising the nucleotide sequences.
Absorbing phase transitions in deterministic fixed-energy sandpile models
Park, Su-Chan
2018-03-01
We investigate the origin of the difference, which was noticed by Fey et al. [Phys. Rev. Lett. 104, 145703 (2010), 10.1103/PhysRevLett.104.145703], between the steady state density of an Abelian sandpile model (ASM) and the transition point of its corresponding deterministic fixed-energy sandpile model (DFES). Being deterministic, the configuration space of a DFES can be divided into two disjoint classes such that every configuration in one class should evolve into one of absorbing states, whereas no configurations in the other class can reach an absorbing state. Since the two classes are separated in terms of toppling dynamics, the system can be made to exhibit an absorbing phase transition (APT) at various points that depend on the initial probability distribution of the configurations. Furthermore, we show that in general the transition point also depends on whether an infinite-size limit is taken before or after the infinite-time limit. To demonstrate, we numerically study the two-dimensional DFES with Bak-Tang-Wiesenfeld toppling rule (BTW-FES). We confirm that there are indeed many thresholds. Nonetheless, the critical phenomena at various transition points are found to be universal. We furthermore discuss a microscopic absorbing phase transition, or a so-called spreading dynamics, of the BTW-FES, to find that the phase transition in this setting is related to the dynamical isotropic percolation process rather than self-organized criticality. In particular, we argue that choosing recurrent configurations of the corresponding ASM as an initial configuration does not allow for a nontrivial APT in the DFES.
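The deterministic toppling dynamics described above can be sketched directly. This is a minimal BTW-FES on a small torus with parallel updates; lattice size and threshold are illustrative:

```python
# Sketch of a deterministic BTW fixed-energy sandpile on an L x L torus with
# parallel toppling; lattice size and threshold are illustrative.

def sweep(grid, L, zc=4):
    """Topple every site with height >= zc once, in parallel; return activity."""
    unstable = [(i, j) for i in range(L) for j in range(L) if grid[i][j] >= zc]
    for i, j in unstable:
        grid[i][j] -= zc
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            grid[(i + di) % L][(j + dj) % L] += 1  # grains are conserved
    return len(unstable)

def relax(grid, L, max_sweeps=100):
    for t in range(max_sweeps):
        if sweep(grid, L) == 0:
            return t          # reached an absorbing configuration
    return max_sweeps         # still active within the sweep budget

# Density 3 (all sites below threshold) is already absorbing
L = 8
grid = [[3] * L for _ in range(L)]
t_absorb = relax(grid, L)  # no activity at all
```

Because the dynamics is deterministic, each initial configuration either relaxes to an absorbing state or stays active forever (the uniform all-4 state, for instance, reproduces itself every sweep), which is exactly the dichotomy behind the two disjoint classes discussed above.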
Realization of deterministic quantum teleportation with solid state qubits
International Nuclear Information System (INIS)
Andreas Wallraff
2014-01-01
Using modern micro and nano-fabrication techniques combined with superconducting materials we realize electronic circuits the dynamics of which are governed by the laws of quantum mechanics. Making use of the strong interaction of photons with superconducting quantum two-level systems realized in these circuits we investigate both fundamental quantum effects of light and applications in quantum information processing. In this talk I will discuss the deterministic teleportation of a quantum state in a macroscopic quantum system. Teleportation may be used for distributing entanglement between distant qubits in a quantum network and for realizing universal and fault-tolerant quantum computation. Previously, we have demonstrated the implementation of a teleportation protocol, up to the single-shot measurement step, with three superconducting qubits coupled to a single microwave resonator. Using full quantum state tomography and calculating the projection of the measured density matrix onto the basis of two qubits has allowed us to reconstruct the teleported state with an average output state fidelity of 86%. Now we have realized a new device in which four qubits are coupled pair-wise to three resonators. Making use of parametric amplifiers coupled to the output of two of the resonators, we are able to perform high-fidelity single-shot read-out. This has allowed us to demonstrate teleportation by individually post-selecting on any Bell state and by deterministically distinguishing between all four Bell states measured by the sender. In addition, we have recently implemented fast feed-forward to complete the teleportation process. In all instances, we demonstrate that the fidelity of the teleported states is above the threshold imposed by classical physics. The presented experiments are expected to contribute towards realizing quantum communication with microwave photons in the foreseeable future. (author)
Measures of thermodynamic irreversibility in deterministic and stochastic dynamics
International Nuclear Information System (INIS)
Ford, Ian J
2015-01-01
It is generally observed that if a dynamical system is sufficiently complex, then as time progresses it will share out energy and other properties amongst its component parts to eliminate any initial imbalances, retaining only fluctuations. This is known as energy dissipation and it is closely associated with the concept of thermodynamic irreversibility, measured by the increase in entropy according to the second law. It is of interest to quantify such behaviour from a dynamical rather than a thermodynamic perspective and to this end stochastic entropy production and the time-integrated dissipation function have been introduced as analogous measures of irreversibility, principally for stochastic and deterministic dynamics, respectively. We seek to compare these measures. First we modify the dissipation function to allow it to measure irreversibility in situations where the initial probability density function (pdf) of the system is asymmetric as well as symmetric in velocity. We propose that it tests for failure of what we call the obversibility of the system, to be contrasted with reversibility, the failure of which is assessed by stochastic entropy production. We note that the essential difference between stochastic entropy production and the time-integrated modified dissipation function lies in the sequence of procedures undertaken in the associated tests of irreversibility. We argue that an assumed symmetry of the initial pdf with respect to velocity inversion (within a framework of deterministic dynamics) can be incompatible with the Past Hypothesis, according to which there should be a statistical distinction between the behaviour of certain properties of an isolated system as it evolves into the far future and the remote past. Imposing symmetry on a velocity distribution is acceptable for many applications of statistical physics, but can introduce difficulties when discussing irreversible behaviour. (paper)
Deterministic Earthquake Hazard Assessment by Public Agencies in California
Mualchin, L.
2005-12-01
Even in its short recorded history, California has experienced a number of damaging earthquakes that have resulted in new codes and other legislation for public safety. In particular, the 1971 San Fernando earthquake produced some of the most lasting results such as the Hospital Safety Act, the Strong Motion Instrumentation Program, the Alquist-Priolo Special Studies Zone Act, and the California Department of Transportation's (Caltrans') fault-based deterministic seismic hazard (DSH) map. The latter product provides values for earthquake ground motions based on Maximum Credible Earthquakes (MCEs), defined as the largest earthquakes that can reasonably be expected on faults in the current tectonic regime. For surface fault rupture displacement hazards, detailed study of the same faults applies. Originally, hospitals, dams, and other critical facilities used seismic design criteria based on deterministic seismic hazard analyses (DSHA). However, probabilistic methods grew and took hold by introducing earthquake design criteria based on time factors and by quantifying "uncertainties" through procedures such as logic trees. These probabilistic seismic hazard analyses (PSHA) ignored the DSH approach. Some agencies were influenced to adopt only the PSHA method. However, deficiencies in the PSHA method are becoming recognized, and its use is now a focus of strong debate. Caltrans is in the process of producing the fourth edition of its DSH map. The reason for preferring the DSH method is that Caltrans believes it is more realistic than the probabilistic method for assessing earthquake hazards that may affect critical facilities, and is the best available method for ensuring public safety. Its time-invariant values help to produce robust design criteria that are soundly based on physical evidence, and it is the method for which there is the least opportunity for unwelcome surprises.
Deterministic calculations of radiation doses from brachytherapy seeds
International Nuclear Information System (INIS)
Reis, Sergio Carneiro dos; Vasconcelos, Vanderley de; Santos, Ana Maria Matildes dos
2009-01-01
Brachytherapy is used for treating certain types of cancer by inserting radioactive sources into tumours. CDTN/CNEN is developing brachytherapy seeds to be used mainly in prostate cancer treatment. Dose calculations play a very significant role in the characterization of the developed seeds. The current state of the art in computational dosimetry relies on Monte Carlo methods using, for instance, MCNP codes. However, deterministic calculations have some advantages, such as short computation times. This paper presents software developed to calculate doses in a two-dimensional space surrounding the seed, using a deterministic algorithm. The analysed seeds consist of capsules similar to the commercially available IMC6711 (OncoSeed). The exposure rates and absorbed doses are computed using the Sievert integral and the Meisberger third-order polynomial, respectively. The software also allows isodose visualization at the surface plane. The user can choose among four different radionuclides (Ir-192, Au-198, Cs-137 and Co-60) and must also enter as input data: the exposure rate constant; the source activity; the active length of the source; the number of segments into which the source will be divided; the total source length; the source diameter; and the actual and effective source thickness. The computed results were benchmarked against results from the literature, and the developed software will be used to support the characterization of the source being developed at CDTN. The software was implemented using Borland Delphi in a Windows environment and is an alternative to Monte Carlo based codes. (author)
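The Sievert integral named above has no closed form and is evaluated numerically. As a hedged sketch (in Python rather than the paper's Delphi; the geometry and attenuation values are illustrative, not the CDTN seed parameters):

```python
# Numerical Sievert integral for a filtered line source, the quantity the
# described software evaluates; all numerical values are illustrative.
import math

def sievert_integral(theta1, theta2, mu_t, n=2000):
    """Trapezoidal quadrature of exp(-mu_t / cos(theta)) over [theta1, theta2],
    where mu_t is the filter attenuation coefficient times its thickness."""
    h = (theta2 - theta1) / n
    total = 0.0
    for k in range(n + 1):
        theta = theta1 + k * h
        w = 0.5 if k in (0, n) else 1.0  # trapezoidal end-point weights
        total += w * math.exp(-mu_t / math.cos(theta))
    return total * h

def line_source_exposure_rate(gamma_const, activity, length, x, y, mu_t):
    """Exposure rate at (x, y) from a line source of the given active length
    lying on the x-axis, centred at the origin (unfiltered when mu_t = 0)."""
    theta1 = math.atan((x - length / 2.0) / y)
    theta2 = math.atan((x + length / 2.0) / y)
    return gamma_const * activity / (length * y) * sievert_integral(theta1, theta2, mu_t)
```

Evaluating this on a 2-D grid of (x, y) points and contouring the result is what produces the isodose visualization the abstract mentions.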
Finite-time thermodynamics and simulated annealing
International Nuclear Information System (INIS)
Andresen, B.
1989-01-01
When the general, global optimization technique simulated annealing was introduced by Kirkpatrick et al. (1983), this mathematical algorithm was based on an analogy to the statistical mechanical behavior of real physical systems like spin glasses, hence the name. In the intervening span of years the method has proven exceptionally useful for a great variety of extremely complicated problems, notably NP-hard problems like the travelling salesman, DNA sequencing, and graph partitioning. Only a few highly optimized heuristic algorithms (e.g. Lin, Kernighan 1973) have outperformed simulated annealing on their respective problems (Johnson et al. 1989). Simulated annealing in its current form relies only on the static quantity 'energy' to describe the system, whereas questions of rate, as in the temperature path (annealing schedule, see below), are left to intuition. We extend the connection to physical systems and take over further components from thermodynamics, such as ensemble, heat capacity, and relaxation time. Finally, we refer to finite-time thermodynamics (Andresen, Salomon, Berry 1984) for a dynamical estimate of the optimal temperature path. (orig.)
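The intuition-based schedule the abstract criticizes can be made concrete with a minimal simulated annealing sketch using geometric cooling; finite-time thermodynamics would instead derive the temperature path from heat capacity and relaxation time. The toy landscape and all parameters are illustrative:

```python
# Minimal simulated annealing with a geometric cooling schedule on a rugged
# 1-D toy landscape. All parameters are illustrative.
import math
import random

def anneal(energy, neighbor, x0, t0=10.0, alpha=0.999, steps=20000, seed=3):
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        de = energy(y) - e
        if de <= 0 or rng.random() < math.exp(-de / t):  # Metropolis criterion
            x, e = y, e + de
            if e < best_e:
                best_x, best_e = x, e
        t *= alpha  # geometric cooling: the "intuitive" annealing schedule
    return best_x, best_e

f = lambda x: x * x + 4.0 * math.sin(5.0 * x)      # global minimum near x = -0.3
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)   # symmetric proposal
x_best, e_best = anneal(f, step, x0=4.0)
```

The fixed factor alpha encodes no knowledge of the system; the paper's point is that quantities like heat capacity and relaxation time tell you where cooling should slow down.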
Intelligent medical image processing by simulated annealing
International Nuclear Information System (INIS)
Ohyama, Nagaaki
1992-01-01
Image processing is widely used in the medical field and has already become very important, especially when used for image reconstruction purposes. In this paper, it is shown that image processing can be classified into 4 categories: passive, active, intelligent and visual image processing. These 4 classes are first explained through the use of several examples. The results show that passive image processing does not give better results than the others. Intelligent image processing is then addressed, and the simulated annealing method is introduced. Due to the flexibility of simulated annealing, formulated intelligence is shown to be easily introduced into an image reconstruction problem. As a practical example, 3D blood vessel reconstruction from a small number of projections, which is insufficient for conventional methods to give good reconstruction, is proposed, and computer simulation clearly shows the effectiveness of the simulated annealing method. Prior to the conclusion, medical file systems such as IS&C (Image Save and Carry) are pointed out to have potential for formulating knowledge, which is indispensable for intelligent image processing. This paper concludes by summarizing the advantages of simulated annealing. (author)
In situ annealing of hydroxyapatite thin films
International Nuclear Information System (INIS)
Johnson, Shevon; Haluska, Michael; Narayan, Roger J.; Snyder, Robert L.
2006-01-01
Hydroxyapatite is a bioactive ceramic that mimics the mineral composition of natural bone. Unfortunately, problems with adhesion, poor mechanical integrity, and incomplete bone ingrowth limit the use of many conventional hydroxyapatite surfaces. In this work, we have developed a novel technique to produce crystalline hydroxyapatite thin films involving pulsed laser deposition and postdeposition annealing. Hydroxyapatite films were deposited on Ti-6Al-4V alloy and Si (100) using pulsed laser deposition, and annealed within a high temperature X-ray diffraction system. The transformation from amorphous to crystalline hydroxyapatite was observed at 340 deg. C. Mechanical and adhesive properties were examined using nanoindentation and scratch adhesion testing, respectively. Nanohardness and Young's modulus values of 3.48 and 91.24 GPa were realized in unannealed hydroxyapatite films. Unannealed and 350 deg. C annealed hydroxyapatite films exhibited excellent adhesion to Ti-6Al-4V alloy substrates. We anticipate that the adhesion and biological properties of crystalline hydroxyapatite thin films may be enhanced by further consideration of deposition and annealing parameters
Unraveling Quantum Annealers using Classical Hardness
Martin-Mayor, Victor; Hen, Itay
2015-01-01
Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257
Hardening of niobium alloys at precrystallization annealing
International Nuclear Information System (INIS)
Vasil'eva, E.V.; Pustovalov, V.A.
1989-01-01
Niobium base alloys were investigated. It is shown that precrystallization annealing of niobium-molybdenum, niobium-vanadium and niobium-zirconium alloys increases their resistance to microplastic strains considerably more than their resistance to macroplastic strains. The hardening effect differs substantially among the alloys: the maximal hardening is observed for niobium-vanadium alloys, the minimal for niobium-zirconium alloys
Positron prevacancy effects in pure annealed metals
International Nuclear Information System (INIS)
Smedskjaer, L.C.
1981-06-01
The low-temperature prevacancy effects sometimes observed with positrons in well-annealed high-purity metals are discussed. It is shown that these effects are not experimental artifacts, but are due to trapping of the positrons. It is suggested that dislocations are responsible for these trapping effects. 46 references, 5 figures
Thin-film designs by simulated annealing
Boudet, T.; Chaton, P.; Herault, L.; Gonon, G.; Jouanet, L.; Keller, P.
1996-11-01
With the increasing power of computers, new methods in synthesis of optical multilayer systems have appeared. Among these, the simulated-annealing algorithm has proved its efficiency in several fields of physics. We propose to show its performances in the field of optical multilayer systems through different filter designs.
Thermal annealing in neutron-irradiated tribromobenzenes
DEFF Research Database (Denmark)
Siekierska, K.E.; Halpern, A.; Maddock, A. G.
1968-01-01
in the crystals was estimated by means of the 1,2-dibromoethylene exchange technique. The results suggest that, as a consequence of nuclear events, quite a number of different reactions occur whereas the principal annealing reaction is a recombination of atomic bromine with a dibromophenyl radical....
Job shop scheduling by simulated annealing
Laarhoven, van P.J.M.; Aarts, E.H.L.; Lenstra, J.K.
1992-01-01
We describe an approximation algorithm for the problem of finding the minimum makespan in a job shop. The algorithm is based on simulated annealing, a generalization of the well known iterative improvement approach to combinatorial optimization problems. The generalization involves the acceptance of
Entanglement in a Quantum Annealing Processor
2016-09-07
... The annealing parameter s is controlled with the global bias Φx_ccjj(t) (see Appendix A for the mapping between s and Φx_ccjj and a ... macroscopic rf SQUID parameters: junction critical current Ic, qubit inductance Lq, and qubit capacitance Cq. We calibrate all of these parameters on this
Influence of Intercritical Annealing Temperature on Mechanical ...
African Journals Online (AJOL)
The fracture surfaces of the impact test samples were examined using a scanning electron microscope (SEM). Microstructural evolution of the samples was also examined with an optical microscope. The results showed that all the evaluated mechanical properties were improved by intercritical annealing, with the ...
Decorative properties of annealed TiN coatings
International Nuclear Information System (INIS)
Klubovich, V.V.; Rubanik, V.V.; Bagrets, D.A.
2012-01-01
The decorative properties of annealed TiN coatings formed on austenitic stainless steel by vacuum-arc deposition were investigated. It was shown that the colour characteristics of TiN films can be controlled by heat treatment at different temperatures and times, which expands their usage as decorative coatings. (authors)
Investigations of morphological changes during annealing of polyethylene single crystals
Tian, M.; Loos, J.
2001-01-01
The morphological evolution of isolated individual single crystals deposited on solid substrates was investigated during annealing experiments using in situ and ex situ atomic force microscopy techniques. The crystal morphology changed during annealing at temperatures slightly above the original
High-temperature annealing of graphite: A molecular dynamics study
Petersen, Andrew; Gillette, Victor
2018-05-01
A modified AIREBO potential was developed to simulate the effects of thermal annealing on the structure and physical properties of damaged graphite. The AIREBO parameters were modified to reproduce density functional theory interstitial results. With these changes, the model exhibits high-temperature annealing, as measured by stored-energy reduction. The simulations show some resemblance to experimental high-temperature annealing results and suggest that annealing effects in graphite are accessible with molecular dynamics and reactive potentials.
On lumped models for thermodynamic properties of simulated annealing problems
International Nuclear Information System (INIS)
Andresen, B.; Pedersen, J.M.; Salamon, P.; Hoffmann, K.H.; Mosegaard, K.; Nulton, J.
1987-01-01
The paper describes a new method for the estimation of thermodynamic properties of simulated annealing problems using data obtained during a simulated annealing run. The method works by estimating energy-to-energy transition probabilities and is well adapted to simulations such as simulated annealing, in which the system is never in equilibrium. (orig.)
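A minimal sketch of the estimation step, assuming energies are simply binned and consecutive-step transitions counted (the paper's exact estimator is not reproduced here):

```python
from collections import defaultdict

def transition_matrix(energies, bin_width=1.0):
    """Estimate energy-to-energy transition probabilities from the
    sequence of energies visited during an annealing run. Energies are
    binned to a grid of width bin_width; counts of consecutive bin
    pairs are row-normalized into probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    bins = [round(e / bin_width) for e in energies]
    for a, b in zip(bins, bins[1:]):
        counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs
```

From such a matrix, equilibrium-free estimates of quantities like the density of states or relaxation times can then be attempted, which is the spirit of the lumped-model approach described above.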
A note on simulated annealing to computer laboratory scheduling ...
African Journals Online (AJOL)
The concepts, principles and implementation of simulated annealing as a modern heuristic technique are presented. The simulated annealing algorithm is used in solving a real-life problem of computer laboratory scheduling in order to maximize the use of scarce and insufficient resources. KEY WORDS: Simulated Annealing ...
S. Boldyreva; S. Fehr (Serge); A. O'Neill; D. Wagner
2008-01-01
textabstractThe study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO ’07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic
Accurate genotyping across variant classes and lengths using variant graphs
DEFF Research Database (Denmark)
Sibbesen, Jonas Andreas; Maretty, Lasse; Jensen, Jacob Malte
2018-01-01
...collecting a set of candidate variants across discovery methods, individuals and databases, and then realigning the reads to the variants and reference simultaneously. However, this realignment problem has proved computationally difficult. Here, we present a new method (BayesTyper) that uses exact alignment of read k-mers to a graph representation of the reference and variants to efficiently perform unbiased, probabilistic genotyping across the variation spectrum. We demonstrate that BayesTyper generally provides superior variant sensitivity and genotyping accuracy relative to existing methods when used to integrate variants across discovery approaches and individuals. Finally, we demonstrate that including a 'variation-prior' database containing already known variants significantly improves sensitivity.
International Nuclear Information System (INIS)
Mohanta, Dusmanta Kumar; Sadhu, Pradip Kumar; Chakrabarti, R.
2007-01-01
This paper presents a comparison of results for optimization of captive power plant maintenance scheduling using genetic algorithm (GA) as well as hybrid GA/simulated annealing (SA) techniques. As utilities catered for by captive power plants are very sensitive to power failure, both deterministic and stochastic reliability objective functions have been considered to incorporate statutory safety regulations for maintenance of boilers, turbines and generators. A significant contribution of this paper is to incorporate the stochastic features of the generating units and of the load using the levelized risk method. Another significant contribution is to evaluate a confidence interval for the loss of load probability (LOLP), because some variations from the optimum schedule are anticipated while executing maintenance schedules due to different real-life unforeseen exigencies. Such exigencies are incorporated in terms of near-optimum schedules obtained from the hybrid GA/SA technique during the final stages of convergence. Case studies corroborate that the same optimum schedules are obtained using GA and hybrid GA/SA for the respective deterministic and stochastic formulations. The comparison of results in terms of the interval of confidence for LOLP indicates that the levelized risk method adequately incorporates the stochastic nature of the power system as compared with the levelized reserve method. Also, the interval of confidence for LOLP denotes the possible risk in a quantified manner and is of immense use from the perspective of captive power plants intended for quality power
Aspects of cell calculations in deterministic reactor core analysis
International Nuclear Information System (INIS)
Varvayanni, M.; Savva, P.; Catsaros, N.
2011-01-01
The capability of achieving optimum utilization of deterministic neutronic codes is very important since, although they are elaborate tools, they are still widely used for nuclear reactor core analyses, due to specific advantages they present compared to Monte Carlo codes. The user of a deterministic neutronic code system has to make some significant physical assumptions if correct results are to be obtained. A decisive first step at which such assumptions are required is the one-dimensional cell calculations, which provide the neutronic properties of the homogenized core cells and collapse the cross sections into user-defined energy groups. One of the most crucial determinations required at this stage, significantly influencing the subsequent three-dimensional calculations of reactivity, concerns the transverse leakages associated with each one-dimensional, user-defined core cell. For the appropriate definition of the transverse leakages, several parameters concerning the core configuration must be taken into account. Moreover, the suitability of the assumptions made for the transverse cell leakages depends on earlier user decisions, such as those made for the core partition into homogeneous cells. In the present work, the sensitivity of the calculated core reactivity to the determined leakages of the individual cells constituting the core is studied. Moreover, appropriate assumptions concerning the transverse leakages in the one-dimensional cell calculations are searched out. The study also examines the influence of the core size and the presence of a reflector, while the effect of the decisions made for the core partition into homogeneous cells is investigated. In addition, the effect of broadened moderator channels formed within the core (e.g. by removing fuel plates to create space for control rod hosting) is also examined. Since the study required a large number of conceptual core configurations, experimental data could not be available for
Directory of Open Access Journals (Sweden)
Pascal Schopp
2017-11-01
A major application of genomic prediction (GP) in plant breeding is the identification of superior inbred lines within families derived from biparental crosses. When models for various traits were trained within related or unrelated biparental families (BPFs), experimental studies found substantial variation in prediction accuracy (PA), but little is known about the underlying factors. We used SNP marker genotypes of inbred lines from either elite germplasm or landraces of maize (Zea mays L.) as parents to generate in silico 300 BPFs of doubled-haploid lines. We analyzed PA within each BPF for 50 simulated polygenic traits, using genomic best linear unbiased prediction (GBLUP) models trained with individuals from either full-sib (FSF), half-sib (HSF), or unrelated families (URF) for various sizes (Ntrain) of the training set and different heritabilities (h2). In addition, we modified two deterministic equations for forecasting PA to account for inbreeding and genetic variance unexplained by the training set. Averaged across traits, PA was high within FSF (0.41-0.97), with large variation only for Ntrain < 50 and h2 < 0.6. For HSF and URF, PA was on average ∼40-60% lower and varied substantially among different combinations of BPFs used for model training and prediction, as well as among traits. As exemplified by the HSF results, PA of across-family GP can be very low if causal variants not segregating in the training set account for a sizeable proportion of the genetic variance among predicted individuals. Deterministic equations accurately forecast the PA expected over many traits, yet cannot capture trait-specific deviations. We conclude that model training within BPFs generally yields stable PA, whereas a high level of uncertainty is encountered in across-family GP. Our study shows the extent of variation in PA that must be reckoned with in practice and offers a starting point for the design of training sets composed of multiple BPFs.
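For orientation, a widely used deterministic forecast of prediction accuracy has the following form. This is the classical Daetwyler-type equation, shown here as background; it is not one of the modified equations of the study (which additionally account for inbreeding and genetic variance unexplained by the training set):

```python
import math

def forecast_accuracy(n_train, h2, m_e):
    """Daetwyler-type deterministic forecast of genomic prediction
    accuracy: r = sqrt(N * h^2 / (N * h^2 + Me)), where N is the
    training set size, h2 the trait heritability, and Me the effective
    number of independent chromosome segments."""
    return math.sqrt(n_train * h2 / (n_train * h2 + m_e))
```

The formula captures the qualitative behavior discussed above: accuracy rises with training set size and heritability, and falls as more independent segments must be estimated.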
Variants of Moreau's sweeping process
International Nuclear Information System (INIS)
Siddiqi, A.H.; Manchanda, P.
2001-07-01
In this paper we prove the existence and uniqueness of two variants of Moreau's sweeping process -u'(t) is an element of Nc (t) (u(t)), where in one variant we replace u(t) by u'(t) in the right-hand side of the inclusion and in the second variant u'(t) and u(t) are respectively replaced by u''(t) and u'(t). (author)
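In the notation of the abstract, the process and its two variants read:

```latex
% Moreau's sweeping process and the two variants considered
-u'(t)  \in N_{C(t)}\big(u(t)\big)   % classical sweeping process
-u'(t)  \in N_{C(t)}\big(u'(t)\big)  % variant 1: u(t) replaced by u'(t)
-u''(t) \in N_{C(t)}\big(u'(t)\big)  % variant 2: second-order form
```

where N_{C(t)}(·) denotes the normal cone to the moving convex set C(t).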
International Nuclear Information System (INIS)
Peresan, A.; Vaccari, F.; Panza, G.F.; Zuccolo, E.; Gorshkov, A.
2009-05-01
An integrated neo-deterministic approach to seismic hazard assessment has been developed that combines different pattern recognition techniques, designed for the space-time identification of strong earthquakes, with algorithms for the realistic modeling of seismic ground motion. The integrated approach allows for a time dependent definition of the seismic input, through the routine updating of earthquake predictions. The scenarios of expected ground motion, associated with the alarmed areas, are defined by means of full waveform modeling. A set of neo-deterministic scenarios of ground motion is defined at regional and local scale, thus providing a prioritization tool for timely prevention and mitigation actions. Constraints about the space and time of occurrence of the impending strong earthquakes are provided by three formally defined and globally tested algorithms, which have been developed according to a pattern recognition scheme. Two algorithms, namely CN and M8, are routinely used for intermediate-term middle-range earthquake predictions, while a third algorithm allows for the identification of the areas prone to large events. These independent procedures have been combined to better constrain the alarmed area. The pattern recognition of earthquake-prone areas does not belong to the family of earthquake prediction algorithms since it does not provide any information about the time of occurrence of the expected earthquakes. Nevertheless, it can be considered as the term-less zero-approximation, which restrains the alerted areas (e.g. defined by CN or M8) to the more precise location of large events. Italy is the only region of moderate seismic activity where the two different prediction algorithms CN and M8S (i.e. a spatially stabilized variant of M8) are applied simultaneously and a real-time test of predictions, for earthquakes with magnitude larger than 5.4, is ongoing since 2003. The application of the CN to the Adriatic region (s.l.), which is relevant
Learning FCM by chaotic simulated annealing
International Nuclear Information System (INIS)
Alizadeh, Somayeh; Ghazanfari, Mehdi
2009-01-01
Fuzzy cognitive map (FCM) is a directed graph which shows the relations between essential components in complex systems. It is a very convenient, simple, and powerful tool, used in numerous areas of application. Experts who are familiar with the system components and their relations can generate a related FCM. A significant gap arises when human experts cannot produce an FCM, or when no expert is available to produce one. Therefore, a new mechanism must be used to bridge this gap. In this paper, a novel learning method is proposed to construct FCMs using chaotic simulated annealing (CSA). The proposed method is able not only to construct the FCM graph topology but also to extract the weights of the edges from input historical data. The efficiency of the proposed method is shown via comparison of its results on some numerical examples with those of the simulated annealing (SA) method.
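A generic sketch of the idea, assuming (as is common in chaotic SA variants) that a logistic map supplies the driving sequence in place of uniform random numbers; the paper's specific CSA formulation is not reproduced here:

```python
import math

def chaotic_simulated_annealing(cost, neighbor, x0, t_start=1.0,
                                t_end=1e-3, alpha=0.95, z0=0.7):
    """Simulated annealing in which the random numbers are replaced by a
    deterministic chaotic sequence from the logistic map z <- 4z(1-z).
    This is a generic illustration of chaotic SA, not the paper's
    FCM-learning algorithm."""
    z = z0
    def chaos():
        nonlocal z
        z = 4.0 * z * (1.0 - z)   # logistic map in its chaotic regime
        return z
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t_start
    while t > t_end:
        for _ in range(50):
            y = neighbor(x, chaos())
            fy = cost(y)
            # Metropolis rule driven by chaotic, not random, numbers
            if fy <= fx or chaos() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha
    return best, fbest

# usage sketch: minimize f(x) = (x + 2)^2
best, fbest = chaotic_simulated_annealing(
    lambda x: (x + 2) ** 2,
    lambda x, u: x + (u - 0.5),
    x0=0.0)
```

Being fully deterministic, such a run is exactly reproducible, which is one practical appeal of chaotic over stochastic annealing.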
Annealing texture of rolled nickel alloys
International Nuclear Information System (INIS)
Meshchaninov, I.V.; Khayutin, S.G.
1976-01-01
The texture of pure nickel and binary alloys after 95% rolling and annealing has been studied. Insoluble additives (Mg, Zr) weaken the cubic texture in nickel, with a general weakening of the texture for Zr. In the case of alloying with silicon (up to 2%) the texture practically coincides with that of technical-grade nickel. The remaining soluble additives either do not change the texture of pure nickel (C, Nb) or enhance the sharpness and intensity of the cubic component (Al, Cu, Mn, Cr, Mo, W, Co at contents of 0.5 to 2.0%). A model is proposed by which the variation of the annealing texture upon alloying is caused by the dissimilar effects of the alloying elements on the mobility of high- and low-angle grain boundaries
Pyrolytic citrate synthesis and ozone annealing
International Nuclear Information System (INIS)
Celani, F.; Saggese, A.; Giovannella, C.; Messi, R.; Merlo, V.
1988-01-01
A pyrolytic procedure is described that, via a citrate synthesis, allowed us to obtain very fine-grained YBCO powders which, after a first furnace thermal treatment in ozone, already contain a large amount of superconducting microcrystals. A second identical thermal treatment yields a strongly textured final product, as shown by magnetic torque measurements. Complementary structural and diamagnetic measurements show the high quality of these sintered pellets. The roles played by both the pyrolytic preparation and the ozone annealing are discussed
Laser annealing of ion implanted silicon
International Nuclear Information System (INIS)
White, C.W.; Appleton, B.R.; Wilson, S.R.
1980-01-01
Pulsed laser annealing of ion implanted silicon leads to the formation of supersaturated alloys by nonequilibrium crystal growth processes at the interface occurring during liquid phase epitaxial regrowth. The interfacial distribution coefficients from the melt (k') and the maximum substitutional solubilities (Cs^max) are far greater than equilibrium values. Both k' and Cs^max are functions of growth velocity. Mechanisms limiting substitutional solubilities are discussed. 5 figures, 2 tables
Simulated annealing algorithm for optimal capital growth
Luo, Yong; Zhu, Bo; Tang, Yong
2014-08-01
We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates the application of a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
A deterministic seismic hazard map of India and adjacent areas
International Nuclear Information System (INIS)
Parvez, Imtiyaz A.; Vaccari, Franco; Panza, Giuliano
2001-09-01
A seismic hazard map of the territory of India and adjacent areas has been prepared using a deterministic approach based on the computation of synthetic seismograms complete with all main phases. The input data set consists of structural models, seismogenic zones, focal mechanisms and an earthquake catalogue. The synthetic seismograms have been generated by the modal summation technique. The seismic hazard, expressed in terms of maximum displacement (DMAX), maximum velocity (VMAX), and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid of 0.2 deg. x 0.2 deg. over the studied territory. The estimated values of the peak ground acceleration are compared with the observed data available for the Himalayan region and found to be in good agreement. Many parts of the Himalayan region have DGA values exceeding 0.6 g. The epicentral areas of the great Assam earthquakes of 1897 and 1950 represent the maximum hazard, with DGA values reaching 1.2-1.3 g. (author)
Deterministic and fuzzy-based methods to evaluate community resilience
Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo
2018-04-01
Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each dimension is described through a set of resilience indicators collected from the literature, each linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data, while accounting for the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open-source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided.
Entrepreneurs, Chance, and the Deterministic Concentration of Wealth
Fargione, Joseph E.; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs, individuals with ownership in for-profit enterprises, comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to the inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
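The core mechanism, chance plus compounding returns, can be illustrated with a few lines of simulation. The parameter values below are arbitrary assumptions for illustration, not those of the paper's model:

```python
import random

def simulate_wealth(n=1000, years=200, mu=0.05, sigma=0.2, seed=1):
    """Individual-based sketch: each entrepreneur's wealth compounds by
    an independently drawn random return each year. Chance plus
    compounding alone concentrates wealth. Returns the share of total
    wealth held by the top 1% of entrepreneurs."""
    rng = random.Random(seed)
    wealth = [1.0] * n                      # everyone starts equal
    for _ in range(years):
        # multiplicative growth; returns below -100% floor wealth at zero
        wealth = [w * max(0.0, 1.0 + rng.gauss(mu, sigma)) for w in wealth]
    wealth.sort(reverse=True)
    return sum(wealth[: n // 100]) / sum(wealth)

share = simulate_wealth()
```

Because log-wealth performs a random walk with drift, its variance grows linearly in time, so the top percentile's share keeps rising even though every entrepreneur faces identical return statistics.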
Deterministic methods for multi-control fuel loading optimization
Rahman, Fariz B. Abdul
We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.
Deterministic and Probabilistic Analysis against Anticipated Transient Without Scram
International Nuclear Information System (INIS)
Choi, Sun Mi; Kim, Ji Hwan; Seok, Ho
2016-01-01
An Anticipated Transient Without Scram (ATWS) is an Anticipated Operational Occurrence (AOO) accompanied by a failure of the reactor trip when required. By a suitable combination of inherent characteristics and diverse systems, the reactor design needs to reduce the probability of an ATWS and, should one occur, to limit any core damage and prevent loss of integrity of the reactor coolant pressure boundary. This study focuses on the deterministic analysis of ATWS events with respect to Reactor Coolant System (RCS) over-pressure and fuel integrity for the EU-APR. Additionally, this report presents the Probabilistic Safety Assessment (PSA) reflecting those diverse systems. The analysis performed for the ATWS event indicates that the NSSS could be brought to a controlled and safe state through the addition of boron into the core via the EBS pump flow upon the EBAS by DPS. Decay heat is removed through the MSADVs and the auxiliary feedwater. During the ATWS event, the RCS pressure boundary is maintained by the operation of primary and secondary safety valves. Consequently, the acceptance criteria were satisfied by installing the DPS and EBS in addition to the inherent safety characteristics
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability that the event is related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances of medical decision making. While 'probabilistic or evidence-based' reasoning seems to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics than the 'deterministic or mathematical attitude'. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and utilization of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests for refining probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
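The likelihood-ratio update mentioned above is a one-line computation:

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Bayesian update at the heart of evidence-based diagnosis:
    post-test odds = pre-test odds x likelihood ratio,
    then convert the odds back to a probability."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# e.g. 20% pre-test probability and a positive test with LR+ = 10
p = post_test_probability(0.20, 10.0)
```

Applying a series of tests amounts to repeating this update, feeding each post-test probability in as the next pre-test probability (assuming the tests are conditionally independent).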
Method to deterministically study photonic nanostructures in different experimental instruments.
Husken, B H; Woldering, L A; Blum, C; Vos, W L
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim of studying photonic structures. To this end, a detailed map of the spatial surroundings of the nanostructure is made during fabrication of the structure. These maps are made using a series of micrographs with successively decreasing magnifications. The micrographs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation, a scanning electron microscope (SEM), a wide-field optical microscope, and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM in a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, such as samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.
Prospects in deterministic three dimensional whole-core transport calculations
International Nuclear Information System (INIS)
Sanchez, Richard
2012-01-01
The point we make in this paper is that, although detailed and precise three-dimensional (3D) whole-core transport calculations may be obtained in the future with massively parallel computers, they would apply to only some of the problems of the nuclear industry, more precisely those regarding multiphysics, methodology validation, or nuclear safety calculations. On the other hand, typical reactor design cycle calculations, comprising many one-point core calculations, have very strict constraints on computing time and will not directly benefit from advances in large-scale computing. Consequently, in this paper we review some of the deterministic 3D transport methods which may have potential for industrial applications in the very near future and which, even with low-order approximations such as a low resolution in energy, might represent an advantage over present industrial methodology, one of whose main approximations is due to power reconstruction. These methods comprise the response-matrix method and methods based on the two-dimensional (2D) method of characteristics, such as the fusion method.
Conversion of dependability deterministic requirements into probabilistic requirements
International Nuclear Information System (INIS)
Bourgade, E.; Le, P.
1993-02-01
This report concerns the on-going survey conducted jointly by the DAM/CCE and NRE/SR branches on the inclusion of dependability requirements in control and instrumentation projects. Its purpose is to enable a customer (the prime contractor) to convert into probabilistic terms dependability requirements expressed deterministically in the form ''a maximum permissible number of failures, of maximum duration d, in a period t''. The customer selects a confidence level for each previously defined undesirable event by assigning it a maximum probability of occurrence. Using the formulae we propose for two repair policies - constant rate or constant time - these probabilistic requirements can then be transformed into equivalent failure rates. It is shown that the same formula can be used for both policies, provided certain realistic assumptions are confirmed, and that for a constant-time repair policy the correct result can always be obtained. The equivalent failure rates thus determined can be included in the specifications supplied to the contractors, who will then be able to proceed to their previsional justification. (author), 8 refs., 3 annexes
Fisher-Wright model with deterministic seed bank and selection.
Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel
2017-04-01
Seed banks are common to many plant species; they allow the storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection, coupled with a deterministic seed bank, assuming that the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path for approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached.
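The population model described above can be sketched as a toy forward simulation: offspring are drawn from allele frequencies averaged over the last m generations (the deterministic seed bank), with selection biasing the draw. The parameters N, s, and m are invented for illustration; this is not the diffusion approximation derived in the paper:

```python
import random

def wf_seedbank_step(freq_history, N, s, m):
    """One Fisher-Wright generation with selection coefficient s and a
    deterministic seed bank: sample from the mean allele frequency of
    the last m generations (dormant seeds), biased by selection."""
    x = sum(freq_history[-m:]) / min(m, len(freq_history))
    # selection favours the focal allele by a factor (1 + s)
    x_sel = x * (1 + s) / (x * (1 + s) + (1 - x))
    k = sum(1 for _ in range(N) if random.random() < x_sel)
    return k / N

random.seed(1)
history = [0.5]
for _ in range(200):
    history.append(wf_seedbank_step(history, N=500, s=0.02, m=5))
```

Sampling from older generations damps allele-frequency fluctuations, which is the buffering effect the abstract attributes to seed banks.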
Deterministic network interdiction optimization via an evolutionary approach
International Nuclear Information System (INIS)
Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel
2009-01-01
This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with its interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies; (2) the Ford-Fulkerson algorithm for maximum s-t flow, to analyze each strategy's maximum source-sink flow; and (3) an evolutionary optimization technique, to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective on network interdiction, so that the solutions developed address more realistic scenarios of the problem.
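Step (2) above, scoring a candidate interdiction strategy by the residual maximum s-t flow, might look like the following. This is a generic Edmonds-Karp implementation of Ford-Fulkerson, not the authors' code, and the example network is invented:

```python
from collections import deque, defaultdict

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum s-t flow; capacity maps (u, v) -> cap."""
    flow = defaultdict(int)
    adj = defaultdict(set)
    for u, v in capacity:
        adj[u].add(v); adj[v].add(u)   # include reverse residual edges
    total = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:   # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and capacity.get((u, v), 0) - flow[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        aug = min(capacity.get(e, 0) - flow[e] for e in path)  # bottleneck
        for u, v in path:
            flow[(u, v)] += aug
            flow[(v, u)] -= aug
        total += aug

def interdicted_flow(capacity, s, t, removed):
    """Residual max flow after interdicting (removing) a set of links."""
    kept = {e: c for e, c in capacity.items() if e not in removed}
    return max_flow(kept, s, t)

# invented 4-node example: baseline flow is 7; interdicting (s, b) drops it to 4
cap = {('s', 'a'): 4, ('s', 'b'): 3, ('a', 't'): 2, ('b', 't'): 5, ('a', 'b'): 2}
baseline = max_flow(cap, 's', 't')
after = interdicted_flow(cap, 's', 't', removed={('s', 'b')})
```

An evolutionary search would call `interdicted_flow` as the fitness of each candidate `removed` set, subject to the interdiction budget.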
Is there a sharp phase transition for deterministic cellular automata?
International Nuclear Information System (INIS)
Wootters, W.K.
1990-01-01
Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider. 18 refs., 6 figs
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive pairwise comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
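For contrast with the paper's method, the naive O(n²) partition-refinement baseline it improves on can be sketched as follows (Moore-style refinement; the example DFA is invented):

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Naive Moore-style partition refinement; delta maps
    (state, symbol) -> state and must be total."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [block for block in partition if block]
    changed = True
    while changed:
        changed = False
        refined = []
        for block in partition:
            groups = {}
            for q in block:
                # signature: which current block each symbol leads to
                sig = tuple(next(i for i, b in enumerate(partition)
                                 if delta[(q, a)] in b)
                            for a in alphabet)
                groups.setdefault(sig, set()).add(q)
            refined.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = refined
    return partition

# invented 3-state DFA over {a, b}: q1 and q2 are equivalent accepting states
delta = {('q0', 'a'): 'q0', ('q0', 'b'): 'q1',
         ('q1', 'a'): 'q0', ('q1', 'b'): 'q2',
         ('q2', 'a'): 'q0', ('q2', 'b'): 'q2'}
blocks = minimize_dfa(['q0', 'q1', 'q2'], ['a', 'b'], delta, ['q1', 'q2'])
```

Each equivalence class in the final partition becomes one state of the minimal DFA; the paper's contribution is to do most of this splitting up front via backward depth information.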
Deterministic and Probabilistic Analysis against Anticipated Transient Without Scram
Energy Technology Data Exchange (ETDEWEB)
Choi, Sun Mi; Kim, Ji Hwan [KHNP Central Research Institute, Daejeon (Korea, Republic of); Seok, Ho [KEPCO Engineering and Construction, Daejeon (Korea, Republic of)
2016-10-15
An Anticipated Transient Without Scram (ATWS) is an Anticipated Operational Occurrence (AOO) accompanied by a failure of the reactor trip when required. Through a suitable combination of inherent characteristics and diverse systems, the reactor design needs to reduce the probability of an ATWS, to limit any core damage, and to prevent loss of integrity of the reactor coolant pressure boundary if one occurs. This study focuses on the deterministic analysis of ATWS events with respect to Reactor Coolant System (RCS) over-pressure and fuel integrity for the EU-APR. Additionally, this report presents the Probabilistic Safety Assessment (PSA) reflecting those diverse systems. The analysis performed for the ATWS event indicates that the NSSS can reach a controlled, safe state owing to the addition of boron into the core via the EBS pump flow upon the EBAS generated by the DPS. Decay heat is removed through the MSADVs and the auxiliary feedwater. During the ATWS event, the RCS pressure boundary is maintained by the operation of the primary and secondary safety valves. Consequently, the acceptance criteria were satisfied by installing the DPS and EBS in addition to the inherent safety characteristics.
Rapid detection of small oscillation faults via deterministic learning.
Wang, Cong; Chen, Tianrui
2011-08-01
Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally accurately approximated through DL. The obtained knowledge of the system dynamics is stored in constant radial basis function (RBF) networks. In the test (diagnosis) phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing the set of estimators with the monitored system under test, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest-residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that the modeling uncertainty and nonlinear fault functions are accurately approximated, and this knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.
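The smallest-residual principle in the test phase reduces to comparing average L1 residuals across a bank of stored models. A minimal sketch, with invented one-step predictors standing in for the trained RBF networks:

```python
def detect_mode(signal, model_bank):
    """Return the mode whose one-step predictions give the smallest
    average L1 residual against the monitored signal."""
    def avg_l1(predict):
        residuals = [abs(predict(x) - x_next)
                     for x, x_next in zip(signal, signal[1:])]
        return sum(residuals) / len(residuals)
    return min(model_bank, key=lambda mode: avg_l1(model_bank[mode]))

# monitored data generated by x_{k+1} = 0.9 x_k (the 'normal' dynamics)
signal = [0.9 ** k for k in range(20)]
mode = detect_mode(signal, {'normal': lambda x: 0.9 * x,
                            'fault': lambda x: 0.5 * x})
```

In the paper the predictors are constant RBF networks learned via DL rather than hand-written lambdas; the comparison logic is the same.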
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time the stochastic features of complex networks are captured by randomly initializing the ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
International Nuclear Information System (INIS)
Quadri, Mohammad I.; Al-Sheikh, Iman H.
2001-01-01
Hairy cell leukaemia variant is a very rare chronic lymphoproliferative disorder closely related to hairy cell leukaemia. We hereby describe a case of hairy cell leukaemia variant for the first time in Saudi Arabia. An elderly Saudi man presented with pallor, massive splenomegaly, and moderate hepatomegaly. Hemoglobin was 7.7 g/dl, the platelet count was 134 x 10⁹/l, and the white blood cell count was 140 x 10⁹/l, with 97% abnormal lymphoid cells with cytoplasmic projections. The morphology, cytochemistry, and immunophenotype of the lymphoid cells were classical of hairy cell leukaemia variant. The bone marrow was easily aspirated and the findings were consistent with hairy cell leukaemia variant. (author)
Annealing effects in solid-state track recorders
International Nuclear Information System (INIS)
Gold, R.; Roberts, J.H.; Ruddy, F.H.
1981-01-01
Current analyses of the annealing process in Solid State Track Recorders (SSTR) reveal fundamental misconceptions. The use of the Arrhenius equation to describe the decrease in track density resulting from annealing is shown to be incorrect. To overcome these deficiencies, generalized reaction rate theory is used to describe the annealing process in SSTR. Results of annealing experiments are used to guide this theoretical formulation. Within this framework, the concept of energy per etchable defect for SSTR is introduced. A general correlation between sensitivity and annealing susceptibility in SSTR is deduced. In terms of this general theory, the apparent correlation between fission track size and fission track density observed under annealing is readily explained. Based on this theoretical treatment of annealing phenomena, qualitative explanations are advanced for current enigmas in SSTR cosmic ray work
Product Variant Master as a Means to Handle Variant Design
DEFF Research Database (Denmark)
Hildre, Hans Petter; Mortensen, Niels Henrik; Andreasen, Mogens Myrup
1996-01-01
be implemented in the CAD system I-DEAS. A precondition for a high degree of computer support is the identification of a product variant master from which new variants can be derived. This class platform defines how a product build-up fits certain production methods and the rules governing the determination of modules...
Directory of Open Access Journals (Sweden)
Seyed Jalal Younesi
2015-06-01
Full Text Available Objective: The current research investigates the relation between deterministic thinking and mental health among drug abusers, in which the role of cognitive distortions is considered and clarified by focusing on deterministic thinking. Methods: The present study is descriptive and correlational. All individuals with experience of drug abuse who had been referred to the Shafagh Rehabilitation center (Kahrizak) were considered as the statistical population. 110 individuals addicted to drugs (stimulants and methamphetamine) were selected from this population by purposeful sampling to answer questionnaires on deterministic thinking and general health. For data analysis, Pearson correlation coefficients and regression analysis were used. Results: The results showed a positive and significant relationship between deterministic thinking and the lack of mental health (r = 0.22, P < 0.05); among the factors of mental health, anxiety and depression had the closest relation to deterministic thinking. The two factors of deterministic thinking that function as the strongest predictors of the lack of mental health are definitiveness in predicting tragic events and future anticipation. Discussion: It seems that drug abusers resort to deterministic thinking when confronted with difficult situations, so they are more affected by depression and anxiety. This way of thinking may play a major role in impelling or restraining drug addiction.
Deterministic one-way simulation of two-way, real-time cellular automata and its related problems
Energy Technology Data Exchange (ETDEWEB)
Umeo, H; Morita, K; Sugata, K
1982-06-13
The authors show that for any deterministic two-way, real-time cellular automaton, m, there exists a deterministic one-way cellular automaton which can simulate m in twice real time. Moreover, the authors present a new type of deterministic one-way cellular automata, called circular cellular automata, which are computationally equivalent to deterministic two-way cellular automata. 7 references.
Anti-deterministic behaviour of discrete systems that are less predictable than noise
Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.
2005-05-01
We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in recurrence plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions, and although some properties of AD data are similar to those of white noise, the AD dynamics is in fact less predictable than noise and hence differs from pseudo-random number generators.
Quantum deterministic key distribution protocols based on the authenticated entanglement channel
International Nuclear Information System (INIS)
Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua
2010-01-01
Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.
Deterministic and heuristic models of forecasting spare parts demand
Directory of Open Access Journals (Sweden)
Ivan S. Milojević
2012-04-01
Full Text Available Knowing the demand for spare parts is the basis for successful spare parts inventory management. Inventory management has two aspects. The first is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or have not been encompassed by the models. The second aspect is optimization of the model parameters by means of inventory management. Supply item demand (asset demand) is the expression of customers' needs in units in the desired time, and it is one of the most important parameters in inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through requisition or request. Given the conditions in which inventory management is considered, demand can be deterministic or stochastic, stationary or nonstationary, continuous or discrete, and satisfied or unsatisfied. The application of a maintenance concept is determined by the technological level of development of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance can be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to those vehicles that only have indicators of engine temperature - those that react only when the engine is overheated. This means that the maintenance concepts that can be applied are traditional preventive maintenance and corrective maintenance. In order to be applied in a real system, modeling and simulation methods require a completely regulated system, and that is not the case with this spare parts supply system. Therefore, this method, which also enables model development, cannot be applied. Deterministic models of forecasting are almost exclusively related to the concept of preventive maintenance. Maintenance procedures are planned in advance, in accordance with exploitation and time resources. Since the timing
Activity modes selection for project crashing through deterministic simulation
Directory of Open Access Journals (Sweden)
Ashok Mohanty
2011-12-01
Full Text Available Purpose: The time-cost trade-off problem addressed by CPM-based analytical approaches assumes unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, activities can often be crashed only stepwise. Activity crashing for a discrete time-cost function is also known as the activity modes selection problem in project management. This problem is known to be NP-hard. Sophisticated optimization techniques such as Dynamic Programming, Integer Programming, Genetic Algorithms, and Ant Colony Optimization have been used to find efficient solutions to the activity modes selection problem. The paper presents a simple method that can provide an efficient solution to the activity modes selection problem for project crashing. Design/methodology/approach: A simulation-based method implemented on an electronic spreadsheet to determine activity modes for project crashing. The method is illustrated with the help of an example. Findings: The paper shows that a simple approach based on a simple heuristic and deterministic simulation can give good results comparable to sophisticated optimization techniques. Research limitations/implications: The simulation-based crashing method presented in this paper is developed to return satisfactory solutions but not necessarily an optimal solution. Practical implications: The use of spreadsheets for solving Management Science and Operations Research problems makes the techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions and report generation, and allow users to perform what-if analysis. Originality/value: The paper presents the application of simulation implemented on a spreadsheet to determine an efficient solution to the discrete time-cost tradeoff problem.
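The discrete time-cost trade-off behind activity modes selection can be stated compactly. For a small serial project, a brute-force enumeration of mode combinations gives the exact baseline that a heuristic like the paper's would approximate; the mode data below are invented:

```python
from itertools import product

def crash_project(activities, deadline):
    """Pick one (duration, cost) mode per activity of a serial project so
    that total duration meets the deadline at minimum total cost.
    activities: list of per-activity mode lists [(duration, cost), ...]."""
    best = None
    for modes in product(*[range(len(a)) for a in activities]):
        dur = sum(activities[i][m][0] for i, m in enumerate(modes))
        cost = sum(activities[i][m][1] for i, m in enumerate(modes))
        if dur <= deadline and (best is None or cost < best[1]):
            best = (modes, cost)
    return best

# three activities, each with a normal mode and a crashed mode
activities = [[(4, 100), (2, 180)],
              [(3, 80), (1, 160)],
              [(5, 120), (3, 200)]]
selection, total_cost = crash_project(activities, deadline=9)
```

The search space grows exponentially with the number of activities (hence NP-hardness for general networks), which is why the paper trades optimality for a cheap spreadsheet heuristic.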
Radiation damage annealing mechanisms and possible low temperature annealing in silicon solar cells
Weinberg, I.; Swartz, C. K.
1980-01-01
Deep level transient spectroscopy and the Shockley-Read-Hall recombination theory are used to identify the defect responsible for reverse annealing in 2 ohm-cm n+/p silicon solar cells. This defect, with energy level at Ev + 0.30 eV, has been tentatively identified as a boron-oxygen-vacancy complex. It has been also determined by calculation that the removal of this defect could result in significant annealing at temperatures as low as 200 C for 2 ohm-cm and lower resistivity cells.
Hierarchical Network Design Using Simulated Annealing
DEFF Research Database (Denmark)
Thomadsen, Tommy; Clausen, Jens
2002-01-01
networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which as a sub-algorithm uses a construction algorithm to determine edges and route the demand. Performance of different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes....
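The heuristic named above follows the standard simulated-annealing skeleton. A generic sketch is shown here on a toy one-dimensional objective; for the hierarchical network problem, the cost and neighbour functions would wrap the construction/routing sub-algorithm, which is not reproduced:

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=20000):
    """Generic simulated annealing with a geometric cooling schedule."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        fy = cost(y)
        # always accept improvements; accept uphill moves with Boltzmann probability
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# toy usage: minimise (x - 3)^2 with a Gaussian neighbourhood move
random.seed(0)
best, fbest = simulated_annealing(lambda x: (x - 3.0) ** 2,
                                  lambda x: x + random.gauss(0.0, 0.5),
                                  x0=0.0)
```

The cooling rate and step budget trade solution quality against runtime, which is the runtime/quality trade-off the abstract reports.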
Annealing relaxation of ultrasmall gold nanostructures
Chaban, Vitaly
2015-01-01
Besides serving as an excellent gift on proper occasions, gold finds applications in the life sciences, particularly in diagnostics and therapeutics. These applications were made possible by gold nanoparticles, which differ drastically from macroscopic gold. The versatile surface chemistry of gold nanoparticles allows coating with small molecules, polymers, and biological recognition molecules. Theoretical investigation of nanoscale gold is not trivial because of the numerous metastable states in these systems. Unlike previous studies, this work obtains equilibrium structures using annealing simulations within the recently introduced PM7-MD method. Geometries of ultrasmall gold nanostructures with chalcogen coverage are described at finite temperature for the first time.
Binary Sparse Phase Retrieval via Simulated Annealing
Directory of Open Access Journals (Sweden)
Wei Peng
2016-01-01
Full Text Available This paper presents the Simulated Annealing Sparse PhAse Recovery (SASPAR) algorithm for reconstructing sparse binary signals from the phaseless magnitudes of their Fourier transforms. A greedy-strategy version, which is a parameter-free algorithm, is also proposed for comparison. Numerical simulations indicate that our method is quite effective and suggest that the binary model is robust. The SASPAR algorithm is competitive with existing methods in efficiency and recovery rate, even with fewer Fourier measurements.
Study of rolled uranium annealing process
International Nuclear Information System (INIS)
Cabane, G.
1954-06-01
The dilatometric study of rolled uranium clearly shows not only the expansions or contractions induced by stress relief or the diffusion of vacancies, but also the slope variations of the cooling curves, which are the best evidence of a texture change. Under the microscope, hard-rolled sheets appear as a mixture of two distinct structures; it is also possible, by intermediate annealing, to prepare homogeneous sheets of either structure, i.e. twinned or untwinned. All these sheets, which have similar textures, first undergo a primary recrystallization beginning at 320 deg C, then a texture change without any apparent crystal growth at about 430 deg C. (author) [fr]
Simulated annealing for tensor network states
International Nuclear Information System (INIS)
Iblisdir, S
2014-01-01
Markov chains for probability distributions related to matrix product states and one-dimensional Hamiltonians are introduced. With appropriate ‘inverse temperature’ schedules, these chains can be combined into a simulated annealing scheme for ground states of such Hamiltonians. Numerical experiments suggest that a linear, i.e., fast, schedule is possible in non-trivial cases. A natural extension of these chains to two-dimensional settings is next presented and tested. The obtained results compare well with Euclidean evolution. The proposed Markov chains are easy to implement and are inherently sign problem free (even for fermionic degrees of freedom). (paper)
Fourier-transforming with quantum annealers
Directory of Open Access Journals (Sweden)
Itay eHen
2014-07-01
Full Text Available We introduce a set of quantum adiabatic evolutions that we argue may be used as 'building blocks', or subroutines, in the construction of an adiabatic algorithm that executes the Quantum Fourier Transform (QFT) with the same complexity and resources as its gate-model counterpart. One implication of this construction is the theoretical feasibility of implementing Shor's algorithm for integer factorization in an optimal manner, and of any other algorithm that makes use of the QFT, on quantum annealing devices. We discuss the possible advantages and limitations of the proposed approach, as well as its relation to traditional adiabatic quantum computation.
International Nuclear Information System (INIS)
Chen, Chang-Kuo; Hou, Yi-You; Luo, Cheng-Long
2012-01-01
Highlights: ► An efficient design procedure for deterministic response time design of nuclear I and C systems. ► We model the concurrent operations based on sequence diagrams and Petri nets. ► The model can achieve deterministic behavior by using symbolic time representation. ► An illustrative example of the bistable processor logic is given. - Abstract: This study is concerned with deterministic response time design for computer-based systems in the nuclear industry. In the current approach, Petri nets are used to model the requirements of a system specified with sequence diagrams. In addition, linear logic is proposed to characterize state changes in the Petri net model accurately, using symbolic time representation for the purpose of acquiring deterministic behavior. An illustrative example of the bistable processor logic is provided to demonstrate the practicability of the proposed approach.
Recent achievements of the neo-deterministic seismic hazard assessment in the CEI region
International Nuclear Information System (INIS)
Panza, G.F.; Vaccari, F.; Kouteva, M.
2008-03-01
A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales - regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contributions of the earthquake source and of the seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of selected metropolitan areas, are shown. (author)
Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data
U.S. Environmental Protection Agency — This dataset documents the source of the data analyzed in the manuscript " Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII...
Implemented state automorphisms within the logico-algebraic approach to deterministic mechanics
Energy Technology Data Exchange (ETDEWEB)
Barone, F [Naples Univ. (Italy). Ist. di Matematica della Facolta di Scienze
1981-01-31
The new notion of an S₁-implemented state automorphism is introduced and characterized in quantum logic. Implemented pure state automorphisms are then characterized in deterministic mechanics as automorphisms of the Borel structure on the phase space.
International Nuclear Information System (INIS)
Azadeh, A.; Ghaderi, S.F.; Omrani, H.
2009-01-01
This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach, since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA, Spearman and Kendall's Tau correlation techniques, whereas previous studies lacked these verification and validation features. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for the assessment and optimization of power distribution units in Iran.
Daciuk, J; Champarnaud, JM; Maurel, D
2003-01-01
This paper compares various methods for constructing minimal, deterministic, acyclic, finite-state automata (recognizers) from sets of words. Incremental, semi-incremental, and non-incremental methods have been implemented and evaluated.
Handbook of EOQ inventory problems stochastic and deterministic models and applications
Choi, Tsan-Ming
2013-01-01
This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
National Research Council Canada - National Science Library
Michalowicz, Joseph V; Nichols, Jonathan M; Bucholtz, Frank
2008-01-01
Understanding the limitations to detecting deterministic signals in the presence of noise, especially additive, white Gaussian noise, is of importance for the design of LPI systems and anti-LPI signal defense...
DEFF Research Database (Denmark)
Sousa, Tiago; Vale, Zita; Carvalho, Joao Paulo
2014-01-01
The massification of electric vehicles (EVs) can have a significant impact on the power system, requiring a new approach for the energy resource management. The energy resource management has the objective to obtain the optimal scheduling of the available resources considering distributed...... to determine the best solution in a reasonable amount of time. This paper presents a hybrid artificial intelligence technique to solve a complex energy resource management problem with a large number of resources, including EVs, connected to the electric network. The hybrid approach combines simulated...... annealing (SA) and ant colony optimization (ACO) techniques. The case study concerns different EVs penetration levels. Comparisons with a previous SA approach and a deterministic technique are also presented. For 2000 EVs scenario, the proposed hybrid approach found a solution better than the previous SA...
A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting
Raboudi, Naila
2016-11-01
The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called the analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, which is then used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles and often poorly known model error statistics, leading to a crude approximation of the forecast covariance. This strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information to the estimation process in order to mitigate the sub-optimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background. In this work, we propose a deterministic variant of the EnKF-OSA, based on the Singular Evolutive Interpolated Ensemble Kalman (SEIK) filter. The motivation behind this is to avoid the observation perturbations of the EnKF in order to improve the scheme's behavior when assimilating big data sets with small ensembles. The new SEIK-OSA scheme is implemented and its efficiency is demonstrated.
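The two-fold use of the observation described above can be sketched for a scalar state with a stochastic EnKF update. This is a toy illustration only: the function names and the linear toy model are hypothetical, and the paper's SEIK-OSA variant specifically avoids the observation perturbations used here.

```python
import numpy as np

def enkf_osa_cycle(prev_analysis, y, obs_var, model, rng):
    """One one-step-ahead-smoothing EnKF cycle for a scalar state.

    The observation y is used twice: first to smooth the previous
    ensemble, then to update the 'pseudo forecast' obtained by
    propagating the smoothed ensemble through the model.
    """
    n = prev_analysis.size
    forecast = model(prev_analysis)          # propagate each member
    innovations = y - forecast               # observation-space misfits

    # 1) Smoothing: correct the *previous* ensemble with the incoming
    #    observation via its cross-covariance with the forecast.
    s = np.var(forecast, ddof=1) + obs_var
    cross = np.cov(prev_analysis, forecast, ddof=1)[0, 1]
    noise = rng.normal(0.0, np.sqrt(obs_var), n)
    smoothed = prev_analysis + (cross / s) * (innovations + noise)

    # 2) Pseudo forecast from the smoothed ensemble, then a second
    #    Kalman-like correction with the *same* observation.
    pseudo = model(smoothed)
    gain = np.var(pseudo, ddof=1) / (np.var(pseudo, ddof=1) + obs_var)
    perturbed_obs = y + rng.normal(0.0, np.sqrt(obs_var), n)
    return pseudo + gain * (perturbed_obs - pseudo)
```

With a damped linear model such as `model = lambda x: 0.9 * x`, the analysis ensemble mean ends up closer to the observation than the plain forecast mean, which is the improved-background effect the OSA strategy aims for.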
Annealing-induced Ge/Si(100) island evolution
International Nuclear Information System (INIS)
Zhang Yangting; Drucker, Jeff
2003-01-01
Ge/Si(100) islands were found to coarsen during in situ annealing at growth temperature. Islands were grown by molecular-beam epitaxy of pure Ge and annealed at substrate temperatures of T=450, 550, 600, and 650 deg. C, with Ge coverages of 6.5, 8.0, and 9.5 monolayers. Three coarsening mechanisms operate in this temperature range: wetting-layer consumption, conventional Ostwald ripening, and Si interdiffusion. For samples grown and annealed at T=450 deg. C, consumption of a metastably thick wetting layer causes rapid initial coarsening. Slower coarsening at longer annealing times occurs by conventional Ostwald ripening. Coarsening of samples grown and annealed at T=550 deg. C occurs via a combination of Si interdiffusion and conventional Ostwald ripening. For samples grown and annealed at T≥600 deg. C, Ostwald ripening of SiGe alloy clusters appears to be the dominant coarsening mechanism
Study on thermal annealing of cadmium zinc telluride (CZT) crystals
International Nuclear Information System (INIS)
Yang, G.; Bolotnikov, A.E.; Fochuk, P.M.; Camarda, G.S.; Cui, Y.; Hossain, A.; Kim, K.; Horace, J.; McCall, B.; Gul, R.; Xu, L.; Kopach, O.V.; James, R.B.
2010-01-01
Cadmium Zinc Telluride (CZT) has attracted increasing interest owing to its promising potential as a room-temperature nuclear-radiation-detector material. However, different defects in CZT crystals, especially Te inclusions and dislocations, can degrade the performance of CZT detectors. Post-growth annealing is potentially a good approach to eliminating the deleterious influence of these defects. At Brookhaven National Laboratory (BNL), we have built several facilities for investigating the post-growth annealing of CZT. Here, we report our latest experimental results. Cd-vapor annealing reduces the density of Te inclusions, while a large temperature gradient promotes the migration of small-size Te inclusions. Simultaneously, the annealing lowers the density of dislocations. However, Cd-vapor annealing alone decreases the resistivity, possibly reflecting the introduction of extra Cd into the lattice. Subsequent Te-vapor annealing is needed to ensure the recovery of the resistivity after removing the Te inclusions.
Preparation and Thermal Characterization of Annealed Gold Coated Porous Silicon
Directory of Open Access Journals (Sweden)
Afarin Bahrami
2012-01-01
Full Text Available Porous silicon (PSi) layers were formed on a p-type Si wafer. Six samples were anodised electrically with a 30 mA/cm² fixed current density for different etching times. The samples were coated with a 50-60 nm gold layer and annealed at different temperatures under Ar flow. The morphology of the layers formed by this method was investigated before and after annealing by scanning electron microscopy (SEM). Photoacoustic spectroscopy (PAS) measurements were carried out to measure the thermal diffusivity (TD) of the PSi and Au/PSi samples. For the Au/PSi samples, the thermal diffusivity was measured before and after annealing to study the effect of annealing. Also, to study the aging effect, a comparison was made between freshly annealed samples and samples 30 days after annealing.
Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992
Energy Technology Data Exchange (ETDEWEB)
Rice, A.F.; Roussin, R.W. [eds.]
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Phase conjugation with random fields and with deterministic and random scatterers
International Nuclear Information System (INIS)
Gbur, G.; Wolf, E.
1999-01-01
The theory of distortion correction by phase conjugation, developed since the discovery of this phenomenon many years ago, applies to situations when the field that is conjugated is monochromatic and the medium with which it interacts is deterministic. In this Letter a generalization of the theory is presented that applies to phase conjugation of partially coherent waves interacting with either deterministic or random weakly scattering nonabsorbing media. copyright 1999 Optical Society of America
Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads
Directory of Open Access Journals (Sweden)
Králik Juraj
2014-12-01
Full Text Available This paper presents experiences from the deterministic and probabilistic analysis of the reliability of a communication bridge structure's resistance to extreme loads - wind and earthquake. The efficiency of the bracing systems is considered on the example of the steel bridge between two NPP buildings. The advantages and disadvantages of the deterministic and probabilistic analysis of the structure resistance are discussed. The advantages of utilizing the LHS method to analyze the safety and reliability of structures are presented
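The LHS (Latin hypercube sampling) method referred to above can be illustrated with a minimal sampler on the unit hypercube; this is a generic sketch, not the authors' implementation, and the function name is hypothetical. Each of the n equal-width strata in every dimension receives exactly one sample point, which is what gives LHS its variance-reduction advantage over plain Monte Carlo in reliability analysis.

```python
import random

def latin_hypercube(n, dims, rng=None):
    """Minimal Latin hypercube sample of n points in [0, 1)^dims:
    in every dimension, each of the n equal-width strata contains
    exactly one point."""
    rng = rng or random.Random(0)
    columns = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                    # random pairing of strata across dims
        columns.append([(s + rng.random()) / n for s in strata])
    return list(zip(*columns))                 # n points, each of length dims
```

Scaling each coordinate to a load parameter's range (e.g. wind speed, peak ground acceleration) then yields the stratified load samples fed into the structural reliability model.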
Deterministic Modeling of the High Temperature Test Reactor
International Nuclear Information System (INIS)
Ortensi, J.; Cogliati, J.J.; Pope, M.A.; Ferrer, R.M.; Ougouag, A.M.
2010-01-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Power (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal z full core solver used in this study and is based on the Green's Function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control
The changes of ADI structure during high temperature annealing
A. Krzyńska; M. Kaczorowski
2010-01-01
The results of structure investigations of ADI during annealing at elevated temperatures are presented. Ductile iron austempered at 325°C was then isothermally annealed for 360 minutes at temperatures of 400, 450, 500 and 550°C. The structure investigations showed that annealing at these temperatures caused substantial structure changes and thus an essential hardness decrease, which is the most useful property of ADI from the point of view of its practical application. Degradation advance of the ...
Hydration-annealing of chemical radiation damage in calcium nitrate
International Nuclear Information System (INIS)
Nair, S.M.K.; James, C.
1984-01-01
The effect of hydration on the annealing of chemical radiation damage in anhydrous calcium nitrate has been investigated. Rehydration of the anhydrous irradiated nitrate induces direct recovery of the damage. The rehydrated salt is susceptible to thermal annealing but the extent of annealing is small compared to that in the anhydrous salt. The direct recovery of damage on rehydration is due to enhanced lattice mobility. The recovery process is unimolecular. (author)
Implantation annealing in GaAs by incoherent light
International Nuclear Information System (INIS)
Davies, D.E.; Ryan, T.G.; Soda, K.J.; Comer, J.J.
1983-01-01
Implanted GaAs has been successfully activated by concentrating the output of quartz halogen lamps to anneal in times on the order of 1 s. The resulting layers are not restricted by the reduced mobilities and thermal instabilities of laser-annealed GaAs. Better activation can be obtained than with furnace annealing, but this generally requires maximum temperatures ≥ 1050°C. (author)
Temperature distribution study in flash-annealed amorphous ribbons
International Nuclear Information System (INIS)
Moron, C.; Garcia, A.; Carracedo, M.T.
2003-01-01
Negative magnetostrictive amorphous ribbons have been locally current annealed with currents from 1 to 8 A and annealing times from 14 ms to 200 s. In order to obtain information about the sample temperature during flash or current annealing, a study of the temperature dispersion during annealing in amorphous ribbons was made. The local temperature variation was obtained by measuring the local intensity of the infrared emission of the sample with a CCD liquid-nitrogen-cooled camera. A distribution of local temperature has been found in spite of the small dimensions of the sample
Burst annealing of high temperature GaAs solar cells
Brothers, P. R.; Horne, W. E.
1991-01-01
One of the major limitations of solar cells in space power systems is their vulnerability to radiation damage. One solution to this problem is to periodically heat the cells to anneal the radiation damage. Annealing was demonstrated with silicon cells. The obstacle to annealing of GaAs cells was their susceptibility to thermal damage at the temperatures required to completely anneal the radiation damage. GaAs cells with high temperature contacts and encapsulation were developed. The cells tested are designed for concentrator use at 30 suns AMO. The circular active area is 2.5 mm in diameter for an area of 0.05 sq cm. Typical one sun AMO efficiency of these cells is over 18 percent. The cells were demonstrated to be resistant to damage after thermal excursions in excess of 600 C. This high temperature tolerance should allow these cells to survive the annealing of radiation damage. A limited set of experiments were devised to investigate the feasibility of annealing these high temperature cells. The effect of repeated cycles of electron and proton irradiation was tested. The damage mechanisms were analyzed. Limitations in annealing recovery suggested improvements in cell design for more complete recovery. These preliminary experiments also indicate the need for further study to isolate damage mechanisms. The primary objective of the experiments was to demonstrate and quantify the annealing behavior of high temperature GaAs cells. Secondary objectives were to measure the radiation degradation and to determine the effect of repeated irradiation and anneal cycles.
Dosimetric characteristics of muscovite mineral studied under different annealing conditions
International Nuclear Information System (INIS)
Kalita, J M; Wary, G
2015-01-01
The annealing effect on the thermoluminescence (TL) characteristics of x-ray irradiated muscovite mineral relevant to dosimetry has been studied. For un-annealed and 473 K annealed samples an isolated TL peak has been observed at around 347 K; however, after annealing at 573, 673 and 773 K, two composite peaks have been recorded at around 347 and 408 K. Kinetic analysis reveals that there is a trap level at a depth of 0.71 eV and that, due to annealing at 573 K (or above), a new trap level is generated at 1.23 eV. The dosimetric characteristics, such as dose response, fading and reproducibility, have been studied in detail for all types of samples. The highest linear dose response has been observed from 10 to 2000 mGy in the 773 K annealed sample. Due to the generation of the deep trap level, fading is found to be significantly reduced after annealing above 573 K. Reproducibility analysis shows that after 10 cycles of reuse the coefficients of variation in the results for 60, 180 and 1000 mGy dose irradiated 773 K annealed samples are 1.78%, 1.37% and 1.58%, respectively. These analyses demonstrate that after proper annealing muscovite shows important dosimetric features that are essentially required for a thermoluminescence dosimeter (TLD). (paper)
Annealing behavior of alpha recoil tracks in phlogopite
International Nuclear Information System (INIS)
Gao Shaokai; Yuan Wanming; Dong Jinquan; Bao Zengkuan
2005-01-01
Alpha recoil tracks (ARTs) formed during the α-decay of U and Th as well as their daughter nuclei are used in a new dating method which is to some extent complementary to fission-track dating due to its ability to determine the age of young minerals. ARTs can be observed under a phase-contrast interference microscope after chemical etching. In order to study the annealing behavior of ARTs in phlogopite, two kinds of annealing experiments were performed. Samples were annealed in an electronic tube furnace at different temperatures ranging from 250°C to 450°C in steps of 50°C. For any given annealing temperature, different annealing times were used until total track fading was achieved. It is found that ARTs anneal much more easily than fission tracks, and the annealing ratio increases non-linearly with annealing time and temperature. Using the Arrhenius plot, an activation energy of 0.68 eV is found for 100% removal of ARTs, which is less than the corresponding value for fission tracks (FTs). By extending the annealing time to geological time, a much lower temperature range of the sample's cooling history can be obtained.
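The Arrhenius-plot analysis mentioned above amounts to fitting ln(annealing time) against 1/T; the slope, multiplied by Boltzmann's constant, gives the activation energy. A minimal sketch follows; the (time, temperature) data below are synthetic, generated to be consistent with a 0.68 eV activation energy, and are not the paper's measurements.

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def activation_energy(times_s, temps_k):
    """Least-squares slope of ln(t) vs 1/T on an Arrhenius plot,
    converted to an activation energy in eV."""
    xs = [1.0 / t_k for t_k in temps_k]
    ys = [math.log(t) for t in times_s]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
        / sum((x - x_bar) ** 2 for x in xs)
    return slope * K_B_EV

# Synthetic 100%-fading times for annealing temperatures of 250-450 C
temps_k = [523.0, 573.0, 623.0, 673.0, 723.0]
times_s = [math.exp(0.68 / (K_B_EV * t_k)) for t_k in temps_k]
print(round(activation_energy(times_s, temps_k), 2))  # → 0.68
```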
Effects of Thermal Annealing Conditions on Cupric Oxide Thin Film
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyo Seon; Oh, Hee-bong; Ryu, Hyukhyun [Inje University, Gimhae (Korea, Republic of); Lee, Won-Jae [Dong-Eui University, Busan (Korea, Republic of)
2015-07-15
In this study, cupric oxide (CuO) thin films were grown on fluorine-doped tin oxide (FTO) substrates by using a spin-coating method. We investigated the effects of thermal annealing temperature and thermal annealing duration on the morphological, structural, optical and photoelectrochemical properties of the CuO film. From the results, we found that the morphologies, grain sizes, crystallinity and photoelectrochemical properties were dependent on the annealing conditions. As a result, the maximum photocurrent density of -1.47 mA/cm² (vs. SCE) was obtained from the sample with thermal annealing conditions of 500°C and 40 min.
Plasticity margin recovery during annealing after cold deformation
International Nuclear Information System (INIS)
Bogatov, A.A.; Smirnov, S.V.; Kolmogorov, V.L.
1978-01-01
Restoration of the plasticity margin in steel 20 after cold deformation and annealing at 550-750°C with soaking for 5-300 min was investigated. The conditions of cold deformation under which the metal acquires microdefects unhealed by subsequent annealing were determined. It was established that if the degree of utilization of the plasticity margin is ψ < 0.5, the plasticity margin in steel 20 can be completely restored by annealing. A mathematical model of the restoration of the plasticity margin by annealing after cold deformation was constructed. A statistical analysis showed good agreement between model and experiment
Mean Field Analysis of Quantum Annealing Correction.
Matsuura, Shunji; Nishimori, Hidetoshi; Albash, Tameem; Lidar, Daniel A
2016-06-03
Quantum annealing correction (QAC) is a method that combines encoding with energy penalties and decoding to suppress and correct errors that degrade the performance of quantum annealers in solving optimization problems. While QAC has been experimentally demonstrated to successfully error correct a range of optimization problems, a clear understanding of its operating mechanism has been lacking. Here we bridge this gap using tools from quantum statistical mechanics. We study analytically tractable models using a mean-field analysis, specifically the p-body ferromagnetic infinite-range transverse-field Ising model as well as the quantum Hopfield model. We demonstrate that for p=2, where the phase transition is of second order, QAC pushes the transition to increasingly larger transverse field strengths. For p≥3, where the phase transition is of first order, QAC softens the closing of the gap for small energy penalty values and prevents its closure for sufficiently large energy penalty values. Thus QAC provides protection from excitations that occur near the quantum critical point. We find similar results for the Hopfield model, thus demonstrating that our conclusions hold in the presence of disorder.
Ballistic self-annealing during ion implantation
International Nuclear Information System (INIS)
Prins, Johan F.
2001-01-01
Ion implantation conditions are considered during which the energy dissipated in the collision cascades is low enough to ensure that the defects generated during these collisions consist primarily of vacancies and interstitial atoms. It is proposed that ballistic self-annealing is possible when the point defect density becomes high enough, provided that none, or very few, of the interstitial atoms escape from the layer being implanted. Under these conditions, the fraction of ballistic atoms generated within the collision cascades from substitutional sites decreases with increasing ion dose. Furthermore, the fraction of ballistic atoms which finally end up within vacancies increases with increasing vacancy density. Provided the crystal structure does not collapse, a damage threshold should be approached where just as many atoms are knocked out of substitutional sites as the number of ballistic atoms that fall back into vacancies. Under these conditions, the average point defect density should approach saturation. This model is applied to recently published Raman data measured on a 3 MeV He⁺-ion-implanted diamond (Orwa et al 2000 Phys. Rev. B 62 5461). The conclusion is reached that this ballistic self-annealing model describes the latter data better than a model in which it is assumed that the saturation in radiation damage is caused by amorphization of the implanted layer. (author)
MEDICAL STAFF SCHEDULING USING SIMULATED ANNEALING
Directory of Open Access Journals (Sweden)
Ladislav Rosocha
2015-07-01
Full Text Available Purpose: The efficiency of medical staff is a fundamental feature of healthcare facility quality. Therefore, better implementation of their preferences into the scheduling problem might not only raise the work-life balance of doctors and nurses, but may also result in better patient care. This paper focuses on optimization of medical staff preferences in the scheduling problem. Methodology/Approach: We propose a medical staff scheduling algorithm based on simulated annealing, a well-known method from statistical thermodynamics. We define hard constraints, which are linked to legal and working regulations, and minimize the violations of soft constraints, which are related to the quality of work, psyche, and work-life balance of staff. Findings: On a sample of 60 physicians and nurses from a gynecology department we generated monthly schedules and optimized their preferences in terms of soft constraints. Our results indicate that the final value of the objective function optimized by the proposed algorithm is more than 18 times better in violations of soft constraints than the initially generated random schedule that satisfied the hard constraints. Research Limitation/implication: Even though the global optimality of the final outcome is not guaranteed, a desirable solution was obtained in reasonable time. Originality/Value of paper: We show that the designed algorithm is able to successfully generate schedules regarding hard and soft constraints. Moreover, the presented method is significantly faster than standard schedule generation and is able to effectively reschedule due to the local neighborhood search characteristics of simulated annealing.
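The accept/reject core of such a simulated-annealing scheduler can be sketched generically. The cooling parameters, the neighbor move, and the cost function below are illustrative assumptions, not the authors' implementation: `neighbor` would propose a small roster change (e.g. swapping two shifts) that preserves the hard constraints, and `cost` would count weighted soft-constraint violations.

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=10.0, t_min=1e-3,
                        alpha=0.95, steps_per_t=100, seed=0):
    """Minimize cost() over schedules by simulated annealing."""
    rng = random.Random(seed)
    state = best = initial
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            candidate = neighbor(state, rng)
            delta = cost(candidate) - cost(state)
            # Always accept improvements; accept worsenings with
            # Boltzmann probability so the search can escape local minima.
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                state = candidate
            if cost(state) < cost(best):
                best = state
        t *= alpha  # geometric cooling schedule
    return best
```

The same local-neighborhood structure is what makes rescheduling cheap: the search can simply be restarted from the current roster instead of a random one.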
Magnetic field annealing for improved creep resistance
Brady, Michael P.; Ludtka, Gail M.; Ludtka, Gerard M.; Muralidharan, Govindarajan; Nicholson, Don M.; Rios, Orlando; Yamamoto, Yukinori
2015-12-22
The method provides heat-resistant chromia- or alumina-forming Fe-, Fe(Ni), Ni(Fe), or Ni-based alloys having improved creep resistance. A precursor is provided containing preselected constituents of a chromia- or alumina-forming Fe-, Fe(Ni), Ni(Fe), or Ni-based alloy, at least one of the constituents for forming a nanoscale precipitate MaXb, where M is Cr, Nb, Ti, V, Zr, or Hf, individually and in combination, X is C, N, O, B, individually and in combination, a=1 to 23 and b=1 to 6. The precursor is annealed at a temperature of 1000-1500°C for 1-48 h in the presence of a magnetic field of at least 5 Tesla to enhance supersaturation of the MaXb constituents in the annealed precursor. This forms nanoscale MaXb precipitates for improved creep resistance when the alloy is used at service temperatures of 500-1000°C. Alloys having improved creep resistance are also disclosed.
Szymanowski, Mariusz; Kryza, Maciej
2017-02-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for the spatialization of air temperature, and in many studies their results have proved better than those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more the spatial variation of air temperature was deterministically explained, the better the quality of spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear (MLR) and geographically weighted (GWR), as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models, and the cross-validation method was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
Mechanical behavior of multipass welded joint during stress relief annealing
International Nuclear Information System (INIS)
Ueda, Yukio; Fukuda, Keiji; Nakacho, Keiji; Takahashi, Eiji; Sakamoto, Koichi.
1978-01-01
An investigation into the mechanical behavior of a multipass welded joint of a pressure vessel during stress relief annealing was conducted. The study was performed theoretically and experimentally on idealized research models. In the theoretical analysis, the thermal elastic-plastic creep theory developed by the authors was applied. The behavior of multipass welded joints during the entire thermal cycle, from welding to stress relief annealing, was consistently analyzed by this theory. The results of the analysis show fundamentally good agreement with the experimental findings. The outline of the results and conclusions is as follows. (1) In the case of the material (2 1/4Cr-1Mo steel) used in this study, the creep strain rate during stress relief annealing below 575 °C obeys the strain-hardening creep law based on transient creep, and that above 575 °C obeys the power creep law based on stationary creep. (2) In the transverse residual stress (σsub(x)) distribution after annealing, the location of the largest tensile stress on the top surface is about 15 mm away from the toe of the weld, and the largest stress in the cross section is just below the finishing bead. These features are similar to those of welding residual stresses, but the stress distribution after annealing is smoother than that after welding. (3) The effectiveness of stress relief annealing depends greatly on the annealing temperature. For example, most of the residual stresses are relieved during the heating stage, at a heating rate of 30 °C/hr to 100 °C/hr, if the annealing temperature is 650 °C; but if the annealing temperature is 550 °C, the annealing is not effective even with a longer holding time. (4) In the case of the multipass welding residual stresses studied in this paper, the behavior of high stresses during annealing is approximated by that during anisothermal relaxation. (auth.)
Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.
Kang, Yun; Lanchier, Nicolas
2011-06-01
We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. A regime with exactly three attractors only appears when the patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of global extinction and the basin of attraction of global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases, resulting in either a global expansion or a global extinction. For the deterministic model there are then only two attractors, while the stochastic model no longer exhibits metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds, as the global population size in the deterministic and stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the
When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.
Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko
2015-08-01
When unique identifiers are unavailable, successful record linkage depends greatly on data quality and on the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, in the implementation of the linkage methodology, and in the validation method. This simulation study aimed to understand the data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers. We systematically varied the discriminative power, the rates of missingness and error, and the file size to produce a range of linkage patterns and difficulties. We assessed the performance difference between the linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage, generating linkages with a better trade-off between sensitivity and PPV regardless of data quality. However, with low rates of missingness and error in the data, deterministic linkage did not perform significantly worse. The implementation of deterministic linkage in SAS took less than 1 min, and probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rates of missingness and error in the linkage variables are key to choosing between linkage methods. In general, probabilistic linkage was the better choice, but for exceptionally good quality data (<5% error), deterministic linkage was a more resource-efficient choice.
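As a toy illustration of the trade-off reported above, the following sketch links synthetic records deterministically (exact agreement on every identifier) and probabilistically (a Fellegi-Sunter-style log-likelihood score with a threshold). The field names, error rate, and m/u probabilities are invented, not the study's:

```python
import math, random

FIELDS = ["surname", "birth_year", "zip"]
M = {f: 0.9 for f in FIELDS}                            # P(agree | match)
U = {"surname": 0.01, "birth_year": 0.02, "zip": 0.05}  # P(agree | nonmatch)

def det_link(a, b):
    # Deterministic rule: link only on exact agreement of every field.
    return all(a[f] == b[f] for f in FIELDS)

def prob_link(a, b, threshold=3.0):
    # Fellegi-Sunter-style score: agreement adds log(m/u), disagreement
    # adds log((1-m)/(1-u)); link when the total exceeds the threshold.
    s = 0.0
    for f in FIELDS:
        if a[f] == b[f]:
            s += math.log(M[f] / U[f])
        else:
            s += math.log((1 - M[f]) / (1 - U[f]))
    return s > threshold

rng = random.Random(1)

def noisy(rec, err=0.10):
    # Copy a record, corrupting each field with probability `err`.
    return {f: (v + "*" if rng.random() < err else v) for f, v in rec.items()}

truth = [{"surname": f"S{i}", "birth_year": str(1950 + i % 50),
          "zip": str(10000 + i)} for i in range(300)]
pairs = [(r, noisy(r), True) for r in truth]                    # true matches
pairs += [(truth[i], truth[i + 1], False) for i in range(299)]  # nonmatches

results = {}
for name, rule in [("deterministic", det_link), ("probabilistic", prob_link)]:
    tp = sum(1 for a, b, m in pairs if m and rule(a, b))
    fp = sum(1 for a, b, m in pairs if not m and rule(a, b))
    results[name] = (tp / 300, tp / max(tp + fp, 1))  # (sensitivity, PPV)
    print(name, results[name])
```

With per-field error present, the deterministic rule loses every match that has even one corrupted field, while the score-based rule still links single-error pairs, reproducing the sensitivity/PPV trade-off described above.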
Thermoelectric properties by high temperature annealing
Ren, Zhifeng (Inventor); Chen, Gang (Inventor); Kumar, Shankar (Inventor); Lee, Hohyun (Inventor)
2009-01-01
The present invention generally provides methods of improving thermoelectric properties of alloys by subjecting them to one or more high temperature annealing steps, performed at temperatures at which the alloys exhibit a mixed solid/liquid phase, followed by cooling steps. For example, in one aspect, such a method of the invention can include subjecting an alloy sample to a temperature that is sufficiently elevated to cause partial melting of at least some of the grains. The sample can then be cooled so as to solidify the melted grain portions such that each solidified grain portion exhibits an average chemical composition, characterized by a relative concentration of elements forming the alloy, that is different than that of the remainder of the grain.
Coupled Quantum Fluctuations and Quantum Annealing
Hormozi, Layla; Kerman, Jamie
We study the relative effectiveness of coupled quantum fluctuations, compared to single spin fluctuations, in the performance of quantum annealing. We focus on problem Hamiltonians resembling the Sherrington-Kirkpatrick model of Ising spin glass and compare the effectiveness of different types of fluctuations by numerically calculating the relative success probabilities and residual energies in fully-connected spin systems. We find that for a small class of instances coupled fluctuations can provide improvement over single spin fluctuations and analyze the properties of the corresponding class. Disclaimer: This research was funded by ODNI, IARPA via MIT Lincoln Laboratory under Air Force Contract No. FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the US Government.
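The success-probability measurement described above can be sketched numerically for a tiny Sherrington-Kirkpatrick-type instance. This uses only the standard single-spin transverse-field driver (the coupled-fluctuation drivers studied in the abstract are not implemented), and the system size, schedule, and couplings are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4                                    # spins; Hilbert space dim 2^n = 16

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site):
    """Embed a single-spin operator at `site` in the n-spin space."""
    m = np.eye(1, dtype=complex)
    for k in range(n):
        m = np.kron(m, single if k == site else np.eye(2))
    return m

# Driver: transverse field. Problem: SK-like all-to-all Ising couplings.
H_x = -sum(op(sx, i) for i in range(n))
J = rng.normal(size=(n, n)) / np.sqrt(n)
H_p = sum(J[i, j] * op(sz, i) @ op(sz, j)
          for i in range(n) for j in range(i + 1, n))

# Linear schedule H(s) = (1-s) H_x + s H_p, evolved over total time T.
T, steps = 200.0, 2000
dt = T / steps
vals, vecs = np.linalg.eigh(H_x)
psi = vecs[:, 0]                         # start in the driver ground state
for k in range(steps):
    s = (k + 0.5) / steps
    vals, vecs = np.linalg.eigh((1 - s) * H_x + s * H_p)
    psi = vecs @ (np.exp(-1j * vals * dt) * (vecs.conj().T @ psi))

# Success probability: weight on the (possibly degenerate) ground manifold.
vals, vecs = np.linalg.eigh(H_p)
ground = vecs[:, vals < vals[0] + 1e-9]
p_success = float(np.sum(np.abs(ground.conj().T @ psi) ** 2))
residual = float((psi.conj() @ H_p @ psi).real - vals[0])  # residual energy
print(p_success, residual)
```

For a slow enough schedule the adiabatic theorem drives `p_success` toward 1 and the residual energy toward 0; comparing these quantities across driver types is the measurement the abstract describes.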
Angular filter refractometry analysis using simulated annealing.
Angland, P; Haberberger, D; Ivancic, S T; Froula, D H
2017-10-01
Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas [Haberberger et al., Phys. Plasmas 21, 056304 (2014)]. A new method of analysis for AFR images was developed that uses an annealing algorithm to iteratively converge upon a solution. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison with the measured image is optimized. The optimization and the statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5%-20% in the region of interest.
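The annealing-style χ² fit described above can be sketched as follows: a parameterized synthetic profile is compared to "measured" data through χ², and random parameter perturbations are accepted with a temperature-dependent Metropolis rule. The two-parameter exponential profile is an invented stand-in for the paper's eight-parameter density profile:

```python
import math, random

def profile(x, params):
    # Hypothetical density profile: amplitude and scale length.
    n0, scale = params
    return n0 * math.exp(-x / scale)

def chi2(params, xs, data, sigma):
    return sum((profile(x, params) - d) ** 2 / sigma ** 2
               for x, d in zip(xs, data))

rng = random.Random(3)
xs = [0.1 * i for i in range(50)]
true = (5.0, 1.5)                       # parameters used to fake the data
sigma = 0.05
data = [profile(x, true) + rng.gauss(0, sigma) for x in xs]

params = [1.0, 0.5]                     # deliberately poor starting guess
cur = chi2(params, xs, data, sigma)
best, best_c2 = list(params), cur
temp = 100.0
while temp > 1e-3:
    trial = [p * (1 + rng.gauss(0, 0.05)) for p in params]
    c2 = chi2(trial, xs, data, sigma)
    # Metropolis acceptance: always downhill, sometimes uphill.
    if c2 < cur or rng.random() < math.exp((cur - c2) / temp):
        params, cur = trial, c2
        if c2 < best_c2:
            best, best_c2 = list(trial), c2
    temp *= 0.999                       # geometric cooling schedule
print(best, best_c2)
```

In the paper's setting the same loop would run over the eight profile parameters, with the χ² curvature around the minimum supplying the quoted statistical uncertainties.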
Quenching and annealing in the minority game
Burgos, E.; Ceva, Horacio; Perazzo, R. P. J.
2001-05-01
We study the bar attendance model (BAM) and a generalized version of the minority game (MG) in which a number of agents self-organize to match an attendance that is fixed externally as a control parameter. We compare the probabilistic dynamics used in the MG with one that we introduce for the BAM, which makes better use of the same available information. The relaxation dynamics of the MG leads the system to long-lived, metastable (quenched) configurations in which adaptive evolution stops in spite of being far from equilibrium. By contrast, the BAM relaxation dynamics avoids the MG glassy state, leading to an equilibrium configuration. Finally, we introduce into the MG model the concept of annealing by defining a new procedure with which one can gradually overcome the metastable MG states, bringing the system to an equilibrium that coincides with the one obtained with the BAM.
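For readers unfamiliar with the model family, a minimal sketch of a textbook minority game follows. It uses the standard inductive-strategy dynamics with target attendance N/2, not the probabilistic BAM update or the annealing procedure introduced in the paper, and all parameters are invented:

```python
import random

rng = random.Random(42)
N, M, S = 101, 3, 2          # agents, memory bits, strategies per agent
T = 2000                     # rounds to play
n_hist = 2 ** M

# Each strategy maps each of the 2^M recent-history patterns to an
# action: attend (1) or stay home (0).
strategies = [[[rng.randint(0, 1) for _ in range(n_hist)]
               for _ in range(S)] for _ in range(N)]
scores = [[0] * S for _ in range(N)]     # virtual points per strategy
history = rng.randint(0, n_hist - 1)

attendance = []
for t in range(T):
    acts = []
    for i in range(N):
        s_best = max(range(S), key=lambda s: scores[i][s])
        acts.append(strategies[i][s_best][history])
    A = sum(acts)
    attendance.append(A)
    minority = 1 if A < N / 2 else 0     # the minority side wins
    for i in range(N):
        for s in range(S):
            if strategies[i][s][history] == minority:
                scores[i][s] += 1        # reward strategies on minority side
    history = ((history << 1) | minority) % n_hist

mean_A = sum(attendance[-1000:]) / 1000
print(mean_A)
```

Attendance fluctuates around the target; the quenched states discussed in the abstract correspond to the system freezing into suboptimal strategy assignments rather than relaxing to equilibrium.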
Annealing effects on cathodoluminescence of zircon
Tsuchiya, Y.; Nishido, H.; Noumi, Y.
2011-12-01
U-Pb zircon dating (e.g., SHRIMP) is an important tool for interpreting the history of the minerals at a micrometer scale, where cathodoluminescence (CL) imaging allows us to recognize internal zones and domains with different chemical compositions and structural disorder at high spatial resolution. The CL of zircon is attributed to various types of emission centers: extrinsic ones such as REE impurities and intrinsic ones such as structural defects. Metamictization, resulting from radiation damage to the lattice by alpha particles from the decay of U and Th, mostly affects the CL features of zircon through defect centers. However, slightly radiation-damaged zircon, which is almost undetectable by XRD, has not been characterized using the CL method. In this study, annealing effects on the CL of zircon have been investigated to clarify the recovery process of the damaged lattice at low radiation dose. A single crystal of zircon from Malawi was selected for CL measurements. It contains HfO2: 2.30 wt.%, U: 241 ppm and Th: 177 ppm. Two plate samples, perpendicular to the c and a axes, were prepared for 12-hour annealing experiments at temperatures from room temperature to 1400 °C. Color CL images were captured using a cold-cathode microscope (Luminoscope: Nuclide ELM-3R). CL spectral measurements were conducted using an SEM (JEOL: JSM-5410) combined with a grating monochromator (Oxford: Mono CL2) to measure CL spectra ranging from 300 to 800 nm in 1 nm steps with a temperature-controlled stage. The dispersed CL was collected by a photon-counting method using a photomultiplier tube (Hamamatsu: R2228) and converted to digital data. All CL spectra were corrected for the total instrumental response. Spectral analysis reveals an anisotropy of the CL emission bands related to an intrinsic defect center in the blue region, a radiation-induced defect center from 500 to 700 nm, and a trivalent Dy impurity center at 480 and 580 nm, but their relative intensities are almost constant. CL on the
Flowshop Scheduling Using Simulated Annealing
Directory of Open Access Journals (Sweden)
Muhammad Firdaus
2015-04-01
Full Text Available This article applies a machine scheduling technique, Simulated Annealing (SA), to schedule 8 jobs on 5 machines so as to minimize makespan. A flowshop production line was chosen as a case study for data collection, with the aim of reducing the jobs' makespan. The article also carries out a sensitivity analysis to explore the implications of changes in SA parameters such as temperature. The results show that scheduling with the SA algorithm decreases the completion time of the jobs by about 5 hours compared with the existing method. Moreover, the total idle time of the machines is reduced by 2.18 per cent using the SA technique. The sensitivity analysis indicates a significant relationship between temperature changes and both makespan and computation time.
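The approach described can be sketched as simulated annealing over job permutations with the abstract's dimensions (8 jobs, 5 machines); the processing times below are invented, not the article's case-study data:

```python
import math, random

def makespan(seq, p):
    """Completion-time recurrence for a permutation flowshop:
    C[j, k] = max(C[j-1, k], C[j, k-1]) + p[job_j][k], rolled into
    a single array over machines."""
    m = len(p[0])
    C = [0.0] * m
    for j in seq:
        C[0] += p[j][0]
        for k in range(1, m):
            C[k] = max(C[k], C[k - 1]) + p[j][k]
    return C[-1]

rng = random.Random(0)
n_jobs, n_mach = 8, 5
p = [[rng.randint(1, 9) for _ in range(n_mach)] for _ in range(n_jobs)]

seq = list(range(n_jobs))
cur = makespan(seq, p)
best_seq, best = list(seq), cur
temp = 50.0
while temp > 0.01:
    i, j = rng.sample(range(n_jobs), 2)     # neighbor: swap two jobs
    seq[i], seq[j] = seq[j], seq[i]
    cand = makespan(seq, p)
    if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
        cur = cand
        if cand < best:
            best_seq, best = list(seq), cand
    else:
        seq[i], seq[j] = seq[j], seq[i]     # reject: undo the swap
    temp *= 0.995                           # cooling schedule
print(best_seq, best)
```

Rerunning with different cooling rates and initial temperatures is the sensitivity analysis the article describes: faster cooling trades solution quality for computation time.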
Annealed n-vector p spin model
International Nuclear Information System (INIS)
Taucher, T.; Frankel, N.E.
1992-01-01
A disordered n-vector model with p-spin interactions is introduced and studied in mean field theory for the annealed case. The complete solutions for the cases n = 2 and n = 3 are presented, and explicit order parameter equations are given for all the stable solutions for arbitrary n. For all n and p, one stable high-temperature phase and one stable low-temperature phase were found. The phase transition is of first order. For n = 2, it is continuous in the order parameters for p ≤ 4 and has a jump discontinuity in the order parameters if p > 4. For n = 3, it has a jump discontinuity in the order parameters for all p. 11 refs., 4 figs
Note: A wide temperature range MOKE system with annealing capability.
Chahil, Narpinder Singh; Mankey, G J
2017-07-01
A novel sample stage integrated with a longitudinal MOKE system has been developed, providing wide-temperature-range measurement and annealing capability over a temperature range beginning at 65 K. The stage permits annealing at elevated temperatures without adversely affecting the cryostat and minimizes thermal drift in position. In this system the hysteresis loops of magnetic samples can be measured while annealing the sample in a magnetic field.
Stored energy and annealing behavior of heavily deformed aluminium
DEFF Research Database (Denmark)
Kamikawa, Naoya; Huang, Xiaoxu; Kondo, Yuka
2012-01-01
It has been demonstrated in previous work that a two-step annealing treatment, including a low-temperature, long-time annealing and a subsequent high-temperature annealing, is a promising route to control the microstructure of a heavily deformed metal. In the present study, structural parameters...... are quantified such as boundary spacing, misorientation angle and dislocation density for 99.99% aluminium deformed by accumulative roll-bonding to a strain of 4.8. Two different annealing processes have been applied; (i) one-step annealing for 0.5 h at 100-400°C and (ii) two-step annealing for 6 h at 175°C...... followed by 0.5 h annealing at 200-600°C, where the former treatment leads to discontinuous recrystallization and the latter to uniform structural coarsening. This behavior has been analyzed in terms of the relative change during annealing of energy stored as elastic energy in the dislocation structure...
Annealing behavior of solution grown polyethylene single crystals
Loos, J.; Tian, M.
2006-01-01
The morphology evolution of solution grown polyethylene single crystals upon annealing below their melting temperature has been studied using atomic force microscopy (AFM). The AFM investigations were performed ex situ, that is, at room temperature after the annealing
Principal and secondary luminescence lifetime components in annealed natural quartz
International Nuclear Information System (INIS)
Chithambo, M.L.; Ogundare, F.O.; Feathers, J.
2008-01-01
Time-resolved luminescence spectra from quartz can be separated into components with distinct principal and secondary lifetimes depending on certain combinations of annealing and measurement temperature. The influence of annealing on properties of the lifetimes related to irradiation dose and measurement temperature has been investigated in sedimentary quartz annealed at various temperatures up to 900 deg. C. Time-resolved luminescence for use in the analysis was pulse stimulated from samples at 470 nm between 20 and 200 deg. C. Luminescence lifetimes decrease with measurement temperature due to the increasing thermal effect on the associated luminescence, with an activation energy of thermal quenching equal to 0.68 ± 0.01 eV for the secondary lifetime, but only qualitatively so for the principal lifetime component. Concerning the influence of annealing temperature, luminescence lifetimes measured at 20 deg. C are constant at about 33 μs for annealing temperatures up to 600 deg. C but decrease to about 29 μs when the annealing temperature is increased to 900 deg. C. In addition, it was found that lifetime components in samples annealed at 800 deg. C are independent of radiation dose in the range 85-1340 Gy investigated. The dependence of lifetimes on both the annealing temperature and the magnitude of radiation dose is described as being due to the increasing importance of a particular recombination centre in the luminescence emission process as a result of dynamic hole transfer between non-radiative and radiative luminescence centres
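The thermal-quenching analysis implied above can be sketched as follows: lifetimes shorten with measurement temperature as 1/τ(T) = 1/τ₀ + ν·exp(−E/kT), and the activation energy follows from a log-linear fit. τ₀ matches the abstract's ~33 μs and E its 0.68 eV; the frequency factor ν is an invented assumption used only to generate the synthetic data:

```python
import math

K_B = 8.617e-5                      # Boltzmann constant [eV/K]
TAU0 = 33e-6                        # low-temperature lifetime [s]
NU = 3e11                           # frequency factor [1/s], assumed
E_TRUE = 0.68                       # activation energy [eV]

def tau(T):
    # Mott-style thermal quenching of the luminescence lifetime.
    return 1.0 / (1.0 / TAU0 + NU * math.exp(-E_TRUE / (K_B * T)))

# "Measured" lifetimes over roughly the 20-200 deg. C measurement range.
temps = [293 + 20 * i for i in range(10)]
taus = [tau(T) for T in temps]

# Linear fit of ln(1/tau - 1/tau0) against 1/(k T): slope = -E.
xs = [1.0 / (K_B * T) for T in temps]
ys = [math.log(1.0 / t - 1.0 / TAU0) for t in taus]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
E_fit = -slope
print(E_fit)
```

With real data the scatter of the fitted lifetimes, not a noise-free model, sets the quoted ±0.01 eV uncertainty on E.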
Annealing of KDP crystals in vacuum and under pressure
International Nuclear Information System (INIS)
Pritula, I.M.; Kolybayeva, M.I.; Salo, V.I.
1997-01-01
The effect of high-temperature annealing (T_ann > 230 °C) on the absorption spectra and laser damage threshold of KDP crystals was studied in the present paper. The experiments on isothermal annealing were performed under pressure in an atmosphere with specific properties. The composition of the atmosphere was selected to be close to that of the desorbing gas component determined during annealing in vacuum. These conditions made it possible to conduct annealing in the temperature range of 230-280 °C without degradation of the sample. The variations in the absorption spectra showed that the effect of the annealing is most strongly revealed in the short-wave region of the spectrum (k = 0.12 cm⁻¹ after annealing, down from its pre-annealing value). The measurements demonstrate that at temperatures of ∼230-280 °C processes ensuring the improvement of the structural quality are stimulated in the volume of the crystals: (a) before the annealing the laser damage threshold was 1.5 × 10¹¹ W/cm²; (b) after the annealing (T = 280 °C) it became 4 × 10¹¹ W/cm²
Effect of heat moisture treatment and annealing on physicochemical ...
African Journals Online (AJOL)
Red sorghum starch was physically modified by annealing and heat-moisture treatment. The swelling power and solubility increased with increasing temperature over the range 60-90°C, while annealing and heat-moisture treatment decreased the swelling power and solubility of the starch. Solubility and swelling were pH dependent with ...
Nitrogen annealing of zirconium or titanium metals and their alloys
International Nuclear Information System (INIS)
Eucken, C.M.
1982-01-01
A method is described for continuously nitrogen annealing zirconium and titanium metals and their alloys at temperatures from 525 to 875 °C for periods of 0.5 to 15 minutes. The examples include the annealing of Zircaloy-4. (U.K.)
Application of annealing for extension of WWER vessel lives
International Nuclear Information System (INIS)
Badanin, V.; Dragunow, Yu.G.; Fedorov, V.; Gorynin, I.; Nickolaev, V.
1992-01-01
The safe operation of nuclear power plants (NPPs) depends upon the assurance that the reactor pressure vessel will not fail in a brittle manner when the effects of radiation embrittlement are taken into account. Recovery of the properties of the irradiated materials is an important way of extending the operating life of a reactor vessel. The intent of this paper is to demonstrate the efficiency of thermal annealing for the recovery of reactor vessel material properties and to present the implications for extended service life. In order to substantiate the application of annealing to the extension of the service life of vessels, detailed investigations were conducted into the effects of thermal annealing temperature and time, fast neutron fluence, and metallurgical factors (i.e., impurity content) on the recovery of properties after the annealing of irradiated materials. Similar studies were continued to determine predictive methods for radiation embrittlement after repeated annealings. In May 1987 the first pilot annealing of a commercial reactor vessel (Novo-Voronezhskaya, III, NPP) was performed. The development of the annealing equipment and the investigations performed to test the annealing process proved successful, and improved safe operation of the reactor vessel was thus attained, providing for an extended service life. (orig.)
Susceptor and proximity rapid thermal annealing of InP
International Nuclear Information System (INIS)
Katz, A.; Pearton, S.J.; Geva, M.
1990-01-01
This paper presents a comparison between the efficiency of InP rapid thermal annealing within two types of SiC-coated graphite susceptors and by the more conventional proximity approach in providing degradation-free substrate surface morphology. The superiority of annealing within a susceptor was clearly demonstrated through the evaluation of the performance of AuGe contacts to carbon-implanted InP substrates, which were annealed to activate the implants prior to metallization. Susceptor annealing provided better protection against edge degradation and slip formation, and better surface morphology, due to the elimination of P outdiffusion and pit formation. The two SiC-coated susceptors that were evaluated differ from each other in their geometry. The first type must be charged with the group V species prior to any annealing cycle; under the optimum charging conditions, effective surface protection was provided for only one anneal (750 degrees C, 10 s) of InP before recharging was necessary. The second type contained reservoirs providing the group V element partial pressure, which enabled high-temperature annealing of InP without the need for continual recharging of the susceptor. Thus, one has the ability to sequentially anneal many InP wafers at high temperatures without inducing any surface deterioration
Response of neutron-irradiated RPV steels to thermal annealing
International Nuclear Information System (INIS)
Iskander, S.K.; Sokolov, M.A.; Nanstad, R.K.
1997-01-01
One of the options to mitigate the effects of irradiation on reactor pressure vessels (RPVs) is to thermally anneal them to restore the fracture toughness properties that have been degraded by neutron irradiation. This paper summarizes experimental results of work performed at the Oak Ridge National Laboratory (ORNL) to study the annealing response of several irradiated RPV steels
Improved perovskite phototransistor prepared using multi-step annealing method
Cao, Mingxuan; Zhang, Yating; Yu, Yu; Yao, Jianquan
2018-02-01
Organic-inorganic hybrid perovskites with good intrinsic physical properties have received substantial interest for solar cell and optoelectronic applications. However, perovskite films always suffer from low carrier mobility due to structural imperfections, including sharp grain boundaries and pinholes, restricting their device performance and application potential. Here we demonstrate a straightforward strategy based on a multi-step annealing process to improve the performance of a perovskite photodetector. Annealing temperature and duration greatly affect the surface morphology and optoelectrical properties of perovskites, which determine the device performance of the phototransistor. Perovskite films treated with the multi-step annealing method tend to be highly uniform, well crystallized and of high surface coverage, and exhibit stronger ultraviolet-visible absorption and photoluminescence spectra compared to perovskites prepared by the conventional one-step annealing process. The perovskite photodetector treated by the one-step direct annealing method shows a field-effect mobility of 0.121 (0.062) cm2 V-1 s-1 for holes (electrons), which increases to 1.01 (0.54) cm2 V-1 s-1 for the device treated with the multi-step slow annealing method. Moreover, the perovskite phototransistors exhibit a fast photoresponse speed of 78 μs. In general, this work focuses on the influence of annealing methods on the perovskite phototransistor rather than on obtaining its best parameters. These findings prove that the multi-step annealing method is a feasible route to high-performance perovskite-based photodetectors.
Fission gas release during post irradiation annealing of large grain size fuels from Hinkley point B
International Nuclear Information System (INIS)
Killeen, J.C.
1997-01-01
A series of post-irradiation anneals has been carried out on fuel taken from an experimental stringer from Hinkley Point B AGR. The stringer was part of an experimental programme in the reactor to study the effect of large grain size fuel. Three differing fuel types were present in separate pins in the stringer. One variant of large grain size fuel had been prepared by using an MgO dopant during fuel manufacture, a second by high-temperature sintering of standard fuel, and the third was a reference, 12 μm grain size fuel. Both large grain size variants had similar grain sizes, around 35 μm. The present experiments took fuel samples from highly rated pins from the stringer, with local burn-up in excess of 25 GWd/tU, and annealed them at temperatures of up to 1535 deg. C under reducing conditions to allow a comparison of fission gas behaviour at high release levels. The results demonstrate the beneficial effect of large grain size on the release rate of ⁸⁵Kr following interlinkage. At low temperatures and release rates there was no difference between the fuel types, but at temperatures in excess of 1400 deg. C the release rate was found to be inversely dependent on the fuel grain size. The experiments showed some differences between the doped and undoped large grain size fuels in that the former became interlinked at a lower temperature, releasing fission gas at an increased rate at this temperature. At higher temperatures the grain size effect was dominant. The temperature dependence of fission gas release was determined over a narrow range of temperature and found to be similar for all three fuel types and for both pre-interlinkage and post-interlinkage releases; the difference between the release rates is then seen to be controlled by grain size. (author). 4 refs, 7 figs, 3 tabs
Annealing of Al implanted 4H silicon carbide
International Nuclear Information System (INIS)
Hallen, A; Suchodolskis, A; Oesterman, J; Abtin, L; Linnarsson, M
2006-01-01
Al ions were implanted with multiple energies up to 250 keV at elevated temperatures in n-type 4H SiC epitaxial layers to reach a surface concentration of 1×10²⁰ cm⁻³. These samples were then annealed at temperatures between 1500 and 1950 deg. C. A similar 4H SiC epitaxial sample was implanted with MeV Al ions to lower doses and annealed only at 200 and 400 deg. C. After annealing, cross-sections of the samples were characterized by scanning spreading resistance microscopy (SSRM). The results show that the resistivity of the high-dose Al implanted samples has not reached a saturated value, even after annealing at the highest temperature. For the MeV Al implanted sample, the activation of Al has not yet started, but substantial annealing of the implantation-induced damage can be seen from the SSRM depth profiles
Use of superheated steam to anneal the reactor pressure vessel
International Nuclear Information System (INIS)
Porowski, J.S.
1994-01-01
Thermal annealing of an embrittled reactor pressure vessel is the only recognized means of recovering material properties lost due to long-term exposure of the reactor walls to radiation. Reduced toughness of the material during operation is a major concern in evaluations of the structural integrity of older reactors. Extensive studies performed within programs related to life extension of nuclear plants have confirmed that thermal treatment of irradiated material at 850 degrees F for 168 hours essentially recovers the material properties lost due to neutron exposure. Dry and wet annealing methods have been considered. Wet annealing involves operating the reactor at near design temperatures and pressures. Since the temperature of wet annealing must be limited to the vessel design temperature of 650 degrees F, only partial recovery of the lost properties is achieved. Thus dry annealing was selected as the alternative for future development and industrial implementation to extend the safe life of reactors
Crystallization degree change of expanded graphite by milling and annealing
International Nuclear Information System (INIS)
Tang Qunwei; Wu Jihuai; Sun Hui; Fang Shijun
2009-01-01
Expanded graphite was ball milled with a planetary mill in an air atmosphere and subsequently thermally annealed. The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and thermogravimetric analysis (TGA). It was found that in the initial milling stage (less than 12 h) the crystallization degree of the expanded graphite declined gradually, but after milling for more than 16 h recrystallization of the expanded graphite took place, and ordered nanoscale expanded graphite formed gradually. In the initial annealing stage the graphite became non-crystalline, but beyond a certain annealing time recrystallization of the graphite arose. Higher annealing temperatures promoted the recrystallization. The milled and annealed expanded graphite still preserved the crystalline structure of the raw material and retained high thermal stability.
Directory of Open Access Journals (Sweden)
Felipe Baesler
2008-12-01
Full Text Available This paper introduces a variant of the simulated annealing metaheuristic for solving multiobjective optimization problems, called MultiObjective Simulated Annealing with Random Trajectory Search (MOSARTS). The technique adds short- and long-term memory elements to simulated annealing in order to balance the search effort among all the objectives involved in the problem. The algorithm was tested against three other techniques on a real-life parallel machine scheduling problem composed of 24 jobs and two identical machines, a case study from the regional sawmill industry. In the experiments performed, MOSARTS behaved better than the other methods, finding better solutions in terms of dominance and frontier dispersion.
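The MOSARTS procedure itself is only summarized above; as a rough sketch of the general idea it builds on (a simulated annealing loop that keeps a Pareto archive of non-dominated two-machine schedules), one might write something like the following. The job data, the machine-flip neighbourhood move and the scalarized acceptance rule are illustrative assumptions, not the authors' implementation.

```python
import math
import random

random.seed(0)
JOBS = [random.randint(1, 10) for _ in range(24)]  # processing times (invented)

def objectives(assign):
    """Return (makespan, load imbalance) for a 0/1 machine assignment."""
    loads = [0, 0]
    for p, m in zip(JOBS, assign):
        loads[m] += p
    return (max(loads), abs(loads[0] - loads[1]))

def dominates(a, b):
    """Pareto dominance: a is at least as good everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def mosa(iters=5000, t0=10.0, cooling=0.999):
    cur = [random.randint(0, 1) for _ in JOBS]
    archive = [(objectives(cur), cur[:])]            # non-dominated front
    t = t0
    for _ in range(iters):
        cand = cur[:]
        cand[random.randrange(len(cand))] ^= 1       # flip one job's machine
        fc, fn = objectives(cur), objectives(cand)
        delta = sum(fn) - sum(fc)                    # crude scalarization
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cur = cand
            fo = objectives(cur)
            if not any(dominates(f, fo) for f, _ in archive):
                archive = [(f, s) for f, s in archive if not dominates(fo, f)]
                archive.append((fo, cur[:]))
        t *= cooling
    return archive

front = mosa()
```

The archive update keeps only mutually non-dominated solutions, which is what the dominance-based comparison in the abstract refers to.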
Unified model of damage annealing in CMOS, from freeze-in to transient annealing
International Nuclear Information System (INIS)
Sander, H.H.; Gregory, B.L.
Results of an experimental study at 76 K are presented, showing that radiation-produced holes in SiO2 are immobile at this temperature. If an electric field is present in the SiO2 during low-temperature (76 K) irradiation to sweep out the mobile electrons, virtually all holes are trapped where they are created and produce a uniform positive charge density in the oxide. These results are the basis for concluding that if a complementary metal-oxide-semiconductor (CMOS) device is irradiated for sufficient time at 76 K to build in an appreciable field, further irradiation with the gate bias removed will produce very little additional change in V_th, since the field in the oxide tends to keep all generated electrons in the oxide, where they recombine with trapped holes; the hole trapping rate then equals the hole annihilation rate. Room-temperature annealing following a pulsed gamma exposure occurs in two regimes. The first recovery of V_th occurs before 10^-4 seconds. The magnitude of this very-early-time recovery at room temperature is oxide-dependent and oxide-process-dependent. The rate of annealing is what truly distinguishes a rad-hard from a rad-soft device, since annealing in the hardest devices occurs very quickly at room temperature. (U.S.)
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
International Nuclear Information System (INIS)
Wang Zhi-Gang; Gao Rui-Mei; Fan Xiao-Ming; Han Qi-Xing
2014-01-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold that determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease prevails: the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, the infective disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending the deterministic model to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations. (general)
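The threshold role of ℛ0 can be illustrated with a much simpler single-group SIR model with births and deaths (not the multi-group MSIR model with vaccination analyzed in the paper). In this sketch, ℛ0 = β/(γ + μ) and, for ℛ0 > 1, the endemic infective fraction is i* = μ(ℛ0 − 1)/β; all parameter values are invented for illustration.

```python
def simulate_sir(beta, gamma, mu=0.02, i0=1e-3, dt=0.01, steps=200_000):
    """Forward-Euler integration of an SIR model with vital dynamics.

    ds/dt = mu - beta*s*i - mu*s
    di/dt = beta*s*i - (gamma + mu)*i

    R0 = beta / (gamma + mu); endemic level i* = mu*(R0 - 1)/beta when R0 > 1.
    Returns the final infective fraction i.
    """
    s, i = 1.0 - i0, i0
    for _ in range(steps):
        ds = mu - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        s += ds * dt
        i += di * dt
    return i

# R0 = 0.5: the infection dies out; R0 = 3.0: it settles at the endemic level.
low = simulate_sir(beta=0.51, gamma=1.0)
high = simulate_sir(beta=3.06, gamma=1.0)
```

The Euler map shares its fixed points with the ODE system, so for a long enough horizon the simulation lands on the same equilibria the Lyapunov analysis characterizes.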
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
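As a toy illustration of the hybrid idea (not the PDE-solver/Smoldyn coupling described above), one can operator-split a system into an abundant species integrated deterministically and a rare species simulated stochastically. The species, rates and tau-leap update below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_hybrid(t_end=200.0, dt=0.01, k1=10.0, k2=1.0, k3=0.5, k4=0.1):
    """Operator-split hybrid loop: the abundant species a follows the ODE
    da/dt = k1 - k2*a (deterministic Euler update), while the rare species
    count nb is updated stochastically by tau-leaping (Poisson births at
    rate k3*a, binomial deaths at per-molecule rate k4)."""
    a, nb = 0.0, 0
    for _ in range(int(t_end / dt)):
        a += (k1 - k2 * a) * dt             # deterministic part
        nb += rng.poisson(k3 * a * dt)      # stochastic births
        nb -= rng.binomial(nb, k4 * dt)     # stochastic deaths
    return a, nb

a_final, nb_final = simulate_hybrid()
# a relaxes to k1/k2; nb fluctuates around k3*(k1/k2)/k4 molecules.
```

The deterministic and stochastic updates advance over the same time step, which is the simplest form of the coupling; production solvers synchronize the two subsystems far more carefully.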
Optimization of structures subjected to dynamic load: deterministic and probabilistic methods
Directory of Open Access Journals (Sweden)
Élcio Cassimiro Alves
Full Text Available Abstract This paper deals with the deterministic and probabilistic optimization of structures against bending when subjected to dynamic loads. The deterministic optimization problem considers the plate subjected to a time-varying load, while the probabilistic one takes into account a random loading defined by a power spectral density function. The two problems are related through a Fourier transform. The finite element method is used to model the structures. The sensitivity analysis is performed by the analytical method, and the optimization problem is solved by an interior point method. A comparison between the deterministic optimization and the probabilistic one, with a power spectral density function compatible with the time-varying load, shows very good agreement.
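The correspondence between a time-varying load and a power spectral density runs through the Fourier transform; a generic one-sided periodogram estimate (not the paper's specific formulation) can be sketched as follows, with the 50 Hz load history purely illustrative.

```python
import numpy as np

def one_sided_psd(x, dt):
    """Periodogram estimate of the one-sided power spectral density of x(t)."""
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=dt)
    psd = (2.0 * dt / n) * np.abs(X) ** 2
    psd[0] /= 2.0                      # DC bin is not doubled
    if n % 2 == 0:
        psd[-1] /= 2.0                 # Nyquist bin is not doubled
    return freqs, psd

# Illustrative load history: a 50 Hz sinusoid sampled at 1 kHz for 1 s.
dt = 0.001
t = np.arange(0, 1.0, dt)
x = np.sin(2 * np.pi * 50 * t)
freqs, psd = one_sided_psd(x, dt)
```

By Parseval's theorem the integral of the PSD over frequency recovers the mean-square value of the load, which is the consistency condition linking the two optimization formulations.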
Deterministic and stochastic evolution equations for fully dispersive and weakly nonlinear waves
DEFF Research Database (Denmark)
Eldeberky, Y.; Madsen, Per A.
1999-01-01
This paper presents a new and more accurate set of deterministic evolution equations for the propagation of fully dispersive, weakly nonlinear, irregular, multidirectional waves. The equations are derived directly from the Laplace equation with leading-order nonlinearity in the surface boundary … is significantly underestimated for larger wave numbers. In the present work we correct this inconsistency. In addition to the improved deterministic formulation, we present improved stochastic evolution equations in terms of the energy spectrum and the bispectrum for multidirectional waves. The deterministic and stochastic formulations are solved numerically for the case of cross-shore motion of unidirectional waves, and the results are verified against laboratory data for wave propagation over submerged bars and over a plane slope. Outside the surf zone the two model predictions are generally in good agreement …
International Nuclear Information System (INIS)
Kutkov, V; Buglova, E; McKenna, T
2011-01-01
Lessons learned from responses to past events have shown that more guidance is needed for the response to radiation emergencies (in this context, a 'radiation emergency' means the same as a 'nuclear or radiological emergency') which could lead to severe deterministic effects. The International Atomic Energy Agency (IAEA) requirements for preparedness and response for a radiation emergency, inter alia, require that arrangements be made to prevent, to a practicable extent, severe deterministic effects and to provide the appropriate specialized treatment for these effects. These requirements apply to all exposure pathways, both internal and external, and to all reasonable scenarios, including those resulting from malicious acts (e.g. dirty bombs). This paper briefly describes the approach used to develop the basis for emergency response criteria for protective actions to prevent severe deterministic effects in the case of external exposure and intake of radioactive material.
Annealing evolutionary stochastic approximation Monte Carlo for global optimization
Liang, Faming
2010-01-01
… outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.
International Nuclear Information System (INIS)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-01-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and the memory requirements of their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations, in order to remove the memory constraint that the weight window map places on the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. The modeling of operating reserves in the existing deterministic reserve requirements acquires the operating reserves on a zonal basis and does not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modeling uncertainties, there are still scalability and pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch
Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis
2018-01-01
Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more
A Low Density Microarray Method for the Identification of Human Papillomavirus Type 18 Variants
Meza-Menchaca, Thuluz; Williams, John; Rodríguez-Estrada, Rocío B.; García-Bravo, Aracely; Ramos-Ligonio, Ángel; López-Monteon, Aracely; Zepeda, Rossana C.
2013-01-01
We describe a novel microarray-based method for screening oncogenic human papillomavirus 18 (HPV-18) molecular variants. Because sequencing may underestimate samples containing more than one variant, we designed a specific and sensitive stacking DNA hybridization assay. This technology can be used to discriminate between the three possible phylogenetic branches of HPV-18. Probes were attached covalently to glass slides and hybridized with single-stranded DNA targets. Prior to hybridization with the probes, the target strands were pre-annealed with three auxiliary contiguous oligonucleotides flanking the target sequences. HPV-18-positive cell lines and cervical samples were screened to evaluate the performance of this HPV DNA microarray. Our results demonstrate that the HPV-18 variants hybridized specifically to the probes, with no unspecific signals detected. Specific probes successfully revealed detectable point mutations in these variants. The present DNA oligoarray system can be used as a reliable, sensitive and specific method for HPV-18 variant screening. Furthermore, this simple assay uses inexpensive equipment, making it accessible in resource-poor settings. PMID:24077317
Mechanical properties and annealing texture of zirconium sheets
International Nuclear Information System (INIS)
Hanif-ur-Rehman; Khawaja, F.A.
1996-01-01
Mechanical properties such as yield strength (YS), ultimate tensile strength (UTS) and percentage elongation, together with the annealing texture, have been studied in sheets of commercially pure zirconium. The YS and UTS decrease as a function of annealing temperature up to 600 deg. C, but both quantities reach their maximum values in the sample annealed at 800 deg. C. The percentage elongation decreased with increasing annealing temperature up to 600 deg. C; a slight decrease and a minimum in percentage elongation were observed at 650 and 800 deg. C, respectively. The texture development in the annealed samples has been studied by the X-ray diffraction method. The sample annealed at 800 deg. C showed a texture component (0001)[01 bar 10] with an orientation density of about 8 times random, while the samples annealed at 600, 650 and 700 deg. C showed a texture component (0001)[2 bar 110] with an orientation density of about 5 times random. It is thus concluded that the texture components (0001)[2 bar 110] and (0001)[01 bar 10] at 650 and 800 deg. C, respectively, may be responsible for the increase in YS and UTS and the decrease in percentage elongation at these temperatures. (author)
Optical scattering characteristic of annealed niobium oxide films
International Nuclear Information System (INIS)
Lai Fachun; Li Ming; Wang Haiqian; Hu Hailong; Wang Xiaoping; Hou, J.G.; Song Yizhou; Jiang Yousong
2005-01-01
Niobium oxide (Nb2O5) films with thicknesses ranging from 200 to 1600 nm were deposited on fused silica at room temperature by a low-frequency reactive magnetron sputtering system. In order to study the optical losses resulting from the microstructure, films of 500 nm thickness were annealed at temperatures between 600 and 1100 deg. C, and films with thicknesses from 200 to 1600 nm were annealed at 800 deg. C. Scanning electron microscopy and atomic force microscopy images show that the root mean square surface roughness, the grain size, and the density of voids, microcracks and grain boundaries increase with both annealing temperature and thickness. Correspondingly, the optical transmittance and reflectance decrease and the optical loss increases. The mechanisms of the optical losses are discussed. The results suggest that defects in the volume and the surface roughness are the major sources of optical losses in the annealed films, causing pronounced scattering. For samples of a given thickness there is a critical annealing temperature above which surface scattering contributes the major optical losses. Within the experimental range, for films annealed at temperatures below 900 deg. C the major optical losses resulted from volume scattering, whereas surface roughness was the major source of optical losses when the 500 nm films were annealed at temperatures above 900 deg. C
Extrapolation of zircon fission-track annealing models
International Nuclear Information System (INIS)
Palissari, R.; Guedes, S.; Curvo, E.A.C.; Moreira, P.A.F.P.; Tello, C.A.; Hadler, J.C.
2013-01-01
One purpose of this study is to place further constraints on the temperature range of the zircon partial annealing zone over a geological time scale, using data from borehole zircon samples that have experienced stable temperatures for ∼1 Ma. In this way, the extrapolation problem is explicitly addressed by fitting the zircon annealing models with geological-timescale data. Several empirical model formulations have been proposed for these calibrations and are compared in this work. The basic form proposed for annealing models is the Arrhenius-type model, and other annealing models are based on the same general formulation. These empirical model equations have been preferred because of the great number of phenomena, from track formation to chemical etching, that are not well understood. However, two other models attempt to establish a direct correlation between their parameters and the related phenomena. To compare the responses of the different annealing models, thermal indexes such as the closure temperature, the total annealing temperature and the partial annealing zone were calculated and compared with field evidence. After comparing the different models, it is concluded that the fanning curvilinear models yield the best agreement between predicted index temperatures and field evidence. - Highlights: ► Geological data were used along with lab data for improving model extrapolation. ► Index temperatures were simulated for testing model extrapolation. ► Curvilinear Arrhenius models produced better geological temperature predictions
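A minimal sketch of what calibrating an Arrhenius-type annealing model involves: least-squares recovery of the coefficients of the basic form g(r) = c0 + c1 ln t + c2/T from time-temperature-reduced-length data. The data below are synthetic (not measured), and taking g(r) = r is a simplifying assumption; the fanning curvilinear models preferred in the paper generalize this linear form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annealing data: reduced track length r generated from the basic
# Arrhenius-type model g(r) = c0 + c1*ln(t) + c2/T, with g(r) = r here.
# Longer times (c1 < 0) and higher temperatures (c2 > 0) shorten tracks.
C0, C1, C2 = 0.4, -0.05, 200.0
t = rng.uniform(1.0, 1e4, 200)          # annealing time (hours, illustrative)
T = rng.uniform(450.0, 700.0, 200)      # temperature (K, illustrative)
r = C0 + C1 * np.log(t) + C2 / T + rng.normal(0.0, 0.005, 200)

# Least-squares recovery of the model coefficients.
A = np.column_stack([np.ones_like(t), np.log(t), 1.0 / T])
coef, *_ = np.linalg.lstsq(A, r, rcond=None)
```

With the coefficients in hand, index temperatures such as the total annealing temperature follow by solving g(r) = r_critical for T at a chosen geological time, which is exactly where the extrapolation question discussed above arises.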
Motai, Yuichi
2015-01-01
Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years. This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to their applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include
Annealed Scaling for a Charged Polymer
Energy Technology Data Exchange (ETDEWEB)
Caravenna, F., E-mail: francesco.caravenna@unimib.it [Università degli Studi di Milano-Bicocca, Dipartimento di Matematica e Applicazioni (Italy); Hollander, F. den, E-mail: denholla@math.leidenuniv.nl [Leiden University, Mathematical Institute (Netherlands); Pétrélis, N., E-mail: nicolas.petrelis@univ-nantes.fr [Université de Nantes, Laboratoire de Mathématiques Jean Leray UMR 6629 (France); Poisat, J., E-mail: poisat@ceremade.dauphine.fr [Université Paris-Dauphine, PSL Research University, CEREMADE, UMR 7534 (France)
2016-03-15
This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems
International Nuclear Information System (INIS)
Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David
2006-01-01
We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
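The surrogate data method can be sketched generically: phase-randomized surrogates preserve the linear (spectral) properties of the data, and a complexity measure computed on the original series is compared against the distribution of the same measure over the surrogates. The simplified Lempel-Ziv phrase count, the median binarization and the sine-in-noise stand-in signal below are illustrative assumptions, not the statistic or data of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def lz_complexity(bits):
    """Crude Lempel-Ziv-style phrase count of a binary sequence: each new
    phrase extends while it already occurs in the previously parsed text."""
    s = "".join("1" if b else "0" for b in bits)
    phrases, i = 0, 0
    while i < len(s):
        j = i + 1
        while j <= len(s) and s[i:j] in s[:i]:
            j += 1
        phrases += 1
        i = j
    return phrases

def surrogate(x, rng):
    """Phase-randomized surrogate with the same amplitude spectrum as x."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0       # keep the mean real
    phases[-1] = 0.0      # keep the Nyquist bin real (n even)
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def complexity(x):
    return lz_complexity(x > np.median(x))

# A simulated "signal" buried in noise, and 19 surrogates for comparison.
n = 1024
data = np.sin(2 * np.pi * np.arange(n) / 64) + 0.5 * rng.normal(size=n)
c_data = complexity(data)
c_surr = [complexity(surrogate(data, rng)) for _ in range(19)]
```

If c_data falls outside the spread of c_surr, the null hypothesis of a purely linear stochastic process is rejected at the corresponding rank-order level; with 19 surrogates a one-sided test operates at the 5% level.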
Directory of Open Access Journals (Sweden)
Tim Palmer
2015-10-01
Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
A continuous variable quantum deterministic key distribution based on two-mode squeezed states
International Nuclear Information System (INIS)
Gong, Li-Hua; Song, Han-Chong; Liu, Ye; Zhou, Nan-Run; He, Chao-Sheng
2014-01-01
The distribution of deterministic keys is of significance in personal communications, but the existing continuous variable quantum key distribution protocols can only generate random keys. By exploiting the entanglement properties of two-mode squeezed states, a continuous variable quantum deterministic key distribution (CVQDKD) scheme is presented for handing over the pre-determined key to the intended receiver. The security of the CVQDKD scheme is analyzed in detail from the perspective of information theory. It shows that the scheme can securely and effectively transfer pre-determined keys under ideal conditions. The proposed scheme can resist both the entanglement and beam splitter attacks under a relatively high channel transmission efficiency. (paper)
International Nuclear Information System (INIS)
Yokose, Yoshio; Noguchi, So; Yamashita, Hideo
2002-01-01
Both stochastic and deterministic methods are used for the optimization of electromagnetic devices. Genetic algorithms (GAs) are used as a stochastic method for multivariable designs, while the deterministic method uses the gradient method, which applies the sensitivity of the objective function. These two techniques have benefits and drawbacks. In this paper, the characteristics of those techniques are described, and a technique in which the two methods are used together is evaluated. The results of the comparison, obtained by applying each method to electromagnetic devices, are then described. (Author)
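A sketch of such a combined approach, on an invented multimodal test objective rather than an electromagnetic device model: the genetic algorithm supplies a good starting point globally, and gradient descent (the deterministic stage) refines it locally. The objective, operators and parameters are all illustrative assumptions.

```python
import math
import random

random.seed(42)
TARGET = [3.0, -1.0]   # location of the global minimum (invented)

def f(x):
    """Rippled quadratic test objective; global minimum 0 at TARGET."""
    return sum((xi - ti) ** 2 + 0.1 * (1 - math.cos(5 * (xi - ti)))
               for xi, ti in zip(x, TARGET))

def grad(x):
    """Analytical gradient of f (the 'sensitivity' used deterministically)."""
    return [2 * (xi - ti) + 0.5 * math.sin(5 * (xi - ti))
            for xi, ti in zip(x, TARGET)]

def ga(pop_size=30, gens=60, span=10.0):
    """Stochastic global stage: bare-bones real-coded genetic algorithm."""
    pop = [[random.uniform(-span, span) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)    # crossover by averaging
            children.append([(ai + bi) / 2 + random.gauss(0, 0.3)
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return min(pop, key=f)

def gradient_refine(x, lr=0.05, steps=500):
    """Deterministic local stage: plain gradient descent from the GA result."""
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

best = gradient_refine(ga())
```

The division of labour mirrors the trade-off noted above: the GA is robust to the starting point but slow to converge precisely, while the gradient stage converges quickly once inside the right basin.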
International Nuclear Information System (INIS)
Jepps, Owen G; Rondoni, Lamberto
2010-01-01
Deterministic 'thermostats' are mathematical tools used to model nonequilibrium steady states of fluids. The resulting dynamical systems correctly represent the transport properties of these fluids and are easily simulated on modern computers. More recently, the connection between such thermostats and entropy production has been exploited in the development of nonequilibrium fluid theories. The purpose and limitations of deterministic thermostats are discussed in the context of irreversible thermodynamics and the development of theories of nonequilibrium phenomena. We draw parallels between the development of such nonequilibrium theories and the development of notions of ergodicity in equilibrium theories. (topical review)
Palmer, Tim N; O'Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
Embrittlement recovery due to annealing of reactor pressure vessel steels
International Nuclear Information System (INIS)
Eason, E.D.; Wright, J.E.; Nelson, E.E.; Odette, G.R.; Mader, E.V.
1996-01-01
Embrittlement of reactor pressure vessels (RPVs) can be reduced by thermal annealing at temperatures higher than the normal operating conditions. Although such an annealing process has not been applied to any commercial plants in the United States, one US Army reactor, the BR3 plant in Belgium, and several plants in eastern Europe have been successfully annealed. All available Charpy annealing data were collected and analyzed in this project to develop quantitative models for estimating the recovery in 30 ft-lb (41 J) Charpy transition temperature and Charpy upper shelf energy over a range of potential annealing conditions. Pattern recognition, transformation analysis, residual studies, and the current understanding of the mechanisms involved in the annealing process were used to guide the selection of the most sensitive variables and correlating parameters and to determine the optimal functional forms for fitting the data. The resulting models were fitted by nonlinear least squares. The use of advanced tools, the larger database now available, and insight from surrogate hardness data produced improved models for quantitative evaluation of the effects of annealing. The quality of the models fitted in this project was evaluated by considering both the Charpy annealing data used for fitting and the surrogate hardness database. The standard errors of the resulting recovery models relative to calibration data are comparable to the uncertainty in unirradiated Charpy data. This work also demonstrates that microhardness recovery is a good surrogate for transition temperature shift recovery and that there is a high level of consistency between the observed annealing trends and fundamental models of embrittlement and recovery processes
Annealing effect on restoration of irradiation steel properties
International Nuclear Information System (INIS)
Vishkarev, O.M.; Kolesova, T.N.; Myasnikova, K.P.; Pecherin, A.M.; Shamardin, V.K.
1986-01-01
The effect of annealing temperature and time on the restoration of properties of the 15Kh2NMFAA and 15Kh2MFA steels after irradiation at 285 °C to a fluence of 6×10²³ n/m² (E>0.5 MeV) is studied. Microhardness (Hμ) restoration in the irradiated 15Kh2NMFAA steel is shown to begin at an annealing temperature of 350 °C. Complete microhardness restoration is observed after annealing at 500 °C for 10 hours.
Structural study of conventional and bulk metallic glasses during annealing
International Nuclear Information System (INIS)
Pineda, E.; Hidalgo, I.; Bruna, P.; Pradell, T.; Labrador, A.; Crespo, D.
2009-01-01
Metallic glasses with conventional glass-forming ability (Al-Fe-Nd, Fe-Zr-B, Fe-B-Nb compositions) and bulk metallic glasses (Ca-Mg-Cu compositions) were studied by synchrotron X-ray diffraction during annealing through the glass transition and crystallization temperatures. The analysis of the first diffraction peak position during the annealing process allowed us to follow the free volume change during relaxation and glass transition. The structure factor and the radial distribution function of the glasses were obtained from the X-ray measurements. The structural changes occurring during annealing are analyzed and discussed.
Composition dependent thermal annealing behaviour of ion tracks in apatite
Energy Technology Data Exchange (ETDEWEB)
Nadzri, A., E-mail: allina.nadzri@anu.edu.au [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia); Schauries, D.; Mota-Santiago, P.; Muradoglu, S. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia); Trautmann, C. [GSI Helmholtz Centre for Heavy Ion Research, Planckstrasse 1, 64291 Darmstadt (Germany); Technische Universität Darmstadt, 64287 Darmstadt (Germany); Gleadow, A.J.W. [School of Earth Science, University of Melbourne, Melbourne, VIC 3010 (Australia); Hawley, A. [Australian Synchrotron, 800 Blackburn Road, Clayton, VIC 3168 (Australia); Kluth, P. [Department of Electronic Materials Engineering, Research School of Physics and Engineering, Australian National University, Canberra, ACT 2601 (Australia)
2016-07-15
Natural apatite samples with different F/Cl content from a variety of geological locations (Durango, Mexico; Mud Tank, Australia; and Snarum, Norway) were irradiated with swift heavy ions to simulate fission tracks. The annealing kinetics of the resulting ion tracks was investigated using synchrotron-based small-angle X-ray scattering (SAXS) combined with ex situ annealing. The activation energies for track recrystallization were extracted and are consistent with previous track-etching studies; tracks in the chlorine-rich Snarum apatite are more resistant to annealing than those in the other compositions.
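Activation energies of this kind are conventionally extracted from an Arrhenius plot of annealing rate against inverse temperature. A minimal sketch with made-up rates (illustrative only, not the paper's SAXS data):

```python
import math

def arrhenius_fit(temps_K, rates):
    """Least-squares fit of ln(rate) = ln(A) - Ea/(kB*T).

    Returns (Ea_eV, A): a plain linear regression on (1/T, ln rate).
    """
    kB = 8.617e-5  # Boltzmann constant, eV/K
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * kB, math.exp(intercept)

# Hypothetical annealing rates at four temperatures (illustrative values).
kB = 8.617e-5
Ea_true, A_true = 1.2, 1.0e9  # eV, 1/s
temps = [550.0, 600.0, 650.0, 700.0]  # K
rates = [A_true * math.exp(-Ea_true / (kB * T)) for T in temps]

Ea_fit, A_fit = arrhenius_fit(temps, rates)
print(round(Ea_fit, 3))  # recovers the 1.2 eV used to generate the data
```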
Annealing-induced Fe oxide nanostructures on GaAs
Lu, Y X; Ahmad, E; Xu, Y B; Thompson, S M
2005-01-01
We report the evolution of Fe oxide nanostructures on GaAs(100) under pre- and post-growth annealing conditions. GaAs nanoscale pyramids were formed on the GaAs surface by wet etching and thermal annealing. An 8.0 nm epitaxial Fe film was grown, oxidized, and annealed using a gradient-temperature method. During this process nanostripes were formed, and their evolution has been demonstrated using transmission and reflection high-energy electron diffraction, and scanning electron microscopy...
Annealing of the BR3 reactor pressure vessel
International Nuclear Information System (INIS)
Fabry, A.; Motte, F.; Stiennon, G.; Debrue, J.; Gubel, P.; Van de Velde, J.; Minsart, G.; Van Asbroeck, P.
1985-01-01
The pressure vessel of the Belgian BR-3 plant, a small (11 MWe) PWR presently used for fuel testing programs and operated since 1962, was annealed during March 1984. The anneal was performed under wet conditions for 168 hours at 650 °F, with the core removed and within plant design margins. Justification for the anneal, a summary of plant characteristics, a description of materials sampling, a summary of reactor physics and dosimetry, development of embrittlement trend curves, hypothesized pressurized and overcooling thermal shock accidents, and conclusions are provided in detail.
Radiation damage and annealing in plutonium tetrafluoride
McCoy, Kaylyn; Casella, Amanda; Sinkov, Sergey; Sweet, Lucas; McNamara, Bruce; Delegard, Calvin; Jevremovic, Tatjana
2017-12-01
A sample of plutonium tetrafluoride that was separated prior to 1966 at the Hanford Site in Washington State was analyzed at the Pacific Northwest National Laboratory (PNNL) in 2015 and 2016. The plutonium tetrafluoride, as received, was an unusual color and considering the age of the plutonium, there were questions about the condition of the material. These questions had to be answered in order to determine the suitability of the material for future use or long-term storage. Therefore, thermogravimetric/differential thermal analysis and X-ray diffraction evaluations were conducted to determine the plutonium's crystal structure, oxide content, and moisture content; these analyses reported that the plutonium was predominately amorphous and tetrafluoride, with an oxide content near ten percent. Freshly fluorinated plutonium tetrafluoride is known to be monoclinic. During the initial thermogravimetric/differential thermal analyses, it was discovered that an exothermic event occurred within the material near 414 °C. X-ray diffraction analyses were conducted on the annealed tetrafluoride. The X-ray diffraction analyses indicated that some degree of recrystallization occurred in conjunction with the 414 °C event. The following commentary describes the series of thermogravimetric/differential thermal and X-ray diffraction analyses that were conducted as part of this investigation at PNNL.
Radiation damage and annealing in plutonium tetrafluoride
Energy Technology Data Exchange (ETDEWEB)
McCoy, Kaylyn; Casella, Amanda; Sinkov, Sergey; Sweet, Lucas; McNamara, Bruce; Delegard, Calvin; Jevremovic, Tatjana
2017-12-01
Plutonium tetrafluoride that was separated prior to 1966 at the Hanford Site in Washington State was analyzed at the Pacific Northwest National Laboratory (PNNL) in 2015 and 2016. The plutonium tetrafluoride, as received, was an off-normal color and considering the age of the plutonium, there were questions about the condition of the material. These questions had to be answered in order to determine the suitability of the material for future use or long-term storage. Therefore, Thermogravimetric/Differential Thermal Analysis and X-ray Diffraction evaluations were conducted to determine the plutonium’s crystal structure, oxide content, and moisture content; these analyses reported that the plutonium was predominately amorphous and tetrafluoride, with an oxide content near ten percent. Freshly fluorinated plutonium tetrafluoride is known to be monoclinic. During the initial Thermogravimetric/Differential Thermal analyses, it was discovered that an exothermic event occurred within the material near 414°C. X-ray Diffraction analyses were conducted on the annealed tetrafluoride. The X-ray Diffraction analyses indicated that some degree of recrystallization occurred in conjunction with the 414°C event. The following commentary describes the series of Thermogravimetric/Differential Thermal and X-ray Diffraction analyses that were conducted as part of this investigation at PNNL, in collaboration with the University of Utah Nuclear Engineering Program.
Radiation damage and annealing in plutonium tetrafluoride
International Nuclear Information System (INIS)
McCoy, Kaylyn; Casella, Amanda; Sinkov, Sergey
2017-01-01
A sample of plutonium tetrafluoride that was separated prior to 1966 at the Hanford Site in Washington State was analyzed at the Pacific Northwest National Laboratory (PNNL) in 2015 and 2016. The plutonium tetrafluoride, as received, was an unusual color and considering the age of the plutonium, there were questions about the condition of the material. These questions had to be answered in order to determine the suitability of the material for future use or long-term storage. Therefore, thermogravimetric/differential thermal analysis and X-ray diffraction evaluations were conducted to determine the plutonium's crystal structure, oxide content, and moisture content; these analyses reported that the plutonium was predominately amorphous and tetrafluoride, with an oxide content near ten percent. Freshly fluorinated plutonium tetrafluoride is known to be monoclinic. During the initial thermogravimetric/differential thermal analyses, it was discovered that an exothermic event occurred within the material near 414 °C. X-ray diffraction analyses were conducted on the annealed tetrafluoride. The X-ray diffraction analyses indicated that some degree of recrystallization occurred in conjunction with the 414 °C event. This commentary describes the series of thermogravimetric/differential thermal and X-ray diffraction analyses that were conducted as part of this investigation at PNNL.
GCPII Variants, Paralogs and Orthologs
Czech Academy of Sciences Publication Activity Database
Hlouchová, Klára; Navrátil, Václav; Tykvart, Jan; Šácha, Pavel; Konvalinka, Jan
2012-01-01
Vol. 19, No. 9 (2012), pp. 1316-1322. ISSN 0929-8673. R&D Projects: GA ČR GAP304/12/0847. Institutional research plan: CEZ:AV0Z40550506. Keywords: PSMA; GCPIII; NAALADase L; splice variants; homologs; PSMAL. Subject RIV: CE - Biochemistry. Impact factor: 4.070, year: 2012
Odontogenic keratocyst: a peripheral variant.
Vij, H; Vij, R; Gupta, V; Sengupta, S
2011-01-01
Odontogenic keratocyst, which is developmental in nature, is an intraosseous lesion though on rare occasions it may occur in an extraosseous location. The extraosseous variant is referred to as peripheral odontogenic keratocyst. Though, clinically, peripheral odontogenic keratocyst resembles the gingival cyst of adults, it has histologic features that are pathognomonic of odontogenic keratocyst. This article presents a case of this uncommon entity.
Effects of annealing on evaporated SnS thin films
Energy Technology Data Exchange (ETDEWEB)
Sakrani, Samsudi; Ismail, Bakar [Universiti Teknologi Malaysia, Skudai, Johor Bahru (Malaysia). Dept. of Physics
1994-12-31
The effects of annealing on evaporated tin sulphide (SnS) thin films are described. The films were initially deposited onto glass substrates, then annealed in an encapsulated carbon block under flowing argon gas at 310 °C. Short annealing times produced a slight change in composition to a mixed SnS/SnS₂ compound, and a tendency toward increasing SnS₂ formation was observed in films annealed for longer periods, up to 20 hours. X-ray results showed the transformation of the SnS (040) and (080) peaks to predominantly SnS₂ peaks: (001), (100), (101), and (110). The absorption coefficients measured on the films were found to be greater than 10⁵ cm⁻¹, with higher photon energies indicating the formation of the SnS₂ compound.
Thermal annealing of an embrittled reactor pressure vessel
International Nuclear Information System (INIS)
Mager, T.R.; Dragunov, Y.G.; Leitz, C.
1998-01-01
As a result of the popularity of the Agency's 1975 report 'Neutron Irradiation Embrittlement of Reactor Pressure Vessel Steels', it was decided that another report on this broad subject would be of use. In this report, background and contemporary views on specially identified areas of the subject are considered as self-contained chapters, written by experts. Chapter 11 deals with thermal annealing of an embrittled reactor pressure vessel. Anneal procedures for vessels from both the US and the former USSR are described schematically; wet anneals at lower temperatures and dry anneals above RPV design temperatures are investigated. It is shown that heat treatment is a means of recovering mechanical properties which were degraded by neutron radiation exposure, thus assuring reactor pressure vessel compliance with regulatory requirements.
Effects of annealing on evaporated SnS thin films
International Nuclear Information System (INIS)
Samsudi Sakrani; Bakar Ismail
1994-01-01
The effects of annealing on evaporated tin sulphide (SnS) thin films are described. The films were initially deposited onto glass substrates, then annealed in an encapsulated carbon block under flowing argon gas at 310 °C. Short annealing times produced a slight change in composition to a mixed SnS/SnS₂ compound, and a tendency toward increasing SnS₂ formation was observed in films annealed for longer periods, up to 20 hours. X-ray results showed the transformation of the SnS (040) and (080) peaks to predominantly SnS₂ peaks: (001), (100), (101), and (110). The absorption coefficients measured on the films were found to be greater than 10⁵ cm⁻¹, with higher photon energies indicating the formation of the SnS₂ compound.
Simulated annealing approach for solving economic load dispatch ...
African Journals Online (AJOL)
thermodynamics to solve economic load dispatch (ELD) problems. ... evolutionary programming algorithm has been successfully applied for solving the ... concept behind the simulated annealing (SA) optimization is discussed in Section 3.
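The core idea of simulated annealing — occasionally accepting worse solutions with probability exp(−Δ/T) while the temperature T is gradually lowered — can be sketched on a toy one-dimensional cost function. This is illustrative only; a real economic load dispatch problem has many generator variables and operating constraints.

```python
import math
import random

def simulated_anneal(f, x0, n_iter=20000, T0=2.0, cooling=0.9995, step=0.5, seed=1):
    """Minimal simulated-annealing sketch for a 1-D cost function.

    Worse moves are accepted with probability exp(-delta/T), so the search
    can escape local minima while T is high; T shrinks geometrically.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        delta = fc - fx
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling
    return best_x, best_f

# Toy double-well cost: local minimum near x = +1, global minimum near x = -1.
cost = lambda x: (x * x - 1) ** 2 + 0.3 * x

# Start in the worse basin; the annealer should cross the barrier at x = 0.
best_x, best_f = simulated_anneal(cost, x0=1.0)
```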
Thermal annealing studies in muscovite and in quartz
International Nuclear Information System (INIS)
Roberts, J.H.; Gold, R.; Ruddy, F.H.
1979-06-01
In order to use Solid State Track Recorders (SSTR) in environments at elevated temperatures, it is necessary to know the thermal annealing characteristics of various types of SSTR. For applications in the nuclear energy program, the principal interest is focused upon the annealing of fission tracks in muscovite mica and in quartz. Data showing correlations between changes in track diameters and track densities as a function of annealing time and temperature will be presented for Amersil quartz glass. Similar data showing changes in track lengths and in track densities will be presented for mica. Time-temperature regions will be defined where muscovite mica can be accurately applied with negligible correction for thermal annealing
Production and beam annealing of damage in carbon implanted silicon
International Nuclear Information System (INIS)
Kool, W.H.; Roosendaal, H.E.; Wiggers, L.W.; Saris, F.W.
1978-01-01
The annealing of damage introduced by 70 keV C implantation of Si is studied for impact of H⁺ and He⁺ beams in the energy interval 30 to 200 keV. For a good description of the annealing behaviour it is necessary to account for the damage introduction which occurs simultaneously. It turns out that the initial damage annealing rate is proportional to the amount of damage. The proportionality constant is related to a quantity introduced in an earlier paper in order to describe saturation effects in the damage production after H⁺ or He⁺ impact in unimplanted Si. This indicates that the same mechanism governs both processes: beam-induced damage annealing and saturation of the damage introduction. (author)
Solvent vapor annealing of an insoluble molecular semiconductor
Amassian, Aram
2010-01-01
Solvent vapor annealing has been proposed as a low-cost, highly versatile, and room-temperature alternative to thermal annealing of organic semiconductors and devices. In this article, we investigate the solvent vapor annealing process of a model insoluble molecular semiconductor thin film - pentacene on SiO₂ exposed to acetone vapor - using a combination of optical reflectance and two-dimensional grazing incidence X-ray diffraction measurements performed in situ, during processing. These measurements provide valuable and new insight into the solvent vapor annealing process; they demonstrate that solvent molecules interact mainly with the surface of the film to induce a solid-solid transition without noticeable swelling, dissolving or melting of the molecular material. © 2010 The Royal Society of Chemistry.
Valence control of cobalt oxide thin films by annealing atmosphere
International Nuclear Information System (INIS)
Wang Shijing; Zhang Boping; Zhao Cuihua; Li Songjie; Zhang Meixia; Yan Liping
2011-01-01
Cobalt oxide (CoO and Co₃O₄) thin films were successfully prepared by a spin-coating technique using a chemical solution method with CH₃OCH₂CH₂OH and Co(NO₃)₂·6H₂O as starting materials. The grayish cobalt oxide films had uniform crystalline grains less than 50 nm in diameter. The phase structure can be tailored by controlling the annealing atmosphere and temperature: Co₃O₄ thin films were obtained by annealing in air at 300-600 °C or in N₂ at 300 °C, and transformed to CoO thin films at higher annealing temperatures in N₂. The fitted X-ray photoelectron spectroscopy (XPS) spectra of the Co 2p electrons distinguish the different valence states of cobalt oxide, especially through their satellite structure. The valence control of cobalt oxide thin films by annealing atmosphere contributes to tailoring the optical absorption properties.
Annealing properties of potato starches with different degrees of phosphorylation
DEFF Research Database (Denmark)
Muhrbeck, Per; Svensson, E
1996-01-01
Changes in the gelatinization temperature interval and gelatinization enthalpy with annealing time at 50 degrees C were followed for a number of potato starch samples, with different degrees of phosphorylation, using differential scanning calorimetry. The gelatinization temperature increased...
The theory of laser annealing of disordered semiconductors
International Nuclear Information System (INIS)
Noga, M.
1980-01-01
A theoretical explanation of the disorder-order phase transition in pulsed laser annealing of ion-implanted Si is given. The phase transition is related to Bose condensation of electron-hole plasmons. (author)
International Nuclear Information System (INIS)
Soriano Pena, A.; Lopez Arroyo, A.; Roesset, J.M.
1976-01-01
The probabilistic and deterministic approaches for calculating the seismic risk of nuclear power plants are both applied to a particular case in Southern Spain. The results obtained by both methods, when varying the input data, are presented and some conclusions drawn in relation to the applicability of the methods, their reliability and their sensitivity to change
Degli Esposti, M.; Giardinà, C.; Graffi, S.; Isola, S.
2001-01-01
We consider the zero-temperature dynamics for the infinite-range, non translation invariant one-dimensional spin model introduced by Marinari, Parisi and Ritort to generate glassy behaviour out of a deterministic interaction. It is argued that there can be a large number of metastable (i.e.,
DEFF Research Database (Denmark)
Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto
1974-01-01
The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...
On competition in a Stackelberg location-design model with deterministic supplier choice
Hendrix, E.M.T.
2016-01-01
We study a market situation where two firms maximize market capture by deciding on the location in the plane and investing in a competing quality against investment cost. Clients choose one of the suppliers; i.e. deterministic supplier choice. To study this situation, a game theoretic model is
DEFF Research Database (Denmark)
Hansen, Lisbet Sneftrup; Borup, Morten; Moller, Arne
2014-01-01
drainage models and reduce a number of unavoidable discrepancies between the model and reality. The latter can be achieved partly by inserting measured water levels from the sewer system into the model. This article describes how deterministic updating of model states in this manner affects a simulation...
The development of the deterministic nonlinear PDEs in particle physics to stochastic case
Abdelrahman, Mahmoud A. E.; Sohaly, M. A.
2018-06-01
In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used to solve the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. The control of the random input is also studied with respect to the stability of the stochastic process solution.
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. This work is applicable to low-level radioactive waste disposal system performance assessment.
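GRESS and ADGEN automate differentiation of existing model code. The underlying idea — carrying derivatives through a computation alongside values — can be illustrated with forward-mode dual numbers (a sketch of the concept, not the GRESS/ADGEN implementation):

```python
class Dual:
    """Forward-mode automatic differentiation with dual numbers.

    A Dual carries a value and its derivative with respect to one chosen
    input, so sensitivities propagate through the model as it executes.
    """
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(x):
    # Toy response y = x**3 + 3*x, standing in for a large modeling code.
    return x * x * x + 3 * x

x = Dual(2.0, 1.0)   # seed derivative d(x)/d(x) = 1
y = model(x)
print(y.val, y.der)  # value 14.0, sensitivity dy/dx = 3*2**2 + 3 = 15.0
```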
Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.
Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.
1996-01-01
Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-01-01
Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance
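The authors' optimal O(n log n) algorithm is not reproduced here, but the underlying optimization problem can be illustrated with a simpler (slower) scheme: binary search on the error value, with a greedy feasibility check that counts how many steps are needed so that every weighted vertical distance stays within the candidate error.

```python
def min_steps(points, eps):
    """Fewest steps covering the points (sorted by x) so that every point
    satisfies w * |y - step_value| <= eps.  Greedy interval intersection:
    keep intersecting the allowed value ranges; start a new step when the
    intersection becomes empty."""
    steps, lo, hi = 1, float("-inf"), float("inf")
    for _, y, w in points:
        a, b = y - eps / w, y + eps / w
        lo, hi = max(lo, a), min(hi, b)
        if lo > hi:          # no common step value: open a new step
            steps += 1
            lo, hi = a, b
    return steps

def fit_error(points, k, tol=1e-9):
    """Binary search for the smallest max weighted vertical distance
    achievable with at most k steps (an illustrative substitute for the
    paper's optimal algorithm)."""
    lo, hi = 0.0, max(w * abs(y) for _, y, w in points) * 2 + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if min_steps(points, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi

# (x, y, weight) points, sorted by x.
pts = [(0, 0.0, 1.0), (1, 4.0, 1.0), (2, 10.0, 1.0), (3, 12.0, 1.0)]
print(round(fit_error(pts, k=2), 6))  # 2.0: steps at y=2 and y=11
```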
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
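To first order, derivative-based propagation of input distributions (the DUA idea) is the familiar delta method. A minimal sketch on a made-up response function — not the borehole model from the paper:

```python
def propagate_uncertainty(f, means, variances, h=1e-6):
    """First-order (delta-method) uncertainty propagation:
    Var[f] ~ sum_i (df/dx_i)^2 * Var[x_i], assuming independent inputs,
    with gradients estimated by central differences."""
    grads = []
    for i in range(len(means)):
        up = list(means); up[i] += h
        dn = list(means); dn[i] -= h
        grads.append((f(up) - f(dn)) / (2 * h))
    var_f = sum(g * g * v for g, v in zip(grads, variances))
    return f(list(means)), var_f

# Illustrative two-parameter response (hypothetical).
resp = lambda p: p[0] + 2.0 * p[1] + 0.5 * p[0] * p[1]
mean, var = propagate_uncertainty(resp, means=[1.0, 2.0], variances=[0.04, 0.09])
# gradients at the means: d/dx = 1 + 0.5*2 = 2, d/dy = 2 + 0.5*1 = 2.5
# var ~ 2^2 * 0.04 + 2.5^2 * 0.09 = 0.7225
```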
2D deterministic radiation transport with the discontinuous finite element method
International Nuclear Information System (INIS)
Kershaw, D.; Harte, J.
1993-01-01
This report provides a complete description of the analytic and discretized equations for 2D deterministic radiation transport. This computational model has been checked against a wide variety of analytic test problems and found to give excellent results. We make extensive use of the discontinuous finite element method
On the effect of deterministic terms on the bias in stable AR models
van Giersbergen, N.P.A.
2004-01-01
This paper compares the first-order bias approximation for the autoregressive (AR) coefficients in stable AR models in the presence of deterministic terms. It is shown that the bias due to inclusion of an intercept and trend is twice as large as the bias due to an intercept. For the AR(1) model, the
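The intercept-versus-trend comparison can be checked with a small Monte Carlo experiment: simulate an AR(1) series and estimate the autoregressive coefficient by OLS with an intercept only, and again with intercept plus trend. This is a numerical sketch; the paper works with analytic first-order bias approximations.

```python
import random

def ols(X, y):
    """OLS via normal equations (X'X)b = X'y, solved by Gaussian elimination."""
    n, k = len(y), len(X[0])
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):                       # forward elimination
        for j in range(i + 1, k):
            m = A[j][i] / A[i][i]
            A[j] = [a - m * c for a, c in zip(A[j], A[i])]
            b[j] -= m * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

def ar1_bias(rho=0.5, T=50, reps=2000, seed=7):
    """Average bias of the OLS AR coefficient, with intercept only (bias_c)
    and with intercept plus linear trend (bias_t)."""
    rng = random.Random(seed)
    bias_c = bias_t = 0.0
    for _ in range(reps):
        y = [0.0]
        for _ in range(T):
            y.append(rho * y[-1] + rng.gauss(0, 1))
        Xc = [[1.0, y[t - 1]] for t in range(1, T + 1)]
        Xt = [[1.0, float(t), y[t - 1]] for t in range(1, T + 1)]
        yy = y[1:]
        bias_c += ols(Xc, yy)[1] - rho
        bias_t += ols(Xt, yy)[2] - rho
    return bias_c / reps, bias_t / reps

# Both biases are negative, and the trend case is roughly twice as large,
# consistent with the first-order approximations discussed above.
bc, bt = ar1_bias()
```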
Czech Academy of Sciences Publication Activity Database
Lin, Qiang; De Vrieze, J.; Li, Ch.; Li, J.; Li, J.; Yao, M.; Heděnec, Petr; Li, H.; Li, T.; Rui, J.; Frouz, Jan; Li, X.
2017-01-01
Vol. 123, October (2017), pp. 134-143. ISSN 0043-1354. Institutional support: RVO:60077344. Keywords: anaerobic digestion; deterministic process; microbial interactions; modularity; temperature gradient. Subject RIV: DJ - Water Pollution; Quality. OBOR OECD: Water resources. Impact factor: 6.942, year: 2016
In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...
Using the deterministic factor systems in the analysis of return on ...
African Journals Online (AJOL)
Using the deterministic factor systems in the analysis of return on equity. ... or equal the profitability of bank deposits, the business of the organization is not efficient. ... Application of quantitative and qualitative indicators in the analysis allows to ...
Deterministic linear-optics quantum computing based on a hybrid approach
International Nuclear Information System (INIS)
Lee, Seung-Woo; Jeong, Hyunseok
2014-01-01
We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources
Deterministic linear-optics quantum computing based on a hybrid approach
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung-Woo; Jeong, Hyunseok [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742 (Korea, Republic of)
2014-12-04
We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
A new recursive incremental algorithm for building minimal acyclic deterministic finite automata
Watson, B.W.; Martin-Vide, C.; Mitrana, V.
2003-01-01
This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is
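For context, a compact sketch of the well-known register-based construction for lexicographically sorted input (suffix sharing via unique state signatures) is shown below; it is not necessarily the chapter's exact algorithm, which handles the incremental case more generally.

```python
def minimal_adfa(words):
    """Minimal acyclic DFA from a *sorted*, duplicate-free word list.

    Register of unique (final, transitions) signatures; finished branches
    are replaced by equivalent registered states, sharing suffixes.
    """
    register = {}

    def new_state():
        return {"final": False, "edges": {}}

    def signature(s):
        return (s["final"],
                tuple(sorted((c, id(t)) for c, t in s["edges"].items())))

    def replace_or_register(state):
        # The most recently added branch is the max-label edge,
        # because words arrive in lexicographic order.
        c, child = max(state["edges"].items())
        if child["edges"]:
            replace_or_register(child)
        sig = signature(child)
        if sig in register:
            state["edges"][c] = register[sig]   # share an existing suffix
        else:
            register[sig] = child
        return None

    root, prev = new_state(), ""
    for w in words:
        i = 0
        while i < min(len(w), len(prev)) and w[i] == prev[i]:
            i += 1                               # common-prefix length
        node = root
        for ch in w[:i]:
            node = node["edges"][ch]
        if node["edges"]:
            replace_or_register(node)            # finish previous branch
        for ch in w[i:]:                         # append the new suffix
            node["edges"][ch] = new_state()
            node = node["edges"][ch]
        node["final"] = True
        prev = w
    if root["edges"]:
        replace_or_register(root)
    return root

def accepts(dfa, word):
    for ch in word:
        if ch not in dfa["edges"]:
            return False
        dfa = dfa["edges"][ch]
    return dfa["final"]

def count_states(dfa, seen=None):
    seen = set() if seen is None else seen
    if id(dfa) in seen:
        return 0
    seen.add(id(dfa))
    return 1 + sum(count_states(t, seen) for t in dfa["edges"].values())

# {bad, ban, mad, man}: the 'b' and 'm' branches collapse into one chain.
dfa = minimal_adfa(["bad", "ban", "mad", "man"])
```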
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcao
2015-01-01
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with
Deterministic Model for Rubber-Metal Contact Including the Interaction Between Asperities
Deladi, E.L.; de Rooij, M.B.; Schipper, D.J.
2005-01-01
Rubber-metal contact involves relatively large deformations and large real contact areas compared to metal-metal contact. Here, a deterministic model is proposed for the contact between rubber and metal surfaces, which takes into account the interaction between neighboring asperities. In this model,
Pfaff, W.; Vos, A.; Hanson, R.
2013-01-01
Metal nanostructures can be used to harvest and guide the emission of single photon emitters on-chip via surface plasmon polaritons. In order to develop and characterize photonic devices based on emitter-plasmon hybrid structures, a deterministic and scalable fabrication method for such structures
From Ordinary Differential Equations to Structural Causal Models: the deterministic case
Mooij, J.M.; Janzing, D.; Schölkopf, B.; Nicholson, A.; Smyth, P.
2013-01-01
We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM). Our exposition sheds more light on the concept of causality as expressed within the framework of
Reduction of thermal quenching of biotite mineral due to annealing
International Nuclear Information System (INIS)
Kalita, J.M.; Wary, G.
2014-01-01
Graphical abstract: - Highlights: • Thermoluminescence of X-ray irradiated biotite was studied at various heating rates. • Thermal quenching was found to decrease with increasing annealing temperature. • Due to annealing, one trap level vanished and a new shallow trap level was generated. • The new trap level contributes a weakly thermally quenched thermoluminescence signal. - Abstract: Thermoluminescence (TL) of X-ray irradiated natural biotite annealed at 473, 573, 673 and 773 K was studied within 290-480 K at various linear heating rates (2, 4, 6, 8 and 10 K/s). A computerized glow curve deconvolution technique was used to study the TL parameters. Thermal quenching was found to be very high for the un-annealed sample; however, it decreased significantly with increasing annealing temperature. For the un-annealed sample the thermal quenching activation energy (W) and pre-exponential frequency factor (C) were found to be W = (2.71 ± 0.05) eV and C = (2.38 ± 0.05) × 10¹² s⁻¹, respectively. For the sample annealed at 773 K, these parameters were W = (0.63 ± 0.03) eV and C = (1.75 ± 0.27) × 10¹⁴ s⁻¹. Due to annealing, the initially present trap level at a depth of 1.04 eV vanished and a new shallow trap state was generated at a depth of 0.78 eV, which contributes a very weakly thermally quenched TL signal.
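Quenching parameters of this kind are conventionally interpreted through the Mott–Seitz expression for luminescence efficiency, η(T) = 1/(1 + C·exp(−W/kBT)). A sketch using the annealed-sample values quoted above, treating C as dimensionless (an assumption here, since the abstract quotes it in s⁻¹):

```python
import math

def luminescence_efficiency(T, W, C):
    """Mott-Seitz thermal quenching: eta(T) = 1 / (1 + C * exp(-W/(kB*T))).

    W is the quenching activation energy (eV); C is treated as a
    dimensionless pre-exponential factor for this illustration.
    """
    kB = 8.617e-5  # Boltzmann constant, eV/K
    return 1.0 / (1.0 + C * math.exp(-W / (kB * T)))

# Annealed-sample parameters from the abstract: W = 0.63 eV, C = 1.75e14.
for T in (300.0, 400.0, 500.0):
    print(T, luminescence_efficiency(T, W=0.63, C=1.75e14))
# efficiency drops as T rises: the signal is increasingly quenched
```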
Annealing of chemical radiation damage in zirconium nitrate
International Nuclear Information System (INIS)
Mahamood, Aysha; Chandunni, E.; Nair, S.M.K.
1979-01-01
A kinetic study of the annealing of γ-irradiation damage in zirconium nitrate is presented. The annealing can be represented as a combination of a first-order and a second-order process. It is considered that the first-order process is the recombination of closely correlated pairs of O⁻ and NO fragments, and that the second-order process involves random recombination of the fragments throughout the crystal. (auth.)
Far-infrared spectroscopy of thermally annealed tungsten silicide films
International Nuclear Information System (INIS)
Amiotti, M.; Borghesi, A.; Guizzetti, G.; Nava, F.; Santoro, G.
1991-01-01
The far-infrared transmittance spectrum of tungsten silicide has been observed for the first time. WSi₂ polycrystalline films were prepared by coevaporation and chemical-vapour deposition on silicon wafers, and subsequently thermally annealed at different temperatures. The observed structures are interpreted, on the basis of the symmetry properties of the crystal, as infrared-active vibrational modes. Moreover, the marked lineshape dependence on annealing temperature makes this technique suitable for analysing the formation of the solid silicide phases
Thermal annealing of amorphous Ti-Si-O thin films
Hodroj, Abbas; Chaix-Pluchery, Odette; Audier, Marc; Gottlieb, Ulrich; Deschanvres, Jean-Luc
2008-01-01
Ti-Si-O thin films were deposited using an aerosol chemical vapor deposition process at atmospheric pressure. The film structure and microstructure were analysed using several techniques before and after thermal annealing. Diffraction results indicate that the films remain X-ray amorphous after annealing whereas Fourier transform infrared spectroscopy gives evidence of a phase segregation between amorphous SiO2 and well crystallized anatase TiO2. Crystallization of ana...
Absence of redox processes in the annealing of permanganates
International Nuclear Information System (INIS)
Dedgaonkar, V.G.; Mitra, S.; Kulkarni, S.A.
1982-01-01
Initial retentions upon neutron activation of alkaline earth and Ni, Cu, Zn and Cd permanganates were in the range 8-17 per cent. Isostructural hexahydrates of Mg, Zn and Cd permanganates showed identical values of approximately 8 per cent. Radiation annealing was negligible and the extent of thermal annealing was hardly 2-5 per cent in all the salts. Probable mechanisms are discussed. (author)
Simulated annealing image reconstruction for positron emission tomography
Energy Technology Data Exchange (ETDEWEB)
Sundermann, E; Lemahieu, I; Desmedt, P [Department of Electronics and Information Systems, University of Ghent, St. Pietersnieuwstraat 41, B-9000 Ghent, Belgium (Belgium)
1994-12-31
In Positron Emission Tomography (PET), images have to be reconstructed from noisy projection data. The noise on the PET data can be modeled by a Poisson distribution. In this paper, we present the results of using the simulated annealing technique to reconstruct PET images. Various parameter settings of the simulated annealing algorithm are discussed and optimized. The reconstructed images are of good quality and high contrast, in comparison to other reconstruction techniques. (authors). 11 refs., 2 figs.
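The reconstruction idea can be sketched as a Metropolis-style annealing loop that perturbs image pixels and accepts moves according to a cooling temperature, with the Poisson noise model entering through the negative log-likelihood. The tiny 3-pixel system, the system matrix, and the cooling schedule below are all hypothetical, not the authors' configuration:

```python
import math
import random

random.seed(0)

# Hypothetical 3-pixel "image" and 3-ray system matrix
A = [[1.0, 0.5, 0.0],
     [0.0, 1.0, 0.5],
     [0.5, 0.0, 1.0]]
x_true = [4.0, 2.0, 6.0]
y = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(3)]

def neg_log_likelihood(x):
    """Poisson negative log-likelihood of projections (constant terms dropped)."""
    total = 0.0
    for i in range(3):
        yh = sum(A[i][j] * x[j] for j in range(3)) + 1e-9
        total += yh - y[i] * math.log(yh)
    return total

x = [1.0, 1.0, 1.0]
cost = neg_log_likelihood(x)
T = 1.0
for _ in range(5000):
    j = random.randrange(3)
    cand = x[:]
    cand[j] = max(0.0, cand[j] + random.uniform(-0.5, 0.5))  # perturb one pixel
    c = neg_log_likelihood(cand)
    # Metropolis acceptance with geometric cooling
    if c < cost or random.random() < math.exp(-(c - cost) / T):
        x, cost = cand, c
    T *= 0.999

print(x, cost)
```

The slow cooling is what lets the chain escape early local minima before settling into a low-cost reconstruction.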
Simulated annealing image reconstruction for positron emission tomography
International Nuclear Information System (INIS)
Sundermann, E.; Lemahieu, I.; Desmedt, P.
1994-01-01
In Positron Emission Tomography (PET), images have to be reconstructed from noisy projection data. The noise on the PET data can be modeled by a Poisson distribution. In this paper, we present the results of using the simulated annealing technique to reconstruct PET images. Various parameter settings of the simulated annealing algorithm are discussed and optimized. The reconstructed images are of good quality and high contrast, in comparison to other reconstruction techniques. (authors)
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
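A toy genetic search over a mixed continuous/discrete design vector might look like the following; the two-variable "weight" objective, population sizes, and operators are hypothetical illustrations of the idea, not the paper's formulations:

```python
import random

random.seed(1)

def weight(design):
    """Hypothetical structural 'weight' over one continuous variable
    (member size x) and one discrete variable (section type d)."""
    x, d = design
    return (x - 1.5) ** 2 + abs(d - 2)

# initial random population over the mixed variables
pop = [(random.uniform(-5.0, 5.0), random.choice([1, 2, 3, 4]))
       for _ in range(30)]

for _ in range(60):
    pop.sort(key=weight)
    parents = pop[:10]                 # elitist selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        x = 0.5 * (a[0] + b[0]) + random.gauss(0.0, 0.2)  # blend crossover + mutation
        d = random.choice([a[1], b[1]])                   # inherit the discrete gene
        if random.random() < 0.1:
            d = random.choice([1, 2, 3, 4])               # discrete mutation
        children.append((x, d))
    pop = parents + children

best = min(pop, key=weight)
print(best, weight(best))
```

Note how the discrete variable is handled by inheritance and mutation rather than gradients, which is exactly what makes these methods attractive for mixed design spaces.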
Population annealing: Theory and application in spin glasses
Wang, Wenlong; Machta, Jonathan; Katzgraber, Helmut G.
2015-01-01
Population annealing is an efficient sequential Monte Carlo algorithm for simulating equilibrium states of systems with rough free energy landscapes. The theory of population annealing is presented, and systematic and statistical errors are discussed. The behavior of the algorithm is studied in the context of large-scale simulations of the three-dimensional Ising spin glass and the performance of the algorithm is compared to parallel tempering. It is found that the two algorithms are similar ...
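The algorithm alternates reweighting/resampling of a replica population with Monte Carlo sweeps as the inverse temperature grows. A minimal sketch on a tiny 1D Ising ring (the system size, population size, and schedule are arbitrary choices for illustration, far smaller than the paper's spin-glass simulations):

```python
import math
import random

random.seed(2)
N, M = 5, 200                      # spins per replica, population size

def energy(s):
    """1D Ising ring: E = -sum_i s_i * s_{i+1}."""
    return -sum(s[i] * s[(i + 1) % N] for i in range(N))

pop = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(M)]
beta = 0.0
for beta_next in [0.2 * k for k in range(1, 11)]:   # anneal beta from 0 to 2
    db = beta_next - beta
    # resample replicas with weights exp(-d_beta * E), the population-annealing step
    w = [math.exp(-db * energy(s)) for s in pop]
    idx = random.choices(range(M), weights=w, k=M)
    pop = [pop[i][:] for i in idx]
    beta = beta_next
    for s in pop:                  # one Metropolis sweep per replica
        for i in range(N):
            dE = 2 * s[i] * (s[(i - 1) % N] + s[(i + 1) % N])
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                s[i] = -s[i]

mean_E = sum(energy(s) for s in pop) / M
print(mean_E)   # approaches the ground-state energy -5 as beta grows
```

The resampling step is what distinguishes population annealing from independent simulated-annealing runs: low-energy replicas are duplicated and high-energy ones culled at each temperature step.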
Crystallinity and mechanical effects from annealing Parylene thin films
Energy Technology Data Exchange (ETDEWEB)
Jackson, Nathan, E-mail: Nathan.Jackson@tyndall.ie [Tyndall National Institute, University College Cork, Cork (Ireland); Stam, Frank; O'Brien, Joe [Tyndall National Institute, University College Cork, Cork (Ireland); Kailas, Lekshmi [University of Limerick, Limerick (Ireland); Mathewson, Alan; O'Murchu, Cian [Tyndall National Institute, University College Cork, Cork (Ireland)]
2016-03-31
Parylene is commonly used as a thin-film polymer for MEMS devices and smart materials. This paper investigates the impact of annealing on the bulk properties of various types of Parylene films. Thin films of Parylene N, Parylene C, and a hybrid material consisting of Parylene N and C were deposited using a standard Gorham process. The thin-film samples were annealed at temperatures ranging from room temperature up to 300 °C. The films were analyzed to determine the mechanical and crystallinity effects of the different annealing temperatures. The results demonstrate that the percentage of crystallinity and the full-width-half-maximum value on the 2θ X-ray diffraction scan increase as the annealing temperature increases, until the melting temperature of the Parylene films is reached. Highly crystalline films of 85% and 92% crystallinity were achieved for Parylene C and N, respectively. Investigation of the hybrid film showed that the individual Parylene films behave independently of each other, and the crystallinity of one film had no significant impact on the other film. Mechanical testing showed that the elastic modulus and yield strength increase as a function of annealing, whereas the elongation-to-break parameter decreases. The change in elastic modulus was more significant for Parylene C than for Parylene N, and this is attributed to the larger change in crystallinity that was observed: Parylene C had a 112% increase in crystallinity compared to a 61% increase for Parylene N, because the original Parylene N material was already more crystalline than Parylene C. - Highlights: • A hybrid material consisting of Parylene N and C was developed. • Parylene N has greater crystallinity than Parylene C. • Phase transition of Parylene N due to annealing results in increased crystallinity. • Annealing caused increased crystallinity and elastic modulus in Parylene films. • Annealed hybrid Parylene films crystallinity behave
Processes in N-channel MOSFETs during postirradiation thermal annealing
International Nuclear Information System (INIS)
Pejovic, M.; Jaksic, A.; Ristic, G.; Baljosevic, B.
1997-01-01
The processes during postirradiation thermal annealing of γ-ray irradiated n-channel MOSFETs with both wet and dry gate oxides are investigated. For both analysed technologies, a so-called ''latent'' interface trap buildup is observed, followed at very late annealing times by the decrease in the interface-trap density. A model is proposed that successfully accounts for the experimental results. Implications of observed effects for total dose hardness assurance test methods implementation are discussed. (author)
Significant improvement in the thermal annealing process of optical resonators
Salzenstein, Patrice; Zarubin, Mikhail
2017-05-01
Thermal annealing performed during fabrication improves the surface roughness of optical resonators, reducing stresses at the periphery of their surface and thus allowing higher Q-factors. After a preliminary realization, the design of the oven and the electronic method were significantly improved thanks to nichrome resistance-alloy wires and chopped basalt fibers used for thermal insulation during the annealing process. Q-factors can then be improved.
Swine Influenza/Variant Influenza Viruses
Sun, Binhan; Fazeli, Fateh; Scott, Colin; Yue, Stephen
2016-10-01
Medium manganese steels alloyed with sufficient aluminum and silicon amounts contain high fractions of retained austenite adjustable to various transformation-induced plasticity/twinning-induced plasticity effects, in addition to a reduced density suitable for lightweight vehicle body-in-white assemblies. Two hot rolled medium manganese steels containing 3 wt pct aluminum and 3 wt pct silicon were subjected to different annealing treatments in the present study. The evolution of the microstructure in terms of austenite transformation upon reheating and the subsequent austenite decomposition during quenching was investigated. The manganese content of the steels dominated the microstructural response. The microstructure of the leaner alloy with 7 wt pct Mn (7Mn) was substantially influenced by the annealing temperature, including the variation of phase constituents, the morphology and composition of intercritical austenite, the Ms temperature and the retained austenite fraction. In contrast, the richer 10 wt pct Mn variant (10Mn) exhibited a substantially stable ferrite-austenite duplex phase microstructure containing a fixed amount of retained austenite which was found to be independent of the variations of intercritical annealing temperature. Austenite formation from hot band ferrite-pearlite/bainite mixtures was very rapid during annealing at 1273 K (1000 °C), regardless of Mn content. Austenite growth was believed to be controlled at early stages by carbon diffusion following pearlite/bainite dissolution. The redistribution of Mn in ferrite and particularly in austenite at later stages was too subtle to result in a measurable change in austenite fraction. Further, the hot band microstructure of both steels contained a large fraction of coarse-grained δ-ferrite, which remained almost unchanged during intercritical annealing. A recently developed thermodynamic database was evaluated using the experimental data. The new database achieved a better agreement
Conventional treatment planning optimization using simulated annealing
International Nuclear Information System (INIS)
Morrill, S.M.; Langer, M.; Lane, R.G.
1995-01-01
Purpose: Simulated annealing (SA) allows for the implementation of realistic biological and clinical cost functions into treatment plan optimization. However, a drawback to the clinical implementation of SA optimization is that large numbers of beams appear in the final solution, some with insignificant weights, preventing the delivery of these optimized plans using conventional (limited to a few coplanar beams) radiation therapy. A preliminary study suggested two promising algorithms for restricting the number of beam weights. The purpose of this investigation was to compare these two algorithms using our current SA algorithm, with the aim of producing an algorithm to allow clinically useful radiation therapy treatment planning optimization. Method: Our current SA algorithm, Variable Stepsize Generalized Simulated Annealing (VSGSA), was modified with two algorithms to restrict the number of beam weights in the final solution. The first algorithm selected combinations of a fixed number of beams from the complete solution space at each iterative step of the optimization process. The second reduced the allowed number of beams by a factor of two at periodic steps during the optimization process until only the specified number of beams remained. Results of optimization of beam weights and angles using these algorithms were compared using a standard cadre of abdominal cases. The solution space was defined as a set of 36 custom-shaped open and wedge-filtered fields at 10 deg. increments with a constant target volume margin of 1.2 cm. For each case a clinically accepted cost function, minimum tumor dose, was maximized subject to a set of normal-tissue binary dose-volume constraints. For this study, the optimized plan was restricted to four (4) fields suitable for delivery with conventional therapy equipment. Results: The table gives the mean value of the minimum target dose obtained for each algorithm averaged over 5 different runs and the comparable manual treatment
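The second restriction scheme (halving the allowed beam set at periodic steps until the target count remains) can be sketched as follows; the angle grid and scores are hypothetical, and a real VSGSA run would re-optimize the surviving beam weights between halvings rather than keep fixed scores:

```python
import random

random.seed(3)

def restrict_beams(weights, target):
    """Periodically halve the candidate beam set, keeping the
    highest-weight beams, until only `target` beams remain."""
    beams = dict(weights)
    while len(beams) > target:
        keep = max(target, len(beams) // 2)
        ranked = sorted(beams.items(), key=lambda kv: kv[1], reverse=True)
        beams = dict(ranked[:keep])   # a real run would re-optimize weights here
    return beams

# 36 candidate gantry angles at 10-degree increments, with mock optimized weights
candidates = {angle: random.random() for angle in range(0, 360, 10)}
final = restrict_beams(candidates, 4)
print(sorted(final))   # the four surviving beam angles
```

With 36 candidates and a target of 4, the loop passes through 18 and 9 beams before settling on the final four.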
Structural evolution of tunneling oxide passivating contact upon thermal annealing.
Choi, Sungjin; Min, Kwan Hong; Jeong, Myeong Sang; Lee, Jeong In; Kang, Min Gu; Song, Hee-Eun; Kang, Yoonmook; Lee, Hae-Seok; Kim, Donghwan; Kim, Ka-Hyun
2017-10-16
We report on the structural evolution of the tunneling oxide passivating contact (TOPCon) for high-efficiency solar cells upon thermal annealing. The evolution of doped hydrogenated amorphous silicon (a-Si:H) into polycrystalline silicon (poly-Si) by thermal annealing was accompanied by significant structural changes. Annealing at 600 °C for one minute introduced an increase in the implied open circuit voltage (V oc ) due to the hydrogen motion, but the implied V oc decreased again at 600 °C for five minutes. At annealing temperatures above 800 °C, the a-Si:H crystallized and formed poly-Si, and the thickness of the tunneling oxide slightly decreased. The thickness of the interface tunneling oxide gradually decreased and pinholes formed through the tunneling oxide at higher annealing temperatures up to 1000 °C, which deteriorated the carrier selectivity of the TOPCon structure. Our results indicate a correlation between the structural evolution of the TOPCon passivating contact and its passivation property at different stages of the structural transition from a-Si:H to poly-Si, as well as changes in the thickness profile of the tunneling oxide upon thermal annealing. Our results suggest that there is an optimum thickness of the tunneling oxide for a passivating electron contact, in a range between 1.2 and 1.5 nm.
In-place thermal annealing of nuclear reactor pressure vessels
International Nuclear Information System (INIS)
Server, W.L.
1985-04-01
Radiation embrittlement of ferritic pressure vessel steels increases the ductile-brittle transition temperature and decreases the upper-shelf level of toughness as measured by Charpy impact tests. A thermal anneal cycle well above the normal operating temperature of the vessel can restore most of the original Charpy V-notch energy properties. The Army SM-1A test reactor vessel was wet annealed in 1967 at less than 343 °C (650 °F), and wet annealing of the Belgian BR-3 reactor vessel at 343 °C (650 °F) has recently taken place. An industry survey indicates that dry annealing a reactor vessel in-place at temperatures as high as 454 °C (850 °F) is feasible, but solvable engineering problems do exist. Economic considerations have not been totally evaluated in assessing the cost-effectiveness of in-place annealing of commercial nuclear vessels. An American Society for Testing and Materials (ASTM) task group is upgrading and revising guide ASTM E 509-74 with emphasis on the materials and surveillance aspects of annealing rather than system engineering problems. System safety issues are the province of organizations other than ASTM (e.g., the American Society of Mechanical Engineers Boiler and Pressure Vessel Code body)
Enhanced dielectric and electrical properties of annealed PVDF thin film
Arshad, A. N.; Rozana, M. D.; Wahid, M. H. M.; Mahmood, M. K. A.; Sarip, M. N.; Habibah, Z.; Rusop, M.
2018-05-01
Poly(vinylidene fluoride) (PVDF) thin films were annealed at various annealing temperatures ranging from 70°C to 170°C. This study demonstrates that PVDF thin films annealed at a temperature of 70°C (AN70) showed a significant enhancement in dielectric constant (14) at a frequency of 1 kHz in comparison to un-annealed PVDF (UN-PVDF), whose dielectric constant was 10 at the same measured frequency. As the annealing temperature was increased from 90°C (AN90) to 150°C (AN150), the dielectric constant of the PVDF thin films was observed to decrease gradually to 11. AN70 also revealed a low tangent loss (tan δ) value at the same frequency. With respect to resistivity, the values were found to increase from 1.98×10⁴ Ω·cm to 3.24×10⁴ Ω·cm for the AN70 and UN-PVDF films respectively. The improvement in dielectric constant, together with the low tangent loss and high resistivity, suggests that 70°C is the favorable annealing temperature for PVDF thin films. Hence, AN70 is a promising film for application in electronic devices such as low-frequency capacitors.
Annealing effects in low upper-shelf welds (series 9)
International Nuclear Information System (INIS)
Iskander, S.K.; Nanstad, R.K.
1995-01-01
The purpose of the Ninth Irradiation Series is to evaluate the correlation between fracture toughness and CVN impact energy during irradiation, annealing, and reirradiation (IAR). Results of annealing CVN specimens from the low-USE welds from the Midland beltline and nozzle course welds, as well as HSST plate 02 and HSSI weld 73W, are given. Also presented is the effect of annealing on the initiation fracture toughness of annealed material from the Midland beltline weld and HSST plate 02. The results from capsule 10-5 specimens of weld 73W confirm those previously obtained on the so-called undersize specimens that were irradiated in the Fifth Irradiation Series, namely that the recovery due to annealing at 343 degrees C (650 degrees F) for 1 week is insignificant. The fabrication of major components for the IAR facility for two positions on the east side of the FNR at the University of Michigan has begun. Fabrication of two reusable capsules (one for temperature verification and the other for dosimetry verification), as well as two capsules for IAR studies, is also under way. The design of a reusable capsule capable of reirradiating previously irradiated and annealed CVN and 1T C(T) specimens is also progressing. The data acquisition and control (DAC) instrumentation for the first two IAR facilities is essentially complete and awaiting completion of the IAR facilities and temperature test capsule for checkout and control algorithm development
Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha
2015-01-01
To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) delivering a liveborn infant from 2001 through 2008 (N = 33,093 deliveries) and infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51 or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and PPV 99.3% (99.1-99.4), while the deterministic algorithm had sensitivity 74.5% (73.8-75.2) and PPV 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm, 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.
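The contrast between the two approaches can be sketched with a deterministic all-fields-equal rule versus a Fellegi–Sunter-style log-likelihood score; the field names, m/u probabilities, threshold, and example records below are illustrative assumptions, not the study's actual parameters:

```python
import math

FIELDS = ("surname", "address", "birth_date")
M_PROB = {"surname": 0.95, "address": 0.80, "birth_date": 0.99}   # P(agree | true match)
U_PROB = {"surname": 0.01, "address": 0.05, "birth_date": 0.003}  # P(agree | non-match)

def deterministic_match(a, b):
    """Link only if every field agrees exactly."""
    return all(a[f] == b[f] for f in FIELDS)

def probabilistic_score(a, b):
    """Sum of log-likelihood ratios over field agreements/disagreements."""
    score = 0.0
    for f in FIELDS:
        if a[f] == b[f]:
            score += math.log(M_PROB[f] / U_PROB[f])
        else:
            score += math.log((1 - M_PROB[f]) / (1 - U_PROB[f]))
    return score

mother = {"surname": "Lee", "address": "12 Oak St",     "birth_date": "2001-03-04"}
infant = {"surname": "Lee", "address": "12 Oak Street", "birth_date": "2001-03-04"}

# A transcription difference in one field defeats the deterministic rule,
# while the probabilistic score can still clear a link threshold.
print(deterministic_match(mother, infant), probabilistic_score(mother, infant))
```

This tolerance of partial disagreement is one plausible mechanism behind the higher sensitivity the probabilistic algorithm showed in the study.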
Deterministic factor analysis: methods of integro-differentiation of non-integral order
Directory of Open Access Journals (Sweden)
Valentina V. Tarasova
2016-12-01
Full Text Available Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: the new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method for integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
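One standard numerical route to derivatives of non-integral order is the Grünwald–Letnikov construction, which generalizes the backward finite difference. The sketch below is a generic illustration (not the authors' formulas) and recovers the ordinary first derivative when the order α equals 1:

```python
def gl_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov fractional derivative of order alpha at t,
    approximated on a uniform grid of step h over [0, t]."""
    n = int(t / h)
    coef = 1.0            # running (-1)^k * binomial(alpha, k)
    total = f(t)
    for k in range(1, n + 1):
        coef *= (k - 1 - alpha) / k
        total += coef * f(t - k * h)
    return total / h ** alpha

# alpha = 1 reduces to the backward difference: d/dt of t^2 at t = 1 is ~2
print(gl_derivative(lambda t: t * t, 1.0, 1.0))
```

The weighted sum over the whole history [0, t] is what encodes the "memory" effect that integer-order derivatives lack.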
Directory of Open Access Journals (Sweden)
Feng HE
2017-12-01
Full Text Available State-of-the-art avionics systems adopt switched networks for airborne communications. A major concern in the design of such networks is the ability to guarantee end-to-end delays. Analytic methods have been developed to compute the worst-case delays from the detailed configurations of flows and networks within the avionics context, such as network calculus and the trajectory approach. What is still lacking is a method for rapid performance estimation from typical switched-networking features, such as networking scale, bandwidth utilization and average flow rate. The goal of this paper is to establish a deterministic upper-bound analysis method using these networking features instead of the complete network configuration. Two deterministic upper bounds are proposed from a network calculus perspective: one for a basic estimation, and another showing the benefit of a grouping strategy. Besides, a mathematical expression for grouping ability is established based on the concept of network connecting degree, which illustrates the minimal possible grouping benefit. For a fully connected network with 4 switches and 12 end systems, the grouping ability coming from the grouping strategy is 15–20%, which coincides with the statistical data (18–22%) from the actual grouping advantage. Compared with the complete network calculus analysis method for individual flows, the effectiveness of the two deterministic upper bounds is no less than 38% even with remarkably varied packet lengths. Finally, the paper illustrates the design process for an industrial Avionics Full DupleX switched Ethernet (AFDX) networking case according to the two deterministic upper bounds and shows that better control of network connecting, when designing a switched network, can improve the worst-case delays dramatically. Keywords: Deterministic bound, Grouping ability, Network calculus, Networking features, Switched networks
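In basic network-calculus terms, a token-bucket constrained aggregate (burst σ, rate ρ) served by a rate-latency node (rate R, latency T) has its worst-case delay bounded by T + σ/R. A minimal sketch of that classical single-node bound, with hypothetical AFDX-like numbers (this is the textbook formula, not the paper's two refined bounds):

```python
def delay_bound(bursts_bits, rates_bps, R_bps, T_s):
    """Worst-case delay at a rate-latency server fed by token-bucket flows."""
    if sum(rates_bps) >= R_bps:
        raise ValueError("unstable: aggregate rate exceeds service rate")
    return T_s + sum(bursts_bits) / R_bps

# two flows, 4000-bit bursts each, on a 100 Mb/s port with 16 us latency
d = delay_bound([4000, 4000], [1e6, 2e6], 100e6, 16e-6)
print(d)  # about 9.6e-05 s
```

The paper's contribution is to estimate bounds of this kind from coarse features (scale, utilization, average rate) without enumerating every flow's (σ, ρ) pair.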
Formation of oxygen related donors in step-annealed CZ–silicon
Indian Academy of Sciences (India)
The effect of step-annealing, necessitated by the difficulties faced in the long-duration annealing treatments given to CZ–silicon, has been studied. One pre-anneal of 10 h followed by annealing for 10 h causes a decrease in the absorption coefficient for carbon (c). Oxygen and carbon both accelerate thermal ...
Study of annealing effects in Al–Sb bilayer thin films
Indian Academy of Sciences (India)
There are three methods to prepare compound semiconductor systems: bilayer annealing (Singh and Vijay 2004a), rapid thermal annealing (Singh and Vijay 2004b) and ion beam mixing (Dhar et al 2003). The annealing and ion beam mixing were found to show inferior mixing effects compared to rapid thermal annealing.
Comparison of pulsed electron beam-annealed and pulsed ruby laser-annealed ion-implanted silicon
International Nuclear Information System (INIS)
Wilson, S.R.; Appleton, B.R.; White, C.W.; Narayan, J.; Greenwald, A.C.
1978-11-01
Recently two new techniques, pulsed electron beam annealing and pulsed laser annealing, have been developed for processing ion-implanted silicon. These two types of anneals have been compared using ion channeling, ion backscattering, and transmission electron microscopy (TEM). Single-crystal samples were implanted with 100 keV As⁺ ions to a dose of approximately 1 × 10¹⁶ ions/cm² and subsequently annealed by either a pulsed ruby laser or a pulsed electron beam. Our results show in both cases that the near-surface region has melted and regrown epitaxially with nearly all of the implanted As (97 to 99%) incorporated onto lattice sites. The analysis indicates that the samples are essentially defect free and have complete electrical recovery
International Nuclear Information System (INIS)
Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad
2016-01-01
In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.
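The sensitivity of a deterministic diffusion solution to the number of spatial mesh points can be illustrated with a generic mesh-refinement study on a 1D model problem; the toy equation, solver, and tolerance below are illustrative only and unrelated to the actual MTRPC/CITVAP models:

```python
def solve_poisson_max(n):
    """Finite-difference solve of -u'' = 1 on (0,1), u(0)=u(1)=0,
    with n interior points; returns max(u). The exact peak is 1/8."""
    h = 1.0 / (n + 1)
    # tridiagonal system (-1, 2, -1) u = h^2, solved by the Thomas algorithm
    b = [2.0] * n
    d = [h * h] * n
    for i in range(1, n):
        m = -1.0 / b[i - 1]
        b[i] -= m * -1.0
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] + u[i + 1]) / b[i]
    return max(u)

n, prev = 4, solve_poisson_max(4)
while True:                 # double the mesh until the peak value converges
    n *= 2
    cur = solve_poisson_max(n)
    if abs(cur - prev) < 1e-3:
        break
    prev = cur
print(n, cur)               # converged peak, close to 0.125
```

A coarse mesh here understates the peak, just as an under-resolved core mesh can bias deterministic rod-worth results relative to a continuous-geometry Monte Carlo calculation.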
Energy Technology Data Exchange (ETDEWEB)
Boustani, Ehsan [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.; Khakshournia, Samad [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.
2016-12-15
In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.
Coronary artery anatomy and variants
Energy Technology Data Exchange (ETDEWEB)
Malago, Roberto; Pezzato, Andrea; Barbiani, Camilla; Alfonsi, Ugolino; Nicoli, Lisa; Caliari, Giuliana; Pozzi Mucelli, Roberto [Policlinico G.B. Rossi, University of Verona, Department of Radiology, Verona (Italy)
2011-12-15
Variants and congenital anomalies of the coronary arteries are usually asymptomatic, but may present with severe chest pain or cardiac arrest. The introduction of multidetector CT coronary angiography (MDCT-CA) allows the detection of significant coronary artery stenosis. Improved performance with isotropic spatial resolution and higher temporal resolution provides a valid alternative to conventional coronary angiography (CCA) in many patients. MDCT-CA is now considered the ideal tool for three-dimensional visualization of the complex and tortuous anatomy of the coronary arteries. With multiplanar and volume-rendered reconstructions, MDCT-CA may even outperform CCA in determining the relative position of vessels, thus providing a better view of the coronary vascular anatomy. The purpose of this review is to describe the normal anatomy of the coronary arteries and their main variants based on MDCT-CA with appropriate reconstructions. (orig.)
Microcystic Variant of Urothelial Carcinoma
Directory of Open Access Journals (Sweden)
Anthony Kodzo-Grey Venyo
2013-01-01
Full Text Available Background. Microcystic variant of urothelial carcinoma is one of the new variants of urothelial carcinoma that was added to the WHO classification in 2004. Aims. To review the literature on microcystic variant of urothelial carcinoma. Methods. Various internet search engines were used to identify reported cases of the tumour. Results. Microscopic features of the tumour include: (i) conspicuous intracellular and intercellular lumina/microcysts encompassed by malignant urothelial or squamous cells; (ii) lumina that are usually empty but may contain granular eosinophilic debris, mucin, or necrotic cells; (iii) cysts that may be variable in size, round or oval, up to 2 mm, lined by urothelium consisting of either flattened or low columnar cells (but not colonic epithelium or goblet cells), which are infiltrative, invade the muscularis propria, mimic cystitis cystica and cystitis glandularis, and occasionally exhibit neuroendocrine differentiation; (iv) elongated and irregular branching spaces, which are usually seen. About 17 cases of the tumour have been reported, with only 2 patients who have survived. The tumour tends to be of high grade and high stage. There is no consensus opinion on the best option for treatment of the tumour. Conclusions. It would prove difficult at the moment to be dogmatic regarding its prognosis, but it is a highly aggressive tumour. New cases of the tumour should be reported in order to document its biological behaviour.
International Nuclear Information System (INIS)
Orssaud, J.
1958-06-01
Rolling and annealing textures of KROLL zirconium samples at several rolling rates were studied by pole figures, recorded automatically, as a function of position in the sheet thickness. Tensile tests, hardness measurements and micrographic examinations made it possible to follow the evolution of recrystallization and the variation of the mechanical properties after rolling and/or annealing. The annealing textures vary slightly with the annealing temperature. Annealing at 500 °C shows several peculiarities; this temperature seems characteristic in the study of zirconium. (author) [fr
Liquid nitrogen enhancement of partially annealed fission tracks in glass
International Nuclear Information System (INIS)
Pilione, L.J.; Gold, D.P.
1976-01-01
It is known that the number density of fission tracks in solids is reduced if the sample is heated before chemical etching, and the effect of annealing must be allowed for before an age can be assigned to the sample. The extent of annealing can be determined by measuring the reduction of track parameters (diameter and/or length) and comparison with unannealed tracks. Correct ages can be obtained by careful calibration studies of track density reduction against track diameter or length reduction at different annealing temperatures and times. For crystallised minerals, however, the resulting correction techniques are not generally valid. In the experimental work described, glass samples were partially annealed and then immersed in liquid N₂ for various periods, and it was shown that the properties of the glass and the track parameters could be altered so as to observe tracks that would normally be erased by annealing. The results of track density measurements against liquid N₂ immersion times are shown graphically. A gain of about 40% was achieved after 760 hours immersion time. The size of the tracks was not noticeably affected by the immersion. It was thought that thermal shock might be the cause of the track enhancement, but it was found that repeated immersion for about 2 hours did not lead to an increase in track density. Other studies suggest that the mechanism that erases the tracks through annealing may be partially reversed when the temperature of the sample is significantly lowered for a sufficient length of time. Further work is under way to find whether or not the process of enhancement is a reversal of the annealing process. Similar enhancement effects using liquid N₂ have been observed for d-particle tracks in polycarbonate detectors. (U.K.)
International Nuclear Information System (INIS)
Jeng, Jiann-Shing
2012-01-01
SnO₂ films with and without Sb doping were prepared by the sol-gel spin-coating method. Material properties of the SnO₂ films with different Sb contents were investigated before and after annealing under O₂ or N₂. When SnO₂ films are annealed under N₂ or O₂, the resistivity decreases with increasing annealing temperature, which may be related to the increased crystallinity and reduced film defects. The intensity of the SnO₂ peaks for both O₂- and N₂-annealed films increases as the annealing temperature increases. Small nodules appear on the surface of SnO₂ films after annealing in N₂ or O₂ atmospheres, and some voids are present on the surface of N₂-annealed SnO₂ films. After doping with Sb, the resistivity of SnO₂ films annealed in O₂ is greater than that of N₂-annealed films. The surface morphology of SnO₂ films incorporating different molar ratios of Sb after annealing is similar to that of as-spun Sb-doped SnO₂ films, and no voids were found on the surfaces of N₂-annealed SnO₂:Sb films. In addition, the peak intensity of SnO₂:Sb films after O₂ annealing is higher than that of films after N₂ annealing. The chemical binding states and Hall mobility of the high-temperature annealed SnO₂ films, with and without Sb, are also related to the annealing atmosphere. This study discusses how the material properties of the SnO₂ films are influenced by the Sb-doping concentration and the annealing atmosphere.
SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations
International Nuclear Information System (INIS)
Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir
2014-01-01
The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep-penetration Monte Carlo (MC) shielding problem: a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory methods (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (a biased source distribution and an importance map) in an automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim of this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to a focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and manual (i.e. analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology is the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the (memory-intensive) deterministic module with the broad-group library v7-27n19g, as opposed to the fine-group library v7-200n47g used with the MC module, to take full advantage of low-energy particle transport and secondary gamma emission. Compared with
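The CADIS prescription summarized above, an adjoint-weighted biased source paired with consistent birth weights, can be sketched in a few lines. This is a minimal illustrative sketch with invented array values, not the SCALE6/MAVRIC implementation:

```python
import numpy as np

# Minimal 1-D sketch of the CADIS variance-reduction prescription.
# q and adj are invented; in MAVRIC the adjoint flux comes from a
# discrete-ordinates (SN) solve on a space-energy mesh.
q = np.array([0.5, 0.3, 0.2, 0.0])        # true source distribution (normalized)
adj = np.array([1e-6, 1e-4, 1e-2, 1.0])   # adjoint flux: "importance" toward the tally

R = np.dot(q, adj)               # estimated detector response
q_biased = q * adj / R           # biased source: sample important regions more often
weights = R / adj                # birth weights that keep the estimate unbiased

# Unbiasedness check: expected weight born per cell equals the analog source,
# since q_biased * weights == (q * adj / R) * (R / adj) == q.
assert np.allclose(q_biased * weights, q)
```

The importance map supplies the same `R / adj` quantity as weight-window targets during transport, so particles drifting toward unimportant regions are rouletted and those reaching important regions are split.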
Characterization of form variants of Xenorhabdus luminescens.
Gerritsen, L J; de Raay, G; Smits, P H
1992-01-01
From Xenorhabdus luminescens XE-87.3 four variants were isolated. One, which produced a red pigment and antibiotics, was luminescent, and could take up dye from culture media, was considered the primary form (XE-red). A pink-pigmented variant (XE-pink) differed from the primary form only in pigmentation and uptake of dye. Of the two other variants, one produced a yellow pigment and fewer antibiotics (XE-yellow), while the other did not produce a pigment or antibiotics (XE-white). Both were less luminescent, did not take up dye, and had small cell and colony sizes. These two variants were very unstable and shifted to the primary form after 3 to 5 days. It was not possible to separate the primary form and the white variant completely; subcultures of one colony always contained a few colonies of the other variant. The white variant was also found in several other X. luminescens strains. DNA fingerprints showed that all four variants are genetically identical and are therefore derivatives of the same parent. Protein patterns revealed a few differences among the four variants. None of the variants could be considered the secondary form. The pathogenicity of the variants decreased in the following order: XE-red, XE-pink, XE-yellow, and XE-white. The mechanism and function of this variability are discussed. PMID:1622273
Energy Technology Data Exchange (ETDEWEB)
Marchand, E
2007-12-15
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
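The SVD-based deterministic sensitivity analysis described above can be illustrated on a toy problem: differentiate the model, then take the SVD of the Jacobian to find the parameter directions the outputs are most sensitive to. A minimal sketch, in which the toy model and the finite-difference differentiation are illustrative assumptions (the actual study differentiated Darcy-flow and convection-diffusion models by manual and automatic differentiation):

```python
import numpy as np

def model(p):
    # Hypothetical toy model: 3 outputs depending on 2 input parameters.
    return np.array([p[0]**2 + p[1], np.exp(0.1 * p[0]), p[0] * p[1]])

def jacobian(f, p, h=1e-6):
    # Forward finite differences, a cheap stand-in for automatic differentiation.
    f0 = f(p)
    J = np.zeros((f0.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += h
        J[:, j] = (f(dp) - f0) / h
    return J

p0 = np.array([2.0, 1.0])
J = jacobian(model, p0)
U, s, Vt = np.linalg.svd(J, full_matrices=False)
# Vt[0] is the parameter combination producing the largest first-order change
# in the outputs; the ratio s[0]/s[-1] measures local ill-conditioning.
```

Because only one Jacobian (or a handful of adjoint solves) is needed, this local analysis is far cheaper than a Monte Carlo sweep, which is exactly the complementarity the abstract points out.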
Wu, Lucia R.; Chen, Sherry X.; Wu, Yalei; Patel, Abhijit A.; Zhang, David Yu
2018-01-01
Rare DNA-sequence variants hold important clinical and biological information, but existing detection techniques are expensive, complex, allele-specific, or don’t allow for significant multiplexing. Here, we report a temperature-robust polymerase-chain-reaction method, which we term blocker displacement amplification (BDA), that selectively amplifies all sequence variants, including single-nucleotide variants (SNVs), within a roughly 20-nucleotide window by 1,000-fold over wild-type sequences. This allows for easy detection and quantitation of hundreds of potential variants originally at ≤0.1% in allele frequency. BDA is compatible with inexpensive thermocycler instrumentation and employs a rationally designed competitive hybridization reaction to achieve comparable enrichment performance across annealing temperatures ranging from 56 °C to 64 °C. To show the sequence generality of BDA, we demonstrate enrichment of 156 SNVs and the reliable detection of single-digit copies. We also show that the BDA detection of rare driver mutations in cell-free DNA samples extracted from the blood plasma of lung-cancer patients is highly consistent with deep sequencing using molecular lineage tags, with a receiver operator characteristic accuracy of 95%. PMID:29805844
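The arithmetic behind why a 1,000-fold enrichment makes variants at ≤0.1% allele frequency easy to call can be made explicit. A back-of-the-envelope sketch, illustrative only: it models relative amplification efficiency, not the BDA hybridization chemistry itself.

```python
# Apparent variant allele fraction after the variant amplifies `fold` times
# more efficiently than wild type (back-of-the-envelope, not a PCR model).
def enriched_fraction(vaf, fold):
    v = vaf * fold
    return v / (v + (1.0 - vaf))

# A 0.1% variant enriched 1,000-fold reads out at roughly 50%,
# trivially detectable by Sanger sequencing or melt analysis.
print(round(enriched_fraction(0.001, 1000), 3))  # prints 0.5
```

The temperature robustness claimed in the abstract matters precisely because the `fold` factor of an ordinary allele-specific assay collapses when the annealing temperature drifts a few degrees.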
CSIR Research Space (South Africa)
Burger, CR
2011-11-01
Full Text Available Current certification criteria for safety-critical systems exclude non-deterministic control systems. This paper investigates the feasibility of using human-like monitoring strategies to achieve safe non-deterministic control using multiple...
Vessel annealing. Will it become a routine procedure?
International Nuclear Information System (INIS)
Davies, M.
1995-01-01
The effect of neutron radiation on the reactor pressure vessel and the influence of annealing performed to eliminate this effect are explained, and some practical examples are given. A simple heat treatment at 450 °C for 168 h is sufficient to eliminate a major fraction of the radiation-induced shift of the brittle-to-ductile transition temperature. Some observations indicate that at this temperature excessive energy recovery takes place at the upper toughness limit in the Charpy diagram. The annealing furnace manufactured by the SKODA company is described. The furnace consists of heating elements in 13 zones and 5 heating sections; the maximum power of each element is 75 kW, and the total power of the furnace is 975 kW. The annealing procedure and its results are briefly outlined for the reactor pressure vessel at unit 2 of the Jaslovske Bohunice NPP. Reactor pressure vessel annealing is proposed for the Marble Hill NPP, which has been shut down. Preparatory activities for annealing are also under way at the Loviisa NPP. (J.B.)
Kinetics of annealing of irradiated surveillance pressure vessel steel
International Nuclear Information System (INIS)
Harvey, D.J.; Wechsler, M.S.
1982-01-01
Indentation hardness measurements as a function of annealing were made on broken halves of Charpy impact surveillance samples. The samples had been irradiated in commercial power reactors to a neutron fluence of approximately 1 × 10^18 neutrons per cm², E > 1 MeV, at a temperature of about 300 °C (570 °F). Results are reported for the weld metal, which showed greater radiation hardening than the base plate or heat-affected zone material. Isochronal and isothermal anneals were conducted on the irradiated surveillance samples and on unirradiated control samples. No hardness changes upon annealing occurred for the control samples. The recovery in hardness for the irradiated samples took place mostly between 400 and 500 °C. Based on the Meechan-Brinkman method of analysis, the activation energy for annealing was found to be 0.60 ± 0.06 eV. According to computer simulation calculations of Beeler, the activation energy for migration of vacancies in alpha iron is about 0.67 eV. Therefore, the results of this preliminary study appear to be consistent with a mechanism of annealing of radiation damage in pressure vessel steels based on the migration of radiation-produced lattice vacancies
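For a single thermally activated process, the equal-recovery (cross-cut) idea behind analyses like Meechan-Brinkman reduces to an Arrhenius ratio of annealing times: if reaching the same recovery takes time t ∝ exp(E/kT), then E = k ln(t2/t1) / (1/T2 − 1/T1). A minimal sketch with invented times, not the paper's data:

```python
import math

# Cross-cut estimate of an annealing activation energy from two isothermal
# anneals that reach the same hardness recovery. Times are illustrative.
k_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(t1, T1, t2, T2):
    """E for t ∝ exp(E / (k_B * T)), given equal-recovery times at two temperatures (K)."""
    return k_B * math.log(t2 / t1) / (1.0 / T2 - 1.0 / T1)

# Same recovery after 1.0 h at 500 °C (773 K) and 3.8 h at 400 °C (673 K):
E = activation_energy(1.0, 773.0, 3.8, 673.0)
print(round(E, 2))  # prints 0.6, of the order of the value reported above
```

The full Meechan-Brinkman procedure combines an isochronal curve with one isothermal anneal to extract E without assuming a specific recovery kinetics, but the temperature dependence it exploits is the same Arrhenius factor shown here.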
High pressure annealing of Europium implanted GaN
Lorenz, K.; Miranda, S. M. C.; Alves, E.; Roqan, Iman S.; O'Donnell, K. P.; Bokowski, M.
2012-01-01
GaN epilayers were implanted with Eu to fluences of 1×10^13 Eu/cm² and 1×10^15 Eu/cm². Post-implant thermal annealing was performed under ultra-high nitrogen pressure at temperatures up to 1450 °C. For the lower fluence, effective structural recovery of the crystal was observed for annealing at 1000 °C, while optical activation could be further improved at higher annealing temperatures. The higher-fluence samples also reveal good optical activation; however, some residual implantation damage remains even for annealing at 1450 °C, which leads to a reduced incorporation of Eu on substitutional sites, a broadening of the Eu luminescence lines and a strongly reduced fraction of optically active Eu ions. Possibilities for further optimization of implantation and annealing conditions are discussed.