WorldWideScience

Sample records for underlying deterministic structure

  1. From Ordinary Differential Equations to Structural Causal Models: the deterministic case

    NARCIS (Netherlands)

    Mooij, J.M.; Janzing, D.; Schölkopf, B.; Nicholson, A.; Smyth, P.

    2013-01-01

    We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM). Our exposition sheds more light on the concept of causality as expressed within the framework of
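
    As a concrete illustration of the idea (our sketch, not taken from the paper), consider a two-variable first-order ODE whose equilibrium can be read off as a deterministic SCM:

```latex
% Illustrative two-variable first-order ODE (assumed example):
\begin{align}
  \dot{X}_1 &= -X_1 + \theta, &
  \dot{X}_2 &= -X_2 + X_1 .
\end{align}
% Setting \dot{X}_1 = \dot{X}_2 = 0 yields the deterministic structural assignments
\begin{align}
  X_1 &:= \theta, &
  X_2 &:= X_1 ,
\end{align}
% i.e. an SCM whose causal graph \theta \to X_1 \to X_2 mirrors the couplings of the ODE.
```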

  2. A Theory of Deterministic Event Structures

    NARCIS (Netherlands)

    Lee, I.; Rensink, Arend; Smolka, S.A.

    1995-01-01

We present an ω-complete algebra of a class of deterministic event structures, which are labelled prime event structures where the labelling function satisfies a certain distinctness condition. The operators of the algebra are summation, sequential composition and join. Each of these gives rise to a

  3. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    Science.gov (United States)

    Verderaime, V.

    1997-01-01

The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because the means and variations of the tolerance-limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniformly safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and on its improvement and transition to absolute reliability.
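
    To make the error-propagation point concrete, the following worked comparison (our illustration, not taken from the report) contrasts deterministic stacking of two k-sigma tolerance limits with statistical combination of the same two stress contributions:

```latex
% Two independent stress contributions with means \mu_i and standard deviations s_i,
% each reduced to a deterministic k-sigma tolerance limit before being combined:
\[
  S_{\mathrm{det}} = (\mu_1 + k s_1) + (\mu_2 + k s_2)
                   = (\mu_1 + \mu_2) + k\,(s_1 + s_2).
\]
% Error propagation for the sum of independent variables instead gives
\[
  S_{\mathrm{stat}} = (\mu_1 + \mu_2) + k\sqrt{s_1^{2} + s_2^{2}} \;\le\; S_{\mathrm{det}},
\]
% so each successive algebraic combination adds a positive, serially cumulative bias.
```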

  4. Line and lattice networks under deterministic interference models

    NARCIS (Netherlands)

    Goseling, Jasper; Gastpar, Michael; Weber, Jos H.

    Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of

  5. Optimization of structures subjected to dynamic load: deterministic and probabilistic methods

    Directory of Open Access Journals (Sweden)

    Élcio Cassimiro Alves

Full Text Available Abstract This paper deals with the deterministic and probabilistic optimization of structures against bending when subjected to dynamic loads. The deterministic optimization problem considers the plate subjected to a time-varying load, while the probabilistic one takes into account a random loading defined by a power spectral density function. The two problems are related through a Fourier transform. The finite element method is used to model the structures. The sensitivity analysis is performed through the analytical method and the optimization problem is solved by an interior point method. A comparison between the deterministic optimization and the probabilistic one, with a power spectral density function compatible with the time-varying load, shows very good results.

  6. Nonlinear deterministic structures and the randomness of protein sequences

    CERN Document Server

    Huang Yan Zhao

    2003-01-01

To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences representing each structural class by using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also give an explanation for the controversial results obtained in previous investigations.
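
    The flavour of such a nonlinear prediction test can be sketched as follows (our illustration with assumed parameters; the input is a random stand-in, not an actual protein sequence encoding): embed the series with a time delay, predict each point from its nearest neighbours, and compare the prediction correlation against a shuffled surrogate.

```python
import numpy as np

def nonlinear_prediction_corr(x, dim=3, horizon=1, n_neighbors=5):
    """Delay-embed x and predict x[t+horizon] from nearest neighbours in embedding space."""
    n = len(x) - (dim - 1) - horizon
    emb = np.column_stack([x[i:i + n] for i in range(dim)])       # delay vectors
    target = x[dim - 1 + horizon: dim - 1 + horizon + n]          # future values
    half = n // 2
    train, test = emb[:half], emb[half:]
    y_train, y_test = target[:half], target[half:]
    preds = []
    for v in test:
        d = np.linalg.norm(train - v, axis=1)
        idx = np.argsort(d)[:n_neighbors]
        preds.append(y_train[idx].mean())                         # local constant predictor
    return np.corrcoef(preds, y_test)[0, 1]

rng = np.random.default_rng(0)
# A numerically encoded sequence (e.g. one value per residue); here a random stand-in.
seq = rng.standard_normal(2000)
shuffled = rng.permutation(seq)
print("original :", nonlinear_prediction_corr(seq))
print("surrogate:", nonlinear_prediction_corr(shuffled))
# Comparable (near-zero) correlations for original and surrogate suggest no deterministic structure.
```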

  7. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    International Nuclear Information System (INIS)

    Elkhoraibi, T.; Hashemi, A.; Ostadan, F.

    2014-01-01

    Soil-structure interaction (SSI) is a major step for seismic design of massive and stiff structures typical of the nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently most SSI analyses are performed deterministically, incorporating limited range of variation in soil and structural properties and without consideration of the ground motion incoherency effects. This often leads to overestimation of the seismic response particularly the In-Structure-Response Spectra (ISRS) with significant impositions of design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to incorporate a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis even without incoherency function considerations has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to the SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision making process whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  8. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    Energy Technology Data Exchange (ETDEWEB)

    Elkhoraibi, T., E-mail: telkhora@bechtel.com; Hashemi, A.; Ostadan, F.

    2014-04-01

    Soil-structure interaction (SSI) is a major step for seismic design of massive and stiff structures typical of the nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently most SSI analyses are performed deterministically, incorporating limited range of variation in soil and structural properties and without consideration of the ground motion incoherency effects. This often leads to overestimation of the seismic response particularly the In-Structure-Response Spectra (ISRS) with significant impositions of design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to incorporate a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis even without incoherency function considerations has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to the SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision making process whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  9. Inferring hierarchical clustering structures by deterministic annealing

    International Nuclear Information System (INIS)

    Hofmann, T.; Buhmann, J.M.

    1996-01-01

The unsupervised detection of hierarchical structures is a major topic in unsupervised learning and one of the key questions in data analysis and representation. We propose a novel algorithm for the problem of learning decision trees for data clustering and related problems. In contrast to many other methods based on successive tree growing and pruning, we propose an objective function for tree evaluation and we derive a non-greedy technique for tree growing. Applying the principles of maximum entropy and minimum cross entropy, a deterministic annealing algorithm is derived in a mean-field approximation. This technique allows us to canonically superimpose tree structures and to fit parameters to averaged or 'fuzzified' trees
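
    The core deterministic-annealing step can be sketched as follows (flat clustering only, not the tree construction of the paper; the temperatures, schedule and toy data are assumptions for illustration):

```python
import numpy as np

def deterministic_annealing(X, k=3, T0=5.0, Tmin=1e-3, cooling=0.9, iters=30):
    """Deterministic annealing for clustering: soft (Gibbs) assignments at temperature T,
    with T gradually lowered so the solution hardens into distinct clusters."""
    rng = np.random.default_rng(0)
    centers = np.tile(X.mean(axis=0), (k, 1))
    T = T0
    while T > Tmin:
        centers = centers + 1e-6 * rng.standard_normal(centers.shape)   # break symmetry at each phase
        for _ in range(iters):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # squared distances
            logits = -d2 / T
            logits -= logits.max(axis=1, keepdims=True)
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)                                # soft ("fuzzified") assignments
            centers = (p.T @ X) / p.sum(axis=0)[:, None]                     # Gibbs-weighted centroid update
        T *= cooling                                                          # anneal
    return centers, p

# Toy data: three Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in [(-2.0, 0.0), (0.0, 2.0), (2.0, 0.0)]])
centers, assignments = deterministic_annealing(X)
print(np.round(centers, 2))
```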

  10. Deterministic behavioural models for concurrency

    DEFF Research Database (Denmark)

    Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn

    1993-01-01

    This paper offers three candidates for a deterministic, noninterleaving, behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled...... event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets....

  11. Height-Deterministic Pushdown Automata

    DEFF Research Database (Denmark)

    Nowotka, Dirk; Srba, Jiri

    2007-01-01

    We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class...... of regular languages and still closed under boolean language operations, are considered. Several of such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...

  12. Understanding deterministic diffusion by correlated random walks

    International Nuclear Information System (INIS)

    Klages, R.; Korabel, N.

    2002-01-01

    Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme of how to approximate deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples which are a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
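
    The idea can be illustrated numerically with a toy lifted map (our own example, not one of the paper's systems): estimate the diffusion coefficient from the mean-square displacement and compare it with truncations of the Taylor-Green-Kubo sum, whose lowest order is the uncorrelated random-walk approximation.

```python
import numpy as np

def lifted_map(x, a=3.0):
    """A piecewise-linear map on [0,1), lifted periodically to the real line.
    With slope a > 2 the iterates jump between unit cells, producing deterministic diffusion."""
    n = np.floor(x)
    r = x - n
    y = np.where(r < 0.5, a * r, a * (r - 1.0) + 1.0)
    return n + y

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=10000)            # ensemble of initial conditions in one cell
x0 = x.copy()
steps = 300
jumps = np.empty((steps, x.size))
for t in range(steps):
    x_new = lifted_map(x)
    jumps[t] = np.floor(x_new) - np.floor(x)  # integer "velocity": cell-to-cell jump per step
    x = x_new

# Diffusion coefficient from the mean-square displacement.
D_msd = ((x - x0) ** 2).mean() / (2 * steps)

# Taylor-Green-Kubo hierarchy: D ~ C(0)/2 + sum_k C(k), truncated at increasing lag;
# truncation at lag 0 is the uncorrelated random-walk approximation.
C = [np.mean(jumps[0] * jumps[k]) for k in range(10)]
for kmax in (0, 1, 2, 5, 9):
    D_k = C[0] / 2 + sum(C[1:kmax + 1])
    print(f"Green-Kubo truncated at lag {kmax}: D ~ {D_k:.3f}")
print(f"mean-square displacement estimate : D ~ {D_msd:.3f}")
```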

  13. CSL model checking of deterministic and stochastic Petri nets

    NARCIS (Netherlands)

    Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.

    2006-01-01

Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under

  14. Electric field control of deterministic current-induced magnetization switching in a hybrid ferromagnetic/ferroelectric structure

    Science.gov (United States)

    Cai, Kaiming; Yang, Meiyin; Ju, Hailang; Wang, Sumei; Ji, Yang; Li, Baohe; Edmonds, Kevin William; Sheng, Yu; Zhang, Bao; Zhang, Nan; Liu, Shuai; Zheng, Houzhi; Wang, Kaiyou

    2017-07-01

    All-electrical and programmable manipulations of ferromagnetic bits are highly pursued for the aim of high integration and low energy consumption in modern information technology. Methods based on the spin-orbit torque switching in heavy metal/ferromagnet structures have been proposed with magnetic field, and are heading toward deterministic switching without external magnetic field. Here we demonstrate that an in-plane effective magnetic field can be induced by an electric field without breaking the symmetry of the structure of the thin film, and realize the deterministic magnetization switching in a hybrid ferromagnetic/ferroelectric structure with Pt/Co/Ni/Co/Pt layers on PMN-PT substrate. The effective magnetic field can be reversed by changing the direction of the applied electric field on the PMN-PT substrate, which fully replaces the controllability function of the external magnetic field. The electric field is found to generate an additional spin-orbit torque on the CoNiCo magnets, which is confirmed by macrospin calculations and micromagnetic simulations.

  15. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    Science.gov (United States)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.
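
    One standard ingredient of such analyses is the Grassberger-Procaccia correlation dimension; a compact sketch follows (our illustration, run on a synthetic chaotic series rather than the Raba discharge record, with assumed embedding parameters):

```python
import numpy as np

def correlation_sum(x, dim, delay, radii, theiler=10):
    """Grassberger-Procaccia correlation sum C(r) for a delay-embedded scalar series."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    i, j = np.triu_indices(n, k=theiler)          # exclude temporally close pairs (Theiler window)
    d = d[i, j]
    return np.array([(d < r).mean() for r in radii])

# Synthetic test series: the chaotic logistic map (stand-in for a measured discharge record).
x = np.empty(1200)
x[0] = 0.4
for t in range(1, len(x)):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])

radii = np.logspace(-2, -0.5, 8)
C = correlation_sum(x, dim=4, delay=1, radii=radii)
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]   # scaling exponent ~ correlation dimension
print(f"estimated correlation dimension: {slope:.2f}")
```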

  16. Deterministic indexing for packed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye

    2017-01-01

Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time...... or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ...

  17. Strongly Deterministic Population Dynamics in Closed Microbial Communities

    Directory of Open Access Journals (Sweden)

    Zak Frentz

    2015-10-01

    Full Text Available Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.

  18. On Transform Domain Communication Systems under Spectrum Sensing Mismatch: A Deterministic Analysis.

    Science.gov (United States)

    Jin, Chuanxue; Hu, Su; Huang, Yixuan; Luo, Qu; Huang, Dan; Li, Yi; Gao, Yuan; Cheng, Shaochi

    2017-07-08

Towards the era of the mobile Internet and the Internet of Things (IoT), numerous sensors and devices are being introduced and interconnected. To support such an amount of data traffic, traditional wireless communication technologies face challenges both in terms of the increasing shortage of spectrum resources and massive multiple access. The transform-domain communication system (TDCS) is considered an alternative multiple access system, mainly aimed at 5G and the mobile IoT. However, previous studies of TDCS assume that the transceiver has global spectrum information, without consideration of spectrum sensing mismatch (SSM). In this paper, we present a deterministic analysis of TDCS systems under arbitrary given spectrum sensing scenarios, especially the influence of the SSM pattern on the signal-to-noise ratio (SNR) performance. Simulation results show that an arbitrary SSM pattern can lead to inferior bit error rate (BER) performance.

  19. Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes

    DEFF Research Database (Denmark)

    Starke, Jens; Reichert, Christian; Eiswirth, Markus

    2007-01-01

    Three levels of modeling, microscopic, mesoscopic and macroscopic are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can......, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...

  20. DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    MICULEAC Melania Elena

    2014-06-01

Full Text Available The deterministic methods are quantitative methods whose goal is to quantify numerically the mechanisms by which factorial and causal relations of influence and effect propagation arise and are expressed, in cases where the phenomenon can be described by a direct cause-effect functional relation. Functional, deterministic relations are causal relations in which a given value of the characteristic corresponds to a well-defined value of the resulting phenomenon. They can express directly the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.
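
    A classic deterministic technique of this kind is the chain-substitution decomposition of a factor model; a tiny illustration with invented figures (revenue = quantity x price):

```python
# Chain substitution: attribute the change in a result indicator to each factor in turn,
# substituting factors one at a time in a fixed order. Figures are invented for illustration.
q0, p0 = 1000, 5.0      # base period: quantity and unit price
q1, p1 = 1100, 5.4      # current period

r0 = q0 * p0            # 5000.0
r1 = q1 * p1            # 5940.0

effect_quantity = (q1 - q0) * p0        # substitute quantity first: +500.0
effect_price    = q1 * (p1 - p0)        # then price:               +440.0

assert abs((effect_quantity + effect_price) - (r1 - r0)) < 1e-9
print(f"total change {r1 - r0:+.1f} = quantity effect {effect_quantity:+.1f} "
      f"+ price effect {effect_price:+.1f}")
```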

  1. Ordinal optimization and its application to complex deterministic problems

    Science.gov (United States)

    Yang, Mike Shang-Yu

    1998-10-01

We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexity. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
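
    A toy sketch of the ordinal-optimization workflow (goal softening plus ordinal comparison); the models, sizes and thresholds below are assumptions for illustration, not the thesis's manufacturing model:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_eval(design):
    """Stand-in for a costly high-fidelity simulation (e.g. a long FEM run)."""
    return (design - 0.7) ** 2

def crude_eval(design):
    """Cheap, noisy surrogate: true performance plus white-noise-like model error."""
    return expensive_eval(design) + rng.normal(0.0, 0.05)

designs = rng.uniform(0, 1, size=10_000)               # large design space
crude_scores = np.array([crude_eval(d) for d in designs])

# Goal softening: accept any of the true top-g designs instead of insisting on the single best.
# Ordinal comparison: keep the s best designs *by crude rank*; only these get the expensive model.
g, s = 50, 20
selected_idx = np.argsort(crude_scores)[:s]
selected = designs[selected_idx]
true_top_g = set(np.argsort(expensive_eval(designs))[:g])
hits = sum(1 for i in selected_idx if i in true_top_g)
print(f"{hits} of the {s} crudely selected designs are within the true top {g}")
print("best expensive evaluation among selected:", expensive_eval(selected).min())
```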

  2. Deterministic and efficient quantum cryptography based on Bell's theorem

    International Nuclear Information System (INIS)

    Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg

    2006-01-01

    We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology

  3. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    2017-01-01

In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed....
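
    For concreteness, the additive model can be written out in standard CVAR notation (our rendering, not quoted from the paper):

```latex
% Additive model with general deterministic terms (standard CVAR notation):
\[
  X_t = Z_t + Y_t, \qquad
  \Delta Y_t = \alpha \beta' Y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \, \Delta Y_{t-i} + \varepsilon_t ,
\]
% where Z_t collects the deterministic regressors (constants, trends, breaks, ...) and
% Y_t is the zero-mean cointegrated VAR of lag length k with rank(\alpha \beta') = r.
```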

  4. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed....

  5. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2008-01-01

    We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical...... games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem....

  6. A Deterministic Annealing Approach to Clustering AIRS Data

    Science.gov (United States)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the deterministic annealing technique

  7. The dialectical thinking about deterministic and probabilistic safety analysis

    International Nuclear Information System (INIS)

    Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong

    2005-01-01

There are two methods for designing and analysing the safety performance of a nuclear power plant: the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants has been based on the deterministic method, and it has been proved in practice that the deterministic method is effective for current nuclear power plants. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the plant's systems and structures. PSA can be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the above two methods are reviewed and summarized in brief. Based on the discussion of two application cases - one concerning changes to specific design provisions of the general design criteria (GDC) and the other concerning the risk-informed categorization of structures, systems and components - it can be concluded that the deterministic method and the probabilistic method are dialectical and unified, that they are gradually being merged into each other, and that they are being used in coordination. (authors)

  8. Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2014-12-01

Full Text Available This paper presents experiences from the deterministic and probabilistic analysis of the reliability of a communication bridge structure's resistance to extreme loads - wind and earthquake. Using the example of the steel bridge between two NPP buildings, the efficiency of the bracing systems is considered. The advantages and disadvantages of deterministic and probabilistic analysis of structural resistance are discussed. The advantages of utilizing the LHS method to analyze the safety and reliability of structures are presented

  9. Risk-based and deterministic regulation

    International Nuclear Information System (INIS)

    Fischer, L.E.; Brown, N.W.

    1995-07-01

    Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose

  10. Pseudo-deterministic Algorithms

    OpenAIRE

Goldwasser, Shafi

    2012-01-01

In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...

  11. Classification of Underlying Causes of Power Quality Disturbances: Deterministic versus Statistical Methods

    Directory of Open Access Journals (Sweden)

    Emmanouil Styvaktakis

    2007-01-01

Full Text Available This paper presents the two main types of classification methods for power quality disturbances based on underlying causes: deterministic classification, with an expert system as an example, and statistical classification, with support vector machines (a novel method) as an example. An expert system is suitable when one has a limited amount of data and sufficient power system expert knowledge; however, its application requires a set of threshold values. Statistical methods are suitable when a large amount of data is available for training. Two important issues for guaranteeing the effectiveness of a classifier, data segmentation and feature extraction, are discussed. Segmentation of a recorded data sequence is a preprocessing step that partitions the data into segments, each representing a duration containing either an event or a transition between two events. Extraction of features is applied to each segment individually. Some useful features and their effectiveness are then discussed. Some experimental results are included to demonstrate the effectiveness of both systems. Finally, conclusions are given together with a discussion of some future research directions.
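
    The statistical branch of this pipeline (segmentation, simple per-segment features, SVM classification) can be sketched as follows; the waveforms, features and labels are synthetic placeholders, not the paper's data or feature set:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, f0 = 3200, 50                       # sampling rate and fundamental frequency (assumed)

def make_waveform(kind, cycles=20):
    t = np.arange(cycles * fs // f0) / fs
    v = np.sin(2 * np.pi * f0 * t)
    if kind == "sag":                   # voltage dip in the middle of the record
        v[len(v) // 3: 2 * len(v) // 3] *= 0.6
    elif kind == "swell":               # voltage rise in the middle of the record
        v[len(v) // 3: 2 * len(v) // 3] *= 1.3
    return v + 0.01 * rng.standard_normal(v.size)

def features(v, n_seg=10):
    """Segment the record and extract per-segment RMS; summarize as simple features."""
    segs = np.array_split(v, n_seg)
    rms = np.array([np.sqrt(np.mean(s ** 2)) for s in segs])
    return [rms.min(), rms.max(), rms.std()]

kinds = ["normal", "sag", "swell"]
X, y = [], []
for label, kind in enumerate(kinds):
    for _ in range(200):
        X.append(features(make_waveform(kind)))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```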

  12. Applicability of deterministic methods in seismic site effects modeling

    International Nuclear Information System (INIS)

    Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.

    2005-01-01

The up-to-date information related to the local geological structure in the Bucharest urban area has been integrated in complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations over the whole city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong motion records in the frequency domain (0.05-1) Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique, applied to model the seismic wave propagation between the seismic source and the studied sites, with the mode coupling approach used to model the seismic wave propagation through the local sedimentary structure of the target site, allow the modelling to be extended to higher frequencies of earthquake engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last three strong Vrancea events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)

  13. A deterministic model for the growth of non-conducting electrical tree structures

    International Nuclear Information System (INIS)

    Dodd, S J

    2003-01-01

Electrical treeing is of interest to the electrical generation, transmission and distribution industries as it is one of the causes of insulation failure in electrical machines, switchgear and transformer bushings. In this paper a deterministic electrical tree growth model is described. The model is based on electrostatics and local electron avalanches to model partial discharge activity within the growing tree structure. Damage to the resin surrounding the tree structure is dependent on the local electrostatic energy dissipation by partial discharges within the tree structure and weighted by the magnitudes of the local electric fields in the resin surrounding the tree structure. The model is successful in simulating the formation of branched structures without the need for a random variable, a requirement of previous stochastic models. Instability in the spatial development of partial discharges within the tree structure takes the role of the stochastic element as used in previous models to produce branched tree structures. The simulated electrical trees conform to the experimentally observed behaviour: tree length versus time and electrical tree growth rate as a function of applied voltage for non-conducting electrical trees. The phase synchronous partial discharge activity and the spatial distribution of emitted light from the tree structure are also in agreement with experimental data for non-conducting trees as grown in a flexible epoxy resin and in polyethylene. The fact that similar tree growth behaviour is found using pure amorphous (epoxy resin) and semicrystalline (polyethylene) materials demonstrates that neither annealed nor quenched noise, representing material inhomogeneity, is required for the formation of irregular branched structures (electrical trees). Instead, as shown in this paper, branched growth can occur due to the instability of individual discharges within the tree structure

  14. Integrated Deterministic-Probabilistic Safety Assessment Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.

    2014-02-01

    IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address respective sources of uncertainties, enabling Risk informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)

  15. Topology optimization under stochastic stiffness

    Science.gov (United States)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations

  16. Deterministic extraction from weak random sources

    CERN Document Server

    Gabizon, Ariel

    2011-01-01

    In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.

  17. Simulation of photonic waveguides with deterministic aperiodic nanostructures for biosensing

    DEFF Research Database (Denmark)

    Neustock, Lars Thorben; Paulsen, Moritz; Jahns, Sabrina

    2016-01-01

    Photonic waveguides with deterministic aperiodic corrugations offer rich spectral characteristics under surface-normal illumination. The finite-element method (FEM), the finite-difference time-domain (FDTD) method and a rigorous coupled wave algorithm (RCWA) are compared for computing the near...

  18. Deterministic and stochastic analysis of alternative climate targets under differentiated cooperation regimes

    International Nuclear Information System (INIS)

    Loulou, Richard; Labriet, Maryse; Kanudia, Amit

    2009-01-01

This article analyzes the feasibility of attaining a variety of climate targets during the 21st century, under alternative cooperation regimes by groups of countries. Five climate targets of increasing severity are analyzed, following the EMF-22 experiment. Each target is attempted under two cooperation regimes, a First Best scenario where all countries fully cooperate from 2012 on, and a Second Best scenario where the World is partitioned into three groups, and each group of countries enters the cooperation at a different date and implements emission abatement actions in a progressive manner, once in the coalition. The resulting ten combinations are simulated via the ETSAP-TIAM technology-based integrated assessment model. In addition to the 10 separate case analyses, the article proposes a probabilistic treatment of three targets under the First Best scenario, and shows that the three forcing targets may in fact be interpreted as a single target on global temperature change, while assuming that the climate sensitivity Cs is uncertain. It is shown that such an interpretation is possible only if the probability distribution of Cs is carefully chosen. The analysis of the results shows that the lowest forcing level is unattainable unless immediate coordinated action is undertaken by all countries, and even so only at a high global cost. The middle and the high forcing levels are feasible at affordable global costs, even under the Second Best scenario. Another original contribution of this article is to explain why certain combinations of technological choices are made by the model, and in particular why the climate target clearly supersedes the usually accepted objective of improving energy efficiency. The analysis shows that under some climate targets, it is not optimal to improve energy efficiency, but rather to take advantage of certain technologies that help to reach the climate objective, but that happen to be less energy efficient than even the technologies

  19. Method to deterministically study photonic nanostructures in different experimental instruments

    NARCIS (Netherlands)

    Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the

  20. A continuous variable quantum deterministic key distribution based on two-mode squeezed states

    International Nuclear Information System (INIS)

    Gong, Li-Hua; Song, Han-Chong; Liu, Ye; Zhou, Nan-Run; He, Chao-Sheng

    2014-01-01

    The distribution of deterministic keys is of significance in personal communications, but the existing continuous variable quantum key distribution protocols can only generate random keys. By exploiting the entanglement properties of two-mode squeezed states, a continuous variable quantum deterministic key distribution (CVQDKD) scheme is presented for handing over the pre-determined key to the intended receiver. The security of the CVQDKD scheme is analyzed in detail from the perspective of information theory. It shows that the scheme can securely and effectively transfer pre-determined keys under ideal conditions. The proposed scheme can resist both the entanglement and beam splitter attacks under a relatively high channel transmission efficiency. (paper)

  1. Experimental aspects of deterministic secure quantum key distribution

    Energy Technology Data Exchange (ETDEWEB)

    Walenta, Nino; Korn, Dietmar; Puhlmann, Dirk; Felbinger, Timo; Hoffmann, Holger; Ostermeyer, Martin [Universitaet Potsdam (Germany). Institut fuer Physik; Bostroem, Kim [Universitaet Muenster (Germany)

    2008-07-01

    Most common protocols for quantum key distribution (QKD) use non-deterministic algorithms to establish a shared key. But deterministic implementations can allow for higher net key transfer rates and eavesdropping detection rates. The Ping-Pong coding scheme by Bostroem and Felbinger[1] employs deterministic information encoding in entangled states with its characteristic quantum channel from Bob to Alice and back to Bob. Based on a table-top implementation of this protocol with polarization-entangled photons fundamental advantages as well as practical issues like transmission losses, photon storage and requirements for progress towards longer transmission distances are discussed and compared to non-deterministic protocols. Modifications of common protocols towards a deterministic quantum key distribution are addressed.

  2. Implemented state automorphisms within the logico-algebraic approach to deterministic mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Barone, F [Naples Univ. (Italy). Ist. di Matematica della Facolta di Scienze

    1981-01-31

The new notion of S₁-implemented state automorphism is introduced and characterized in quantum logic. Implemented pure state automorphisms are then characterized in deterministic mechanics as automorphisms of the Borel structure on the phase space.

  3. Offshore platforms and deterministic ice actions: Kashagan phase 2 development: North Caspian Sea.

    Energy Technology Data Exchange (ETDEWEB)

    Croasdale, Ken [KRCA, Calgary (Canada); Jordaan, Ian [Ian Jordaan and Associates, St John' s (Canada); Verlaan, Paul [Shell Development Kashagan, London (United Kingdom)

    2011-07-01

The Kashagan development has to face the difficult conditions of the northern Caspian Sea. This paper investigated ice interaction scenarios and deterministic methods used in platform designs for the Kashagan development. The study first presents a review of the types of platforms in use and being designed for the Kashagan development. The various ice load scenarios and the structures used in each case are discussed. Vertical faced barriers, mobile drilling barges and sheet pile islands were used for the ice loads on vertical structures. Sloping faced barriers and islands of rock were used for the ice loads on sloping structures. Deterministic models such as the model in ISO 19906 were used to calculate the loads occurring with or without ice rubble in front of the structure. The results showed the importance of rubble build-up in front of wide structures in shallow water. Recommendations were provided for building efficient vertical and sloping faced barriers.

  4. Dynamic analysis and reliability assessment of structures with uncertain-but-bounded parameters under stochastic process excitations

    International Nuclear Information System (INIS)

    Do, Duy Minh; Gao, Wei; Song, Chongmin; Tangaramvong, Sawekchai

    2014-01-01

    This paper presents the non-deterministic dynamic analysis and reliability assessment of structures with uncertain-but-bounded parameters under stochastic process excitations. Random ground acceleration from earthquake motion is adopted to illustrate the stochastic process force. The exact change ranges of natural frequencies, random vibration displacement and stress responses of structures are investigated under the interval analysis framework. Formulations for structural reliability are developed considering the safe boundary and structural random vibration responses as interval parameters. An improved particle swarm optimization algorithm, namely randomised lower sequence initialized high-order nonlinear particle swarm optimization algorithm, is employed to capture the better bounds of structural dynamic characteristics, random vibration responses and reliability. Three numerical examples are used to demonstrate the presented method for interval random vibration analysis and reliability assessment of structures. The accuracy of the results obtained by the presented method is verified by the randomised Quasi-Monte Carlo simulation method (QMCSM) and direct Monte Carlo simulation method (MCSM). - Highlights: • Interval uncertainty is introduced into structural random vibration responses. • Interval dynamic reliability assessments of structures are implemented. • Boundaries of structural dynamic response and reliability are achieved

  5. An algebraic approach to linear-optical schemes for deterministic quantum computing

    International Nuclear Information System (INIS)

    Aniello, Paolo; Cagli, Ruben Coen

    2005-01-01

Linear-optical passive (LOP) devices and photon counters are sufficient to implement universal quantum computation with single photons, and particular schemes have already been proposed. In this paper we discuss the link between the algebraic structure of LOP transformations and quantum computing. We first show how to decompose the Fock space of N optical modes into finite-dimensional subspaces that are suitable for encoding strings of qubits and invariant under LOP transformations (these subspaces are related to the spaces of irreducible unitary representations of U(N)). Next we show how to design, in algorithmic fashion, LOP circuits which implement any quantum circuit deterministically. We also present some simple examples, such as the circuits implementing a cNOT gate and a Bell state generator/analyser

  6. Deterministic methods in radiation transport

    International Nuclear Information System (INIS)

    Rice, A.F.; Roussin, R.W.

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community

  7. Top-down fabrication of plasmonic nanostructures for deterministic coupling to single quantum emitters

    NARCIS (Netherlands)

    Pfaff, W.; Vos, A.; Hanson, R.

    2013-01-01

    Metal nanostructures can be used to harvest and guide the emission of single photon emitters on-chip via surface plasmon polaritons. In order to develop and characterize photonic devices based on emitter-plasmon hybrid structures, a deterministic and scalable fabrication method for such structures

  8. Design of deterministic interleaver for turbo codes

    International Nuclear Information System (INIS)

    Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.

    2008-01-01

The choice of a suitable interleaver for turbo codes can improve performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The design and performance of a deterministic interleaver for short-frame turbo codes are considered in this paper. The main characteristic of this class of deterministic interleavers is that their algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that an interleaver designed in this manner improves the minimum distance or reduces the multiplicity of the first few spectral lines of the minimum distance spectrum. Finally, we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames, improving the decoding capability of the MAP (Maximum A Posteriori) probability decoder. Our deterministic interleaver design outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
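
    As an illustration of an algebraically defined deterministic interleaver, the sketch below uses a quadratic permutation polynomial (QPP) with a known valid parameter set; this is a generic example, not necessarily the design proposed in this paper:

```python
# Quadratic permutation polynomial (QPP) interleaver: pi(i) = (f1*i + f2*i^2) mod K.
# With a valid (K, f1, f2) choice the mapping is a bijection on {0, ..., K-1},
# so no permutation table needs to be stored or transmitted.

def qpp_interleave(bits, f1, f2):
    K = len(bits)
    perm = [(f1 * i + f2 * i * i) % K for i in range(K)]
    assert len(set(perm)) == K, "parameters do not define a permutation for this K"
    return [bits[p] for p in perm], perm

data = list(range(40))                       # one short information frame (K = 40)
interleaved, perm = qpp_interleave(data, f1=3, f2=10)
print(perm[:10])

# Deinterleaving simply inverts the permutation:
deinterleaved = [None] * len(data)
for out_pos, src in enumerate(perm):
    deinterleaved[src] = interleaved[out_pos]
assert deinterleaved == data
```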

  9. Deterministic chaos at the ocean surface: applications and interpretations

    Directory of Open Access Journals (Sweden)

    A. J. Palmer

    1998-01-01

    Full Text Available Ocean surface, grazing-angle radar backscatter data from two separate experiments, one of which provided coincident time series of measured surface winds, were found to exhibit signatures of deterministic chaos. Evidence is presented that the lowest dimensional underlying dynamical system responsible for the radar backscatter chaos is that which governs the surface wind turbulence. Block-averaging time was found to be an important parameter for determining the degree of determinism in the data as measured by the correlation dimension, and by the performance of an artificial neural network in retrieving wind and stress from the radar returns, and in radar detection of an ocean internal wave. The correlation dimensions are lowered and the performance of the deterministic retrieval and detection algorithms are improved by averaging out the higher dimensional surface wave variability in the radar returns.

  10. Proving Non-Deterministic Computations in Agda

    Directory of Open Access Journals (Sweden)

    Sergio Antoy

    2017-01-01

    Full Text Available We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, are in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct and competitive approaches incorporating the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.

  11. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    Science.gov (United States)

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.

  12. Deterministic effects of interventional radiology procedures

    International Nuclear Information System (INIS)

    Shope, Thomas B.

    1997-01-01

    The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure or equipment-related factors that may have contributed to the injury. Reports submitted to the FDA under both mandatory and voluntary reporting requirements which described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissues necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures which have required extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)

  13. Deterministic Echo State Networks Based Stock Price Forecasting

    Directory of Open Access Journals (Sweden)

    Jingpei Dan

    2014-01-01

    Full Text Available Echo state networks (ESNs), as efficient and powerful computational models for approximating nonlinear dynamical systems, have been successfully applied in financial time series forecasting. Reservoir construction in standard ESNs relies on trial and error in real applications due to a series of randomized model building stages. A novel form of ESN with a deterministically constructed reservoir is competitive with the standard ESN while offering minimal complexity and the possibility of optimizing ESN specifications. In this paper, the forecasting performance of deterministic ESNs is investigated in stock price prediction applications. The experimental results on two benchmark datasets (Shanghai Composite Index and S&P500) demonstrate that deterministic ESNs outperform the standard ESN in both accuracy and efficiency, which indicates the promise of deterministic ESNs for financial prediction.
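
    As a hedged illustration of such a deterministically constructed reservoir, the Python sketch below builds a simple cycle reservoir with a ridge-regression readout; the reservoir size, cycle weight and alternating input signs are assumptions for illustration, not the paper's exact construction:

      import numpy as np

      def make_cycle_reservoir(n, r=0.9):
          """Deterministic 'simple cycle' reservoir: a ring of identical weights r."""
          W = np.zeros((n, n))
          for i in range(n):
              W[(i + 1) % n, i] = r
          return W

      def esn_one_step_forecast(series, n_res=100, washout=50, ridge=1e-6):
          """Train a linear readout to predict series[t+1] from the reservoir state at t."""
          u = np.asarray(series, dtype=float)
          W = make_cycle_reservoir(n_res)
          w_in = 0.1 * np.where(np.arange(n_res) % 2 == 0, 1.0, -1.0)  # deterministic input signs (assumed)
          states, targets = [], []
          x = np.zeros(n_res)
          for t in range(len(u) - 1):
              x = np.tanh(W @ x + w_in * u[t])
              if t >= washout:
                  states.append(x.copy())
                  targets.append(u[t + 1])
          X, y = np.array(states), np.array(targets)
          w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # ridge-regression readout
          return X @ w_out, y  # in-sample one-step-ahead predictions and targets

      # Toy usage on a synthetic series (a real application would use price data).
      preds, targets = esn_one_step_forecast(np.sin(0.1 * np.arange(400)))
      print(float(np.mean((preds - targets) ** 2)))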

  14. Deterministic chaos in the pitting phenomena of passivable alloys

    International Nuclear Information System (INIS)

    Hoerle, Stephane

    1998-01-01

    It was shown that electrochemical noise recorded in stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the material/solution severity. Thus, electrolyte composition ([Cl-]/[NO3-] ratio, pH), passive film thickness or alloy composition can change the deterministic features. Only one pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which is a hint of a change with time in the pit behavior (propagation speed or mean). Modifications of the electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors with different features (bifurcations) are also evidenced. Such bifurcations illuminate various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe quite well the experimental behaviors and the effect of the various parameters. The analysis of the chaotic behaviors of a pit leads to a better understanding of propagation mechanisms and gives tools for pit monitoring. (author) [fr

  15. Entry and exit decisions under uncertainty

    DEFF Research Database (Denmark)

    Kongsted, Hans Christian

    1996-01-01

    This paper establishes the general deterministic limit that corresponds to Dixit's model of entry and exit decisions under uncertainty. The interlinked nature of decisions is shown to be essential also in the deterministic limit. A numerical example illustrates the result.

  16. The probabilistic approach and the deterministic licensing procedure

    International Nuclear Information System (INIS)

    Fabian, H.; Feigel, A.; Gremm, O.

    1984-01-01

    If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks ''What does a safe plant look like?'' The answer cannot be given by a probabilistic procedure, but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are not complete, not detailed enough or not consistent, and where additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But also in these cases the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)

  17. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    Directory of Open Access Journals (Sweden)

    Scott Ferrenberg

    2016-10-01

    Full Text Available Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (Beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the

  18. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    Science.gov (United States)

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (Beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod
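
    For readers unfamiliar with the null-deviation approach mentioned in both records, the hedged Python sketch below (abundance-based, with an assumed randomization scheme) illustrates comparing an observed pairwise dissimilarity with its expectation under random assembly:

      import numpy as np

      rng = np.random.default_rng(0)

      def bray_curtis(a, b):
          """Pairwise Bray-Curtis dissimilarity between two abundance vectors."""
          return np.abs(a - b).sum() / (a + b).sum()

      def null_deviation(sample1, sample2, n_null=999):
          """Observed dissimilarity minus its mean under random shuffling of individuals.

          Values near zero suggest stochastic assembly; strong departures suggest
          deterministic assembly. Sample totals and the species pool are preserved.
          """
          sample1 = np.asarray(sample1, dtype=int)
          sample2 = np.asarray(sample2, dtype=int)
          observed = bray_curtis(sample1, sample2)
          pool = np.concatenate([np.repeat(np.arange(sample1.size), sample1),
                                 np.repeat(np.arange(sample2.size), sample2)])
          null_values = []
          for _ in range(n_null):
              rng.shuffle(pool)
              s1 = np.bincount(pool[:sample1.sum()], minlength=sample1.size)
              s2 = np.bincount(pool[sample1.sum():], minlength=sample1.size)
              null_values.append(bray_curtis(s1, s2))
          return observed - np.mean(null_values)

      print(null_deviation([10, 0, 5, 1], [0, 8, 4, 2]))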

  19. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-01-01

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  20. Nonlinear dynamic analysis of atomic force microscopy under deterministic and random excitation

    International Nuclear Information System (INIS)

    Pishkenari, Hossein Nejat; Behzad, Mehdi; Meghdari, Ali

    2008-01-01

    The atomic force microscope (AFM) system has evolved into a useful tool for direct measurements of intermolecular forces with atomic-resolution characterization that can be employed in a broad spectrum of applications. This paper is devoted to the analysis of the nonlinear behavior of the amplitude modulation (AM) and frequency modulation (FM) modes of atomic force microscopy. For this, the microcantilever (which forms the basis for the operation of AFM) is modeled as a single mode approximation and the interaction between the sample and cantilever is derived from a van der Waals potential. Using perturbation methods such as averaging and the Fourier transform, the nonlinear equations of motion are analytically solved and the advantageous results are extracted from this nonlinear analysis. The results of the proposed techniques for AM-AFM clearly depict the existence of two stable and one unstable (saddle) solutions for some of the exciting parameters under deterministic vibration. The basins of attraction of the two stable solutions are different and depend on the exciting frequency. From this analysis the range of the frequency which will result in a unique periodic response can be obtained and used in practical experiments. Furthermore, the analytical responses determined by perturbation techniques can be used to detect the parameter region where chaotic motion is avoided. On the other hand, for FM-AFM, the relation between the frequency shift and the system parameters can be extracted and used for investigation of the system's nonlinear behavior. The nonlinear behavior of the oscillating tip can easily explain the observed shift of frequency as a function of tip-sample distance. Also in this paper we have investigated the AM-AFM system response under a random excitation. Using two different methods we have obtained the statistical properties of the tip motion. The results show that we can use the mean square value of the tip motion to image the sample when the excitation signal is random.

  1. Nonlinear dynamic analysis of atomic force microscopy under deterministic and random excitation

    Energy Technology Data Exchange (ETDEWEB)

    Pishkenari, Hossein Nejat [Center of Excellence in Design, Robotics and Automation (CEDRA), School of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Behzad, Mehdi [Center of Excellence in Design, Robotics and Automation (CEDRA), School of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)], E-mail: m_behzad@sharif.edu; Meghdari, Ali [Center of Excellence in Design, Robotics and Automation (CEDRA), School of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)

    2008-08-15

    The atomic force microscope (AFM) system has evolved into a useful tool for direct measurements of intermolecular forces with atomic-resolution characterization that can be employed in a broad spectrum of applications. This paper is devoted to the analysis of the nonlinear behavior of the amplitude modulation (AM) and frequency modulation (FM) modes of atomic force microscopy. For this, the microcantilever (which forms the basis for the operation of AFM) is modeled as a single mode approximation and the interaction between the sample and cantilever is derived from a van der Waals potential. Using perturbation methods such as averaging and the Fourier transform, the nonlinear equations of motion are analytically solved and the advantageous results are extracted from this nonlinear analysis. The results of the proposed techniques for AM-AFM clearly depict the existence of two stable and one unstable (saddle) solutions for some of the exciting parameters under deterministic vibration. The basins of attraction of the two stable solutions are different and depend on the exciting frequency. From this analysis the range of the frequency which will result in a unique periodic response can be obtained and used in practical experiments. Furthermore, the analytical responses determined by perturbation techniques can be used to detect the parameter region where chaotic motion is avoided. On the other hand, for FM-AFM, the relation between the frequency shift and the system parameters can be extracted and used for investigation of the system's nonlinear behavior. The nonlinear behavior of the oscillating tip can easily explain the observed shift of frequency as a function of tip-sample distance. Also in this paper we have investigated the AM-AFM system response under a random excitation. Using two different methods we have obtained the statistical properties of the tip motion. The results show that we can use the mean square value of the tip motion to image the sample when the excitation signal is random.
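
    A hedged sketch of the kind of single-mode model analysed here is shown below: a driven, damped oscillator with an attractive van der Waals tip-sample force, integrated numerically in Python (SciPy assumed available); all parameter values are illustrative, not taken from the article:

      import numpy as np
      from scipy.integrate import solve_ivp

      k, m, Q = 40.0, 1.0e-11, 400.0       # cantilever stiffness (N/m), effective mass (kg), quality factor
      w0 = np.sqrt(k / m)                  # resonance frequency (rad/s)
      H, R, z0 = 1e-19, 10e-9, 20e-9       # Hamaker constant (J), tip radius (m), rest tip-sample gap (m)
      F0, w = 1e-9, w0                     # drive force amplitude (N) and drive frequency (rad/s)

      def rhs(t, y):
          """Single-mode AM-AFM equation of motion with a sphere-plane van der Waals force."""
          x, v = y                          # x: tip displacement away from the sample, v: velocity
          gap = z0 + x                      # instantaneous tip-sample separation
          f_vdw = -H * R / (6.0 * gap**2)   # attractive van der Waals force (valid while gap > 0)
          a = (-k * x - (m * w0 / Q) * v + F0 * np.cos(w * t) + f_vdw) / m
          return [v, a]

      sol = solve_ivp(rhs, (0.0, 100 * 2 * np.pi / w0), [0.0, 0.0], max_step=2 * np.pi / (50 * w0))
      print(sol.y[0].max())                 # rough oscillation amplitude after the simulated interval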

  2. A Deterministic Safety Assessment of a Pyro-processed Waste Repository

    International Nuclear Information System (INIS)

    Lee, Youn Myoung; Jeong, Jong Tae; Choi, Jong Won

    2012-01-01

    A GoldSim template program has been developed for the safety assessment of a hybrid-type repository system, called 'A-KRS', in which two kinds of pyro-processed radioactive wastes, low-level metal wastes and ceramic high-level wastes arising from the pyro-processing of PWR spent nuclear fuels, are disposed of. This program is ready for both deterministic and probabilistic total system performance assessment and is able to evaluate nuclide release from the repository and further transport into the geosphere and biosphere under various normal and disruptive natural and man-made events and scenarios. The A-KRS has been deterministically assessed with 5 different normal and abnormal scenarios associated with nuclide release and transport in and around the repository. Dose exposure rates to the farming exposure group have been evaluated for all the scenarios and then compared with one another.

  3. Deterministic and stochastic trends in the Lee-Carter mortality model

    DEFF Research Database (Denmark)

    Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene

    The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model... that characterizes mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics, and we suggest separating the deterministic and stochastic time series components to the benefit of improved fit and forecasting performance. In fact, we find... that the classical Lee-Carter model will otherwise overestimate the reduction of mortality for the younger age groups and underestimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee-Carter model, instead of a one-factor model, should be formulated...
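
    To make the one-factor structure concrete, here is a hedged Python sketch of the classical Lee-Carter fit and the random-walk-with-drift forecast of the period index; the SVD-based fit and drift estimator are the standard textbook choices, not necessarily the exact procedure used in the paper, and the synthetic data are purely illustrative:

      import numpy as np

      def fit_lee_carter(log_m):
          """Fit log m(x,t) = a_x + b_x * k_t from an ages-by-years log-mortality matrix."""
          a = log_m.mean(axis=1)
          U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
          b = U[:, 0] / U[:, 0].sum()                 # normalise so the age loadings sum to 1
          k = s[0] * Vt[0] * U[:, 0].sum()            # period index with matching scaling
          return a, b, k

      def forecast_k(k, horizon):
          """Random-walk-with-drift forecast of the period index k_t."""
          drift = (k[-1] - k[0]) / (len(k) - 1)
          return k[-1] + drift * np.arange(1, horizon + 1)

      # Example with a synthetic 5-age x 30-year log-mortality surface.
      ages, years = 5, 30
      k_true = -0.5 * np.arange(years)
      log_m = np.linspace(-6, -2, ages)[:, None] + np.linspace(0.1, 0.3, ages)[:, None] * k_true
      a, b, k = fit_lee_carter(log_m)
      print(forecast_k(k, horizon=5))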

  4. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  5. Deterministic chaos in the processor load

    International Nuclear Information System (INIS)

    Halbiniak, Zbigniew; Jozwiak, Ireneusz J.

    2007-01-01

    In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case

  6. Shock-induced explosive chemistry in a deterministic sample configuration.

    Energy Technology Data Exchange (ETDEWEB)

    Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III (,; ); Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith

    2005-10-01

    Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.

  7. Deterministic chaos and fractal complexity in the dynamics of cardiovascular behavior: perspectives on a new frontier.

    Science.gov (United States)

    Sharma, Vijay

    2009-09-10

    Physiological systems such as the cardiovascular system are capable of five kinds of behavior: equilibrium, periodicity, quasi-periodicity, deterministic chaos and random behavior. Systems adopt one or more of these behaviors depending on the function they have evolved to perform. The emerging mathematical concepts of fractal mathematics and chaos theory are extending our ability to study physiological behavior. Fractal geometry is observed in the physical structure of pathways, networks and macroscopic structures such as the vasculature and the His-Purkinje network of the heart. Fractal structure is also observed in processes in time, such as heart rate variability. Chaos theory describes the underlying dynamics of the system, and chaotic behavior is also observed at many levels, from effector molecules in the cell to heart function and blood pressure. This review discusses the role of fractal structure and chaos in the cardiovascular system at the level of the heart and blood vessels, and at the cellular level. Key functional consequences of these phenomena are highlighted, and a perspective provided on the possible evolutionary origins of chaotic behavior and fractal structure. The discussion is non-mathematical with an emphasis on the key underlying concepts.

  8. Contribution of the deterministic approach to the characterization of seismic input

    International Nuclear Information System (INIS)

    Panza, G.F.; Romanelli, F.; Vaccari, F.; Decanini, L.; Mollaioli, F.

    1999-10-01

    Traditional methods use either a deterministic or a probabilistic approach, based on empirically derived laws for ground motion attenuation. The realistic definition of seismic input can be performed by means of advanced modelling codes based on the modal summation technique. These codes and their extension to laterally heterogeneous structures allow us to accurately calculate synthetic signals, complete with body waves and surface waves, corresponding to different source and anelastic structural models, taking into account the effect of local geological conditions. This deterministic approach is capable of addressing some aspects largely overlooked in the probabilistic approach: (a) the effects of crustal properties on attenuation are not neglected; (b) the ground motion parameters are derived from synthetic time histories, and not from overly simplified attenuation functions; (c) the resulting maps are in terms of design parameters directly, and do not require the adaptation of probabilistic maps to design ground motions; and (d) such maps address the issue of the deterministic definition of ground motion in a way which permits the generalization of design parameters to locations where there is little seismic history. The methodology has been applied to a large part of south-eastern Europe, in the framework of the EU-COPERNICUS project 'Quantitative Seismic Zoning of the Circum Pannonian Region'. Maps of various numerically modelled seismic hazard parameters (peak ground displacement, velocity and acceleration), tested against observations whenever possible and of practical use for the design of earthquake-safe structures, have been produced. The results of a standard probabilistic approach are compared with the findings based on the deterministic approach. A good agreement is obtained except for the Vrancea (Romania) zone, where the attenuation relations used in the probabilistic approach seem to underestimate, mainly at large distances, the seismic hazard

  9. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading...... to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models...

  10. Deterministic Compressed Sensing

    Science.gov (United States)

    2011-11-01

    [Extraction fragments from the report's front matter: application chapters on digital communications and group testing; bounds for deterministic design matrices (O() constants ignored); a list of algorithms including the Iterative Hard Thresholding Algorithm; and the statement that sparse sensing is information-theoretically possible using any (2k, )-RIP sensing matrix, following the celebrated results of Candès, Romberg and Tao.]
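
    Since the surviving fragment names the Iterative Hard Thresholding Algorithm, a minimal hedged Python sketch of that algorithm follows; the step size, iteration count and the toy example are illustrative assumptions:

      import numpy as np

      def iht(A, y, k, iters=200, step=None):
          """Iterative Hard Thresholding: recover a k-sparse x from y = A @ x."""
          m, n = A.shape
          # Conservative step size based on the spectral norm of A (assumed choice).
          step = step if step is not None else 1.0 / np.linalg.norm(A, 2) ** 2
          x = np.zeros(n)
          for _ in range(iters):
              x = x + step * A.T @ (y - A @ x)      # gradient step on 0.5 * ||y - Ax||^2
              keep = np.argsort(np.abs(x))[-k:]     # indices of the k largest-magnitude entries
              pruned = np.zeros(n)
              pruned[keep] = x[keep]                # hard thresholding to sparsity k
              x = pruned
          return x

      # Toy recovery example with a random sensing matrix.
      rng = np.random.default_rng(0)
      A = rng.normal(size=(50, 200)) / np.sqrt(50)
      x_true = np.zeros(200)
      x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
      print(np.round(iht(A, A @ x_true, k=3)[[3, 77, 150]], 2))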

  11. SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations

    International Nuclear Information System (INIS)

    Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir

    2014-01-01

    CADIS and also analog MC simulations, the FW-CADIS drastically improved MC dose rate calculations in quality as well as in quantity. Large shielding problems, such as portions of or the complete PWR facility, require not only extensive computational resources but also an understanding of the underlying physics, which is indispensable when interpreting the results of the hybrid deterministic-stochastic methodology. (authors)

  12. Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.

    NARCIS (Netherlands)

    Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.

    1996-01-01

    Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate

  13. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  14. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody; Tembine, Hamidou; Tempone, Raul

    2016-01-01

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
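
    For context, the sketch below shows a standard stochastic EnKF analysis step in Python, the baseline against which the deterministic mean-field approximation is compared; the linear observation operator H, noise covariance R and the toy example are assumptions of this illustration:

      import numpy as np

      def enkf_analysis(ensemble, y, H, R, rng):
          """One stochastic EnKF analysis step for an ensemble of shape (d, N)."""
          N = ensemble.shape[1]
          anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
          P = anomalies @ anomalies.T / (N - 1)                 # sample forecast covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
          perturbed_obs = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=N).T
          return ensemble + K @ (perturbed_obs - H @ ensemble)  # updated (analysis) ensemble

      # Example with a 2-dimensional state observed directly.
      rng = np.random.default_rng(0)
      ens = rng.normal(size=(2, 50))
      updated = enkf_analysis(ens, y=np.array([0.5, -0.2]), H=np.eye(2), R=0.1 * np.eye(2), rng=rng)
      print(updated.mean(axis=1))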

  15. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-01-01

    Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig

  16. Recognition of deterministic ETOL languages in logarithmic space

    DEFF Research Database (Denmark)

    Jones, Neil D.; Skyum, Sven

    1977-01-01

    It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian...

  17. Pseudo-random number generator based on asymptotic deterministic randomness

    Science.gov (United States)

    Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming

    2008-06-01

    A novel approach to generating the pseudorandom-bit sequence from an asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness, such as the stationary probability density function and random-like behavior, are investigated numerically. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.

  18. Pseudo-random number generator based on asymptotic deterministic randomness

    International Nuclear Information System (INIS)

    Wang Kai; Pei Wenjiang; Xia Haishan; Cheung Yiuming

    2008-01-01

    A novel approach to generating the pseudorandom-bit sequence from an asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness, such as the stationary probability density function and random-like behavior, are investigated numerically. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
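
    As a rough illustration of this kind of construction, the Python sketch below iterates a piecewise linear (skew tent) map and applies a non-invertible threshold output to emit bits; the map, parameter values and threshold are illustrative assumptions rather than the authors' exact system, and the result is not cryptographically vetted:

      def skew_tent(x, p=0.499):
          """Piecewise linear (skew tent) map on the unit interval."""
          return x / p if x < p else (1.0 - x) / (1.0 - p)

      def prbg(seed=0.123456789, n_bits=64, p=0.499):
          """Emit bits via a non-invertible (many-to-one) threshold of the map's orbit."""
          x, bits = seed, []
          for _ in range(n_bits):
              x = skew_tent(x, p)
              bits.append(1 if x >= 0.5 else 0)
          return bits

      print(''.join(str(b) for b in prbg()))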

  19. Equivalence relations between deterministic and quantum mechanical systems

    International Nuclear Information System (INIS)

    Hooft, G.

    1988-01-01

    Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that apparently indeterministic behavior typical for a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale

  20. Operational State Complexity of Deterministic Unranked Tree Automata

    Directory of Open Access Journals (Sweden)

    Xiaoxue Piao

    2010-08-01

    Full Text Available We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1 ( (m+12^n-2^(n-1 -1 vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly deterministic automata with, respectively, m and n vertical states.

  1. MIMO capacity for deterministic channel models: sublinear growth

    DEFF Research Database (Denmark)

    Bentosela, Francois; Cornean, Horia; Marchetti, Nicola

    2013-01-01

    . In the current paper, we apply those results in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment and the number of transmitting and receiving antennas. The antennas are assumed to fill in a given fixed volume. Under...... some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict a sublinear behavior....
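
    For reference, the Shannon-Foschini capacity studied here can be computed for a fixed (deterministic) channel matrix as in the hedged Python sketch below; the i.i.d. random channel used in the example is a placeholder, not the paper's spread-function construction:

      import numpy as np

      def mimo_capacity(H, snr=10.0):
          """Capacity C = log2 det(I + (snr / n_t) * H H^*) for a fixed channel matrix H."""
          n_r, n_t = H.shape
          gram = np.eye(n_r) + (snr / n_t) * H @ H.conj().T
          _, logdet = np.linalg.slogdet(gram)   # Hermitian positive definite, so logdet is real
          return float(logdet / np.log(2.0))

      # Illustrative growth with the number of antennas for a placeholder channel.
      rng = np.random.default_rng(0)
      for n in (2, 4, 8, 16):
          H = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
          print(n, round(mimo_capacity(H), 2))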

  2. Streamflow disaggregation: a nonlinear deterministic approach

    Directory of Open Access Journals (Sweden)

    B. Sivakumar

    2004-01-01

    Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase-space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreements for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and small number of neighbors (less than 50), suggesting possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
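
    The two steps named above can be illustrated with the following hedged Python sketch of delay embedding and nearest-neighbour local approximation; it shows one-step local prediction in the reconstructed phase space rather than the full coarse-to-fine disaggregation mapping, and the embedding dimension and neighbour count are illustrative:

      import numpy as np

      def embed(series, m, tau=1):
          """Delay-embed a scalar series into m-dimensional phase-space vectors."""
          series = np.asarray(series, dtype=float)
          n = len(series) - (m - 1) * tau
          return np.array([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

      def local_predict(series, m=3, k=10):
          """Predict the next value as the mean successor of the k nearest phase-space neighbours."""
          series = np.asarray(series, dtype=float)
          vectors = embed(series, m)
          target, history = vectors[-1], vectors[:-1]
          successors = series[m:]                     # successor of each history vector (tau = 1)
          distances = np.linalg.norm(history - target, axis=1)
          nearest = np.argsort(distances)[:k]
          return successors[nearest].mean()

      series = np.sin(0.2 * np.arange(500))
      print(local_predict(series), np.sin(0.2 * 500))  # predicted vs. actual next value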

  3. RBE for deterministic effects

    International Nuclear Information System (INIS)

    1990-01-01

    In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether for specific tissues the present dose limits or annual limits of intake based on Q values, are adequate to prevent deterministic effects. (author)

  4. A mathematical theory for deterministic quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Hooft, Gerard ' t [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)

    2007-05-15

    Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.

  5. Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?

    Science.gov (United States)

    Kubota, Noriaki

    2018-03-01

    The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely; thus, these two are observed as stochastic quantities. In contrast, for large samples (say 1000 mL or more), the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals just at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation; accordingly, t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. The points of t and ΔT are those at which a large number of crystals have accumulated. Consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should not be attributed to a change in nucleation mechanisms at the molecular level. It could be just a problem caused by differences in the experimental definition of t and ΔT.
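
    The small-sample versus large-sample picture described above can be illustrated with the hedged Python sketch below, which treats nucleation as a Poisson process; the nucleation rate and detection threshold are assumed values, not taken from the paper:

      import numpy as np

      rng = np.random.default_rng(1)
      J = 1.0e3        # nucleation rate per m^3 per second (assumed)
      n_det = 1.0e6    # detectable crystal number density per m^3 (assumed)

      def first_crystal_time(V):
          """Small-sample definition of t: waiting time to the first nucleus in volume V."""
          return rng.exponential(1.0 / (J * V))

      def density_threshold_time():
          """Large-sample definition of t: N/V = J * t reaches the detector sensitivity."""
          return n_det / J

      small = [round(first_crystal_time(1e-6), 1) for _ in range(5)]   # ~1 mL: widely scattered
      large = [round(density_threshold_time(), 1) for _ in range(5)]   # effectively deterministic
      print(small, large)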

  6. Deterministic integer multiple firing depending on initial state in Wang model

    Energy Technology Data Exchange (ETDEWEB)

    Xie Yong [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)]. E-mail: yxie@mail.xjtu.edu.cn; Xu Jianxue [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China); Jiang Jun [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)

    2006-12-15

    We numerically investigate the dynamical behaviour of the Wang model, which describes the rhythmic activities of thalamic relay neurons. The model neuron exhibits Type I excitability from a global view, but Type II excitability from a local view. There exists a narrow range of bistability, in which a subthreshold oscillation and a suprathreshold firing behaviour coexist. A special firing pattern, integer multiple firing, can be found in a certain part of the bistable range. The characteristic feature of such a firing pattern is that the histogram of interspike intervals has a multipeaked structure, and the peaks are located at about integer multiples of a basic interspike interval. Since the Wang model is noise-free, the integer multiple firing is a deterministic firing pattern. The existence of bistability leads to the deterministic integer multiple firing depending on the initial state of the model neuron, i.e., the initial values of the state variables.

  7. Deterministic integer multiple firing depending on initial state in Wang model

    International Nuclear Information System (INIS)

    Xie Yong; Xu Jianxue; Jiang Jun

    2006-01-01

    We numerically investigate the dynamical behaviour of the Wang model, which describes the rhythmic activities of thalamic relay neurons. The model neuron exhibits Type I excitability from a global view, but Type II excitability from a local view. There exists a narrow range of bistability, in which a subthreshold oscillation and a suprathreshold firing behaviour coexist. A special firing pattern, integer multiple firing, can be found in a certain part of the bistable range. The characteristic feature of such a firing pattern is that the histogram of interspike intervals has a multipeaked structure, and the peaks are located at about integer multiples of a basic interspike interval. Since the Wang model is noise-free, the integer multiple firing is a deterministic firing pattern. The existence of bistability leads to the deterministic integer multiple firing depending on the initial state of the model neuron, i.e., the initial values of the state variables

  8. Deterministic and efficient quantum cryptography based on Bell's theorem

    International Nuclear Information System (INIS)

    Chen, Z.-B.; Zhang, Q.; Bao, X.-H.; Schmiedmayer, J.; Pan, J.-W.

    2005-01-01

    Full text: We propose a novel double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish a key bit with the help of classical communications. Eavesdropping can be detected by checking the violation of local realism for the detected events. We also show that our protocol allows a robust implementation under current technology. (author)

  9. Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide (Russian Edition)

    International Nuclear Information System (INIS)

    2014-01-01

    The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References

  10. Deterministic dense coding with partially entangled states

    Science.gov (United States)

    Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni

    2005-01-01

    The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d>2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d^2] with the possible exception of d^2-1. We also find that states with less entanglement can have a greater deterministic communication capacity than other more entangled states.

  11. Optimal Deterministic Investment Strategies for Insurers

    Directory of Open Access Journals (Sweden)

    Ulrich Rieder

    2013-11-01

    Full Text Available We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.

  12. Relationship of Deterministic Thinking With Loneliness and Depression in the Elderly

    Directory of Open Access Journals (Sweden)

    Mehdi Sharifi

    2017-12-01

    Conclusion According to the results, it can be said that deterministic thinking has a significant relationship with depression and sense of loneliness in older adults. So, deterministic thinking acts as a predictor of depression and sense of loneliness in older adults. Therefore, psychological interventions that challenge the cognitive distortion of deterministic thinking, and attention to mental health in older adults, are very important.

  13. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-03-28

    Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  14. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    Science.gov (United States)

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) was initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics during the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but early-stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more

  15. Deterministic Diffusion in Delayed Coupled Maps

    International Nuclear Information System (INIS)

    Sozanski, M.

    2005-01-01

    Coupled Map Lattices (CML) are discrete time and discrete space dynamical systems used for modeling phenomena arising in nonlinear systems with many degrees of freedom. In this work, the dynamical and statistical properties of a modified version of the CML with global coupling are considered. The main modification of the model is the extension of the coupling over a set of local map states corresponding to different time iterations. The model with both stochastic and chaotic one-dimensional local maps is studied. Deterministic diffusion in the CML under variation of a control parameter is analyzed for unimodal maps. As a main result, simple relations between statistical and dynamical measures are found for the model and the cases where substituting nonlinear lattices with simpler processes is possible are presented. (author)
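
    A hedged Python sketch of the kind of modification described above (coupling extended over a set of delayed states) is given below; the logistic local map, lattice size, delay depth and coupling strength are illustrative assumptions:

      import numpy as np

      def logistic(x, a=4.0):
          """Chaotic one-dimensional local map."""
          return a * x * (1.0 - x)

      def delayed_global_cml(n_sites=50, steps=1000, eps=0.3, delay=3, seed=2):
          """Globally coupled map lattice whose coupling term averages over delayed states."""
          rng = np.random.default_rng(seed)
          history = [rng.random(n_sites) for _ in range(delay + 1)]   # current and past lattice states
          for _ in range(steps):
              mean_field = np.mean([state.mean() for state in history])   # coupling over time iterations
              new_state = (1.0 - eps) * logistic(history[-1]) + eps * logistic(mean_field)
              history = history[1:] + [new_state]
          return history[-1]

      print(delayed_global_cml()[:5])   # a few site values after the transient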

  16. Deterministic hydrodynamics: Taking blood apart

    Science.gov (United States)

    Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.

    2006-10-01

    We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. We use the deterministic-array technology we developed, which separates white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. cells | plasma | separation | microfabrication

  17. Tracting the neural basis of music: Deficient structural connectivity underlying acquired amusia.

    Science.gov (United States)

    Sihvonen, Aleksi J; Ripollés, Pablo; Särkämö, Teppo; Leo, Vera; Rodríguez-Fornells, Antoni; Saunavaara, Jani; Parkkola, Riitta; Soinila, Seppo

    2017-12-01

    Acquired amusia provides a unique opportunity to investigate the fundamental neural architectures of musical processing due to the transition from a functioning to defective music processing system. Yet, the white matter (WM) deficits in amusia remain systematically unexplored. To evaluate which WM structures form the neural basis for acquired amusia and its recovery, we studied 42 stroke patients longitudinally at acute, 3-month, and 6-month post-stroke stages using DTI [tract-based spatial statistics (TBSS) and deterministic tractography (DT)] and the Scale and Rhythm subtests of the Montreal Battery of Evaluation of Amusia (MBEA). Non-recovered amusia was associated with structural damage and subsequent degeneration in multiple WM tracts including the right inferior fronto-occipital fasciculus (IFOF), arcuate fasciculus (AF), inferior longitudinal fasciculus (ILF), uncinate fasciculus (UF), and frontal aslant tract (FAT), as well as in the corpus callosum (CC) and its posterior part (tapetum). In a linear regression analysis, the volume of the right IFOF was the main predictor of MBEA performance across time. Overall, our results provide a comprehensive picture of the large-scale deficits in intra- and interhemispheric structural connectivity underlying amusia, and conversely highlight which pathways are crucial for normal music perception. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Deterministic analyses of severe accident issues

    International Nuclear Information System (INIS)

    Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.

    2004-01-01

    Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena, alongside probabilistic methods, to evaluate the risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that, with appropriate system modifications and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents.

  19. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-square method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamical system (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1 considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to the “multiple shooting” method previously proposed, but is simpler and has a smaller bias. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least, for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is also discussed. This method seems to be the only one whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
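
    As a toy illustration of why the initial value must be fitted alongside the map parameter, the sketch below does a brute-force least-squares fit of (r, x1) on a short noisy segment of the logistic map; this is not the authors' segmentation-fitting ML estimator, and the grid, noise level, and segment length are arbitrary.

```python
import numpy as np

def iterate_logistic(r, x1, n):
    """n points of the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    x = np.empty(n)
    x[0] = x1
    for t in range(n - 1):
        x[t + 1] = r * x[t] * (1.0 - x[t])
    return x

rng = np.random.default_rng(1)
r_true, x1_true, n, sigma = 3.8, 0.3, 15, 0.01
y = iterate_logistic(r_true, x1_true, n) + sigma * rng.standard_normal(n)

# Brute-force least squares over (r, x1); the short segment keeps the chaotic
# trajectory from completely forgetting its initial condition.
r_grid = np.linspace(3.5, 4.0, 501)
x1_grid = np.linspace(0.01, 0.99, 99)
best = min(((np.sum((y - iterate_logistic(r, x1, n)) ** 2), r, x1)
            for r in r_grid for x1 in x1_grid))
print("best (r, x1):", round(best[1], 3), round(best[2], 3))   # close to (3.8, 0.3)
```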

  20. Deterministic and stochastic CTMC models from Zika disease transmission

    Science.gov (United States)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
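
    A minimal sketch of the deterministic half of such a formulation is given below: a generic host SIR / vector SI system with a Ross-Macdonald-type reproduction ratio. The compartments, parameter values, and the R0 expression are textbook assumptions rather than the ones used in the cited paper, and the CTMC counterpart would replace the ODE step with event-by-event transitions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative host (SIR) / vector (SI) parameters, not taken from the paper.
bh, bv, gam, mu = 0.3, 0.3, 1/7, 1/14    # transmission rates, human recovery, mosquito death
Nh, Nv = 1_000, 5_000                    # host and vector population sizes

def rhs(t, y):
    Sh, Ih, Rh, Sv, Iv = y
    new_h = bh * Sh * Iv / Nh            # mosquito-to-human infections
    new_v = bv * Sv * Ih / Nh            # human-to-mosquito infections
    return [-new_h, new_h - gam * Ih, gam * Ih,
            mu * Nv - new_v - mu * Sv, new_v - mu * Iv]

sol = solve_ivp(rhs, (0, 365), [Nh - 1, 1, 0, Nv, 0],
                t_eval=np.linspace(0, 365, 366))
R0 = np.sqrt(bh * bv * Nv / (gam * mu * Nh))   # assumed Ross-Macdonald-type form
print("R0 ~", round(R0, 2), "| peak infectious humans ~", int(sol.y[1].max()))
```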

  1. ICRP (1991) and deterministic effects

    International Nuclear Information System (INIS)

    Mole, R.H.

    1992-01-01

    A critical review of ICRP Publication 60 (1991) shows that considerable revisions are needed in both language and thinking about deterministic effects (DE). ICRP (1991) makes a welcome and clear distinction between change, caused by irradiation; damage, some degree of deleterious change, for example to cells, but not necessarily deleterious to the exposed individual; harm, clinically observable deleterious effects expressed in individuals or their descendants; and detriment, a complex concept combining the probability, severity and time of expression of harm (para. 42). (All added emphases come from the author.) Unfortunately these distinctions are not carried through into the discussion of deterministic effects (DE) and two important terms are left undefined. Presumably effect may refer to change, damage, harm or detriment, according to context. Clinically observable is also undefined although its meaning is crucial to any consideration of DE since DE are defined as causing observable harm (para. 20). (Author)

  2. When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.

    Science.gov (United States)

    Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko

    2015-08-01

    When unique identifiers are unavailable, successful record linkage depends greatly on data quality and the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, implementation of the linkage methodology, and validation methods. This simulation study aimed to understand the data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers. We systematically introduced a range of discriminative power, rates of missing and erroneous values, and file sizes to increase linkage patterns and difficulties. We assessed the performance difference of the linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage, as the former generated linkages with a better trade-off between sensitivity and PPV regardless of data quality. However, with low rates of missing and erroneous data, deterministic linkage did not perform significantly worse. The implementation of deterministic linkage in SAS took less than 1 min, and probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rate of missing and erroneous linkage variables was key to choosing between linkage methods. In general, probabilistic linkage was the better choice, but for exceptionally good quality data (<5% error), deterministic linkage was the more resource-efficient choice. Copyright © 2015 Elsevier Inc. All rights reserved.
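
    The contrast can be made concrete with a toy example: exact agreement on every identifier (deterministic) versus a simplified Fellegi-Sunter-style agreement score (probabilistic). The field names, m/u probabilities, and threshold below are illustrative assumptions, not values from the study.

```python
import math

# Toy records: (last_name, birth_year, zip). Deterministic linkage requires exact
# agreement on all fields; the probabilistic score tolerates disagreement.
a = ("smith", 1980, "10001")
b = ("smyth", 1980, "10001")           # typo in the surname

def deterministic_match(r1, r2):
    return r1 == r2                    # all identifiers must agree exactly

# Illustrative per-field probabilities: m = P(agree | true match), u = P(agree | non-match).
m = {"name": 0.95, "year": 0.98, "zip": 0.90}
u = {"name": 0.01, "year": 0.05, "zip": 0.10}

def probabilistic_score(r1, r2):
    score = 0.0
    for field, (v1, v2) in zip(("name", "year", "zip"), zip(r1, r2)):
        agree = v1 == v2
        p_m = m[field] if agree else 1 - m[field]
        p_u = u[field] if agree else 1 - u[field]
        score += math.log2(p_m / p_u)  # agreement adds weight, disagreement subtracts
    return score

print(deterministic_match(a, b))       # False: the typo breaks the exact match
print(probabilistic_score(a, b) > 3.0) # True with this illustrative threshold
```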

  3. Deterministic and Probabilistic Serviceability Assessment of Footbridge Vibrations due to a Single Walker Crossing

    Directory of Open Access Journals (Sweden)

    Cristoforo Demartino

    2018-01-01

    Full Text Available This paper presents a numerical study on the deterministic and probabilistic serviceability assessment of footbridge vibrations due to a single walker crossing. The dynamic response of the footbridge is analyzed by means of modal analysis, considering only the first lateral and vertical modes. Single span footbridges with uniform mass distribution are considered, with different values of the span length, natural frequencies, mass, and structural damping and with different support conditions. The load induced by a single walker crossing the footbridge is modeled as a moving sinusoidal force either in the lateral or in the vertical direction. The variability of the characteristics of the load induced by walkers is modeled using probability distributions taken from the literature defining a Standard Population of walkers. Deterministic and probabilistic approaches were adopted to assess the peak response. Based on the results of the simulations, deterministic and probabilistic vibration serviceability assessment methods are proposed, not requiring numerical analyses. Finally, an example of the application of the proposed method to a truss steel footbridge is presented. The results highlight the advantages of the probabilistic procedure in terms of reliability quantification.
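
    The vertical part of such an analysis can be sketched deterministically in a few lines: a sinusoidal walker force moving at constant speed over a simply supported span, projected on the first bending mode. Every number below (span, mass, frequencies, dynamic load factor) is a placeholder rather than a value from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder footbridge and walker parameters (not from the paper).
L, m, f1, zeta = 40.0, 500.0, 2.0, 0.005    # span [m], mass/length [kg/m], 1st vertical freq [Hz], damping
G, alpha, fp, v = 700.0, 0.4, 2.0, 1.5      # walker weight [N], dynamic load factor, pacing freq [Hz], speed [m/s]
w1, M1 = 2 * np.pi * f1, m * L / 2          # circular frequency and modal mass for phi(x) = sin(pi x / L)

def rhs(t, y):
    q, qd = y
    x = v * t                                # walker position along the span
    phi = np.sin(np.pi * x / L) if 0.0 <= x <= L else 0.0
    F = G * alpha * np.sin(2 * np.pi * fp * t) * phi      # modal projection of the moving force
    return [qd, (F - 2 * zeta * w1 * M1 * qd - w1**2 * M1 * q) / M1]

T = L / v                                    # crossing time
sol = solve_ivp(rhs, (0, T), [0.0, 0.0], max_step=0.01)
acc = np.array([rhs(t, y)[1] for t, y in zip(sol.t, sol.y.T)])   # midspan acceleration (phi = 1 there)
print("peak midspan acceleration ~", round(np.max(np.abs(acc)), 2), "m/s^2")
```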

  4. Comparison of deterministic and Monte Carlo methods in shielding design.

    Science.gov (United States)

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo codes. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions and which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing a comparison of the capabilities of both Monte Carlo and deterministic methods in day-to-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.

  5. Comparison of deterministic and Monte Carlo methods in shielding design

    International Nuclear Information System (INIS)

    Oliveira, A. D.; Oliveira, C.

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo codes. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions and which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing a comparison of the capabilities of both Monte Carlo and deterministic methods in day-to-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions. (authors)
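
    The deterministic side of the two records above boils down to a point-kernel estimate: uncollided flux attenuated exponentially and multiplied by a build-up factor. The sketch below shows the arithmetic with a Taylor-form build-up factor; the source strength, attenuation coefficient, and build-up coefficients are placeholders, not MicroShield data.

```python
import math

S = 3.7e7        # source strength [photons/s] (placeholder, roughly 1 mCi)
mu = 0.08        # linear attenuation coefficient of the shield [1/cm] (placeholder)
t = 10.0         # slab thickness [cm]
r = 100.0        # source-to-detector distance [cm]

def buildup_taylor(mux, A=10.0, a1=-0.10, a2=0.05):
    """Taylor-form build-up factor B = A*exp(-a1*mux) + (1 - A)*exp(-a2*mux); coefficients are placeholders."""
    return A * math.exp(-a1 * mux) + (1.0 - A) * math.exp(-a2 * mux)

mux = mu * t
flux_uncollided = S * math.exp(-mux) / (4.0 * math.pi * r**2)    # point-kernel, no scatter
flux_with_buildup = buildup_taylor(mux) * flux_uncollided        # scatter folded in via B
print(f"uncollided: {flux_uncollided:.3e}  with build-up: {flux_with_buildup:.3e}  [1/cm^2/s]")
```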

  6. Deterministic nonlinear phase gates induced by a single qubit

    Science.gov (United States)

    Park, Kimin; Marek, Petr; Filip, Radim

    2018-05-01

    We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated in the harmonic oscillator faithfully by our method. We numerically analyzed the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.

  7. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    Science.gov (United States)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is the potential market impact. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch

  8. Nonlinear Markov processes: Deterministic case

    International Nuclear Information System (INIS)

    Frank, T.D.

    2008-01-01

    Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution

  9. Deterministic nonlinear systems a short course

    CERN Document Server

    Anishchenko, Vadim S; Strelkova, Galina I

    2014-01-01

    This text is a short yet complete course on nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures, it reflects the authors' many years of teaching experience. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincaré recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.

  10. Dynamic optimization deterministic and stochastic models

    CERN Document Server

    Hinderer, Karl; Stieglitz, Michael

    2016-01-01

    This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.

  11. Learning to Act: Qualitative Learning of Deterministic Action Models

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2017-01-01

    In this article we study learnability of fully observable, universally applicable action models of dynamic epistemic logic. We introduce a framework for actions seen as sets of transitions between propositional states and we relate them to their dynamic epistemic logic representations as action...... in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update......, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the afore mentioned classes of actions and analyse...

  12. Stochastic and deterministic causes of streamer branching in liquid dielectrics

    International Nuclear Information System (INIS)

    Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl

    2013-01-01

    Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching, such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations, is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make the branching inevitable depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the streamer head, even for propagation in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the newly formed branches agree qualitatively with experimental images of streamer branching.

  13. Det-WiFi: A Multihop TDMA MAC Implementation for Industrial Deterministic Applications Based on Commodity 802.11 Hardware

    Directory of Open Access Journals (Sweden)

    Yujun Cheng

    2017-01-01

    Full Text Available Wireless control systems for industrial automation have been gaining popularity in recent years thanks to their ease of deployment and the low cost of their components. However, traditional low-sample-rate industrial wireless sensor networks cannot support high-speed applications, while high-speed IEEE 802.11 networks are not designed for real-time applications and are not able to provide deterministic features. Thus, in this paper, we propose Det-WiFi, a real-time TDMA MAC implementation for high-speed multihop industrial applications. It is able to support high-speed applications and provide deterministic network features since it combines the advantages of a high-speed IEEE 802.11 physical layer and a software Time Division Multiple Access (TDMA) based MAC layer. We implement Det-WiFi on commercial off-the-shelf hardware and compare the deterministic performance of 802.11s and Det-WiFi in a real industrial environment, full of field devices and industrial equipment. We changed the hop number and the packet payload size in each experiment, and all of the results show that Det-WiFi has better deterministic performance.

  14. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    Science.gov (United States)

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  15. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    Science.gov (United States)

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in the case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
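
    The just-in-time idea itself is compact enough to sketch: for each query point, fit a local model on its nearest historical (fault-free) neighbours and flag a fault when the residual exceeds a threshold. The data, neighbourhood size, and threshold below are illustrative, and the paper's treatment of deterministic disturbances is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Historical (fault-free) data from a nonlinear process y = f(u) + small noise.
U = rng.uniform(-2, 2, size=(500, 1))
Y = np.sin(U[:, 0]) + 0.3 * U[:, 0] ** 2 + 0.01 * rng.standard_normal(500)

def jitl_predict(u_query, k=25):
    """Fit a local linear model on the k nearest neighbours of the query."""
    idx = np.argsort(np.abs(U[:, 0] - u_query))[:k]
    X = np.column_stack([np.ones(k), U[idx, 0]])
    coef, *_ = np.linalg.lstsq(X, Y[idx], rcond=None)
    return coef[0] + coef[1] * u_query

def is_fault(u, y_measured, threshold=0.1):
    return abs(y_measured - jitl_predict(u)) > threshold

u = 0.8
y_normal = np.sin(u) + 0.3 * u**2           # consistent with the historical behaviour
y_faulty = y_normal + 0.5                   # a bias fault on the measurement
print(is_fault(u, y_normal), is_fault(u, y_faulty))   # expected: False, True
```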

  16. Accessing the dark exciton spin in deterministic quantum-dot microlenses

    Science.gov (United States)

    Heindel, Tobias; Thoma, Alexander; Schwartz, Ido; Schmidgall, Emma R.; Gantz, Liron; Cogan, Dan; Strauß, Max; Schnauber, Peter; Gschrey, Manuel; Schulze, Jan-Hindrik; Strittmatter, Andre; Rodt, Sven; Gershoni, David; Reitzenstein, Stephan

    2017-12-01

    The dark exciton state in semiconductor quantum dots (QDs) constitutes a long-lived solid-state qubit which has the potential to play an important role in implementations of solid-state-based quantum information architectures. In this work, we exploit deterministically fabricated QD microlenses which promise enhanced photon extraction, to optically prepare and read out the dark exciton spin and observe its coherent precession. The optical access to the dark exciton is provided via spin-blockaded metastable biexciton states acting as heralding states, which are identified by deploying polarization-sensitive spectroscopy as well as time-resolved photon cross-correlation experiments. Our experiments reveal a spin-precession period of the dark exciton of (0.82 ± 0.01) ns corresponding to a fine-structure splitting of (5.0 ± 0.7) μeV between its eigenstates |↑ ⇑ ±↓ ⇓ ⟩. By exploiting microlenses deterministically fabricated above pre-selected QDs, our work demonstrates the possibility to scale up implementations of quantum information processing schemes using the QD-confined dark exciton spin qubit, such as the generation of photonic cluster states or the realization of a solid-state-based quantum memory.
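
    As a quick consistency check of the numbers quoted above, the precession period and the fine-structure splitting should be related through the Planck constant (h ≈ 4.136 μeV ns):

```latex
T \;=\; \frac{h}{\delta E} \;\approx\; \frac{4.136\ \mu\mathrm{eV\,ns}}{5.0\ \mu\mathrm{eV}} \;\approx\; 0.83\ \mathrm{ns}
```

    which agrees with the measured period of (0.82 ± 0.01) ns within the quoted uncertainties.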

  17. Quantum deterministic key distribution protocols based on the authenticated entanglement channel

    International Nuclear Information System (INIS)

    Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua

    2010-01-01

    Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.

  18. Quantum deterministic key distribution protocols based on the authenticated entanglement channel

    Energy Technology Data Exchange (ETDEWEB)

    Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua [Department of Electronic Information Engineering, Nanchang University, Nanchang 330031 (China)], E-mail: znr21@163.com, E-mail: znr21@hotmail.com

    2010-04-15

    Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.

  19. The State of Deterministic Thinking among Mothers of Autistic Children

    Directory of Open Access Journals (Sweden)

    Mehrnoush Esbati

    2011-10-01

    Full Text Available Objectives: The purpose of the present study was to investigate the effectiveness of cognitive-behavior education in decreasing deterministic thinking in mothers of children with autism spectrum disorders. Methods: Participants were 24 mothers of autistic children who had been referred to counseling centers in Tehran and whose children’s disorder had been diagnosed by at least a psychiatrist and a counselor. They were randomly selected and assigned to control and experimental groups. The measurement tool was the Deterministic Thinking Questionnaire; both groups answered it before and after the education, and the answers were analyzed by analysis of covariance. Results: The results indicated that cognitive-behavior education decreased deterministic thinking among mothers of autistic children; it also decreased four subscales of deterministic thinking: interaction with others, absolute thinking, prediction of the future, and negative events (P<0.05). Discussion: By learning cognitive and behavioral techniques, parents of children with autism can reach a higher level of psychological well-being, and it is likely that these cognitive-behavioral skills would have a positive impact on the general life satisfaction of mothers of children with autism.

  20. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The approach is applicable to low-level radioactive waste disposal system performance assessment.
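
    GRESS and ADGEN themselves are not reproduced here, but the core of the DUA idea, propagating input variances through first-order derivative information, fits in a few lines; the stand-in response function, nominal values, and standard deviations below are invented for illustration.

```python
import numpy as np

def model(x):
    """Stand-in response function y = f(x); a real application would be a large code."""
    return x[0] ** 2 * np.exp(0.5 * x[1]) + 3.0 * x[2]

x0 = np.array([1.0, 0.2, 2.0])                 # nominal parameter values
sigma = np.array([0.05, 0.02, 0.10])           # parameter standard deviations (assumed independent)

# Sensitivities dy/dx_i at the nominal point (finite differences stand in for
# the automated derivatives a computer calculus compiler would provide).
h = 1e-6
grad = np.array([(model(x0 + h * np.eye(3)[i]) - model(x0)) / h for i in range(3)])

var_y = np.sum((grad * sigma) ** 2)            # first-order variance propagation
print("nominal y =", model(x0), " sigma_y ~", np.sqrt(var_y))
```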

  1. Piecewise deterministic processes in biological models

    CERN Document Server

    Rudnicki, Ryszard

    2017-01-01

    This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...

  2. Deterministic geologic processes and stochastic modeling

    International Nuclear Information System (INIS)

    Rautman, C.A.; Flint, A.L.

    1992-01-01

    This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling.

  3. Deterministic one-way simulation of two-way, real-time cellular automata and its related problems

    Energy Technology Data Exchange (ETDEWEB)

    Umeo, H; Morita, K; Sugata, K

    1982-06-13

    The authors show that for any deterministic two-way, real-time cellular automaton, m, there exists a deterministic one-way cellular automaton which can simulate m in twice real-time. Moreover, the authors present a new type of deterministic one-way cellular automata, called circular cellular automata, which are computationally equivalent to deterministic two-way cellular automata. 7 references.

  4. Assessment of fusion facility dose rate map using mesh adaptivity enhancements of hybrid Monte Carlo/deterministic techniques

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Wilson, Paul P.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Grove, Robert E.

    2014-01-01

    Highlights: •Calculate the prompt dose rate everywhere throughout the entire fusion energy facility. •Utilize FW-CADIS to accurately perform difficult neutronics calculations for fusion energy systems. •Develop three mesh adaptivity algorithms to enhance FW-CADIS efficiency in fusion-neutronics calculations. -- Abstract: Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class super computer

  5. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    International Nuclear Information System (INIS)

    Zhu, T; Finlay, J; Mesina, C; Liu, H

    2014-01-01

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improve the accuracy of dose calculation, particularly under heterogeneous conditions with results similar to that of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 – 15 MV includes the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. Off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data includes dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement are generally better than 2% with large errors (up to 13%) observed in the buildup regions of the FDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreements are generally better than 2%. Relative comparison in heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy for a deterministic algorithm is better than a convolution algorithm in heterogeneous medium

  6. Stochastic Simulation of Integrated Circuits with Nonlinear Black-Box Components via Augmented Deterministic Equivalents

    Directory of Open Access Journals (Sweden)

    MANFREDI, P.

    2014-11-01

    Full Text Available This paper extends recent literature results concerning the statistical simulation of circuits affected by random electrical parameters by means of the polynomial chaos framework. With respect to previous implementations, based on the generation and simulation of augmented and deterministic circuit equivalents, the modeling is extended to generic and "black-box" multi-terminal nonlinear subcircuits describing complex devices, like those found in integrated circuits. Moreover, based on recently-published works in this field, a more effective approach to generate the deterministic circuit equivalents is implemented, thus yielding more compact and efficient models for nonlinear components. The approach is fully compatible with commercial (e.g., SPICE-type) circuit simulators and is thoroughly validated through the statistical analysis of a realistic interconnect structure with a 16-bit memory chip. The accuracy and the comparison against previous approaches are also carefully established.

  7. Control-oriented approaches to anticipating synchronization of chaotic deterministic ratchets

    Energy Technology Data Exchange (ETDEWEB)

    Xu Shiyun [State Key Lab for Turbulence and Complex Systems, Department of Mechanics and Aerospace Engineering, College of Engineering, Peking University, Beijing 100871 (China)], E-mail: xushiyun@pku.edu.cn; Yang Ying [State Key Lab for Turbulence and Complex Systems, Department of Mechanics and Aerospace Engineering, College of Engineering, Peking University, Beijing 100871 (China)], E-mail: yy@mech.pku.edu.cn; Song Lei [State Key Lab for Turbulence and Complex Systems, Department of Mechanics and Aerospace Engineering, College of Engineering, Peking University, Beijing 100871 (China)

    2009-06-15

    By virtue of techniques derived from nonlinear control system theory, we establish conditions under which one can obtain anticipating synchronization between two periodically driven deterministic ratchets that are able to exhibit directed transport with a finite velocity. Criteria are established in order to guarantee the anticipating synchronization property of such systems as well as to characterize the phase space dynamics of the ratchet transporting behaviors. These results allow one to predict the chaotic directed transport features of particles on a ratchet potential using a copy of the same system acting as a slave; the predictions are verified through numerical simulation.

  8. Local deterministic theory surviving the violation of Bell's inequalities

    International Nuclear Information System (INIS)

    Cormier-Delanoue, C.

    1984-01-01

    Bell's theorem, which asserts that no deterministic theory with hidden variables can give the same predictions as quantum theory, is questioned. Such a deterministic theory is presented and carefully applied to real experiments performed on pairs of correlated photons, derived from the EPR thought experiment. The ensuing predictions violate Bell's inequalities just as quantum mechanics does, and it is further shown that this discrepancy originates in the very nature of radiation. Complete locality is therefore restored while separability remains more limited. [fr]

  9. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research

  10. Model-based stochastic-deterministic State and Force Estimation using Kalman filtering with Application to Hanko-1 Channel Marker

    OpenAIRE

    Petersen, Øyvind Wiig

    2014-01-01

    Force identification in structural dynamics is an inverse problem concerned with finding loads from measured structural response. The main objective of this thesis is to perform and study state (displacement and velocity) and force estimation by Kalman filtering. Theory on optimal control and state-space models is presented, adapted to linear structural dynamics. Accommodation of measurement noise and model inaccuracies is attained by stochastic-deterministic coupling. Explicit requirem...
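
    One common way to realize joint state/force estimation is to append the unknown force to the state vector as a random walk and run an ordinary Kalman filter on the augmented model; the single-DOF oscillator, discretization, and noise levels below are placeholders and not the models used in the thesis.

```python
import numpy as np

# 1-DOF oscillator x'' + 2*zeta*w*x' + w^2*x = F/m, forward-Euler discretized.
w, zeta, m, dt = 2 * np.pi * 1.0, 0.02, 1.0, 0.005
A = np.array([[1.0,        dt,                    0.0],
              [-w**2 * dt, 1.0 - 2 * zeta * w * dt, dt / m],
              [0.0,        0.0,                   1.0]])   # augmented state [x, v, F]; F is a random walk
H = np.array([[1.0, 0.0, 0.0]])                            # displacement is measured
Q = np.diag([1e-10, 1e-10, 1e-2])                          # generous process noise on F lets it adapt
R = np.array([[1e-6]])

def kf_step(xhat, P, y):
    xhat, P = A @ xhat, A @ P @ A.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # gain
    xhat = xhat + (K @ (y - H @ xhat)).ravel()             # correct with the measurement
    P = (np.eye(3) - K @ H) @ P
    return xhat, P

rng = np.random.default_rng(0)
x_true, xhat, P, F_est = np.zeros(3), np.zeros(3), np.eye(3), []
for k in range(4000):
    x_true[2] = 1.0 if k > 500 else 0.0                    # unknown step force of 1 N
    x_true[:2] = A[:2] @ x_true                            # propagate the true response
    y = np.array([[x_true[0] + 1e-3 * rng.standard_normal()]])
    xhat, P = kf_step(xhat, P, y)
    F_est.append(xhat[2])
print("estimated force near the end ~", round(float(np.mean(F_est[-500:])), 2), "N")
```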

  11. Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.

    Science.gov (United States)

    Gomez, Christophe; Hartung, Niklas

    2018-01-01

    Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
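
    The stochastic framework in (1) can be sketched in a few lines: emission times are drawn from a nonhomogeneous Poisson process whose intensity grows with the size of the primary tumour, sampled here by thinning. The growth law, intensity form, and every parameter value are illustrative and not the chapter's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(2)

def tumour_size(t, s0=1.0, a=0.1):
    """Illustrative exponential growth of the primary tumour."""
    return s0 * np.exp(a * t)

def emission_rate(t, mu=1e-3, alpha=2/3):
    """Assumed intensity lambda(t) = mu * size(t)**alpha."""
    return mu * tumour_size(t) ** alpha

def emission_times(T, lam_max):
    """Sample a nonhomogeneous Poisson process on [0, T] by thinning."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)          # candidate from a homogeneous process
        if t > T:
            return np.array(times)
        if rng.uniform() < emission_rate(t) / lam_max:
            times.append(t)                          # accept with probability lambda(t)/lam_max

T = 60.0
events = emission_times(T, lam_max=emission_rate(T))  # intensity is increasing, so its max is at T
print(len(events), "emission events; first at t ~",
      round(float(events[0]), 1) if len(events) else "none")
```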

  12. Anti-deterministic behaviour of discrete systems that are less predictable than noise

    Science.gov (United States)

    Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.

    2005-05-01

    We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in recurrence plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions, and although some properties of AD data are similar to those of white noise, the AD dynamics is, in fact, less predictable than noise and hence is different from pseudo-random number generators.
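
    The recurrence-plot diagnostic referred to above can be sketched directly: build the thresholded distance matrix and measure the fraction of recurrence points that lie on diagonal lines (the usual determinism statistic). The embedding-free construction, threshold, and test series below are illustrative choices only.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """R[i, j] = 1 if |x_i - x_j| < eps (scalar series, no embedding)."""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def determinism(R, lmin=2):
    """Fraction of recurrence points lying on diagonal lines of length >= lmin."""
    n, on_lines, total = len(R), 0, 0
    for k in range(-(n - 1), n):
        if k == 0:
            continue                                  # skip the trivial line of identity
        diag = np.diagonal(R, offset=k)
        total += diag.sum()
        run = 0
        for v in np.append(diag, 0):                  # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    return on_lines / total if total else 0.0

rng = np.random.default_rng(3)
x = np.empty(500); x[0] = 0.4
for t in range(499):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])              # chaotic logistic map
noise = rng.uniform(size=500)

for name, series in [("logistic map", x), ("white noise", noise)]:
    R = recurrence_matrix(series, eps=0.1 * series.std())
    print(name, "DET ~", round(determinism(R), 2))
```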

  13. Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives

    DEFF Research Database (Denmark)

    Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle

    2002-01-01

    In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD-player is presented based on parameter identification and measurements in the focus loop of 12 actual CD drives that differ by having worst-case behaviors with respect to various...... properties. The method provides a systematic way to derive a nominal average model as well as a structures multiplicative input uncertainty model, and it is demonstrated how to apply mu-theory to design a controller based on the models obtained that meets certain robust performance criteria....

  14. Deterministic dynamics of plasma focus discharges

    International Nuclear Information System (INIS)

    Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.

    1992-04-01

    The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed with fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic ''internal'' dynamics involving degrees of freedom not controlled by the operator, possibly related to adsorption and desorption of impurities from the electrodes. According to these dynamics the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs

  15. On the implementation of a deterministic secure coding protocol using polarization entangled photons

    OpenAIRE

    Ostermeyer, Martin; Walenta, Nino

    2007-01-01

    We demonstrate a prototype-implementation of deterministic information encoding for quantum key distribution (QKD) following the ping-pong coding protocol [K. Bostroem, T. Felbinger, Phys. Rev. Lett. 89 (2002) 187902-1]. Due to the deterministic nature of this protocol the need for post-processing the key is distinctly reduced compared to non-deterministic protocols. In the course of our implementation we analyze the practicability of the protocol and discuss some security aspects of informat...

  16. Advances in stochastic and deterministic global optimization

    CERN Document Server

    Zhigljavsky, Anatoly; Žilinskas, Julius

    2016-01-01

    Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...

  17. The detection and stabilisation of limit cycle for deterministic finite automata

    Science.gov (United States)

    Han, Xiaoguang; Chen, Zengqiang; Liu, Zhongxin; Zhang, Qing

    2018-04-01

    In this paper, the topological structure properties of deterministic finite automata (DFA), under the framework of the semi-tensor product of matrices, are investigated. First, the dynamics of DFA are converted into a new algebraic form, a discrete-time linear system, by means of Boolean algebra. Using this algebraic description, an approach for calculating the limit cycles of different lengths is given. Second, we present two fundamental concepts, namely, the domain of attraction of a limit cycle and the prereachability set. Based on the prereachability set, an explicit method for calculating the domain of attraction of a limit cycle is completely characterised. Third, we define the globally attractive limit cycle, and then give a necessary and sufficient condition for verifying whether all state trajectories of a DFA enter a given limit cycle in a finite number of transitions. Fourth, the problem of whether a DFA can be stabilised to a limit cycle by a state feedback controller is discussed. Criteria for limit-cycle stabilisation are established. All state feedback controllers which implement the minimal-length trajectories from each state to the limit cycle are obtained by using the proposed algorithm. Finally, an illustrative example is presented to show the theoretical results.
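
    The semi-tensor-product machinery is not reproduced here, but the objects it computes are easy to illustrate on a plain functional graph: with the input fixed, the transition map of a DFA is a function on the state set, and its limit cycles and their domains of attraction can be found by following trajectories. The six-state automaton below is made up for the example.

```python
def find_limit_cycles(delta):
    """delta: dict state -> next state (deterministic transition under a fixed input).
    Returns the limit cycles and, for every state, the cycle it is attracted to."""
    cycles, cycle_of = [], {}
    for s in delta:
        path, seen = [], {}
        while s not in seen and s not in cycle_of:
            seen[s] = len(path)
            path.append(s)
            s = delta[s]
        if s in seen:                       # closed a brand-new cycle
            cycle = tuple(path[seen[s]:])
            cycles.append(cycle)
            target = cycle
        else:                               # ran into an already classified basin
            target = cycle_of[s]
        for q in path:
            cycle_of[q] = target
    return cycles, cycle_of

# Hypothetical 6-state automaton under one fixed input letter.
delta = {0: 1, 1: 2, 2: 1, 3: 4, 4: 5, 5: 3}
cycles, cycle_of = find_limit_cycles(delta)
print("limit cycles:", cycles)              # [(1, 2), (3, 4, 5)]
basins = {c: [q for q, t in cycle_of.items() if t == c] for c in cycles}
print("domains of attraction:", basins)     # {(1, 2): [0, 1, 2], (3, 4, 5): [3, 4, 5]}
```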

  18. Deterministic secure communication protocol without using entanglement

    OpenAIRE

    Cai, Qing-yu

    2003-01-01

    We show a deterministic secure direct communication protocol using single qubit in mixed state. The security of this protocol is based on the security proof of BB84 protocol. It can be realized with current technologies.

  19. Relative Roles of Deterministic and Stochastic Processes in Driving the Vertical Distribution of Bacterial Communities in a Permafrost Core from the Qinghai-Tibet Plateau, China.

    Science.gov (United States)

    Hu, Weigang; Zhang, Qi; Tian, Tian; Li, Dingyao; Cheng, Gang; Mu, Jing; Wu, Qingbai; Niu, Fujun; Stegen, James C; An, Lizhe; Feng, Huyuan

    2015-01-01

    Understanding the processes that influence the structure of biotic communities is one of the major ecological topics, and both stochastic and deterministic processes are expected to be at work simultaneously in most communities. Here, we investigated the vertical distribution patterns of bacterial communities in a 10-m-long soil core taken within permafrost of the Qinghai-Tibet Plateau. To get a better understanding of the forces that govern these patterns, we examined the diversity and structure of bacterial communities, and the change in community composition along the vertical distance (spatial turnover), from both taxonomic and phylogenetic perspectives. Measures of taxonomic and phylogenetic beta diversity revealed that bacterial community composition changed continuously along the soil core and showed a vertical distance-decay relationship. Multiple stepwise regression analysis suggested that bacterial alpha diversity and phylogenetic structure were strongly correlated with soil conductivity and pH but weakly correlated with depth. There was evidence that deterministic and stochastic processes collectively drove the vertically structured bacterial pattern. Bacterial communities in five soil horizons (two originating from the active layer and three from permafrost) of the permafrost core were phylogenetically random, an indicator of stochastic processes. However, we found a stronger effect of deterministic processes related to soil pH, conductivity, and organic carbon content that were structuring the bacterial communities. We therefore conclude that the vertical distribution of bacterial communities was governed primarily by deterministic ecological selection, although stochastic processes were also at work. Furthermore, the strong impact of environmental conditions (for example, soil physicochemical parameters and seasonal freeze-thaw cycles) on these communities underlines the sensitivity of permafrost microorganisms to climate change and potentially subsequent

  20. Analysis of deterministic cyclic gene regulatory network models with delays

    CERN Document Server

    Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian

    2015-01-01

    This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays and a special case of a homogenous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.

  1. A deterministic model predicts the properties of stochastic calcium oscillations in airway smooth muscle cells.

    Science.gov (United States)

    Cao, Pengxing; Tan, Xiahui; Donovan, Graham; Sanderson, Michael J; Sneyd, James

    2014-08-01

    The inositol trisphosphate receptor (IP3R) is one of the most important cellular components responsible for oscillations in the cytoplasmic calcium concentration. Over the past decade, two major questions about the IP3R have arisen. Firstly, how best should the IP3R be modeled? In other words, what fundamental properties of the IP3R allow it to perform its function, and what are their quantitative properties? Secondly, although calcium oscillations are caused by the stochastic opening and closing of small numbers of IP3Rs, is it possible for a deterministic model to be a reliable predictor of calcium behavior? Here, we answer these two questions, using airway smooth muscle cells (ASMC) as a specific example. Firstly, we show that periodic calcium waves in ASMC, as well as the statistics of calcium puffs in other cell types, can be quantitatively reproduced by a two-state model of the IP3R, and thus the behavior of the IP3R is essentially determined by its modal structure. The structure within each mode is irrelevant for function. Secondly, we show that, although calcium waves in ASMC are generated by a stochastic mechanism, IP3R stochasticity is not essential for a qualitative prediction of how oscillation frequency depends on model parameters, and thus deterministic IP3R models demonstrate the same level of predictive capability as do stochastic models. We conclude that, firstly, calcium dynamics can be accurately modeled using simplified IP3R models, and, secondly, to obtain qualitative predictions of how oscillation frequency depends on parameters it is sufficient to use a deterministic model.

  2. Distinguishing deterministic and noise components in ELM time series

    International Nuclear Information System (INIS)

    Zvejnieks, G.; Kuzovkov, V.N

    2004-01-01

    Full text: One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles which allows us to distinguish deterministic and noise components. We extended the linear autoregression (AR) method by including nonlinearity (the NAR method). As a starting point we have chosen the nonlinearity in polynomial form; however, the NAR method can be extended to any other type of nonlinear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we have analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The obtained results indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type.
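
    The model-selection step can be sketched compactly: fit a linear AR(1) model and a polynomial (NAR-style) extension by least squares and compare their BIC values. The synthetic series, model orders, and BIC form below are illustrative and not the analysis applied to the ASDEX Upgrade data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x = np.empty(n); x[0] = 0.1
for t in range(n - 1):                       # synthetic series: linear AR(1) plus noise
    x[t + 1] = 0.7 * x[t] + 0.1 * rng.standard_normal()

def fit_bic(y_prev, y_next, degree):
    """Least-squares fit y_next = sum_k a_k * y_prev**k, k = 0..degree, and its BIC."""
    X = np.vander(y_prev, degree + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(X, y_next, rcond=None)
    resid = y_next - X @ coef
    m = len(y_next)
    sigma2 = resid @ resid / m
    return m * np.log(sigma2) + (degree + 1) * np.log(m)   # BIC up to an additive constant

for deg, label in [(1, "linear AR"), (3, "cubic NAR")]:
    print(label, "BIC ~", round(fit_bic(x[:-1], x[1:], deg), 1))
# For this linear test series BIC should favour the AR model, mirroring the ELM finding.
```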

  3. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case

  4. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    Science.gov (United States)

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density populations can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the

  5. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2012-01-01

    Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them......, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...

  6. Development of a First-of-a-Kind Deterministic Decision-Making Tool for Supervisory Control System

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Sacit M [ORNL; Kisner, Roger A [ORNL; Muhlheim, Michael David [ORNL; Fugate, David L [ORNL

    2015-07-01

    Decision-making is the process of identifying and choosing alternatives where each alternative offers a different approach or path to move from a given state or condition to a desired state or condition. The generation of consistent decisions requires that a structured, coherent process be defined, immediately leading to a decision-making framework. The overall objective of the generalized framework is for it to be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision- making modules that can perform a given set of tasks reliably. Risk-informed decision making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The implementation of the probabilistic portion of the decision-making engine of the proposed supervisory control system was detailed in previous milestone reports. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic multi-attribute decision-making framework uses variable sensor data (e.g., outlet temperature) and calculates where it is within the challenge state, its trajectory, and margin within the controllable domain using utility functions to evaluate current and projected plant state space for different control decisions. Metrics to be evaluated include stability, cost, time to complete (action), power level, etc. The

  7. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    Science.gov (United States)

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided with this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p cranial nerves. Probabilistic tracking with a gradual

  8. Development of a model for unsteady deterministic stresses adapted to the multi-stages turbomachines simulation; Developpement d'un modele de tensions deterministes instationnaires adapte a la simulation de turbomachines multi-etagees

    Energy Technology Data Exchange (ETDEWEB)

    Charbonnier, D.

    2004-12-15

    The physical phenomena observed in turbomachines are generally three-dimensional and unsteady. A recent study revealed that a three-dimensional steady simulation can reproduce the time-averaged unsteady phenomena, since the steady flow field equations integrate deterministic stresses. The objective of this work is thus to develop an unsteady deterministic stresses model. The analogy with turbulence makes it possible to write transport equations for these stresses. The equations are implemented in steady flow solver and e model for the energy deterministic fluxes is also developed and implemented. Finally, this work shows that a three-dimensional steady simulation, by taking into account unsteady effects with transport equations of deterministic stresses, increases the computing time by only approximately 30 %, which remains very interesting compared to an unsteady simulation. (author)

  9. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    International Nuclear Information System (INIS)

    Wang Zhi-Gang; Gao Rui-Mei; Fan Xiao-Ming; Han Qi-Xing

    2014-01-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate, the basic reproduction number ℛ 0 , a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show if ℛ 0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail, the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ 0 is less than or equal to 1, then the infective disappear so the disease dies out. In addition, stochastic noises around the endemic equilibrium will be added to the deterministic MSIR model in order that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis on the asymptotic behavior of the stochastic model. In addition, regarding the value of ℛ 0 , when the stochastic system obeys some conditions and ℛ 0 is greater than 1, we deduce the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations. (general)

  10. Towards deterministic optical quantum computation with coherently driven atomic ensembles

    International Nuclear Information System (INIS)

    Petrosyan, David

    2005-01-01

    Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons

  11. Recent achievements of the neo-deterministic seismic hazard assessment in the CEI region

    International Nuclear Information System (INIS)

    Panza, G.F.; Vaccari, F.; Kouteva, M.

    2008-03-01

    A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on the deterministic seismic wave propagation modelling at different scales - regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contribution of the earthquake source and seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on CEI region concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas are shown. (author)

  12. Stick-Slip Analysis of a Drill String Subjected to Deterministic Excitation and Stochastic Excitation

    Directory of Open Access Journals (Sweden)

    Hongyuan Qiu

    2016-01-01

    Full Text Available Using a finite element model, this paper investigates the torsional vibration of a drill string under combined deterministic excitation and random excitation. The random excitation is caused by the random friction coefficients between the drill bit and the bottom of the hole and assumed as white noise. Simulation shows that the responses under random excitation become random too, and the probabilistic distribution of the responses at each discretized time instant is obtained. The two points, entering and leaving the stick stage, are examined with special attention. The results indicate that the two points become random under random excitation, and the distributions are not normal even when the excitation is assumed as Gaussian white noise.

  13. Deterministic algorithms for multi-criteria Max-TSP

    NARCIS (Netherlands)

    Manthey, Bodo

    2012-01-01

    We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of

  14. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    Science.gov (United States)

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  15. Configurable Crossbar Switch for Deterministic, Low-latency Inter-blade Communications in a MicroTCA Platform

    Energy Technology Data Exchange (ETDEWEB)

    Karamooz, Saeed [Vadatech Inc. (United States); Breeding, John Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Justice, T Alan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-08-01

    As MicroTCA expands into applications beyond the telecommunications industry from which it originated, it faces new challenges in the area of inter-blade communications. The ability to achieve deterministic, low-latency communications between blades is critical to realizing a scalable architecture. In the past, legacy bus architectures accomplished inter-blade communications using dedicated parallel buses across the backplane. Because of limited fabric resources on its backplane, MicroTCA uses the carrier hub (MCH) for this purpose. Unfortunately, MCH products from commercial vendors are limited to standard bus protocols such as PCI Express, Serial Rapid IO and 10/40GbE. While these protocols have exceptional throughput capability, they are neither deterministic nor necessarily low-latency. To overcome this limitation, an MCH has been developed based on the Xilinx Virtex-7 690T FPGA. This MCH provides the system architect/developer complete flexibility in both the interface protocol and routing of information between blades. In this paper, we present the application of this configurable MCH concept to the Machine Protection System under development for the Spallation Neutron Sources's proton accelerator. Specifically, we demonstrate the use of the configurable MCH as a 12x4-lane crossbar switch using the Aurora protocol to achieve a deterministic, low-latency data link. In this configuration, the crossbar has an aggregate bandwidth of 48 GB/s.

  16. Deterministic Properties of Serially Connected Distributed Lag Models

    Directory of Open Access Journals (Sweden)

    Piotr Nowak

    2013-01-01

    Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable and/or in series (where the independent variable is also the dependent variable in the preceding model. This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory such as probability distributions and the central limit theorem. (original abstract

  17. Deterministic nanoparticle assemblies: from substrate to solution

    International Nuclear Information System (INIS)

    Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J

    2014-01-01

    The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)

  18. Deterministic hazard quotients (HQs): Heading down the wrong road

    International Nuclear Information System (INIS)

    Wilde, L.; Hunter, C.; Simpson, J.

    1995-01-01

    The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates further risk evaluation is needed, but an HQ ≤ 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measurements of associated uncertainty. Sensitivity analyses were used to identify the most important factors significantly influencing risk determination. Understanding uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches

  19. On Notions of Security for Deterministic Encryption, and Efficient Constructions Without Random Oracles

    NARCIS (Netherlands)

    S. Boldyreva; S. Fehr (Serge); A. O'Neill; D. Wagner

    2008-01-01

    textabstractThe study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO ’07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic

  20. A deterministic and stochastic model for the system dynamics of tumor-immune responses to chemotherapy

    Science.gov (United States)

    Liu, Xiangdong; Li, Qingze; Pan, Jianxin

    2018-06-01

    Modern medical studies show that chemotherapy can help most cancer patients, especially for those diagnosed early, to stabilize their disease conditions from months to years, which means the population of tumor cells remained nearly unchanged in quite a long time after fighting against immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed to characterize the dynamical change of tumor cells and immune cells in this paper. The basic dynamical properties, such as boundedness, existence and stability of equilibrium points, are investigated in the deterministic model. Extended stochastic models include stochastic differential equations (SDEs) model and continuous-time Markov chain (CTMC) model, which accounts for the variability in cellular reproduction, growth and death, interspecific competitions, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirms the obtained theoretical results.

  1. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    Science.gov (United States)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicate that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P to S conversions. Surface waves do not play a significant role in this coda. Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical Pm

  2. Phase conjugation with random fields and with deterministic and random scatterers

    International Nuclear Information System (INIS)

    Gbur, G.; Wolf, E.

    1999-01-01

    The theory of distortion correction by phase conjugation, developed since the discovery of this phenomenon many years ago, applies to situations when the field that is conjugated is monochromatic and the medium with which it interacts is deterministic. In this Letter a generalization of the theory is presented that applies to phase conjugation of partially coherent waves interacting with either deterministic or random weakly scattering nonabsorbing media. copyright 1999 Optical Society of America

  3. Reliability assessment of Indian Point Unit 3 containment structure under combined loads

    International Nuclear Information System (INIS)

    Hwang, H.; Shinozuka, M.; Kawakami, J.; Reich, M.

    1984-01-01

    In the current design criteria, the load combinations specified for design of concrete containment structures are in the deterministic format. However, by applying the probability-based reliability analysis method developed by BNL to the concrete containment structures designed according to the criteria, it is possible to evaluate the reliability levels implied in the current design criteria. For this purpose, the reliability analysis is applied to the Indian Point Unit No. 3 containment. The details of the containment structure such as the geometries and the rebar arrangements, etc., are taken from the working drawings and the Final Safety Analysis Report. Three kinds of loads are considered in the reliability analysis. They are, dead load, accidental pressure due to a large LOCA, and earthquake ground acceleration. This paper presents the reliability analysis results of the Indian Point Unit 3 containment subjected to all combinations of loads

  4. A Numerical Simulation for a Deterministic Compartmental ...

    African Journals Online (AJOL)

    In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Eulers numerical method. Using hypothetical values for the parameters, a program was written in VISUAL BASIC programming language to generate series for the system of difference equations from the ...

  5. Overcoming a limitation of deterministic dense coding with a nonmaximally entangled initial state

    International Nuclear Information System (INIS)

    Bourdon, P. S.; Gerjuoy, E.

    2010-01-01

    Under two-party deterministic dense coding, Alice communicates (perfectly distinguishable) messages to Bob via a qudit from a pair of entangled qudits in pure state |Ψ>. If |Ψ> represents a maximally entangled state (i.e., each of its Schmidt coefficients is √(1/d)), then Alice can convey to Bob one of d 2 distinct messages. If |Ψ> is not maximally entangled, then Ji et al. [Phys. Rev. A 73, 034307 (2006)] have shown that under the original deterministic dense-coding protocol, in which messages are encoded by unitary operations performed on Alice's qudit, it is impossible to encode d 2 -1 messages. Encoding d 2 -2 messages is possible; see, for example, the numerical studies by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. Answering a question raised by Wu et al. [Phys. Rev. A 73, 042311 (2006)], we show that when |Ψ> is not maximally entangled, the communications limit of d 2 -2 messages persists even when the requirement that Alice encode by unitary operations on her qudit is weakened to allow encoding by more general quantum operators. We then describe a dense-coding protocol that can overcome this limitation with high probability, assuming the largest Schmidt coefficient of |Ψ> is sufficiently close to √(1/d). In this protocol, d 2 -2 of the messages are encoded via unitary operations on Alice's qudit, and the final (d 2 -1)-th message is encoded via a non-trace-preserving quantum operation.

  6. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    Science.gov (United States)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

    Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic codes redaction, is based on Poissonian description of the temporal occurrence, negative exponential distribution of magnitude and attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach stems into the fact that is presently a standard for the majority of countries, but there are weak points in particular regarding the physical description of the earthquake phenomenon. Factors like site effects, source characteristics like duration of the strong motion and directivity that could significantly influence the expected motion at the site are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but its prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches in selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, that caused widespread liquefaction phenomena unusually for magnitude less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long since recognized as relevant to induce liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, the economic factors are relevant in the choice of the approach. The case history of 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions

  7. Precision production: enabling deterministic throughput for precision aspheres with MRF

    Science.gov (United States)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  8. Classification and unification of the microscopic deterministic traffic models.

    Science.gov (United States)

    Yang, Bo; Monterola, Christopher

    2015-10-01

    We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.

  9. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    Harrisson, G.; Marleau, G.

    2012-01-01

    The Canadian SCWR has the potential to achieve the goals that the generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations are the most appropriate self-shielding options to use in this case of study. In addition, the 89 groups WIMS-AECL library for slight enriched uranium and the 172 groups WLUP library for a mixture of plutonium and thorium give the most consistent results with those of SERPENT. (authors)

  10. Are deterministic methods suitable for short term reserve planning?

    International Nuclear Information System (INIS)

    Voorspools, Kris R.; D'haeseleer, William D.

    2005-01-01

    Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks if the N-1 reserve is a logical fixed value for minutes reserve. The second test procedure investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, resulting in low reliability for reserve margins lower than the largest unit and high reliability above, is not observed. The second test shows that both the N-1 reserve and the percentage reserve methods do not provide a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that the probability based methods are to be preferred over the deterministic methods

  11. The Relation between Deterministic Thinking and Mental Health among Substance Abusers Involved in a Rehabilitation Program

    Directory of Open Access Journals (Sweden)

    Seyed Jalal Younesi

    2015-06-01

    Full Text Available Objective: The current research is to investigate the relation between deterministic thinking and mental health among drug abusers, in which the role of  cognitive distortions is considered and clarified by focusing on deterministic thinking. Methods: The present study is descriptive and correlative. All individuals with experience of drug abuse who had been referred to the Shafagh Rehabilitation center (Kahrizak were considered as the statistical population. 110 individuals who were addicted to drugs (stimulants and Methamphetamine were selected from this population by purposeful sampling to answer questionnaires about deterministic thinking and general health. For data analysis Pearson coefficient correlation and regression analysis was used. Results: The results showed that there is a positive and significant relationship between deterministic thinking and the lack of mental health at the statistical level [r=%22, P<0.05], which had the closest relation to deterministic thinking among the factors of mental health, such as anxiety and depression. It was found that the two factors of deterministic thinking which function as the strongest variables that predict the lack of mental health are: definitiveness in predicting tragic events and future anticipation. Discussion: It seems that drug abusers suffer from deterministic thinking when they are confronted with difficult situations, so they are more affected by depression and anxiety. This way of thinking may play a major role in impelling or restraining drug addiction.

  12. Deterministic chaos in entangled eigenstates

    Science.gov (United States)

    Schlegel, K. G.; Förster, S.

    2008-05-01

    We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show for a two particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigen-values the maximum Lyapunov-parameters of a representative ensemble of trajectories for large times develops to a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present in short results from two time-dependent systems, the anisotropic and the Rabi oscillator.

  13. Deterministic chaos in entangled eigenstates

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, K.G. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)], E-mail: guenter.schlegel@arcor.de; Foerster, S. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)

    2008-05-12

    We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show for a two particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigen-values the maximum Lyapunov-parameters of a representative ensemble of trajectories for large times develops to a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present in short results from two time-dependent systems, the anisotropic and the Rabi oscillator.

  14. Deterministic chaos in entangled eigenstates

    International Nuclear Information System (INIS)

    Schlegel, K.G.; Foerster, S.

    2008-01-01

    We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show for a two particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigen-values the maximum Lyapunov-parameters of a representative ensemble of trajectories for large times develops to a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present in short results from two time-dependent systems, the anisotropic and the Rabi oscillator

  15. Deterministic and stochastic evolution equations for fully dispersive and weakly nonlinear waves

    DEFF Research Database (Denmark)

    Eldeberky, Y.; Madsen, Per A.

    1999-01-01

    and stochastic formulations are solved numerically for the case of cross shore motion of unidirectional waves and the results are verified against laboratory data for wave propagation over submerged bars and over a plane slope. Outside the surf zone the two model predictions are generally in good agreement......This paper presents a new and more accurate set of deterministic evolution equations for the propagation of fully dispersive, weakly nonlinear, irregular, multidirectional waves. The equations are derived directly from the Laplace equation with leading order nonlinearity in the surface boundary...... is significantly underestimated for larger wave numbers. In the present work we correct this inconsistency. In addition to the improved deterministic formulation, we present improved stochastic evolution equations in terms of the energy spectrum and the bispectrum for multidirectional waves. The deterministic...

  16. Analysis of pinching in deterministic particle separation

    Science.gov (United States)

    Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German

    2011-11-01

    We investigate the problem of spherical particles vertically settling parallel to Y-axis (under gravity), through a pinching gap created by an obstacle (spherical or cylindrical, center at the origin) and a wall (normal to X axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately, (2) computationally, using the lattice Boltzmann method for particulate systems and (3) experimentally, by conducting macroscopic experiments. Both, theory and simulations, show that for a given initial separation between the particle centre and the Y-axis, presence of a wall pushes the particles closer to the obstacle, than its absence. Experimentally, this is expected to result in an early onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET- 0731032, CMMI-0748094, and CBET-0954840.

  17. One-step deterministic multipartite entanglement purification with linear optics

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Yu-Bo [Department of Physics, Tsinghua University, Beijing 100084 (China); Long, Gui Lu, E-mail: gllong@tsinghua.edu.cn [Department of Physics, Tsinghua University, Beijing 100084 (China); Center for Atomic and Molecular NanoSciences, Tsinghua University, Beijing 100084 (China); Key Laboratory for Quantum Information and Measurements, Beijing 100084 (China); Deng, Fu-Guo [Department of Physics, Applied Optics Beijing Area Major Laboratory, Beijing Normal University, Beijing 100875 (China)

    2012-01-09

    We present a one-step deterministic multipartite entanglement purification scheme for an N-photon system in a Greenberger–Horne–Zeilinger state with linear optical elements. The parties in quantum communication can in principle obtain a maximally entangled state from each N-photon system with a success probability of 100%. That is, it does not consume the less-entangled photon systems largely, which is far different from other multipartite entanglement purification schemes. This feature maybe make this scheme more feasible in practical applications. -- Highlights: ► We proposed a deterministic entanglement purification scheme for GHZ states. ► The scheme uses only linear optical elements and has a success probability of 100%. ► The scheme gives a purified GHZ state in just one-step.

  18. HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks

    Directory of Open Access Journals (Sweden)

    Luca Marchetti

    2017-01-01

    Full Text Available HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementation of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA. HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA.

  19. Safety margins in deterministic safety analysis

    International Nuclear Information System (INIS)

    Viktorov, A.

    2011-01-01

    The concept of safety margins has acquired certain prominence in the attempts to demonstrate quantitatively the level of the nuclear power plant safety by means of deterministic analysis, especially when considering impacts from plant ageing and discovery issues. A number of international or industry publications exist that discuss various applications and interpretations of safety margins. The objective of this presentation is to bring together and examine in some detail, from the regulatory point of view, the safety margins that relate to deterministic safety analysis. In this paper, definitions of various safety margins are presented and discussed along with the regulatory expectations for them. Interrelationships of analysis input and output parameters with corresponding limits are explored. It is shown that the overall safety margin is composed of several components each having different origins and potential uses; in particular, margins associated with analysis output parameters are contrasted with margins linked to the analysis input. While these are separate, it is possible to influence output margins through the analysis input, and analysis method. Preserving safety margins is tantamount to maintaining safety. At the same time, efficiency of operation requires optimization of safety margins taking into account various technical and regulatory considerations. For this, basic definitions and rules for safety margins must be first established. (author)

  20. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    Science.gov (United States)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    Distribution system network today is facing the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as season of the year, day of the week or time of the day. For deterministic radial distribution load flow studies load is taken as constant. But, load varies continually with a high degree of uncertainty. So, there is a need to model probable realistic load. Monte-Carlo Simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load and for solving a Deterministic Radial Load Flow with these values. The probabilistic solution is reconstructed from deterministic data obtained for each simulation. The main contribution of the work is: Finding impact of probable realistic ZIP load modeling on balanced radial distribution load flow. Finding impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow. Compare the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.

  1. A deterministic approach for performance assessment and optimization of power distribution units in Iran

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.

    2009-01-01

    This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA, Spearman and Kendall's Tau correlation techniques, while previous studies do not have the verification and validation features. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for assessment and optimization of power distributions in Iran

  2. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present the algorithms to produce deterministic finite automaton (DFA for extended operators in regular expressions like intersection, subtraction and complement. The method like “overriding” of the source NFA(NFA not defined with subset construction rules is used. The past work described only the algorithm for AND-operator (or intersection of regular languages; in this paper the construction for the MINUS-operator (and complement is shown.

  3. Deterministic multimode photonic device for quantum-information processing

    DEFF Research Database (Denmark)

    Nielsen, Anne E. B.; Mølmer, Klaus

    2010-01-01

    We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...

  4. Method to deterministically study photonic nanostructures in different experimental instruments.

    Science.gov (United States)

    Husken, B H; Woldering, L A; Blum, C; Vos, W L

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the nanostructure is made during the fabrication of the structure. These maps are made using a series of micrographs with successively decreasing magnifications. The graphs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation; a scanning electron microscope (SEM); a wide field optical microscope and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM in a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.

  5. Deterministic factor analysis: methods of integro-differentiation of non-integral order

    Directory of Open Access Journals (Sweden)

    Valentina V. Tarasova

    2016-12-01

    Full Text Available Objective to summarize the methods of deterministic factor economic analysis namely the differential calculus and the integral method. nbsp Methods mathematical methods for integrodifferentiation of nonintegral order the theory of derivatives and integrals of fractional nonintegral order. Results the basic concepts are formulated and the new methods are developed that take into account the memory and nonlocality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integrodifferentiation of nonintegral order for the deterministic factor analysis of economic processes with memory and nonlocality. It is shown that the method of integrodifferentiation of nonintegral order can give more accurate results compared with standard methods method of differentiation using the first order derivatives and the integral method using the integration of the first order for a wide class of functions describing effective economic indicators. Scientific novelty the new methods of deterministic factor analysis are proposed the method of differential calculus of nonintegral order and the integral method of nonintegral order. Practical significance the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method for integrodifferentiation of nonintegral order extends the capabilities of the determined factorial economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of economic agents behavior with memory hereditarity and spatial nonlocality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law in which the indicators endogenous variables are power functions of the factors exogenous variables including the processes

  6. Study on deterministic response time design for a class of nuclear Instrumentation and Control systems

    International Nuclear Information System (INIS)

    Chen, Chang-Kuo; Hou, Yi-You; Luo, Cheng-Long

    2012-01-01

    Highlights: ► An efficient design procedure for deterministic response time design of nuclear I and C system. ► We model the concurrent operations based on sequence diagrams and Petri nets. ► The model can achieve the deterministic behavior by using symbolic time representation. ► An illustrative example of the bistable processor logic is given. - Abstract: This study is concerned with a deterministic response time design for computer-based systems in the nuclear industry. In current approach, Petri nets are used to model the requirement of a system specified with sequence diagrams. Also, the linear logic is proposed to characterize the state of changes in the Petri net model accurately by using symbolic time representation for the purpose of acquiring deterministic behavior. An illustrative example of the bistable processor logic is provided to demonstrate the practicability of the proposed approach.

  7. Deterministic Brownian motion generated from differential delay equations.

    Science.gov (United States)

    Lei, Jinzhi; Mackey, Michael C

    2011-10-01

    This paper addresses the question of how Brownian-like motion can arise from the solution of a deterministic differential delay equation. To study this we analytically study the bifurcation properties of an apparently simple differential delay equation and then numerically investigate the probabilistic properties of chaotic solutions of the same equation. Our results show that solutions of the deterministic equation with randomly selected initial conditions display a Gaussian-like density for long time, but the densities are supported on an interval of finite measure. Using these chaotic solutions as velocities, we are able to produce Brownian-like motions, which show statistical properties akin to those of a classical Brownian motion over both short and long time scales. Several conjectures are formulated for the probabilistic properties of the solution of the differential delay equation. Numerical studies suggest that these conjectures could be "universal" for similar types of "chaotic" dynamics, but we have been unable to prove this.
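
    As a rough illustration of the idea described above (not the authors' specific equation or parameters), the following Python/NumPy sketch integrates a chaotic delay-differential equation of Mackey-Glass type, chosen here only as a stand-in, and uses its solution as a velocity whose integral yields a Brownian-like trajectory; the printed mean-squared displacement should grow roughly linearly in time if the motion is diffusive.

        import numpy as np

        # Euler integration of a chaotic delay-differential equation (Mackey-Glass form,
        # used only as a generic chaotic DDE; all parameters are illustrative).
        rng = np.random.default_rng(0)
        dt, tau, steps = 0.1, 17.0, 20000
        lag = int(tau / dt)
        v = np.zeros(steps)
        v[:lag + 1] = 0.5 + 0.1 * rng.random(lag + 1)   # random history / initial condition

        for n in range(lag, steps - 1):
            v_lag = v[n - lag]
            dv = 0.2 * v_lag / (1.0 + v_lag ** 10) - 0.1 * v[n]   # delayed feedback term
            v[n + 1] = v[n] + dt * dv

        # Use the deterministic, chaotic solution as a velocity and integrate it
        # to obtain a Brownian-like position trajectory.
        x = np.cumsum((v - v.mean()) * dt)

        # Crude diagnostic: mean-squared displacement versus time lag.
        lags = np.arange(100, 2000, 400)
        msd = [np.mean((x[l:] - x[:-l]) ** 2) for l in lags]
        print("MSD vs time lag:", [(round(float(l * dt), 1), round(float(m), 4)) for l, m in zip(lags, msd)])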

  8. Non deterministic finite automata for power systems fault diagnostics

    Directory of Open Access Journals (Sweden)

    LINDEN, R.

    2009-06-01

    Full Text Available This paper introduces an application based on finite non-deterministic automata for power systems diagnosis. Automata for the simpler faults are presented and the proposed system is compared with an established expert system.

  9. Chaos emerging in soil failure patterns observed during tillage: Normalized deterministic nonlinear prediction (NDNP) and its application.

    Science.gov (United States)

    Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V

    2017-03-01

    Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns from stochastic processes under specific soil conditions. We normalized the deterministic nonlinear prediction considering autocorrelation and propose it as a robust way of extracting a nonlinear dynamical system from noise contaminated motion. Soil is a typical granular material. The results obtained here are expected to be applicable to granular materials in general. From a global scale to nano scale, the granular material is featured in seismology, geotechnology, soil mechanics, and particle technology. The results and discussions presented here are applicable in these wide research areas. The proposed method and our findings are useful with respect to the application of nonlinear dynamics to investigate complex motions generated from granular materials.

  10. A deterministic width function model

    Directory of Open Access Journals (Sweden)

    C. E. Puente

    2003-01-01

    Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
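
    To make the notion of a fractal interpolating function concrete, here is a short Python/NumPy sketch that evaluates a generic FIF through a few hypothetical interpolation points by the chaos game; the interpolation points and vertical scaling factors are illustrative and are not the FM parameters fitted to the Racoon or Brushy creek data.

        import numpy as np

        # Chaos-game evaluation of a fractal interpolating function (FIF) through a few
        # hypothetical interpolation points (x_i, y_i); the points and vertical scaling
        # factors d_n are illustrative, not the FM parameters of any real network.
        pts = np.array([[0.0, 0.0], [0.3, 0.8], [0.6, 0.3], [1.0, 0.0]])
        d = [0.4, -0.3, 0.35]                      # vertical scalings, |d_n| < 1

        x0, y0 = pts[0]
        xN, yN = pts[-1]
        maps = []
        for n in range(1, len(pts)):
            xp, yp = pts[n - 1]                    # previous interpolation point
            xc, yc = pts[n]                        # current interpolation point
            dn = d[n - 1]
            a = (xc - xp) / (xN - x0)
            e = (xN * xp - x0 * xc) / (xN - x0)
            c = (yc - yp - dn * (yN - y0)) / (xN - x0)
            f = (xN * yp - x0 * yc - dn * (xN * y0 - x0 * yN)) / (xN - x0)
            maps.append((a, e, c, dn, f))

        rng = np.random.default_rng(6)
        x, y = 0.5, 0.0
        graph = []
        for _ in range(20000):                     # chaos game: random composition of the affine maps
            a, e, c, dn, f = maps[rng.integers(len(maps))]
            x, y = a * x + e, c * x + dn * y + f   # the orbit converges onto the graph of the FIF
            graph.append((x, y))

        graph = np.array(graph[100:])              # discard the transient
        print("sampled", len(graph), "points on the FIF; x in",
              round(float(graph[:, 0].min()), 3), "..", round(float(graph[:, 0].max()), 3))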

  11. Deterministic models for energy-loss straggling

    International Nuclear Information System (INIS)

    Prinja, A.K.; Gleicher, F.; Dunham, G.; Morel, J.E.

    1999-01-01

    Inelastic ion interactions with target electrons are dominated by extremely small energy transfers that are difficult to resolve numerically. The continuous-slowing-down (CSD) approximation is then commonly employed, which, however, only preserves the mean energy loss per collision through the stopping power, S(E) = ∫₀^∞ dE′ (E − E′) σ_s(E → E′). To accommodate energy loss straggling, a Gaussian distribution with the correct mean-squared energy loss (akin to a Fokker-Planck approximation in energy) is commonly used in continuous-energy Monte Carlo codes. Although this model has the unphysical feature that ions can be upscattered, it nevertheless yields accurate results. A multigroup model for energy loss straggling was recently presented for use in multigroup Monte Carlo codes or in deterministic codes that use multigroup data. The method has the advantage that the mean and mean-squared energy loss are preserved without unphysical upscatter and hence is computationally efficient. Results for energy spectra compared extremely well with Gaussian distributions under the idealized conditions for which the Gaussian may be considered to be exact. Here, the authors present more consistent comparisons by extending the method to accommodate upscatter and, further, compare both methods with exact solutions obtained from an analog Monte Carlo simulation, for a straight-ahead transport problem
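
    A schematic Monte Carlo step implementing the Gaussian straggling model described above might look as follows (Python/NumPy); the stopping power and straggling variance used here are placeholders, not evaluated cross-section data.

        import numpy as np

        # Schematic Monte Carlo slowing-down step with Gaussian energy-loss straggling:
        # the energy lost over a path step is sampled from a normal distribution whose
        # mean is the CSD loss S(E)*ds and whose variance models the mean-squared loss.
        def stopping_power(E):
            return 0.5 * np.sqrt(E)            # placeholder S(E) [MeV/cm]

        def straggling_variance(E, ds):
            return 0.01 * E * ds               # placeholder mean-squared fluctuation per step

        rng = np.random.default_rng(0)
        E, ds = 10.0, 0.1                      # initial energy [MeV], step length [cm]
        n_steps = 0
        while E > 0.5:
            mean_loss = stopping_power(E) * ds
            sigma = np.sqrt(straggling_variance(E, ds))
            loss = rng.normal(mean_loss, sigma)    # Gaussian straggling around the CSD loss
            loss = max(loss, 0.0)                  # clipped here; the plain Gaussian model would
                                                   # allow the unphysical upscatter noted above
            E -= loss
            n_steps += 1

        print(f"ion slowed below 0.5 MeV after {n_steps} steps; final energy {E:.3f} MeV")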

  12. Deterministic global optimization an introduction to the diagonal approach

    CERN Document Server

    Sergeyev, Yaroslav D

    2017-01-01

    This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...

  13. A deterministic evaluation of heat stress mitigation and feed cost under climate change within the smallholder dairy sector.

    Science.gov (United States)

    York, L; Heffernan, C; Rymer, C; Panda, N

    2017-05-01

    In the global South, dairying is often promoted as a means of poverty alleviation. Yet, under conditions of climate warming, little is known regarding the ability of small-scale dairy producers to maintain production and/or the robustness of possible adaptation options in meeting the challenges presented, particularly heat stress. The authors created a simple, deterministic model to explore the influence of breed and heat stress relief options on smallholder dairy farmers in Odisha, India. Breeds included indigenous Indian (non-descript), low-grade Jersey crossbreed and high-grade Jersey crossbreed. Relief strategies included providing shade, fanning and bathing. The impacts of predicted critical global climate parameters, namely 2°C and 4°C temperature rises, were explored. A feed price scenario was modelled to illustrate the importance of feed in impact estimation. Feed costs were increased by 10% to 30%. Across the simulations, high-grade Jersey crossbreeds maintained higher milk yields, despite being the most sensitive to the negative effects of temperature. Low-capital relief strategies were the most effective at reducing heat stress impacts on household income. However, as feed costs increased, the lower-grade Jersey crossbreed became the most profitable breed. The high-grade Jersey crossbreed was only marginally (4.64%) more profitable than the indigenous breed. The results demonstrate the importance of understanding the factors and practical trade-offs that underpin adaptation. The model also highlights the need for hot-climate dairying projects and programmes to consider animal genetic resources alongside environmentally sustainable adaptation measures for greatest poverty impact.

  14. Exploring the stochastic and deterministic aspects of cyclic emission variability on a high speed spark-ignition engine

    International Nuclear Information System (INIS)

    Karvountzis-Kontakiotis, A.; Dimaratos, A.; Ntziachristos, L.; Samaras, Z.

    2017-01-01

    This study contributes to the understanding of cycle-to-cycle emissions variability (CEV) in premixed spark-ignition combustion engines. A number of experimental investigations of cycle-to-cycle combustion variability (CCV) exist in published literature; however only a handful of studies deal with CEV. This study experimentally investigates the impact of CCV on CEV of NO and CO, utilizing experimental results from a high-speed spark-ignition engine. Both CEV and CCV are shown to comprise a deterministic and a stochastic component. Results show that at maximum brake torque (MBT) operation, the indicated mean effective pressure (IMEP) maximizes and its coefficient of variation (COV_IMEP) minimizes, leading to minimum variation of NO. NO variability and hence mean NO levels can be reduced by more than 50% and 30%, respectively, at advanced ignition timing, by controlling the deterministic CCV using cycle-resolved combustion control. The deterministic component of CEV increases at lean combustion (λ = 1.12) and this overall increases NO variability. CEV was also found to decrease with engine load. At steady speed, increasing throttle position from 20% to 80% decreased COV_IMEP, COV_NO and COV_CO by 59%, 46%, and 6% respectively. Highly resolved engine control, by means of cycle-to-cycle combustion control, appears to be key to limiting the deterministic feature of cyclic variability and thus to reducing overall emission levels. - Highlights: • Engine emissions variability comprises both stochastic and deterministic components. • Lean and diluted combustion conditions increase emissions variability. • Advanced ignition timing enhances the deterministic component of variability. • Load increase decreases the deterministic component of variability. • The deterministic component can be reduced by highly resolved combustion control.

  15. Empirical and deterministic accuracies of across-population genomic prediction

    NARCIS (Netherlands)

    Wientjes, Y.C.J.; Veerkamp, R.F.; Bijma, P.; Bovenhuis, H.; Schrooten, C.; Calus, M.P.L.

    2015-01-01

    Background: Differences in linkage disequilibrium and in allele substitution effects of QTL (quantitative trait loci) may hinder genomic prediction across populations. Our objective was to develop a deterministic formula to estimate the accuracy of across-population genomic prediction, for which

  16. Practical applications of probabilistic structural reliability analyses to primary pressure systems of nuclear power plants

    International Nuclear Information System (INIS)

    Witt, F.J.

    1980-01-01

    Primary pressure systems of nuclear power plants are built to exacting codes and standards with provisions for inservice inspection and repair if necessary. Analyses and experiments have demonstrated by deterministic means that very large margins exist on safety impacting failures under normal operating and upset conditions. Probabilistic structural reliability analyses provide additional support that failures of significance are very, very remote. They may range in degree of sophistication from very simple calculations to very complex computer analyses involving highly developed mathematical techniques. The end result however should be consistent with the desired usage. In this paper a probabilistic structural reliability analysis is performed as a supplement to in-depth deterministic evaluations with the primary objective to demonstrate an acceptably low probability of failure for the conditions considered. (author)

  17. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    Science.gov (United States)

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
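
    The ODE-versus-CME comparison can be illustrated with a much simpler birth-death gene-expression model than the autoregulatory scheme treated in the paper; the following Python/NumPy sketch contrasts the deterministic mass-action ODE with Gillespie samples of the corresponding master equation (all rates are illustrative).

        import numpy as np

        # Toy birth-death gene-expression model: protein produced at rate k, degraded at rate g*n.
        k, g, T = 5.0, 0.1, 100.0

        # Deterministic ODE (law of mass action): dn/dt = k - g*n, integrated with explicit Euler.
        dt, n_ode = 0.01, 0.0
        for _ in range(int(T / dt)):
            n_ode += dt * (k - g * n_ode)

        # Chemical master equation sampled with the Gillespie stochastic simulation algorithm.
        rng = np.random.default_rng(1)
        def ssa_final_copy_number():
            t, n = 0.0, 0
            while t < T:
                production, degradation = k, g * n
                total = production + degradation
                t += rng.exponential(1.0 / total)
                if rng.random() < production / total:
                    n += 1
                else:
                    n -= 1
            return n

        samples = [ssa_final_copy_number() for _ in range(500)]
        print("ODE value at T:", round(n_ode, 2), "(fixed point k/g =", k / g, ")")
        print("CME sample mean:", round(float(np.mean(samples)), 2), "std:", round(float(np.std(samples)), 2))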

  18. Structural Optimization with Reliability Constraints

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1986-01-01

    During the last 25 years considerable progress has been made in the fields of structural optimization and structural reliability theory. In classical deterministic structural optimization all variables are assumed to be deterministic. Due to the unpredictability of loads and strengths of actual......]. In this paper we consider only structures which can be modelled as systems of elasto-plastic elements, e.g. frame and truss structures. In section 2 a method to evaluate the reliability of such structural systems is presented. Based on a probabilistic point of view a modern structural optimization problem...... is formulated in section 3. The formulation is a natural extension of the commonly used formulations in deterministic structural optimization. The mathematical form of the optimization problem is briefly discussed. In section 4 two new optimization procedures especially designed for the reliability...

  19. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab

  20. SU-C-BRC-03: Development of a Novel Strategy for On-Demand Monte Carlo and Deterministic Dose Calculation Treatment Planning and Optimization for External Beam Photon and Particle Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Y M; Bush, K; Han, B; Xing, L [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2016-06-15

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, deterministic dose calculations account for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high

  1. SU-C-BRC-03: Development of a Novel Strategy for On-Demand Monte Carlo and Deterministic Dose Calculation Treatment Planning and Optimization for External Beam Photon and Particle Therapy

    International Nuclear Information System (INIS)

    Yang, Y M; Bush, K; Han, B; Xing, L

    2016-01-01

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, deterministic dose calculations account for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high

  2. Mixed motion in deterministic ratchets due to anisotropic permeability

    NARCIS (Netherlands)

    Kulrattanarak, T.; Sman, van der R.G.M.; Lubbersen, Y.S.; Schroën, C.G.P.H.; Pham, H.T.M.; Sarro, P.M.; Boom, R.M.

    2011-01-01

    Nowadays microfluidic devices are becoming popular for cell/DNA sorting and fractionation. One class of these devices, namely deterministic ratchets, seems most promising for continuous fractionation applications of suspensions (Kulrattanarak et al., 2008 [1]). Next to the two main types of particle

  3. Deterministic blade row interactions in a centrifugal compressor stage

    Science.gov (United States)

    Kirtley, K. R.; Beach, T. A.

    1991-01-01

    The three-dimensional viscous flow in a low speed centrifugal compressor stage is simulated using an average passage Navier-Stokes analysis. The impeller discharge flow is of the jet/wake type with low momentum fluid in the shroud-pressure side corner coincident with the tip leakage vortex. This nonuniformity introduces periodic unsteadiness in the vane frame of reference. The effect of such deterministic unsteadiness on the time-mean is included in the analysis through the average passage stress, which allows the analysis of blade row interactions. The magnitude of the divergence of the deterministic unsteady stress is of the order of the divergence of the Reynolds stress over most of the span, from the impeller trailing edge to the vane throat. Although the potential effects on the blade trailing edge from the diffuser vane are small, strong secondary flows generated by the impeller degrade the performance of the diffuser vanes.

  4. A study of deterministic models for quantum mechanics

    International Nuclear Information System (INIS)

    Sutherland, R.

    1980-01-01

    A theoretical investigation is made into the difficulties encountered in constructing a deterministic model for quantum mechanics and into the restrictions that can be placed on the form of such a model. The various implications of the known impossibility proofs are examined. A possible explanation for the non-locality required by Bell's proof is suggested in terms of backward-in-time causality. The efficacy of the Kochen and Specker proof is brought into doubt by showing that there is a possible way of avoiding its implications in the only known physically realizable situation to which it applies. A new thought experiment is put forward to show that a particle's predetermined momentum and energy values cannot satisfy the laws of momentum and energy conservation without conflicting with the predictions of quantum mechanics. Attention is paid to a class of deterministic models for which the individual outcomes of measurements are not dependent on hidden variables associated with the measuring apparatus and for which the hidden variables of a particle do not need to be randomized after each measurement

  5. Deterministic and stochastic models for middle east respiratory syndrome (MERS)

    Science.gov (United States)

    Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning

    2018-03-01

    World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS) with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest MERS outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly through contact between an infected and a non-infected individual, or indirectly through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of the free virus in the environment. Mathematical modeling is used to describe the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic model, based on a Continuous Time Markov Chain (CTMC), is used to predict future states by means of random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models using several different parameters are shown, and the probability of disease extinction is compared for several initial conditions.
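
    As a purely illustrative sketch of the two modelling styles, using a generic SIR-type outbreak rather than the actual MERS compartments, parameters or environmental-virus pathway, the following Python/NumPy code contrasts the deterministic ODE dynamics with a CTMC (Gillespie) simulation from which an extinction-type probability can be estimated.

        import numpy as np

        # Generic SIR-type outbreak used only to contrast the two formulations (illustrative rates).
        beta, gamma, N, I0, T = 0.3, 0.1, 1000, 1, 160.0

        # Deterministic ODE model: temporal dynamics and final state.
        dt = 0.05
        S, I, R = float(N - I0), float(I0), 0.0
        for _ in range(int(T / dt)):
            new_inf = beta * S * I / N
            rec = gamma * I
            S -= dt * new_inf
            I += dt * (new_inf - rec)
            R += dt * rec
        print("ODE final size (recovered):", round(R, 1))

        # Stochastic CTMC model (Gillespie algorithm); unlike the ODE it can also
        # estimate the probability that the outbreak dies out early.
        rng = np.random.default_rng(3)
        def minor_outbreak():
            S, I, R = N - I0, I0, 0
            t = 0.0
            while I > 0 and t < T:
                infection, recovery = beta * S * I / N, gamma * I
                total = infection + recovery
                t += rng.exponential(1.0 / total)
                if rng.random() < infection / total:
                    S -= 1; I += 1
                else:
                    I -= 1; R += 1
            return R < 0.1 * N          # crude criterion for extinction / minor outbreak

        runs = 300
        print("CTMC estimate of P(minor outbreak):", sum(minor_outbreak() for _ in range(runs)) / runs)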

  6. Deterministic bound for avionics switched networks according to networking features using network calculus

    Directory of Open Access Journals (Sweden)

    Feng HE

    2017-12-01

    Full Text Available State-of-the-art avionics systems adopt switched networks for airborne communications. A major concern in the design of these networks is the ability to provide end-to-end guarantees. Analytic methods, such as network calculus and the trajectory approach, have been developed to compute worst-case delays according to the detailed configurations of flows and networks within the avionics context. What is still lacking is a method for rapid performance estimation based on typical switched-networking features, such as networking scale, bandwidth utilization and average flow rate. The goal of this paper is to establish a deterministic upper-bound analysis method that uses these networking features instead of the complete network configurations. Two deterministic upper bounds are proposed from a network calculus perspective: one for a basic estimation, and another that shows the benefits of a grouping strategy. In addition, a mathematical expression for the grouping ability is established based on the concept of network connecting degree, which characterizes the minimal possible grouping benefit. For a fully connected network with 4 switches and 12 end systems, the grouping ability arising from the grouping strategy is 15–20%, which coincides with the statistical data (18–22%) from the actual grouping advantage. Compared with the complete network calculus analysis method for individual flows, the effectiveness of the two deterministic upper bounds is no less than 38% even with remarkably varied packet lengths. Finally, the paper illustrates the design process for an industrial Avionics Full DupleX switched Ethernet (AFDX) networking case according to the two deterministic upper bounds and shows that better control of network connections, when designing a switched network, can improve the worst-case delays dramatically. Keywords: Deterministic bound, Grouping ability, Network calculus, Networking features, Switched networks
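
    For readers unfamiliar with network calculus, the following tiny Python example shows the kind of deterministic worst-case delay bound involved: a token-bucket-constrained flow crossing a concatenation of rate-latency servers. The numbers are illustrative, and the paper's feature-based bounds and grouping terms are not reproduced.

        # Basic network-calculus worst-case delay bound (pure Python; not the paper's
        # feature-based bound): a flow constrained by a token-bucket arrival curve
        # alpha(t) = sigma + rho*t crosses switches modelled by rate-latency service
        # curves beta(t) = R * max(t - T, 0).

        sigma = 4000.0        # burst size [bits]              (illustrative values)
        rho = 1.0e6           # sustained flow rate [bit/s]
        hops = [              # per-hop (service rate R [bit/s], latency T [s])
            (10.0e6, 40e-6),
            (10.0e6, 40e-6),
            (100.0e6, 16e-6),
        ]

        # Concatenation of rate-latency servers: rate = min of the rates, latency = sum of latencies.
        R_e2e = min(R for R, _ in hops)
        T_e2e = sum(T for _, T in hops)

        assert rho < R_e2e, "stability requires the flow rate to stay below every service rate"

        # Worst-case delay = horizontal deviation between arrival and service curves: sigma/R + T.
        delay_bound = sigma / R_e2e + T_e2e
        print(f"end-to-end worst-case delay bound: {delay_bound * 1e6:.1f} us")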

  7. Enhancing the photon-extraction efficiency of site-controlled quantum dots by deterministically fabricated microlenses

    Science.gov (United States)

    Kaganskiy, Arsenty; Fischbach, Sarah; Strittmatter, André; Rodt, Sven; Heindel, Tobias; Reitzenstein, Stephan

    2018-04-01

    We report on the realization of scalable single-photon sources (SPSs) based on single site-controlled quantum dots (SCQDs) and deterministically fabricated microlenses. The fabrication process comprises the buried-stressor growth technique complemented with low-temperature in-situ electron-beam lithography for the integration of SCQDs into microlens structures with high yield and high alignment accuracy. The microlens approach leads to a broadband enhancement of the photon-extraction efficiency of up to (21 ± 2)% and a high suppression of multi-photon events with low g(2)(τ = 0), yielding SPSs which can be applied in photonic quantum circuits and advanced quantum computation schemes.

  8. Deterministic Chaos - Complex Chance out of Simple Necessity ...

    Indian Academy of Sciences (India)

    This is a very lucid and lively book on deterministic chaos. Chaos is very common in nature. However, the understanding and realisation of its potential applications is very recent. Thus this book is a timely addition to the subject. There are several books on chaos and several more are being added every day. In spite of this ...

  9. Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Niels Jacob

    1994-01-01

    An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated and has to be considered for engineering application. Stochastic models have a potential for ...

  10. Simulation of quantum computation : A deterministic event-based approach

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, K; De Raedt, H

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  11. Simulation of Quantum Computation : A Deterministic Event-Based Approach

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, K. De; Raedt, H. De

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  12. Structuring evolution: biochemical networks and metabolic diversification in birds.

    Science.gov (United States)

    Morrison, Erin S; Badyaev, Alexander V

    2016-08-25

    Recurrence and predictability of evolution are thought to reflect the correspondence between genomic and phenotypic dimensions of organisms, and the connectivity in deterministic networks within these dimensions. Direct examination of the correspondence between opportunities for diversification embedded in such networks and realized diversity is illuminating, but is empirically challenging because both the deterministic networks and phenotypic diversity are modified in the course of evolution. Here we overcome this problem by directly comparing the structure of a "global" carotenoid network - comprising all known enzymatic reactions among naturally occurring carotenoids - with the patterns of evolutionary diversification in carotenoid-producing metabolic networks utilized by birds. We found that phenotypic diversification in carotenoid networks across 250 species was closely associated with enzymatic connectivity of the underlying biochemical network - compounds with greater connectivity occurred most frequently across species and were the hotspots of metabolic pathway diversification. In contrast, we found no evidence for diversification along the metabolic pathways, corroborating findings that the utilization of the global carotenoid network was not strongly influenced by history in avian evolution. The finding that the diversification in species-specific carotenoid networks is qualitatively predictable from the connectivity of the underlying enzymatic network points to significant structural determinism in phenotypic evolution.

  13. Deterministic dense coding and entanglement entropy

    International Nuclear Information System (INIS)

    Bourdon, P. S.; Gerjuoy, E.; McDonald, J. P.; Williams, H. T.

    2008-01-01

    We present an analytical study of the standard two-party deterministic dense-coding protocol, under which communication of perfectly distinguishable messages takes place via a qudit from a pair of nonmaximally entangled qudits in a pure state |ψ>. Our results include the following: (i) We prove that it is possible for a state |ψ> with lower entanglement entropy to support the sending of a greater number of perfectly distinguishable messages than one with higher entanglement entropy, confirming a result suggested via numerical analysis in Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. (ii) By explicit construction of families of local unitary operators, we verify, for dimensions d = 3 and d = 4, a conjecture of Mozes et al. about the minimum entanglement entropy that supports the sending of d + j messages, 2 ≤ j ≤ d − 1; moreover, we show that the j = 2 and j = d − 1 cases of the conjecture are valid in all dimensions. (iii) Given that |ψ> allows the sending of K messages and has √λ₀ as its largest Schmidt coefficient, we show that the inequality λ₀ ≤ d/K, established by Wu et al. [Phys. Rev. A 73, 042311 (2006)], must actually take the form λ₀ < d/K if K = d + 1, while our constructions of local unitaries show that equality can be realized if K = d + 2 or K = 2d − 1.

  14. Deterministic and Stochastic Study of Wind Farm Harmonic Currents

    DEFF Research Database (Denmark)

    Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus

    2010-01-01

    Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic char...

  15. Deterministic Predictions of Vessel Responses Based on Past Measurements

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Jensen, Jørgen Juncher

    2017-01-01

    The paper deals with a prediction procedure from which global wave-induced responses can be deterministically predicted a short time, 10-50 s, ahead of current time. The procedure relies on the autocorrelation function and takes into account prior measurements only; i.e. knowledge about wave...

  16. Deterministic teleportation using single-photon entanglement as a resource

    DEFF Research Database (Denmark)

    Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.

    2012-01-01

    We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...

  17. Deterministic entanglement purification and complete nonlocal Bell-state analysis with hyperentanglement

    International Nuclear Information System (INIS)

    Sheng Yubo; Deng Fuguo

    2010-01-01

    Entanglement purification is a very important element for long-distance quantum communication. Different from all the existing entanglement purification protocols (EPPs) in which two parties can only obtain some quantum systems in a mixed entangled state with a higher fidelity probabilistically by consuming quantum resources exponentially, here we present a deterministic EPP with hyperentanglement. Using this protocol, the two parties can, in principle, obtain deterministically maximally entangled pure states in polarization without destroying any less-entangled photon pair, which will improve the efficiency of long-distance quantum communication exponentially. Meanwhile, it will be shown that this EPP can be used to complete nonlocal Bell-state analysis perfectly. We also discuss this EPP in a practical transmission.

  18. A deterministic-probabilistic model for contaminant transport. User manual

    Energy Technology Data Exchange (ETDEWEB)

    Schwartz, F W; Crowe, A

    1980-08-01

    This manual describes a deterministic-probabilistic contaminant transport (DPCT) computer model designed to simulate mass transfer by ground-water movement in a vertical section of the earth's crust. The model can account for convection, dispersion, radioactive decay, and cation exchange for a single component. A velocity is calculated from the convective transport of the ground water for each reference particle in the modeled region; dispersion is accounted for in the particle motion by adding a random component to the deterministic motion. The model is sufficiently general to enable the user to specify virtually any type of water table or geologic configuration, and a variety of boundary conditions. A major emphasis in the model development has been placed on making the model simple to use, and information provided in the User Manual will permit changes to the computer code to be made relatively easily for those that might be required for specific applications. (author)
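
    The convection-dispersion particle mechanism described above can be sketched in one dimension as follows (Python/NumPy); decay and cation exchange are omitted and all parameter values are illustrative rather than taken from the manual.

        import numpy as np

        # 1-D random-walk particle tracking: deterministic convection plus random dispersion.
        rng = np.random.default_rng(4)
        v, D = 0.5, 0.05                     # velocity [m/day], dispersion coefficient [m^2/day]
        dt, steps, n_particles = 1.0, 200, 5000

        x = np.zeros(n_particles)            # all particles released at x = 0
        for _ in range(steps):
            x += v * dt                                               # deterministic convective step
            x += rng.normal(0.0, np.sqrt(2.0 * D * dt), n_particles)  # random dispersive step

        # The particle cloud approximates the advected, dispersing concentration plume.
        print("plume centre:", round(float(x.mean()), 2), "m  (expected", v * dt * steps, "m)")
        print("plume spread:", round(float(x.std()), 2), "m  (expected",
              round(float(np.sqrt(2.0 * D * dt * steps)), 2), "m)")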

  19. Bayesian analysis of deterministic and stochastic prisoner's dilemma games

    Directory of Open Access Journals (Sweden)

    Howard Kunreuther

    2009-08-01

    Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.

  20. Deterministic thermostats, theories of nonequilibrium systems and parallels with the ergodic condition

    International Nuclear Information System (INIS)

    Jepps, Owen G; Rondoni, Lamberto

    2010-01-01

    Deterministic 'thermostats' are mathematical tools used to model nonequilibrium steady states of fluids. The resulting dynamical systems correctly represent the transport properties of these fluids and are easily simulated on modern computers. More recently, the connection between such thermostats and entropy production has been exploited in the development of nonequilibrium fluid theories. The purpose and limitations of deterministic thermostats are discussed in the context of irreversible thermodynamics and the development of theories of nonequilibrium phenomena. We draw parallels between the development of such nonequilibrium theories and the development of notions of ergodicity in equilibrium theories. (topical review)

  1. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-12-01

    This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
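
    A minimal Python/NumPy sketch of the DUA idea, first-order propagation of input uncertainties through model sensitivities versus conventional statistical sampling, is given below; the simple flow function and input distributions are placeholders and are not the report's borehole model.

        import numpy as np

        # Hypothetical response model: a flow-like quantity as a nonlinear function of three
        # uncertain inputs. This is NOT the report's borehole model; it only illustrates the idea.
        def flow(k, h, mu):
            return k * h / mu

        means = np.array([1.0e-3, 50.0, 0.9])   # nominal input values (illustrative)
        stds  = np.array([2.0e-4, 5.0, 0.05])   # input standard deviations (illustrative)

        # Deterministic uncertainty analysis: first-order propagation through sensitivities.
        # Derivatives are taken by central differences here for brevity; systems such as
        # GRESS/ADGEN would supply them analytically from the model equations.
        grad = np.zeros(3)
        for i in range(3):
            dx = np.zeros(3)
            dx[i] = 1e-6 * means[i]
            grad[i] = (flow(*(means + dx)) - flow(*(means - dx))) / (2 * dx[i])
        var_dua = np.sum((grad * stds) ** 2)     # assumes independent inputs

        # Conventional statistical propagation (Monte Carlo sampling) for comparison.
        rng = np.random.default_rng(2)
        samples = rng.normal(means, stds, size=(100_000, 3))
        mc = flow(samples[:, 0], samples[:, 1], samples[:, 2])

        print("DUA (first-order) std:", float(np.sqrt(var_dua)))
        print("Monte Carlo       std:", float(mc.std()))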

  2. Deterministic linear-optics quantum computing based on a hybrid approach

    International Nuclear Information System (INIS)

    Lee, Seung-Woo; Jeong, Hyunseok

    2014-01-01

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources

  3. Deterministic linear-optics quantum computing based on a hybrid approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Woo; Jeong, Hyunseok [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742 (Korea, Republic of)

    2014-12-04

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.

  4. Advanced in-situ electron-beam lithography for deterministic nanophotonic device processing

    Energy Technology Data Exchange (ETDEWEB)

    Kaganskiy, Arsenty; Gschrey, Manuel; Schlehahn, Alexander; Schmidt, Ronny; Schulze, Jan-Hindrik; Heindel, Tobias; Rodt, Sven, E-mail: srodt@physik.tu-berlin.de; Reitzenstein, Stephan [Institut für Festkörperphysik, Technische Universität Berlin, Hardenbergstraße 36, D-10623 Berlin (Germany); Strittmatter, André [Institut für Festkörperphysik, Technische Universität Berlin, Hardenbergstraße 36, D-10623 Berlin (Germany); Otto-von-Guericke Universität Magdeburg, Universitätsplatz 2, D-39106 Magdeburg (Germany)

    2015-07-15

    We report on an advanced in-situ electron-beam lithography technique based on high-resolution cathodoluminescence (CL) spectroscopy at low temperatures. The technique has been developed for the deterministic fabrication and quantitative evaluation of nanophotonic structures. It is of particular interest for the realization and optimization of non-classical light sources which require the pre-selection of single quantum dots (QDs) with very specific emission features. The two-step electron-beam lithography process comprises (a) the detailed optical study and selection of target QDs by means of CL-spectroscopy and (b) the precise retrieval of the locations and integration of target QDs into lithographically defined nanostructures. Our technology platform allows for a detailed pre-process determination of important optical and quantum optical properties of the QDs, such as the emission energies of excitonic complexes, the excitonic fine-structure splitting, the carrier dynamics, and the quantum nature of emission. In addition, it enables a direct and precise comparison of the optical properties of a single QD before and after integration which is very beneficial for the quantitative evaluation of cavity-enhanced quantum devices.

  5. Deterministic or Probabilistic - Robustness or Resilience: How to Respond to Climate Change?

    Science.gov (United States)

    Plag, H.; Earnest, D.; Jules-Plag, S.

    2013-12-01

    suggests an intriguing hypothesis: disaster risk reduction programs need to account for whether they also facilitate the public trust, cooperation, and communication needed to recover from a disaster. Our work in the Hampton Roads area, where the probability of hazardous flooding and inundation events exceeding the thresholds of the infrastructure is high, suggests that to facilitate the paradigm shift from the deterministic to a probabilistic approach, natural sciences have to focus on hazard probabilities, while engineering and social sciences have to work together to understand how interactions of the built and social environments impact robustness and resilience. The current science-policy relationship needs to be augmented by social structures that can learn from previous unexpected events. In this response to climate change, science does not have the primary goal to reduce uncertainties and prediction errors, but rather to develop processes that can utilize uncertainties and surprises to increase robustness, strengthen resilience, and reduce fragility of the social systems during times when infrastructure fails.

  6. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
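
    A toy Python/NumPy illustration of the fission-matrix idea referenced above (not the thesis' implementation in MCNP) is given below: a synthetic fission matrix with a dominance ratio close to one is iterated plainly and then solved directly as an eigenvalue problem, the latter giving the accelerated estimate of the fundamental-mode source shape.

        import numpy as np

        # F[i, j] = expected number of next-generation fission neutrons born in region i
        # per fission neutron born in region j; here a synthetic 1-D coupling chosen only
        # to give a dominance ratio close to one (i.e. slow source convergence).
        n = 20
        F = np.zeros((n, n))
        for j in range(n):
            F[j, j] = 0.6
            if j > 0:
                F[j - 1, j] = 0.2
            if j < n - 1:
                F[j + 1, j] = 0.2

        # Plain (unaccelerated) source iteration after 10 generations.
        s = np.ones(n) / n
        for _ in range(10):
            s = F @ s
            s /= s.sum()

        # Fission-matrix acceleration: solve the eigenvalue problem of the tallied matrix
        # directly; the fundamental eigenvector is the converged source-shape estimate.
        vals, vecs = np.linalg.eig(F)
        idx = vals.real.argmax()
        k_eff = vals.real[idx]
        fundamental = np.abs(vecs[:, idx].real)
        fundamental /= fundamental.sum()

        print("k estimate from fission matrix:", round(float(k_eff), 4))
        print("L1 distance of 10 plain iterations from fundamental mode:",
              round(float(np.abs(s - fundamental).sum()), 4))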

  7. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors

  8. Convergence studies of deterministic methods for LWR explicit reflector methodology

    International Nuclear Information System (INIS)

    Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.

    2013-01-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified to potentially constitute one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  9. Progress in nuclear well logging modeling using deterministic transport codes

    International Nuclear Information System (INIS)

    Kodeli, I.; Aldama, D.L.; Maucec, M.; Trkov, A.

    2002-01-01

    Further studies, in continuation of the work presented in 2001 in Portoroz, were performed in order to study and improve the performance, precision and domain of application of deterministic transport codes with respect to oil well logging analysis. These codes are in particular expected to complement the Monte Carlo solutions, since they can provide a detailed particle flux distribution in the whole geometry in a very reasonable CPU time. Real-time calculation can be envisaged. The performance of deterministic transport methods was compared to that of the Monte Carlo method. The IRTMBA generic benchmark was analysed using the codes MCNP-4C and DORT/TORT. Centric as well as eccentric casings were considered using a 14 MeV point neutron source and NaI scintillation detectors. Neutron and gamma spectra were compared at two detector positions. (author)

  10. Design of deterministic OS for SPLC

    International Nuclear Information System (INIS)

    Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop

    2012-01-01

    Existing safety PLCs for use in nuclear power plants operate based on priority-based scheduling, in which the highest priority task runs first. This type of scheduling scheme determines processing priorities when there are multiple requests for processing or when there is a lack of resources available for processing, guaranteeing execution of higher priority tasks. This type of scheduling is prone to exhaustion of resources and continuous preemptions by devices with high priorities, and therefore there is uncertainty in every period in terms of the smooth running of the overall system. Hence, it is difficult to apply this type of scheme where deterministic operation is required, such as in a nuclear power plant. Also, existing PLCs either have no output logic with regard to redundant device selection or it is set in a fixed way; as a result they are extremely inefficient for redundant systems such as those of a nuclear power plant, and their use is limited. Therefore, functional modules that can manage and control all devices need to be developed by improving, and making more flexible, the way priorities are assigned among the devices. A management module should be able to schedule all devices of the system, manage resources, analyze the states of the devices, give warnings in case of abnormal situations such as device failure or resource scarcity, and decide how to handle them. Also, the management module should have output logic for device redundancy, as well as deterministic processing capabilities, for example with regard to device interrupt events

  11. ZERODUR: deterministic approach for strength design

    Science.gov (United States)

    Hartmann, Peter

    2012-12-01

    There is an increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems. The data sets were too small to obtain distribution parameters with sufficient accuracy and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed not to be feasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to provide a much better fit to the data. Moreover, it delivers a lower threshold value, i.e. a minimum value for the breakage stress, which allows statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either the lifetime can be calculated from a given stress, or the allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two
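
    The kind of calculation implied by the three-parameter Weibull approach can be sketched as follows (Python); the threshold, scale and modulus below are placeholders, not measured ZERODUR parameters.

        import math

        def weibull3_failure_prob(stress, sigma0, eta, m):
            """Three-parameter Weibull CDF: failure probability at a given stress.
            sigma0 is the strength threshold (minimum breakage stress), eta the scale, m the modulus."""
            if stress <= sigma0:
                return 0.0   # below the threshold the predicted failure probability is exactly zero
            return 1.0 - math.exp(-(((stress - sigma0) / eta) ** m))

        # Placeholder parameters for illustration only (NOT fitted ZERODUR values).
        sigma0, eta, m = 50.0, 30.0, 5.0        # MPa, MPa, dimensionless

        for s in (45.0, 55.0, 70.0, 90.0):
            print(f"stress {s:5.1f} MPa -> failure probability {weibull3_failure_prob(s, sigma0, eta, m):.4f}")

        # The existence of the threshold sigma0 is what permits a deterministic (threshold-based)
        # design strength instead of an extrapolated statistical one.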

  12. Transmission power control in WSNs : from deterministic to cognitive methods

    NARCIS (Netherlands)

    Chincoli, M.; Liotta, A.; Gravina, R.; Palau, C.E.; Manso, M.; Liotta, A.; Fortino, G.

    2018-01-01

    Communications in Wireless Sensor Networks (WSNs) are affected by dynamic environments, variable signal fluctuations and interference. Thus, prompt actions are necessary to achieve dependable communications and meet Quality of Service (QoS) requirements. To this end, the deterministic algorithms

  13. Diffusion in Deterministic Interacting Lattice Systems

    Science.gov (United States)

    Medenjak, Marko; Klobas, Katja; Prosen, Tomaž

    2017-09-01

    We study reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining exact expressions for the current time-autocorrelation function, we are able to calculate the linear-response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. Exact analytical results are corroborated by Monte Carlo simulations.

  14. Solving difficult problems creatively: A role for energy optimised deterministic/stochastic hybrid computing

    Directory of Open Access Journals (Sweden)

    Tim ePalmer

    2015-10-01

    Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  15. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing.

    Science.gov (United States)

    Palmer, Tim N; O'Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  16. Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?

    Science.gov (United States)

    Choustova, Olga

    2007-02-01

    We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of shares. The main distinguishing feature of the financial Bohmian model is the possibility to take into account market psychology by describing the expectations of traders by the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model. In particular, the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of the modern market ideology. Another objection is of a purely mathematical nature: it is related to the quadratic variation of price trajectories. One possibility to reply to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.

  17. Deterministic ripple-spreading model for complex networks.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
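
    The determinism-given-fixed-initialisation property described in point (ii) can be illustrated with a drastically simplified, hypothetical generator (a sketch of the idea only, not the authors' model): a ripple expands from each epicentre node, loses energy with distance, and creates a link to every node it reaches while still above a response threshold. For fixed node positions and parameters the resulting edge list is unique; randomness enters only through the initialisation.

```python
# Drastically simplified, hypothetical ripple-spreading generator: with the
# node positions and ripple parameters fixed, the output topology is unique.
import numpy as np

def ripple_network(n_nodes, epicentres, decay=1.5, threshold=0.4, seed=0):
    rng = np.random.default_rng(seed)            # randomness only in initialisation
    pos = rng.random((n_nodes, 2))
    edges = set()
    for e in epicentres:
        d = np.linalg.norm(pos - pos[e], axis=1)
        for i in np.argsort(d)[1:]:              # nodes reached in order of distance
            energy = 1.0 - decay * d[i]          # ripple amplitude decays with radius
            if energy < threshold:
                break                            # ripple has died out
            edges.add(tuple(sorted((e, int(i)))))  # link epicentre to reached node
    return pos, sorted(edges)

pos, edges = ripple_network(50, epicentres=[3, 17, 42])
print(len(edges), "links; first few:", edges[:5])
```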

  18. Langevin equation with the deterministic algebraically correlated noise

    Energy Technology Data Exchange (ETDEWEB)

    Ploszajczak, M. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]; Srokowski, T. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Institute of Nuclear Physics, Cracow (Poland)]

    1995-12-31

    Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author). 58 refs.

  19. Langevin equation with the deterministic algebraically correlated noise

    International Nuclear Information System (INIS)

    Ploszajczak, M.; Srokowski, T.

    1995-01-01

    Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author)

  20. Chaotic transitions in deterministic and stochastic dynamical systems applications of Melnikov processes in engineering, physics, and neuroscience

    CERN Document Server

    Simiu, Emil

    2002-01-01

    The classical Melnikov method provides information on the behavior of deterministic planar systems that may exhibit transitions, i.e. escapes from and captures into preferred regions of phase space. This book develops a unified treatment of deterministic and stochastic systems that extends the applicability of the Melnikov method to physically realizable stochastic planar systems with additive, state-dependent, white, colored, or dichotomous noise. The extended Melnikov method yields the novel result that motions with transitions are chaotic regardless of whether the excitation is deterministic or stochastic. It explains the role in the occurrence of transitions of the characteristics of the system and its deterministic or stochastic excitation, and is a powerful modeling and identification tool. The book is designed primarily for readers interested in applications. The level of preparation required corresponds to the equivalent of a first-year graduate course in applied mathematics. No previous exposure to d...

  1. Impact of mesh points number on the accuracy of deterministic calculations of control rods worth for Tehran research reactor

    International Nuclear Information System (INIS)

    Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad

    2016-01-01

    In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.

  2. Impact of mesh points number on the accuracy of deterministic calculations of control rods worth for Tehran research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Boustani, Ehsan [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.; Khakshournia, Samad [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.

    2016-12-15

    In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.

  3. Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992

    Energy Technology Data Exchange (ETDEWEB)

    Rice, A.F.; Roussin, R.W. [eds.]

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  4. Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992

    Energy Technology Data Exchange (ETDEWEB)

    Rice, A. F.; Roussin, R. W. [eds.]

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  5. Beeping a Deterministic Time-Optimal Leader Election

    OpenAIRE

    Dufoulon , Fabien; Burman , Janna; Beauquier , Joffroy

    2018-01-01

    The beeping model is an extremely restrictive broadcast communication model that relies only on carrier sensing. In this model, we solve the leader election problem with an asymptotically optimal round complexity of O(D + log n), for a network of unknown size n and unknown diameter D (but with unique identifiers). Contrary to the best previously known algorithms in the same setting, the proposed one is deterministic. The techniques we introduce give a new insight as to how local constraints o...

  6. Bayesian deterministic decision making: A normative account of the operant matching law and heavy-tailed reward history dependency of choices

    Directory of Open Access Journals (Sweden)

    Hiroshi eSaito

    2014-03-01

    Full Text Available The decision making behaviors of humans and animals adapt and then satisfy an "operant matching law" in certain types of tasks. This was first pointed out by Herrnstein in his foraging experiments on pigeons. The matching law has been a landmark for elucidating the underlying processes of decision making and its learning in the brain. An interesting question is whether decisions are made deterministically or probabilistically. Conventional learning models of the matching law are based on the latter idea; they assume that subjects learn choice probabilities of the respective alternatives and decide stochastically with those probabilities. However, it is unknown whether the matching law can be accounted for by a deterministic strategy or not. To answer this question, we propose several deterministic Bayesian decision making models that have certain incorrect beliefs about the environment. We claim that a simple model produces behavior satisfying the matching law in static settings of a foraging task but not in dynamic settings. We found that a model holding the belief that the environment is volatile works well in the dynamic foraging task and exhibits undermatching, a slight deviation from the matching law observed in many experiments. This model also demonstrates the double-exponential reward history dependency of a choice and a heavier-tailed run-length distribution, as has recently been reported in experiments on monkeys.

  7. A deterministic combination of numerical and physical models for coastal waves

    DEFF Research Database (Denmark)

    Zhang, Haiwen

    2006-01-01

    Numerical and physical modelling are the two main tools available for predicting the influence of water waves on coastlines and structures placed in the near-shore environment. Numerical models can cover large areas at the correct scale, but are limited in their ability to capture strong nonlinearities, wave breaking, splash, mixing, and other such complicated physics. Physical models naturally include the real physics (at the model scale), but are limited by the physical size of the facility and must contend with the fact that different physical effects scale differently. An integrated use of numerical and physical modelling hence provides an attractive alternative to the use of either tool on its own. The goal of this project has been to develop a deterministically combined numerical/physical model where the physical wave tank is enclosed in a much larger computational domain, and the two...

  8. Using a satisfiability solver to identify deterministic finite state automata

    NARCIS (Netherlands)

    Heule, M.J.H.; Verwer, S.

    2009-01-01

    We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we

  9. Deterministic and stochastic trends in the Lee-Carter mortality model

    DEFF Research Database (Denmark)

    Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene

    2015-01-01

    The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age...... mortality data. We find empirical evidence that this feature of the Lee–Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee......–Carter model will otherwise overestimate the reduction of mortality for the younger age groups and will underestimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee–Carter model instead of a one-factor model should be formulated as a two- (or several...

  10. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    Science.gov (United States)

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied on each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT and additional fiber tracts and processing time. Two sets of regions-of-interests (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking lasted substantially longer than deterministic. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. A deterministic, gigabit serial timing, synchronization and data link for the RHIC LLRF

    International Nuclear Information System (INIS)

    Hayes, T.; Smith, K.S.; Severino, F.

    2011-01-01

    A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.

  12. Thermal behavior of spatial structures under solar irradiation

    International Nuclear Information System (INIS)

    Liu, Hongbo; Liao, Xiangwei; Chen, Zhihua; Zhang, Qian

    2015-01-01

    The temperature, particularly the non-uniform temperature under solar irradiation, is the main load for large-span steel structures. Due to the shortage of on-site temperature tests in previous studies, an on-site test was conducted on a large-span steel structure under solar irradiation, covered partly by a glass roof and partly by a light roof, to gain insight into the temperature distribution of steel members under glass or light roofs. A numerical method was also presented and verified to forecast the temperature of steel members under a glass or light roof. Based on the on-site measurements and numerical analyses conducted, the following conclusions were obtained: 1) a remarkable temperature difference exists between steel members under the glass roof and those under the light roof, 2) solar irradiation has a significant effect on the temperature distribution and thermal behavior of large-span spatial structures, 3) negative thermal load is the controlling factor for member stress, and positive thermal load is the controlling factor for nodal displacement. - Highlights: • Temperature was measured for a steel structure under glass and light roofs. • A temperature simulation method was presented and verified. • The thermal behavior of steel structures under glass or light roofs was presented

  13. Deterministic quantum state transfer and remote entanglement using microwave photons.

    Science.gov (United States)

    Kurpiers, P; Magnard, P; Walter, T; Royer, B; Pechal, M; Heinsoo, J; Salathé, Y; Akin, A; Storz, S; Besse, J-C; Gasparinetti, S; Blais, A; Wallraff, A

    2018-06-01

    Sharing information coherently between nodes of a quantum network is fundamental to distributed quantum information processing. In this scheme, the computation is divided into subroutines and performed on several smaller quantum registers that are connected by classical and quantum channels 1 . A direct quantum channel, which connects nodes deterministically rather than probabilistically, achieves larger entanglement rates between nodes and is advantageous for distributed fault-tolerant quantum computation 2 . Here we implement deterministic state-transfer and entanglement protocols between two superconducting qubits fabricated on separate chips. Superconducting circuits 3 constitute a universal quantum node 4 that is capable of sending, receiving, storing and processing quantum information 5-8 . Our implementation is based on an all-microwave cavity-assisted Raman process 9 , which entangles or transfers the qubit state of a transmon-type artificial atom 10 with a time-symmetric itinerant single photon. We transfer qubit states by absorbing these itinerant photons at the receiving node, with a probability of 98.1 ± 0.1 per cent, achieving a transfer-process fidelity of 80.02 ± 0.07 per cent for a protocol duration of only 180 nanoseconds. We also prepare remote entanglement on demand with a fidelity as high as 78.9 ± 0.1 per cent at a rate of 50 kilohertz. Our results are in excellent agreement with numerical simulations based on a master-equation description of the system. This deterministic protocol has the potential to be used for quantum computing distributed across different nodes of a cryogenic network.

  14. A deterministic seismic hazard map of India and adjacent areas

    International Nuclear Information System (INIS)

    Parvez, Imtiyaz A.; Vaccari, Franco; Panza, Giuliano

    2001-09-01

    A seismic hazard map of the territory of India and adjacent areas has been prepared using a deterministic approach based on the computation of synthetic seismograms complete with all main phases. The input data set consists of structural models, seismogenic zones, focal mechanisms and an earthquake catalogue. The synthetic seismograms have been generated by the modal summation technique. The seismic hazard, expressed in terms of maximum displacement (DMAX), maximum velocity (VMAX), and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid of 0.2 deg. x 0.2 deg. over the studied territory. The estimated values of the peak ground acceleration are compared with the observed data available for the Himalayan region and found to be in good agreement. Many parts of the Himalayan region have DGA values exceeding 0.6 g. The epicentral areas of the great Assam earthquakes of 1897 and 1950 represent the maximum hazard, with DGA values reaching 1.2-1.3 g. (author)

  15. Deterministic transfer of two-dimensional materials by all-dry viscoelastic stamping

    International Nuclear Information System (INIS)

    Castellanos-Gomez, Andres; Buscema, Michele; Molenaar, Rianda; Singh, Vibhor; Janssen, Laurens; Van der Zant, Herre S J; Steele, Gary A

    2014-01-01

    The deterministic transfer of two-dimensional crystals constitutes a crucial step towards the fabrication of heterostructures based on the artificial stacking of two-dimensional materials. Moreover, controlling the positioning of two-dimensional crystals facilitates their integration in complex devices, which enables the exploration of novel applications and the discovery of new phenomena in these materials. To date, deterministic transfer methods rely on the use of sacrificial polymer layers and wet chemistry to some extent. Here, we develop an all-dry transfer method that relies on viscoelastic stamps and does not employ any wet chemistry step. This is found to be very advantageous to freely suspend these materials as there are no capillary forces involved in the process. Moreover, the whole fabrication process is quick, efficient, clean and it can be performed with high yield. (letter)

  16. Deterministic and probabilistic approach to safety analysis

    International Nuclear Information System (INIS)

    Heuser, F.W.

    1980-01-01

    The examples discussed in this paper show that reliability analysis methods can be applied fairly well to interpret deterministic safety criteria in quantitative terms. For a further improved extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)

  17. A Deterministic Approach to the Synchronization of Cellular Automata

    OpenAIRE

    Garcia, J.; Garcia, P.

    2011-01-01

    In this work we introduce a deterministic scheme of synchronization of linear and nonlinear cellular automata (CA) with complex behavior, connected through a master-slave coupling. By using a definition of Boolean derivative, we use the linear approximation of the automata to determine a function of coupling that promotes synchronization without perturbing all the sites of the slave system.
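
    A toy version of master-slave synchronization of cellular automata can be sketched as follows: at every step a random fraction of slave cells is pinned to the master. This is a generic coupling used only for illustration, not the Boolean-derivative coupling function proposed in the record above. For the linear rule 90 the disagreement dies out once the pinned fraction is large enough.

```python
# Toy master-slave synchronisation of an elementary cellular automaton
# (rule 90): each step, a random fraction of slave cells is pinned to the
# master. Illustrative generic coupling, not the paper's scheme.
import numpy as np

def step_rule90(state):
    return np.roll(state, 1) ^ np.roll(state, -1)    # x_i(t+1) = x_{i-1} XOR x_{i+1}

rng = np.random.default_rng(1)
n, coupling_fraction = 200, 0.7
master = rng.integers(0, 2, n)
slave = rng.integers(0, 2, n)                        # different initial condition

for t in range(60):
    master, slave = step_rule90(master), step_rule90(slave)
    pinned = rng.random(n) < coupling_fraction       # sites copied from the master
    slave[pinned] = master[pinned]
    hamming = int(np.sum(master != slave))
    if t % 5 == 0 or hamming == 0:
        print(f"t={t:3d}  Hamming distance = {hamming}")
    if hamming == 0:
        break                                        # slave has locked onto the master
```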

  18. Deterministic Electrical Charge-State Initialization of Single Nitrogen-Vacancy Center in Diamond

    Directory of Open Access Journals (Sweden)

    Y. Doi

    2014-03-01

    Full Text Available Apart from applications in classical information-processing devices, the electrical control of atomic defects in solids at room temperature will have a tremendous impact on quantum devices that are based on such defects. In this study, we demonstrate the electrical manipulation of individual prominent representatives of such atomic solid-state defects, namely, the negative charge state of single nitrogen-vacancy defect centers (NV^{−}) in diamond. We experimentally demonstrate deterministic, purely electrical charge-state initialization of individual NV centers. The NV centers are placed in the intrinsic region of a p-i-n diode structure that facilitates the delivery of charge carriers to the defect for charge-state switching. The charge-state dynamics of a single NV center were investigated by time-resolved measurements and a nondestructive single-shot readout of the charge state. Fast charge-state switching rates (from negative to neutrally charged defects), greater than 0.72 ± 0.10 μs^{−1}, were realized. Furthermore, in no-operation mode, the realized charge states were stable for presumably much more than 0.45 s. We believe that the results obtained are useful not only for ultrafast electrical control of qubits, long T_{2} quantum memory, and quantum sensors associated with single NV centers but also for classical memory devices based on single atomic storage bits working under ambient conditions.

  19. Deterministic Greedy Routing with Guaranteed Delivery in 3D Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Su Xia

    2014-05-01

    Full Text Available With both computational complexity and storage space bounded by a small constant, greedy routing is recognized as an appealing approach to support scalable routing in wireless sensor networks. However, significant challenges have been encountered in extending greedy routing from 2D to 3D space. In this research, we develop decentralized solutions to achieve greedy routing in 3D sensor networks. Our proposed approach is based on a unit tetrahedron cell (UTC mesh structure. We propose a distributed algorithm to realize volumetric harmonic mapping (VHM of the UTC mesh under spherical boundary condition. It is a one-to-one map that yields virtual coordinates for each node in the network without or with one internal hole. Since a boundary has been mapped to a sphere, node-based greedy routing is always successful thereon. At the same time, we exploit the UTC mesh to develop a face-based greedy routing algorithm and prove its success at internal nodes. To deliver a data packet to its destination, face-based and node-based greedy routing algorithms are employed alternately at internal and boundary UTCs, respectively. For networks with multiple internal holes, a segmentation and tunnel-based routing strategy is proposed on top of VHM to support global end-to-end routing. As far as we know, this is the first work that realizes truly deterministic routing with constant-bounded storage and computation in general 3D wireless sensor networks.
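
    The baseline node-based greedy rule that the record builds on is easy to state in code: forward the packet to the neighbour closest to the destination, and declare failure when no neighbour is closer (a local minimum). The sketch below uses a tiny hypothetical 3D topology; the UTC mesh, volumetric harmonic mapping and face-based fallback of the paper are precisely what remove the failure case.

```python
# Minimal node-based greedy geographic routing in 3D (a sketch of the
# baseline idea, not the UTC/volumetric-harmonic-mapping algorithm of the
# paper): forward to the neighbour closest to the destination; fail when
# stuck in a local minimum.
import math

def greedy_route(coords, neighbours, src, dst):
    """coords: node -> (x, y, z); neighbours: node -> list of nodes."""
    dist = lambda a, b: math.dist(coords[a], coords[b])
    path, current = [src], src
    while current != dst:
        closer = [v for v in neighbours[current] if dist(v, dst) < dist(current, dst)]
        if not closer:
            return path, False                   # stuck in a local minimum ("void")
        current = min(closer, key=lambda v: dist(v, dst))
        path.append(current)
    return path, True

# Hypothetical 5-node toy network.
coords = {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 1, 0), 3: (2, 1, 1), 4: (3, 1, 1)}
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(greedy_route(coords, neighbours, 0, 4))    # ([0, 1, 2, 3, 4], True)
```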

  20. Deterministic direct reprogramming of somatic cells to pluripotency.

    Science.gov (United States)

    Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H

    2013-10-03

    Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells to successfully and synchronously reprogram remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, result in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to establishing pluripotency at unprecedented flexibility and resolution.

  1. Characterisation of dispersion mechanisms in an urban catchment using a deterministic spatially distributed direct hydrograph travel time model

    Science.gov (United States)

    Rossel, F.; Gironas, J. A.

    2012-12-01

    The link between stream network structure and hydrologic response for natural basins has been extensively studied. It is well known that stream network organization and flow dynamics in the reaches combine to shape the hydrologic response of natural basins. Geomorphologic dispersion and hydrodynamic dispersion, along with hillslope processes, control to a large extent the overall variance of the hydrograph, particularly under the assumption of constant celerity throughout the basin. In addition, a third mechanism, referred to as kinematic dispersion, becomes relevant when considering spatial variations of celerity. In contrast, the link between the drainage network structure and overall urban terrain, and the hydrologic response in urban catchments, has been much less studied. In particular, the characterization of the different dispersion mechanisms within urban areas remains to be better understood. In such areas artificial elements are expected to contribute to the total dispersion due to the variety of geometries and the spatial distribution of imperviousness. This work quantifies the different dispersion mechanisms in an urban catchment, focusing on their relevance and the spatial scales involved. For this purpose we use the Urban Morpho-climatic Instantaneous Unit Hydrograph model, a deterministic spatially distributed direct hydrograph travel time model, which computes travel times in hillslope, pipe, street and channel cells using formulations derived from kinematic wave theory. The model was applied to the Aubeniere catchment, located in Nantes, France. Unlike stochastic models, this deterministic model allows the quantification of dispersion mechanisms at the local scale (i.e. the grid cell). We found that kinematic dispersion is more relevant for small storm events, whereas geomorphologic dispersion becomes more significant for larger storms, as the mean celerity within the catchment increases. In addition, the total dispersion relates to the drainage area in

  2. Bi and tri-objective optimization in the deterministic network interdiction problem

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Emmanuel Ramirez-Marquez, Jose; Salazar A, Daniel E.

    2010-01-01

    Solution approaches to the deterministic network interdiction problem have previously been developed for optimizing a single figure-of-merit of the network configuration (i.e. flow that can be transmitted between a source node and a sink node for a fixed network design) under constraints related to limited amount of resources available to interdict network links. These approaches work under the assumption that: (1) nominal capacity of each link is completely reduced when interdicted and (2) there is a single criterion to optimize. This paper presents a newly developed evolutionary algorithm that for the first time allows solving multi-objective optimization models for the design of network interdiction strategies that take into account a variety of figures-of-merit. The algorithm provides an approximation to the optimal Pareto frontier using: (a) techniques in Monte Carlo simulation to generate potential network interdiction strategies, (b) graph theory to analyze strategies' maximum source-sink flow and (c) an evolutionary search that is driven by the probability that a link will belong to the optimal Pareto set. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate and validate the approach.
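
    Only the inner evaluation step of such an approach is sketched below: given a candidate set of interdicted links, remove them and recompute the maximum source-sink flow of the residual network. The Monte Carlo generation of strategies and the evolutionary Pareto search of the paper are omitted; the networkx dependency and the toy capacities are assumptions.

```python
# Evaluating one candidate interdiction strategy: remove the interdicted
# links and recompute the maximum source-sink flow. Assumes networkx is
# available; the capacities are a made-up toy example.
import networkx as nx

def residual_max_flow(capacities, interdicted, source, sink):
    """capacities: dict {(u, v): capacity}; interdicted: set of removed links."""
    g = nx.DiGraph()
    for (u, v), cap in capacities.items():
        if (u, v) not in interdicted:
            g.add_edge(u, v, capacity=cap)
    if source not in g or sink not in g or not nx.has_path(g, source, sink):
        return 0
    value, _ = nx.maximum_flow(g, source, sink)
    return value

capacities = {("s", "a"): 10, ("s", "b"): 5, ("a", "t"): 6, ("a", "b"): 4, ("b", "t"): 8}
print(residual_max_flow(capacities, set(), "s", "t"))            # nominal flow: 14
print(residual_max_flow(capacities, {("a", "t")}, "s", "t"))     # one link interdicted: 8
```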

  3. Handbook of EOQ inventory problems stochastic and deterministic models and applications

    CERN Document Server

    Choi, Tsan-Ming

    2013-01-01

    This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
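
    The deterministic building block behind these problems is the classical EOQ trade-off between ordering and holding costs; a minimal sketch with the textbook formulas and hypothetical data (not taken from the handbook) is:

```python
# Classic deterministic EOQ quantities: optimal order quantity
# Q* = sqrt(2 K D / h) and the associated total relevant cost per period.
import math

def eoq(demand_rate, ordering_cost, holding_cost):
    q_star = math.sqrt(2.0 * ordering_cost * demand_rate / holding_cost)
    total_cost = ordering_cost * demand_rate / q_star + holding_cost * q_star / 2.0
    return q_star, total_cost

# Hypothetical data: D = 1200 units/year, K = 50 per order, h = 2 per unit-year.
q, c = eoq(1200, 50, 2)
print(f"Q* = {q:.1f} units, total ordering+holding cost = {c:.1f} per year")
```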

  4. Deterministic prediction of surface wind speed variations

    Directory of Open Access Journals (Sweden)

    G. V. Drisya

    2014-11-01

    Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
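
    One common deterministic (phase-space) predictor of the kind alluded to here is a delay-embedding, nearest-neighbour forecast: reconstruct the state space from past values, find the historical state closest to the current one, and read off what followed it. The embedding dimension, delay and the synthetic series below are assumptions for illustration, not the paper's settings or data.

```python
# Sketch of a delay-embedding nearest-neighbour forecast for a time series.
import numpy as np

def nn_forecast(series, m=4, tau=1, horizon=5):
    """Predict `horizon` future values from the last embedded state."""
    n = len(series)
    # Build delay vectors x_t = (s_t, s_{t-tau}, ..., s_{t-(m-1)tau}).
    idx = np.arange((m - 1) * tau, n - horizon)
    states = np.stack([series[idx - k * tau] for k in range(m)], axis=1)
    query = np.array([series[-1 - k * tau] for k in range(m)])
    nearest = idx[np.argmin(np.linalg.norm(states - query, axis=1))]
    return series[nearest + 1: nearest + 1 + horizon]        # follow the neighbour

rng = np.random.default_rng(0)
t = np.arange(2000)
wind = 8 + 2 * np.sin(0.07 * t) + 1.5 * np.sin(0.013 * t) + 0.1 * rng.standard_normal(2000)
print("next 5 steps:", np.round(nn_forecast(wind, m=4, tau=3, horizon=5), 2))
```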

  5. Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations

    Science.gov (United States)

    Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael

    2012-02-01

    We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. So the distribution of translocation times of a given monomer is controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation. We refer to this concept as "fingerprinting". The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening δt of the translocation times of the m-th monomer, δt ∼ m^1.5, is stronger than the thermal broadening, δt ∼ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.

  6. Probabilistic and deterministic risk assessment for extreme objects and ecologically hazardous systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Veryuzhsky

    2003-06-01

    Full Text Available The paper mostly includes results of work by the Research Institute for Mechanics of Quick-Proceeding Processes, united under a general research direction: creation of a methodology for risk assessment and risk management for ecologically hazardous systems consisting of a set of different analyzed technological objects. The elements of such a system can be characterized by high levels of radiation, toxic, explosion, fire and other hazards. A probabilistic and deterministic approach for risk assessment, based on mathematical methods of system analysis, non-linear dynamics and computer simulation, has been developed. Branching in the problem definition, as well as the diversity of factors and criteria for determining the system status, is also taken into account. The risks caused by both objective and subjective factors (including the human factor) are examined. In many of the studies performed, the leading structural element dominating the determination of system safety is the structural part of an object. The methodology is applied to the safety analysis (risk assessment) of the Chernobyl NPP Shelter Object and other industrial buildings.

  7. Towards the certification of non-deterministic control systems for safety-critical applications: analysing aviation analogies for possible certification strategies

    CSIR Research Space (South Africa)

    Burger, CR

    2011-11-01

    Full Text Available Current certification criteria for safety-critical systems exclude non-deterministic control systems. This paper investigates the feasibility of using human-like monitoring strategies to achieve safe non-deterministic control using multiple...

  8. A critical evaluation of deterministic methods in size optimisation of reliable and cost effective standalone hybrid renewable energy systems

    International Nuclear Information System (INIS)

    Maheri, Alireza

    2014-01-01

    Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case-scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of safety factors on the reliability and the cost are investigated. In performing reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs such as design for an autonomy period and employing safety factors have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
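
    The kind of check the record argues for can be sketched as follows: size a battery deterministically with a safety factor and an autonomy period, then estimate the unmet-load fraction by Monte Carlo under a random renewable resource. All numbers and distributions below are hypothetical; the point is only that the safety factor does not translate predictably into a reliability level.

```python
# Monte Carlo check of a deterministically sized stand-alone system.
# All sizes, loads and distributions are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(42)
hours, load = 24 * 365, 2.0                       # constant 2 kW load (assumption)
safety_factor, autonomy_hours = 1.2, 12
battery_capacity = safety_factor * autonomy_hours * load      # deterministic sizing, kWh

years, unmet = 100, 0.0
for _ in range(years):                            # simulated years of operation
    renewable = np.clip(rng.normal(2.2, 1.5, hours), 0, None)  # kW, random resource
    soc = battery_capacity
    for p in renewable:
        soc = min(battery_capacity, soc + p - load)
        if soc < 0:
            unmet += -soc                         # energy shortfall this hour (kWh)
            soc = 0.0
print("estimated unmet load fraction:", round(unmet / (years * hours * load), 4))
```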

  9. Siting criteria based on the prevention of deterministic effects from plutonium inhalation exposures

    International Nuclear Information System (INIS)

    Sorensen, S.A.; Low, J.O.

    1998-01-01

    Siting criteria are established by regulatory authorities to evaluate potential accident scenarios associated with proposed nuclear facilities. The 0.25 Sv (25 rem) siting criteria adopted in the United States has been historically based on the prevention of deterministic effects from acute, whole-body exposures. The Department of Energy has extended the applicability of this criterion to radionuclides that deliver chronic, organ-specific irradiation through the specification of a 0.25 Sv (25 rem) committed effective dose equivalent siting criterion. A methodology is developed to determine siting criteria based on the prevention of deterministic effects from inhalation intakes of radionuclides which deliver chronic, organ-specific irradiation. Revised siting criteria, expressed in terms of committed effective dose equivalent, are proposed for nuclear facilities that handle primarily plutonium compounds. The analysis determined that a siting criterion of 1.2 Sv (120 rem) committed effective dose equivalent for inhalation exposures to weapons-grade plutonium meets the historical goal of preventing deterministic effects during a facility accident scenario. The criterion also meets the Nuclear Regulatory Commission and Department of Energy Nuclear Safety Goals provided that the frequency of the accident is sufficiently low

  10. Quantitative diffusion tensor deterministic and probabilistic fiber tractography in relapsing-remitting multiple sclerosis

    International Nuclear Information System (INIS)

    Hu Bing; Ye Binbin; Yang Yang; Zhu Kangshun; Kang Zhuang; Kuang Sichi; Luo Lin; Shan Hong

    2011-01-01

    Purpose: Our aim was to study the quantitative fiber tractography variations and patterns in patients with relapsing-remitting multiple sclerosis (RRMS) and to assess the correlation between quantitative fiber tractography and the Expanded Disability Status Scale (EDSS). Material and methods: Twenty-eight patients with RRMS and 28 age-matched healthy volunteers underwent a diffusion tensor MR imaging study. Quantitative deterministic and probabilistic fiber tractography were generated in all subjects, and the mean numbers of tracked lines and fiber density were counted. Paired-samples t tests were used to compare tracked lines and fiber density in RRMS patients with those in controls. A bivariate linear regression model was used to determine the relationship between quantitative fiber tractography and EDSS in RRMS. Results: For both deterministic and probabilistic tractography, tracked lines and fiber density in RRMS patients were lower than those in controls (P < .001). For both methods, tracked lines and fiber density were negatively correlated with EDSS in RRMS (P < .001). The fiber tract disruptions and reductions in RRMS were directly visualized on fiber tractography. Conclusion: Changes of white matter tracts can be detected by quantitative diffusion tensor fiber tractography and correlate with clinical impairment in RRMS.

  11. Linking mothers and infants within electronic health records: a comparison of deterministic and probabilistic algorithms.

    Science.gov (United States)

    Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha

    2015-01-01

    To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) delivering a liveborn infant from 2001 through 2008 (N = 33,093 deliveries) and infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51 or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and PPV 99.3% (99.1-99.4), while the deterministic algorithm had sensitivity 74.5% (73.8-75.2) and PPV 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm, 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.
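
    The contrast between the two families of algorithms can be illustrated with a toy example (field names, weights and threshold are invented for illustration and are not the study's algorithms): a deterministic rule requires exact agreement on all keys, while a probabilistic score lets strong agreement on some fields outweigh disagreement on another.

```python
# Toy contrast between deterministic and probabilistic mother-infant linkage.
def deterministic_link(mother, infant):
    return (mother["surname"] == infant["surname"]
            and mother["address"] == infant["address"]
            and mother["delivery_date"] == infant["birth_date"])

def probabilistic_link(mother, infant, threshold=6.0):
    score = 0.0                                   # agreement weights, Fellegi-Sunter style
    score += 4.0 if mother["surname"] == infant["surname"] else -2.0
    score += 3.0 if mother["address"] == infant["address"] else -1.0
    score += 5.0 if mother["delivery_date"] == infant["birth_date"] else -4.0
    return score >= threshold

mother = {"surname": "Lopez", "address": "12 Oak St", "delivery_date": "2006-03-14"}
infant = {"surname": "Lopez-Gray", "address": "12 Oak St", "birth_date": "2006-03-14"}
print(deterministic_link(mother, infant))   # False: hyphenated surname breaks exact match
print(probabilistic_link(mother, infant))   # True: other fields outweigh the disagreement
```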

  12. Deterministic Design Optimization of Structures in OpenMDAO Framework

    Science.gov (United States)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were compared and reported in this report.
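
    A minimal sketch of setting up a component and a gradient-based driver in OpenMDAO is shown below, using the recent 3.x-style Python API, which differs from the 2012-era framework evaluated in this record; the paraboloid component simply stands in for a structural analysis.

```python
# Minimal OpenMDAO (3.x-style API) gradient-based optimisation sketch.
import openmdao.api as om

class Paraboloid(om.ExplicitComponent):
    def setup(self):
        self.add_input("x", val=0.0)
        self.add_input("y", val=0.0)
        self.add_output("f", val=0.0)
        self.declare_partials("*", "*", method="fd")   # finite-difference gradients

    def compute(self, inputs, outputs):
        x, y = inputs["x"], inputs["y"]
        outputs["f"] = (x - 3.0) ** 2 + x * y + (y + 4.0) ** 2 - 3.0

prob = om.Problem()
prob.model.add_subsystem("parab", Paraboloid(), promotes=["*"])
prob.driver = om.ScipyOptimizeDriver(optimizer="SLSQP")
prob.model.add_design_var("x", lower=-50.0, upper=50.0)
prob.model.add_design_var("y", lower=-50.0, upper=50.0)
prob.model.add_objective("f")
prob.setup()
prob.run_driver()
print("optimum:", prob.get_val("x"), prob.get_val("y"), prob.get_val("f"))
```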

  13. System-Enforced Deterministic Streaming for Efficient Pipeline Parallelism

    Institute of Scientific and Technical Information of China (English)

    张昱; 李兆鹏; 曹慧芳

    2015-01-01

    Pipeline parallelism is a popular parallel programming pattern for emerging applications. However, programming pipelines directly on conventional multithreaded shared memory is difficult and error-prone. We present DStream, a C library that provides high-level abstractions of deterministic threads and streams for simply representing pipeline stage workers and their communications. The deterministic stream is established atop our proposed single-producer/multi-consumer (SPMC) virtual memory, which integrates synchronization with the virtual memory model to enforce determinism on shared memory accesses. We investigate various strategies on how to efficiently implement DStream atop the SPMC memory, so that an infinite sequence of data items can be asynchronously published (fixed) and asynchronously consumed in order among adjacent stage workers. We have successfully transformed two representative pipeline applications – ferret and dedup – using DStream, and conclude conversion rules. An empirical evaluation shows that the converted ferret performed on par with its Pthreads and TBB counterparts in terms of running time, while the converted dedup is close to 2.56X, 7.05X faster than the Pthreads counterpart and 1.06X, 3.9X faster than the TBB counterpart on 16 and 32 CPUs, respectively.

  14. Some thoughts on the future of probabilistic structural design of nuclear components

    International Nuclear Information System (INIS)

    Stancampiano, P.A.

    1978-01-01

    This paper presents some views on the future role of probabilistic methods in the structural design of nuclear components. The existing deterministic design approach is discussed and compared to the probabilistic approach. Some of the objections to both deterministic and probabilistic design are listed. Extensive research and development activities are required to mature the probabilistic approach sufficiently to make it cost-effective and competitive with current deterministic design practices. The required research activities deal with the development of probabilistic methods, of more realistic causal failure mode models, and of statistical data models. A quasi-probabilistic structural design approach is recommended which accounts for the random error in the design models. (Auth.)

  15. Electronic structure of Ca, Sr, and Ba under pressure.

    Science.gov (United States)

    Animalu, A. O. E.; Heine, V.; Vasvari, B.

    1967-01-01

    Electronic band structure calculations for the fcc phase of Ca, Sr, and Ba over a wide range of atomic volumes under pressure.

  16. Deterministic computation of functional integrals

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.

    1995-09-01

    A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in a complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, and no simplifying assumptions like semi-classical or mean field approximations, collective excitations, introduction of "short-time" propagators, etc., are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by computation of an "ordinary" (Riemannian) integral of low dimension, thus allowing the use of the preferable deterministic algorithms (normally Gaussian quadratures) in computations rather than the traditional stochastic (Monte Carlo) methods which are commonly used for the problem under consideration. The results of applying the method to the computation of the Green function of the Schroedinger equation in imaginary time, as well as the study of some models of Euclidean quantum mechanics, are presented. The comparison with the results of other authors shows that our method gives significant (by an order of magnitude) economy of computer time and memory versus other known methods while providing results with the same or better accuracy. The functional measure of the Gaussian type is considered and some of its particular cases, namely the conditional Wiener measure in quantum statistical mechanics and the functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the
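
    The computational idea, replacing a stochastic estimate by a deterministic quadrature that is exact on a class of polynomials, can be illustrated on an ordinary one-dimensional integral (this is only an analogy; it is not the functional-integral approximation formulas of the record itself):

```python
# Deterministic Gauss-Hermite quadrature vs. Monte Carlo on the integral of
# x^4 * exp(-x^2) over the real line; the exact value is 3*sqrt(pi)/4.
# The 5-point rule is exact because the integrand is a degree-4 polynomial
# times the Gaussian weight.
import numpy as np

exact = 3.0 * np.sqrt(np.pi) / 4.0

nodes, weights = np.polynomial.hermite.hermgauss(5)      # 5-point deterministic rule
quadrature = np.sum(weights * nodes**4)

rng = np.random.default_rng(0)                            # Monte Carlo with N(0, 1/2)
samples = rng.normal(0.0, np.sqrt(0.5), 100_000)
monte_carlo = np.sqrt(np.pi) * np.mean(samples**4)

print(f"exact       = {exact:.10f}")
print(f"quadrature  = {quadrature:.10f}   (5 evaluations)")
print(f"monte carlo = {monte_carlo:.10f}   (100000 samples)")
```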

  17. Testing the robustness of deterministic models of optimal dynamic pricing and lot-sizing for deteriorating items under stochastic conditions

    DEFF Research Database (Denmark)

    Ghoreishi, Maryam

    2018-01-01

    Many models in the field of optimal dynamic pricing and lot-sizing for deteriorating items assume everything is deterministic and develop a differential equation as the core of the analysis. Two prominent examples are the papers by Rajan et al. (Manag Sci 38:240–262, 1992) and Abad (Manag..., we will try to expose the models by Abad (1996) and Rajan et al. (1992) to stochastic inputs; however, we design these stochastic inputs so that they are aligned as closely as possible with the assumptions of those papers. We do our investigation through a numerical test where we test the robustness of the numerical results reported in Rajan et al. (1992) and Abad (1996) in a simulation model. Our numerical results seem to confirm that the results stated in these papers are indeed robust when exposed to stochastic inputs.

  18. Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates

    Science.gov (United States)

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2012-03-27

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.

  19. Acoustic transmittance of an aperiodic deterministic multilayer structure

    International Nuclear Information System (INIS)

    Madrigal-Melchor, J; Enciso-Muñoz, A; Contreras-Solorio, D A

    2013-01-01

    We study theoretically the acoustic transmission of a multilayer structure where the characteristic acoustic impedance follows the values generated by the self-similar sequence called the '1s counting sequence'. The transmission spectrum clearly shows self-similarity characteristics.

  20. Deterministic Model for Rubber-Metal Contact Including the Interaction Between Asperities

    NARCIS (Netherlands)

    Deladi, E.L.; de Rooij, M.B.; Schipper, D.J.

    2005-01-01

    Rubber-metal contact involves relatively large deformations and large real contact areas compared to metal-metal contact. Here, a deterministic model is proposed for the contact between rubber and metal surfaces, which takes into account the interaction between neighboring asperities. In this model,

  1. Probabilistic approach in treatment of deterministic analyses results of severe accidents

    International Nuclear Information System (INIS)

    Krajnc, B.; Mavko, B.

    1996-01-01

    Severe accident sequences resulting in loss of the core geometric integrity have been found to have a small probability of occurrence. Because of their potential consequences to public health and safety, an evaluation of the core degradation progression and the resulting effects on the containment is necessary to determine the probability of a significant release of radioactive materials. This requires assessment of many interrelated phenomena including: steel and zircaloy oxidation, steam spikes, in-vessel debris cooling, potential vessel failure mechanisms, release of core material to the containment, containment pressurization from steam generation, or generation of non-condensable gases or hydrogen burn, and ultimately coolability of degraded core material. To assess the answers from the containment event trees, in the sense of whether a certain phenomenological event would happen or not, plant-specific deterministic analyses should be performed. Due to the fact that there is a large uncertainty in the prediction of severe accident phenomena in Level 2 analyses (containment event trees), a combination of the probabilistic and deterministic approaches should be used. In fact, the results of the deterministic analyses of severe accidents are treated in a probabilistic manner due to the large uncertainty of the results, which is a consequence of a lack of detailed knowledge. This paper discusses the approach used in many IPEs, which assures that the probability assigned to a certain question in the event tree represents the probability that the event will or will not happen and that this probability also includes its uncertainty, which is mainly a result of lack of knowledge. (author)

  2. Deterministic ion beam material adding technology for high-precision optical surfaces.

    Science.gov (United States)

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2013-02-20

    Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. The verification experiments are accomplished on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.

  3. Deterministic dense coding and faithful teleportation with multipartite graph states

    International Nuclear Information System (INIS)

    Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.

    2009-01-01

    We propose schemes to perform the deterministic dense coding and faithful teleportation with multipartite graph states. We also find the sufficient and necessary condition of a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
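
    The viability condition above is a purely linear-algebraic check: graph-state adjacency matrices are binary, so invertibility of the reduced adjacency matrix is naturally tested over GF(2). A minimal sketch of such a check follows (Python); the 2x2 matrices are hypothetical stand-ins for a sender-receiver subgraph, not examples taken from the paper.

      import numpy as np

      def invertible_gf2(matrix):
          """Return True if a square 0/1 matrix is invertible over GF(2)."""
          a = np.array(matrix, dtype=np.uint8) % 2
          n = a.shape[0]
          if a.shape != (n, n):
              return False
          for col in range(n):
              pivot_rows = np.nonzero(a[col:, col])[0]
              if pivot_rows.size == 0:
                  return False                      # no pivot -> rank deficient
              pivot = col + pivot_rows[0]
              a[[col, pivot]] = a[[pivot, col]]     # swap pivot row into place
              below = np.nonzero(a[col + 1:, col])[0] + col + 1
              a[below] ^= a[col]                    # eliminate (XOR = addition mod 2)
          return True

      # Hypothetical reduced adjacency matrices between two senders and two receivers.
      print(invertible_gf2([[1, 1], [0, 1]]))   # True  -> scheme viable
      print(invertible_gf2([[1, 1], [1, 1]]))   # False -> not viable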

  4. Seismic zonation of Bucharest by using a deterministic approach of numerical modeling

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Panza, G.F.; Cioflan, C.; Radulian, M.; Marmureanu, Gh.

    2002-01-01

    Bucharest city represents the largest European center (about 2 million inhabitants and 230 km² of constructed area) periodically subjected to the strong intermediate-depth earthquakes originating in Vrancea region. The statistics indicate a recurrence interval of 25 years for Mw ≥ 7.0 Vrancea events and a significant earthquake hazard for the city location with a 50% chance for an event of Mw > 7.6 every 50 years. The strongest Vrancea events of the last century occurred in 1908 (Mw = 7.1), 1940 (Mw = 7.7), 1977 (Mw = 7.4) and 1986 (Mw = 7.1) and inflicted heavy damage and casualties in Bucharest. Under these circumstances, the ground motion evaluation for the city area represents an essential step toward the mitigation of the local seismic risk. This paper presents the new insights coming from direct instrumental observation and interpretation of the local effects as well as realistic numerical modeling that update and improve the input data necessary for a detailed microzoning map of the Romanian capital. Our results show that the synthetic local hazard distribution we obtain with the deterministic approach supplies a realistic estimation of the seismic input, highly sensitive not only to the local conditions, but also to the source and the path structure parameters. The complex hybrid method we use offers the chance to merge the different specific accumulated information into reasonably well constrained scenarios for a level C realistic microzonation of the Bucharest area to be used to mitigate the effects of future strong events originating in Vrancea region. (authors)

  5. Automatic design of deterministic and non-halting membrane systems by tuning syntactical ingredients.

    Science.gov (United States)

    Zhang, Gexiang; Rong, Haina; Ou, Zhu; Pérez-Jiménez, Mario J; Gheorghe, Marian

    2014-09-01

    To solve the programmability issue of membrane computing models, the automatic design of membrane systems is a newly initiated and promising research direction. In this paper, we propose an automatic design method, Permutation Penalty Genetic Algorithm (PPGA), for a deterministic and non-halting membrane system by tuning membrane structures, initial objects and evolution rules. The main ideas of PPGA are the introduction of the permutation encoding technique for a membrane system, a penalty function evaluation approach for a candidate membrane system and a genetic algorithm for evolving a population of membrane systems toward a successful one fulfilling a given computational task. Experimental results show that PPGA can successfully accomplish the automatic design of a cell-like membrane system for computing the square of n ( n ≥ 1 is a natural number) and can find the minimal membrane systems with respect to their membrane structures, alphabet, initial objects, and evolution rules for fulfilling the given task. We also provide the guidelines on how to set the parameters of PPGA.

  6. Deterministic Generation of Quantum State Transfer Between Spatially Separated Single Molecule Magnets

    International Nuclear Information System (INIS)

    Song Peijun; Lue Xinyou; Huang Pei; Hao Xiangying; Yang Xiaoxue

    2010-01-01

    We propose a new scheme for realizing deterministic quantum state transfer (QST) between two spatially separated single molecule magnets (SMMs) within the framework of cavity quantum electrodynamics (QED). In the present scheme, two SMMs are trapped in two spatially separated optical cavities coupled by an optical fiber. Through strict numerical simulation, we demonstrate that our scheme is robust with respect to the SMMs' spontaneous decay and fiber loss under the conditions of dispersive SMM-field interaction and strong cavity-fiber coupling. In addition, we also discuss the influence of photon leakage out of the cavities and show that our proposal is good enough to demonstrate the generation of QST with high fidelity utilizing current experimental technology. The present investigation provides research opportunities for realizing QST between solid-state qubits and may result in a substantial impact on the progress of solid-state-based quantum communication networks. (general)

  7. Severe deterministic effects of external exposure and intake of radioactive material: basis for emergency response criteria

    International Nuclear Information System (INIS)

    Kutkov, V; Buglova, E; McKenna, T

    2011-01-01

    Lessons learned from responses to past events have shown that more guidance is needed for the response to radiation emergencies (in this context, a 'radiation emergency' means the same as a 'nuclear or radiological emergency') which could lead to severe deterministic effects. The International Atomic Energy Agency (IAEA) requirements for preparedness and response for a radiation emergency, inter alia, require that arrangements shall be made to prevent, to a practicable extent, severe deterministic effects and to provide the appropriate specialised treatment for these effects. These requirements apply to all exposure pathways, both internal and external, and all reasonable scenarios, to include those resulting from malicious acts (e.g. dirty bombs). This paper briefly describes the approach used to develop the basis for emergency response criteria for protective actions to prevent severe deterministic effects in the case of external exposure and intake of radioactive material.

  8. Seismic hazard in Romania associated to Vrancea subcrustal source Deterministic evaluation

    CERN Document Server

    Radulian, M; Moldoveanu, C L; Panza, G F; Vaccari, F

    2002-01-01

    Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes to show how efficient the numerical synthesis is in predicting realistic ground motion, and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful to compute seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the World because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal and the mo...

  9. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  10. Invasion fitness for gene-culture co-evolution in family-structured populations and an application to cumulative culture under vertical transmission.

    Science.gov (United States)

    Mullon, Charles; Lehmann, Laurent

    2017-08-01

    Human evolution depends on the co-evolution between genetically determined behaviors and socially transmitted information. Although vertical transmission of cultural information from parent to offspring is common in hominins, its effects on cumulative cultural evolution are not fully understood. Here, we investigate gene-culture co-evolution in a family-structured population by studying the invasion fitness of a mutant allele that influences a deterministic level of cultural information (e.g., amount of knowledge or skill) to which diploid carriers of the mutant are exposed in subsequent generations. We show that the selection gradient on such a mutant, and the concomitant level of cultural information it generates, can be evaluated analytically under the assumption that the cultural dynamic has a single attractor point, thereby making gene-culture co-evolution in family-structured populations with multigenerational effects mathematically tractable. We apply our result to study how genetically determined phenotypes of individual and social learning co-evolve with the level of adaptive information they generate under vertical transmission. We find that vertical transmission increases adaptive information due to kin selection effects, but when information is transmitted as efficiently between family members as between unrelated individuals, this increase is moderate in diploids. By contrast, we show that the way resource allocation into learning trades off with allocation into reproduction (the "learning-reproduction trade-off") significantly influences levels of adaptive information. We also show that vertical transmission prevents evolutionary branching and may therefore play a qualitative role in gene-culture co-evolutionary dynamics. More generally, our analysis of selection suggests that vertical transmission can significantly increase levels of adaptive information under the biologically plausible condition that information transmission between relatives is

  11. On the deterministic and stochastic use of hydrologic models

    Science.gov (United States)

    Farmer, William H.; Vogel, Richard M.

    2016-01-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
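
    A minimal numerical sketch of the point made above - that ignoring calibration residuals biases the distributional properties of simulated responses, while resampling and reintroducing them restores the spread - is given below. The linear "watershed model", the gamma-distributed driver and all numbers are illustrative assumptions, not the models or data used in the study.

      import numpy as np

      rng = np.random.default_rng(42)

      # Toy "observations" (illustrative only): a response driven by a forcing
      # variable plus noise that the deterministic model does not capture.
      driver = rng.gamma(shape=2.0, scale=10.0, size=1000)
      observed = 0.6 * driver + rng.normal(0.0, 5.0, size=1000)

      # Deterministic use: a calibrated linear fit, taken at face value.
      slope, intercept = np.polyfit(driver, observed, 1)
      simulated = slope * driver + intercept
      residuals = observed - simulated

      # Stochastic use: resample calibration residuals and add them back,
      # producing an ensemble whose spread mimics the observations.
      ensemble = simulated[None, :] + rng.choice(residuals, size=(100, driver.size))

      print("observed std:     ", observed.std())
      print("deterministic std:", simulated.std())              # too smooth
      print("stochastic std:   ", ensemble.std(axis=1).mean())  # closer to observed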

  12. Using MCBEND for neutron or gamma-ray deterministic calculations

    Science.gov (United States)

    Geoff, Dobson; Adam, Bird; Brendan, Tollit; Paul, Smith

    2017-09-01

    MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with Splitting/Russian Roulette. MCBEND has a well established automated tool to generate this importance map, commonly referred to as the MAGIC module using a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.

  13. Exponential power spectra, deterministic chaos and Lorentzian pulses in plasma edge dynamics

    International Nuclear Information System (INIS)

    Maggs, J E; Morales, G J

    2012-01-01

    Exponential spectra have been observed in the edges of tokamaks, stellarators, helical devices and linear machines. The observation of exponential power spectra is significant because such a spectral character has been closely associated with the phenomenon of deterministic chaos by the nonlinear dynamics community. The proximate cause of exponential power spectra in both magnetized plasma edges and nonlinear dynamics models is the occurrence of Lorentzian pulses in the time signals of fluctuations. Lorentzian pulses are produced by chaotic behavior in the separatrix regions of plasma E × B flow fields or the limit cycle regions of nonlinear models. Chaotic advection, driven by the potential fields of drift waves in plasmas, results in transport. The observation of exponential power spectra and Lorentzian pulses suggests that fluctuations and transport at the edge of magnetized plasmas arise from deterministic, rather than stochastic, dynamics. (paper)
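
    The link asserted above between Lorentzian pulses and exponential power spectra can be reproduced numerically: the Fourier transform of a Lorentzian of width tau decays as exp(-2*pi*tau*|f|), so its power spectrum is a straight line on a semilog plot with slope -4*pi*tau. The sketch below checks this for a single synthetic pulse; the pulse width and sampling are arbitrary choices, not plasma parameters.

      import numpy as np

      # Synthetic Lorentzian pulse of width tau in a long time record (illustrative values).
      dt, n, tau = 0.01, 2**14, 0.5
      t = (np.arange(n) - n // 2) * dt
      signal = 1.0 / (1.0 + (t / tau) ** 2)

      # Power spectrum: |FFT|^2 at positive frequencies.
      freq = np.fft.rfftfreq(n, d=dt)
      power = np.abs(np.fft.rfft(signal)) ** 2

      # An exponential spectrum is a straight line in semilog coordinates;
      # the analytic slope for this pulse is -4*pi*tau per unit frequency.
      band = (freq > 0.1) & (freq < 2.0)
      slope = np.polyfit(freq[band], np.log(power[band]), 1)[0]
      print("fitted slope:", slope, " expected:", -4.0 * np.pi * tau)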

  14. 2D deterministic radiation transport with the discontinuous finite element method

    International Nuclear Information System (INIS)

    Kershaw, D.; Harte, J.

    1993-01-01

    This report provides a complete description of the analytic and discretized equations for 2D deterministic radiation transport. This computational model has been checked against a wide variety of analytic test problems and found to give excellent results. We make extensive use of the discontinuous finite element method

  15. Probabilistic evaluation of fuel element performance by the combined use of a fast running simplistic and a detailed deterministic fuel performance code

    International Nuclear Information System (INIS)

    Misfeldt, I.

    1980-01-01

    A comprehensive evaluation of fuel element performance requires a probabilistic fuel code supported by a well bench-marked deterministic code. This paper presents an analysis of a SGHWR ramp experiment, where the probabilistic fuel code FRP is utilized in combination with the deterministic fuel models FFRS and SLEUTH/SEER. The statistical methods employed in FRP are Monte Carlo simulation or a low-order Taylor approximation. The fast-running simplistic fuel code FFRS is used for the deterministic simulations, whereas simulations with SLEUTH/SEER are used to verify the predictions of FFRS. The ramp test was performed with a SGHWR fuel element, where 9 of the 36 fuel pins failed. There seemed to be good agreement between the deterministic simulations and the experiment, but the statistical evaluation shows that the uncertainty on the important performance parameters is too large for this ''nice'' result. The analysis does therefore indicate a discrepancy between the experiment and the deterministic code predictions. Possible explanations for this disagreement are discussed. (author)

  16. Evaluation of Deterministic and Stochastic Components of Traffic Counts

    Directory of Open Access Journals (Sweden)

    Ivan Bošnjak

    2012-10-01

    Full Text Available Traffic counts, or statistical evidence of the traffic process, are often a characteristic of time-series data. In this paper the fundamental problem of estimating the deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
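
    As a minimal illustration of separating deterministic (trend plus seasonal) components from the stochastic remainder of a traffic count series, the sketch below detrends synthetic daily counts with a least-squares line and removes weekday means; whatever is left is the stochastic component that would be handed to an ARIMA or state-space model. The synthetic data and the simple decomposition are assumptions made for illustration, not the paper's procedure.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic daily traffic counts (illustrative): linear trend + weekly pattern + noise.
      days = np.arange(364)
      weekly = np.tile([1000, 1050, 1030, 1060, 1200, 700, 600], 52)
      counts = 2.0 * days + weekly + rng.normal(0, 40, days.size)

      # Deterministic components: least-squares trend, then weekday (seasonal) means.
      trend = np.polyval(np.polyfit(days, counts, 1), days)
      detrended = counts - trend
      season = np.array([detrended[days % 7 == d].mean() for d in range(7)])[days % 7]

      # Whatever remains is the stochastic component (candidate for ARIMA modelling).
      residual = detrended - season
      print("residual std:", residual.std())   # close to the injected noise level of 40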

  17. Human Resources Readiness as TSO for Deterministic Safety Analysis on the First NPP in Indonesia

    International Nuclear Information System (INIS)

    Sony Tjahyani, D. T.

    2010-01-01

    Government regulation no. 43 of 2006 states that the preliminary safety analysis report and the final safety analysis report are among the requirements for construction and operation licensing of commercial power reactors (NPPs). The purpose of the safety analysis report is to confirm the adequacy and efficiency of provisions within the defence in depth of the nuclear reactor. Deterministic analysis is used in the safety analysis report. One of the TSO tasks is to evaluate this report at the request of the operator or the regulatory body. This paper discusses human resources readiness as TSO for deterministic safety analysis of the first NPP in Indonesia. The assessment is done by comparing the analysis steps of SS-23 and SS-30 with the current human resources status of BATAN. The assessment results showed that the human resources for deterministic safety analysis are ready as TSO, especially to review the preliminary safety analysis report and to revise the final safety analysis report in licensing of the first NPP in Indonesia. However, preparing the safety analysis report still requires many competent human resources. (author)

  18. About the Possibility of Creation of a Deterministic Unified Mechanics

    International Nuclear Information System (INIS)

    Khomyakov, G.K.

    2005-01-01

    The possibility of creation of a unified deterministic scheme of classical and quantum mechanics, allowing to preserve their achievements is discussed. It is shown that the canonical system of ordinary differential equation of Hamilton classical mechanics can be added with the vector system of ordinary differential equation for the variables of equations. The interpretational problems of quantum mechanics are considered

  19. CALTRANS: A parallel, deterministic, 3D neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation has culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  20. Deterministic Single-Photon Source for Distributed Quantum Networking

    International Nuclear Information System (INIS)

    Kuhn, Axel; Hennrich, Markus; Rempe, Gerhard

    2002-01-01

    A sequence of single photons is emitted on demand from a single three-level atom strongly coupled to a high-finesse optical cavity. The photons are generated by an adiabatically driven stimulated Raman transition between two atomic ground states, with the vacuum field of the cavity stimulating one branch of the transition, and laser pulses deterministically driving the other branch. This process is unitary and therefore intrinsically reversible, which is essential for quantum communication and networking, and the photons should be appropriate for all-optical quantum information processing

  1. The deterministic optical alignment of the HERMES spectrograph

    Science.gov (United States)

    Gers, Luke; Staszak, Nicholas

    2014-07-01

    The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.

  2. Development and application of a deterministic-realistic hybrid methodology for LOCA licensing analysis

    International Nuclear Information System (INIS)

    Liang, Thomas K.S.; Chou, Ling-Yao; Zhang, Zhongwei; Hsueh, Hsiang-Yu; Lee, Min

    2011-01-01

    Highlights: → A new LOCA licensing methodology (DRHM, deterministic-realistic hybrid methodology) was developed. → DRHM involves conservative Appendix K physical models and statistical treatment of plant status uncertainties. → DRHM can generate 50-100 K PCT margin as compared to a traditional Appendix K methodology. - Abstract: It is well recognized that a realistic LOCA analysis with uncertainty quantification can generate greater safety margin as compared with classical conservative LOCA analysis using Appendix K evaluation models. The associated margin can be more than 200 K. To quantify uncertainty in BELOCA analysis, generally there are two kinds of uncertainties required to be identified and quantified, which involve model uncertainties and plant status uncertainties. Particularly, it will take huge effort to systematically quantify individual model uncertainty of a best estimate LOCA code, such as RELAP5 and TRAC. Instead of applying a full ranged BELOCA methodology to cover both model and plant status uncertainties, a deterministic-realistic hybrid methodology (DRHM) was developed to support LOCA licensing analysis. Regarding the DRHM methodology, Appendix K deterministic evaluation models are adopted to ensure model conservatism, while CSAU methodology is applied to quantify the effect of plant status uncertainty on PCT calculation. Generally, DRHM methodology can generate about 80-100 K margin on PCT as compared to Appendix K bounding state LOCA analysis.

  3. Primality deterministic and primality probabilistic tests

    Directory of Open Access Journals (Sweden)

    Alfredo Rizzi

    2007-10-01

    Full Text Available In this paper the author comments on the importance of prime numbers in mathematics and in cryptography. He recalls the very important research of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that give prime numbers; among them, Mersenne's primes have interesting properties. There are also many conjectures that still have to be proved or rejected. Deterministic primality tests are algorithms that establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computing time would be very long. Probabilistic primality tests allow one to verify the null hypothesis that the number is prime. The paper comments on the most important statistical tests.
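
    The distinction drawn above can be made concrete with two standard tests: deterministic trial division, which is exact but far too slow for cryptographic sizes, and the probabilistic Miller-Rabin test, which only verifies the hypothesis "the number is prime" up to a controllable error probability. The sketch below shows generic textbook implementations, not code from the paper.

      import random

      def is_prime_trial_division(n):
          """Deterministic test: try all divisors up to sqrt(n). Exact but slow."""
          if n < 2:
              return False
          d = 2
          while d * d <= n:
              if n % d == 0:
                  return False
              d += 1
          return True

      def is_probable_prime(n, rounds=20):
          """Miller-Rabin: may wrongly accept a composite with probability <= 4**-rounds."""
          if n < 2:
              return False
          if n in (2, 3):
              return True
          if n % 2 == 0:
              return False
          s, d = 0, n - 1                 # write n - 1 as 2^s * d with d odd
          while d % 2 == 0:
              s, d = s + 1, d // 2
          for _ in range(rounds):
              a = random.randrange(2, n - 1)
              x = pow(a, d, n)
              if x in (1, n - 1):
                  continue
              for _ in range(s - 1):
                  x = pow(x, 2, n)
                  if x == n - 1:
                      break
              else:
                  return False            # a is a witness of compositeness
          return True

      print(is_prime_trial_division(2**13 - 1))   # True: a Mersenne prime
      print(is_probable_prime(2**61 - 1))         # True, fast even for large inputs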

  4. Identifying deterministic signals in simulated gravitational wave data: algorithmic complexity and the surrogate data method

    International Nuclear Information System (INIS)

    Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David

    2006-01-01

    We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
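
    The surrogate data method itself is straightforward to sketch: generate surrogates that preserve the linear properties (power spectrum) of the data but destroy any deterministic structure, then compare a discriminating statistic on the data against its distribution over the surrogates. The example below uses phase-randomized surrogates and a simple time-reversal-asymmetry statistic as a stand-in for the algorithmic-complexity estimator used in the paper; the noisy logistic-map signal is an arbitrary test case, not simulated GW data.

      import numpy as np

      rng = np.random.default_rng(0)

      def phase_randomized_surrogate(x):
          """Surrogate with the same power spectrum but randomized Fourier phases."""
          spectrum = np.fft.rfft(x)
          phases = rng.uniform(0, 2 * np.pi, spectrum.size)
          phases[0] = 0.0        # keep the mean real
          phases[-1] = 0.0       # and the Nyquist term, for even-length signals
          return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=x.size)

      def statistic(x):
          """Time-reversal asymmetry: a simple nonlinearity measure, used here as a
          stand-in for the algorithmic-complexity statistic of the paper."""
          return np.mean((x[2:] - x[:-2]) ** 3)

      # "Data": a deterministic (logistic-map) signal buried in white noise (illustrative).
      x = np.empty(2048); x[0] = 0.4
      for i in range(1, x.size):
          x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])
      data = x + rng.normal(0, 0.2, x.size)

      obs = statistic(data)
      null = np.array([statistic(phase_randomized_surrogate(data)) for _ in range(99)])
      rank = int(np.sum(abs(obs) > abs(null)))
      print(f"observed statistic outranks {rank}/99 surrogates")  # high rank -> determinism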

  5. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.

    Science.gov (United States)

    Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O

    2006-03-01

    The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables. This is to generate counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage the experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
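
    The two frameworks contrasted above can be sketched in a few lines for a single birth-death reaction system: the reaction-rate ODE gives the deterministic mean behaviour, while Gillespie's direct method produces one realization of the chemical master equation. The sketch below is written in Python rather than MATLAB and is an illustrative toy system, not the toolbox described in the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative birth-death process (not a pathway from the paper):
      # 0 --k1--> X,  X --k2--> 0.
      k1, k2, t_end = 10.0, 0.1, 100.0

      # Deterministic reaction-rate equation: dx/dt = k1 - k2 * x.
      ode = solve_ivp(lambda t, x: k1 - k2 * x, (0, t_end), [0.0])
      print("ODE steady state:", ode.y[0, -1])          # approaches k1/k2 = 100

      # One stochastic realization of the chemical master equation (Gillespie SSA).
      rng = np.random.default_rng(3)
      t, x, trajectory = 0.0, 0, []
      while t < t_end:
          rates = np.array([k1, k2 * x])
          total = rates.sum()
          t += rng.exponential(1.0 / total)             # time to next reaction
          x += 1 if rng.uniform() < rates[0] / total else -1
          trajectory.append((t, x))
      print("SSA final count:", trajectory[-1][1])      # fluctuates around 100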

  6. A combined deterministic and probabilistic procedure for safety assessment of components with cracks - Handbook.

    Energy Technology Data Exchange (ETDEWEB)

    Dillstroem, Peter; Bergman, Mats; Brickstad, Bjoern; Weilin Zang; Sattari-Far, Iradj; Andersson, Peder; Sund, Goeran; Dahlberg, Lars; Nilsson, Fred (Inspecta Technology AB, Stockholm (Sweden))

    2008-07-01

    SSM has supported research work for the further development of a previously developed procedure/handbook (SKI Report 99:49) for assessment of detected cracks and tolerance for defect analysis. During the operative use of the handbook, needs were identified to update the deterministic part of the procedure and to introduce a new probabilistic flaw evaluation procedure. Another identified need was a better description of the theoretical basis of the computer program. The principal aim of the project has been to update the deterministic part of the recently developed procedure and to introduce a new probabilistic flaw evaluation procedure. Other objectives of the project have been to validate the conservatism of the procedure, make the procedure well defined and easy to use, and make the handbook that documents the procedure as complete as possible. The procedure/handbook and the computer program ProSACC, Probabilistic Safety Assessment of Components with Cracks, have been extensively revised within this project. The major differences compared to the last revision are within the following areas: It is now possible to deal with a combination of deterministic and probabilistic data. It is possible to include J-controlled stable crack growth. The appendices on material data to be used for nuclear applications and on residual stresses are revised. A new deterministic safety evaluation system is included. The conservatism in the method for evaluation of the secondary stresses for ductile materials is reduced. A new geometry, a circular bar with a circumferential surface crack, has been introduced. The results of this project will be of use to SSM in safety assessments of components with cracks and in assessments of the interval between the inspections of components in nuclear power plants.

  7. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    Science.gov (United States)

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels on the Poiseuille flow have been presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses that are available in literature. The best-fit results agree with each other and with experimental data thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.

  8. Response-surface models for deterministic effects of localized irradiation of the skin by discrete {beta}/{gamma} -emitting sources

    Energy Technology Data Exchange (ETDEWEB)

    Scott, B.R.

    1995-12-01

    Individuals who work at nuclear reactor facilities can be at risk for deterministic effects in the skin from exposure to discrete β- and γ-emitting (βγE) sources (e.g., βγE hot particles) on the skin or clothing. Deterministic effects are non-cancer effects that have a threshold and increase in severity as dose increases (e.g., ulcer in skin). Hot βγE particles are 60Co- or nuclear fuel-derived particles with diameters > 10 μm and < 3 mm and contain at least 3.7 kBq (0.1 μCi) of radioactivity. For such βγE sources on the skin, it is the beta component of the dose that is most important. To develop exposure limitation systems that adequately control exposure of workers to discrete βγE sources, models are needed for evaluating the risk of deterministic effects of localized β irradiation of the skin. The purpose of this study was to develop dose-rate and irradiated-area dependent, response-surface models for evaluating risks of significant deterministic effects of localized irradiation of the skin by discrete βγE sources and to use modeling results to recommend approaches to limiting occupational exposure to such sources. The significance of the research results is as follows: (1) response-surface models are now available for evaluating the risk of specific deterministic effects of localized irradiation of the skin; (2) modeling results have been used to recommend approaches to limiting occupational exposure of workers to β radiation from βγE sources on the skin or on clothing; and (3) the generic irradiated-volume, weighting-factor approach to limiting exposure can be applied to other organs including the eye, the ear, and organs of the respiratory or gastrointestinal tract and can be used for both deterministic and stochastic effects.

  9. Response-surface models for deterministic effects of localized irradiation of the skin by discrete β/γ -emitting sources

    International Nuclear Information System (INIS)

    Scott, B.R.

    1995-01-01

    Individuals who work at nuclear reactor facilities can be at risk for deterministic effects in the skin from exposure to discrete β- and γ-emitting (βγE) sources (e.g., βγE hot particles) on the skin or clothing. Deterministic effects are non-cancer effects that have a threshold and increase in severity as dose increases (e.g., ulcer in skin). Hot βγE particles are 60Co- or nuclear fuel-derived particles with diameters > 10 μm and < 3 mm and contain at least 3.7 kBq (0.1 μCi) of radioactivity. For such βγE sources on the skin, it is the beta component of the dose that is most important. To develop exposure limitation systems that adequately control exposure of workers to discrete βγE sources, models are needed for evaluating the risk of deterministic effects of localized β irradiation of the skin. The purpose of this study was to develop dose-rate and irradiated-area dependent, response-surface models for evaluating risks of significant deterministic effects of localized irradiation of the skin by discrete βγE sources and to use modeling results to recommend approaches to limiting occupational exposure to such sources. The significance of the research results is as follows: (1) response-surface models are now available for evaluating the risk of specific deterministic effects of localized irradiation of the skin; (2) modeling results have been used to recommend approaches to limiting occupational exposure of workers to β radiation from βγE sources on the skin or on clothing; and (3) the generic irradiated-volume, weighting-factor approach to limiting exposure can be applied to other organs including the eye, the ear, and organs of the respiratory or gastrointestinal tract and can be used for both deterministic and stochastic effects.

  10. Field-free deterministic ultrafast creation of magnetic skyrmions by spin-orbit torques

    Science.gov (United States)

    Büttner, Felix; Lemesh, Ivan; Schneider, Michael; Pfau, Bastian; Günther, Christian M.; Hessing, Piet; Geilhufe, Jan; Caretta, Lucas; Engel, Dieter; Krüger, Benjamin; Viefhaus, Jens; Eisebitt, Stefan; Beach, Geoffrey S. D.

    2017-11-01

    Magnetic skyrmions are stabilized by a combination of external magnetic fields, stray field energies, higher-order exchange interactions and the Dzyaloshinskii-Moriya interaction (DMI). The last favours homochiral skyrmions, whose motion is driven by spin-orbit torques and is deterministic, which makes systems with a large DMI relevant for applications. Asymmetric multilayers of non-magnetic heavy metals with strong spin-orbit interactions and transition-metal ferromagnetic layers provide a large and tunable DMI. Also, the non-magnetic heavy metal layer can inject a vertical spin current with transverse spin polarization into the ferromagnetic layer via the spin Hall effect. This leads to torques that can be used to switch the magnetization completely in out-of-plane magnetized ferromagnetic elements, but the switching is deterministic only in the presence of a symmetry-breaking in-plane field. Although spin-orbit torques led to domain nucleation in continuous films and to stochastic nucleation of skyrmions in magnetic tracks, no practical means to create individual skyrmions controllably in an integrated device design at a selected position has been reported yet. Here we demonstrate that sub-nanosecond spin-orbit torque pulses can generate single skyrmions at custom-defined positions in a magnetic racetrack deterministically using the same current path as used for the shifting operation. The effect of the DMI implies that no external in-plane magnetic fields are needed for this aim. This implementation exploits a defect, such as a constriction in the magnetic track, that can serve as a skyrmion generator. The concept is applicable to any track geometry, including three-dimensional designs.

  11. Electronic structure and optical properties of AlN under high pressure

    International Nuclear Information System (INIS)

    Li Zetao; Dang Suihu; Li Chunxia

    2011-01-01

    In this paper we have calculated the electronic structure and optical properties of wurtzite-structure AlN under different high pressures with the generalized gradient approximation (GGA). The total energy, density of states, energy band structure, and optical absorption and reflection properties under high pressure are calculated. By comparing the changes of the energy band structure, we obtained a phase-transition pressure of 16.7 GPa for AlN, at which the direct band structure transforms into an indirect one. Meanwhile, according to the density of states distribution and the energy band structure, we analyzed the optical properties of AlN under high pressure; the results showed that the absorption spectra moved from low energy to high energy. (authors)

  12. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is
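
    For orientation, the sketch below implements the classic incremental construction of a minimal acyclic deterministic finite automaton for words supplied in sorted order (the Daciuk et al. construction). It illustrates the register-of-states idea that this family of incremental algorithms relies on, but it is not the new recursive algorithm presented in the chapter.

      class State:
          def __init__(self):
              self.edges = {}      # label -> State; dicts keep insertion order
              self.final = False

          def key(self):
              # Right-language signature: finality plus (label, target-identity) pairs.
              return (self.final, tuple((c, id(s)) for c, s in self.edges.items()))


      def build_minimal_dawg(sorted_words):
          """Incrementally build a minimal acyclic DFA from strictly sorted words
          (classic sorted-input algorithm, not the chapter's new variant)."""
          root, register = State(), {}

          def replace_or_register(state):
              label = next(reversed(state.edges))          # most recently added edge
              child = state.edges[label]
              if child.edges:
                  replace_or_register(child)
              key = child.key()
              if key in register:
                  state.edges[label] = register[key]       # reuse an equivalent state
              else:
                  register[key] = child

          previous = ""
          for word in sorted_words:
              assert word > previous, "input words must be strictly increasing"
              # Walk the longest prefix of the word that is already in the automaton.
              state, i = root, 0
              while i < len(word) and word[i] in state.edges:
                  state = state.edges[word[i]]
                  i += 1
              if state.edges:
                  replace_or_register(state)               # minimize the previous suffix
              for ch in word[i:]:                          # append the new suffix
                  state.edges[ch] = State()
                  state = state.edges[ch]
              state.final = True
              previous = word
          if root.edges:
              replace_or_register(root)
          return root

      dawg = build_minimal_dawg(["tap", "taps", "top", "tops"])   # 5 states in total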

  13. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    Science.gov (United States)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODEs) as the elementary part of the system. To perform analyses, scenes of study are thus generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen chaotic in order to ensure sensitivity to initial conditions, that is, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but modifying its parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons
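
    A minimal version of the "scene of study" described above can be generated by integrating a small ensemble of identical Rössler oscillators and aggregating them by spatial averaging, as in the sketch below. The standard parameter values (a = b = 0.2, c = 5.7), the ensemble size and the observed variable are illustrative choices, not those of the study.

      import numpy as np
      from scipy.integrate import solve_ivp

      def roessler(t, state, a=0.2, b=0.2, c=5.7):
          # Standard Rössler equations with commonly used chaotic parameters.
          x, y, z = state
          return [-y - z, x + a * y, b + z * (x - c)]

      rng = np.random.default_rng(7)
      t_eval = np.linspace(0, 200, 4000)

      # Ensemble of identical, independent oscillators with different initial
      # conditions (an illustrative "scene of study"), observed through variable x.
      ensemble = np.array([
          solve_ivp(roessler, (0, 200), rng.uniform(-5, 5, 3), t_eval=t_eval).y[0]
          for _ in range(8)
      ])

      aggregated = ensemble.mean(axis=0)   # spatial averaging over the scene
      print("single-oscillator std:", ensemble[0].std())
      print("aggregated-signal std:", aggregated.std())  # smoothed when unsynchronized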

  14. Using the deterministic factor systems in the analysis of return on ...

    African Journals Online (AJOL)

    Using the deterministic factor systems in the analysis of return on equity. ... or equal the profitability of bank deposits, the business of the organization is not efficient. ... Application of quantitative and qualitative indicators in the analysis allows to ...

  15. Performance for the hybrid method using stochastic and deterministic searching for shape optimization of electromagnetic devices

    International Nuclear Information System (INIS)

    Yokose, Yoshio; Noguchi, So; Yamashita, Hideo

    2002-01-01

    Stochastic and deterministic methods are used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are used as the stochastic method in multivariable designs, while the deterministic method uses the gradient method, which applies the sensitivity of the objective function. These two techniques have benefits and faults. In this paper, the characteristics of those techniques are described. Then, a technique in which the two methods are used together is evaluated. Next, the results of the comparison obtained by applying each method to electromagnetic devices are described. (Author)

  16. A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT

    International Nuclear Information System (INIS)

    S. GOLUOGLU, C. BENTLEY, R. DEMEGLIO, M. DUNN, K. NORTON, R. PEVEY, I. SUSLOV AND H.L. DODDS

    1998-01-01

    A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. Columnwise rod movement can also be modeled. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.

  17. Using MCBEND for neutron or gamma-ray deterministic calculations

    Directory of Open Access Journals (Sweden)

    Geoff Dobson

    2017-01-01

    Full Text Available MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler’s ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with Splitting/Russian Roulette. MCBEND has a well established automated tool to generate this importance map, commonly referred to as the MAGIC module using a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.

  18. Application of deterministic and probabilistic methods in replacement of nuclear systems

    International Nuclear Information System (INIS)

    Vianna Filho, Alfredo Marques

    2007-01-01

    The economic equipment replacement problem is one of the oldest questions in Production Engineering. On the one hand, new equipment is more attractive given its better performance, better reliability, lower maintenance cost, etc. New equipment, however, requires a higher initial investment and thus a higher opportunity cost, and imposes special training of the labor force. On the other hand, old equipment represents the other way around, with lower performance, lower reliability and especially higher maintenance costs, but in contrast having lower financial, insurance, and opportunity costs. The weighting of all these costs can be made with the various methods presented. The aim of this paper is to discuss deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problems will be examined: substitution imposed by wear and substitution imposed by failures. In order to solve the problem of nuclear system substitution imposed by wear, deterministic methods are discussed. In order to solve the problem of nuclear system substitution imposed by failures, probabilistic methods are discussed. (author)

  19. Implicational Scaling of Reading Comprehension Construct: Is it Deterministic or Probabilistic?

    Directory of Open Access Journals (Sweden)

    Parisa Daftarifard

    2016-05-01

    In English as a Second Language teaching and testing situations, it is common to infer a learner's reading ability from his or her total score on a reading test. This assumes the unidimensional and reproducible nature of reading items. However, little research has been conducted to probe the issue through psychometric analyses. In the present study, the IELTS exemplar module C (1994) was administered to 503 Iranian students of various reading comprehension ability levels. Both the deterministic and probabilistic psychometric models of unidimensionality were employed to examine the plausible existence of implicational scaling among reading items in the mentioned reading test. Based on the results, it was concluded that the reading data in this study did not show a deterministic unidimensional scale (Guttman scaling); rather, they revealed a probabilistic one (Rasch model). As the person map of the measures failed to show a meaningful hierarchical order for the items, these results call into question the assumption of implicational scaling that is normally practiced in scoring reading items.
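
    The deterministic (Guttman) side of the comparison above is usually summarized by a coefficient of reproducibility: order items by difficulty, build the ideal response pattern implied by each person's total score, and count deviations. The sketch below computes the Goodenough-Edwards version of this coefficient for a small hypothetical 0/1 response matrix; it is not the IELTS data or the exact procedure of the study.

      import numpy as np

      def reproducibility(responses):
          """Goodenough-Edwards coefficient of reproducibility for a 0/1 matrix
          (rows = persons, columns = items). Values around 0.9 or higher are
          conventionally taken to support a deterministic (Guttman) scale."""
          responses = np.asarray(responses)
          # Order items from easiest to hardest by proportion correct.
          order = np.argsort(-responses.mean(axis=0))
          data = responses[:, order]
          scores = data.sum(axis=1)
          # Ideal Guttman pattern for a person with score k: k ones, then zeros.
          ideal = (np.arange(data.shape[1]) < scores[:, None]).astype(int)
          errors = np.abs(data - ideal).sum()
          return 1.0 - errors / data.size

      # Hypothetical reading-item responses (not the IELTS data from the study).
      items = [[1, 1, 1, 0, 0],
               [1, 1, 0, 1, 0],
               [1, 1, 1, 1, 0],
               [1, 0, 0, 0, 0],
               [1, 1, 1, 1, 1]]
      print("reproducibility:", reproducibility(items))   # 0.92 for this toy matrix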

  20. Optimal power flow: a bibliographic survey I. Formulations and deterministic methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [University of Jyvaskyla, Department of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey (this article) provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  1. Deterministic SLIR model for tuberculosis disease mapping

    Science.gov (United States)

    Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat

    2017-11-01

    Tuberculosis (TB) occurs worldwide. It can be transmitted to others directly through the air when persons with active TB sneeze, cough or spit. In Malaysia, TB has been reported as one of the most infectious diseases leading to death. Disease mapping is one of the methods that can be used as a prevention strategy, since it displays a clear picture of high- and low-risk areas. An important thing that needs to be considered when studying disease occurrence is relative risk estimation. The transmission of TB is studied through a mathematical model. Therefore, in this study, deterministic SLIR models are used to estimate the relative risk of TB transmission.

  2. Deterministic calculations of radiation doses from brachytherapy seeds

    International Nuclear Information System (INIS)

    Reis, Sergio Carneiro dos; Vasconcelos, Vanderley de; Santos, Ana Maria Matildes dos

    2009-01-01

    Brachytherapy is used for treating certain types of cancer by inserting radioactive sources into tumours. CDTN/CNEN is developing brachytherapy seeds to be used mainly in prostate cancer treatment. Dose calculations play a very significant role in the characterization of the developed seeds. The current state of the art of computational dosimetry relies on Monte Carlo methods using, for instance, MCNP codes. However, deterministic calculations have some advantages, for example short computer time to find solutions. This paper presents software developed to calculate doses in a two-dimensional space surrounding the seed, using a deterministic algorithm. The analysed seeds consist of capsules similar to IMC6711 (OncoSeed), which are commercially available. The exposure rates and absorbed doses are computed using the Sievert integral and the Meisberger third order polynomial, respectively. The software also allows the isodose visualization at the surface plane. The user can choose between four different radionuclides (192Ir, 198Au, 137Cs and 60Co). He also has to enter as input data: the exposure rate constant; the source activity; the active length of the source; the number of segments in which the source will be divided; the total source length; the source diameter; and the actual and effective source thickness. The computed results were benchmarked against results from the literature, and the developed software will be used to support the characterization process of the source that is being developed at CDTN. The software was implemented using Borland Delphi in a Windows environment and is an alternative to Monte Carlo based codes. (author)
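
    As a rough illustration of the dose-calculation chain mentioned above (an inverse-square point-source term corrected by a Meisberger-type third-order polynomial for tissue attenuation and scatter), a short sketch follows. All numerical coefficients and the rate constant in it are placeholders chosen only to show the structure of the calculation; they are not validated dosimetry data and not the values used in the CDTN software.

      # Minimal sketch of a point-source dose-rate estimate with a Meisberger-type
      # third-order polynomial. The coefficients and the rate constant below are
      # illustrative placeholders only, NOT validated dosimetry data.

      def meisberger_ratio(r_cm, coeffs=(1.01, -1.0e-3, -5.0e-4, 1.0e-5)):
          """T(r) = A + B*r + C*r^2 + D*r^3 (placeholder coefficients)."""
          a, b, c, d = coeffs
          return a + b * r_cm + c * r_cm**2 + d * r_cm**3

      def dose_rate(activity_mbq, r_cm, gamma_constant=0.111):
          """Point-source approximation: inverse-square law times tissue ratio.
          gamma_constant is a placeholder exposure-rate-type constant."""
          return gamma_constant * activity_mbq * meisberger_ratio(r_cm) / r_cm**2

      for r in (0.5, 1.0, 2.0, 5.0):
          print(f"r = {r:4.1f} cm  relative dose rate = {dose_rate(100.0, r):8.3f}")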

  3. Table-top deterministic and collective colloidal assembly using videoprojector lithography

    International Nuclear Information System (INIS)

    Cordeiro, J.; Zelsmann, M.; Honegger, T.; Picard, E.; Hadji, E.; Peyrade, D.

    2015-01-01

    Graphical abstract: - Highlights: • Micrometric resolution substrates are made at low cost using a videoprojector. • Fabricated patterns could be used as substrates for capillary force assembly. • Arrays of organized particles are made using a table-top capillary assembly tool. • This process offers a new bridge between the colloidal domain and the chip world. - Abstract: In the field of micro- and nanotechnology, most lithography and fabrication tools coming from the microelectronic industry are expensive, time-consuming and may need some masks that have to be subcontracted. Such an approach is not suitable for other fields that require rapid prototyping, such as chemistry, life science or energy, and may hinder research creativity. In this work, we present two table-top instruments dedicated to the fabrication of deterministic colloidal particle assemblies onto micro-structured substrates. We show that, with a limited modification of the optics of a standard videoprojector, it is possible to quickly obtain substrates with thousands of micrometric features. Then, we combine these substrates with thermodynamic colloidal assembly and generate arrays of particles without defects. This work opens the way to a simple, table-top fabrication of devices based on colloidal particles.

  4. Table-top deterministic and collective colloidal assembly using videoprojector lithography

    Energy Technology Data Exchange (ETDEWEB)

    Cordeiro, J. [Univ Grenoble Alpes, F-38000 Grenoble (France); CNRS, LTM, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38000 Grenoble (France); Zelsmann, M., E-mail: marc.zelsmann@cea.fr [Univ Grenoble Alpes, F-38000 Grenoble (France); CNRS, LTM, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38000 Grenoble (France); Honegger, T. [Univ Grenoble Alpes, F-38000 Grenoble (France); CNRS, LTM, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38000 Grenoble (France); Picard, E.; Hadji, E. [Univ Grenoble Alpes, F-38000 Grenoble (France); CEA, INAC-SP2M, F-38000 Grenoble (France); Peyrade, D. [Univ Grenoble Alpes, F-38000 Grenoble (France); CNRS, LTM, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38000 Grenoble (France)

    2015-09-15

    Graphical abstract: - Highlights: • Micrometric resolution substrates are made at low cost using a videoprojector. • Fabricated patterns could be used as substrates for capillary force assembly. • Arrays of organized particles are made using a table-top capillary assembly tool. • This process offers a new bridge between the colloidal domain and the chip world. - Abstract: In the field of micro- and nanotechnology, most lithography and fabrication tools coming from the microelectronic industry are expensive, time-consuming and may need some masks that have to be subcontracted. Such an approach is not suitable for other fields that require rapid prototyping, such as chemistry, life science or energy, and may hinder research creativity. In this work, we present two table-top instruments dedicated to the fabrication of deterministic colloidal particle assemblies onto micro-structured substrates. We show that, with a limited modification of the optics of a standard videoprojector, it is possible to quickly obtain substrates with thousands of micrometric features. Then, we combine these substrates with thermodynamic colloidal assembly and generate arrays of particles without defects. This work opens the way to a simple, table-top fabrication of devices based on colloidal particles.

  5. A note on limited pushdown alphabets in stateless deterministic pushdown automata

    Czech Academy of Sciences Publication Activity Database

    Masopust, Tomáš

    2013-01-01

    Roč. 24, č. 3 (2013), s. 319-328 ISSN 0129-0541 R&D Projects: GA ČR(CZ) GPP202/11/P028 Institutional support: RVO:67985840 Keywords : deterministic pushdown automata * stateless pushdown automata * realtime pushdown automata Subject RIV: BA - General Mathematics Impact factor: 0.326, year: 2013 http://www.worldscientific.com/doi/abs/10.1142/S0129054113500068

  6. Deterministic Chaos in Radon Time Variation

    International Nuclear Information System (INIS)

    Planinic, J.; Vukovic, B.; Radolic, V.; Faj, Z.; Stanic, D.

    2003-01-01

    Radon concentrations were continuously measured outdoors, in a living room and in a basement at 10-minute intervals for a month. The radon time series were analyzed by comparing algorithms to extract phase-space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, while the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos appearing in the radon time variations. The calculated fractal dimensions of the attractors indicated that more (meteorological) parameters influence radon in the atmosphere. (author)

  7. Radon time variations and deterministic chaos

    Energy Technology Data Exchange (ETDEWEB)

    Planinic, J. E-mail: planinic@pedos.hr; Vukovic, B.; Radolic, V

    2004-07-01

    Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, but the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos that appeared due to radon time variations. The calculated fractal dimensions of the attractors indicated that more (meteorological) parameters influence radon in the atmosphere.

  8. Radon time variations and deterministic chaos

    International Nuclear Information System (INIS)

    Planinic, J.; Vukovic, B.; Radolic, V.

    2004-01-01

    Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal dimensions, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, but the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos that appeared due to radon time variations. The calculated fractal dimensions of the attractors indicated that more (meteorological) parameters influence radon in the atmosphere

  9. Deterministic sensitivity analysis for the numerical simulation of contaminants transport; Analyse de sensibilite deterministe pour la simulation numerique du transfert de contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Marchand, E

    2007-12-15

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)

  10. Evaluation of the risk associated with the storage of radioactive wastes. The deterministic approach

    International Nuclear Information System (INIS)

    Lewi, J.

    1988-07-01

    Radioactive waste storage facility safety depends on a certain number of barriers being placed between the waste and man. These barriers, some of which are artificial (the waste package and engineered barriers) and others natural (geological formations), have characteristics suited to the type of storage facility (surface storage or storage in deep geological formations). The combination of these different barriers provides protection for man under all circumstances considered plausible. Justification, for the storage of given quantities of radionuclides, of the choice of the site, the artificial barriers and the overall storage architecture is obtained by evaluation of the risk, and it is this evaluation which provides the basis for determining the acceptability of the storage facility. One of the following two methods is normally used for evaluation of the risk: the deterministic method or the probabilistic method. This address describes the deterministic method. This method is employed in France for the safety analysis of the projects and works of ANDRA, the national agency responsible for the management of radioactive waste. It should be remembered that in France the La Manche surface storage centre for low- and medium-activity waste has been in existence since 1969, close to the reprocessing plant at La Hague, and a second surface storage centre is to be commissioned around 1991 at Soulaines in the centre of France (departement de l'Aube). Furthermore, geological surveying of four sites located in geological formations consisting of granite, schist, clay and salt was begun in 1987 for the selection, in about three years' time, of a site for the creation of an underground laboratory. This could later be transformed, if safety is demonstrated, into a deep storage centre

  11. Relative importance of deterministic and stochastic processes in driving arbuscular mycorrhizal fungal assemblage during the spreading of a toxic plant.

    Directory of Open Access Journals (Sweden)

    Guoxi Shi

    Full Text Available Both deterministic and stochastic processes are expected to drive the assemblages of arbuscular mycorrhizal (AM fungi, but little is known about the relative importance of these processes during the spreading of toxic plants. Here, the species composition and phylogenetic structure of AM fungal communities colonizing the roots of a toxic plant, Ligularia virgaurea, and its neighborhood plants, were analyzed in patches with different individual densities of L. virgaurea (represents the spreading degree. Community compositions of AM fungi in both root systems were changed significantly by the L. virgaurea spreading, and also these communities fitted the neutral model very well. AM fungal communities in patches with absence and presence of L. virgaurea were phylogenetically random and clustered, respectively, suggesting that the principal ecological process determining AM fungal assemblage shifted from stochastic process to environmental filtering when this toxic plant was present. Our results indicate that deterministic and stochastic processes together determine the assemblage of AM fungi, but the dominant process would be changed by the spreading of toxic plants, and suggest that the spreading of toxic plants in alpine meadow ecosystems might be involving the mycorrhizal symbionts.

  12. A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-12-12

    Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
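
    The self-shielding effect described above can be illustrated with a toy calculation (not taken from the report): when a group-average cross section over a single resonance is weighted by a flux that is depressed inside the resonance, the result is much smaller than the infinitely dilute average. The resonance shape, the narrow-resonance flux weighting and the dilution values below are all hypothetical.

        # Toy single-resonance self-shielding demo using narrow-resonance flux weighting.
        import numpy as np

        E = np.linspace(6.0, 7.5, 5000)                           # eV, around a hypothetical resonance
        sigma = 10.0 + 3.0e4 / (1.0 + ((E - 6.67) / 0.02) ** 2)   # Lorentzian resonance, barns

        def group_average(sigma, dilution):
            flux = 1.0 / (sigma + dilution)       # flux is depressed where the cross section peaks
            return np.sum(sigma * flux) / np.sum(flux)            # uniform grid, dE cancels

        for s0 in (1e5, 1e3, 1e1):                # from near-infinite dilution to strong self-shielding
            print(f"background {s0:>8.0f} b : group cross section = {group_average(sigma, s0):8.1f} b")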

  13. Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.

    Science.gov (United States)

    Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker

    2009-02-21

    Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.
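
    Stripped of all spatial detail, the structure described in this record, i.e. deterministic drift of a soluble precursor pool punctuated by randomly timed network-formation events, can be sketched as a time-discretized piecewise-deterministic process. All rates, the event-type probability and the precursor cost per event below are invented for illustration.

        # Hypothetical 0-D piecewise-deterministic sketch: deterministic pool dynamics between
        # random network-formation events (time-discretized jump clock); rates are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        dt, t_end = 0.01, 50.0
        pool = 0.2                                # soluble precursor concentration
        elongations = branchings = 0
        k_supply, k_event, cost = 0.05, 5.0, 0.02
        p_branch = 0.3                            # probability that an event is a branching event

        for _ in range(int(t_end / dt)):
            pool += k_supply * dt                 # deterministic drift of the precursor pool
            if rng.random() < k_event * pool * dt and pool > cost:   # stochastic event clock
                pool -= cost                      # the event consumes precursor
                if rng.random() < p_branch:
                    branchings += 1
                else:
                    elongations += 1

        print("elongations:", elongations, "branchings:", branchings, "final pool:", round(pool, 3))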

  14. Neutronics comparative analysis of plate-type research reactor using deterministic and stochastic methods

    International Nuclear Information System (INIS)

    Liu, Shichang; Wang, Guanbo; Wu, Gaochen; Wang, Kan

    2015-01-01

    Highlights: • DRAGON and DONJON are applied and verified in calculations of research reactors. • Continuous-energy Monte Carlo calculations by RMC are chosen as the references. • “ECCO” option of DRAGON is suitable for the calculations of research reactors. • Manual modifications of cross-sections are not necessary with DRAGON and DONJON. • DRAGON and DONJON agree well with RMC if appropriate treatments are applied. - Abstract: Simulation of the behavior of plate-type research reactors such as JRR-3M and CARR poses a challenge for traditional neutronics calculation tools and schemes for power reactors, due to the complex geometry, high heterogeneity and large leakage of the research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON and DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic approach. The goal of this research is to examine the capability of the deterministic code system DRAGON and DONJON to reliably simulate the research reactors. The results indicate that the DRAGON and DONJON code system agrees well with the continuous-energy Monte Carlo simulation on both keff and flux distributions if the appropriate treatments (such as the ECCO option) are applied

  15. Safety of long-distance pipelines. Probabilistic and deterministic aspects; Sicherheit von Rohrfernleitungen. Probabilistik und Deterministik im Vergleich

    Energy Technology Data Exchange (ETDEWEB)

    Hollaender, Robert [Leipzig Univ. (Germany). Inst. fuer Infrastruktur und Ressourcenmanagement

    2013-03-15

    The Committee for Long-Distance Pipelines (Berlin, Federal Republic of Germany) reported on the relation between deterministic and probabilistic approaches in order to contribute to a better understanding of the safety management of long-distance pipelines. The respective strengths and weaknesses as well as the deterministic and probabilistic fundamentals of the safety management are described. The comparison includes fundamental aspects, but is essentially shaped by the special character of the technical installation 'long-distance pipeline' as an infrastructure project spread across the landscape. This special feature leads to special operating conditions and related responsibilities. However, our legal system does not grant the 'long-distance pipeline' the same legal position as other infrastructural facilities such as roads and railways. Thus, the question of whether and how impacts from land use in the vicinity of long-distance pipelines have to be considered is, again and again, the starting point for the discussion on probabilistic and deterministic approaches.

  16. Performance Analysis of Recurrence Matrix Statistics for the Detection of Deterministic Signals in Noise

    National Research Council Canada - National Science Library

    Michalowicz, Joseph V; Nichols, Jonathan M; Bucholtz, Frank

    2008-01-01

    Understanding the limitations to detecting deterministic signals in the presence of noise, especially additive, white Gaussian noise, is of importance for the design of LPI systems and anti-LPI signal defense...

  17. On competition in a Stackelberg location-design model with deterministic supplier choice

    NARCIS (Netherlands)

    Hendrix, E.M.T.

    2016-01-01

    We study a market situation where two firms maximize market capture by deciding on their location in the plane and investing in a competing quality against investment cost. Clients choose one of the suppliers, i.e. deterministic supplier choice. To study this situation, a game theoretic model is developed.

  18. HOW ECOLOGICAL COMMUNITIES ARE STRUCTURED: A REVIEW ON ECOLOGICAL ASSEMBLY RULES

    Directory of Open Access Journals (Sweden)

    Gabriel Jaime Colorado Zuluaga

    Full Text Available Whether biological communities are deterministic or stochastic assemblages of species has long been a central topic of ecology. The widely demonstrated presence of structural patterns in nature may imply the existence of rules that regulate the organization of ecological communities. In this review, I present a compilation of the major assembly rules that to a large extent underpin community assembly theory. Initially, I give a general overview of key concepts associated with the assembly of communities, in particular the origin of assembly rules, their definition, the problem of scale, and the underlying mechanisms in the structure of ecological communities. Subsequently, two major approaches or paradigms (i.e. species-based and trait-based) for the assembly of communities are discussed. Finally, the major tested assembly rules are explored and discussed in the light of the available published literature.

  19. Greek classicism in living structure? Some deductive pathways in animal morphology.

    Science.gov (United States)

    Zweers, G A

    1985-01-01

    Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods show clearly the high explanatory power of deductive methods in morphology, but they also make one open end most explicit: neutral issues do exist. Full explanation of living structure asks for three entries: functional design within architectural and transformational constraints. The transformational constraint brings necessarily in a stochastic component: an at random variation being a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilling. Nevertheless, finally the question comes up whether for animal structure a similar situation exists as in Greek Classical temples. This means that the at random variation, that is found when the optimal design is used to explain structure, comprises apart from a stochastic part also real deviations being yet another deterministic part. This deterministic part could be a set of rules that governs

  20. The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes

    Science.gov (United States)

    Bogdanova, E. V.; Kuznetsov, A. N.

    2017-01-01

    Solving the problem of increasing the safety of nuclear power plants requires complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it is characterized by long calculation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of a graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers the 2-D layer of the BN-600 reactor core from the international benchmark test published in the report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross-sections, diffusion coefficients, the effective multiplication factor and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.

  1. Burnup-dependent core neutronics analysis of plate-type research reactor using deterministic and stochastic methods

    International Nuclear Information System (INIS)

    Liu, Shichang; Wang, Guanbo; Liang, Jingang; Wu, Gaochen; Wang, Kan

    2015-01-01

    Highlights: • DRAGON & DONJON were applied in burnup calculations of plate-type research reactors. • Continuous-energy Monte Carlo burnup calculations by RMC were chosen as references. • Comparisons of keff, isotopic densities and power distribution were performed. • Reasons leading to discrepancies between two different approaches were analyzed. • DRAGON & DONJON is capable of burnup calculations with appropriate treatments. - Abstract: The burnup-dependent core neutronics analysis of plate-type research reactors such as JRR-3M poses a challenge for traditional neutronics calculational tools and schemes for power reactors, due to the complex geometry, high heterogeneity, large leakage and the particular neutron spectrum of the research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the burnup-dependent core neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON & DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic one. In the first stage, the homogenizations of few-group cross sections by DRAGON and the full core diffusion calculations by DONJON have been verified by comparing with the detailed Monte Carlo simulations. In the second stage, the burnup-dependent calculations of both the assembly level and the full core level were carried out, to examine the capability of the deterministic code system DRAGON & DONJON to reliably simulate the burnup-dependent behavior of research reactors. The results indicate that both RMC and the DRAGON & DONJON code system are capable of burnup-dependent neutronics analysis of research reactors, provided that appropriate treatments are applied at both the assembly and core levels for the deterministic codes

  2. Chemical model reduction under uncertainty

    KAUST Repository

    Najm, Habib; Galassi, R. Malpica; Valorani, M.

    2016-01-01

    We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.

  3. Chemical model reduction under uncertainty

    KAUST Repository

    Najm, Habib

    2016-01-05

    We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.

  4. Estimation of structural reliability under combined loads

    International Nuclear Information System (INIS)

    Shinozuka, M.; Kako, T.; Hwang, H.; Brown, P.; Reich, M.

    1983-01-01

    For the overall safety evaluation of seismic category I structures subjected to various load combinations, a quantitative measure of the structural reliability in terms of a limit state probability can be conveniently used. For this purpose, the reliability analysis method for dynamic loads, which has recently been developed by the authors, was combined with the existing standard reliability analysis procedure for static and quasi-static loads. The significant parameters that enter into the analysis are: the rate at which each load (dead load, accidental internal pressure, earthquake, etc.) will occur, its duration and intensity. All these parameters are basically random variables for most of the loads to be considered. For dynamic loads, the overall intensity is usually characterized not only by their dynamic components but also by their static components. The structure considered in the present paper is a reinforced concrete containment structure subjected to various static and dynamic loads such as dead loads, accidental pressure, earthquake acceleration, etc. Computations are performed to evaluate the limit state probabilities under each load combination separately and also under all possible combinations of such loads

  5. Combination of the deterministic and probabilistic approaches for risk-informed decision-making in US NRC regulatory guides

    International Nuclear Information System (INIS)

    Patrik, M.; Babic, P.

    2001-06-01

    The report responds to the trend where probabilistic safety analyses are attached, on a voluntary basis (as yet), to the mandatory deterministic assessment of modifications of NPP systems or operating procedures, resulting in risk-informed type documents. It contains a nearly complete Czech translation of US NRC Regulatory Guide 1.177 and presents some suggestions for improving a) PSA study applications; b) the development of NPP documents for the regulatory body; and c) the interconnection between PSA and traditional deterministic analyses as contained in the risk-informed approach. (P.A.)

  6. On the progress towards probabilistic basis for deterministic codes

    International Nuclear Information System (INIS)

    Ellyin, F.

    1975-01-01

    Fundamental arguments for a probabilistic basis of codes are presented. A class of code formats is outlined in which explicit statistical measures of the uncertainty of design variables are incorporated. The format looks very much like present (deterministic) codes except for having a probabilistic background. An example is provided in which the design factors are plotted against the safety index, the probability of failure, and the risk of mortality. The safety level of the present codes is also indicated. A decision regarding the new probabilistically based code parameters could thus be made with full knowledge of the implied consequences

  7. A toolkit for integrated deterministic and probabilistic assessment for hydrogen infrastructure.

    Energy Technology Data Exchange (ETDEWEB)

    Groth, Katrina M.; Tchouvelev, Andrei V.

    2014-03-01

    There has been increasing interest in using Quantitative Risk Assessment [QRA] to help improve the safety of hydrogen infrastructure and applications. Hydrogen infrastructure for transportation (e.g. fueling fuel cell vehicles) or stationary (e.g. back-up power) applications is a relatively new area for application of QRA vs. traditional industrial production and use, and as a result there are few tools designed to enable QRA for this emerging sector. There are few existing QRA tools containing models that have been developed and validated for use in small-scale hydrogen applications. However, in the past several years, there has been significant progress in developing and validating deterministic physical and engineering models for hydrogen dispersion, ignition, and flame behavior. In parallel, there has been progress in developing defensible probabilistic models for the occurrence of events such as hydrogen release and ignition. While models and data are available, using this information is difficult due to a lack of readily available tools for integrating deterministic and probabilistic components into a single analysis framework. This paper discusses the first steps in building an integrated toolkit for performing QRA on hydrogen transportation technologies and suggests directions for extending the toolkit.

  8. Introduction of Deterministic OS for SPLC in Advanced Nuclear I and C System

    Energy Technology Data Exchange (ETDEWEB)

    Son, C. W.; Kim, D. H.; Son, G. S. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    Existing PLCs either had no output logic for redundant device structures or had that logic fixed in advance; as a result they were extremely inefficient for redundant systems such as those of a nuclear power plant, and their use was limited. This project has developed a real-time operating system with redundancy-handling logic in order to remedy these problems, namely the scheduling method and the redundancy-handling function of the supervisory layer. A controller used in nuclear power plant safety systems is developed differently from an industrial PLC in its design, production and testing. Therefore, the operating system that controls the NPP safety PLC should be developed so that it works properly with the handling logic of the safety system. Scoops secures deterministic behavior for the safety PLC control by providing sequential scheduling, time management, redundancy-handling logic and a system diagnosis function, taking into account the connectivity of all the software modules through the supervisory layer.

  9. Hybrid Broadband Ground-Motion Simulations: Combining Long-Period Deterministic Synthetics with High-Frequency Multiple S-to-S Backscattering

    KAUST Repository

    Mai, Paul Martin; Imperatori, W.; Olsen, K. B.

    2010-01-01

    We present a new approach for computing broadband (0-10 Hz) synthetic seismograms by combining high-frequency (HF) scattering with low-frequency (LF) deterministic seismograms, considering finite-fault earthquake rupture models embedded in 3D earth structure. Site-specific HF-scattering Green's functions for a heterogeneous medium with uniformly distributed random isotropic scatterers are convolved with a source-time function that characterizes the temporal evolution of the rupture process. These scatterograms are then reconciled with the LF-deterministic waveforms using a frequency-domain optimization to match both amplitude and phase spectra around the target intersection frequency. The scattering parameters of the medium, scattering attenuation ηs, intrinsic attenuation ηi, and site-kappa, as well as frequency-dependent attenuation, determine waveform and spectral character of the HF-synthetics and thus affect the hybrid broadband seismograms. Applying our methodology to the 1994 Northridge earthquake and validating against near-field recordings at 24 sites, we find that our technique provides realistic broadband waveforms and consistently reproduces LF ground-motion intensities for two independent source descriptions. The least biased results, compared to recorded strong-motion data, are obtained after applying a frequency-dependent site-amplification factor to the broadband simulations. This innovative hybrid ground-motion simulation approach, applicable to any arbitrarily complex earthquake source model, is well suited for seismic hazard analysis and ground-motion estimation.

  10. Hybrid Broadband Ground-Motion Simulations: Combining Long-Period Deterministic Synthetics with High-Frequency Multiple S-to-S Backscattering

    KAUST Repository

    Mai, Paul Martin

    2010-09-20

    We present a new approach for computing broadband (0-10 Hz) synthetic seismograms by combining high-frequency (HF) scattering with low-frequency (LF) deterministic seismograms, considering finite-fault earthquake rupture models embedded in 3D earth structure. Site-specific HF-scattering Green's functions for a heterogeneous medium with uniformly distributed random isotropic scatterers are convolved with a source-time function that characterizes the temporal evolution of the rupture process. These scatterograms are then reconciled with the LF-deterministic waveforms using a frequency-domain optimization to match both amplitude and phase spectra around the target intersection frequency. The scattering parameters of the medium, scattering attenuation ηs, intrinsic attenuation ηi, and site-kappa, as well as frequency-dependent attenuation, determine waveform and spectral character of the HF-synthetics and thus affect the hybrid broadband seismograms. Applying our methodology to the 1994 Northridge earthquake and validating against near-field recordings at 24 sites, we find that our technique provides realistic broadband waveforms and consistently reproduces LF ground-motion intensities for two independent source descriptions. The least biased results, compared to recorded strong-motion data, are obtained after applying a frequency-dependent site-amplification factor to the broadband simulations. This innovative hybrid ground-motion simulation approach, applicable to any arbitrarily complex earthquake source model, is well suited for seismic hazard analysis and ground-motion estimation.
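
    The matching step described in these records is a frequency-domain optimization of both amplitude and phase around the intersection frequency. As a much cruder stand-in that only illustrates the idea of merging the two frequency bands, the sketch below combines a placeholder low-frequency trace and a placeholder high-frequency trace with complementary crossover filters at an assumed 1 Hz crossover; it is not the authors' reconciliation procedure.

        # Crude crossover merge of an LF-deterministic and an HF-scattering trace (illustrative only;
        # the cited method matches amplitude and phase spectra, which this sketch does not).
        import numpy as np

        fs, T, f_match = 100.0, 40.96, 1.0                   # sampling rate [Hz], duration [s], crossover [Hz]
        n = int(fs * T)
        t = np.arange(n) / fs
        lf = np.sin(2 * np.pi * 0.3 * t) * np.exp(-0.05 * t)           # placeholder LF synthetic
        hf = 0.2 * np.random.default_rng(1).standard_normal(n)         # placeholder HF scatterogram

        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        w = 1.0 / (1.0 + (freqs / f_match) ** 8)             # smooth low-pass weight; high-pass weight = 1 - w
        broadband = np.fft.irfft(w * np.fft.rfft(lf) + (1.0 - w) * np.fft.rfft(hf), n)
        print("broadband peak amplitude (placeholder units):", np.abs(broadband).max())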

  11. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé ; Vigneron, Antoine E.

    2013-01-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance
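
    The record's algorithm is an optimal O(n log n) deterministic method whose details are not given here; as a baseline that makes the problem statement concrete, the sketch below solves the same problem with a straightforward dynamic program, using a bisection on the step level to evaluate each segment's best possible maximum weighted error. It is far slower than the cited algorithm and is intended only to illustrate the objective.

        # Naive dynamic-programming baseline, not the paper's O(n log n) algorithm.
        import numpy as np

        def seg_cost(y, w):
            """Min over level v of max_i w_i * |y_i - v| (weighted 1-center on a line), by bisection."""
            lo, hi = y.min(), y.max()
            for _ in range(60):
                v = 0.5 * (lo + hi)
                i = np.argmax(w * np.abs(y - v))              # move toward the point realizing the maximum
                lo, hi = (v, hi) if y[i] > v else (lo, v)
            v = 0.5 * (lo + hi)
            return np.max(w * np.abs(y - v))

        def fit_step_function(x, y, w, k):
            order = np.argsort(x)                             # steps partition the points by x
            y, w = np.asarray(y, float)[order], np.asarray(w, float)[order]
            n = len(y)
            cost = [[seg_cost(y[i:j + 1], w[i:j + 1]) if j >= i else np.inf
                     for j in range(n)] for i in range(n)]    # cost[i][j]: one step over points i..j
            best = np.full((k + 1, n + 1), np.inf)
            best[0, 0] = 0.0
            for steps in range(1, k + 1):
                for j in range(1, n + 1):
                    for i in range(j):
                        best[steps, j] = min(best[steps, j], max(best[steps - 1, i], cost[i][j - 1]))
            return best[k, n]                                 # minimal max weighted vertical distance

        rng = np.random.default_rng(0)
        x, y, w = rng.random(30), rng.random(30), 1 + rng.random(30)
        print("best max weighted error with 4 steps:", round(fit_step_function(x, y, w, 4), 4))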

  12. Quasi-static structural optimization under the seismic loads

    International Nuclear Information System (INIS)

    Choi, W. S.; Lee, K. M.; Kim, T. W.

    2001-01-01

    As a preliminary to the optimization of SMART under seismic loads, a quasi-static structural optimization for elastic structures under dynamic loads is presented. An equivalent static load (ESL) set is defined as a static load set which generates the same displacement field as that produced by a dynamic load at a certain time. Multiple ESL sets calculated at all the time intervals are employed to represent the various states of the structure under the dynamic load; they can cover all the critical states that might occur at arbitrary times. The continuous character of a dynamic load is thus captured by multiple static load sets. The calculated sets of ESLs are utilized as a multiple loading condition in the optimization process. A design cycle is defined as a circulated process between an analysis domain and a design domain. The analysis domain gives the loading condition needed in the design domain, and the design domain gives a new, updated design to be verified by the analysis domain in the next design cycle. The design cycles are iterated until the design converges. Structural optimization with dynamic loads becomes tractable with the proposed method. Standard example problems are solved to verify the validity of the method
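
    The key identity behind the method is that the equivalent static load set at a sampling instant t_s is the static load that reproduces the dynamic displacement field, f_eq(t_s) = K u(t_s). A small two-degree-of-freedom sketch (with invented mass, stiffness and forcing, not the SMART data) shows how multiple ESL sets can be extracted from a transient solution and the most severe one identified.

        # Equivalent static loads f_eq(t_s) = K u(t_s) extracted from a 2-DOF transient solution.
        import numpy as np
        from scipy.integrate import solve_ivp

        M = np.diag([1.0, 2.0])                               # hypothetical mass matrix
        K = np.array([[40.0, -20.0], [-20.0, 20.0]])          # hypothetical stiffness matrix
        f = lambda t: np.array([0.0, 10.0 * np.sin(8.0 * t)]) # dynamic load

        def rhs(t, z):
            u, v = z[:2], z[2:]                               # displacements and velocities
            return np.concatenate([v, np.linalg.solve(M, f(t) - K @ u)])

        sol = solve_ivp(rhs, (0.0, 3.0), np.zeros(4), max_step=1e-3, dense_output=True)
        times = np.linspace(0.0, 3.0, 31)                     # sampling instants, one ESL set each
        esl_sets = [K @ sol.sol(ts)[:2] for ts in times]
        worst = max(esl_sets, key=lambda g: np.abs(g).max())  # most severe set for the optimizer
        print("most severe equivalent static load set:", np.round(worst, 2))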

  13. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
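
    The ripple-spreading algorithm itself is not reproduced in this record; the sketch below only illustrates the target object, namely the complete, exact Pareto front of a small discrete bi-objective problem, obtained here by exhaustive enumeration and dominance filtering. The staged-choice problem is hypothetical.

        # Exact Pareto front of a small discrete bi-objective problem by exhaustive enumeration
        # plus dominance filtering (illustrative only; not the ripple-spreading algorithm).
        from itertools import product

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        # hypothetical problem: pick one option per stage, minimizing (total cost, total time)
        stages = [[(2, 5), (4, 2), (3, 3)], [(1, 4), (5, 1)], [(2, 2), (1, 6)]]
        solutions = [tuple(map(sum, zip(*choice))) for choice in product(*stages)]
        front = sorted({s for s in solutions if not any(dominates(o, s) for o in solutions)})
        print("exact Pareto front (cost, time):", front)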

  14. Deterministic and probabilistic crack growth analysis for the JRC Ispra 1/5 scale pressure vessel n0 R2

    International Nuclear Information System (INIS)

    Bruckner-Foit, A.; Munz, D.

    1989-10-01

    A deterministic and a probabilistic crack growth analysis is presented for the major defects found in the welds during ultrasonic pre-service inspection. The deterministic analysis includes first a determination of the number of load cycles until crack initiation, then a cycle-by-cycle calculation of the growth of the embedded elliptical cracks, followed by an evaluation of the growth of the semi-elliptical surface crack formed after the crack considered has broken through the wall and, finally, a determination of the critical crack size and shape. In the probabilistic analysis, a Monte-Carlo simulation is performed with a sample of cracks where the statistical distributions of the crack dimensions describe the uncertainty in sizing of the ultrasonic inspection. The distributions of crack depth, crack length and location are evaluated as a function of the number of load cycles. In the simulation, the fracture mechanics model of the deterministic analysis is employed for each random crack. The results of the deterministic and probabilistic crack growth analysis are compared with the results of the second in-service inspection where stable extension of some of the cracks had been observed. It is found that the prediction and the experiment agree only with a probability of the order of 5% or less
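
    A minimal sketch, assuming Paris-law growth under constant-amplitude cycling, of how the deterministic cycle-by-cycle integration and the Monte Carlo treatment of sizing uncertainty fit together; the material constants, stress range, critical depth and crack-size distribution below are invented placeholders, not the Ispra vessel data.

        # Cycle-by-cycle Paris-law growth with Monte Carlo sampling of the sized initial crack depth.
        import numpy as np

        C, m = 1.0e-11, 3.0                      # Paris-law constants (m/cycle, MPa*sqrt(m)); invented
        d_sigma, Y = 200.0, 1.12                 # stress range [MPa] and geometry factor; invented
        a_crit = 20.0                            # assumed critical crack depth [mm]

        def cycles_to_critical(a0_mm, max_cycles=200_000, block=500):
            """Deterministic block-wise Paris-law integration for one crack."""
            a = a0_mm
            for n in range(0, max_cycles, block):
                dK = Y * d_sigma * np.sqrt(np.pi * a / 1000.0)   # MPa*sqrt(m); depth converted to metres
                a += C * dK ** m * 1000.0 * block                # growth over one block, in mm
                if a >= a_crit:
                    return n + block
            return max_cycles + 1                # crack stayed subcritical over the analysed life

        rng = np.random.default_rng(2)
        a0 = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=1000)   # sizing uncertainty of depth [mm]
        lives = np.array([cycles_to_critical(a) for a in a0])
        print("fraction growing to critical size within 200,000 cycles:", np.mean(lives <= 200_000))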

  15. Extended method of moments for deterministic analysis of stochastic multistable neurodynamical systems

    International Nuclear Information System (INIS)

    Deco, Gustavo; Marti, Daniel

    2007-01-01

    The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability

  16. Extended method of moments for deterministic analysis of stochastic multistable neurodynamical systems

    Science.gov (United States)

    Deco, Gustavo; Martí, Daniel

    2007-03-01

    The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability.
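
    Both records build on the classical moment method with a unimodal Gaussian closure, which the authors extend to bimodal distributions. The sketch below shows that classical step for a scalar double-well Langevin equation dx = (a x - x^3) dt + sigma dW: assuming x is Gaussian with mean mu and variance var closes the hierarchy and yields two deterministic ODEs.

        # Classical unimodal Gaussian moment closure for dx = (a*x - x^3) dt + sigma dW
        # (the starting point that the cited papers extend to bimodal distributions).
        import numpy as np
        from scipy.integrate import solve_ivp

        a, sigma = 1.0, 0.3

        def moment_odes(t, m):
            mu, var = m
            dmu = a * mu - (mu**3 + 3.0 * mu * var)              # E[f(x)] under a Gaussian
            dvar = 2.0 * (a - 3.0 * (mu**2 + var)) * var + sigma**2
            return [dmu, dvar]

        sol = solve_ivp(moment_odes, (0.0, 20.0), [0.8, 0.01], dense_output=True)
        mu_end, var_end = sol.y[:, -1]
        print(f"closure prediction: mean = {mu_end:.3f}, variance = {var_end:.3f}")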

  17. Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  18. Deterministic time-reversible thermostats: chaos, ergodicity, and the zeroth law of thermodynamics

    Science.gov (United States)

    Patra, Puneet Kumar; Sprott, Julien Clinton; Hoover, William Graham; Griswold Hoover, Carol

    2015-09-01

    The relative stability and ergodicity of deterministic time-reversible thermostats, both singly and in coupled pairs, are assessed through their Lyapunov spectra. Five types of thermostat are coupled to one another through a single Hooke's-law harmonic spring. The resulting dynamics shows that three specific thermostat types, Hoover-Holian, Ju-Bulgac, and Martyna-Klein-Tuckerman, have very similar Lyapunov spectra in their equilibrium four-dimensional phase spaces and when coupled in equilibrium or nonequilibrium pairs. All three of these oscillator-based thermostats are shown to be ergodic, with smooth analytic Gaussian distributions in their extended phase spaces (coordinate, momentum, and two control variables). Evidently these three ergodic and time-reversible thermostat types are particularly useful as statistical-mechanical thermometers and thermostats. Each of them generates Gibbs' universal canonical distribution internally as well as for systems to which they are coupled. Thus they obey the zeroth law of thermodynamics, as a good heat bath should. They also provide dissipative heat flow with relatively small nonlinearity when two or more such temperature baths interact and provide useful deterministic replacements for the stochastic Langevin equation.
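
    As a concrete instance of one of the thermostats named above, the sketch below integrates the Hoover-Holian oscillator in the form commonly quoted in this literature (two control variables regulating the second and fourth moments of the momentum); the equations and parameters should be checked against the paper before any quantitative use.

        # Hoover-Holian thermostatted harmonic oscillator, as commonly written:
        #   dq/dt = p,  dp/dt = -q - zeta*p - xi*p^3,  dzeta/dt = p^2 - T,  dxi/dt = p^4 - 3*T*p^2
        import numpy as np
        from scipy.integrate import solve_ivp

        T = 1.0                                               # target temperature

        def hh(t, s):
            q, p, zeta, xi = s
            return [p, -q - zeta * p - xi * p**3, p * p - T, p**4 - 3.0 * T * p * p]

        sol = solve_ivp(hh, (0.0, 2000.0), [0.0, 1.0, 0.0, 0.0],
                        dense_output=True, rtol=1e-8, atol=1e-8)
        t = np.linspace(100.0, 2000.0, 20000)                 # discard a transient, sample uniformly
        p = sol.sol(t)[1]
        print("time-averaged <p^2> (should approach T):", round(float(np.mean(p * p)), 3))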

  19. Cavity structural integrity evaluation of steam explosion using LS-DYNA

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dae-Young; Park, Chang-Hwan [FNC Technology Co. Ltd., Yongin (Korea, Republic of); Kim, Kap-sun [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    To investigate the mechanical response of the newly designed NPP to a steam explosion, a cavity structural integrity evaluation was performed, in which the mechanical load resulting from a steam explosion in the reactor cavity was calculated. Two kinds of approach were considered in the evaluation, one deterministic and the other probabilistic. In this report, the procedure and the results of the deterministic analysis are presented. When a severe accident occurs, the core is relocated to the lower head. In this case, an Ex-Vessel Steam Explosion (EVSE) can occur, which can threaten the structural integrity of the cavity through the load applied to the walls or slabs of the cavity. The large amount of energy transmitted by the interaction between the molten corium and the water causes dynamic loading on the concrete walls, which not only affects the survivability of the various equipment but also threatens the integrity of the containment. In this report, the response of the cavity wall structure is analyzed using a nonlinear finite element analysis (FEA) code, and the resulting stress and strain of the structure were evaluated against the criteria in NEI 07-13. A deterministic method was used in this study, with specific values of the material properties and a clearly defined steam explosion pressure curve, and the finite element analysis was performed for the dynamic load generated by the steam explosion to investigate its effect on the cavity structure. The results showed that the rebar and the liner remain intact even under the high pressure pulse produced by the steam explosion; liner integrity is the more critical criterion for judging the preservation of leak-tightness. At the same time, cracks were found in the concrete.

  20. Structural pounding of concrete frame structure with masonry infill wall under seismic loading

    Science.gov (United States)

    Ismail, Rozaina; Hasnan, Mohd Hafizudin; Shamsudin, Nurhanis

    2017-10-01

    Structural pounding is an additional problem beyond the other harmful damage that may occur due to earthquake vibrations. Many studies have been made by past researchers, but most of them did not include the walls. Infill masonry walls are rarely included in the analysis of structural systems, yet they do contribute to the earthquake response of structures. In this research, a comparison is made between adjacent 10-storey and 7-storey concrete frame buildings, with and without masonry infill walls, having otherwise the same dynamic properties. The diagonal strut approach is adopted for modeling the masonry infill walls. This research also focused on finding the critical building separation needed to prevent the adjacent structures from pounding. LUSAS FEA v14.03 software has been used for modeling and analyzing the behavior of the structures under seismic loading, and the displacement of each floor of the building has been taken in order to determine the critical separation distance between the buildings. From the analysis, it is found that masonry infill walls do affect the behavior of the structures under seismic load. Structures without masonry infill walls need a larger separation to prevent structural pounding, because of the higher displacement of the buildings when they sway under seismic load compared to structures with masonry infill walls. This shows that the contribution of masonry infill walls to the analysis of structures cannot be neglected.

  1. Spent Fuel Pool Dose Rate Calculations Using Point Kernel and Hybrid Deterministic-Stochastic Shielding Methods

    International Nuclear Information System (INIS)

    Matijevic, M.; Grgic, D.; Jecmenica, R.

    2016-01-01

    This paper presents comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall mounted level instrumentation in case of significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry-air as bulk region materials. The pool is divided on the old and new section where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling time (60 days, 1 year and 5 years). The new section represents the FAs with the cooling time of 10 years. The time dependent fuel assembly isotopic composition was calculated using ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on imported isotopic activities. The time dependent photon spectra with total source intensity from Microshield multigroup point kernel calculations was then prepared for two hybrid deterministic-stochastic sequences. One is based on SCALE/MAVRIC (Monaco and Denovo) methodology and another uses the Monte Carlo code MCNP6.1.1b and the ADVANTG 3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and WWINP file) for MCNP fixed-source calculation using continuous energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first

  2. Used Nuclear Fuel Loading and Structural Performance Under Normal Conditions of Transport- Demonstration of Approach and Results on Used Fuel Performance Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Adkins, Harold [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Geelhood, Ken [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Koeppel, Brian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Coleman, Justin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bignell, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flores, Gregg [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wang, Jy-An [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sanborn, Scott [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Spears, Robert [Idaho National Lab. (INL), Idaho Falls, ID (United States); Klymyshyn, Nick [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-09-30

    This document addresses Oak Ridge National Laboratory milestone M2FT-13OR0822015, Demonstration of Approach and Results on Used Nuclear Fuel Performance Characterization. This report provides results of the initial demonstration of the modeling capability developed to perform preliminary deterministic evaluations of moderate-to-high burnup used nuclear fuel (UNF) mechanical performance under normal conditions of storage (NCS) and normal conditions of transport (NCT). This report also provides results from the sensitivity studies that have been performed. Finally, discussion of the long-term goals and objectives of this initiative is provided.

  3. Comparison of construction algorithms for minimal, acyclic, deterministic, finite-state automata from sets of strings

    NARCIS (Netherlands)

    Daciuk, J; Champarnaud, JM; Maurel, D

    2003-01-01

    This paper compares various methods for constructing minimal, deterministic, acyclic, finite-state automata (recognizers) from sets of words. Incremental, semi-incremental, and non-incremental methods have been implemented and evaluated.

  4. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    Science.gov (United States)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method, the Riccati-Bernoulli Sub-ODE technique, is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear Foam Drainage equation. The control of the random input is also studied with respect to the stability of the stochastic process solution.

  5. Dynamic response of structures with uncertain parameters

    International Nuclear Information System (INIS)

    Cai, Z H; Liu, Y; Yang, Y

    2010-01-01

    In this paper, an interval method for the dynamic response of structures with uncertain parameters is presented. In the presented method, the structural physical and geometric parameters and the loads can be considered as interval variables. The structural stiffness matrix, mass matrix and loading vectors are described as the sum of two parts corresponding to the deterministic matrix and the uncertainty of the interval parameters. The interval problem is then transformed into an approximate deterministic one. The Laplace transform is used to convert the equations of the dynamic system into linear algebraic equations, and a Maclaurin series expansion is applied to the modified dynamic equation in order to solve these equations. Numerical examples with and without damping are studied using the presented interval method. The upper and lower bounds of the dynamic responses of the examples are compared, and the results show that the presented method is effective.
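
    As a rough illustration of how an interval parameter can be propagated through a dynamic analysis, the following minimal Python sketch samples the interval (endpoints plus interior points) and takes the envelope of the responses; it is not the Laplace-transform/Maclaurin-series formulation of the paper, and the oscillator constants are invented.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Damped single-DOF oscillator under a step load, with the stiffness
        # treated as an interval [k_lo, k_hi]; all numbers are illustrative.
        m, c, F = 1.0, 0.4, 1.0
        k_lo, k_hi = 90.0, 110.0
        t_eval = np.linspace(0.0, 5.0, 501)

        def response(k):
            def rhs(t, y):
                x, v = y
                return [v, (F - c * v - k * x) / m]
            sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], t_eval=t_eval, rtol=1e-8)
            return sol.y[0]

        # The envelope over samples of the interval parameter approximates the
        # lower and upper response bounds.
        samples = np.array([response(k) for k in np.linspace(k_lo, k_hi, 11)])
        lower, upper = samples.min(axis=0), samples.max(axis=0)
        print("largest upper-bound displacement:", upper.max())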

  6. Response of Launch Pad Structures to Random Acoustic Excitation

    Directory of Open Access Journals (Sweden)

    Ravi N. Margasahayam

    1994-01-01

    The design of launch pad structures, particularly those having a large area-to-mass ratio, is governed by launch-induced acoustics, a relatively short transient with random pressure amplitudes having a non-Gaussian distribution. The factors influencing the acoustic excitation and the resulting structural responses are numerous and cannot be predicted precisely. Two solutions (probabilistic and deterministic) for the random vibration problem are presented in this article from the standpoint of their applicability to predict the response of ground structures exposed to rocket noise. Deficiencies of the probabilistic method, especially in predicting the response in the low-frequency range of launch transients (below 20 Hz), prompted the development of the deterministic analysis. The relationship between the two solutions is clarified for future implementation in a finite element method (FEM) code.

  7. Floor response spectra of buildings with uncertain structural properties

    International Nuclear Information System (INIS)

    Chen, P.C.

    1975-01-01

    All Category I equipment, such as reactors, vessels, and major piping systems of nuclear power plants, is required to withstand earthquake loadings in order to minimize the risk of seismic damage. The equipment is designed by using response spectra of the floor on which the equipment is mounted. The floor response spectra are usually constructed from the floor response time histories, which are obtained through a deterministic dynamic analysis. This analysis assumes that all structural parameters, such as mass, stiffness, and damping, have been calculated precisely, and that the earthquakes are known. However, structural parameters are usually difficult to determine precisely if the structures are massive and/or irregular, such as nuclear containments and their internal structures with the foundation soil incorporated into the analysis. Faced with these uncertainties, it has been the practice to broaden the floor response spectra peaks by ±10 percent of the peak frequencies on the basis of conservatism. This approach is based on engineering judgement and does not have an analytical basis to provide a sufficient level of confidence in using these spectra for equipment design. To ensure reliable design, it is necessary to know the structural response variations due to variations in structural properties. This consideration leads to the treatment of structural properties as random variables and the use of probabilistic methods to predict structural response more accurately. New results on floor response spectra of buildings with uncertain structural properties, obtained by determining the probabilistic dynamic response from the deterministic dynamic response and its standard deviation, are presented. The resulting probabilistic floor response spectra are compared with those obtained deterministically, and are shown to provide a more reliable method for determining seismic forces.

  8. Stability of a nonlinear second order equation under parametric bounded noise excitation

    International Nuclear Information System (INIS)

    Wiebe, Richard; Xie, Wei-Chau

    2016-01-01

    The motivation for the following work is a structural column under dynamic axial loads with both deterministic (harmonic transmitted forces from the surrounding structure) and random (wind and/or earthquake) loading components. The bounded noise used herein is a sinusoid with an argument composed of a random (Wiener) process deviation about a mean frequency. By this approach, a noise parameter may be used to investigate the behavior through the spectrum from simple harmonic forcing, to a bounded random process with very little harmonic content. The stability of both the trivial and non-trivial stationary solutions of an axially-loaded column (which is modeled as a second order nonlinear equation) under parametric bounded noise excitation is investigated by use of Lyapunov exponents. Specifically the effect of noise magnitude, amplitude of the forcing, and damping on stability of a column is investigated. First order averaging is employed to obtain analytical approximations of the Lyapunov exponents of the trivial solution. For the non-trivial stationary solution however, the Lyapunov exponents are obtained via Monte Carlo simulation as the stability equations become analytically intractable. (paper)

  9. Absorbing phase transitions in deterministic fixed-energy sandpile models

    Science.gov (United States)

    Park, Su-Chan

    2018-03-01

    We investigate the origin of the difference, which was noticed by Fey et al. [Phys. Rev. Lett. 104, 145703 (2010), 10.1103/PhysRevLett.104.145703], between the steady state density of an Abelian sandpile model (ASM) and the transition point of its corresponding deterministic fixed-energy sandpile model (DFES). Being deterministic, the configuration space of a DFES can be divided into two disjoint classes such that every configuration in one class should evolve into one of absorbing states, whereas no configurations in the other class can reach an absorbing state. Since the two classes are separated in terms of toppling dynamics, the system can be made to exhibit an absorbing phase transition (APT) at various points that depend on the initial probability distribution of the configurations. Furthermore, we show that in general the transition point also depends on whether an infinite-size limit is taken before or after the infinite-time limit. To demonstrate, we numerically study the two-dimensional DFES with Bak-Tang-Wiesenfeld toppling rule (BTW-FES). We confirm that there are indeed many thresholds. Nonetheless, the critical phenomena at various transition points are found to be universal. We furthermore discuss a microscopic absorbing phase transition, or a so-called spreading dynamics, of the BTW-FES, to find that the phase transition in this setting is related to the dynamical isotropic percolation process rather than self-organized criticality. In particular, we argue that choosing recurrent configurations of the corresponding ASM as an initial configuration does not allow for a nontrivial APT in the DFES.
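
    A minimal Python sketch of a deterministic fixed-energy sandpile with the BTW toppling rule on a periodic lattice is given below; the lattice size, the Poisson initial condition and the sweep limit are arbitrary choices for illustration, not the settings used in the study.

        import numpy as np

        # BTW fixed-energy sandpile on an L x L periodic lattice: every site with
        # 4 or more grains topples in parallel, sending one grain to each of its
        # 4 neighbours. Grains are conserved, so the dynamics either reaches an
        # absorbing state (no active sites) or stays active forever.
        def run_btw_fes(z, max_sweeps=10_000):
            z = z.copy()
            for sweep in range(max_sweeps):
                active = z >= 4
                if not active.any():
                    return sweep, z            # absorbing state reached
                topples = active.astype(int)
                z -= 4 * topples
                z += np.roll(topples, 1, axis=0) + np.roll(topples, -1, axis=0)
                z += np.roll(topples, 1, axis=1) + np.roll(topples, -1, axis=1)
            return None, z                     # still active after the sweep limit

        rng = np.random.default_rng(0)
        L, density = 32, 2.0                   # mean grains per site controls the transition
        sweeps, final = run_btw_fes(rng.poisson(density, size=(L, L)))
        if sweeps is None:
            print("still active after the sweep limit")
        else:
            print("absorbing state reached after", sweeps, "parallel sweeps")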

  10. Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset documents the source of the data analyzed in the manuscript "Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII...

  11. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    NARCIS (Netherlands)

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcao

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with

  12. Temperature regulates deterministic processes and the succession of microbial interactions in anaerobic digestion process

    Czech Academy of Sciences Publication Activity Database

    Lin, Qiang; De Vrieze, J.; Li, Ch.; Li, J.; Li, J.; Yao, M.; Heděnec, Petr; Li, H.; Li, T.; Rui, J.; Frouz, Jan; Li, X.

    2017-01-01

    Vol. 123, October (2017), pp. 134-143 ISSN 0043-1354 Institutional support: RVO:60077344 Keywords: anaerobic digestion * deterministic process * microbial interactions * modularity * temperature gradient Subject RIV: DJ - Water Pollution; Quality OECD field: Water resources Impact factor: 6.942, year: 2016

  13. Band structure of CdTe under high pressure

    International Nuclear Information System (INIS)

    Jayam, Sr. Gerardin; Nirmala Louis, C.; Amalraj, A.

    2005-01-01

    The band structures and density of states of cadmium telluride (CdTe) under various pressures ranging from normal to 4.5 Mbar are obtained. The electronic band structure at normal pressure of CdTe (ZnS structure) is analyzed and the direct band gap value is found to be 1.654 eV. CdTe becomes a metal and a superconductor under high pressure, but before that it undergoes a structural phase transition from the ZnS phase to the NaCl phase. The equilibrium lattice constant, bulk modulus and the phase transition pressure at which the compound undergoes the structural phase transition from ZnS to NaCl are predicted from the total energy calculations. The density of states at the Fermi level, N(EF), gets enhanced after metallization, which leads to superconductivity in CdTe. In our calculation, the metallization pressure (PM = 1.935 Mbar) and the corresponding reduced volume ((V/V0)M = 0.458) are estimated. Metallization occurs via direct closing of the band gap at the Γ point. (author)

  14. Deterministic simulation of first-order scattering in virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. E-mail: nicolas.freud@insa-lyon.fr; Duvauchelle, P.; Pistrui-Maximean, S.A.; Letang, J.-M.; Babot, D

    2004-07-01

    A deterministic algorithm is proposed to compute the contribution of first-order Compton- and Rayleigh-scattered radiation in X-ray imaging. This algorithm has been implemented in a simulation code named virtual X-ray imaging. The physical models chosen to account for photon scattering are the well-known form factor and incoherent scattering function approximations, which are recalled in this paper and whose limits of validity are briefly discussed. The proposed algorithm, based on a voxel discretization of the inspected object, is presented in detail, as well as its results in simple configurations, which are shown to converge when the sampling steps are chosen sufficiently small. Simple criteria for choosing correct sampling steps (voxel and pixel size) are established. The order of magnitude of the computation time necessary to simulate first-order scattering images amounts to hours with a PC architecture and can even be decreased down to minutes, if only a profile is computed (along a linear detector). Finally, the results obtained with the proposed algorithm are compared to the ones given by the Monte Carlo code Geant4 and found to be in excellent accordance, which constitutes a validation of our algorithm. The advantages and drawbacks of the proposed deterministic method versus the Monte Carlo method are briefly discussed.

  15. Nonterminals, homomorphisms and codings in different variations of OL-systems. I. Deterministic systems

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto

    1974-01-01

    The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...

  16. Structural phase transitions in boron carbide under stress

    International Nuclear Information System (INIS)

    Korotaev, P; Pokatashkin, P; Yanilkin, A

    2016-01-01

    Structural transitions in boron carbide B4C under stress were studied by means of first-principles molecular dynamics in the framework of density functional theory. The behavior depends strongly on the degree of non-hydrostatic stress. Under hydrostatic stress, continuous bending of the three-atom C–B–C chain was observed up to 70 GPa. The presence of non-hydrostatic stress activates abrupt reversible chain bending, which is a displacement of the central boron atom in the chain with the formation of weak bonds between this atom and atoms in the nearby icosahedra. Such a structural change can describe a possible reversible phase transition in dynamical loading experiments. High non-hydrostatic stress achieved in uniaxial loading leads to disordering of the initial structure. The formation of carbon chains is observed as one possible transition route. (paper)

  17. MIRD Commentary: Proposed Name for a Dosimetry Unit Applicable to Deterministic Biological Effects-The Barendsen (Bd)

    International Nuclear Information System (INIS)

    Sgouros, George; Howell, R. W.; Bolch, Wesley E.; Fisher, Darrell R.

    2009-01-01

    The fundamental physical quantity for relating all biologic effects to radiation exposure is the absorbed dose, the energy imparted per unit mass of tissue. Absorbed dose is expressed in units of joules per kilogram (J/kg) and is given the special name gray (Gy). Exposure to ionizing radiation may cause both deterministic and stochastic biologic effects. To account for the relative effect per unit absorbed dose that has been observed for different types of radiation, the International Commission on Radiological Protection (ICRP) has established radiation weighting factors for stochastic effects. The product of absorbed dose in Gy and the radiation weighting factor is defined as the equivalent dose. Equivalent dose values are designated by a special named unit, the sievert (Sv). Unlike the situation for stochastic effects, no well-defined formalism and associated special named quantities have been widely adopted for deterministic effects. The therapeutic application of radionuclides and, specifically, α-particle emitters in nuclear medicine has brought to the forefront the need for a well-defined dosimetry formalism applicable to deterministic effects that is accompanied by corresponding special named quantities. This commentary reviews recent proposals related to this issue and concludes with a recommendation to establish a new named quantity

  18. OCA-P, a deterministic and probabilistic fracture-mechanics code for application to pressure vessels

    International Nuclear Information System (INIS)

    Cheverton, R.D.; Ball, D.G.

    1984-05-01

    The OCA-P code is a probabilistic fracture-mechanics code that was prepared specifically for evaluating the integrity of pressurized-water reactor vessels when subjected to overcooling-accident loading conditions. The code has two-dimensional- and some three-dimensional-flaw capability; it is based on linear-elastic fracture mechanics; and it can treat cladding as a discrete region. Both deterministic and probabilistic analyses can be performed. For the former analysis, it is possible to conduct a search for critical values of the fluence and the nil-ductility reference temperature corresponding to incipient initiation of the initial flaw. The probabilistic portion of OCA-P is based on Monte Carlo techniques, and simulated parameters include fluence, flaw depth, fracture toughness, nil-ductility reference temperature, and concentrations of copper, nickel, and phosphorus. Plotting capabilities include the construction of critical-crack-depth diagrams (deterministic analysis) and various histograms (probabilistic analysis)
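
    The following Python fragment is a schematic of the Monte Carlo portion of such a probabilistic fracture-mechanics calculation; the sampling distributions, the applied stress-intensity expression and the toughness curve are invented placeholders, not the OCA-P models.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # Hypothetical sampled parameters (placeholders, not OCA-P distributions).
        flaw_depth = rng.weibull(1.5, n) * 5.0        # mm
        rt_ndt     = rng.normal(30.0, 10.0, n)        # nil-ductility reference temperature, deg C
        T_wall     = 60.0                             # wall temperature at the flaw, deg C

        # Hypothetical applied stress intensity, growing with flaw depth.
        K_applied = 20.0 + 8.0 * np.sqrt(flaw_depth)  # MPa*sqrt(m)

        # Hypothetical toughness curve: K_Ic rises with (T - RT_NDT), capped at 200.
        K_Ic = np.minimum(200.0, 35.0 + 2.0 * np.exp(0.02 * (T_wall - rt_ndt)))

        # Fraction of sampled flaws for which the applied K exceeds the toughness.
        print("estimated probability of crack initiation:",
              np.mean(K_applied >= K_Ic))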

  19. Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal

    OpenAIRE

    Wronna, M.; Omira, R.; Baptista, M. A.

    2015-01-01

    In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This...

  20. A deterministic - approach controller design for electrohydraulic position servo control system

    International Nuclear Information System (INIS)

    Johari Osman

    2000-01-01

    This paper is concerned with the design of a tracking controller for an electrohydraulic position servo system based on a deterministic approach. The system is treated as an uncertain system with bounded uncertainties, where the bounds are assumed known. It will be shown that the electrohydraulic position servo system with the proposed controller is practically stable and tracks the desired position in spite of the uncertainties and nonlinearities present in the system (author)

  1. Calculating the effective delayed neutron fraction in the Molten Salt Fast Reactor: Analytical, deterministic and Monte Carlo approaches

    International Nuclear Information System (INIS)

    Aufiero, Manuele; Brovchenko, Mariya; Cammi, Antonio; Clifford, Ivor; Geoffroy, Olivier; Heuer, Daniel; Laureau, Axel; Losa, Mario; Luzzi, Lelio; Merle-Lucotte, Elsa; Ricotti, Marco E.; Rouch, Hervé

    2014-01-01

    Highlights: • Calculation of effective delayed neutron fraction in circulating-fuel reactors. • Extension of the Monte Carlo SERPENT-2 code for delayed neutron precursor tracking. • Forward and adjoint multi-group diffusion eigenvalue problems in OpenFOAM. • Analytical approach for βeff calculation in simple geometries and flow conditions. • Good agreement among the three proposed approaches in the MSFR test-case. - Abstract: This paper deals with the calculation of the effective delayed neutron fraction (βeff) in circulating-fuel nuclear reactors. The Molten Salt Fast Reactor is adopted as test case for the comparison of the analytical, deterministic and Monte Carlo methods presented. The Monte Carlo code SERPENT-2 has been extended to allow for delayed neutron precursors drift, according to the fuel velocity field. The forward and adjoint eigenvalue multi-group diffusion problems are implemented and solved adopting the multi-physics tool-kit OpenFOAM, by taking into account the convective and turbulent diffusive terms in the precursors balance. These two approaches show good agreement in the whole range of the MSFR operating conditions. An analytical formula for the circulating-to-static conditions βeff correction factor is also derived under simple hypotheses, which explicitly takes into account the spatial dependence of the neutron importance. Its accuracy is assessed against Monte Carlo and deterministic results. The effects of in-core recirculation vortex and turbulent diffusion are finally analysed and discussed

  2. Design and Analysis of a Low Latency Deterministic Network MAC for Wireless Sensor Networks.

    Science.gov (United States)

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-09-22

    The IEEE 802.15.4e standard has four different superframe structures for different applications. The low latency deterministic network (LLDN) superframe, intended for wireless sensor networks and able to operate in a star topology, is one of them. In this paper, a new channel access mechanism for IEEE 802.15.4e-based LLDN shared slots is proposed, and analytical models are designed based on this channel access mechanism. A prediction model is designed to estimate the possible number of retransmission slots based on the number of failed transmissions. Performance analysis in terms of data transmission reliability, delay, throughput and energy consumption is provided based on our proposed designs. Our designs are validated through simulation and analytical results, and it is observed that the simulation results match well with the analytical ones. Besides, our designs are compared with the IEEE 802.15.4 MAC mechanism, and it is shown that ours outperforms it in terms of throughput, energy consumption, delay and reliability.

  3. Rapid detection of small oscillation faults via deterministic learning.

    Science.gov (United States)

    Wang, Cong; Chen, Tianrui

    2011-08-01

    Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the diagnosis phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the training normal and fault modes. By comparing the set of estimators with the test monitored system, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the training normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in that the modeling uncertainty and nonlinear fault functions are accurately approximated and then the knowledge is utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.
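
    A toy Python sketch of the smallest-residual principle is given below; the hand-written dynamics functions stand in for the stored constant-RBF approximations of the normal and fault modes, and all numbers are illustrative.

        import numpy as np

        def normal_mode(x):
            return -x

        def fault_mode_1(x):                    # small oscillation fault, variant 1
            return -x + 0.3 * np.sin(3.0 * x)

        def fault_mode_2(x):                    # small oscillation fault, variant 2
            return -x + 0.3 * np.sin(6.0 * x)

        bank = {"normal": normal_mode, "fault-1": fault_mode_1, "fault-2": fault_mode_2}

        # Simulate the monitored system, which is actually running in fault mode 1.
        dt, steps = 0.01, 2000
        x = np.empty(steps)
        x[0] = 1.0
        for k in range(steps - 1):
            x[k + 1] = x[k] + dt * fault_mode_1(x[k])

        # Average L1 residual between the observed derivative and each stored model.
        dxdt = np.gradient(x, dt)
        residuals = {name: np.mean(np.abs(dxdt - f(x))) for name, f in bank.items()}
        print(residuals)
        print("detected mode:", min(residuals, key=residuals.get))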

  4. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    Science.gov (United States)

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X, for four different choices of X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
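
    A minimal Python sketch of one cell of such a phase-diagram experiment is shown below, using a Spikes and Sines dictionary (identity concatenated with a DCT) as the deterministic matrix and solving basis pursuit as a linear program; the problem sizes and sparsity level are arbitrary choices.

        import numpy as np
        from scipy.fft import dct
        from scipy.optimize import linprog

        # Deterministic n x 2n matrix: spikes (identity) and sines (orthonormal DCT).
        n, k = 64, 8
        N = 2 * n
        A = np.hstack([np.eye(n), dct(np.eye(n), norm="ortho", axis=0)])

        # A k-sparse object with random support and coefficients, and its measurements.
        rng = np.random.default_rng(1)
        x0 = np.zeros(N)
        x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
        y = A @ x0

        # Basis pursuit min ||x||_1 s.t. Ax = y, written with split variables x = u - v.
        res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
                      bounds=(0, None), method="highs")
        x_hat = res.x[:N] - res.x[N:]

        # For this point of the phase diagram recovery typically succeeds (small error).
        print("relative recovery error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))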

  5. Influence of wind energy forecast in deterministic and probabilistic sizing of reserves

    Energy Technology Data Exchange (ETDEWEB)

    Gil, A.; Torre, M. de la; Dominguez, T.; Rivas, R. [Red Electrica de Espana (REE), Madrid (Spain). Dept. Centro de Control Electrico

    2010-07-01

    One of the challenges in large-scale wind energy integration in electrical systems is coping with wind forecast uncertainties at the time of sizing generation reserves. These reserves must be sized large enough so that they don't compromise security of supply or the balance of the system, but economic efficiency must be also kept in mind. This paper describes two methods of sizing spinning reserves taking into account wind forecast uncertainties, deterministic using a probabilistic wind forecast and probabilistic using stochastic variables. The deterministic method calculates the spinning reserve needed by adding components each of them in order to overcome one single uncertainty: demand errors, the biggest thermal generation loss and wind forecast errors. The probabilistic method assumes that demand forecast errors, short-term thermal group unavailability and wind forecast errors are independent stochastic variables and calculates the probability density function of the three variables combined. These methods are being used in the case of the Spanish peninsular system, in which wind energy accounted for 14% of the total electrical energy produced in the year 2009 and is one of the systems in the world with the highest wind penetration levels. (orig.)
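
    A toy Python comparison of the two sizing philosophies is sketched below; the error standard deviations, largest-unit size, outage rate and coverage level are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 200_000

        # Assumed independent uncertainties (all figures are placeholders).
        demand_err = rng.normal(0.0, 300.0, n)              # MW
        wind_err   = rng.normal(0.0, 500.0, n)              # MW
        outage     = rng.binomial(1, 0.02, n) * 1000.0      # loss of the largest unit, MW
        imbalance  = demand_err + wind_err + outage         # upward reserve requirement

        # Deterministic sizing: one component per uncertainty, added together.
        det_reserve = 3 * 300.0 + 1000.0 + 3 * 500.0

        # Probabilistic sizing: a quantile of the combined distribution.
        prob_reserve = np.quantile(imbalance, 0.999)

        print(f"deterministic: {det_reserve:.0f} MW, probabilistic (99.9%): {prob_reserve:.0f} MW")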

  6. Deterministic sensitivity analysis for the numerical simulation of contaminants transport

    International Nuclear Information System (INIS)

    Marchand, E.

    2007-12-01

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
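
    A minimal Python sketch of the derivative-plus-SVD idea follows: a finite-difference Jacobian of a toy model (an arbitrary smooth function standing in for the flow and transport code) is assembled and its singular value decomposition inspected.

        import numpy as np

        def model(p):
            # Stand-in for the code mapping subsoil/contaminant parameters to outlet fluxes.
            k1, k2, k3 = p
            return np.array([k1 * k2 / (1.0 + k3), np.exp(-k3) * k1, k2 + 0.1 * k1 * k3])

        p0, eps = np.array([1.0, 2.0, 0.5]), 1e-6

        # Central finite-difference Jacobian, one column per input parameter.
        J = np.column_stack([
            (model(p0 + eps * np.eye(3)[i]) - model(p0 - eps * np.eye(3)[i])) / (2 * eps)
            for i in range(3)
        ])

        # Singular values rank the locally most influential parameter combinations.
        U, s, Vt = np.linalg.svd(J)
        print("singular values:", s)
        print("most influential input direction:", Vt[0])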

  7. Use of deterministic methods in survey calculations for criticality problems

    International Nuclear Information System (INIS)

    Hutton, J.L.; Phenix, J.; Course, A.F.

    1991-01-01

    A code package using deterministic methods for solving the Boltzmann Transport equation is the WIMS suite. This has been very successful for a range of situations. In particular it has been used with great success to analyse trends in reactivity with a range of changes in state. The WIMS suite of codes have a range of methods and are very flexible in the way they can be combined. A wide variety of situations can be modelled ranging through all the current Thermal Reactor variants to storage systems and items of chemical plant. These methods have recently been enhanced by the introduction of the CACTUS method. This is based on a characteristics technique for solving the Transport equation and has the advantage that complex geometrical situations can be treated. In this paper the basis of the method is outlined and examples of its use are illustrated. In parallel with these developments the validation for out of pile situations has been extended to include experiments with relevance to criticality situations. The paper will summarise this evidence and show how these results point to a partial re-adoption of deterministic methods for some areas of criticality. The paper also presents results to illustrate the use of WIMS in criticality situations and in particular show how it can complement codes such as MONK when used for surveying the reactivity effect due to changes in geometry or materials. (Author)

  8. RDS - A systematic approach towards system thermal hydraulics input code development for a comprehensive deterministic safety analysis

    International Nuclear Information System (INIS)

    Salim, Mohd Faiz; Roslan, Ridha; Ibrahim, Mohd Rizal Mamat

    2014-01-01

    Deterministic Safety Analysis (DSA) is one of the mandatory requirements for the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied for the design and operation of facilities or activities. Computer codes are normally used to assist in performing all required analyses under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a pre-requisite reference document for developing input nodalization was proposed. This paper shall describe the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for an Integral Test Facility of LOBI-MOD2 and associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in a single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes resolution of contradictions and, finally, assists in developing the thermal hydraulic input regardless of which code is selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information and the limitation of resources. Some possible improvements are suggested to overcome these challenges

  9. RDS - A systematic approach towards system thermal hydraulics input code development for a comprehensive deterministic safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Salim, Mohd Faiz, E-mail: mohdfaizs@tnb.com.my [Nuclear Energy Department, Tenaga Nasional Berhad, Level 32, Dua Sentral, 50470 Kuala Lumpur (Malaysia); Roslan, Ridha [Nuclear Installation Division, Atomic Energy Licensing Board, Batu 24, Jalan Dengkil, 43800 Dengkil, Selangor (Malaysia); Ibrahim, Mohd Rizal Mamat [Technical Support Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang, Selangor (Malaysia)

    2014-02-12

    Deterministic Safety Analysis (DSA) is one of the mandatory requirements for the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied for the design and operation of facilities or activities. Computer codes are normally used to assist in performing all required analyses under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a pre-requisite reference document for developing input nodalization was proposed. This paper shall describe the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for an Integral Test Facility of LOBI-MOD2 and associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in a single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes resolution of contradictions and, finally, assists in developing the thermal hydraulic input regardless of which code is selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information and the limitation of resources. Some possible improvements are suggested to overcome these challenges.

  10. Simulation of dose deposition in stereotactic synchrotron radiation therapy: a fast approach combining Monte Carlo and deterministic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr

    2009-08-07

    A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method. The gain in speed in a test case was about 25 for a constant precision. Therefore, this method appears to be suitable for treatment planning applications.
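
    A schematic Python sketch of the splitting idea follows: the primary transmission through a one-dimensional slab is computed deterministically, and at each sampled scatter site several secondary photons of reduced weight are generated and scored deterministically by exponential attenuation along their directions. The geometry, cross sections and single-scatter-only physics are simplifications for illustration, not the model of the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        mu, slab, p_scatter = 0.8, 5.0, 0.6        # attenuation [1/cm], thickness [cm]

        def single_scatter_scores(n_histories, n_split):
            scores = np.zeros(n_histories)
            for i in range(n_histories):
                x = rng.exponential(1.0 / mu)      # depth of the first interaction
                if x >= slab:
                    continue                       # no interaction inside the slab
                # Splitting: n_split secondaries, each of weight p_scatter / n_split,
                # each scored deterministically by attenuation along its direction.
                w = p_scatter / n_split
                mu_dir = 1.0 - rng.uniform(0.0, 1.0, n_split)   # forward cosines in (0, 1]
                scores[i] = np.sum(w * np.exp(-mu * (slab - x) / mu_dir))
            return scores

        primary = np.exp(-mu * slab)               # deterministic primary transmission
        base  = single_scatter_scores(20_000, 1)
        split = single_scatter_scores(20_000, 20)
        print("primary (deterministic):", primary)
        print("scatter, no split :", base.mean(),  "+/-", base.std()  / np.sqrt(base.size))
        print("scatter, split x20:", split.mean(), "+/-", split.std() / np.sqrt(split.size))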

  11. Deterministic secure communications using two-mode squeezed states

    International Nuclear Information System (INIS)

    Marino, Alberto M.; Stroud, C. R. Jr.

    2006-01-01

    We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state

  12. On the usage of ultrasound computational models for decision making under ambiguity

    Science.gov (United States)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.

  13. ZZ BOREHOLE-EB6.8-MG, multi group cross-section library for deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    Kodeli, Ivo; Aldama, Daniel L.; Leege, Piet F.A. de; Legrady, David; Hoogenboom, J. Eduard

    2007-01-01

    1 - Description: Format: MATXS and ACE; Number of groups: 175 neutron, 45 gamma-ray; Nuclides: H-1, C-12, O-16, Na-23, Mg-nat, Al-27, Si-28, -29, -30, S-nat, Cl-35, -37, K-nat, Ca-nat, Mn-55, Fe-54, -56, -57, -58, I-127, W-nat. Origin: ENDF/B-VI.8; Weighting spectrum: Fission and fusion peak at high energies and a 1/E + thermal Maxwellian extension at low energies. The following materials/nuclides are included in the library: H-1, C-12, O-16, Na-23, Mg-nat, Al-27, Si-28, -29, -30, S-nat, Cl-35, -37, K-nat, Ca-nat, Fe-54, -56, -57, -58, Mn-55, I-127, W-nat. ZZ-BOREHOLE-EB6.8-MG is a multigroup cross section library for deterministic (DOORS, DANTSYS) and Monte Carlo (MCNP) transport codes developed for the oil well logging applications. The library is based on the ENDF/B-VI.8 evaluation and was processed by the NJOY-99 code. The cross sections are given in the 175 neutron and 45 gamma ray group structure. The MATXS format library can be directly used in TRANSX code to prepare the multigroup self-shielded cross sections for deterministic discrete ordinates codes like DOORS and DANTSYS. The data provided in the GROUPR and GAMINR format were converted to the MCNP ACE format by the NSLINK, SCALE and CRSRD codes. IAEA1398/03: Multigroup cross section data for Mn-55 were added in TRANSX format

  14. Autonomous choices among deterministic evolution-laws as source of uncertainty

    Science.gov (United States)

    Trujillo, Leonardo; Meyroneinc, Arnaud; Campos, Kilver; Rendón, Otto; Sigalotti, Leonardo Di G.

    2018-03-01

    We provide evidence of an extreme form of sensitivity to initial conditions in a family of one-dimensional self-ruling dynamical systems. We prove that some hyperchaotic sequences are closed-form expressions of the orbits of these pseudo-random dynamical systems. Each chaotic system in this family exhibits a sensitivity to initial conditions that encompasses the sequence of choices of the evolution rule in some collection of maps. This opens a possibility to extend current theories of complex behaviors on the basis of intrinsic uncertainty in deterministic chaos.
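
    An illustrative one-dimensional "self-ruling" system is sketched in Python below (it is not the family studied in the paper): at every step the current state selects which map is applied next, so a tiny change in the initial condition eventually changes the whole sequence of chosen evolution rules, not just the orbit.

        import numpy as np

        def logistic(x):
            return 4.0 * x * (1.0 - x)

        def tent(x):
            return 2.0 * min(x, 1.0 - x)

        def iterate(x0, n=60):
            xs, rules = [x0], []
            for _ in range(n):
                rule = 0 if xs[-1] < 0.5 else 1    # the state itself picks the next rule
                xs.append((logistic, tent)[rule](xs[-1]))
                rules.append(rule)
            return np.array(xs), np.array(rules)

        xa, ra = iterate(0.3123)
        xb, rb = iterate(0.3123 + 1e-9)
        mismatch = int(np.argmax(ra != rb)) if (ra != rb).any() else None
        print("orbit separation after 60 steps:", abs(xa[-1] - xb[-1]))
        print("first step at which the chosen rules differ:", mismatch)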

  15. Enhanced deterministic phase retrieval using a partially developed speckle field

    DEFF Research Database (Denmark)

    Almoro, Percival F.; Waller, Laura; Agour, Mostafa

    2012-01-01

    A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle intensity measurements are recorded at the output plane corresponding to axially-propagated representations of the PDSF in the input plane. The speckle intensity measurements are then used in a conventional transport of intensity equation (TIE) to reconstruct directly the test wavefront. The PDSF in our...

  16. Electronic spectrum of a deterministic single-donor device in silicon

    International Nuclear Information System (INIS)

    Fuechsle, Martin; Miwa, Jill A.; Mahapatra, Suddhasatta; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2013-01-01

    We report the fabrication of a single-electron transistor (SET) based on an individual phosphorus dopant that is deterministically positioned between the dopant-based electrodes of a transport device in silicon. Electronic characterization at mK-temperatures reveals a charging energy that is very similar to the value expected for isolated P donors in a bulk Si environment. Furthermore, we find indications for bulk-like one-electron excited states in the co-tunneling spectrum of the device, in sharp contrast to previous reports on transport through single dopants

  17. On the effect of deterministic terms on the bias in stable AR models

    NARCIS (Netherlands)

    van Giersbergen, N.P.A.

    2004-01-01

    This paper compares the first-order bias approximation for the autoregressive (AR) coefficients in stable AR models in the presence of deterministic terms. It is shown that the bias due to inclusion of an intercept and trend is twice as large as the bias due to an intercept. For the AR(1) model, the
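
    A quick Monte Carlo check of the relative size of these biases can be run in Python; the sample size, AR coefficient and replication count below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(11)
        T, rho, reps = 50, 0.6, 20_000

        def ols_ar1(y, trend=False):
            # Regress y_t on a constant, y_{t-1} and (optionally) a linear trend.
            ylag, ynow = y[:-1], y[1:]
            X = [np.ones(ynow.size), ylag]
            if trend:
                X.append(np.arange(1, ynow.size + 1))
            beta = np.linalg.lstsq(np.column_stack(X), ynow, rcond=None)[0]
            return beta[1]                         # coefficient on the lagged level

        est_c, est_ct = np.empty(reps), np.empty(reps)
        for r in range(reps):
            e = rng.standard_normal(T + 50)
            y = np.empty(T + 50)
            y[0] = 0.0
            for t in range(1, T + 50):
                y[t] = rho * y[t - 1] + e[t]
            y = y[50:]                             # drop the burn-in
            est_c[r]  = ols_ar1(y, trend=False)
            est_ct[r] = ols_ar1(y, trend=True)

        print("bias with intercept        :", est_c.mean() - rho)
        print("bias with intercept + trend:", est_ct.mean() - rho)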

  18. Deterministic Line-Shape Programming of Silicon Nanowires for Extremely Stretchable Springs and Electronics.

    Science.gov (United States)

    Xue, Zhaoguo; Sun, Mei; Dong, Taige; Tang, Zhiqiang; Zhao, Yaolong; Wang, Junzhuan; Wei, Xianlong; Yu, Linwei; Chen, Qing; Xu, Jun; Shi, Yi; Chen, Kunji; Roca I Cabarrocas, Pere

    2017-12-13

    Line-shape engineering is a key strategy to endow extra stretchability to 1D silicon nanowires (SiNWs) grown with self-assembly processes. We here demonstrate a deterministic line-shape programming of in-plane SiNWs into extremely stretchable springs or arbitrary 2D patterns with the aid of indium droplets that absorb amorphous Si precursor thin film to produce ultralong c-Si NWs along programmed step edges. A reliable and faithful single run growth of c-SiNWs over turning tracks with different local curvatures has been established, while high resolution transmission electron microscopy analysis reveals a high quality monolike crystallinity in the line-shaped engineered SiNW springs. Excitingly, in situ scanning electron microscopy stretching and current-voltage characterizations also demonstrate a superelastic and robust electric transport carried by the SiNW springs even under large stretching of more than 200%. We suggest that this highly reliable line-shape programming approach holds a strong promise to extend the mature c-Si technology into the development of a new generation of high performance biofriendly and stretchable electronics.

  19. Tribological analysis of the ventral scale structure in a Python regius in relation to laser textured surfaces

    International Nuclear Information System (INIS)

    Abdel-Aal, H A; El Mansori, M

    2013-01-01

    Laser texturing is one of the leading technologies applied to modify surface topography. To date, however, a standardized procedure to generate deterministic textures is virtually non-existent. In nature, especially in squamata, there are many examples of deterministic structured textures that allow species to control friction and condition their tribological response for efficient function. In this work, we draw a comparison between industrial surfaces and reptilian surfaces. We chose the Python regius species as a bio-analogue with a deterministic surface. We first study the structural make up of the ventral scales of the snake (both construction and metrology). We further compare the metrological features of the ventral scales to experimentally recommended performance indicators of industrial surfaces extracted from open literature. The results indicate the feasibility of engineering a laser textured surface based on the reptilian ornamentation constructs. It is shown that the metrological features, key to efficient function of a rubbing deterministic surface, are already optimized in the reptile. We further show that optimization in reptilian surfaces is based on synchronizing surface form, textures and aspects to condition the frictional response. Mimicking reptilian surfaces, we argue, may form a design methodology potentially capable of generating advanced deterministic surface constructs capable of efficient tribological function. (paper)

  20. Performance of Sweet Pepper under Protective Structure in Gazipur of Bangladesh

    Directory of Open Access Journals (Sweden)

    GMA Halim

    2013-08-01

    Evaluation of sweet pepper cultivation under different protective structures was made in two consecutive seasons of 2007-08 and 2008-09 at the experimental field of the Horticulture Research Center of BARI, Gazipur. One popular commercial capsicum variety, California Wonder, was included in the study with four protective structures (low height poly tunnel, poly tunnel with side open, poly tunnel with side closed, and poly house) including a control (open field). Protective structures had a remarkable and significant influence on plant growth and yield of sweet pepper. The plants grown under protective structures had higher plant height compared to that of plants grown in the open field. The highest individual fruit weight (65.2 g) was recorded from the plants grown under poly house conditions, while it was the lowest from open field grown plants (3.34 g). More than five fruits were harvested when the plants were grown under the poly tunnel (side closed) or poly house. The maximum fruit yield per plant (334.0 g) was recorded from the poly house, which was 160.4% higher than that of plants grown under open field conditions. The second highest yield was recorded from the plants of the poly tunnel (212.5 g), indicating bright scope for sweet pepper cultivation under protective structures.

  1. The integrity of cracked structures under thermal loading

    International Nuclear Information System (INIS)

    Townley, C.H.A.

    1976-01-01

    Previous work by Dowling and Townley on the load-carrying capacity of a cracked structure is extended so that quantitative predictions can be made about failure under thermal loading. Residual stresses can be dealt with in the same way as thermal stresses. It is shown that the tolerance of the structure to thermal stress can be quantified in terms of a parameter which defines the state of the structure. This state parameter can be deduced from the calculated performance of the structure when subjected to an external load. (author)

  2. A deterministic computer simulation model of life-cycle lamb and wool production.

    Science.gov (United States)

    Wang, C T; Dickerson, G E

    1991-11-01

    A deterministic mathematical computer model was developed to simulate effects on life-cycle efficiency of lamb and wool production from genetic improvement of performance traits under alternative management systems. Genetic input parameters can be varied for age at puberty, length of anestrus, fertility, precocity of fertility, number born, milk yield, mortality, growth rate, body fat, and wool growth. Management options include mating systems, lambing intervals, feeding levels, creep feeding, weaning age, marketing age or weight, and culling policy. Simulated growth of animals is linear from birth to inflection point, then slows asymptotically to specified mature empty BW and fat content when nutrition is not limiting. The ME intake requirement to maintain normal condition is calculated daily or weekly for maintenance, protein and fat deposition, wool growth, gestation, and lactation. Simulated feed intake is the minimum of availability, DM physical limit, or ME physiological limit. Tissue catabolism occurs when intake is below the requirement for essential functions. Mortality increases when BW is depressed. Equations developed for calculations of biological functions were validated with published and unpublished experimental data. Lifetime totals are accumulated for TDN, DM, and protein intake and for market lamb equivalent output values of empty body or carcass lean and wool from both lambs and ewes. These measures of efficiency for combinations of genetic, management, and marketing variables can provide the relative economic weighting of traits needed to derive optimal criteria for genetic selection among and within breeds under defined industry production systems.

  3. Coevolutionary and genetic algorithm based building spatial and structural design

    NARCIS (Netherlands)

    Hofmeyer, H.; Davila Delgado, J.M.

    2015-01-01

    In this article, two methods to develop and optimize accompanying building spatial and structural designs are compared. The first, a coevolutionary method, applies deterministic procedures, inspired by realistic design processes, to cyclically add a suitable structural design to the input of a

  4. Probabilistic structural analysis methods for space transportation propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.

    1991-01-01

    Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for 2nd stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans.

  5. Flow Forecasting using Deterministic Updating of Water Levels in Distributed Hydrodynamic Urban Drainage Models

    DEFF Research Database (Denmark)

    Hansen, Lisbet Sneftrup; Borup, Morten; Moller, Arne

    2014-01-01

    drainage models and reduce a number of unavoidable discrepancies between the model and reality. The latter can be achieved partly by inserting measured water levels from the sewer system into the model. This article describes how deterministic updating of model states in this manner affects a simulation...

  6. Analysis of deterministic swapping of photonic and atomic states through single-photon Raman interaction

    Science.gov (United States)

    Rosenblum, Serge; Borne, Adrien; Dayan, Barak

    2017-03-01

    The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.

  7. How the growth rate of host cells affects cancer risk in a deterministic way

    Science.gov (United States)

    Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe

    2017-09-01

    It is well known that cancers are significantly more often encountered in some tissues than in other ones. In this paper, by using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependency of tumor growth on parameter values characterizing the type as well as the state of the tissue considered due to the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without evoking random mutation arising during DNA replications in nonmalignant cells or "bad luck". Strategies to limit the mortality induced by cancer could therefore be well based on improving the way of life, that is, by better preserving the tissue where mutant cells randomly arise.
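
    A generic host/immune/tumor competition model can be integrated in Python for several host-cell growth rates, as in the sketch below; this is an illustrative system of the same flavour as the tissue-level model described, not the model of the paper, and all coefficients are invented.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, rho_host):
            H, E, T = y                            # host, effector immune, tumor cells
            dH = rho_host * H * (1.0 - H) - 0.3 * H * T
            dE = 0.2 * E * (1.0 - E) + 0.4 * E * T - 0.3 * E
            dT = 0.5 * T * (1.0 - T) - 0.4 * E * T - 0.2 * H * T
            return [dH, dE, dT]

        detectable = 0.3                           # arbitrary detection threshold
        for rho_host in (0.3, 0.6, 0.9, 1.2):
            sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.1, 0.01],
                            args=(rho_host,), rtol=1e-8)
            grown = sol.y[2, -1] > detectable
            print(f"host growth rate {rho_host:.1f}: final tumor burden {sol.y[2, -1]:.3f}"
                  + (" (detectable)" if grown else ""))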

  8. Towards deterministically controlled InGaAs/GaAs lateral quantum dot molecules

    International Nuclear Information System (INIS)

    Wang, L; Rastelli, A; Kiravittaya, S; Atkinson, P; Schmidt, O G; Ding, F; Bufon, C C Bof; Hermannstaedter, C; Witzany, M; Beirne, G J; Michler, P

    2008-01-01

    We report on the fabrication, detailed characterization and modeling of lateral InGaAs quantum dot molecules (QDMs) embedded in a GaAs matrix and we discuss strategies to fully control their spatial configuration and electronic properties. The three-dimensional morphology of encapsulated QDMs was revealed by selective wet chemical etching of the GaAs top capping layer and subsequent imaging by atomic force microscopy (AFM). The AFM investigation showed that different overgrowth procedures have a profound consequence on the QDM height and shape. QDMs partially capped and annealed in situ for micro-photoluminescence spectroscopy consist of shallow but well-defined quantum dots (QDs) in contrast to misleading results usually provided by surface morphology measurements when they are buried by a thin GaAs layer. This uncapping approach is crucial for determining the QDM structural parameters, which are required for modeling the system. A single-band effective-mass approximation is employed to calculate the confined electron and heavy-hole energy levels, taking the geometry and structural information extracted from the uncapping experiments as inputs. The calculated transition energy of the single QDM shows good agreement with the experimentally observed values. By decreasing the edge-to-edge distance between the two QDs within a QDM, a splitting of the electron (hole) wavefunction into symmetric and antisymmetric states is observed, indicating the presence of lateral coupling. Site control of such lateral QDMs obtained by growth on a pre-patterned substrate, combined with a technology to fabricate gate structures at well-defined positions with respect to the QDMs, could lead to deterministically controlled devices based on QDMs

  9. Deterministic global optimization algorithm based on outer approximation for the parameter estimation of nonlinear dynamic biological systems.

    Science.gov (United States)

    Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano

    2012-05-10

    The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima in which standard optimization methods may become trapped during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.
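
    The defining feature of this kind of deterministic approach is a certified optimality gap obtained by iterating between a rigorous lower bound and an incumbent upper bound. The following Python sketch illustrates that bounding loop on a one-dimensional toy problem using an assumed Lipschitz constant; it is not the authors' outer-approximation/MILP decomposition, and the objective function, Lipschitz constant and tolerance are illustrative assumptions only.

        import heapq
        import math

        def lipschitz_global_min(f, a, b, L, tol=1e-3, max_iter=100000):
            # Deterministic branch-and-bound minimisation of f on [a, b].
            # L is an assumed Lipschitz constant of f; the returned gap is a
            # rigorous bound on the distance of the incumbent to the global minimum.
            mid = 0.5 * (a + b)
            best_x, best_f = mid, f(mid)
            heap = [(f(mid) - 0.5 * L * (b - a), a, b)]   # (lower bound, interval)
            for _ in range(max_iter):
                if not heap:
                    return best_x, best_f, tol            # everything pruned: gap <= tol
                lb, lo, hi = heapq.heappop(heap)
                if best_f - lb <= tol:                    # certified optimality gap reached
                    return best_x, best_f, best_f - lb
                for lo2, hi2 in ((lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)):
                    m2 = 0.5 * (lo2 + hi2)
                    fm2 = f(m2)
                    if fm2 < best_f:
                        best_x, best_f = m2, fm2          # improve incumbent (upper bound)
                    lb2 = fm2 - 0.5 * L * (hi2 - lo2)     # valid lower bound on [lo2, hi2]
                    if lb2 < best_f - tol:                # keep only boxes that may still help
                        heapq.heappush(heap, (lb2, lo2, hi2))
            return best_x, best_f, best_f - heap[0][0]

        # Toy multimodal objective (illustrative only).
        f = lambda x: math.sin(3.0 * x) + 0.1 * (x - 2.0) ** 2
        x_opt, f_opt, gap = lipschitz_global_min(f, 0.0, 10.0, L=5.0)
        print("x* ~ %.4f, f* ~ %.4f, certified gap <= %.2e" % (x_opt, f_opt, gap))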

  10. A Deterministic Electron, Photon, Proton and Heavy Ion Radiation Transport Suite for the Study of the Jovian System

    Science.gov (United States)

    Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William

    2011-01-01

    A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describe the transport of electrons, photons, protons, and heavy ions in condensed media is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the 105 days at Europa mission fluence energy spectra provided by JPL is used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (~0.7 g/sq cm). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI--J35 of the Galileo extended mission (1996-2001). For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute

  11. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    Science.gov (United States)

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  12. SU-F-T-347: An Absolute Dose-Volume Constraint Based Deterministic Optimization Framework for Multi-Co60 Source Focused Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Liang, B; Liu, B; Li, Y; Guo, B; Xu, X; Wei, R; Zhou, F [Beihang University, Beijing, Beijing (China); Wu, Q [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

    Purpose: Treatment plan optimization in multi-Co60 source focused radiotherapy with multiple isocenters is challenging, because the dose distribution is normalized to the maximum dose during optimization and evaluation. The objective functions are traditionally defined based on the relative dosimetric distribution. This study presents an alternative absolute dose-volume constraint (ADC) based deterministic optimization framework (ADC-DOF). Methods: The initial isocenters are placed on the eroded target surface. The collimator size is chosen based on the area of the 2D contour on the corresponding axial slice. The isocenter spacing is determined by the adjacent collimator sizes. The weights are optimized by minimizing the deviation from the ADCs using the steepest descent technique. An iterative procedure is developed to reduce the number of isocenters, where the isocenter with the lowest weight is removed without affecting plan quality. The ADC-DOF is compared with the genetic algorithm (GA) using the same arbitrarily shaped target (254 cc), with a 15 mm margin ring structure representing normal tissues. Results: For ADC-DOF, the ADCs imposed on the target and ring are (D100 > 10 Gy; D50 < 12 Gy; D10 < 15 Gy; D0 < 20 Gy) and (D40 < 10 Gy). The resulting target D100, D50, D10 and D0 are 9.9 Gy, 12.0 Gy, 14.1 Gy and 16.2 Gy, and the ring D40 is 10.2 Gy. The objectives of the GA are to maximize the 50% isodose target coverage (TC) while minimizing the dose delivered to the ring structure, which results in 97% TC and a 47.2% average dose in the ring structure. For the ADC-DOF (GA) techniques, 20 out of 38 (10 out of 12) initial isocenters are used in the final plan, and the computation time is 8.7 s (412.2 s) on an i5 computer. Conclusion: We have developed a new optimization technique using ADCs and deterministic optimization. Compared with the GA, ADC-DOF uses more isocenters but is faster and more robust, and achieves better conformity. For future work, we will focus on developing a more effective mechanism for initial isocenter determination.
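
    The core of the weight-optimization step described above is a steepest-descent search that drives absolute dose-volume violations to zero over nonnegative isocenter weights. The Python sketch below shows that general idea with a made-up dose-influence matrix and deliberately simplified per-voxel constraints (every target voxel above 10 Gy, every ring voxel below 10 Gy); it is an illustration of constraint-penalty steepest descent, not the authors' implementation or their dose model.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical dose-influence matrix: dose[i] = sum_j D[i, j] * w[j]
        # (rows are voxels of the target followed by the ring; columns are isocenters).
        n_target, n_ring, n_iso = 200, 100, 20
        D = rng.uniform(0.2, 1.0, (n_target + n_ring, n_iso))
        D[n_target:] *= 0.4                       # ring voxels receive less dose per unit weight
        target, ring = slice(0, n_target), slice(n_target, None)

        def penalty_and_grad(w):
            dose = D @ w
            under = np.minimum(dose[target] - 10.0, 0.0)   # target voxels below 10 Gy
            over = np.maximum(dose[ring] - 10.0, 0.0)      # ring voxels above 10 Gy
            pen = np.sum(under ** 2) + np.sum(over ** 2)
            grad = 2.0 * (D[target].T @ under) + 2.0 * (D[ring].T @ over)
            return pen, grad

        w = np.full(n_iso, 0.5)                   # initial isocenter weights
        for _ in range(2000):                     # projected steepest descent
            pen, grad = penalty_and_grad(w)
            w = np.maximum(w - 1e-4 * grad, 0.0)  # take a step and keep weights nonnegative
        print("final penalty %.3e, isocenters with weight > 1e-3: %d"
              % (pen, int(np.count_nonzero(w > 1e-3))))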

  13. Frequency domain fatigue damage estimation methods suitable for deterministic load spectra

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, A.R.; Patel, M.H. [University Coll., Dept. of Mechanical Engineering, London (United Kingdom)

    2000-07-01

    The evaluation of fatigue damage due to load spectra directly in the frequency domain is a complex task, but one that offers significant computation time savings. Various formulae have been suggested, but they usually relate to a specific application only. The Dirlik method is the exception and is applicable to general cases of continuous stochastic spectra. This paper describes three approaches for evaluating discrete deterministic load spectra generated by the floating wind turbine model developed in the UCL/RAL research project. (Author)
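
    As a point of reference for frequency-domain fatigue estimates of the kind compared in this paper, the classical narrow-band (Rayleigh) closed form is often used: the expected damage rate follows from the spectral moments of the stress PSD and the S-N curve parameters. The Python sketch below evaluates it for an assumed narrow-band PSD and assumed S-N constants; it is the narrow-band approximation, not the Dirlik formula and not the methods proposed in the paper.

        import numpy as np
        from math import gamma, sqrt

        # Illustrative one-sided stress PSD (MPa^2/Hz) on a frequency grid.
        f = np.linspace(0.01, 5.0, 2000)
        psd = 100.0 * np.exp(-0.5 * ((f - 1.0) / 0.1) ** 2)   # narrow peak near 1 Hz
        df = f[1] - f[0]

        m0 = np.sum(psd) * df                  # spectral moments (simple Riemann sums)
        m2 = np.sum(psd * f ** 2) * df

        nu0 = sqrt(m2 / m0)                    # zero up-crossing rate (Hz)
        m_sn, C_sn = 3.0, 1.0e12               # assumed S-N curve: N = C / S^m (stress range S)

        # Narrow-band (Rayleigh) expected damage per second:
        #   E[D] = nu0 / C * (2*sqrt(2*m0))^m * Gamma(1 + m/2)
        damage_rate = nu0 / C_sn * (2.0 * sqrt(2.0 * m0)) ** m_sn * gamma(1.0 + m_sn / 2.0)
        print("expected fatigue life ~ %.1f hours" % (1.0 / damage_rate / 3600.0))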

  14. Deterministic and stochastic control of chimera states in delayed feedback oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Semenov, V. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Zakharova, A.; Schöll, E. [Institut für Theoretische Physik, TU Berlin, Hardenbergstraße 36, 10623 Berlin (Germany); Maistrenko, Y. [Institute of Mathematics and Center for Medical and Biotechnical Research, NAS of Ukraine, Tereschenkivska Str. 3, 01601 Kyiv (Ukraine)

    2016-06-08

    Chimera states, characterized by the coexistence of regular and chaotic dynamics, are found in a nonlinear oscillator model with negative time-delayed feedback. The control of these chimera states by external periodic forcing is demonstrated by numerical simulations. Both deterministic and stochastic external periodic forcing are considered. It is shown that multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. The constructive role of noise in the formation of chimera states is shown.

  15. Structural Connectivity Asymmetry in the Neonatal Brain

    OpenAIRE

    Ratnarajah, Nagulan; Rifkin-Graboi, Anne; Fortier, Marielle V.; Chong, Yap Seng; Kwek, Kenneth; Saw, Seang-Mei; Godfrey, Keith M; Gluckman, Peter D.; Meaney, Michael J.; Qiu, Anqi

    2013-01-01

    Asymmetry of the neonatal brain is not yet understood at the level of structural connectivity. We utilized DTI deterministic tractography and structural network analysis based on graph theory to determine the pattern of structural connectivity asymmetry in 124 normal neonates. We traced white matter axonal pathways characterizing interregional connections among brain regions and inferred asymmetry in left and right anatomical network properties. Our findings revealed that in neonates, small-...

  16. Optimization Under Uncertainty for Wake Steering Strategies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Annoni, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fleming, Paul A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ning, Andrew [Brigham Young University

    2017-05-01

    Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for the performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
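
    The difference between a deterministic and an OUU yaw set-point can be reproduced qualitatively with a very small model. The Python sketch below uses a toy two-turbine wake-deflection power function and a Gaussian yaw-angle error, and compares the yaw angle that maximizes power assuming perfect control with the one that maximizes expected power under uncertainty. The power model, error magnitude and discretization are illustrative assumptions, not the engineering wake model used in the study.

        import numpy as np

        def farm_power(yaw, sigma_wake=0.3):
            # Toy two-turbine power model (illustrative only): yawing the upstream
            # turbine costs cos(yaw)**3 of its own power but deflects its wake and
            # recovers power at the downstream turbine.
            p_up = np.cos(yaw) ** 3
            deflection = 1.2 * np.sin(yaw)                 # crude wake deflection
            deficit = 0.6 * np.exp(-(deflection / sigma_wake) ** 2)
            p_down = 1.0 - deficit
            return p_up + p_down

        yaws = np.radians(np.linspace(0.0, 40.0, 401))

        # Deterministic optimisation: assume the applied yaw is exact.
        det_opt = yaws[np.argmax([farm_power(y) for y in yaws])]

        # OUU: maximise expected power over a Gaussian yaw-angle error (std = 5 deg).
        err = np.radians(np.linspace(-15.0, 15.0, 121))
        wts = np.exp(-0.5 * (err / np.radians(5.0)) ** 2)
        wts /= wts.sum()
        exp_power = [np.dot(wts, farm_power(y + err)) for y in yaws]
        ouu_opt = yaws[np.argmax(exp_power)]

        print("deterministic yaw set-point: %.1f deg" % np.degrees(det_opt))
        print("OUU yaw set-point:           %.1f deg" % np.degrees(ouu_opt))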

  17. A Study on Efficiency Improvement of the Hybrid Monte Carlo/Deterministic Method for Global Transport Problems

    International Nuclear Information System (INIS)

    Kim, Jong Woo; Woo, Myeong Hyeon; Kim, Jae Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung

    2017-01-01

    In this study, the hybrid Monte Carlo/deterministic method is explained for radiation transport analysis of a global system. The FW-CADIS methodology constructs the weight-window parameters and is useful for most global MC calculations. However, due to the assumption that a particle is scored at a tally, fewer particles are transported to the periphery of the mesh tallies. To compensate for this space dependency, we modified a module in the ADVANTG code to add the proposed method. We solved a simple test problem for comparison with results from the FW-CADIS methodology, and it was confirmed that a uniform statistical error was secured as intended. In the future, more practical problems will be added. It might be useful to perform radiation transport analysis using the hybrid Monte Carlo/deterministic method in global transport problems.

  18. Stochastic stability of mechanical systems under renewal jump process parametric excitation

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R.K.; Larsen, Jesper Winther

    2005-01-01

    A dynamic system under parametric excitation in the form of a non-Erlang renewal jump process is considered. The excitation is a random train of nonoverlapping rectangular pulses with equal, deterministic heights. The time intervals between two consecutive jumps up (or down), are the sum of two...

  19. Measures of thermodynamic irreversibility in deterministic and stochastic dynamics

    International Nuclear Information System (INIS)

    Ford, Ian J

    2015-01-01

    It is generally observed that if a dynamical system is sufficiently complex, then as time progresses it will share out energy and other properties amongst its component parts to eliminate any initial imbalances, retaining only fluctuations. This is known as energy dissipation and it is closely associated with the concept of thermodynamic irreversibility, measured by the increase in entropy according to the second law. It is of interest to quantify such behaviour from a dynamical rather than a thermodynamic perspective and to this end stochastic entropy production and the time-integrated dissipation function have been introduced as analogous measures of irreversibility, principally for stochastic and deterministic dynamics, respectively. We seek to compare these measures. First we modify the dissipation function to allow it to measure irreversibility in situations where the initial probability density function (pdf) of the system is asymmetric as well as symmetric in velocity. We propose that it tests for failure of what we call the obversibility of the system, to be contrasted with reversibility, the failure of which is assessed by stochastic entropy production. We note that the essential difference between stochastic entropy production and the time-integrated modified dissipation function lies in the sequence of procedures undertaken in the associated tests of irreversibility. We argue that an assumed symmetry of the initial pdf with respect to velocity inversion (within a framework of deterministic dynamics) can be incompatible with the Past Hypothesis, according to which there should be a statistical distinction between the behaviour of certain properties of an isolated system as it evolves into the far future and the remote past. Imposing symmetry on a velocity distribution is acceptable for many applications of statistical physics, but can introduce difficulties when discussing irreversible behaviour. (paper)

  20. Realization of deterministic quantum teleportation with solid state qubits

    International Nuclear Information System (INIS)

    Andreas Wallraff

    2014-01-01

    Using modern micro and nano-fabrication techniques combined with superconducting materials, we realize electronic circuits whose dynamics are governed by the laws of quantum mechanics. Making use of the strong interaction of photons with superconducting quantum two-level systems realized in these circuits, we investigate both fundamental quantum effects of light and applications in quantum information processing. In this talk I will discuss the deterministic teleportation of a quantum state in a macroscopic quantum system. Teleportation may be used for distributing entanglement between distant qubits in a quantum network and for realizing universal and fault-tolerant quantum computation. Previously, we have demonstrated the implementation of a teleportation protocol, up to the single-shot measurement step, with three superconducting qubits coupled to a single microwave resonator. Using full quantum state tomography and calculating the projection of the measured density matrix onto the basis of two qubits has allowed us to reconstruct the teleported state with an average output state fidelity of 86%. Now we have realized a new device in which four qubits are coupled pair-wise to three resonators. Making use of parametric amplifiers coupled to the output of two of the resonators, we are able to perform high-fidelity single-shot read-out. This has allowed us to demonstrate teleportation by individually post-selecting on any Bell state and by deterministically distinguishing between all four Bell states measured by the sender. In addition, we have recently implemented fast feed-forward to complete the teleportation process. In all instances, we demonstrate that the fidelity of the teleported states is above the threshold imposed by classical physics. The presented experiments are expected to contribute towards realizing quantum communication with microwave photons in the foreseeable future. (author)

  1. Deterministic Earthquake Hazard Assessment by Public Agencies in California

    Science.gov (United States)

    Mualchin, L.

    2005-12-01

    Even in its short recorded history, California has experienced a number of damaging earthquakes that have resulted in new codes and other legislation for public safety. In particular, the 1971 San Fernando earthquake produced some of the most lasting results such as the Hospital Safety Act, the Strong Motion Instrumentation Program, the Alquist-Priolo Special Studies Zone Act, and the California Department of Transportation (Caltrans') fault-based deterministic seismic hazard (DSH) map. The latter product provides values for earthquake ground motions based on Maximum Credible Earthquakes (MCEs), defined as the largest earthquakes that can reasonably be expected on faults in the current tectonic regime. For surface fault rupture displacement hazards, detailed studies of the same faults apply. Originally, hospitals, dams, and other critical facilities used seismic design criteria based on deterministic seismic hazard analyses (DSHA). However, probabilistic methods grew and took hold by introducing earthquake design criteria based on time factors and quantifying "uncertainties", by procedures such as logic trees. These probabilistic seismic hazard analyses (PSHA) ignored the DSH approach. Some agencies were influenced to adopt only the PSHA method. However, deficiencies in the PSHA method are becoming recognized, and the use of the method is now becoming a focus of strong debate. Caltrans is in the process of producing the fourth edition of its DSH map. The reason for preferring the DSH method is that Caltrans believes it is more realistic than the probabilistic method for assessing earthquake hazards that may affect critical facilities, and is the best available method for ensuring public safety. Its time-invariant values help to produce robust design criteria that are soundly based on physical evidence. And it is the method for which there is the least opportunity for unwelcome surprises.

  2. The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy

    Directory of Open Access Journals (Sweden)

    Daniel Chicharro

    2018-03-01

    Full Text Available Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we study systematically the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step to more explicitly address the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed.

  3. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and  branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  4. Mechanics from Newton's laws to deterministic chaos

    CERN Document Server

    Scheck, Florian

    2018-01-01

    This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present 6th edition is updated and revised with more explanations, additional examples and problems with solutions, together with new sections on applications in science.   Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics.   The book contains more than 150 problems ...

  5. Agglomerative concentric hypersphere clustering applied to structural damage detection

    Science.gov (United States)

    Silva, Moisés; Santos, Adam; Santos, Reginaldo; Figueiredo, Eloi; Sales, Claudomiro; Costa, João C. W. A.

    2017-08-01

    The present paper proposes a novel cluster-based method, named as agglomerative concentric hypersphere (ACH), to detect structural damage in engineering structures. Continuous structural monitoring systems often require unsupervised approaches to automatically infer the health condition of a structure. However, when a structure is under linear and nonlinear effects caused by environmental and operational variability, data normalization procedures are also required to overcome these effects. The proposed approach aims, through a straightforward clustering procedure, to discover automatically the optimal number of clusters, representing the main state conditions of a structural system. Three initialization procedures are introduced to evaluate the impact of deterministic and stochastic initializations on the performance of this approach. The ACH is compared to state-of-the-art approaches, based on Gaussian mixture models and Mahalanobis squared distance, on standard data sets from a post-tensioned bridge located in Switzerland: the Z-24 Bridge. The proposed approach demonstrates more efficiency in modeling the normal condition of the structure and its corresponding main clusters. Furthermore, it reveals a better classification performance than the alternative ones in terms of false-positive and false-negative indications of damage, demonstrating a promising applicability in real-world structural health monitoring scenarios.
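
    The damage-detection logic sketched in this abstract (learn the structure's normal-condition clusters without supervision, then flag observations that fall far from every learned cluster) can be illustrated compactly. The Python sketch below uses synthetic two-dimensional features, a plain k-means step standing in for the ACH clustering (it does not implement the concentric-hypersphere procedure or its automatic choice of cluster number), and a percentile threshold on a distance-to-nearest-centre damage indicator.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic damage-sensitive features: two "normal" operating regimes
        # (e.g. different ambient conditions) plus a shifted damaged condition.
        baseline = np.vstack([rng.normal([0.0, 0.0], 0.3, (200, 2)),
                              rng.normal([2.0, 1.0], 0.3, (200, 2))])
        damaged = rng.normal([4.0, -1.0], 0.3, (50, 2))

        def kmeans(x, k, n_iter=100):
            # Plain k-means; stands in for the clustering of the normal condition.
            centres = x[rng.choice(len(x), k, replace=False)]
            for _ in range(n_iter):
                labels = np.argmin(np.linalg.norm(x[:, None] - centres, axis=2), axis=1)
                centres = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centres[j] for j in range(k)])
            return centres

        def damage_indicator(x, centres):
            # Distance to the nearest baseline cluster centre.
            return np.min(np.linalg.norm(x[:, None] - centres, axis=2), axis=1)

        centres = kmeans(baseline, k=2)
        threshold = np.percentile(damage_indicator(baseline, centres), 99)
        flagged = damage_indicator(damaged, centres) > threshold
        print("threshold %.2f, damaged observations flagged: %.0f%%"
              % (threshold, 100.0 * flagged.mean()))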

  6. Deterministic Evolutionary Trajectories Influence Primary Tumor Growth: TRACERx Renal

    DEFF Research Database (Denmark)

    Turajlic, Samra; Xu, Hang; Litchfield, Kevin

    2018-01-01

    The evolutionary features of clear-cell renal cell carcinoma (ccRCC) have not been systematically studied to date. We analyzed 1,206 primary tumor regions from 101 patients recruited into the multi-center prospective study, TRACERx Renal. We observe up to 30 driver events per tumor and show...... that subclonal diversification is associated with known prognostic parameters. By resolving the patterns of driver event ordering, co-occurrence, and mutual exclusivity at clone level, we show the deterministic nature of clonal evolution. ccRCC can be grouped into seven evolutionary subtypes, ranging from tumors...... outcome. Our insights reconcile the variable clinical behavior of ccRCC and suggest evolutionary potential as a biomarker for both intervention and surveillance....

  7. Fatigue in Steel Structures under Random Loading

    DEFF Research Database (Denmark)

    Agerskov, Henning

    1999-01-01

    Fatigue damage accumulation in steel structures under random loading is studied. The fatigue life of welded joints has been determined both experimentally and from a fracture mechanics analysis. In the experimental part of the investigation, fatigue test series have been carried through on various types of welded plate test specimens and full-scale offshore tubular joints. The materials that have been used are either conventional structural steel with a yield stress of ~ 360-410 MPa or high-strength steel with a yield stress of ~ 810-1010 MPa. The fatigue tests and the fracture mechanics analyses have been carried out using load histories, which are realistic in relation to the types of structures studied, i.e. primarily bridges, offshore structures and chimneys. In general, the test series carried through show a significant difference between constant amplitude and variable amplitude fatigue

  8. Analysis of flexible structures under lateral impact

    International Nuclear Information System (INIS)

    Ramirez, D. F.; Razavi, H.

    2012-01-01

    Three methods for analysis of flexible structures under lateral impact are presented. The first proposed method (Method A) consists of: (1) modifying an available deceleration on a rigid target with conservation principles to account for structural flexibility; and (2) transient nonlinear analysis of the structure with the corrected forcing function. The second proposed method (Method B) is similar to Method A in obtaining the forcing function but it solves the equations of motion of an idealized two-degree-of-freedom system instead of directly using conservation principles. The last method simply provides the maximum force in the structure using the conservation of energy and linear momentum. A coupled simulation is also performed in LS-DYNA and compared against the proposed methods. A case study is presented to illustrate the applicability of all three methods and the LS-DYNA simulation. (authors)

  9. Pattern recognition methodologies and deterministic evaluation of seismic hazard: A strategy to increase earthquake preparedness

    International Nuclear Information System (INIS)

    Peresan, Antonella; Panza, Giuliano F.; Gorshkov, Alexander I.; Aoudia, Abdelkrim

    2001-05-01

    Several algorithms, structured according to a general pattern-recognition scheme, have been developed for the space-time identification of strong events. Currently, two such algorithms are applied to the Italian territory, one for the recognition of earthquake-prone areas and the other, namely the CN algorithm, for earthquake prediction purposes. These procedures can be viewed as independent experts, hence they can be combined to better constrain the alerted seismogenic area. We examine here the possibility of integrating CN intermediate-term medium-range earthquake predictions, pattern recognition of earthquake-prone areas and deterministic hazard maps, in order to associate CN Times of Increased Probability (TIPs) with a set of appropriate scenarios of ground motion. The advantage of this procedure mainly consists in the time information provided by the predictions, which is useful for increasing the preparedness of safety measures and for indicating priorities for detailed seismic risk studies to be performed at a local scale. (author)

  10. RDS; A systematic approach towards system thermal hydraulics input code development for a comprehensive deterministic safety analysis

    International Nuclear Information System (INIS)

    Mohd Faiz Salim; Ridha Roslan; Mohd Rizal Mamat

    2013-01-01

    Full-text: Deterministic Safety Analysis (DSA) is one of the mandatory requirements in the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with the relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied to the design and operation of facilities or activities. Computer codes are normally used to assist in performing all the analyses required under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a prerequisite reference document for developing the input nodalization was proposed. This paper describes the application of the RDS with the purpose of assessing its effectiveness. Two RDS documents were developed, for the LOBI-MOD2 Integral Test Facility and the associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in one single document. This is beneficial as it enables the preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes the resolution of contradictions, and finally assists in developing the thermal hydraulic input regardless of the code selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information and the limitation of resources. Some possible improvements are suggested to overcome these challenges. (author)

  11. Deterministic Construction of Plasmonic Heterostructures in Well-Organized Arrays for Nanophotonic Materials.

    Science.gov (United States)

    Liu, Xiaoying; Biswas, Sushmita; Jarrett, Jeremy W; Poutrina, Ekaterina; Urbas, Augustine; Knappenberger, Kenneth L; Vaia, Richard A; Nealey, Paul F

    2015-12-02

    Plasmonic heterostructures are deterministically constructed in organized arrays through chemical pattern directed assembly, a combination of top-down lithography and bottom-up assembly, and by the sequential immobilization of gold nanoparticles of three different sizes onto chemically patterned surfaces using tailored interaction potentials. These spatially addressable plasmonic chain nanostructures demonstrate localization of linear and nonlinear optical fields as well as nonlinear circular dichroism. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Nonlinear Boltzmann equation for the homogeneous isotropic case: Minimal deterministic Matlab program

    Science.gov (United States)

    Asinari, Pietro

    2010-10-01

    The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards the equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by the DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both these corrections make it possible to derive very accurate reference solutions for this test case. Moreover, this work aims to distribute an open-source program (called HOMISBOLTZ), which can be redistributed and/or modified for dealing with different applications, under the terms of the GNU General Public License. The program has been purposely designed in order to be minimal, not only with regards to the reduced number of lines (less than 1000), but also with regards to the coding style (as simple as possible). Program summary: Program title: HOMISBOLTZ. Catalogue identifier: AEGN_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGN_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License. No. of lines in distributed program, including test data, etc.: 23 340. No. of bytes in distributed program, including test data, etc.: 7 635 236. Distribution format: tar

  13. Theory and application of deterministic multidimensional pointwise energy lattice physics methods

    International Nuclear Information System (INIS)

    Zerkle, M.L.

    1999-01-01

    The theory and application of deterministic, multidimensional, pointwise energy lattice physics methods are discussed. These methods may be used to solve the neutron transport equation in multidimensional geometries using near-continuous energy detail to calculate equivalent few-group diffusion theory constants that rigorously account for spatial and spectral self-shielding effects. A dual energy resolution slowing down algorithm is described which reduces the computer memory and disk storage requirements for the slowing down calculation. Results are presented for a 2D BWR pin cell depletion benchmark problem

  14. Deterministic and Storable Single-Photon Source Based on a Quantum Memory

    International Nuclear Information System (INIS)

    Chen Shuai; Chen, Y.-A.; Strassel, Thorsten; Zhao Bo; Yuan Zhensheng; Pan Jianwei; Schmiedmayer, Joerg

    2006-01-01

    A single-photon source is realized with a cold atomic ensemble (87Rb atoms). A single excitation, written in an atomic quantum memory by Raman scattering of a laser pulse, is retrieved deterministically as a single photon at a predetermined time. It is shown that the production rate of single photons can be enhanced considerably by a feedback circuit while the single-photon quality is conserved. Such a single-photon source is well suited for future large-scale realization of quantum communication and linear optical quantum computation.

  15. Structural Integrity Evaluation of Containment Vessel under Severe Accident for PGSFR

    International Nuclear Information System (INIS)

    Lee, Seong-Hyeon; Koo, Gyeong-Hoi; Kim, Sung-Kyun

    2016-01-01

    This paper provides structural integrity evaluation results for the containment vessel (CV) of the PGSFR (Prototype Gen-IV Sodium Fast Reactor) under a severe accident, obtained through transient analysis. The evaluation was carried out according to the ASME B&PV Code Sec. III, Subsection NH rules. The structural integrity of the CV was evaluated through a transient analysis of the structure for the severe accident case. The stress evaluation results for the selected evaluation sections satisfy the design criteria of the ASME B&PV Code Sec. III, Subsection NH. The transient load condition of normal operation will be considered in future work. The purpose of the RVCS is to maintain the integrity of the concrete structure during normal power operation. The RVCS should therefore be designed to keep the temperature of the concrete surface under the design limit and to minimize heat loss through the CV. In case of a severe accident, the integrity of the reactor structure and the concrete structure should also be maintained, so the RVCS should be designed to satisfy the ASME Level D service limits. When the RVCS operates after a breakdown of the DHRS following a severe accident, the change of the inner and outer CV surface temperatures over time can affect the structural integrity of the CV. To verify the structural integrity, it is necessary to perform a transient analysis of the CV structure under this time-varying temperature.

  16. SU-G-TeP1-15: Toward a Novel GPU Accelerated Deterministic Solution to the Linear Boltzmann Transport Equation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, R [University of Alberta, Edmonton, AB (Canada); Fallone, B [University of Alberta, Edmonton, AB (Canada); Cross Cancer Institute, Edmonton, AB (Canada); MagnetTx Oncology Solutions, Edmonton, AB (Canada); St Aubin, J [University of Alberta, Edmonton, AB (Canada); Cross Cancer Institute, Edmonton, AB (Canada)

    2016-06-15

    Purpose: To develop a Graphics Processing Unit (GPU) accelerated deterministic solution to the Linear Boltzmann Transport Equation (LBTE) for accurate dose calculations in radiotherapy (RT). A deterministic solution yields the potential for major speed improvements due to the sparse matrix-vector and vector-vector multiplications and would thus be of benefit to RT. Methods: In order to leverage the massively parallel architecture of GPUs, the first order LBTE was reformulated as a second order self-adjoint equation using the Least Squares Finite Element Method (LSFEM). This produces a symmetric positive-definite matrix which is efficiently solved using a parallelized conjugate gradient (CG) solver. The LSFEM formalism is applied in space, discrete ordinates is applied in angle, and the Multigroup method is applied in energy. The final linear system of equations produced is tightly coupled in space and angle. Our code written in CUDA-C was benchmarked on an Nvidia GeForce TITAN-X GPU against an Intel i7-6700K CPU. A spatial mesh of 30,950 tetrahedral elements was used with an S4 angular approximation. Results: To avoid repeating a full computationally intensive finite element matrix assembly at each Multigroup energy, a novel mapping algorithm was developed which minimized the operations required at each energy. Additionally, a parallelized memory mapping for the Kronecker product between the sparse spatial and angular matrices, including Dirichlet boundary conditions, was created. Atomicity is preserved by graph-coloring overlapping nodes into separate kernel launches. The one-time mapping calculations for matrix assembly, Kronecker product, and boundary condition application took 452±1ms on GPU. Matrix assembly for 16 energy groups took 556±3s on CPU, and 358±2ms on GPU using the mappings developed. The CG solver took 93±1s on CPU, and 468±2ms on GPU. Conclusion: Three computationally intensive subroutines in deterministically solving the LBTE have been
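
    The key numerical ingredient named in this abstract is a conjugate gradient (CG) iteration on the symmetric positive-definite system produced by the least-squares finite element formulation. The Python/SciPy sketch below shows an unpreconditioned CG loop on a stand-in sparse SPD matrix (a 1-D Laplacian) rather than the actual LSFEM system or a CUDA implementation; the matrix, size and tolerance are illustrative, but the sparse matrix-vector product and vector updates are exactly the operations the GPU version accelerates.

        import numpy as np
        import scipy.sparse as sp

        def conjugate_gradient(A, b, tol=1e-8, max_iter=5000):
            # Unpreconditioned CG for a symmetric positive-definite sparse A.
            # Each iteration is one sparse matrix-vector product plus a few vector ops.
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol * np.linalg.norm(b):
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # Illustrative SPD system (1-D Laplacian), standing in for the LSFEM matrix.
        n = 500
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)
        x = conjugate_gradient(A, b)
        print("residual norm:", np.linalg.norm(b - A @ x))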

  17. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Hong, X; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China); Paganetti, H [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving the BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on a structured grid that is maximally parallelizable, with discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, photoelectric effect and pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes the finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaking scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared with the numerical integration based on the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang

  18. Universal and Deterministic Manipulation of the Quantum State of Harmonic Oscillators: A Route to Unitary Gates for Fock State Qubits

    International Nuclear Information System (INIS)

    Santos, Marcelo Franca

    2005-01-01

    We present a simple quantum circuit that allows for the universal and deterministic manipulation of the quantum state of confined harmonic oscillators. The scheme is based on the selective interactions of the referred oscillator with an auxiliary three-level system and a classical external driving source, and enables any unitary operations on Fock states, two by two. One circuit is equivalent to a single qubit unitary logical gate on Fock states qubits. Sequences of similar protocols allow for complete, deterministic, and state-independent manipulation of the harmonic oscillator quantum state

  19. A Deterministic Approach to Earthquake Prediction

    Directory of Open Access Journals (Sweden)

    Vittorio Sgrigna

    2012-01-01

    Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at seeing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship (with explained physical processes) and the search for a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a new strong theoretical scientific effort is necessary to try to understand the physics of the earthquake.

  20. Statistics of energy levels and zero temperature dynamics for deterministic spin models with glassy behaviour

    NARCIS (Netherlands)

    Degli Esposti, M.; Giardinà, C.; Graffi, S.; Isola, S.

    2001-01-01

    We consider the zero-temperature dynamics for the infinite-range, non translation invariant one-dimensional spin model introduced by Marinari, Parisi and Ritort to generate glassy behaviour out of a deterministic interaction. It is argued that there can be a large number of metastable (i.e.,

  1. Deterministic sensitivity analysis of two-phase flow systems: forward and adjoint methods. Final report

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1984-07-01

    This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single phase and two phase occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations
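
    The practical payoff of the adjoint formulation described above is that the sensitivities of a single response to many parameters can be obtained from one adjoint solve instead of one forward solve per parameter. The Python sketch below demonstrates this equivalence for a generic steady linear model R = c^T x with A(p) x = b; it is a plain linear-algebra illustration of the forward/adjoint bookkeeping, not the two-phase flow formalism of the report, and the matrices and parameter dependence are made up.

        import numpy as np

        rng = np.random.default_rng(2)

        # Steady linear model A(p) x = b, response R = c^T x.
        n, n_param = 50, 10
        A0 = np.eye(n) * 4.0 + rng.normal(0.0, 0.1, (n, n))
        dA = rng.normal(0.0, 0.05, (n_param, n, n))    # dA/dp_k (assumed known)
        b = rng.normal(0.0, 1.0, n)
        c = rng.normal(0.0, 1.0, n)

        p0 = np.zeros(n_param)
        A = A0 + np.einsum("k,kij->ij", p0, dA)        # assemble A(p) at the nominal point
        x = np.linalg.solve(A, b)

        # Forward sensitivities: one extra linear solve per parameter.
        dR_forward = np.array([c @ np.linalg.solve(A, -dA[k] @ x) for k in range(n_param)])

        # Adjoint sensitivities: a single solve with A^T, then cheap dot products.
        lam = np.linalg.solve(A.T, c)
        dR_adjoint = np.array([-lam @ (dA[k] @ x) for k in range(n_param)])

        print("max difference between forward and adjoint sensitivities:",
              np.max(np.abs(dR_forward - dR_adjoint)))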

  2. Development and application of a new deterministic method for calculating computer model result uncertainties

    International Nuclear Information System (INIS)

    Maerker, R.E.; Worley, B.A.

    1989-01-01

    Interest in research into the field of uncertainty analysis has recently been stimulated as a result of a need in high-level waste repository design assessment for uncertainty information in the form of response complementary cumulative distribution functions (CCDFs) to show compliance with regulatory requirements. The solution to this problem must obviously rely on the analysis of computer code models, which, however, employ parameters that can have large uncertainties. The motivation for the research presented in this paper is a search for a method involving a deterministic uncertainty analysis approach that could serve as an improvement over those methods that make exclusive use of statistical techniques. A deterministic uncertainty analysis (DUA) approach based on the use of first derivative information is the method studied in the present procedure. The present method has been applied to a high-level nuclear waste repository problem involving use of the codes ORIGEN2, SAS, and BRINETEMP in series, and the resulting CDF of a BRINETEMP result of interest is compared with that obtained through a completely statistical analysis
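
    A deterministic uncertainty analysis of the kind described above propagates parameter uncertainties through first derivatives of the code response rather than by random sampling. The Python sketch below shows the first-order ("sandwich") propagation for a toy response, with central finite-difference derivatives and assumed uncorrelated parameter standard deviations; the model and numbers merely stand in for codes such as ORIGEN2, SAS or BRINETEMP and are not taken from the report.

        import numpy as np

        def model(p):
            # Toy response standing in for a code result: R = p0 * exp(-p1) + p2**2.
            return p[0] * np.exp(-p[1]) + p[2] ** 2

        p_nom = np.array([2.0, 0.5, 1.5])     # nominal parameter values (illustrative)
        sigma = np.array([0.2, 0.05, 0.1])    # parameter standard deviations (illustrative)

        # First derivatives by central finite differences (two runs per parameter).
        grad = np.zeros_like(p_nom)
        for i, h in enumerate(1e-5 * np.maximum(np.abs(p_nom), 1.0)):
            dp = np.zeros_like(p_nom)
            dp[i] = h
            grad[i] = (model(p_nom + dp) - model(p_nom - dp)) / (2.0 * h)

        # First-order propagation, assuming uncorrelated parameters.
        var_R = np.sum((grad * sigma) ** 2)
        print("R = %.4f +/- %.4f (1 sigma, first order)" % (model(p_nom), np.sqrt(var_R)))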

  3. Procedures with deterministic risk in interventionist radiology; Procedimientos con riesgo deterministico en radiologia intervencionista

    Energy Technology Data Exchange (ETDEWEB)

    Tort Ausina, Isabel; Ruiz-Cruces, Rafael; Perez Martinez, Manuel; Carrera Magarino, Francisco; Diez de los Rios, Antonio [Universidad de Malaga (Spain). Facultad de Medicina. Grupo de Investigacion en Proteccion Radiologica]. E-mail: rrcmf@uma.es

    2001-07-01

    Deterministic effects in interventional radiology arise from the irradiation of the skin. The objective of this work is the estimation of the deterministic risk to the organs exposed in IR procedures. Four procedures were selected: coated stent in the abdominal aorta; porto-caval shunt; embolization of varicocele; and mesenteric arteriography with venous return. These presented the maximum values of dose-area product (DAP), and the organ doses were estimated by means of computer programs (Eff-Dose, PCXMC and OSD). The DAP was measured with a flat ionization chamber (PTW Diamentor M2). Although only a few cases are available so far, high dose values are found in the kidney and testes, which implies that recommendations must be applied to avoid high exposures: motivating the personnel to change the irradiation fields, to use collimation and to use low dose rates in X-ray fluoroscopy. Dispersion between the dose values given by the different programs is observed, which raises the question of which of them should be used, although it seems that Eff-Dose could be recommended, based on Report 262 of the NRPB.

  4. Dynamic epidemiological models for dengue transmission: a systematic review of structural approaches.

    Directory of Open Access Journals (Sweden)

    Mathieu Andraud

    Full Text Available Dengue is a vector-borne disease recognized as the major arbovirosis, with four immunologically distant dengue serotypes coexisting in many endemic areas. Several mathematical models have been developed to understand the transmission dynamics of dengue, including the role of cross-reactive antibodies for the four different dengue serotypes. We aimed to review deterministic models of dengue transmission, in order to summarize the evolution of insights for, and provided by, such models, and to identify important characteristics for future model development. We identified relevant publications using PubMed and ISI Web of Knowledge, focusing on mathematical deterministic models of dengue transmission. Model assumptions were systematically extracted from each reviewed model structure, and were linked with their underlying epidemiological concepts. After defining common terms in vector-borne disease modelling, we generally categorised forty-two published models of interest into single serotype and multiserotype models. The multi-serotype models assumed either vector-host or direct host-to-host transmission (ignoring the vector component). For each approach, we discussed the underlying structural and parameter assumptions, threshold behaviour and the projected impact of interventions. In view of the expected availability of dengue vaccines, modelling approaches will increasingly focus on the effectiveness and cost-effectiveness of vaccination options. For this purpose, the level of representation of the vector and host populations seems pivotal. Since vector-host transmission models would be required for projections of combined vaccination and vector control interventions, we advocate their use as most relevant to advise health policy in the future. The limited understanding of the factors which influence dengue transmission as well as limited data availability remain important concerns when applying dengue models to real-world decision problems.
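
    For readers unfamiliar with the vector-host structure discussed in the review, a minimal single-serotype deterministic model is sketched below: susceptible/infectious/recovered humans coupled to susceptible/infectious mosquitoes through cross-transmission terms. The parameter values, the constant-population assumptions and the absence of serotype interaction are illustrative simplifications of the model class, not a model endorsed by the review.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Minimal single-serotype vector-host model (illustrative parameters only).
        beta_hv, beta_vh = 0.30, 0.25     # transmission rates vector->host, host->vector
        gamma, mu_v = 1.0 / 7.0, 1.0 / 14.0   # human recovery rate, mosquito mortality rate
        m = 2.0                           # mosquitoes per human

        def rhs(t, y):
            sh, ih, rh, sv, iv = y        # fractions of each population
            new_h = beta_hv * m * iv * sh # new human infections
            new_v = beta_vh * ih * sv     # new mosquito infections
            return [-new_h,
                    new_h - gamma * ih,
                    gamma * ih,
                    mu_v - new_v - mu_v * sv,   # constant vector recruitment balances deaths
                    new_v - mu_v * iv]

        y0 = [0.999, 0.001, 0.0, 1.0, 0.0]
        sol = solve_ivp(rhs, (0.0, 365.0), y0, dense_output=True, max_step=1.0)
        t = np.linspace(0.0, 365.0, 366)
        ih = sol.sol(t)[1]
        print("epidemic peak: %.3f of hosts infectious on day %d" % (ih.max(), t[ih.argmax()]))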

  5. Experimental demonstration of deterministic one-way quantum computing on a NMR quantum computer

    OpenAIRE

    Ju, Chenyong; Zhu, Jing; Peng, Xinhua; Chong, Bo; Zhou, Xianyi; Du, Jiangfeng

    2008-01-01

    One-way quantum computing is an important and novel approach to quantum computation. By exploiting the existing particle-particle interactions, we report the first experimental realization of the complete process of the deterministic one-way quantum Deutsch-Jozsa algorithm in NMR, including graph state preparation, single-qubit measurements and feed-forward corrections. The findings in our experiment may shed light on future scalable one-way quantum computation.

  6. Location deterministic biosensing from quantum-dot-nanowire assemblies

    International Nuclear Information System (INIS)

    Liu, Chao; Kim, Kwanoh; Fan, D. L.

    2014-01-01

    Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled Gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoretic (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.

  7. Estimation of structural reliability under combined loads

    International Nuclear Information System (INIS)

    Shinozuka, M.; Kako, T.; Hwang, H.; Brown, P.; Reich, M.

    1983-01-01

    For the overall safety evaluation of seismic category I structures subjected to various load combinations, a quantitative measure of the structural reliability in terms of a limit state probability can be conveniently used. For this purpose, the reliability analysis method for dynamic loads, which has recently been developed by the authors, was combined with the existing standard reliability analysis procedure for static and quasi-static loads. The significant parameters that enter into the analysis are: the rate at which each load (dead load, accidental internal pressure, earthquake, etc.) will occur, its duration and intensity. All these parameters are basically random variables for most of the loads to be considered. For dynamic loads, the overall intensity is usually characterized not only by their dynamic components but also by their static components. The structure considered in the present paper is a reinforced concrete containment structure subjected to various static and dynamic loads such as dead loads, accidental pressure, earthquake acceleration, etc. Computations are performed to evaluate the limit state probabilities under each load combination separately and also under all possible combinations of such loads. Indeed, depending on the limit state condition to be specified, these limit state probabilities can indicate which particular load combination provides the dominant contribution to the overall limit state probability. On the other hand, some of the load combinations contribute very little to the overall limit state probability. These observations provide insight into the complex problem of which load combinations must be considered for design, for which limit states and at what level of limit state probabilities. (orig.)
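
    The role played by occurrence rate, duration and intensity in such a load-combination analysis can be seen in a deliberately simplified point-in-time calculation. The Python sketch below treats resistance and loads as independent normal variables, evaluates the conditional limit-state probability for "dead load only" and "dead load plus accidental pressure", and weights them by the expected fraction of time the rare load is present; all distributions and rates are made-up illustrations, not the containment analysis described in the abstract.

        import math

        def norm_cdf(x):
            return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

        # Illustrative capacity and load models (units arbitrary): resistance R,
        # dead load D always present, accidental pressure P present only during
        # rare events of a given occurrence rate and duration.
        mu_R, sig_R = 10.0, 1.0
        mu_D, sig_D = 4.0, 0.4
        mu_P, sig_P = 5.0, 1.0
        rate_P, dur_P = 0.1, 5.0 / 8760.0      # events per year, years per event

        def p_failure(mu_load, var_load):
            # P(R - load < 0) for independent normal R and load.
            mu_g = mu_R - mu_load
            sig_g = math.sqrt(sig_R ** 2 + var_load)
            return norm_cdf(-mu_g / sig_g)

        # Conditional limit-state probabilities for the two load combinations.
        p_D = p_failure(mu_D, sig_D ** 2)                       # dead load alone
        p_DP = p_failure(mu_D + mu_P, sig_D ** 2 + sig_P ** 2)  # dead load + pressure

        # Weight by the expected fraction of time each combination governs.
        frac_P = min(rate_P * dur_P, 1.0)
        p_total = (1.0 - frac_P) * p_D + frac_P * p_DP
        print("P_f(D) = %.2e, P_f(D+P) = %.2e, point-in-time P_f = %.2e"
              % (p_D, p_DP, p_total))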

  8. Foundation plate on the elastic half-space, deterministic and probabilistic approach

    Directory of Open Access Journals (Sweden)

    Tvrdá Katarína

    2017-01-01

    Full Text Available The interaction between a foundation plate and the subgrade can be described by different mathematical-physical models. An elastic foundation can be modelled by different types of models, e.g. a one-parameter model, a two-parameter model or a comprehensive model; here the Boussinesq model (elastic half-space) has been used. The article deals with the deterministic and probabilistic analysis of the deflection of a foundation plate on the elastic half-space. Contact between the foundation plate and the subsoil was modelled using node-to-node contact elements. At the end, the obtained results are presented.

  9. Nodal deterministic simulation for problems of neutron shielding in multigroup formulation

    International Nuclear Information System (INIS)

    Baptista, Josue Costa; Heringer, Juan Diego dos Santos; Santos, Luiz Fernando Trindade; Alves Filho, Hermes

    2013-01-01

    In this paper, we propose the use of computational tools implementing the SGF (Spectral Green's Function) numerical method, making use of a deterministic model of neutral particle transport to study and analyse a simplified nuclear engineering problem, known in the literature as the neutron shielding problem, considering a model with two energy groups. These simulations are performed on the MatLab platform, version 7.0, and are presented and developed with the help of a computer simulator that provides a user-friendly application for these purposes.

  10. The state-of-the-art and problems of fuel element structural analysis

    International Nuclear Information System (INIS)

    Lassmann, K.

    1980-02-01

    This study of fuel element structural analysis is arranged in two parts: In the first, self-contained, part the general basic principles of deterministic computer programs for structural analysis of fuel elements are reviewed critically and an approach is shown which can be used to expand the system with respect to statistical investigations. The second part contains technical details summarized in 11 publications, all of which appeared in periodicals with reviewer teams. The major aspects of this study are the following: contributions to the 'philosophy' of fuel element structural analysis; critical analysis of the basic structure of computer programs; critical analysis of the mechanical concept of integral fuel rod computer programs; establishment of a comprehensive computer program system (URANUS); expansion from purely deterministic information by statistical analyses; methodological and computer program developments for the analysis of fast accidents. (orig.)

  11. Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Juan-Mariano de Goyeneche

    2009-05-01

    Full Text Available Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios.

  12. Integrating Dynamic Pricing and Replenishment Decisions Under Supply Capacity Uncertainty

    OpenAIRE

    Qi Feng

    2010-01-01

    This paper examines an integrated decision-making process regarding pricing for uncertain demand and sourcing from uncertain supply, which are often studied separately in the literature. Our analysis of the integrated system suggests that the base stock list price policy fails to achieve optimality even under deterministic demand. Instead, the optimal policy is characterized by two critical values: a reorder point and a target safety stock. Under this policy, a positive order is issued if and...

  13. Application of the neo-deterministic seismic microzonation procedure in Bulgaria and validation of the seismic input against Eurocode 8

    International Nuclear Information System (INIS)

    Paskaleva, I.; Kouteva, M.; Vaccari, F.; Panza, G.F.

    2008-03-01

    The earthquake record and the Code for design and construction in seismic regions in Bulgaria have shown that the territory of the Republic of Bulgaria is exposed to a high seismic risk due to local shallow and regional strong intermediate-depth seismic sources. The available strong motion database is quite limited, and therefore not representative at all of the real hazard. The application of the neo-deterministic seismic hazard assessment procedure for two main Bulgarian cities has been able to supply a significant database of synthetic strong motions for the target sites, applicable for earthquake engineering purposes. The main advantage of the applied deterministic procedure is the possibility of taking into consideration, simultaneously and correctly, the contributions to the earthquake ground motion at the target sites of both the seismic source and the seismic wave propagation in the crossed media. In this study we discuss the results of some recent applications of the neo-deterministic seismic microzonation procedure to the cities of Sofia and Russe. The validation of the theoretically modeled seismic input against Eurocode 8 and the few available records at these sites is discussed. (author)

  14. Hazardous waste transportation risk assessment: Benefits of a combined deterministic and probabilistic Monte Carlo approach in expressing risk uncertainty

    International Nuclear Information System (INIS)

    Policastro, A.J.; Lazaro, M.A.; Cowen, M.A.; Hartmann, H.M.; Dunn, W.E.; Brown, D.F.

    1995-01-01

    This paper presents a combined deterministic and probabilistic methodology for modeling hazardous waste transportation risk and expressing the uncertainty in that risk. Both the deterministic and probabilistic methodologies are aimed at providing tools useful in the evaluation of alternative management scenarios for US Department of Energy (DOE) hazardous waste treatment, storage, and disposal (TSD). The probabilistic methodology can be used to provide perspective on and quantify uncertainties in deterministic predictions. The methodology developed has been applied to 63 DOE shipments made in fiscal year 1992, which contained poison-by-inhalation chemicals that represent an inhalation risk to the public. Models have been applied to simulate shipment routes, truck accident rates, chemical spill probabilities, spill/release rates, dispersion, population exposure, and health consequences. The simulation presented in this paper is specific to trucks traveling from DOE sites to their commercial TSD facilities, but the methodology is more general. Health consequences are presented as the number of people with potentially life-threatening health effects. Probability distributions were developed (based on actual item data) for accident release amounts, time of day and season of the accident, and meteorological conditions.
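
    A minimal Python sketch of the general idea (not the ANL models; all distributions and the consequence chain below are invented for illustration): a deterministic consequence calculation is wrapped in a Monte Carlo loop so that the risk estimate carries a distribution rather than a single value.

```python
# Hedged sketch: wrap a deterministic consequence chain in a Monte Carlo loop
# to express uncertainty in transport risk. All parameter distributions and the
# consequence function are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

shipments_per_year = 63
accident_rate_per_km = 1e-6                               # assumed truck accident rate
route_km = rng.uniform(200, 1500, n_trials)               # sampled route length
p_release_given_accident = rng.beta(2, 20, n_trials)      # sampled release probability
release_fraction = rng.lognormal(mean=-3.0, sigma=1.0, size=n_trials)
people_affected_per_release = rng.lognormal(mean=2.0, sigma=1.5, size=n_trials)

# Deterministic chain evaluated on each sampled parameter set
annual_consequence = (shipments_per_year * accident_rate_per_km * route_km
                      * p_release_given_accident
                      * np.minimum(release_fraction, 1.0)
                      * people_affected_per_release)

print("mean annual consequence :", annual_consequence.mean())
print("95th percentile         :", np.percentile(annual_consequence, 95))
```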

  15. Solution Methods for Structures with Random Properties Subject to Random Excitation

    DEFF Research Database (Denmark)

    Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.

    This paper deals with the lower order statistical moments of the response of structures with random stiffness and random damping properties subject to random excitation. The arising stochastic differential equations (SDE) with random coefficients are solved by two methods, a second order...... the SDE with random coefficients with deterministic initial conditions to an equivalent nonlinear SDE with deterministic coefficient and random initial conditions. In both methods, the statistical moment equations are used. Hierarchy of statistical moments in the markovian approach is closed...... by the cumulant neglect closure method applied at the fourth order level....

  16. Deterministic quantum controlled-PHASE gates based on non-Markovian environments

    Science.gov (United States)

    Zhang, Rui; Chen, Tian; Wang, Xiang-Bin

    2017-12-01

    We study the realization of the quantum controlled-PHASE gate in an atom-cavity system beyond the Markovian approximation. The general description of the dynamics for the atom-cavity system without any approximation is presented. When the spectral density of the reservoir has the Lorentz form, by making use of the memory backflow from the reservoir, we can always construct the deterministic quantum controlled-PHASE gate between a photon and an atom, whether the atom-cavity coupling strength is weak or strong. However, the phase shift in the output pulse hinders the implementation of quantum controlled-PHASE gates in sub-Ohmic, Ohmic, or super-Ohmic reservoirs.

  17. On the number of different dynamics in Boolean networks with deterministic update schedules.

    Science.gov (United States)

    Aracena, J; Demongeot, J; Fanchon, E; Montalva, M

    2013-04-01

    Deterministic Boolean networks are a type of discrete dynamical systems widely used in the modeling of genetic networks. The dynamics of such systems is characterized by the local activation functions and the update schedule, i.e., the order in which the nodes are updated. In this paper, we address the problem of knowing the different dynamics of a Boolean network when the update schedule is changed. We begin by proving that the problem of the existence of a pair of update schedules with different dynamics is NP-complete. However, we show that certain structural properties of the interaction digraph are sufficient for guaranteeing distinct dynamics of a network. In [1] the authors define equivalence classes which have the property that all the update schedules of a given class yield the same dynamics. In order to determine the dynamics associated with a network, we develop an algorithm to efficiently enumerate the above equivalence classes by selecting a representative update schedule for each class with a minimum number of blocks. Finally, we run this algorithm on the well-known Arabidopsis thaliana network to determine the full spectrum of its different dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
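
    The following sketch (a hypothetical 3-node network, not the Arabidopsis thaliana model) illustrates the basic phenomenon discussed in the record: the same Boolean network, started from the same state, can reach different attractors under different deterministic update schedules.

```python
# Hedged sketch: a 3-node Boolean network evaluated under two deterministic
# update schedules (fully synchronous vs. a node-by-node sequential order).
# For this particular network and start state the two schedules settle into
# different period-2 attractors.

# Local activation functions (illustrative, not from any published network)
def f(state):
    x, y, z = state
    return (y and not z,   # next x
            x or z,        # next y
            x and y)       # next z

def step(state, blocks):
    """Update the nodes block by block; nodes within one block update in parallel."""
    s = list(state)
    for block in blocks:
        new = f(tuple(s))
        for i in block:
            s[i] = new[i]
    return tuple(s)

def trajectory(start, blocks, n=8):
    traj, s = [start], start
    for _ in range(n):
        s = step(s, blocks)
        traj.append(s)
    return traj

start = (True, False, True)
synchronous = [(0, 1, 2)]          # one block: parallel update
sequential = [(0,), (1,), (2,)]    # three blocks: node-by-node update

print("synchronous:", trajectory(start, synchronous))
print("sequential :", trajectory(start, sequential))
```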

  18. Charge sharing in multi-electrode devices for deterministic doping studied by IBIC

    International Nuclear Information System (INIS)

    Jong, L.M.; Newnham, J.N.; Yang, C.; Van Donkelaar, J.A.; Hudson, F.E.; Dzurak, A.S.; Jamieson, D.N.

    2011-01-01

    Following a single ion strike in a semiconductor device, the induced charge distribution changes rapidly with time and space. This phenomenon has important applications in the sensing of ionizing radiation, ranging from deterministic doping of semiconductor devices to radiation dosimetry. We have developed a new method for the investigation of this phenomenon by using a nuclear microprobe and the technique of Ion Beam Induced Charge (IBIC) applied to a specially configured sub-100 μm scale silicon device fitted with two independent surface electrodes coupled to independent data acquisition systems. The separation between the electrodes is comparable to the range of the 2 MeV He ions used in our experiments. This system allows us to integrate the total charge induced in the device by summing the signals from the independent electrodes and to measure the sharing of charge between the electrodes as a function of the ion strike location as a nuclear microprobe beam is scanned over the sensitive region of the device. It was found that for a given ion strike location the charge sharing between the electrodes allowed the beam-strike location to be determined to higher precision than the probe resolution. This result has potential application to the development of a deterministic doping technique where counted ion implantation is used to fabricate devices that exploit the quantum mechanical attributes of the implanted ions.

  19. Applications of the 3-D Deterministic Transport Attila® for Core Safety Analysis

    International Nuclear Information System (INIS)

    Lucas, D.S.; Gougar, D.; Roth, P.A.; Wareing, T.; Failla, G.; McGhee, J.; Barnett, A.

    2004-01-01

    An LDRD (Laboratory Directed Research and Development) project is ongoing at the Idaho National Engineering and Environmental Laboratory (INEEL) for applying the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the model development, capabilities of Attila, generation of the cross-section libraries, and comparisons to an ATR MCNP model and future

  20. Empirical Analysis of Farm Credit Risk under the Structure Model

    Science.gov (United States)

    Yan, Yan

    2009-01-01

    The study measures farm credit risk by using farm records collected by Farm Business Farm Management (FBFM) during the period 1995-2004. The study addresses the following questions: (1) whether farm's financial position is fully described by the structure model, (2) what are the determinants of farm capital structure under the structure model, (3)…

  1. Molecular dynamics with deterministic and stochastic numerical methods

    CERN Document Server

    Leimkuhler, Ben

    2015-01-01

    This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications.  Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...

  2. A deterministic model of nettle caterpillar life cycle

    Science.gov (United States)

    Syukriyah, Y.; Nuraini, N.; Handayani, D.

    2018-03-01

    Palm oil is an excellent product of the plantation sector in Indonesia. Palm oil productivity has the potential to increase every year; however, actual productivity remains below that potential. Pests and diseases are the main factors that can reduce production levels by up to 40%. The presence of pests in plants can be caused by various factors, so measures to control pest attacks should be prepared as early as possible. Caterpillars are the main pests in oil palm, and nettle caterpillars are leaf eaters that can significantly decrease palm productivity. We construct a deterministic model that describes the life cycle of the caterpillar and its mitigation by a caterpillar predator. The equilibrium points of the model are analyzed. Numerical simulations are constructed to illustrate how the predator, as a natural enemy, affects the nettle caterpillar life cycle.
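
    A minimal sketch of a deterministic pest-predator model in this spirit (the equations and parameter values are generic assumptions, not the model from the record): logistic caterpillar growth with a predation term, integrated with explicit Euler steps.

```python
# Hedged sketch: a minimal deterministic caterpillar-predator model with
# logistic pest growth, predation at rate a*c*p, and predator gain e.
# All parameter values are placeholders.
import numpy as np

def simulate(c0=100.0, p0=5.0, r=0.4, K=500.0, a=0.02, e=0.1, m=0.2,
             dt=0.01, t_end=100.0):
    steps = int(t_end / dt)
    c, p = c0, p0
    history = np.empty((steps + 1, 2))
    history[0] = (c, p)
    for k in range(1, steps + 1):
        dc = r * c * (1.0 - c / K) - a * c * p      # caterpillar dynamics
        dp = e * a * c * p - m * p                  # predator dynamics
        c = max(c + dt * dc, 0.0)
        p = max(p + dt * dp, 0.0)
        history[k] = (c, p)
    return history

hist = simulate()
print("final caterpillar density:", round(hist[-1, 0], 2))
print("final predator density   :", round(hist[-1, 1], 2))
```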

  3. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician to diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where detection failed; this can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
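
    A hedged sketch of a deterministic detection heuristic of this general kind (not the authors' S-transform pipeline; the synthetic signal, thresholds, and timing rules are assumptions): envelope extraction, a fixed noise threshold, and simple timing rules label S1/S2 candidates.

```python
# Hedged sketch: detect S1/S2 candidates in a synthetic phonocardiogram using
# an amplitude envelope, a fixed noise threshold, and a timing rule (the S1->S2
# interval is shorter than the S2->next-S1 interval). Not the paper's method.
import numpy as np
from scipy.signal import find_peaks

fs = 2000                                    # sampling rate, Hz
t = np.arange(0, 8.0, 1 / fs)
sig = 0.02 * np.random.default_rng(0).standard_normal(t.size)   # background noise

def add_sound(center, freq, width, amp):
    return amp * np.exp(-((t - center) ** 2) / (2 * width ** 2)) * np.sin(2 * np.pi * freq * t)

for beat_start in np.arange(0.5, 7.5, 0.9):          # roughly 67 bpm
    sig += add_sound(beat_start, 40.0, 0.02, 1.0)           # S1
    sig += add_sound(beat_start + 0.30, 60.0, 0.015, 0.7)   # S2

# 20 ms moving-average envelope, then peak picking above a noise threshold
envelope = np.convolve(np.abs(sig), np.ones(int(0.02 * fs)) / (0.02 * fs), mode="same")
peaks, _ = find_peaks(envelope, height=0.1, distance=int(0.15 * fs))

labels = []
for i, p in enumerate(peaks):
    if i == 0:
        labels.append("S1")
    else:
        gap = (p - peaks[i - 1]) / fs
        labels.append("S2" if gap < 0.45 and labels[-1] == "S1" else "S1")

print(list(zip(np.round(peaks / fs, 2), labels))[:8])
```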

  4. The role of temporal structure in human vision.

    Science.gov (United States)

    Blake, Randolph; Lee, Sang-Hun

    2005-03-01

    Gestalt psychologists identified several stimulus properties thought to underlie visual grouping and figure/ground segmentation, and among those properties was common fate: the tendency to group together individual objects that move together in the same direction at the same speed. Recent years have witnessed an upsurge of interest in visual grouping based on other time-dependent sources of visual information, including synchronized changes in luminance, in motion direction, and in figure/ground relations. These various sources of temporal grouping information can be subsumed under the rubric temporal structure. In this article, the authors review evidence bearing on the effectiveness of temporal structure in visual grouping. They start with an overview of evidence bearing on temporal acuity of human vision, covering studies dealing with temporal integration and temporal differentiation. They then summarize psychophysical studies dealing with figure/ground segregation based on temporal phase differences in deterministic and stochastic events. The authors conclude with a brief discussion of neurophysiological implications of these results.

  5. An assessment of dietary exposure to glyphosate using refined deterministic and probabilistic methods.

    Science.gov (United States)

    Stephenson, C L; Harris, C A

    2016-09-01

    Glyphosate is a herbicide used to control broad-leaved weeds. Some uses of glyphosate in crop production can lead to residues of the active substance and related metabolites in food. This paper uses data on residue levels, processing information and consumption patterns, to assess theoretical lifetime dietary exposure to glyphosate. Initial estimates were made assuming exposure to the highest permitted residue levels in foods. These intakes were then refined using median residue levels from trials, processing information, and monitoring data to achieve a more realistic estimate of exposure. Estimates were made using deterministic and probabilistic methods. Exposures were compared to the acceptable daily intake (ADI)-the amount of a substance that can be consumed daily without an appreciable health risk. Refined deterministic intakes for all consumers were at or below 2.1% of the ADI. Variations were due to cultural differences in consumption patterns and the level of aggregation of the dietary information in calculation models, which allows refinements for processing. Probabilistic exposure estimates ranged from 0.03% to 0.90% of the ADI, depending on whether optimistic or pessimistic assumptions were made in the calculations. Additional refinements would be possible if further data on processing and from residues monitoring programmes were available. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
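
    To make the deterministic/probabilistic contrast concrete, here is a minimal sketch with invented numbers (the ADI, residue levels, consumption figures, and distributions are placeholders, not the paper's data): both estimates are expressed as a percentage of the ADI.

```python
# Hedged sketch: compare a refined deterministic dietary intake estimate with a
# simple probabilistic (Monte Carlo) one, both as a percentage of the ADI.
# All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
ADI = 0.5            # mg/kg bw/day (assumed)
body_weight = 70.0   # kg (assumed)

# food: (median residue mg/kg, mean daily consumption kg, processing factor)
foods = {
    "bread":   (0.20, 0.15, 0.3),
    "cereals": (0.50, 0.05, 0.5),
    "pulses":  (0.10, 0.03, 1.0),
}

# Refined deterministic estimate: median residue x consumption x processing factor
det_intake = sum(r * c * pf for r, c, pf in foods.values()) / body_weight
print(f"deterministic intake: {100 * det_intake / ADI:.2f} % of ADI")

# Probabilistic estimate: sample residues and consumption per person-day
n = 100_000
total = np.zeros(n)
for r, c, pf in foods.values():
    residue = rng.lognormal(np.log(r), 0.5, n)        # residue variability
    consumption = rng.gamma(2.0, c / 2.0, n)          # consumption variability
    total += residue * consumption * pf
prob_intake = total / body_weight
print(f"probabilistic intake, 97.5th pct: {100 * np.percentile(prob_intake, 97.5) / ADI:.2f} % of ADI")
```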

  6. High-throughput deterministic single-cell encapsulation and droplet pairing, fusion, and shrinkage in a single microfluidic device.

    Science.gov (United States)

    Schoeman, Rogier M; Kemna, Evelien W M; Wolbers, Floor; van den Berg, Albert

    2014-02-01

    In this article, we present a microfluidic device capable of successive high-yield single-cell encapsulation in droplets, with additional droplet pairing, fusion, and shrinkage. Deterministic single-cell encapsulation is realized using Dean-coupled inertial ordering of cells in a Yin-Yang-shaped curved microchannel using a double T-junction, with a frequency over 2000 Hz, followed by controlled droplet pairing with a 100% success rate. Subsequently, droplet fusion is realized using electrical actuation resulting in electro-coalescence of two droplets, each containing a single HL60 cell, with 95% efficiency. Finally, volume reduction of the fused droplet up to 75% is achieved by a triple pitchfork structure. This droplet volume reduction is necessary to obtain close cell-cell membrane contact necessary for final cell electrofusion, leading to hybridoma formation, which is the ultimate aim of this research. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. A mathematical analysis of an exchange-traded horse race betting fund with deterministic payoff betting strategy for institutional investment to challenge EMH

    Directory of Open Access Journals (Sweden)

    Craig George Leslie Hopf

    2015-12-01

    Full Text Available This paper's primary alternative hypothesis is Ha: a profitable exchange-traded horse race betting fund with deterministic payoff exists for acceptable institutional portfolio return-risk. The primary hypothesis challenges the semi-strong efficient market hypothesis applied to horse race wagering. An optimal deterministic betting model (DBM) is derived from existing stochastic model fundamentals, mathematical pooling principles, and a new theorem. The exchange-traded betting fund (ETBF) is derived from force-of-interest first principles. An ETBF driven by DBM processes conjointly defines the research's betting strategy. Alpha is excess return above a financial benchmark, and the betting strategy alpha is composed of model alpha and fund alpha. Statistical testing of a global stratified data sample of three hundred galloper horse races accepted, at the ninety-five percent confidence level, a positive betting strategy alpha, endorsing an exchange-traded horse race betting fund with deterministic payoff in the financial market.

  8. Ground motion following selection of SRS design basis earthquake and associated deterministic approach

    International Nuclear Information System (INIS)

    1991-03-01

    This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.

  9. Rate response of neurons subject to fast or frozen noise: from stochastic and homogeneous to deterministic and heterogeneous populations.

    Science.gov (United States)

    Alijani, Azadeh Khajeh; Richardson, Magnus J E

    2011-07-01

    The response of a neuronal population to afferent drive can be expected to be sensitive to both the distribution and dynamics of membrane voltages within the population. Voltage fluctuations can be driven by synaptic noise, neuromodulators, or cellular inhomogeneities: processes ranging from millisecond autocorrelation times to effectively static or "frozen" noise. Here we extend previous studies of filtered fluctuations to the experimentally verified exponential integrate-and-fire model. How fast or frozen fluctuations affect the steady-state rate and the firing-rate response is examined using perturbative solutions and limits of a 1 + 2 dimensional Fokker-Planck equation. The central finding is that, under conditions of a more-or-less constant population voltage variance, the firing-rate response is only weakly dependent on the fluctuation filter constant: The voltage distribution is the principal determinant of the population response. This result is unexpected given the nature of the systems underlying the extreme limits of fast and frozen fluctuations; the first limit represents a homogeneous population of neurons firing stochastically, whereas the second limit is equivalent to a heterogeneous population of neurons firing deterministically.
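
    A simulation-level sketch of the setup (generic parameters, not fitted values from the paper): an exponential integrate-and-fire neuron driven by Ornstein-Uhlenbeck-filtered noise, where a small correlation time approximates fast noise and a very large one approximates frozen input fluctuations.

```python
# Hedged sketch: exponential integrate-and-fire (EIF) neuron driven by
# Ornstein-Uhlenbeck filtered noise; vary the filter time constant tau_s.
# All parameters are generic placeholders.
import numpy as np

def eif_rate(tau_s=5.0, mu=1.2, sigma=0.5, t_end=2000.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # EIF parameters (time in ms, dimensionless voltage units)
    tau_m, v_reset, v_thresh, delta_t, v_spike = 20.0, 0.0, 1.0, 0.2, 3.0
    v, x = 0.0, 0.0          # membrane voltage and filtered noise
    spikes = 0
    for _ in range(int(t_end / dt)):
        # OU noise with correlation time tau_s and stationary std sigma
        x += dt * (-x / tau_s) + sigma * np.sqrt(2.0 * dt / tau_s) * rng.standard_normal()
        dv = (-v + delta_t * np.exp((v - v_thresh) / delta_t) + mu + x) / tau_m
        v += dt * dv
        if v > v_spike:
            v = v_reset
            spikes += 1
    return 1000.0 * spikes / t_end   # firing rate in Hz

for tau_s in (1.0, 10.0, 100.0):
    print(f"tau_s = {tau_s:6.1f} ms -> rate = {eif_rate(tau_s):.2f} Hz")
```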

  10. Rare event computation in deterministic chaotic systems using genealogical particle analysis

    International Nuclear Information System (INIS)

    Wouters, J; Bouchet, F

    2016-01-01

    In this paper we address the use of rare event computation techniques to estimate small over-threshold probabilities of observables in deterministic dynamical systems. We demonstrate that genealogical particle analysis algorithms can be successfully applied to a toy model of atmospheric dynamics, the Lorenz ’96 model. We furthermore use the Ornstein–Uhlenbeck system to illustrate a number of implementation issues. We also show how a time-dependent objective function based on the fluctuation path to a high threshold can greatly improve the performance of the estimator compared to a fixed-in-time objective function. (paper)

  11. Deterministic transfer of an unknown qutrit state assisted by the low-Q microwave resonators

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Tong; Zhang, Yang; Yu, Chang-Shui, E-mail: quaninformation@sina.com; Zhang, Wei-Ning

    2017-05-25

    Highlights: • We propose a scheme to achieve an unknown quantum state transfer between two flux qutrits coupled to two superconducting coplanar waveguide resonators. • The quantum state transfer can be deterministically achieved without measurements. • Because resonator photons are virtually excited during the operation time, the decoherences caused by the resonator decay and the unwanted inter-resonator crosstalk are greatly suppressed. - Abstract: Qutrits (i.e., three-level quantum systems) can be used to achieve many quantum information and communication tasks due to their large Hilbert spaces. In this work, we propose a scheme to transfer an unknown quantum state between two flux qutrits coupled to two superconducting coplanar waveguide resonators. The quantum state transfer can be deterministically achieved without measurements. Because resonator photons are virtually excited during the operation time, the decoherences caused by the resonator decay and the unwanted inter-resonator crosstalk are greatly suppressed. Moreover, our approach can be adapted to other solid-state qutrits coupled to circuit resonators. Numerical simulations show that the high-fidelity transfer of quantum state between the two qutrits is feasible with current circuit QED technology.

  12. Deterministic transfer of an unknown qutrit state assisted by the low-Q microwave resonators

    International Nuclear Information System (INIS)

    Liu, Tong; Zhang, Yang; Yu, Chang-Shui; Zhang, Wei-Ning

    2017-01-01

    Highlights: • We propose a scheme to achieve an unknown quantum state transfer between two flux qutrits coupled to two superconducting coplanar waveguide resonators. • The quantum state transfer can be deterministically achieved without measurements. • Because resonator photons are virtually excited during the operation time, the decoherences caused by the resonator decay and the unwanted inter-resonator crosstalk are greatly suppressed. - Abstract: Qutrits (i.e., three-level quantum systems) can be used to achieve many quantum information and communication tasks due to their large Hilbert spaces. In this work, we propose a scheme to transfer an unknown quantum state between two flux qutrits coupled to two superconducting coplanar waveguide resonators. The quantum state transfer can be deterministically achieved without measurements. Because resonator photons are virtually excited during the operation time, the decoherences caused by the resonator decay and the unwanted inter-resonator crosstalk are greatly suppressed. Moreover, our approach can be adapted to other solid-state qutrits coupled to circuit resonators. Numerical simulations show that the high-fidelity transfer of quantum state between the two qutrits is feasible with current circuit QED technology.

  13. Structural Analysis of Cabinet Support under Static and Seismic Loads

    International Nuclear Information System (INIS)

    Jung, Kwangsub; Lee, Sangjin; Oh, Jinho

    2014-01-01

    The cabinet support consists of frames including steel channels and steel square tubes. Four tap holes for screw bolts are located on the support frame of a steel channel to fix the cabinet on the support. The channels and square tubes are assembled by welded joints. The cabinet supports are installed on the outer walls of the reactor concrete island. The KEPIC code, MNF, is used for the design of the cabinet support. In this work, the structural integrity of the cabinet support is analyzed under consideration of static and seismic loads. A 3-D finite element model of the cabinet support was developed. The structural integrity of the cabinet support under postulated service loading conditions was evaluated through a static analysis, modal analysis, and response spectrum analysis. From the structural analysis results, it was concluded that the structural integrity of the cabinet support is guaranteed

  14. A DETERMINISTIC GEOMETRIC REPRESENTATION OF TEMPORAL RAINFALL: SENSITIVITY ANALYSIS FOR A STORM IN BOSTON. (R824780)

    Science.gov (United States)

    In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...

  15. Optimal entangling operations between deterministic blocks of qubits encoded into single photons

    Science.gov (United States)

    Smith, Jake A.; Kaplan, Lev

    2018-01-01

    Here, we numerically simulate probabilistic elementary entangling operations between rail-encoded photons for the purpose of scalable universal quantum computation or communication. We propose grouping logical qubits into single-photon blocks wherein single-qubit rotations and the controlled-not (cnot) gate are fully deterministic and simple to implement. Interblock communication is then allowed through said probabilistic entangling operations. We find a promising trend in the increasing probability of successful interblock communication as we increase the number of optical modes operated on by our elementary entangling operations.

  16. A plateau–valley separation method for textured surfaces with a deterministic pattern

    DEFF Research Database (Denmark)

    Godi, Alessandro; Kühle, Anders; De Chiffre, Leonardo

    2014-01-01

    The effective characterization of textured surfaces presenting a deterministic pattern of lubricant reservoirs is an issue with which many researchers are nowadays struggling. Existing standards are not suitable for the characterization of such surfaces, providing at times values without physical...... meaning. A new method based on the separation between the plateau and valley regions is hereby presented allowing independent functional analyses of the detected features. The determination of a proper threshold between plateaus and valleys is the first step of a procedure resulting in an efficient...

  17. On the use of uncertainty analyses to test hypotheses regarding deterministic model predictions of environmental processes

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bittner, E.A.; Essington, E.H.

    1995-01-01

    This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether 238Pu is transferred more readily than 239+240Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)
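
    A minimal sketch of the approach (all distributions below are invented; the real study used measured cattle data): parameter uncertainty is propagated through a simple deterministic transfer model by Monte Carlo sampling, and the hypothesis is evaluated as the fraction of samples in which the 238Pu transfer exceeds the 239+240Pu transfer.

```python
# Hedged sketch: propagate assumed parameter uncertainty through a simple
# deterministic GI-tract-to-tissue transfer model and test whether the sampled
# transfer fraction of 238Pu exceeds that of 239+240Pu.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Hypothetical uncertain parameters: daily intake (Bq/day) and tissue concentrations (Bq/kg)
intake_238 = rng.lognormal(np.log(5.0), 0.4, n)
intake_239 = rng.lognormal(np.log(50.0), 0.4, n)
tissue_conc_238 = rng.lognormal(np.log(2.5e-4), 0.6, n)
tissue_conc_239 = rng.lognormal(np.log(1.5e-3), 0.6, n)

# Deterministic model evaluated on each sample: fractional transfer = tissue / intake
ft_238 = tissue_conc_238 / intake_238
ft_239 = tissue_conc_239 / intake_239

p_greater = np.mean(ft_238 > ft_239)
print(f"P(transfer 238Pu > transfer 239+240Pu) over parameter uncertainty: {p_greater:.3f}")
```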

  18. An Islamic Approach to Studying History: Reflections on Ibn Khaldūn’s Deterministic Historical Approach

    Directory of Open Access Journals (Sweden)

    Ali White

    2009-12-01

    Full Text Available Abstract: This paper argues that patterns exist in history, and that these can — and should — be discerned. By doing this, Muslim intellectuals will only be resuming an intellectual and spiritual journey begun over 600 years ago by Ibn Khaldūn, who invented sociology and the scientific study of history, basing himself on the methodology of the Qur´ān. This paper examines Khaldūn's deterministic historical approach, comparing it to the secular attempt to understand history in Karl Marx's "historical materialism." Khaldūn's classification of societies (as being either based on human fiṭrah or tending towards an animal-like existence) is examined and applied to current conditions. It is argued that Muslims need to learn how to use Khaldūn's deterministic approach, critically applying it to today's changed conditions, to contribute to the conscious creation of a new, Allah-centred global civilisation.

  19. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    Science.gov (United States)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered as unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties, until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases with hydraulic response data generated from hypothetical karstic models with increasing complexity of the network geometry and of the matrix heterogeneity.

  20. Concrete structures under impact and impulsive loading

    International Nuclear Information System (INIS)

    Plauk, G.

    1982-05-01

    This book contains papers contributed to the RILEM/CEB/IABSE/IASS-Interassociation Symposium on 'Concrete Structures under Impact and Impulsive Loading'. The essential aim of this symposium is to provide an international forum for the exchange of information on existing and current research relating to impact problems, as well as to identify areas to which further research activities should be directed. The subject of the symposium is far-ranging. Fifty-five papers were proposed and arranged in six technical sessions, a task which sometimes posed difficulties for the Organization Committee and the Advisory Group, because some of the papers touched several topics and were difficult to integrate. However, we are confident that these minor difficulties were solved to the satisfaction of everyone involved. Each session of the symposium is devoted to a major subject area and introduced by a distinguished Introductory Reporter. The large international attendance, with some 21 countries represented, and the large number of excellent papers will certainly produce a lively discussion after each session and thus help to further close the gaps in our knowledge about the behaviour of structures and materials under impact and impulsive loading. (orig./RW)

  1. Thermomechanics of composite structures under high temperatures

    CERN Document Server

    Dimitrienko, Yu I

    2016-01-01

    This pioneering book presents new models for the thermomechanical behavior of composite materials and structures taking into account internal physico-chemical transformations such as thermodecomposition, sublimation and melting at high temperatures (up to 3000 K). It is of great importance for the design of new thermostable materials and for the investigation of reliability and fire safety of composite structures. It also supports the investigation of interaction of composites with laser irradiation and the design of heat-shield systems. Structural methods are presented for calculating the effective mechanical and thermal properties of matrices, fibres and unidirectional, reinforced by dispersed particles and textile composites, in terms of properties of their constituent phases. Useful calculation methods are developed for characteristics such as the rate of thermomechanical erosion of composites under high-speed flow and the heat deformation of composites with account of chemical shrinkage. The author expan...

  2. Portfolio optimization with structured products under return constraint

    Directory of Open Access Journals (Sweden)

    Baweja Meena

    2015-01-01

    Full Text Available A new approach for optimizing risk in a portfolio of financial instruments involving structured products is presented. This paper deals with a portfolio selection model which uses optimization methodology to minimize conditional Value-at-Risk (CVaR) under a return constraint. It focuses on minimizing CVaR rather than on minimizing Value-at-Risk (VaR), as portfolios with low CVaR necessarily have low VaR as well. We consider a simple investment problem where, besides stocks and bonds, the investor can also include structured products in the investment portfolio. Due to possible intermediate payments from the structured product, we have to deal with a re-investment problem modeled as a linear optimization problem.
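
    As a sketch of the optimization at the core of such a model (simulated returns and assumed parameters; not the paper's data or exact formulation), the Rockafellar-Uryasev linear program below minimizes CVaR at level beta subject to a minimum expected return and a budget constraint.

```python
# Hedged sketch: minimize portfolio CVaR at level beta subject to a minimum
# expected-return constraint, via the Rockafellar-Uryasev LP formulation.
# Scenario returns are simulated placeholders, not market data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n_assets, n_scen, beta, target_return = 4, 1000, 0.95, 0.005

# Scenario returns: columns = assets (e.g. stock, bond, structured product, cash)
mean = np.array([0.008, 0.003, 0.006, 0.001])
returns = mean + rng.normal(0.0, [0.04, 0.01, 0.02, 0.001], size=(n_scen, n_assets))

# Decision vector z = [x (weights), alpha (VaR level), u (scenario excess losses)]
c = np.concatenate([np.zeros(n_assets), [1.0], np.ones(n_scen) / ((1 - beta) * n_scen)])

# u_s >= -returns_s @ x - alpha   ->   -returns_s @ x - alpha - u_s <= 0
A_ub = np.hstack([-returns, -np.ones((n_scen, 1)), -np.eye(n_scen)])
b_ub = np.zeros(n_scen)
# Expected return constraint: mean @ x >= target   ->   -mean @ x <= -target
A_ub = np.vstack([A_ub, np.concatenate([-mean, [0.0], np.zeros(n_scen)])])
b_ub = np.append(b_ub, -target_return)

A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)])[None, :]  # budget: sum x = 1
b_eq = [1.0]
bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
weights = res.x[:n_assets]
print("weights:", np.round(weights, 3), " minimized CVaR:", round(res.fun, 5))
```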

  3. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    International Nuclear Information System (INIS)

    Norris, Edward T.; Liu, Xin; Hsieh, Jiang

    2015-01-01

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer.

  4. Deterministic and probabilistic approach to determine seismic risk of nuclear power plants; a practical example

    International Nuclear Information System (INIS)

    Soriano Pena, A.; Lopez Arroyo, A.; Roesset, J.M.

    1976-01-01

    The probabilistic and deterministic approaches for calculating the seismic risk of nuclear power plants are both applied to a particular case in Southern Spain. The results obtained by both methods, when varying the input data, are presented and some conclusions drawn in relation to the applicability of the methods, their reliability and their sensitivity to change

  5. Status report on multigroup cross section generation code development for high-fidelity deterministic neutronics simulation system

    International Nuclear Information System (INIS)

    Yang, W.S.; Lee, C.H.

    2008-01-01

    Under the fast reactor simulation program launched in April 2007, development of an advanced multigroup cross section generation code was initiated in July 2007, in conjunction with the development of the high-fidelity deterministic neutron transport code UNIC. The general objectives are to simplify the existing multi-step schemes and to improve the resolved and unresolved resonance treatments. Based on the review results of current methods and the fact that they have been applied successfully to fast critical experiment analyses and fast reactor designs for the last three decades, the methodologies of the ETOE-2/MC2-2/SDX code system were selected as the starting set of methodologies for multigroup cross section generation for fast reactor analysis. As the first step for coupling with the UNIC code and use in a parallel computing environment, the MC2-2 code was updated by modernizing the memory structure and replacing old data management package subroutines and functions with FORTRAN 90 based routines. Various modifications were also made in the ETOE-2 and MC2-2 codes to process the ENDF/B-VII.0 data properly. Using the updated ETOE-2/MC2-2 code system, the ENDF/B-VII.0 data was successfully processed for major heavy and intermediate nuclides employed in sodium-cooled fast reactors. Initial verification tests of the MC2-2 libraries generated from ENDF/B-VII.0 data were performed by inter-comparison of twenty-one group infinite dilute total cross sections obtained from MC2-2, VIM, and NJOY. For almost all nuclides considered, MC2-2 cross sections agreed very well with those from VIM and NJOY. Preliminary validation tests of the ENDF/B-VII.0 libraries of MC2-2 were also performed using a set of sixteen fast critical benchmark problems. The deterministic results based on MC2-2/TWODANT calculations were in good agreement with MCNP solutions within ∼0.25% Δρ, except a few small LANL fast assemblies. Relative to the MCNP solution, the MC2-2/TWODANT

  6. Status report on multigroup cross section generation code development for high-fidelity deterministic neutronics simulation system.

    Energy Technology Data Exchange (ETDEWEB)

    Yang, W. S.; Lee, C. H. (Nuclear Engineering Division)

    2008-05-16

    Under the fast reactor simulation program launched in April 2007, development of an advanced multigroup cross section generation code was initiated in July 2007, in conjunction with the development of the high-fidelity deterministic neutron transport code UNIC. The general objectives are to simplify the existing multi-step schemes and to improve the resolved and unresolved resonance treatments. Based on the review results of current methods and the fact that they have been applied successfully to fast critical experiment analyses and fast reactor designs for the last three decades, the methodologies of the ETOE-2/MC2-2/SDX code system were selected as the starting set of methodologies for multigroup cross section generation for fast reactor analysis. As the first step for coupling with the UNIC code and use in a parallel computing environment, the MC2-2 code was updated by modernizing the memory structure and replacing old data management package subroutines and functions with FORTRAN 90 based routines. Various modifications were also made in the ETOE-2 and MC2-2 codes to process the ENDF/B-VII.0 data properly. Using the updated ETOE-2/MC2-2 code system, the ENDF/B-VII.0 data was successfully processed for major heavy and intermediate nuclides employed in sodium-cooled fast reactors. Initial verification tests of the MC2-2 libraries generated from ENDF/B-VII.0 data were performed by inter-comparison of twenty-one group infinite dilute total cross sections obtained from MC2-2, VIM, and NJOY. For almost all nuclides considered, MC2-2 cross sections agreed very well with those from VIM and NJOY. Preliminary validation tests of the ENDF/B-VII.0 libraries of MC2-2 were also performed using a set of sixteen fast critical benchmark problems. The deterministic results based on MC2-2/TWODANT calculations were in good agreement with MCNP solutions within ~0.25% Δρ, except a few small LANL fast assemblies

  7. Structural Evaluation on HIC Transport Packaging under Accident Conditions

    International Nuclear Information System (INIS)

    Chung, Sung Hwan; Kim, Duck Hoi; Jung, Jin Se; Yang, Ke Hyung; Lee, Heung Young

    2005-01-01

    HIC transport packaging to transport a high integrity container (HIC) containing dry spent resin generated from nuclear power plants is to comply with the regulatory requirements of Korea and the IAEA for Type B packaging, due to the high radioactivity of the content, and to maintain its structural integrity under normal and accident conditions. It must withstand a 9 m free drop impact onto an unyielding surface and a 1 m drop impact onto a mild steel bar in a position causing maximum damage. For the conceptual design of a cylindrical HIC transport package, three-dimensional dynamic structural analyses to ensure that the integrity of the package is maintained under all credible loads for 9 m free drop and 1 m puncture conditions were carried out using the ABAQUS code.

  8. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called as inhibitory rules, are blocking only one decision value (i.e., they have all but one decisions from all possible decisions on their right hand sides). Contrary to this, any rule of the second kind, called as a bounded nondeterministic rule, can have on the right hand side only a few decisions. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead of this, for any new object the considered algorithms extract from a given decision table efficiently some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in construction of rule based classifiers. We include the results of experiments showing that by combining rule based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  9. structural reliability of the nigerian grown abura timber bridge beam

    African Journals Online (AJOL)

    ENGR. J. I. AGUWA

    2013-07-02

    Jul 2, 2013 ... Structural analysis and deterministic design of a timber bridge beam using the Nigerian ... practice, especially when it involves naturally occurring materials ... the beam due to distributed loads is given by equation (4), from [15]. It is assumed that ...

  10. High-throughput deterministic single-cell encapsulation and droplet pairing, fusion, and shrinkage in a single microfluidic device

    NARCIS (Netherlands)

    Schoeman, R.M.; Kemna, Evelien; Wolbers, F.; van den Berg, Albert

    In this article, we present a microfluidic device capable of successive high-yield single-cell encapsulation in droplets, with additional droplet pairing, fusion, and shrinkage. Deterministic single-cell encapsulation is realized using Dean-coupled inertial ordering of cells in a Yin-Yang-shaped

  11. Stochastic production planning for a biofuel supply chain under demand and price uncertainties

    International Nuclear Information System (INIS)

    Awudu, Iddrisu; Zhang, Jun

    2013-01-01

    Highlights: ► The proposed stochastic model outperforms the deterministic model. ► The price of biofuel is modeled as Geometric Brownian Motion (GBM). ► The proposed model can be applied in any biofuel supply chain. -- Abstract: In this paper, we propose a stochastic production planning model for a biofuel supply chain under demand and price uncertainties. The supply chain consists of biomass suppliers, biofuel refinery plants and distribution centers. A stochastic linear programming model is proposed within a single-period planning framework to maximize the expected profit. Decisions such as the amount of raw materials purchased, the amount of raw materials consumed and the amount of products produced are considered. Demands for end products are uncertain with known probability distributions. The prices of end products follow Geometric Brownian Motion (GBM). Benders decomposition (BD) with a Monte Carlo simulation technique is applied to solve the proposed model. To demonstrate the effectiveness of the proposed stochastic model and the decomposition algorithm, a representative supply chain for an ethanol plant in North Dakota is considered. To investigate the results of the proposed model, a simulation framework is developed to compare the performance of the deterministic model and the proposed stochastic model. The simulation results indicate that the proposed model obtains a higher expected profit than the deterministic model under different uncertainty settings. Sensitivity analyses are performed to gain managerial insight into how the uncertainties affect the profit of the developed model.
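
    A toy single-period sketch of the comparison described above (not the paper's Benders-decomposed model; all parameters are assumptions): selling prices are sampled from geometric Brownian motion, demand from a normal distribution, and a scenario-based production quantity is compared with the quantity based on expected values.

```python
# Hedged sketch: single-period production planning with a GBM selling price and
# uncertain demand; compare a deterministic plan (expected demand) with a
# scenario-based (stochastic) plan. Parameter values are placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_scen = 20_000

p0, mu, sigma, T = 2.5, 0.02, 0.25, 1.0          # GBM price parameters ($/gal, 1 year)
unit_cost, demand_mean, demand_sd = 1.8, 1.0e6, 2.5e5

price = p0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * rng.standard_normal(n_scen))
demand = np.maximum(rng.normal(demand_mean, demand_sd, n_scen), 0.0)

def expected_profit(q):
    # Revenue on sold volume min(q, demand) minus production cost on q
    return np.mean(price * np.minimum(q, demand) - unit_cost * q)

# Deterministic plan: produce exactly the expected demand
q_det = demand_mean
# Stochastic plan: pick the quantity with the best scenario-averaged profit
grid = np.linspace(0.5 * demand_mean, 1.5 * demand_mean, 201)
q_sto = grid[np.argmax([expected_profit(q) for q in grid])]

print(f"deterministic q = {q_det:,.0f}, expected profit = {expected_profit(q_det):,.0f}")
print(f"stochastic    q = {q_sto:,.0f}, expected profit = {expected_profit(q_sto):,.0f}")
```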

  12. The new deterministic 3-D radiation transport code Multitrans: C5G7 MOX fuel assembly benchmark

    International Nuclear Information System (INIS)

    Kotiluoto, P.

    2003-01-01

    The novel deterministic three-dimensional radiation transport code MultiTrans is based on combination of the advanced tree multigrid technique and the simplified P3 (SP3) radiation transport approximation. In the tree multigrid technique, an automatic mesh refinement is performed on material surfaces. The tree multigrid is generated directly from stereo-lithography (STL) files exported by computer-aided design (CAD) systems, thus allowing an easy interface for construction and upgrading of the geometry. The deterministic MultiTrans code allows fast solution of complicated three-dimensional transport problems in detail, offering a new tool for nuclear applications in reactor physics. In order to determine the feasibility of a new code, computational benchmarks need to be carried out. In this work, MultiTrans code is tested for a seven-group three-dimensional MOX fuel assembly transport benchmark without spatial homogenization (NEA C5G7 MOX). (author)

  13. Modeling a TRIGA Mark II reactor using the Attila three-dimensional deterministic transport code

    International Nuclear Information System (INIS)

    Keller, S.T.; Palmer, T.S.; Wareing, T.A.

    2005-01-01

    A benchmark model of a TRIGA reactor constructed using materials and dimensions similar to existing TRIGA reactors was analyzed using MCNP and the recently developed deterministic transport code Attila™. The benchmark reactor requires no MCNP modeling approximations, yet is sufficiently complex to validate the new modeling techniques. Geometric properties of the benchmark reactor are specified for use by Attila™ with CAD software. Materials are treated individually in MCNP. Materials used in Attila™ that are clad are homogenized. Attila™ uses multigroup energy discretization. Two cross section libraries were constructed for comparison. A 16 group library collapsed from the SCALE 4.4.a 238 group library provided better results than a seven group library calculated with WIMS-ANL. Values of the k-effective eigenvalue and scalar flux as a function of location and energy were calculated by the two codes. The calculated values for k-effective and spatially averaged neutron flux were found to be in good agreement. Flux distribution by space and energy also agreed well. Attila™ results could be improved with increased spatial and angular resolution and revised energy group structure. (authors)

  14. Deterministic flows of order-parameters in stochastic processes of quantum Monte Carlo method

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi

    2010-01-01

    In terms of the stochastic process of the quantum-mechanical version of the Markov chain Monte Carlo method (MCMC), we analytically derive macroscopically deterministic flow equations of order parameters such as spontaneous magnetization in infinite-range (d = ∞ dimensional) quantum spin systems. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding (d + 1)-dimensional classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. In the steady state, we show that the equations are identical to the saddle point equations for the equilibrium state of the same system. The equation for the dynamical Ising model is recovered in the classical limit. We also check the validity of the static approximation by making use of computer simulations for finite size systems and discuss several possible extensions of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we use our procedure to evaluate the decoding process of Bayesian image restoration. With the assistance of the concept of dynamical replica theory (DRT), we derive the zero-temperature flow equation of the image restoration measure, showing some 'non-monotonic' behaviour in its time evolution.

  15. The reliability of structural systems operating at high temperature: Replacing engineering judgement with operational experience

    International Nuclear Information System (INIS)

    Chevalier, M.J.; Smith, D.J.; Dean, D.W.

    2012-01-01

    Deterministic assessments are used to assess the integrity of structural systems operating at high temperature by providing a lower bound lifetime prediction, requiring considerable engineering judgement. However, such a result may not satisfy the purpose of the structural integrity assessment if the results are overly conservative, or conversely plant observations (such as failures) could undermine the assessment result if observed before the lower bound lifetime. This paper develops a reliability methodology for high temperature assessments and illustrates the impact and importance of managing the uncertainties within such an analysis. This is done by separating uncertainties into three classifications: aleatory uncertainty, quantifiable epistemic uncertainty and unquantifiable epistemic uncertainty. The result is a reliability model that can predict the behaviour of a structural system based upon plant observations, including failure and survival data. This can be used to reduce the over-reliance upon engineering judgement which is prevalent in deterministic assessments. Highlights: ► Deterministic assessments are shown to be heavily reliant upon engineering judgement. ► Based upon the R5 procedure, a reliability model for a structural system is developed. ► Variables must be classified as either aleatory or epistemic to model their impact on reliability. ► Operational experience is then used to reduce reliance upon engineering judgement. ► This results in a model which can predict system behaviour and learn from operational experience.

  16. 360° deterministic magnetization rotation in a three-ellipse magnetoelectric heterostructure

    Science.gov (United States)

    Kundu, Auni A.; Chavez, Andres C.; Keller, Scott M.; Carman, Gregory P.; Lynch, Christopher S.

    2018-03-01

    A magnetic dipole-coupled magnetoelectric heterostructure composed of three closely spaced ellipse shapes was designed and shown to be capable of achieving deterministic in-plane magnetization rotation. The design approach used conventional micromagnetic simulations to obtain preliminary configurations, followed by simulations using a fully strain-coupled, time domain micromagnetic code for a detailed assessment of performance. The conventional micromagnetic code has short run times and was used to refine the ellipse shape and orientation, but it does not accurately capture the effects of the strain gradients present in the piezoelectric and magnetostrictive layers that contribute to magnetization reorientation. The fully coupled code was used to assess the effects of strain and magnetic field gradients on precessional switching in the side ellipses and on the resulting dipole-field-driven magnetization reorientation in the center ellipse. The work led to a geometry with a CoFeB ellipse (125 nm × 95 nm × 4 nm) positioned between two smaller CoFeB ellipses (75 nm × 50 nm × 4 nm) on a 500 nm PZT-5H film substrate clamped at its bottom surface. The smaller ellipses were oriented at 45° and positioned at 70° and 250° about the central ellipse due to the film deposition on a thick substrate. A 7.3 V pulse applied to the PZT for 0.22 ns produced 180° switching of the magnetization in the outer ellipses that then drove switching in the center ellipse through dipole-dipole coupling. Full 360° deterministic rotation was achieved with a second pulse. The temporal response of the resulting design is discussed.

  17. Basic concept on the responses of structural members and structures under impact or impulsive loadings

    International Nuclear Information System (INIS)

    Takeda, J.I.; Tachikawa, H.; Fujimoto, K.

    1982-01-01

    The responses of structural members and structures subjected to impact or impulsive loadings are generated by the interaction between the acting bodies and the structures, and this interaction is affected by many factors, e.g. the relative masses, sizes and rigidities of the acting bodies and structures, and especially by the relative velocity. The development of the responses of structural members and structures is controlled by the constitutive equations and failure criteria of the constituent materials, by the way the constituent materials act together as a system, and by the stress waves present. Furthermore, the first two are influenced by rate effects, and all of them vary widely with the speed of the impact or impulsive loading. This paper deals with the physical meaning of the responses of structures under impact and impulsive loadings. (orig.) [de]

  18. Using pixel intensity as a self-regulating threshold for deterministic image sampling in Milano Retinex: the T-Rex algorithm

    Science.gov (United States)

    Lecca, Michela; Modena, Carla Maria; Rizzi, Alessandro

    2018-01-01

    Milano Retinexes are spatial color algorithms, part of the Retinex family, usually employed for image enhancement. They modify the color of each pixel by taking into account the surrounding colors and their positions, capturing in this way the local spatial color distribution relevant to image enhancement. We present T-Rex (from the words threshold and Retinex), an implementation of Milano Retinex whose main novelty is the use of the pixel intensity as a self-regulating threshold to deterministically sample local color information. Experiments carried out on real-world pictures show that the image enhancement performance of T-Rex is in line with that of the Milano Retinex family: T-Rex increases the brightness, the contrast, and the flatness of the channel distributions of the input image, making the content of pictures acquired under difficult light conditions more intelligible.
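
    A loose sketch of the underlying idea, intensity acting as its own sampling threshold, is given below. It is not the published T-Rex algorithm: the spatial weighting that characterizes Milano Retinexes is deliberately omitted, and the choice of the local reference is an assumption made only for illustration.

        import numpy as np

        def retinex_like_enhance(channel):
            """Illustrative intensity-thresholded, Retinex-style rescaling of one channel.

            For each pixel, only pixels at least as bright as the pixel itself
            (its own intensity acts as the threshold) contribute to the reference
            white; the output is the ratio of the pixel to that reference.
            Spatial weighting is omitted here for brevity.
            """
            channel = channel.astype(float)
            out = np.empty_like(channel)
            h, w = channel.shape
            for y in range(h):
                for x in range(w):
                    p = channel[y, x]
                    sampled = channel[channel >= p]        # self-regulating threshold
                    reference = sampled.mean() if sampled.size else p
                    out[y, x] = p / reference if reference > 0 else 1.0
            return np.clip(out, 0.0, 1.0)

        # Hypothetical tiny grayscale image with values in [0, 1]
        img = np.random.default_rng(0).random((8, 8))
        print(retinex_like_enhance(img).round(2))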

  19. Quantifying diffusion MRI tractography of the corticospinal tract in brain tumors with deterministic and probabilistic methods.

    Science.gov (United States)

    Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S; Henry, Roland G

    2013-01-01

    Diffusion MRI tractography has been increasingly used to delineate white matter pathways in vivo, the leading clinical application being presurgical mapping of eloquent regions. However, there is rarely an opportunity to quantify the accuracy or sensitivity of these approaches for delineating white matter fiber pathways in vivo, owing to the lack of a gold standard. Intraoperative electrical stimulation (IES) provides a gold standard for the location and existence of functional motor pathways that can be used to determine the accuracy and sensitivity of fiber tracking algorithms. In this study we used intraoperative stimulation from brain tumor patients as a gold standard to estimate the sensitivity and accuracy of diffusion tensor MRI (DTI) and q-ball models of diffusion, with deterministic and probabilistic fiber tracking algorithms, for delineation of motor pathways. We used preoperative high angular resolution diffusion MRI (HARDI) data (55 directions, b = 2000 s/mm²) acquired in a clinically feasible time frame from 12 patients who underwent a craniotomy for resection of a cerebral glioma. The corticospinal fiber tracts were delineated with DTI and q-ball models using deterministic and probabilistic algorithms. We used cortical and white matter IES sites as a gold standard for the presence and location of functional motor pathways. Sensitivity was defined as the true positive rate of delineating fiber pathways based on cortical IES sites. For the accuracy and precision of the course of the fiber tracts, we measured the distance between the subcortical stimulation sites and the tractography result. The positive predictive rate of the delineated tracts was assessed by comparing subcortical IES motor function (upper extremity, lower extremity, face) with the connection of the tractography pathway in the motor cortex. We obtained 21 cortical and 8 subcortical IES sites from intraoperative mapping of motor pathways. Probabilistic q-ball had the best
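
    The two figures of merit used above, sensitivity against cortical stimulation sites and distance error against subcortical sites, reduce to simple computations once sites and tracts are matched. The sketch below is generic and uses made-up counts and coordinates; it is not the study's processing pipeline.

        import numpy as np

        def sensitivity(n_stim_sites_reached, n_stim_sites_total):
            """True-positive rate: fraction of stimulation sites for which a tract was delineated."""
            return n_stim_sites_reached / n_stim_sites_total

        def distance_error(stim_site, tract_points):
            """Distance from a subcortical stimulation site to the nearest point of the tract."""
            tract_points = np.asarray(tract_points, dtype=float)
            stim_site = np.asarray(stim_site, dtype=float)
            return float(np.min(np.linalg.norm(tract_points - stim_site, axis=1)))

        # Hypothetical numbers: 18 of 21 cortical sites matched by a delineated tract,
        # and one subcortical site compared against a short streamline (coordinates in mm).
        print("sensitivity:", sensitivity(18, 21))
        print("distance error (mm):", distance_error([10.0, -20.0, 40.0],
                                                      [[9.0, -21.0, 38.5], [11.5, -19.0, 42.0]]))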

  20. Stochasticity and determinism in models of hematopoiesis.

    Science.gov (United States)

    Kimmel, Marek

    2014-01-01

    This chapter presents a novel view of modeling in hematopoiesis, synthesizing deterministic and stochastic approaches. Whereas stochastic models work in situations where chance dominates, for example when the number of cells is small or under random mutations, deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon; these attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. The mixed effects of such structures and of chance events are largely unknown and constitute both a challenge and a promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between the two are frequently noted. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
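
    The division of labour described here, stochastic treatment when cell numbers are small and deterministic treatment for large-scale kinetics, can be made concrete with a toy birth-death model of a cell compartment compared against its mean-field ODE. This is a generic illustration, not one of the chapter's models; the rates are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)
        birth, death = 1.0, 0.9   # arbitrary per-cell rates (birth > death: net growth)

        def gillespie(n0, t_end):
            """Exact stochastic simulation of a linear birth-death process up to t_end."""
            t, n = 0.0, n0
            while n > 0:
                dt = rng.exponential(1.0 / ((birth + death) * n))
                if t + dt > t_end:
                    break
                t += dt
                n += 1 if rng.random() < birth / (birth + death) else -1
            return n

        def mean_field(n0, t):
            """Deterministic counterpart: dn/dt = (birth - death) * n."""
            return n0 * np.exp((birth - death) * t)

        # Small population: chance dominates and individual runs differ widely.
        print("small population, 5 runs:", [gillespie(5, 5.0) for _ in range(5)])
        # Large population: runs concentrate near the deterministic prediction.
        print("large population:", gillespie(5000, 5.0), "vs ODE:", round(mean_field(5000, 5.0)))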