WorldWideScience

Sample records for case minimal deterministic

  1. Nonlinear Boltzmann equation for the homogeneous isotropic case: Minimal deterministic Matlab program

    Science.gov (United States)

    Asinari, Pietro

    2010-10-01

    The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for capturing the late dynamics of the relaxation towards equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by a DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both corrections make it possible to derive very accurate reference solutions for this test case. Moreover, this work aims to distribute an open-source program (called HOMISBOLTZ), which can be redistributed and/or modified for dealing with different applications, under the terms of the GNU General Public License. The program has been purposely designed to be minimal, not only with regard to the reduced number of lines (less than 1000), but also with regard to the coding style (as simple as possible). Program summary: Program title: HOMISBOLTZ. Catalogue identifier: AEGN_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGN_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License. No. of lines in distributed program, including test data, etc.: 23 340. No. of bytes in distributed program, including test data, etc.: 7 635 236. Distribution format: tar

  2. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is
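    The core idea such algorithms build on can be illustrated with a hedged sketch of our own (not Watson's incremental algorithm itself): build a trie for the word set, then merge states with identical (finality, outgoing-transition) signatures bottom-up, which yields the minimal acyclic DFA (DAWG) for the set.

```python
# Hedged sketch: trie construction followed by bottom-up suffix merging.
# This is a non-incremental baseline, not the incremental algorithm of the
# record above; names and the example word list are our own.

class State:
    def __init__(self):
        self.trans = {}    # symbol -> State
        self.final = False

def build_trie(words):
    root = State()
    for w in words:
        s = root
        for ch in w:
            s = s.trans.setdefault(ch, State())
        s.final = True
    return root

def minimize(state, registry=None):
    # Canonicalize children first, then replace this state by the registered
    # representative of its (finality, transitions) signature.
    registry = {} if registry is None else registry
    for ch, child in list(state.trans.items()):
        state.trans[ch] = minimize(child, registry)
    sig = (state.final,
           tuple(sorted((ch, id(t)) for ch, t in state.trans.items())))
    return registry.setdefault(sig, state)

def accepts(dfa, word):
    s = dfa
    for ch in word:
        if ch not in s.trans:
            return False
        s = s.trans[ch]
    return s.final

def count_states(dfa):
    seen, stack = set(), [dfa]
    while stack:
        s = stack.pop()
        if id(s) not in seen:
            seen.add(id(s))
            stack.extend(s.trans.values())
    return len(seen)

dawg = minimize(build_trie(["tap", "taps", "top", "tops"]))
```

    Here the trie has 8 states, but the shared suffixes "p" and "ps" collapse, leaving 5 states in the minimal automaton.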

  3. Comparison of construction algorithms for minimal, acyclic, deterministic, finite-state automata from sets of strings

    NARCIS (Netherlands)

    Daciuk, J; Champarnaud, JM; Maurel, D

    2003-01-01

    This paper compares various methods for constructing minimal, deterministic, acyclic, finite-state automata (recognizers) from sets of words. Incremental, semi-incremental, and non-incremental methods have been implemented and evaluated.

  4. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.

    Science.gov (United States)

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have already been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
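    For contrast with the backward-depth approach above, the classic Moore-style partition-refinement baseline can be sketched in a few lines; this is our own illustration, not the paper's algorithm, and it assumes a complete DFA whose states are all reachable.

```python
# Moore-style partition refinement: start from the accepting/non-accepting
# split and refine until the number of blocks stabilizes. Baseline sketch,
# not the backward-depth/hash algorithm of the record above.

def minimize_dfa(states, alphabet, delta, finals):
    """delta maps (state, symbol) -> state; returns the number of states
    of the minimal equivalent DFA (assumes all states reachable)."""
    part = {s: (s in finals) for s in states}     # block id of each state
    while True:
        # Signature: own block plus the blocks reached on each symbol.
        sig = {s: (part[s], tuple(part[delta[(s, a)]] for a in alphabet))
               for s in states}
        blocks = {}
        new_part = {s: blocks.setdefault(sig[s], len(blocks)) for s in states}
        if len(blocks) == len(set(part.values())):
            return len(blocks)                    # partition stabilized
        part = new_part

# Example: states 1 and 2 are equivalent (both accepting sinks).
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 1, (1, 'b'): 1,
         (2, 'a'): 2, (2, 'b'): 2}
n_min = minimize_dfa([0, 1, 2], ['a', 'b'], delta, {1, 2})
```

    Each refinement round is O(n·|alphabet|) here, and at most n rounds occur, so this naive version is quadratic in the worst case, which is exactly the gap the faster algorithms close.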

  5. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    Science.gov (United States)

    Sochi, Taha

    2016-09-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
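    The energy minimization principle in the abstract above can be illustrated on a toy case of our own construction (not the paper's solver): split a total flow between two parallel Newtonian (Poiseuille-type) pipes by minimizing the viscous dissipation, then check the split against the analytic solution obtained from equal pressure drops.

```python
# Toy illustration: for two parallel pipes with resistances R1, R2 carrying
# flows q1, q2 = Q - q1, minimize dissipation R1*q1^2 + R2*q2^2. The
# minimizer coincides with the conventional solution R1*q1 = R2*q2
# (equal pressure drop). Numbers below are made up.

def dissipation(q1, Q, R1, R2):
    q2 = Q - q1
    return R1 * q1**2 + R2 * q2**2

def minimize_dissipation(Q, R1, R2, iters=200):
    lo, hi = 0.0, Q            # ternary search; dissipation is convex in q1
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if dissipation(m1, Q, R1, R2) < dissipation(m2, Q, R1, R2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

Q, R1, R2 = 1.0, 2.0, 3.0
q1_energy = minimize_dissipation(Q, R1, R2)
q1_exact = Q * R2 / (R1 + R2)  # from equal pressure drops R1*q1 = R2*q2
```

    The agreement of the two values is a one-line instance of the paper's claim that the energy-minimizing field reproduces the field obtained from conservation principles.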

  6. Nonlinear Markov processes: Deterministic case

    International Nuclear Information System (INIS)

    Frank, T.D.

    2008-01-01

    Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: the Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basins of attraction of stationary distributions.

  7. Deterministic Echo State Networks Based Stock Price Forecasting

    Directory of Open Access Journals (Sweden)

    Jingpei Dan

    2014-01-01

    Echo state networks (ESNs), as efficient and powerful computational models for approximating nonlinear dynamical systems, have been successfully applied in financial time series forecasting. Reservoir construction in standard ESNs relies on trial and error in real applications due to a series of randomized model building stages. A novel form of ESN with a deterministically constructed reservoir is competitive with the standard ESN, offering minimal complexity and the possibility of optimizing ESN specifications. In this paper, the forecasting performance of deterministic ESNs is investigated in stock price prediction applications. The experimental results on two benchmark datasets (Shanghai Composite Index and S&P500) demonstrate that deterministic ESNs outperform the standard ESN in both accuracy and efficiency, which indicates the promise of deterministic ESNs for financial prediction.

  8. Deterministic bound for avionics switched networks according to networking features using network calculus

    Directory of Open Access Journals (Sweden)

    Feng HE

    2017-12-01

    The state-of-the-art avionics system adopts switched networks for airborne communications. A major concern in the design of the networks is the end-to-end guarantee ability. Analytic methods have been developed to compute the worst-case delays according to the detailed configurations of flows and networks within the avionics context, such as network calculus and the trajectory approach. A method is still lacking for making a rapid performance estimation from typical switched-networking features, such as networking scale, bandwidth utilization and average flow rate. The goal of this paper is to establish a deterministic upper bound analysis method by using these networking features instead of the complete network configurations. Two deterministic upper bounds are proposed from the network calculus perspective: one is a basic estimation, and the other shows the benefits of a grouping strategy. In addition, a mathematical expression for grouping ability is established based on the concept of network connecting degree, which illustrates the minimal possible grouping benefit. For a fully connected network with 4 switches and 12 end systems, the grouping ability coming from the grouping strategy is 15–20%, which coincides with the statistical data (18–22%) from the actual grouping advantage. Compared with the complete network calculus analysis method for individual flows, the effectiveness of the two deterministic upper bounds is no less than 38% even with remarkably varied packet lengths. Finally, the paper illustrates the design process for an industrial Avionics Full DupleX switched Ethernet (AFDX) networking case according to the two deterministic upper bounds and shows that better control of network connecting, when designing a switched network, can improve the worst-case delays dramatically. Keywords: Deterministic bound, Grouping ability, Network calculus, Networking features, Switched networks
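    For readers unfamiliar with network calculus, the textbook single-flow bound such papers build on can be sketched as follows (standard formula, made-up example numbers): a token-bucket flow with burst sigma and rate rho crossing rate-latency servers (R_i, T_i) sees an end-to-end service curve of rate min(R_i) and latency sum(T_i), giving the delay bound sum(T_i) + sigma/min(R_i) when rho <= min(R_i).

```python
# Standard network-calculus delay bound for a token-bucket-constrained flow
# over a tandem of rate-latency servers. A textbook sketch of the
# ingredients, not the paper's feature-based estimation method.

def end_to_end_delay_bound(sigma, rho, servers):
    """servers: list of (R, T) rate-latency pairs along the path."""
    R_min = min(R for R, _ in servers)
    T_sum = sum(T for _, T in servers)
    assert rho <= R_min, "stability requires the sustained rate to fit"
    # Horizontal deviation between arrival curve sigma + rho*t and the
    # concatenated service curve R_min * max(0, t - T_sum).
    return T_sum + sigma / R_min

# 2-hop example: burst 4 Kb, rate 1 Kb/ms; both servers 2 Kb/ms, 1 ms latency.
d = end_to_end_delay_bound(sigma=4.0, rho=1.0, servers=[(2.0, 1.0), (2.0, 1.0)])
```

    Concatenating the servers before computing the bound (rather than summing per-hop bounds) is the "pay bursts only once" effect that grouping-style analyses exploit.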

  9. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    Science.gov (United States)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. The control of the randomness input is also studied with respect to the stability of the stochastic process solution.

  10. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
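    What the mean-field and deterministic variants approximate is the standard stochastic EnKF update; a minimal scalar (one-dimensional) analysis step, with a toy Gaussian prior and observation of our own choosing, looks like this.

```python
# Minimal scalar EnKF analysis step (perturbed-observation variant), shown
# only to fix ideas about what DMFEnKF approximates; the toy prior,
# observation and noise level are our own assumptions.
import random

def enkf_update(ensemble, y_obs, obs_var, rng):
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)            # scalar Kalman gain
    # Each member is nudged toward a perturbed copy of the observation.
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(0.0, 1.0) for _ in range(5000)]   # N(0,1) prior ensemble
post = enkf_update(prior, y_obs=1.0, obs_var=1.0, rng=rng)
post_mean = sum(post) / len(post)
```

    With unit prior and observation variances the exact posterior mean is 0.5; the ensemble estimate fluctuates around it with O(1/sqrt(n)) sampling error, which is the error the density-based deterministic approximation avoids.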

  11. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody; Tembine, Hamidou; Tempone, Raul

    2016-01-01

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  12. The probabilistic approach and the deterministic licensing procedure

    International Nuclear Information System (INIS)

    Fabian, H.; Feigel, A.; Gremm, O.

    1984-01-01

    If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks ''What does a safe plant look like?'' The answer cannot be given by a probabilistic procedure, but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are not complete, not detailed enough or not consistent, and additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But also in these cases the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)

  13. Deterministic analyses of severe accident issues

    International Nuclear Information System (INIS)

    Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.

    2004-01-01

    Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside probability methods to evaluate the risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents.

  14. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies.

  15. Opportunity-based age replacement policy with minimal repair

    International Nuclear Information System (INIS)

    Jhang, J.P.; Sheu, S.H.

    1999-01-01

    This paper proposes an opportunity-based age replacement policy with minimal repair. The system has two types of failures. Type I failures (minor failures) are removed by minimal repairs, whereas type II failures (catastrophic failures) are removed by replacements. Type I and type II failures are age-dependent. A system is replaced at a type II failure or at the first opportunity after age T, whichever occurs first. The cost of the minimal repair of the system at age z depends on the random part C(z) and the deterministic part c(z). Opportunities arise according to a Poisson process, independent of failures of the component. The expected cost rate is obtained. The optimal T* which minimizes the cost rate is discussed. Various special cases are considered. Finally, a numerical example is given.
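    The cost-rate trade-off behind such policies can be illustrated with a deliberately simplified toy of our own (omitting the opportunity process and the random repair cost): periodic replacement at age T with minimal repairs in between, under a Weibull failure intensity, so the expected number of minimal repairs in [0, T] is (T/eta)**beta.

```python
# Simplified age-replacement toy (our own sketch, not the paper's model):
# replace at age T at cost c_replace; failures in [0, T] are fixed by
# minimal repairs at cost c_repair each. With a Weibull intensity,
# E[# repairs in [0, T]] = (T/eta)**beta, so the long-run cost rate is
# (c_repair*(T/eta)**beta + c_replace) / T.

def cost_rate(T, c_repair, c_replace, beta, eta):
    return (c_repair * (T / eta) ** beta + c_replace) / T

def best_T(c_repair, c_replace, beta, eta):
    grid = [0.05 * k for k in range(1, 1000)]     # crude grid search
    return min(grid, key=lambda T: cost_rate(T, c_repair, c_replace, beta, eta))

# For beta = 2 the analytic optimum is eta * sqrt(c_replace / c_repair).
T_star = best_T(c_repair=1.0, c_replace=5.0, beta=2.0, eta=10.0)
```

    With the numbers above the analytic optimum is 10*sqrt(5) ≈ 22.36, and the grid search lands on the nearest grid point. The paper's policy adds the opportunity arrivals and the random repair-cost component on top of this basic trade-off.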

  16. Deterministic chaos in the processor load

    International Nuclear Information System (INIS)

    Halbiniak, Zbigniew; Jozwiak, Ireneusz J.

    2007-01-01

    In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case

  17. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    2017-01-01

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed, basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed.

  18. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed, basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed.

  19. Deterministic dense coding with partially entangled states

    Science.gov (United States)

    Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni

    2005-01-01

    The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d>2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d²−1. We also find that states with less entanglement can have a greater deterministic communication capacity than other, more entangled states.

  20. Operational State Complexity of Deterministic Unranked Tree Automata

    Directory of Open Access Journals (Sweden)

    Xiaoxue Piao

    2010-08-01

    We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n − 2^(n−1)) − 1 vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.

  1. Scheduling stochastic two-machine flow shop problems to minimize expected makespan

    Directory of Open Access Journals (Sweden)

    Mehdi Heydari

    2013-07-01

    During the past few years, despite tremendous contributions on the deterministic flow shop problem, only a limited number of works have been dedicated to stochastic cases. This paper examines stochastic scheduling problems in a two-machine flow shop environment for expected makespan minimization, where the processing times of jobs are normally distributed. Since jobs have stochastic processing times, to minimize the expected makespan, the expected sum of the second machine's free times is minimized. In other words, by minimizing the waiting times for the second machine, it is possible to reach the minimum of the objective function. A mathematical method is proposed which utilizes the properties of the normal distribution. Furthermore, this method can be used as a heuristic for other distributions, as long as the means and variances are available. The performance of the proposed method is explored using some numerical examples.
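    As a hedged stand-in for the paper's analytic method, one can order the jobs by Johnson's rule applied to the mean processing times and estimate the expected makespan by Monte Carlo simulation; the job data below are made up.

```python
# Hedged Monte Carlo sketch: Johnson's rule on the *means* is a heuristic
# stand-in for the paper's analytic approach, not its actual method.
import random

def johnson_order(jobs):
    # jobs: list of ((mu1, sd1), (mu2, sd2)) per job; Johnson's rule on means.
    front = sorted((j for j in jobs if j[0][0] <= j[1][0]), key=lambda j: j[0][0])
    back = sorted((j for j in jobs if j[0][0] > j[1][0]), key=lambda j: -j[1][0])
    return front + back

def expected_makespan(jobs, n_sim=2000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        t1 = t2 = 0.0
        for (mu1, sd1), (mu2, sd2) in jobs:
            t1 += max(0.0, rng.gauss(mu1, sd1))        # machine 1 finishes
            t2 = max(t1, t2) + max(0.0, rng.gauss(mu2, sd2))  # machine 2
        total += t2
    return total / n_sim

jobs = [((3, 0.5), (6, 0.5)), ((5, 0.5), (2, 0.5)), ((1, 0.5), (2, 0.5))]
order = johnson_order(jobs)
cmax = expected_makespan(order)
```

    With the mean processing times alone the makespan of this order is 12; the simulated expectation sits slightly above it because the max operations in the recursion are convex, which is precisely the effect the paper's free-time analysis accounts for.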

  2. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé; Vigneron, Antoine E.

    2013-01-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance.
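    The problem statement can be made concrete with a naive dynamic program for the unweighted special case (a brute-force sketch of our own, not the paper's optimal O(n log n) algorithm): each step's best level is the midpoint of its interval's y-range, and the DP minimizes the maximum error over intervals.

```python
# Naive DP for fitting a k-step function to points sorted by x, unweighted
# special case: the optimal level for one step over a group of points is the
# midpoint of their y-range, with error (max - min) / 2. Cubic-time sketch,
# far from the O(n log n) algorithm of the record above.

def min_max_error(ys, k):
    """ys: point y-values in x-order. Minimal achievable max vertical error
    using at most k steps."""
    n = len(ys)
    INF = float("inf")

    def cost(i, j):                     # one step covering ys[i:j]
        seg = ys[i:j]
        return (max(seg) - min(seg)) / 2

    # dp[m][j]: best value covering the first j points with m steps.
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(m - 1, j):
                dp[m][j] = min(dp[m][j], max(dp[m - 1][i], cost(i, j)))
    return min(dp[m][n] for m in range(1, k + 1))

err = min_max_error([0, 1, 10, 11], 2)
```

    Two steps split the points into {0, 1} and {10, 11}, each fitted at its midpoint with error 0.5; a single step must span the whole y-range and pays 5.5.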

  3. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2008-01-01

    We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.

  4. The dialectical thinking about deterministic and probabilistic safety analysis

    International Nuclear Information System (INIS)

    Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong

    2005-01-01

    There are two methods in designing and analysing the safety performance of a nuclear power plant, the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plant is based on the deterministic method. It has been proved in practice that the deterministic method is effective on current nuclear power plant. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the systems and constructions of the plant. PSA can be seen, in principle, to provide a broader and realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trend of above two methods are reviewed and summarized in brief. Based on the discussion of two application cases - one is the changes to specific design provisions of the general design criteria (GDC) and the other is the risk-informed categorization of structure, system and component, it can be concluded that the deterministic method and probabilistic method are dialectical and unified, and that they are being merged into each other gradually, and being used in coordination. (authors)

  5. Height-Deterministic Pushdown Automata

    DEFF Research Database (Denmark)

    Nowotka, Dirk; Srba, Jiri

    2007-01-01

    We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several of such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata.

  6. Deterministic behavioural models for concurrency

    DEFF Research Database (Denmark)

    Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn

    1993-01-01

    This paper offers three candidates for a deterministic, noninterleaving, behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets.

  7. Shock-induced explosive chemistry in a deterministic sample configuration.

    Energy Technology Data Exchange (ETDEWEB)

    Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith

    2005-10-01

    Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.

  8. Risk-based and deterministic regulation

    International Nuclear Information System (INIS)

    Fischer, L.E.; Brown, N.W.

    1995-07-01

    Both risk-based and deterministic methods are used for regulating the nuclear industry to protect public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.

  9. From Ordinary Differential Equations to Structural Causal Models: the deterministic case

    NARCIS (Netherlands)

    Mooij, J.M.; Janzing, D.; Schölkopf, B.; Nicholson, A.; Smyth, P.

    2013-01-01

    We show how, and under which conditions, the equilibrium states of a first-order Ordinary Differential Equation (ODE) system can be described with a deterministic Structural Causal Model (SCM). Our exposition sheds more light on the concept of causality as expressed within the framework of

  10. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
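    The flavour of pseudo-determinism can be illustrated with a toy of our own (not an example from the talk): a routine that uses internal randomness (Miller-Rabin primality tests) yet, with high probability, returns the same canonical answer on every run, namely the smallest prime at or above n.

```python
# Toy pseudo-deterministic routine: the search order is deterministic, the
# primality test is randomized, and the output is unique w.h.p. Our own
# illustration of the concept, not taken from the talk.
import random

def is_probable_prime(n, rng, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):                # Miller-Rabin witnesses
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                   # definitely composite
    return True                            # prime with prob >= 1 - 4**(-rounds)

def next_prime(n, rng):
    m = n
    while not is_probable_prime(m, rng):
        m += 1
    return m

# Two independent randomness sources, same canonical answer (w.h.p.):
a = next_prime(90, random.Random(1))
b = next_prime(90, random.Random(2))
```

    An observer who only sees inputs and outputs cannot tell this apart from a deterministic prime finder, even though each run flips fresh coins internally.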

  11. Deterministic and stochastic models for middle east respiratory syndrome (MERS)

    Science.gov (United States)

    Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning

    2018-03-01

    World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, across 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. Transmission occurs directly, through contact between an infected and a non-infected individual, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of free virus in the environment. Mathematical modeling is used to describe the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and analyze the steady-state conditions. The stochastic model, a Continuous Time Markov Chain (CTMC), is used to predict future states by means of random variables. For the models that were built, the threshold value for the deterministic and stochastic models is obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models with several different parameters are shown, and the probability of disease extinction is compared for several initial conditions.
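    The two modelling styles contrasted in the abstract can be sketched on a plain SIR skeleton with assumed parameters (the paper's MERS model has more compartments and different rates): a deterministic ODE integrated with Euler steps, and a stochastic CTMC simulated with the Gillespie algorithm.

```python
# Deterministic vs. stochastic epidemic sketch on an SIR skeleton. All
# parameters are assumptions for illustration, not the paper's MERS model.
import random

beta, gamma, N = 0.3, 0.1, 1000.0   # assumed transmission/recovery rates

def sir_deterministic(S, I, R, t_end, dt=0.01):
    t = 0.0
    while t < t_end:                # explicit Euler integration of the ODE
        new_inf = beta * S * I / N
        S, I, R = (S - new_inf * dt,
                   I + (new_inf - gamma * I) * dt,
                   R + gamma * I * dt)
        t += dt
    return S, I, R

def sir_gillespie(S, I, R, t_end, rng):
    t = 0.0
    while t < t_end and I > 0:      # CTMC: exponential waits, one event each
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        t += rng.expovariate(total)
        if t >= t_end:
            break
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1     # infection event
        else:
            I, R = I - 1, R + 1     # recovery event
    return S, I, R

S_d, I_d, R_d = sir_deterministic(990.0, 10.0, 0.0, t_end=50.0)
S_s, I_s, R_s = sir_gillespie(990, 10, 0, t_end=50.0, rng=random.Random(0))
```

    Unlike the deterministic run, the CTMC can hit I = 0 and go extinct, which is exactly why the stochastic formulation yields an extinction probability that the ODE cannot.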

  12. Minimally Invasive Surgical Treatment of Acute Epidural Hematoma: Case Series

    Directory of Open Access Journals (Sweden)

    Weijun Wang

    2016-01-01

Background and Objective. Although minimally invasive surgical treatment of acute epidural hematoma attracts increasing attention, no generalized indications for the surgery have been adopted. This study aimed to evaluate the effects of minimally invasive surgery in acute epidural hematoma with various hematoma volumes. Methods. Minimally invasive puncture and aspiration surgery was performed in 59 cases of acute epidural hematoma with various hematoma volumes (13–145 mL); postoperative follow-up was 3 months. Clinical data, including surgical trauma, surgery time, complications, and outcome of hematoma drainage, recovery, and Barthel index scores, were assessed, as well as treatment outcome. Results. Surgical trauma was minimal and surgery time was short (10–20 minutes); no anesthesia accidents or surgical complications occurred. Two patients died. Drainage was completed within 7 days in the remaining 57 cases. Barthel index scores of ADL were ≤40 (n=1), 41–60 (n=1), and >60 (n=55); scores of 100 were obtained in 48 cases, with no dysfunctions. Conclusion. Satisfactory results can be achieved with minimally invasive surgery in treating acute epidural hematoma with hematoma volumes ranging from 13 to 145 mL. For patients with hematoma volume >50 mL and even cerebral herniation, flexible application of minimally invasive surgery would help improve treatment efficacy.

  13. Deterministic chaos in entangled eigenstates

    Science.gov (United States)

    Schlegel, K. G.; Förster, S.

    2008-05-01

We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop, for large times, into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.

  14. Deterministic chaos in entangled eigenstates

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, K.G. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)], E-mail: guenter.schlegel@arcor.de; Foerster, S. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)

    2008-05-12

We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop, for large times, into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.

  15. Deterministic chaos in entangled eigenstates

    International Nuclear Information System (INIS)

    Schlegel, K.G.; Foerster, S.

    2008-01-01

We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop, for large times, into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.

  16. Treatment of cervical agenesis with minimally invasive therapy: Case report

    Directory of Open Access Journals (Sweden)

    Azami Denas Azinar

    2017-11-01

Cervical agenesis is a very rare congenital disorder in which the cervix is not formed. Because the cervical canal is obstructed, menstrual blood cannot drain. We report the case of a 19-year-old unmarried woman with endometrioma, hematometra, and cervical agenesis, who underwent combined laparoscopic and transvaginal surgery with laparoscopic cystectomy, creation of a neocervix, and placement of a No. 24F catheter in the new cervix. She now menstruates normally. Minimally invasive therapy is recommended in cases of congenital anomaly to preserve reproductive function. Keywords: cervical agenesis, minimally invasive therapy

  17. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
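The contrast drawn here between derivative-based propagation and brute-force statistics can be sketched on a toy response function. The actual GRESS/ADGEN systems generate derivatives by computer calculus; the two-input model and the finite-difference derivatives below are stand-in assumptions for illustration.

```python
import math
import random

def f(x, y):
    """Toy response model standing in for the borehole flow code
    (the real borehole problem has more inputs; two suffice here)."""
    return x * x * math.sqrt(y)

def dua_moments(mu_x, mu_y, sd_x, sd_y, h=1e-5):
    """First-order deterministic uncertainty propagation: one nominal run
    plus finite-difference derivatives (a handful of model executions)."""
    f0 = f(mu_x, mu_y)
    dfdx = (f(mu_x + h, mu_y) - f0) / h
    dfdy = (f(mu_x, mu_y + h) - f0) / h
    var = (dfdx * sd_x) ** 2 + (dfdy * sd_y) ** 2
    return f0, math.sqrt(var)

def mc_moments(mu_x, mu_y, sd_x, sd_y, runs=20000, seed=0):
    """Brute-force statistical propagation for comparison (many model runs)."""
    rng = random.Random(seed)
    vals = [f(rng.gauss(mu_x, sd_x), rng.gauss(mu_y, sd_y)) for _ in range(runs)]
    mean = sum(vals) / runs
    var = sum((v - mean) ** 2 for v in vals) / (runs - 1)
    return mean, math.sqrt(var)
```

The derivative-based estimate needs three model runs here versus tens of thousands for the statistical one, mirroring the two-versus-fifty comparison reported in the record.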

  18. Integrated Deterministic-Probabilistic Safety Assessment Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.

    2014-02-01

IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address their respective sources of uncertainty, enabling risk-informed decision-making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)

  19. A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs

    Directory of Open Access Journals (Sweden)

    Jooyong Yi

    2013-09-01

Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising, because executing reverse code can restore the previous states of a program without state saving. Two methods of generating reverse code can be found in the literature: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods, including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.

  20. Exploring the stochastic and deterministic aspects of cyclic emission variability on a high speed spark-ignition engine

    International Nuclear Information System (INIS)

    Karvountzis-Kontakiotis, A.; Dimaratos, A.; Ntziachristos, L.; Samaras, Z.

    2017-01-01

This study contributes to the understanding of cycle-to-cycle emissions variability (CEV) in premixed spark-ignition combustion engines. A number of experimental investigations of cycle-to-cycle combustion variability (CCV) exist in the published literature; however, only a handful of studies deal with CEV. This study experimentally investigates the impact of CCV on the CEV of NO and CO, utilizing experimental results from a high-speed spark-ignition engine. Both CEV and CCV are shown to comprise a deterministic and a stochastic component. Results show that at maximum brake torque (MBT) operation, the indicated mean effective pressure (IMEP) is maximized and its coefficient of variation (COV_IMEP) is minimized, leading to minimum variation of NO. NO variability, and hence mean NO levels, can be reduced by more than 50% and 30%, respectively, at advanced ignition timing, by controlling the deterministic CCV using cycle-resolved combustion control. The deterministic component of CEV increases in lean combustion (lambda = 1.12), and this increases overall NO variability. CEV was also found to decrease with engine load. At steady speed, increasing throttle position from 20% to 80% decreased COV_IMEP, COV_NO and COV_CO by 59%, 46%, and 6%, respectively. Highly resolved engine control, by means of cycle-to-cycle combustion control, appears key to limiting the deterministic part of cyclic variability and thereby reducing overall emission levels. - Highlights: • Engine emissions variability comprises both stochastic and deterministic components. • Lean and diluted combustion conditions increase emissions variability. • Advanced ignition timing enhances the deterministic component of variability. • Load increase decreases the deterministic component of variability. • The deterministic component can be reduced by highly resolved combustion control.
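The variability metric used throughout this record, the coefficient of variation (COV), is straightforward to compute. The synthetic per-cycle data below are assumptions chosen only to show that emissions typically vary far more from cycle to cycle than IMEP does; they are not measurements from the engine in the study.

```python
import math
import random

def cov_percent(samples):
    """Coefficient of variation, the standard cycle-to-cycle variability
    metric: 100 * sample standard deviation / mean."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return 100.0 * sd / mean

# Synthetic per-cycle data: IMEP in bar, NO in ppm (illustrative values only).
rng = random.Random(42)
imep = [rng.gauss(9.0, 0.18) for _ in range(300)]    # ~2% COV: stable combustion
no = [rng.gauss(1500.0, 300.0) for _ in range(300)]  # ~20% COV: emissions vary far more
```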

  1. Deterministic and stochastic evolution equations for fully dispersive and weakly nonlinear waves

    DEFF Research Database (Denmark)

    Eldeberky, Y.; Madsen, Per A.

    1999-01-01

This paper presents a new and more accurate set of deterministic evolution equations for the propagation of fully dispersive, weakly nonlinear, irregular, multidirectional waves. The equations are derived directly from the Laplace equation with leading order nonlinearity in the surface boundary... is significantly underestimated for larger wave numbers. In the present work we correct this inconsistency. In addition to the improved deterministic formulation, we present improved stochastic evolution equations in terms of the energy spectrum and the bispectrum for multidirectional waves. The deterministic... and stochastic formulations are solved numerically for the case of cross shore motion of unidirectional waves and the results are verified against laboratory data for wave propagation over submerged bars and over a plane slope. Outside the surf zone the two model predictions are generally in good agreement......

  2. Deterministic extraction from weak random sources

    CERN Document Server

    Gabizon, Ariel

    2011-01-01

    In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.

  3. Precision production: enabling deterministic throughput for precision aspheres with MRF

    Science.gov (United States)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  4. Experimental aspects of deterministic secure quantum key distribution

    Energy Technology Data Exchange (ETDEWEB)

    Walenta, Nino; Korn, Dietmar; Puhlmann, Dirk; Felbinger, Timo; Hoffmann, Holger; Ostermeyer, Martin [Universitaet Potsdam (Germany). Institut fuer Physik; Bostroem, Kim [Universitaet Muenster (Germany)

    2008-07-01

Most common protocols for quantum key distribution (QKD) use non-deterministic algorithms to establish a shared key. But deterministic implementations can allow for higher net key transfer rates and eavesdropping detection rates. The Ping-Pong coding scheme by Bostroem and Felbinger [1] employs deterministic information encoding in entangled states, with its characteristic quantum channel from Bob to Alice and back to Bob. Based on a table-top implementation of this protocol with polarization-entangled photons, fundamental advantages as well as practical issues like transmission losses, photon storage and requirements for progress towards longer transmission distances are discussed and compared to non-deterministic protocols. Modifications of common protocols towards deterministic quantum key distribution are addressed.

  5. A critical evaluation of deterministic methods in size optimisation of reliable and cost effective standalone hybrid renewable energy systems

    International Nuclear Information System (INIS)

    Maheri, Alireza

    2014-01-01

Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and non-deterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with the two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of the safety factors on the reliability and the cost is investigated. In performing the reliability analysis, several reliability measures, namely unmet load, blackout durations (total, maximum and average) and mean time between failures, are considered. It is shown that the traditional methods of accounting for the effect of uncertainties in deterministic designs, such as designing for an autonomy period and employing safety factors, have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing the diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
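The paper's central check, running a Monte Carlo simulation over a deterministically sized design to see what reliability a safety factor actually buys, can be sketched as follows. The load, generation statistics and sizing rule are invented for illustration and are not the wind–PV–battery model of the paper.

```python
import random

def unmet_load_probability(safety_factor, days=20000, seed=7):
    """Monte Carlo reliability check of a deterministically sized store.
    Daily renewable generation is uncertain; the store is sized as
    safety_factor times the *mean* daily deficit (illustrative numbers)."""
    rng = random.Random(seed)
    load = 10.0                                   # kWh/day, fixed demand
    mean_gen, sd_gen = 8.0, 3.0                   # kWh/day, uncertain generation
    store = safety_factor * (load - mean_gen)     # deterministic sizing rule
    unmet = 0
    for _ in range(days):
        gen = max(0.0, rng.gauss(mean_gen, sd_gen))
        if gen + store < load:                    # stored energy cannot cover the day
            unmet += 1
    return unmet / days
```

Raising the safety factor from 1 to 3 cuts the unmet-load probability from roughly one day in two to under one in ten in this toy model, but the mapping from safety factor to reliability is only revealed by the simulation, which is the paper's point.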

  6. Nonterminals, homomorphisms and codings in different variations of OL-systems. I. Deterministic systems

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto

    1974-01-01

    The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...

  7. Minimally invasive surgical treatment of Bertolotti's Syndrome: case report.

    Science.gov (United States)

    Ugokwe, Kene T; Chen, Tsu-Lee; Klineberg, Eric; Steinmetz, Michael P

    2008-05-01

    This article aims to provide more insight into the presentation, diagnosis, and treatment of Bertolotti's syndrome, which is a rare spinal disorder that is very difficult to recognize and diagnose correctly. The syndrome was first described by Bertolotti in 1917 and affects approximately 4 to 8% of the population. It is characterized by an enlarged transverse process at the most caudal lumbar vertebra with a pseudoarticulation of the transverse process and the sacral ala. It tends to present with low back pain and may be confused with facet and sacroiliac joint disease. In this case report, we describe a 40-year-old man who presented with low back pain and was eventually diagnosed with Bertolotti's syndrome. The correct diagnosis was made based on imaging studies which included computed tomographic scans, plain x-rays, and magnetic resonance imaging scans. The patient experienced temporary relief when the abnormal pseudoarticulation was injected with a cocktail consisting of lidocaine and steroids. In order to minimize the trauma associated with surgical treatment, a minimally invasive approach was chosen to resect the anomalous transverse process with the accompanying pseudoarticulation. The patient did well postoperatively and had 97% resolution of his pain at 6 months after surgery. As with conventional surgical approaches, a complete knowledge of anatomy is required for minimally invasive spine surgery. This case is an example of the expanding utility of minimally invasive approaches in treating spinal disorders.

  8. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    Science.gov (United States)

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.

  9. Deterministic methods in radiation transport

    International Nuclear Information System (INIS)

    Rice, A.F.; Roussin, R.W.

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community

  10. Ordinal optimization and its application to complex deterministic problems

    Science.gov (United States)

    Yang, Mike Shang-Yu

    1998-10-01

We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexities remains to tax the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
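The core of Ordinal Optimization, that ranking noisy evaluations and softening the goal to "a good-enough design" remains effective even when the noise is large, can be seen in a few lines. The design values and noise level below are illustrative assumptions, not the turbine-blade model.

```python
import random

def ordinal_select(true_values, noise_sd, top_k, seed=3):
    """Rank designs by a single cheap noisy evaluation each and keep the
    observed top-k (goal softening: we ask for *a* good design, not *the* best)."""
    rng = random.Random(seed)
    noisy = [(v + rng.gauss(0.0, noise_sd), idx) for idx, v in enumerate(true_values)]
    noisy.sort(reverse=True)
    return {idx for _, idx in noisy[:top_k]}

# 1000 designs with true performance 0..999; noise comparable to the full range.
truth = list(range(1000))
selected = ordinal_select(truth, noise_sd=200.0, top_k=50)
true_top = set(range(950, 1000))   # the actual best 5%
overlap = len(selected & true_top)
```

Even with noise on the order of 20% of the whole performance range, the observed top 5% is strongly enriched in truly good designs, far above the roughly 2.5 matches expected by chance.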

  11. Design of deterministic interleaver for turbo codes

    International Nuclear Information System (INIS)

    Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.

    2008-01-01

    The choice of suitable interleaver for turbo codes can improve the performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short frame turbo codes is considered in this paper. The main characteristic of this class of deterministic interleaver is that their algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that the interleaver designed in this manner improves the minimum distance or reduces the multiplicity of first few spectral lines of minimum distance spectrum. Finally we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames to improve the decoding capability of MAP (Maximum A Posteriori) probability decoder. Our solution to design a deterministic interleaver outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
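A minimal sketch of an algebraic (deterministic) interleaver in the spirit described here: a linear permutation generator with a circular shift, where coprimality guarantees a permutation and the multiplier controls how far adjacent bits are spread. This permutation family and the figure of merit are assumptions for illustration, not the exact generator selection of the paper.

```python
from math import gcd

def deterministic_interleaver(n, k, shift):
    """Algebraic interleaver pi(i) = (k*i + shift) mod n. With gcd(k, n) = 1
    the map is a permutation, and a multiplier k near a non-trivial fraction
    of n spreads neighbouring input positions far apart."""
    if gcd(k, n) != 1:
        raise ValueError("k must be coprime with n to give a permutation")
    return [(k * i + shift) % n for i in range(n)]

def min_spread(perm):
    """Smallest circular output distance between inputs that were adjacent:
    a crude figure of merit for breaking up low-weight input patterns."""
    n = len(perm)
    return min(min(abs(perm[i + 1] - perm[i]), n - abs(perm[i + 1] - perm[i]))
               for i in range(n - 1))
```

For a frame of n = 64 with k = 27 and shift = 5, every pair of adjacent input bits ends up 27 positions apart in the interleaved frame.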

  12. Proving Non-Deterministic Computations in Agda

    Directory of Open Access Journals (Sweden)

    Sergio Antoy

    2017-01-01

We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, are in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct and competing approaches to incorporating the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.

  13. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    Science.gov (United States)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

Probabilistic seismic hazard assessment (PSHA), usually adopted in the drafting of seismic codes, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach is that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused liquefaction phenomena unusually widespread for a magnitude less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions

  14. Progress in nuclear well logging modeling using deterministic transport codes

    International Nuclear Information System (INIS)

    Kodeli, I.; Aldama, D.L.; Maucec, M.; Trkov, A.

    2002-01-01

Further studies, in continuation of the work presented in 2001 in Portoroz, were performed in order to study and improve the performance, precision and domain of application of deterministic transport codes with respect to oil well logging analysis. These codes are in particular expected to complement Monte Carlo solutions, since they can provide a detailed particle flux distribution in the whole geometry in very reasonable CPU time; real-time calculation can be envisaged. The performance of deterministic transport methods was compared to that of the Monte Carlo method. The IRTMBA generic benchmark was analysed using the codes MCNP-4C and DORT/TORT. Centric as well as eccentric casings were considered, using a 14 MeV point neutron source and NaI scintillation detectors. Neutron and gamma spectra were compared at two detector positions. (author)

  15. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    Science.gov (United States)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.
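The kind of test behind such a conclusion can be illustrated on a system where the answer is known: the largest Lyapunov exponent of the logistic map, computed directly from the map's derivative. (River-flow studies must instead estimate the exponent from reconstructed phase space, e.g. with the Rosenstein algorithm; the direct computation below is a didactic stand-in.)

```python
import math

def logistic_lyapunov(r, x0=0.3, n_transient=500, n_iter=5000):
    """Average log-derivative along an orbit of the logistic map
    x -> r*x*(1-x): a positive value indicates deterministic chaos,
    a negative one regular (non-chaotic) dynamics."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
        x = r * x * (1.0 - x)
    return acc / n_iter
```

At r = 4 the estimate settles near ln 2 ≈ 0.693 (chaos); at r = 2.5 it is negative (a stable fixed point).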

  16. Deterministic chaos in the pitting phenomena of passivable alloys

    International Nuclear Information System (INIS)

    Hoerle, Stephane

    1998-01-01

It was shown that electrochemical noise recorded in stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the severity of the material/solution combination. Thus, electrolyte composition (the [Cl⁻]/[NO₃⁻] ratio, pH), passive film thickness or alloy composition can change the deterministic features. A single pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which is a hint of a change with time in the pit behavior (propagation speed or mean). Modifications of electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors with different features (bifurcations) are also evidenced. Such bifurcations illuminate various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effect of the various parameters quite well. The analysis of the chaotic behaviors of a pit leads to a better understanding of propagation mechanisms and gives tools for pit monitoring. (author) [fr]

  17. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-12-01

    This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by standard statistical methods and by the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions, compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
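As a rough illustration of the derivative-based idea (not Worley's actual borehole model or code), the following sketch propagates input uncertainties through a hypothetical smooth response using first-order sensitivities, and compares the single-run derivative estimate with a conventional sampling estimate:

```python
import math
import random

# Hypothetical smooth response f(x1, x2); stands in for the model output.
def f(x1, x2):
    return x1 ** 2 * math.exp(0.5 * x2)

def grad_f(x1, x2):
    # Analytic partial derivatives, standing in for direct/adjoint sensitivities.
    return (2 * x1 * math.exp(0.5 * x2), 0.5 * x1 ** 2 * math.exp(0.5 * x2))

# Nominal input values and their standard deviations (illustrative numbers).
mu = (2.0, 1.0)
sigma = (0.05, 0.10)

# Derivative-based propagation: one model evaluation plus sensitivities.
g1, g2 = grad_f(*mu)
var_dua = (g1 * sigma[0]) ** 2 + (g2 * sigma[1]) ** 2

# Conventional statistical propagation: many random model evaluations.
random.seed(0)
samples = [f(random.gauss(mu[0], sigma[0]), random.gauss(mu[1], sigma[1]))
           for _ in range(20000)]
mean_mc = sum(samples) / len(samples)
var_mc = sum((s - mean_mc) ** 2 for s in samples) / (len(samples) - 1)
```

For small input uncertainties the single-run derivative estimate closely tracks the many-run statistical estimate, which is the trade-off the DUA method exploits.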

  18. Deterministic indexing for packed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye

    2017-01-01

    Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time...... or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ...

  19. Minimally invasive treatment of hepatic adenoma in special cases

    Energy Technology Data Exchange (ETDEWEB)

    Nasser, Felipe; Affonso, Breno Boueri; Galastri, Francisco Leonardo [Hospital Israelita Albert Einstein, São Paulo, SP (Brazil); Odisio, Bruno Calazans [MD Anderson Cancer Center, Houston (United States); Garcia, Rodrigo Gobbo [Hospital Israelita Albert Einstein, São Paulo, SP (Brazil)

    2013-07-01

    Hepatocellular adenoma is a rare benign tumor that was increasingly diagnosed in the 1980s and 1990s. This increase has been attributed to the widespread use of oral hormonal contraceptives and the broader availability and advances of radiological tests. We report two cases of patients with large hepatic adenomas who were subjected to minimally invasive treatment using arterial embolization. One case underwent elective embolization due to the presence of multiple adenomas and recent bleeding in one of the nodules. The second case was a victim of blunt abdominal trauma with rupture of a hepatic adenoma and clinical signs of hemodynamic shock secondary to intra-abdominal hemorrhage, which required urgent treatment. The development of minimally invasive locoregional treatments, such as arterial embolization, introduced novel approaches for the treatment of individuals with hepatic adenoma. The mortality rate of emergency resection of ruptured hepatic adenomas varies from 5 to 10%, but this rate decreases to 1% when resection is elective. Arterial embolization of hepatic adenomas in the presence of bleeding is a subject of debate. This observation suggests a role for transarterial embolization in the treatment of ruptured and non-ruptured adenomas, which might reduce the indication for surgery in selected cases and decrease morbidity and mortality. Magnetic resonance imaging showed a reduction of the embolized lesions and a significant avascular component 30 days after treatment in the two cases in this report. No novel lesions were observed, and a reduction in the embolized lesions was demonstrated upon radiological assessment at a 12-month follow-up examination.

  20. Minimally invasive treatment of hepatic adenoma in special cases

    International Nuclear Information System (INIS)

    Nasser, Felipe; Affonso, Breno Boueri; Galastri, Francisco Leonardo; Odisio, Bruno Calazans; Garcia, Rodrigo Gobbo

    2013-01-01

    Hepatocellular adenoma is a rare benign tumor that was increasingly diagnosed in the 1980s and 1990s. This increase has been attributed to the widespread use of oral hormonal contraceptives and the broader availability and advances of radiological tests. We report two cases of patients with large hepatic adenomas who were subjected to minimally invasive treatment using arterial embolization. One case underwent elective embolization due to the presence of multiple adenomas and recent bleeding in one of the nodules. The second case was a victim of blunt abdominal trauma with rupture of a hepatic adenoma and clinical signs of hemodynamic shock secondary to intra-abdominal hemorrhage, which required urgent treatment. The development of minimally invasive locoregional treatments, such as arterial embolization, introduced novel approaches for the treatment of individuals with hepatic adenoma. The mortality rate of emergency resection of ruptured hepatic adenomas varies from 5 to 10%, but this rate decreases to 1% when resection is elective. Arterial embolization of hepatic adenomas in the presence of bleeding is a subject of debate. This observation suggests a role for transarterial embolization in the treatment of ruptured and non-ruptured adenomas, which might reduce the indication for surgery in selected cases and decrease morbidity and mortality. Magnetic resonance imaging showed a reduction of the embolized lesions and a significant avascular component 30 days after treatment in the two cases in this report. No novel lesions were observed, and a reduction in the embolized lesions was demonstrated upon radiological assessment at a 12-month follow-up examination.

  1. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading...... to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models...

  2. Deterministic Compressed Sensing

    Science.gov (United States)

    2011-11-01

    [Fragmented front-matter excerpt.] The table of contents lists sections on digital communications and group testing, and a table of bounds for deterministic design matrices (all bounds ignore the O() constants), together with an Iterative Hard Thresholding algorithm. A surviving text fragment notes that compressed sensing is information-theoretically possible using any (2k, )-RIP sensing matrix, citing the celebrated results of Candès, Romberg and Tao.

  3. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-01-01

    Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig

  4. Streamflow disaggregation: a nonlinear deterministic approach

    Directory of Open Access Journals (Sweden)

    B. Sivakumar

    2004-01-01

    This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and small numbers of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
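The two steps of the approach, phase-space reconstruction and local (nearest-neighbor) approximation, can be sketched generically as follows; the parameter values and the toy series are illustrative, not the paper's Mississippi data or settings:

```python
def embed(series, dim, tau=1):
    # Step (1): reconstruct the scalar series in a dim-dimensional phase space
    # using time-delay vectors (delay tau).
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def predict_next(series, dim=3, k=4):
    # Step (2): local approximation; average the successors of the k nearest
    # embedded neighbours of the most recent state (delay tau = 1 assumed).
    vectors = embed(series, dim)
    query = vectors[-1]
    candidates = sorted(
        (sum((a - b) ** 2 for a, b in zip(v, query)), i)
        for i, v in enumerate(vectors[:-1]))   # keep only vectors with a successor
    return sum(series[i + dim] for _, i in candidates[:k]) / k

# Demo on a deterministic chaotic series (logistic map), standing in for flow data.
series = [0.3]
for _ in range(1999):
    series.append(4.0 * series[-1] * (1.0 - series[-1]))
predicted = predict_next(series)
actual = 4.0 * series[-1] * (1.0 - series[-1])
```

Applied to a chaotic but deterministic series, the local approximation predicts the next value accurately; good prediction skill at low embedding dimension is the kind of evidence the study uses to infer nonlinear determinism.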

  5. Recognition of deterministic ETOL languages in logarithmic space

    DEFF Research Database (Denmark)

    Jones, Neil D.; Skyum, Sven

    1977-01-01

    It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian...

  6. Pseudo-random number generator based on asymptotic deterministic randomness

    Science.gov (United States)

    Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming

    2008-06-01

    A novel approach to generating pseudorandom bit sequences from an asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic multi-value correspondence of the asymptotic deterministic randomness constructed from a piecewise linear map and a noninvertible nonlinear transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness, such as the stationary probability density function and random-like behavior, are investigated numerically. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
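The general idea of extracting pseudorandom bits from a piecewise linear map can be sketched as follows; this is a generic skew tent map with threshold bit extraction, an illustration only, not the Letter's asymptotic deterministic randomness construction:

```python
def skew_tent(x, a=0.49):
    # Piecewise linear (skew tent) map on [0, 1]; its invariant density is uniform.
    return x / a if x < a else (1.0 - x) / (1.0 - a)

def prbg(seed, n_bits, a=0.49, burn_in=100):
    # Iterate the map and emit one bit per iterate by thresholding at 0.5.
    x = seed
    for _ in range(burn_in):        # discard the transient
        x = skew_tent(x, a)
    bits = []
    for _ in range(n_bits):
        x = skew_tent(x, a)
        bits.append(1 if x >= 0.5 else 0)
    return bits

bits = prbg(0.123456789, 10000)
balance = sum(bits) / len(bits)     # close to 0.5 for a balanced generator
```

Because the skew tent map preserves the uniform density, thresholding at 0.5 yields roughly balanced bits; a production generator would add the noninvertible transform and discretization steps the Letter analyzes, plus statistical testing.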

  7. Pseudo-random number generator based on asymptotic deterministic randomness

    International Nuclear Information System (INIS)

    Wang Kai; Pei Wenjiang; Xia Haishan; Cheung Yiuming

    2008-01-01

    A novel approach to generating pseudorandom bit sequences from an asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic multi-value correspondence of the asymptotic deterministic randomness constructed from a piecewise linear map and a noninvertible nonlinear transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness, such as the stationary probability density function and random-like behavior, are investigated numerically. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.

  8. Equivalence relations between deterministic and quantum mechanical systems

    International Nuclear Information System (INIS)

    Hooft, G.

    1988-01-01

    Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that apparently indeterministic behavior typical for a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale

  9. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    International Nuclear Information System (INIS)

    Elkhoraibi, T.; Hashemi, A.; Ostadan, F.

    2014-01-01

    Soil-structure interaction (SSI) is a major step in the seismic design of massive and stiff structures typical of nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently, most SSI analyses are performed deterministically, incorporating a limited range of variation in soil and structural properties and without consideration of ground motion incoherency effects. This often leads to overestimation of the seismic response, particularly the In-Structure Response Spectra (ISRS), with significant impositions of design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to adopt a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis, even without incoherency function considerations, has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision-making process, whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  10. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    Energy Technology Data Exchange (ETDEWEB)

    Elkhoraibi, T., E-mail: telkhora@bechtel.com; Hashemi, A.; Ostadan, F.

    2014-04-01

    Soil-structure interaction (SSI) is a major step in the seismic design of massive and stiff structures typical of nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently, most SSI analyses are performed deterministically, incorporating a limited range of variation in soil and structural properties and without consideration of ground motion incoherency effects. This often leads to overestimation of the seismic response, particularly the In-Structure Response Spectra (ISRS), with significant impositions of design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to adopt a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis, even without incoherency function considerations, has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision-making process, whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  11. Minimization for conditional simulation: Relationship to optimal transport

    Science.gov (United States)

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
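For the linear-Gaussian setting mentioned in the abstract, the minimization-based mapping can be written in closed form. The following sketch, a scalar randomized-maximum-likelihood-style sampler with purely illustrative numbers, shows minimizers of the perturbed cost function reproducing the analytic posterior:

```python
import random

g = 2.0          # linear observation operator: d = g * m
sig_p = 1.0      # prior std (prior mean 0)
sig_d = 0.5      # observation-error std
d_obs = 3.0      # observed datum

def rml_sample(rng):
    m_pr = rng.gauss(0.0, sig_p)            # sample from the prior
    d_pert = d_obs + rng.gauss(0.0, sig_d)  # perturbed observation
    # Minimizer of (d_pert - g*m)^2 / sig_d^2 + (m - m_pr)^2 / sig_p^2,
    # in closed form for the scalar linear case:
    return ((m_pr / sig_p ** 2 + g * d_pert / sig_d ** 2)
            / (1 / sig_p ** 2 + g ** 2 / sig_d ** 2))

rng = random.Random(1)
samples = [rml_sample(rng) for _ in range(50000)]
mean = sum(samples) / len(samples)

# Analytic posterior for comparison.
post_var = 1 / (1 / sig_p ** 2 + g ** 2 / sig_d ** 2)
post_mean = post_var * g * d_obs / sig_d ** 2
```

In this linear-Gaussian case the mapping from prior samples to minimizers is exact, consistent with the paper's observation; with nonlinear observation operators the same recipe is only approximate.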

  12. Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes

    DEFF Research Database (Denmark)

    Starke, Jens; Reichert, Christian; Eiswirth, Markus

    2007-01-01

    Three levels of modeling, microscopic, mesoscopic and macroscopic are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can......, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...

  13. RBE for deterministic effects

    International Nuclear Information System (INIS)

    1990-01-01

    In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether, for specific tissues, the present dose limits or annual limits of intake based on Q values are adequate to prevent deterministic effects. (author)

  14. DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    MICULEAC Melania Elena

    2014-06-01

    Deterministic methods are quantitative methods that aim to capture, through numerical quantification, the mechanisms by which factorial and causal relations of influence are created and expressed and by which effects propagate, in cases where the phenomenon can be expressed through a direct functional cause-effect relation. Functional, deterministic relations are causal relations in which a well-defined value of the resulting phenomenon corresponds to a given value of the characteristic. They can directly express the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.

  15. The integrated model for solving the single-period deterministic inventory routing problem

    Science.gov (United States)

    Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik

    2016-08-01

    This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy that integrates decisions between a supplier and his customers. We assume that the demand at each customer is stationary and that the warehouse implements VMI. The objective of this paper is to minimize the inventory and transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times and routes to the customers for the single-period deterministic inventory routing problem (SP-DIRP) system. As a result, a linear mixed-integer program is developed for the solution of the SP-DIRP problem.

  16. Severe deterministic effects of external exposure and intake of radioactive material: basis for emergency response criteria

    International Nuclear Information System (INIS)

    Kutkov, V; Buglova, E; McKenna, T

    2011-01-01

    Lessons learned from responses to past events have shown that more guidance is needed for the response to radiation emergencies (in this context, a 'radiation emergency' means the same as a 'nuclear or radiological emergency') which could lead to severe deterministic effects. The International Atomic Energy Agency (IAEA) requirements for preparedness and response for a radiation emergency, inter alia, require that arrangements shall be made to prevent, to a practicable extent, severe deterministic effects and to provide the appropriate specialised treatment for these effects. These requirements apply to all exposure pathways, both internal and external, and all reasonable scenarios, to include those resulting from malicious acts (e.g. dirty bombs). This paper briefly describes the approach used to develop the basis for emergency response criteria for protective actions to prevent severe deterministic effects in the case of external exposure and intake of radioactive material.

  17. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
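A simple reference implementation of the problem (binary search on the error with a greedy feasibility test, not the paper's O(n log n) parametric-search algorithm) might look like this:

```python
def feasible(points, k, eps):
    # points: (x, y, w) sorted by x; a step at value v costs w * |y - v| at (x, y, w).
    # Greedy left-to-right scan: keep the intersection of the intervals
    # [y - eps/w, y + eps/w]; start a new step when it becomes empty.
    steps = 0
    lo, hi = float("-inf"), float("inf")
    for _, y, w in points:
        a, b = y - eps / w, y + eps / w
        if a > hi or b < lo:        # no single value covers the current group
            steps += 1
            lo, hi = a, b
        else:
            lo, hi = max(lo, a), min(hi, b)
    return steps + 1 <= k

def min_max_error(points, k, iters=60):
    # Bisection on the optimal error; v = 0 achieves max(w*|y|), so that bounds it.
    points = sorted(points)
    lo, hi = 0.0, max(w * abs(y) for _, y, w in points) * 2 + 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(points, k, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

`feasible` is the standard greedy interval-intersection scan; bisection then locates the optimal error at the cost of a factor logarithmic in the desired precision, which the parametric-search algorithm avoids.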

  18. Deterministic SLIR model for tuberculosis disease mapping

    Science.gov (United States)

    Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat

    2017-11-01

    Tuberculosis (TB) occurs worldwide. It can be transmitted directly to others through the air when persons with active TB sneeze, cough or spit. In Malaysia, TB has been recognized as one of the infectious diseases that most often lead to death. Disease mapping is one of the methods that can be used as a prevention strategy, since it displays a clear picture of the high- and low-risk areas. An important issue that needs to be considered when studying disease occurrence is relative risk estimation. The transmission of TB is studied through a mathematical model. Therefore, in this study, deterministic SLIR models are used to estimate the relative risk of TB transmission.
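A generic deterministic SLIR (susceptible-latent-infectious-recovered) model can be integrated directly; the compartments, flows and parameter values below are illustrative assumptions, not the paper's fitted TB model:

```python
# Forward-Euler integration of a generic SLIR compartment model:
# S -> L (infection), L -> I (progression to active disease), I -> R (recovery).
def slir(s0, l0, i0, r0, beta, delta, gamma, days, dt=0.1):
    n = s0 + l0 + i0 + r0                  # total population (conserved)
    s, l, i, r = float(s0), float(l0), float(i0), float(r0)
    for _ in range(round(days / dt)):
        new_inf = beta * s * i / n         # infection flow S -> L
        activation = delta * l             # progression flow L -> I
        recovery = gamma * i               # recovery flow I -> R
        s -= new_inf * dt
        l += (new_inf - activation) * dt
        i += (activation - recovery) * dt
        r += recovery * dt
    return s, l, i, r

# Illustrative run: 1000 people, 10 initially infectious, hypothetical rates.
s, l, i, r = slir(990, 0, 10, 0, beta=0.3, delta=0.1, gamma=0.1, days=200)
```

Compartment sizes from such runs, normalized by population, are the kind of quantity that feeds relative-risk estimates for mapping high- and low-risk areas.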

  19. Design of deterministic OS for SPLC

    International Nuclear Information System (INIS)

    Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop

    2012-01-01

    Existing safety PLCs for use in nuclear power plants operate on priority-based scheduling, in which the highest priority task runs first. This type of scheduling scheme determines processing order when there are multiple requests for processing or a lack of resources available for processing, guaranteeing execution of higher priority tasks. Such scheduling is prone to exhaustion of resources and continuous preemption by devices with high priorities, so there is uncertainty in every period with regard to the smooth running of the overall system. Hence, it is difficult to apply this type of scheme where deterministic operation is required, such as in a nuclear power plant. Also, existing PLCs either have no output logic for redundant device selection or have it set in a fixed way; as a result, it is extremely inefficient to use them for redundant systems such as those of a nuclear power plant, and their use is limited. Therefore, functional modules that can manage and control all devices need to be developed by improving the way priorities are assigned among the devices, making it more flexible. A management module should be able to schedule all devices of the system, manage resources, analyze the states of the devices, give warnings in abnormal situations such as device failure or resource scarcity, and decide how to handle them. The management module should also have output logic for device redundancy, as well as deterministic processing capabilities, such as with regard to device interrupt events

  20. Deterministic Diffusion in Delayed Coupled Maps

    International Nuclear Information System (INIS)

    Sozanski, M.

    2005-01-01

    Coupled Map Lattices (CML) are discrete time and discrete space dynamical systems used for modeling phenomena arising in nonlinear systems with many degrees of freedom. In this work, the dynamical and statistical properties of a modified version of the CML with global coupling are considered. The main modification of the model is the extension of the coupling over a set of local map states corresponding to different time iterations. The model with both stochastic and chaotic one-dimensional local maps is studied. Deterministic diffusion in the CML under variation of a control parameter is analyzed for unimodal maps. As a main result, simple relations between statistical and dynamical measures are found for the model and the cases where substituting nonlinear lattices with simpler processes is possible are presented. (author)
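A minimal sketch of a globally coupled map lattice whose coupling extends over several past iterations might look as follows; the logistic local map, coupling strength and memory length are illustrative choices, not the paper's model:

```python
import random
from collections import deque

def logistic(x, a=4.0):
    # Chaotic one-dimensional local map on [0, 1].
    return a * x * (1.0 - x)

def step(lattice, history, eps=0.1):
    # Global coupling: the mean field averages all site states over the
    # stored past iterations (the "delayed" modification of the CML).
    past = [x for state in history for x in state]
    mean_field = sum(past) / len(past)
    new = [(1 - eps) * logistic(x) + eps * mean_field for x in lattice]
    history.append(new)
    return new

random.seed(42)
lattice = [random.random() for _ in range(50)]
history = deque([lattice], maxlen=3)   # coupling memory of 3 time steps
for _ in range(1000):
    lattice = step(lattice, history)
```

Statistical measures such as the mean-square displacement of a tracked site under variation of eps would then quantify the deterministic diffusion the abstract discusses.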

  1. CSL model checking of deterministic and stochastic Petri nets

    NARCIS (Netherlands)

    Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.

    2006-01-01

    Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under

  2. Seismic hazard in Romania associated to Vrancea subcrustal source Deterministic evaluation

    CERN Document Server

    Radulian, M; Moldoveanu, C L; Panza, G F; Vaccari, F

    2002-01-01

    Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes to show how efficient the numerical synthesis is in predicting realistic ground motion, and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful to compute seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the World because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal and the mo...

  3. The construction of minimal multilayered perceptrons : a case study for sorting

    NARCIS (Netherlands)

    Zwietering, P.J.; Aarts, E.H.L.; Wessels, J.

    1993-01-01

    We consider the construction of minimal multilayered perceptrons for solving combinatorial optimization problems. Though general in nature, the proposed construction method is presented as a case study for the sorting problem. The presentation starts with an O((n!)^2) three-layered perceptron based

  4. Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide (Russian Edition)

    International Nuclear Information System (INIS)

    2014-01-01

    The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References

  5. Optimal Deterministic Investment Strategies for Insurers

    Directory of Open Access Journals (Sweden)

    Ulrich Rieder

    2013-11-01

    We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.

  6. A data analysis method for identifying deterministic components of stable and unstable time-delayed systems with colored noise

    Energy Technology Data Exchange (ETDEWEB)

    Patanarapeelert, K. [Faculty of Science, Department of Mathematics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand); Frank, T.D. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany)]. E-mail: tdfrank@uni-muenster.de; Friedrich, R. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany); Beek, P.J. [Faculty of Human Movement Sciences and Institute for Fundamental and Clinical Human Movement Sciences, Vrije Universiteit, Van der Boechorststraat 9, 1081 BT Amsterdam (Netherlands); Tang, I.M. [Faculty of Science, Department of Physics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand)

    2006-12-18

    A method is proposed to identify deterministic components of stable and unstable time-delayed systems subjected to noise sources with finite correlation times (colored noise). Both neutral and retarded delay systems are considered. For vanishing correlation times it is shown how to determine their noise amplitudes by minimizing appropriately defined Kullback measures. The method is illustrated by applying it to simulated data from stochastic time-delayed systems representing delay-induced bifurcations, postural sway and ship rolling.

  7. Relationship of Deterministic Thinking With Loneliness and Depression in the Elderly

    Directory of Open Access Journals (Sweden)

    Mehdi Sharifi

    2017-12-01

    Conclusion: According to the results, it can be said that deterministic thinking has a significant relationship with depression and sense of loneliness in older adults, so that deterministic thinking acts as a predictor of both. Therefore, psychological interventions that challenge the cognitive distortion of deterministic thinking, together with attention to the mental health of older adults, are very important.

  8. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-03-28

    Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  9. Coefficient of reversibility and two particular cases of deterministic many body systems

    International Nuclear Information System (INIS)

    Grossu, Ioan Valeriu; Besliu, Calin; Jipa, Alexandru

    2004-01-01

    We discuss the importance of a new measure of chaos in the study of nonlinear dynamic systems, the 'coefficient of reversibility'. This is defined as the probability of returning to the same point of phase space. It is very interesting to compare this coefficient with other measures such as the fractal dimension or the Lyapunov exponent. We have also studied two very interesting many-body systems, both having an arbitrary number of particles but a deterministic evolution. One system is composed of n particles initially at rest, having the same mass and interacting through harmonic bi-particle forces; the other is composed of two types of particles (with masses m1 and m2) initially at rest and also interacting through harmonic bi-particle forces
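
    The coefficient of reversibility is defined above only verbally. As a rough illustrative sketch (not the authors' code), a return probability can be estimated by binning phase space and counting orbits that re-enter their starting cell; the logistic map, bin count and step budget here are arbitrary assumptions:

    ```python
    import numpy as np

    def return_probability(x0_values, n_steps, n_bins, r=4.0):
        """Fraction of initial conditions whose logistic-map orbit re-enters
        the phase-space bin of its starting point within n_steps iterations."""
        returns = 0
        for x0 in x0_values:
            start_bin = int(x0 * n_bins)
            x = x0
            for _ in range(n_steps):
                x = r * x * (1.0 - x)
                if int(x * n_bins) == start_bin:
                    returns += 1
                    break
        return returns / len(x0_values)

    rng = np.random.default_rng(0)
    p = return_probability(rng.uniform(0.01, 0.99, 500), n_steps=200, n_bins=50)
    print(round(p, 2))
    ```

    For a genuinely chaotic map almost every orbit returns eventually, so the interesting quantity is how the estimate varies with the step budget and the bin size.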

  10. Deterministic hydrodynamics: Taking blood apart

    Science.gov (United States)

    Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.

    2006-10-01

    We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. The deterministic arrays we developed separate white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. cells | plasma | separation | microfabrication

  11. Matching allele dynamics and coevolution in a minimal predator-prey replicator model

    International Nuclear Information System (INIS)

    Sardanyes, Josep; Sole, Ricard V.

    2008-01-01

    A minimal Lotka-Volterra-type predator-prey model describing coevolutionary traits among entities, with a strength of interaction influenced by a pair of haploid diallelic loci, is studied with a deterministic continuous-time model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency, the more efficient genotype asymptotically achieves lower stationary concentrations
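
    For flavor, the underlying predator-prey dynamics can be integrated with a standard fourth-order Runge-Kutta scheme. This is a sketch of the classic two-species Lotka-Volterra system with hypothetical parameter values, not the genotype-structured replicator model of the paper:

    ```python
    import numpy as np

    def lotka_volterra_rk4(x0, y0, a=1.0, b=0.5, c=1.0, d=0.2, dt=0.01, steps=5000):
        """Integrate dx/dt = x(a - b*y), dy/dt = y(-c + d*x) with RK4."""
        def f(s):
            x, y = s
            return np.array([x * (a - b * y), y * (-c + d * x)])
        s = np.array([x0, y0], dtype=float)
        traj = [s.copy()]
        for _ in range(steps):
            k1 = f(s)
            k2 = f(s + 0.5 * dt * k1)
            k3 = f(s + 0.5 * dt * k2)
            k4 = f(s + dt * k3)
            s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            traj.append(s.copy())
        return np.array(traj)

    traj = lotka_volterra_rk4(10.0, 5.0)
    # the orbit is a closed cycle around the equilibrium (c/d, a/b) = (5, 2)
    print(round(traj[:, 0].max(), 1), round(traj[:, 1].max(), 1))
    ```

    Populations stay positive and oscillate; a Hopf bifurcation as in the paper would require the extra genotype structure that this toy model omits.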

  12. Deterministic and stochastic CTMC models from Zika disease transmission

    Science.gov (United States)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
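
    As a simplified single-population stand-in for the two approaches (the paper's model includes mosquito-human transmission), a deterministic SIR integration yields the epidemic curve, while the branching-process approximation to the CTMC gives the extinction probability (1/R0)^I0 for a supercritical outbreak; all parameter values below are hypothetical:

    ```python
    def sir_deterministic(beta, gamma, s0, i0, dt=0.01, steps=20000):
        """Forward-Euler integration of the deterministic SIR model
        (s, i are population fractions); returns final susceptibles and peak i."""
        s, i = s0, i0
        i_peak = i
        for _ in range(steps):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            s += dt * ds
            i += dt * di
            i_peak = max(i_peak, i)
        return s, i_peak

    beta, gamma = 2.5, 1.0          # hypothetical rates, R0 = beta/gamma = 2.5
    s_inf, i_peak = sir_deterministic(beta, gamma, s0=0.999, i0=0.001)

    # branching-process estimate of CTMC extinction probability
    # starting from one infective: (1/R0)**1
    p_ext = (gamma / beta) ** 1
    print(round(s_inf, 3), round(i_peak, 3), round(p_ext, 2))
    ```

    The deterministic model always produces an outbreak when R0 > 1, whereas the stochastic model assigns the outbreak a probability, which is the qualitative contrast the abstract describes.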

  13. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    Harrisson, G.; Marleau, G.

    2012-01-01

    The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, together with different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the most consistent results with those of SERPENT. (authors)

  14. ICRP (1991) and deterministic effects

    International Nuclear Information System (INIS)

    Mole, R.H.

    1992-01-01

    A critical review of ICRP Publication 60 (1991) shows that considerable revisions are needed in both language and thinking about deterministic effects (DE). ICRP (1991) makes a welcome and clear distinction between change, caused by irradiation; damage, some degree of deleterious change, for example to cells, but not necessarily deleterious to the exposed individual; harm, clinically observable deleterious effects expressed in individuals or their descendants; and detriment, a complex concept combining the probability, severity and time of expression of harm (para 42). (All added emphases come from the author.) Unfortunately these distinctions are not carried through into the discussion of deterministic effects (DE), and two important terms are left undefined. Presumably effect may refer to change, damage, harm or detriment, according to context. Clinically observable is also undefined, although its meaning is crucial to any consideration of DE, since DE are defined as causing observable harm (para 20). (Author)

  15. Improving the performance of minimizers and winnowing schemes.

    Science.gov (United States)

    Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl

    2017-07-15

    The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: gmarcais@cs.cmu.edu or carlk@cs.cmu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved.
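
    The minimizers scheme itself is easy to state: slide a window of w consecutive k-mers and keep the position of the smallest k-mer under some ordering. A toy sketch (hypothetical parameters, not the paper's benchmark code) comparing the lexicographic ordering against a randomized one:

    ```python
    import random

    def minimizers(seq, k, w, order):
        """Return the set of positions selected as (w, k)-minimizers: in each
        window of w consecutive k-mers, keep the position of the k-mer that
        is smallest under the given ordering (ties go to the leftmost)."""
        kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
        selected = set()
        for start in range(len(kmers) - w + 1):
            window = range(start, start + w)
            selected.add(min(window, key=lambda i: order(kmers[i])))
        return selected

    random.seed(1)
    seq = "".join(random.choice("ACGT") for _ in range(10000))

    lex = minimizers(seq, k=7, w=10, order=lambda s: s)   # lexicographic order

    rank = {}                       # randomized order: one fixed random rank per k-mer
    def rand_order(s):
        if s not in rank:
            rank[s] = random.random()
        return rank[s]

    rnd = minimizers(seq, k=7, w=10, order=rand_order)

    dens_lex = len(lex) / len(seq)  # density: fraction of positions selected
    dens_rnd = len(rnd) / len(seq)
    print(round(dens_lex, 3), round(dens_rnd, 3))
    ```

    On random sequences both densities sit near 2/(w+1); the paper's point is that the lexicographic order is systematically a bit denser and behaves badly on low-complexity sequences, which orderings based on universal hitting sets avoid.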

  16. When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.

    Science.gov (United States)

    Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko

    2015-08-01

    When unique identifiers are unavailable, successful record linkage depends greatly on data quality and the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, implementation of linkage methodology and validation method. This simulation study aimed to understand the data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers. We systematically introduced a range of discriminative power, rates of missingness and error, and file sizes to vary linkage patterns and difficulties. We assessed the performance difference of the linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage, as the former generated linkages with a better trade-off between sensitivity and PPV regardless of data quality. However, with low rates of missingness and error in the data, deterministic linkage did not perform significantly worse. The implementation of deterministic linkage in SAS took less than 1 min, and probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rate of missingness and error in the linkage variables is key to choosing between linkage methods. In general, probabilistic linkage was the better choice, but for exceptionally good quality data (<5% error), deterministic linkage was the more resource-efficient choice. Copyright © 2015 Elsevier Inc. All rights reserved.
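
    The contrast between the two linkage methods can be made concrete with a toy Fellegi-Sunter-style scorer; the field names, weights and threshold below are hypothetical illustrations, not values from the study:

    ```python
    def deterministic_link(a, b, fields):
        """Deterministic linkage: link only if every identifier agrees exactly."""
        return all(a[f] == b[f] for f in fields)

    def probabilistic_link(a, b, weights, threshold):
        """Probabilistic linkage sketch: add an agreement weight when a field
        matches, subtract it when it does not, link if the total clears the
        threshold. Real implementations derive weights from m/u probabilities."""
        score = sum(w if a[f] == b[f] else -w for f, w in weights.items())
        return score >= threshold

    rec1 = {"name": "smith", "dob": "1980-01-02", "zip": "10001"}
    rec2 = {"name": "smyth", "dob": "1980-01-02", "zip": "10001"}  # typo in name

    fields = ["name", "dob", "zip"]
    weights = {"name": 3.0, "dob": 4.0, "zip": 2.0}   # hypothetical weights

    print(deterministic_link(rec1, rec2, fields))        # exact match fails on the typo
    print(probabilistic_link(rec1, rec2, weights, 2.0))  # score 4 + 2 - 3 = 3 >= 2, so it links
    ```

    This is exactly the sensitivity/PPV trade-off the abstract reports: the probabilistic scorer tolerates the typo (higher sensitivity) at the cost of potentially linking non-matches if the threshold is set too low.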

  17. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    Science.gov (United States)

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in the case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs the JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.

  18. Simulation of dose deposition in stereotactic synchrotron radiation therapy: a fast approach combining Monte Carlo and deterministic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr

    2009-08-07

    A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator (TLE) technique, considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method: in the test case, the gain in speed was about a factor of 25 at constant precision. Therefore, this method appears to be suitable for treatment planning applications.

  19. Stochastic and deterministic causes of streamer branching in liquid dielectrics

    International Nuclear Information System (INIS)

    Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl

    2013-01-01

    Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make the branching inevitable depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the propagating streamer head, even in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the resulting branches agree qualitatively with experimental images of streamer branching

  20. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.

    Science.gov (United States)

    Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O

    2006-03-01

    The analysis of complex biochemical networks is conducted within two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables; realisations of the random variables drawn from the probability distribution described by the CMEs generate counts of molecules for the chemical species. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
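
    The same deterministic-versus-stochastic contrast can be sketched outside MATLAB, here in Python for a single birth-death reaction whose rate equation has steady state k_prod/k_deg, paired with a Gillespie realisation of the corresponding CME (all parameter values hypothetical):

    ```python
    import random

    def ode_steady_state(k_prod, k_deg):
        """Deterministic rate equation dx/dt = k_prod - k_deg*x: steady state."""
        return k_prod / k_deg

    def gillespie_mean(k_prod, k_deg, t_end, x0=0, seed=42):
        """Time-average of one Gillespie SSA realisation of the birth-death
        process (production at rate k_prod, degradation at rate k_deg*x)."""
        rng = random.Random(seed)
        t, x = 0.0, x0
        samples = []
        while t < t_end:
            a_prod, a_deg = k_prod, k_deg * x
            a_total = a_prod + a_deg
            t += rng.expovariate(a_total)          # time to next reaction
            if rng.random() * a_total < a_prod:    # choose which reaction fires
                x += 1
            else:
                x -= 1
            if t > t_end / 2:                      # discard burn-in
                samples.append(x)
        return sum(samples) / len(samples)

    k_prod, k_deg = 10.0, 1.0
    print(ode_steady_state(k_prod, k_deg))         # 10.0
    mean_x = gillespie_mean(k_prod, k_deg, t_end=2000.0)
    print(round(mean_x, 1))
    ```

    The stationary distribution of this CME is Poisson with mean k_prod/k_deg, so the SSA time-average fluctuates around the deterministic steady state, which is the comparison the abstract's two frameworks set up.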

  1. Transformation of general binary MRF minimization to the first-order case.

    Science.gov (United States)

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction, which limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and compare the new method experimentally with other such techniques.
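
    Ishikawa's general transformation is involved, but its flavor can be seen in the classic single-auxiliary-variable reduction of a negative third-order monomial (a Freedman-Drineas / Kolmogorov-style construction, not the paper's full method): minimizing over one extra binary variable reproduces the cubic term exactly. A brute-force check over all binary assignments:

    ```python
    from itertools import product

    def cubic_term(x1, x2, x3):
        """Higher-order (third-order) clique potential: -x1*x2*x3."""
        return -x1 * x2 * x3

    def reduced_term(x1, x2, x3):
        """First-order (pairwise) reduction with one auxiliary variable w:
        -x1*x2*x3 == min over w in {0,1} of -w*(x1 + x2 + x3 - 2)."""
        return min(-w * (x1 + x2 + x3 - 2) for w in (0, 1))

    ok = all(cubic_term(*x) == reduced_term(*x) for x in product((0, 1), repeat=3))
    print(ok)   # the reduction preserves minima over all 8 assignments
    ```

    Because the reduced energy contains only unary and pairwise terms in (x1, x2, x3, w), it can be handed to standard first-order solvers such as graph cuts or QPBO.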

  2. Frequency domain fatigue damage estimation methods suitable for deterministic load spectra

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, A.R.; Patel, M.H. [University Coll., Dept. of Mechanical Engineering, London (United Kingdom)

    2000-07-01

    The evaluation of fatigue damage due to load spectra directly in the frequency domain is a complex phenomenon, but one with the benefit of significant computation time savings. Various formulae have been suggested but usually relate to a specific application only. The Dirlik method is the exception and is applicable to general cases of continuous stochastic spectra. This paper describes three approaches for evaluating discrete deterministic load spectra generated by the floating wind turbine model developed in the UCL/RAL research project. (Author)

  3. Dark matter as a Bose-Einstein Condensate: the relativistic non-minimally coupled case

    International Nuclear Information System (INIS)

    Bettoni, Dario; Colombo, Mattia; Liberati, Stefano

    2014-01-01

    Bose-Einstein condensates have recently been proposed as dark matter candidates. In order to characterize the phenomenology associated with such models, we extend previous investigations by studying the general case of a relativistic BEC on a curved background, including a non-minimal coupling to curvature. In particular, we discuss the possibility of a two-phase cosmological evolution: a cold dark matter-like phase at large scales/early times and a condensed phase inside dark matter halos. During the first phase dark matter is described by a minimally coupled weakly self-interacting scalar field, while in the second one dark matter condenses and, we shall argue, develops the non-minimal coupling as a consequence. Finally, we discuss how such a non-minimal coupling could provide a new mechanism to address cold dark matter paradigm issues at galactic scales

  4. Comparison of deterministic and Monte Carlo methods in shielding design.

    Science.gov (United States)

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions; this can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
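
    A day-to-day deterministic point-kernel estimate of the kind such codes automate looks roughly like the sketch below. The linear build-up form B = 1 + μx and all numeric values are illustrative assumptions; real codes interpolate tabulated build-up factors by material, energy and μx:

    ```python
    import math

    def point_source_flux(S, r, mu, x, buildup=True):
        """Flux from an isotropic point source behind a slab shield:
        geometric spreading times exponential attenuation, optionally
        corrected by a simple illustrative build-up factor B = 1 + mu*x."""
        geometric = S / (4.0 * math.pi * r ** 2)   # 1/r^2 spreading
        attenuation = math.exp(-mu * x)            # narrow-beam attenuation
        B = 1.0 + mu * x if buildup else 1.0       # crude build-up correction
        return B * geometric * attenuation

    S = 1e9       # photons/s, hypothetical source strength
    mu = 0.06     # 1/mm, hypothetical linear attenuation coefficient
    x = 50.0      # mm, slab thickness
    r = 1000.0    # mm, source-detector distance

    naive = point_source_flux(S, r, mu, x, buildup=False)
    corrected = point_source_flux(S, r, mu, x, buildup=True)
    print(corrected / naive)   # here B = 1 + 0.06*50 = 4.0
    ```

    The comparison against Monte Carlo in the abstract is essentially a test of how well such build-up corrections hold at low energies, where the tabulated factors are extrapolated.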

  6. Deterministic nonlinear systems a short course

    CERN Document Server

    Anishchenko, Vadim S; Strelkova, Galina I

    2014-01-01

    This text is a short yet complete course on the nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures, it reflects the many years of teaching experience of the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincaré recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.

  7. Dynamic optimization deterministic and stochastic models

    CERN Document Server

    Hinderer, Karl; Stieglitz, Michael

    2016-01-01

    This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
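
    The deterministic finite-horizon case such texts start from is solved by backward induction on the value function V_t(s) = max_a [r(s,a) + V_{t+1}(f(s,a))]. A minimal sketch with a toy state space; the walk-on-a-line problem data below are hypothetical:

    ```python
    def backward_induction(T, states, actions, transition, reward):
        """Finite-horizon deterministic dynamic program solved backwards in
        time: V_t(s) = max_a [ reward(s, a) + V_{t+1}(transition(s, a)) ]."""
        V = {s: 0.0 for s in states}          # terminal values V_T = 0
        policy = []
        for t in reversed(range(T)):
            newV, pi = {}, {}
            for s in states:
                best = max(actions, key=lambda a: reward(s, a) + V[transition(s, a)])
                pi[s] = best
                newV[s] = reward(s, best) + V[transition(s, best)]
            V = newV
            policy.insert(0, pi)              # stage-t policy goes in front
        return V, policy

    # toy problem: walk on states 0..4, step left or right, reward = resulting state
    states = range(5)
    actions = (-1, +1)
    transition = lambda s, a: min(4, max(0, s + a))
    reward = lambda s, a: transition(s, a)

    V, policy = backward_induction(3, states, actions, transition, reward)
    print(V[0])   # from state 0 in 3 steps: 0 -> 1 -> 2 -> 3 earns 1 + 2 + 3 = 6
    ```

    The stochastic models in the book replace the inner term with an expectation over next states, but the backward sweep itself is unchanged.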

  8. Learning to Act: Qualitative Learning of Deterministic Action Models

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2017-01-01

    In this article we study learnability of fully observable, universally applicable action models of dynamic epistemic logic. We introduce a framework for actions seen as sets of transitions between propositional states and we relate them to their dynamic epistemic logic representations as action … in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update …, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the aforementioned classes of actions and analyse …

  9. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  10. Deterministic and probabilistic approach to determine seismic risk of nuclear power plants; a practical example

    International Nuclear Information System (INIS)

    Soriano Pena, A.; Lopez Arroyo, A.; Roesset, J.M.

    1976-01-01

    The probabilistic and deterministic approaches for calculating the seismic risk of nuclear power plants are both applied to a particular case in Southern Spain. The results obtained by both methods when varying the input data are presented, and some conclusions are drawn regarding the applicability of the methods, their reliability, and their sensitivity to change.

  11. Deterministic and unambiguous dense coding

    International Nuclear Information System (INIS)

    Wu Shengjun; Cohen, Scott M.; Sun Yuqing; Griffiths, Robert B.

    2006-01-01

    Optimal dense coding using a partially entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case, where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case, where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1-τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D̄ ≤ D a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. For D̄ > D it is shown that L_d is strictly less than D² unless D̄ is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D̄ ≤ D, assuming τ_x > 0 for a set of D̄D messages, and a bound is obtained for the average ⟨τ⟩. A bound on the average requires an additional assumption of encoding by isometries (unitaries when D̄ = D) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D̄ > D it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially entangled states, including noisy (mixed) states

  12. Quantum deterministic key distribution protocols based on the authenticated entanglement channel

    International Nuclear Information System (INIS)

    Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua

    2010-01-01

    Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.

  13. Quantum deterministic key distribution protocols based on the authenticated entanglement channel

    Energy Technology Data Exchange (ETDEWEB)

    Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua [Department of Electronic Information Engineering, Nanchang University, Nanchang 330031 (China)], E-mail: znr21@163.com, E-mail: znr21@hotmail.com

    2010-04-15

    Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.

  14. Offshore platforms and deterministic ice actions: Kashagan phase 2 development: North Caspian Sea.

    Energy Technology Data Exchange (ETDEWEB)

    Croasdale, Ken [KRCA, Calgary (Canada); Jordaan, Ian [Ian Jordaan and Associates, St John's (Canada); Verlaan, Paul [Shell Development Kashagan, London (United Kingdom)

    2011-07-01

    The Kashagan development has to face the difficult conditions of the northern Caspian Sea. This paper investigates the ice interaction scenarios and deterministic methods used in platform designs for the Kashagan development. The study first reviews the types of platforms in use and being designed for the development. The various ice load scenarios and the structures used in each case are then discussed: vertical-faced barriers, mobile drilling barges and sheet-pile islands for ice loads on vertical structures, and sloping-faced barriers and rock islands for ice loads on sloping structures. Deterministic models, such as the model in ISO 19906, were used to calculate the loads occurring with or without ice rubble in front of the structure. The results showed the importance of rubble build-up in front of wide structures in shallow water. Recommendations were provided for building efficient vertical- and sloping-faced barriers.

  15. Learn with SAT to Minimize Büchi Automata

    Directory of Open Access Journals (Sweden)

    Stephan Barth

    2012-10-01

    We describe a minimization procedure for nondeterministic Büchi automata (NBA). For an automaton A, another automaton A_min with the minimal number of states is learned with the help of a SAT solver. This is done by successively computing automata A' that approximate A in the sense that they accept a given finite set of positive examples and reject a given finite set of negative examples. In the course of the procedure these example sets are successively increased, so our method can be seen as an instance of a generic learning algorithm based on a "minimally adequate teacher" in the sense of Angluin. We use a SAT solver to find an NBA for given sets of positive and negative examples, and complementation via construction of deterministic parity automata to check candidates computed in this manner for equivalence with A. Failure of equivalence yields new positive or negative examples. Our method proved successful on complete samplings of small automata and on several larger examples. We successfully ran the minimization on over ten thousand automata, mostly with up to ten states, including the complements of all possible automata with two states and alphabet size three, and discuss results and runtimes; single examples had over 100 states.
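
    The learner/teacher loop described above can be sketched in miniature. The sketch below substitutes brute-force enumeration for the SAT encoding and uses DFAs over finite words with a bounded-length equivalence check, instead of NBAs with parity-automaton complementation; the target language and all names are illustrative:

    ```python
    from itertools import product

    ALPHABET = "ab"

    def run(dfa, word):
        trans, accept = dfa
        s = 0
        for c in word:
            s = trans[(s, c)]
        return s in accept

    def target(word):                 # illustrative target: even number of 'a'
        return word.count("a") % 2 == 0

    def all_words(max_len):
        for n in range(max_len + 1):
            for w in product(ALPHABET, repeat=n):
                yield "".join(w)

    def consistent_dfa(n, pos, neg):
        """Brute-force stand-in for the SAT solver: try every n-state DFA
        until one accepts all of pos and rejects all of neg."""
        keys = [(s, c) for s in range(n) for c in ALPHABET]
        for outs in product(range(n), repeat=len(keys)):
            trans = dict(zip(keys, outs))
            for bits in product([False, True], repeat=n):
                accept = {s for s in range(n) if bits[s]}
                dfa = (trans, accept)
                if all(run(dfa, w) for w in pos) and not any(run(dfa, w) for w in neg):
                    return dfa
        return None

    def learn(max_len=6):
        pos, neg, n = set(), set(), 1
        while True:
            dfa = consistent_dfa(n, pos, neg)
            if dfa is None:           # no n-state DFA fits: grow the automaton
                n += 1
                continue
            # bounded equivalence check plays the "minimally adequate teacher"
            cex = next((w for w in all_words(max_len) if run(dfa, w) != target(w)), None)
            if cex is None:
                return n, dfa
            (pos if target(cex) else neg).add(cex)

    states, dfa = learn()
    print("minimal number of states:", states)
    ```

    Each counterexample strictly shrinks the candidate set, so the loop terminates; the real procedure replaces the bounded check with an exact equivalence test via complementation.
    
    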

  16. The State of Deterministic Thinking among Mothers of Autistic Children

    Directory of Open Access Journals (Sweden)

    Mehrnoush Esbati

    2011-10-01

    Objectives: The purpose of the present study was to investigate the effectiveness of cognitive-behavior education in decreasing deterministic thinking in mothers of children with autism spectrum disorders. Methods: Participants were 24 mothers of autistic children who were referred to counseling centers in Tehran and whose children's disorder had been diagnosed by at least a psychiatrist and a counselor. They were randomly selected and assigned to control and experimental groups. The measurement tool was the Deterministic Thinking Questionnaire; both groups answered it before and after the education, and the answers were analyzed by analysis of covariance. Results: The results indicated that cognitive-behavior education decreased deterministic thinking among mothers of autistic children; it decreased all four subscales of deterministic thinking as well: interaction with others, absolute thinking, prediction of the future, and negative events (P<0.05). Discussion: By learning cognitive and behavioral techniques, parents of children with autism can reach a higher level of psychological well-being, and it is likely that these cognitive-behavioral skills would have a positive impact on the general life satisfaction of mothers of children with autism.

  17. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability in existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods described are applicable to low-level radioactive waste disposal system performance assessment.

  18. A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT

    International Nuclear Information System (INIS)

    Goluoglu, S.; Bentley, C.; Demeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H.L.

    1998-01-01

    A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position-, energy-, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step-followed-by-step, and step-followed-by-ramp perturbations. Columnwise rod movement can also be modeled. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.

  19. Piecewise deterministic processes in biological models

    CERN Document Server

    Rudnicki, Ryszard

    2017-01-01

    This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...

  20. Deterministic geologic processes and stochastic modeling

    International Nuclear Information System (INIS)

    Rautman, C.A.; Flint, A.L.

    1992-01-01

    This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling.

  1. Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives

    DEFF Research Database (Denmark)

    Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle

    2002-01-01

    In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD player is presented, based on parameter identification and measurements in the focus loop of 12 actual CD drives that differ by having worst-case behaviors with respect to various… properties. The method provides a systematic way to derive a nominal average model as well as a structured multiplicative input uncertainty model, and it is demonstrated how to apply μ-theory to design a controller, based on the models obtained, that meets certain robust performance criteria.

  2. Understanding deterministic diffusion by correlated random walks

    International Nuclear Information System (INIS)

    Klages, R.; Korabel, N.

    2002-01-01

    Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme for approximating deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples: a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly, yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
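
    A deterministic diffusion coefficient of the kind discussed above can be estimated numerically from the mean squared displacement of an ensemble of trajectories. The sketch below uses a lifted piecewise-linear map of slope a, a standard toy model for deterministic diffusion; the parameter values are illustrative and this is not the paper's Green-Kubo hierarchy:

    ```python
    import random

    def step(x, a=3.0):
        """Lifted piecewise-linear map of slope a: chaotic within each unit
        cell, with jumps between cells producing deterministic diffusion."""
        n = int(x // 1)          # cell index
        f = x - n                # position inside the cell
        return n + a * f if f < 0.5 else n + 1 - a * (1 - f)

    def diffusion_coefficient(a=3.0, n_traj=2000, n_steps=400):
        """Estimate D = <(x_t - x_0)^2> / (2 t) over an ensemble of orbits."""
        rng = random.Random(0)   # seeded: only the initial conditions are random
        disp2 = 0.0
        for _ in range(n_traj):
            x0 = rng.random()
            x = x0
            for _ in range(n_steps):
                x = step(x, a)
            disp2 += (x - x0) ** 2
        return disp2 / n_traj / (2 * n_steps)

    print("D estimate:", diffusion_coefficient())
    ```

    Sweeping the slope a and plotting the estimate reproduces the irregular parameter dependence the abstract refers to; the correlated-random-walk hierarchy approximates exactly this quantity analytically.
    
    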

  3. Deterministic one-way simulation of two-way, real-time cellular automata and its related problems

    Energy Technology Data Exchange (ETDEWEB)

    Umeo, H; Morita, K; Sugata, K

    1982-06-13

    The authors show that for any deterministic two-way, real-time cellular automaton m, there exists a deterministic one-way cellular automaton which can simulate m in twice real time. Moreover, the authors present a new type of deterministic one-way cellular automata, called circular cellular automata, which are computationally equivalent to deterministic two-way cellular automata. 7 references.

  4. Optimal Allocation of Renewable Energy Sources for Energy Loss Minimization

    Directory of Open Access Journals (Sweden)

    Vaiju Kalkhambkar

    2017-03-01

    Optimal allocation of renewable distributed generation (RDG), i.e., solar and wind, in a distribution system is challenging due to intermittent generation and uncertainty of loads. This paper proposes an optimal allocation methodology for single and hybrid RDGs for energy loss minimization. A deterministic generation-load model integrated with optimal power flow provides optimal solutions for single and hybrid RDG. Considering the complexity of the proposed nonlinear, constrained optimization problem, it is solved by a robust and high-performance meta-heuristic, the Symbiotic Organisms Search (SOS) algorithm. The SOS algorithm yields better solutions than the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and the Firefly Algorithm (FFA). An economic analysis is carried out to quantify the benefits of energy loss minimization over the life span of the RDGs.

  5. Extended method of moments for deterministic analysis of stochastic multistable neurodynamical systems

    International Nuclear Information System (INIS)

    Deco, Gustavo; Marti, Daniel

    2007-01-01

    The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability
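
    The basic moments method, before the bimodal extension the paper introduces, can be illustrated on a one-dimensional Ornstein-Uhlenbeck process dx = -x dt + σ dW, whose mean and variance obey the closed deterministic equations m' = -m and v' = -2v + σ². The sketch below compares Euler integration of these moment equations with a direct Monte Carlo estimate; all parameter values are illustrative:

    ```python
    import math
    import random

    sigma, dt, T = 0.5, 0.01, 5.0        # illustrative parameters
    steps = int(T / dt)

    # deterministic moment equations, Euler integration
    m, v = 1.0, 0.0                      # initial mean and variance
    for _ in range(steps):
        m += -m * dt
        v += (-2 * v + sigma ** 2) * dt

    # Monte Carlo check via Euler-Maruyama on many trajectories
    rng = random.Random(1)
    xs = [1.0] * 2000
    for _ in range(steps):
        xs = [x - x * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1) for x in xs]
    mc_m = sum(xs) / len(xs)
    mc_v = sum((x - mc_m) ** 2 for x in xs) / len(xs)

    print(f"moment ODEs: mean={m:.3f} var={v:.3f}")
    print(f"Monte Carlo: mean={mc_m:.3f} var={mc_v:.3f}")
    ```

    Two deterministic ODEs replace thousands of stochastic trials, which is the computational advantage the abstract describes; the paper's contribution is extending this closure to bimodal (multistable) distributions, where a single Gaussian ansatz fails.
    
    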

  6. Extended method of moments for deterministic analysis of stochastic multistable neurodynamical systems

    Science.gov (United States)

    Deco, Gustavo; Martí, Daniel

    2007-03-01

    The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability.

  7. Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.

  8. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions; the results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms that find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given that guarantee finding the actual Pareto front rather than an approximation of it. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted in which, following the general methodology, a ripple-spreading algorithm is designed to calculate the complete, exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
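
    For a finite set of candidate solutions, the exact Pareto front (for minimization) is simply the set of non-dominated points. A brute-force filter like the sketch below, with illustrative data and not the paper's ripple-spreading algorithm, gives the ground truth that the k-best construction is designed to reach without enumerating every solution:

    ```python
    def pareto_front(points):
        """Exact non-dominated front for a finite set of points, where each
        point is a tuple of objective values to be minimized. A point p is
        dominated if some q is <= p in every objective and differs from p."""
        front = []
        for p in points:
            dominated = any(
                all(q[i] <= p[i] for i in range(len(p))) and q != p
                for q in points
            )
            if not dominated:
                front.append(p)
        return front

    # hypothetical biobjective candidate set, e.g. (travel time, cost)
    pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
    print(pareto_front(pts))  # only the non-dominated trade-offs survive
    ```

    This O(n²) filter is only usable when all candidates can be enumerated; the paper's point is that, for discrete problems with k-best single-objective solvers, the same exact front can be obtained without that enumeration.
    
    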

  9. Unicameral bone cyst of the calcaneus - minimally invasive endoscopic surgical treatment. Case report.

    Science.gov (United States)

    Stoica, Ioan Cristian; Pop, Doina Mihaela; Grosu, Florin

    2017-01-01

    The role of arthroscopic surgery in the treatment of various orthopedic pathologies has grown greatly in recent years. Recent publications show that benign bone lesions may benefit from this minimally invasive surgical method, which reduces invasiveness and the period of immobilization while improving visualization. Unicameral bone cysts may be adequately treated by minimally invasive endoscopic surgery. The purpose of the current paper is to present the case of a patient with a unicameral bone cyst of the calcaneus who underwent endoscopically assisted treatment with curettage and bone grafting with allograft from a bone bank, with emphasis on the surgical technique. A unicameral bone cyst is a benign bone lesion which, in symptomatic patients, can be adequately treated by endoscopic curettage and percutaneous injection of morselized bone allograft.

  10. Hong-Ou-Mandel Interference between Two Deterministic Collective Excitations in an Atomic Ensemble

    Science.gov (United States)

    Li, Jun; Zhou, Ming-Ti; Jing, Bo; Wang, Xu-Jie; Yang, Sheng-Jun; Jiang, Xiao; Mølmer, Klaus; Bao, Xiao-Hui; Pan, Jian-Wei

    2016-10-01

    We demonstrate deterministic generation of two distinct collective excitations in one atomic ensemble, and we realize the Hong-Ou-Mandel interference between them. Using Rydberg blockade we create single collective excitations in two different Zeeman levels, and we use stimulated Raman transitions to perform a beam-splitter operation between the excited atomic modes. By converting the atomic excitations into photons, the two-excitation interference is measured by photon coincidence detection with a visibility of 0.89(6). The Hong-Ou-Mandel interference witnesses an entangled NOON state of the collective atomic excitations, and we demonstrate its twofold-enhanced sensitivity to a magnetic field compared with a single excitation. Our work implements a minimal instance of boson sampling and paves the way for further multimode and multiexcitation studies with collective excitations of atomic ensembles.

  11. Local deterministic theory surviving the violation of Bell's inequalities

    International Nuclear Information System (INIS)

    Cormier-Delanoue, C.

    1984-01-01

    Bell's theorem, which asserts that no deterministic theory with hidden variables can give the same predictions as quantum theory, is questioned. Such a deterministic theory is presented and carefully applied to real experiments performed on pairs of correlated photons, derived from the EPR thought experiment. The ensuing predictions violate Bell's inequalities just as quantum mechanics does, and it is further shown that this discrepancy originates in the very nature of radiations. Complete locality is therefore restored while separability remains more limited.

  12. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research…

  13. Development of a Deterministic Optimization Model for Design of an Integrated Utility and Hydrogen Supply Network

    International Nuclear Information System (INIS)

    Hwangbo, Soonho; Lee, In-Beum; Han, Jeehoon

    2014-01-01

    Many networks are constructed in a large-scale industrial complex. Each network meets its demands through production or transportation of the materials needed by the companies in the network. A network either produces materials directly to satisfy demands or purchases them from outside, depending on demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks are typical major networks in a large-scale industrial complex. Many studies have focused on minimizing the total cost or optimizing the network structure, but little research has attempted an integrated model connecting the utility and hydrogen networks. In this study, a deterministic mixed-integer linear programming model is developed to integrate a utility network and a hydrogen network. A steam methane reforming process is needed to combine the two networks: hydrogen produced by steam methane reforming, whose raw material is steam vented from the utility network, enters the hydrogen network and fulfills its demands. The proposed model can suggest the optimal configuration of the integrated network and calculate the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the biggest petrochemical complexes and has served as the data source for various papers. The case study shows that the integrated network model yields better solutions than previous results obtained by studying the utility network and hydrogen network individually.

  14. Analysis of deterministic swapping of photonic and atomic states through single-photon Raman interaction

    Science.gov (United States)

    Rosenblum, Serge; Borne, Adrien; Dayan, Barak

    2017-03-01

    The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.

  15. A Theory of Deterministic Event Structures

    NARCIS (Netherlands)

    Lee, I.; Rensink, Arend; Smolka, S.A.

    1995-01-01

    We present an ω-complete algebra of a class of deterministic event structures, which are labelled prime event structures where the labelling function satisfies a certain distinctness condition. The operators of the algebra are summation, sequential composition and join. Each of these gives rise to a…

  16. Anti-deterministic behaviour of discrete systems that are less predictable than noise

    Science.gov (United States)

    Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.

    2005-05-01

    We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in recurrence plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions, and although some properties of AD data are similar to white noise, the AD dynamics is in fact less predictable than noise and hence different from pseudo-random number generators.
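
    The recurrence-plot diagnostic mentioned above can be sketched with a crude DET-style measure: the fraction of recurrence points that continue along a diagonal line. The logistic map below is a stand-in for an ordinary deterministic system (the threshold and series lengths are illustrative, and this is not an AD system), contrasted with a seeded pseudo-random series:

    ```python
    import random

    def recurrence_matrix(series, eps):
        """R[i][j] is True when states i and j are closer than eps."""
        n = len(series)
        return [[abs(series[i] - series[j]) < eps for j in range(n)]
                for i in range(n)]

    def det_fraction(R):
        """Crude DET measure: fraction of off-diagonal recurrence points
        that continue along a diagonal to (i+1, j+1) or (i-1, j-1)."""
        n = len(R)
        total = on_line = 0
        for i in range(n):
            for j in range(n):
                if i == j or not R[i][j]:
                    continue
                total += 1
                if (i + 1 < n and j + 1 < n and R[i + 1][j + 1]) or \
                   (i > 0 and j > 0 and R[i - 1][j - 1]):
                    on_line += 1
        return on_line / total if total else 0.0

    # deterministic chaos: logistic map iterates
    xs = [0.4]
    for _ in range(400):
        xs.append(3.99 * xs[-1] * (1 - xs[-1]))

    # seeded pseudo-random comparison series
    rng = random.Random(0)
    ys = [rng.random() for _ in range(401)]

    print("logistic DET:", det_fraction(recurrence_matrix(xs, 0.02)))
    print("random   DET:", det_fraction(recurrence_matrix(ys, 0.02)))
    ```

    Ordinary chaos scores markedly higher than noise on this diagonal-line measure; the striking claim of the abstract is that AD systems score low like noise while remaining fully deterministic.
    
    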

  17. Deterministic dynamics of plasma focus discharges

    International Nuclear Information System (INIS)

    Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.

    1992-04-01

    The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed under fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic "internal" dynamics involving degrees of freedom not controlled by the operator, possibly related to adsorption and desorption of impurities from the electrodes. According to these dynamics, the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs

  18. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    Science.gov (United States)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.

  19. On the implementation of a deterministic secure coding protocol using polarization entangled photons

    OpenAIRE

    Ostermeyer, Martin; Walenta, Nino

    2007-01-01

    We demonstrate a prototype-implementation of deterministic information encoding for quantum key distribution (QKD) following the ping-pong coding protocol [K. Bostroem, T. Felbinger, Phys. Rev. Lett. 89 (2002) 187902-1]. Due to the deterministic nature of this protocol the need for post-processing the key is distinctly reduced compared to non-deterministic protocols. In the course of our implementation we analyze the practicability of the protocol and discuss some security aspects of informat...

  20. Advances in stochastic and deterministic global optimization

    CERN Document Server

    Zhigljavsky, Anatoly; Žilinskas, Julius

    2016-01-01

    Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...

  1. Entropy Generation Minimization in Dimethyl Ether Synthesis: A Case Study

    Science.gov (United States)

    Kingston, Diego; Razzitte, Adrián César

    2018-04-01

    Entropy generation minimization is a method that helps improve the efficiency of real processes and devices. In this article, we study the entropy production (due to chemical reactions, heat exchange and friction) in a conventional reactor that synthesizes dimethyl ether and minimize it by modifying different operating variables of the reactor, such as composition, temperature and pressure, while aiming at a fixed production of dimethyl ether. Our results indicate that it is possible to reduce the entropy production rate by nearly 70 % and that, by changing only the inlet composition, it is possible to cut it by nearly 40 %, though this comes at the expense of greater dissipation due to heat transfer. We also study the alternative of coupling the reactor with another, where dehydrogenation of methylcyclohexane takes place. In that case, entropy generation can be reduced by 54 %, when pressure, temperature and inlet molar flows are varied. These examples show that entropy generation analysis can be a valuable tool in engineering design and applications aiming at process intensification and efficient operation of plant equipment.

  2. Deterministic secure communication protocol without using entanglement

    OpenAIRE

    Cai, Qing-yu

    2003-01-01

    We show a deterministic secure direct communication protocol using single qubit in mixed state. The security of this protocol is based on the security proof of BB84 protocol. It can be realized with current technologies.

  3. Influence of wind energy forecast in deterministic and probabilistic sizing of reserves

    Energy Technology Data Exchange (ETDEWEB)

    Gil, A.; Torre, M. de la; Dominguez, T.; Rivas, R. [Red Electrica de Espana (REE), Madrid (Spain). Dept. Centro de Control Electrico

    2010-07-01

One of the challenges in large-scale wind energy integration in electrical systems is coping with wind forecast uncertainties when sizing generation reserves. These reserves must be sized large enough that they do not compromise security of supply or the balance of the system, but economic efficiency must also be kept in mind. This paper describes two methods of sizing spinning reserves that take wind forecast uncertainties into account: a deterministic method using a probabilistic wind forecast, and a probabilistic method using stochastic variables. The deterministic method calculates the spinning reserve needed by adding components, each of them sized to overcome one single uncertainty: demand errors, the biggest thermal generation loss and wind forecast errors. The probabilistic method assumes that demand forecast errors, short-term thermal group unavailability and wind forecast errors are independent stochastic variables and calculates the probability density function of the three variables combined. These methods are being used in the case of the Spanish peninsular system, in which wind energy accounted for 14% of the total electrical energy produced in the year 2009 and which is one of the systems in the world with the highest wind penetration levels. (orig.)

  4. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    Science.gov (United States)

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods for eliminating noise from the resulting depictions. The authors hypothesized that probabilistic tracking could lead to more accurate results because it extracts information from the underlying data more efficiently. Moreover, the authors adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, this work compares the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CNs. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p ... cranial nerves. Probabilistic tracking with a gradual ...
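The two label overlap measures named in this record can be computed directly from binary masks. A minimal sketch, assuming simple 1-D toy masks (the arrays and function names are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(seg, truth):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|)."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    denom = seg.sum() + truth.sum()
    return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0

def false_positive_error(seg, truth):
    """Fraction of segmented voxels lying outside the ground truth."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    return np.logical_and(seg, ~truth).sum() / seg.sum() if seg.sum() else 0.0

tract = np.array([1, 1, 1, 0, 0, 0])   # toy "tracked nerve" mask
truth = np.array([1, 1, 0, 0, 1, 0])   # toy ground-truth mask
print(dice_coefficient(tract, truth))      # 2*2 / (3+3) ≈ 0.667
print(false_positive_error(tract, truth))  # 1/3 ≈ 0.333
```

In the study these measures were evaluated per threshold level as the FA or PICo threshold was gradually raised.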

  5. Deterministic quantitative risk assessment development

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)

    2009-07-01

Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high-risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)

  6. Deterministic network interdiction optimization via an evolutionary approach

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with its interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies, (2) the Ford-Fulkerson algorithm for maximum s-t flow, to analyze each strategy's maximum source-sink flow and, (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective on network interdiction, so that the solutions developed address more realistic scenarios of such problems.
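Step (2) of the approach, evaluating the maximum s-t flow of a candidate interdiction strategy, can be sketched with a plain Edmonds-Karp implementation of Ford-Fulkerson. The toy network, capacities, and interdicted link below are illustrative assumptions, not data from the paper:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    residual = {u: dict(nbrs) for u, nbrs in cap.items()}
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:      # BFS for an augmenting path
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                        # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][w] for u, w in path)
        for u, w in path:                      # push flow, update residual capacities
            residual[u][w] -= push
            residual[w][u] = residual[w].get(u, 0) + push
        flow += push

network = {0: {1: 3, 2: 2}, 1: {2: 1, 3: 2}, 2: {3: 3}, 3: {}}
print(max_flow(network, 0, 3))                 # max s-t flow: 5

interdicted = {u: dict(nbrs) for u, nbrs in network.items()}
interdicted[1][3] = 0                           # interdict link 1 -> 3
print(max_flow(interdicted, 0, 3))              # flow drops to 3
```

In the paper this evaluation is wrapped inside a Monte Carlo / evolutionary loop that searches over which links to interdict under the resource budget.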

  7. Deterministic methods for multi-control fuel loading optimization

    Science.gov (United States)

    Rahman, Fariz B. Abdul

We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Because the Hamiltonian is linear in the control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  8. Distinguishing deterministic and noise components in ELM time series

    International Nuclear Information System (INIS)

Zvejnieks, G.; Kuzovkov, V.N.

    2004-01-01

Full text: One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components of experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles that allows us to distinguish deterministic and noise components. We extended the linear autoregression (AR) method by including non-linearity (the NAR method). As a starting point we have chosen the non-linearity in polynomial form; however, the NAR method can be extended to any other type of non-linear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we have analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The results obtained indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type
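The model-selection step described here, fitting autoregressive models by least squares and scoring them with BIC, can be sketched on a synthetic series. The AR(1) test signal, series length, and candidate orders below are illustrative assumptions; the actual analysis used ELM time series and polynomial NAR terms:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]; returns (coeffs, BIC)."""
    X = np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    n = len(y)
    bic = n * np.log(rss / n) + p * np.log(n)   # Bayesian Information Criterion
    return coef, bic

# Synthetic AR(1) series standing in for the experimental ELM data
rng = np.random.default_rng(0)
x = np.zeros(600)
for t in range(1, 600):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

scores = {p: fit_ar(x, p)[1] for p in (1, 2, 3)}
best = min(scores, key=scores.get)              # lowest BIC wins
coef, _ = fit_ar(x, 1)
print(best, round(coef[0], 2))                  # estimated coefficient near 0.8
```

The NAR extension would simply add polynomial regressors (e.g. x[t-1]**2) to the design matrix before the same least-squares/BIC comparison.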

  9. Optimization of structures subjected to dynamic load: deterministic and probabilistic methods

    Directory of Open Access Journals (Sweden)

    Élcio Cassimiro Alves

Full Text Available Abstract This paper deals with the deterministic and probabilistic optimization of structures against bending when subjected to dynamic loads. The deterministic optimization problem considers the plate subjected to a time-varying load, while the probabilistic one takes into account a random loading defined by a power spectral density function. The correlation between the two problems is made by a Fourier transform. The finite element method is used to model the structures. The sensitivity analysis is performed through the analytical method, and the optimization problem is solved by the interior point method. A comparison between the deterministic optimization and the probabilistic one with a power spectral density function compatible with the time-varying load shows very good results.

  10. TACE with Ar-He Cryosurgery Combined Minimal Invasive Technique for the Treatment of Primary NSCLC in 139 Cases

    Directory of Open Access Journals (Sweden)

    Yunzhi ZHOU

    2010-01-01

Full Text Available Background and objective TACE, Ar-He targeted cryosurgery and radioactive seed implantation are the main minimally invasive methods in the treatment of lung cancer. This article summarizes quality of life after treatment, clinical efficiency and survival, and analyzes the advantages and shortcomings of each method, so as to evaluate the clinical effect of treating non-small cell lung cancer with multiple minimally invasive treatments. Methods All 139 cases were non-small cell lung cancer patients confirmed by pathology, reviewed retrospectively with follow-up from July 2006 to July 2009; all of them had lost the chance of surgery on comprehensive evaluation. Different combinations of minimally invasive treatments were selected according to the blood supply, size and location of the lesion. Among the 139 cases, there were 102 cases of primary cancer and 37 cases of metastasis to the mediastinum, lung and chest wall; 71 cases with abundant blood supply received a combination of superselective target artery chemotherapy, Ar-He targeted cryoablation and radiochemotherapy with seed implantation; 48 cases with poor blood supply received Ar-He targeted cryoablation alone; 20 cases with poor blood supply received a combination of Ar-He targeted cryoablation and radiochemotherapy with seed implantation. The pre- and post-treatment KPS scores, imaging data and follow-up results were then analyzed. Results The KPS score increased by 20.01 on average after treatment. Over 3 years of follow-up there were 44 cases of CR, 87 cases of PR, 3 cases of NC and 5 cases of PD, for an efficiency of 94.2%. Ninety-nine cases survived 1 year (71.2%), 43 cases survived 2 years (30.2%), 4 cases survived over 3 years, and the median survival was 19 months. Average survival was (16±1.5) months. There were no severe complications, such as spinal cord injury or vessel and pericardial aspiration. Conclusion Minimally invasive technique is a highly successful, micro-invasive and effective method with mild complications

  11. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    Science.gov (United States)

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that high-density and low-density populations can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the
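A minimal sketch of a deterministic two-patch model of this kind uses cubic Allee growth coupled by linear dispersal; the growth law and all parameter values below are illustrative assumptions, not taken from the paper, but they reproduce the qualitative regimes described in the abstract:

```python
def simulate(n1, n2, d, r=1.0, A=0.3, K=1.0, dt=0.01, steps=20_000):
    """Euler integration of two patches with Allee growth coupled by dispersal rate d."""
    for _ in range(steps):
        # Cubic Allee growth: negative below threshold A, positive between A and K
        g1 = r * n1 * (n1 / A - 1.0) * (1.0 - n1 / K)
        g2 = r * n2 * (n2 / A - 1.0) * (1.0 - n2 / K)
        # Tuple assignment keeps the two-patch update simultaneous
        n1, n2 = (n1 + dt * (g1 + d * (n2 - n1)),
                  n2 + dt * (g2 + d * (n1 - n2)))
    return n1, n2

# Weak dispersal: the occupied patch persists near K while the empty patch
# settles at low density (the coexistence equilibrium described above)
weak = simulate(0.8, 0.0, d=0.01)
# Strong dispersal: patches synchronize; the average starts above A, so both expand
strong = simulate(0.8, 0.0, d=0.5)
print(weak, strong)
```

Swapping the initial total population below the Allee threshold instead drives the synchronized strong-dispersal case to global extinction.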

  12. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2012-01-01

Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...

  13. Deterministic and efficient quantum cryptography based on Bell's theorem

    International Nuclear Information System (INIS)

    Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg

    2006-01-01

    We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology

  14. A rare case of minimal deviation adenocarcinoma of the uterine cervix in a renal transplant recipient.

    LENUS (Irish Health Repository)

    Fanning, D M

    2009-02-03

INTRODUCTION: We report the first described case of minimal deviation adenocarcinoma of the uterine cervix in the setting of a female renal cadaveric transplant recipient. MATERIALS AND METHODS: A retrospective review of this clinical case was performed. CONCLUSION: This rare cancer represents only about 1% of all cervical adenocarcinomas.

  15. A rare case of minimal deviation adenocarcinoma of the uterine cervix in a renal transplant recipient.

    LENUS (Irish Health Repository)

    Fanning, D M

    2012-02-01

INTRODUCTION: We report the first described case of minimal deviation adenocarcinoma of the uterine cervix in the setting of a female renal cadaveric transplant recipient. MATERIALS AND METHODS: A retrospective review of this clinical case was performed. CONCLUSION: This rare cancer represents only about 1% of all cervical adenocarcinomas.

  16. Development of a model for unsteady deterministic stresses adapted to the multi-stages turbomachines simulation; Developpement d'un modele de tensions deterministes instationnaires adapte a la simulation de turbomachines multi-etagees

    Energy Technology Data Exchange (ETDEWEB)

    Charbonnier, D.

    2004-12-15

The physical phenomena observed in turbomachines are generally three-dimensional and unsteady. A recent study revealed that a three-dimensional steady simulation can reproduce the time-averaged unsteady phenomena, provided the steady flow field equations integrate deterministic stresses. The objective of this work is thus to develop an unsteady deterministic stress model. The analogy with turbulence makes it possible to write transport equations for these stresses. The equations are implemented in a steady flow solver, and a model for the deterministic energy fluxes is also developed and implemented. Finally, this work shows that a three-dimensional steady simulation, taking unsteady effects into account with transport equations for the deterministic stresses, increases the computing time by only approximately 30%, which remains very attractive compared to an unsteady simulation. (author)

  17. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    International Nuclear Information System (INIS)

    Wang Zhi-Gang; Gao Rui-Mei; Fan Xiao-Ming; Han Qi-Xing

    2014-01-01

We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail, the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations. (general)
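The threshold role of ℛ0 can be illustrated even in a single-group SIR sketch; the multi-group MSIR structure and vaccination rate are omitted here, and all parameter values are illustrative assumptions:

```python
def epidemic_final_size(beta, gamma, dt=0.05, steps=40_000):
    """Euler integration of deterministic SIR fractions; returns the final recovered fraction."""
    s, i, r = 0.999, 0.001, 0.0
    for _ in range(steps):
        new_inf = beta * s * i                 # incidence term beta * S * I
        s, i, r = (s - dt * new_inf,
                   i + dt * (new_inf - gamma * i),
                   r + dt * gamma * i)
    return r

# R0 = beta / gamma acts as the threshold between persistence and extinction
print(epidemic_final_size(beta=0.4, gamma=0.2))   # R0 = 2.0: large outbreak
print(epidemic_final_size(beta=0.1, gamma=0.2))   # R0 = 0.5: the disease dies out
```

With ℛ0 = 2 the outbreak infects a large fraction of the population, while with ℛ0 = 0.5 the final recovered fraction stays close to the initial seed, mirroring the deterministic threshold result in the abstract.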

  18. Towards deterministic optical quantum computation with coherently driven atomic ensembles

    International Nuclear Information System (INIS)

    Petrosyan, David

    2005-01-01

    Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons

  19. Recent achievements of the neo-deterministic seismic hazard assessment in the CEI region

    International Nuclear Information System (INIS)

    Panza, G.F.; Vaccari, F.; Kouteva, M.

    2008-03-01

    A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on the deterministic seismic wave propagation modelling at different scales - regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contribution of the earthquake source and seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on CEI region concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas are shown. (author)

  20. Classification and unification of the microscopic deterministic traffic models.

    Science.gov (United States)

    Yang, Bo; Monterola, Christopher

    2015-10-01

    We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
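The optimal velocity (OV) family referred to here takes the form dv/dt = a(V(h) − v) for headway h. A minimal ring-road sketch with an illustrative tanh OV function follows; all parameter values are assumptions chosen so that the standard linear stability condition a > 2V′(h*) holds, and none come from the paper:

```python
import numpy as np

def simulate_ring(n_cars=10, length=25.0, a=2.0, dt=0.05, steps=4000):
    """Optimal velocity model on a ring road: dv/dt = a * (V(headway) - v)."""
    V = lambda h: np.tanh(h - 2.0) + np.tanh(2.0)   # illustrative OV function
    x = np.arange(n_cars) * (length / n_cars)        # uniform initial spacing
    x[0] += 0.01                                     # small perturbation
    v = np.full(n_cars, V(length / n_cars))          # start at uniform-flow speed
    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % length      # distance to the car ahead
        v = v + dt * a * (V(headway) - v)
        x = (x + dt * v) % length
    return v

v = simulate_ring()          # here a = 2.0 > 2 V'(2.5), so uniform flow is stable
print(v.max() - v.min())     # the perturbation decays toward zero
```

Lowering the sensitivity a below 2V′(h*) makes the uniform flow unstable and stop-and-go waves emerge, which is the regime distinction the master-model expansion in the paper formalizes.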

  1. Deterministic algorithms for multi-criteria Max-TSP

    NARCIS (Netherlands)

    Manthey, Bodo

    2012-01-01

    We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of

  2. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    Science.gov (United States)

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
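The core idea, pairing a deterministic solver for abundant components with a stochastic simulator for rare ones, can be sketched in a well-mixed, non-spatial toy. The reactions, rate constants, and operator-splitting scheme below are illustrative assumptions, not the Smoldyn/Virtual Cell coupling itself:

```python
import random

def hybrid_step(a, b, dt, k_prod=5.0, k_dec=0.5, k_birth=0.2, k_death=0.1):
    """One operator-splitting step: ODE for abundant A, jump process for discrete B."""
    a += dt * (k_prod - k_dec * a)                # deterministic Euler update for A
    t = 0.0
    while True:                                   # stochastic B events within [0, dt)
        rate = k_birth * a + k_death * b          # total propensity (A frozen over dt)
        t += random.expovariate(rate)
        if t >= dt:
            break
        if random.random() < k_birth * a / rate:
            b += 1                                # birth of a B molecule, rate k_birth * A
        else:
            b -= 1                                # decay of a B molecule, rate k_death * B
    return a, b

random.seed(1)
a, b = 0.0, 0
for _ in range(20_000):                           # integrate to t = 200
    a, b = hybrid_step(a, b, dt=0.01)
print(round(a, 2), b)  # A relaxes to k_prod/k_dec = 10; B fluctuates around k_birth*A/k_death
```

A spatial hybrid solver of the kind described in the record does the same kind of splitting per time step, but with a PDE solve in place of the scalar ODE and a particle-based simulator in place of the scalar jump process.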

  3. Deterministic Properties of Serially Connected Distributed Lag Models

    Directory of Open Access Journals (Sweden)

    Piotr Nowak

    2013-01-01

Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the dependent variable of one model is the independent variable of the next). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and of their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory, such as probability distributions and the central limit theorem. (original abstract)

  4. Deterministic nanoparticle assemblies: from substrate to solution

    International Nuclear Information System (INIS)

    Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J

    2014-01-01

    The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)

  5. Procedures with deterministic risk in interventionist radiology; Procedimientos con riesgo deterministico en radiologia intervencionista

    Energy Technology Data Exchange (ETDEWEB)

    Tort Ausina, Isabel; Ruiz-Cruces, Rafael; Perez Martinez, Manuel; Carrera Magarino, Francisco; Diez de los Rios, Antonio [Universidad de Malaga (Spain). Facultad de Medicina. Grupo de Investigacion en Proteccion Radiologica]. E-mail: rrcmf@uma.es

    2001-07-01

    Deterministic effects in interventional radiology arise from the irradiation of the skin. The objective of this work is the estimation of the deterministic risk to the organs exposed in IR procedures. Four procedures were selected: covered stent in the abdominal aorta; portacaval shunt; embolization of varicocele; and mesenteric arteriography with venous return. Maximum values of the dose-area product (DAP) are presented, and organ doses were estimated by means of computer programs (Eff-Dose, PCXMC and OSD). The DAP was measured with a flat ionization chamber (PTW Diamentor M2). Although few cases exist so far, high dose values are observed in the kidneys and testes, which implies that recommendations must be applied to avoid high exposures: motivating the personnel to change the irradiation fields, to use collimation, and to use low dose rates in x-ray fluoroscopy. Dispersion between the dose values of the different programs is observed, which raises the question of which of them is the most suitable, although it seems that Eff-Dose could be recommended, based on Report 262 of the NRPB.

  6. Developments based on stochastic and determinist methods for studying complex nuclear systems; Developpements utilisant des methodes stochastiques et deterministes pour l'analyse de systemes nucleaires complexes

    Energy Technology Data Exchange (ETDEWEB)

    Giffard, F X

    2000-05-19

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)

  7. Deterministic hazard quotients (HQs): Heading down the wrong road

    International Nuclear Information System (INIS)

    Wilde, L.; Hunter, C.; Simpson, J.

    1995-01-01

    The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates further risk evaluation is needed, but an HQ < 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify the uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measures of the associated uncertainty. Sensitivity analyses were used to identify the factors most significantly influencing risk determination. Understanding the uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches.
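
    The probabilistic alternative the authors advocate can be sketched as a Monte Carlo computation of the HQ distribution. The lognormal forms and every parameter below are hypothetical stand-ins for measured EPC and EBV data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical lognormal distributions for the exposure point
# concentration (EPC) and ecological benchmark value (EBV);
# the parameters are illustrative, not from any real site data.
epc = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)   # e.g. mg/kg
ebv = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)  # e.g. mg/kg

hq = epc / ebv  # hazard quotient, sample by sample

# Instead of a single deterministic HQ, report the distribution.
print(f"median HQ  = {np.median(hq):.2f}")
print(f"P(HQ >= 1) = {np.mean(hq >= 1):.3f}")
```

    Reporting the median HQ together with P(HQ ≥ 1) quantifies exactly the uncertainty that a single deterministic quotient hides.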

  8. Quantization of the minimal and non-minimal vector field in curved space

    OpenAIRE

    Toms, David J.

    2015-01-01

    The local momentum space method is used to study the quantized massive vector field (the Proca field) with the possible addition of non-minimal terms. Heat kernel coefficients are calculated and used to evaluate the divergent part of the one-loop effective action. It is shown that the naive expression for the effective action that one would write down based on the minimal coupling case needs modification. We adopt a Faddeev-Jackiw method of quantization and consider the case of an ultrastatic...

  9. On Notions of Security for Deterministic Encryption, and Efficient Constructions Without Random Oracles

    NARCIS (Netherlands)

    S. Boldyreva; S. Fehr (Serge); A. O'Neill; D. Wagner

    2008-01-01

    The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO ’07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic

  10. Phase conjugation with random fields and with deterministic and random scatterers

    International Nuclear Information System (INIS)

    Gbur, G.; Wolf, E.

    1999-01-01

    The theory of distortion correction by phase conjugation, developed since the discovery of this phenomenon many years ago, applies to situations when the field that is conjugated is monochromatic and the medium with which it interacts is deterministic. In this Letter a generalization of the theory is presented that applies to phase conjugation of partially coherent waves interacting with either deterministic or random weakly scattering nonabsorbing media. copyright 1999 Optical Society of America

  11. Minimal modification to tribimaximal mixing

    International Nuclear Information System (INIS)

    He Xiaogang; Zee, A.

    2011-01-01

    We explore some ways of minimally modifying the neutrino mixing matrix from tribimaximal, characterized by introducing at most one mixing angle and a CP violating phase, thus extending our earlier work. One minimal modification, motivated to some extent by group theoretic considerations, is a simple case with the elements V_α2 of the second column of the mixing matrix equal to 1/√3. Modifications keeping one of the columns or one of the rows unchanged from tribimaximal mixing all belong to this class of minimal modification. Some of the cases have interesting experimentally testable consequences. In particular, the T2K and MINOS collaborations have recently reported indications of a nonzero θ_13. For the cases we consider, the new data sharply constrain the CP violating phase angle δ, with δ close to 0 (in some cases) and π disfavored.
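
    For reference, the tribimaximal pattern that these modifications perturb is, up to the sign and phase conventions chosen in the literature:

```latex
U_{\mathrm{TBM}} =
\begin{pmatrix}
\sqrt{2/3} & 1/\sqrt{3} & 0 \\
-1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\
-1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2}
\end{pmatrix},
\qquad
V_{\alpha 2} = \frac{1}{\sqrt{3}} \quad \text{for } \alpha = e, \mu, \tau .
```

    In this convention every element of the second column equals 1/√3, which is the property preserved by the first minimal modification mentioned in the abstract.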

  12. Comparison of Non-overlapping and Overlapping Local/Global Iteration Schemes for Whole-Core Deterministic Transport Calculation

    International Nuclear Information System (INIS)

    Yuk, Seung Su; Cho, Bumhee; Cho, Nam Zin

    2013-01-01

    In the case of a deterministic transport model, a fixed-k problem formulation is necessary and the overlapping local domain is chosen. However, as mentioned in, the partial current-based Coarse Mesh Finite Difference (p-CMFD) procedure also enables non-overlapping local/global (NLG) iteration. In this paper, NLG iteration is combined with p-CMFD and with CMFD (augmented with a concept of p-CMFD), respectively, and compared to overlapping local/global (OLG) iteration on a 2-D test problem. Non-overlapping local/global iteration with p-CMFD and CMFD global calculation is introduced and tested on a 2-D deterministic transport problem. The modified C5G7 problem is analyzed with both NLG and OLG methods, and the solutions converge to the reference solution except for some cases of NLG with CMFD. NLG with CMFD gives the best performance if the solution converges, but if the fission-source iteration in the local calculation is not sufficient, it is prone to diverge. The p-CMFD global solver gives unconditional convergence (for both OLG and NLG). A study of a switching scheme is in progress, in which NLG/p-CMFD is used as a 'starter' and then switched to NLG/CMFD to render the whole-core transport calculation more efficient and robust. Parallel computation is another obvious line of future work.

  13. A Numerical Simulation for a Deterministic Compartmental ...

    African Journals Online (AJOL)

    In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Euler's numerical method. Using hypothetical values for the parameters, a program was written in the VISUAL BASIC programming language to generate series for the system of difference equations from the ...
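
    The Euler scheme for such a compartmental model can be sketched as follows, in Python rather than Visual Basic; the two-compartment susceptible/infected system and all parameter values are illustrative, not the paper's actual model:

```python
def euler_hiv_model(s0, i0, beta, mu, dt, steps):
    """Forward-Euler iteration of a minimal two-compartment
    susceptible/infected model with balanced births and deaths.
    This is a sketch: the paper's actual compartments, equations
    and parameter values are not reproduced here."""
    s, i = s0, i0
    history = [(s, i)]
    for _ in range(steps):
        n = s + i
        ds = mu * n - beta * s * i / n - mu * s  # births - infection - deaths
        di = beta * s * i / n - mu * i           # infection - deaths
        s, i = s + dt * ds, i + dt * di          # Euler update
        history.append((s, i))
    return history

hist = euler_hiv_model(s0=990.0, i0=10.0, beta=0.3, mu=0.02, dt=0.1, steps=1000)
print(hist[-1])  # approaches the endemic equilibrium s/n = mu/beta
```

    With balanced births and deaths the total population is conserved exactly at each Euler step, which is a convenient sanity check on the difference equations.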

  14. SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations

    International Nuclear Information System (INIS)

    Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir

    2014-01-01

    The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep penetration Monte Carlo (MC) shielding problem of a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory methods (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (biased source distribution and importance map) in an automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim in this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and manual (i.e. analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the (memory-intensive) deterministic module with the broad-group library v7-27n19g, as opposed to the fine-group library v7-200n47g used with the MC module, to fully capture low-energy particle transport and secondary gamma emission. Compared with

  16. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.

  17. Retroperitoneal abscess after transanal minimally invasive surgery: case report and review of literature

    Directory of Open Access Journals (Sweden)

    Aaron Raney

    2017-10-01

    Full Text Available Abscesses are a rare complication of transanal minimally invasive surgery (TAMIS) and transanal endoscopic microsurgery (TEMS). Reported cases have been in the rectal and pre-sacral areas and have been managed with either antibiotics alone or antibiotics in conjunction with laparotomy and diverting colostomy. We report a case of a large retroperitoneal abscess following a TAMIS full-thickness rectal polyp excision. The patient was successfully managed conservatively with antibiotics and a percutaneous drain. Retroperitoneal infection should be included in the differential diagnosis following a TAMIS procedure, as the presentation can be insidious and timely intervention is needed to prevent further morbidity. Keywords: Colorectal surgery, Transanal minimally invasive surgery (TAMIS), Retroperitoneal abscess, Natural orifice transluminal endoscopic surgery (NOTES), Single-site laparoscopic surgery (SILS), Surgical oncology

  18. Are deterministic methods suitable for short term reserve planning?

    International Nuclear Information System (INIS)

    Voorspools, Kris R.; D'haeseleer, William D.

    2005-01-01

    Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks whether the N-1 reserve is a logical fixed value for the minutes reserve. The second test procedure investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as the minutes reserve. The expected jump in reliability, with low reliability for reserve margins below the largest unit and high reliability above it, is not observed. The second test shows that neither the N-1 reserve nor the percentage reserve method provides a stable reliability level that is independent of power demand. For the N-1 reserve, reliability increases with decreasing maximum demand. For the percentage reserve, reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that probability-based methods are to be preferred over deterministic methods.
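
    The loss-of-load criterion used here as the objective stochastic yardstick can be sketched for a toy system; the unit sizes and forced outage rates below are hypothetical, and brute-force enumeration stands in for the usual capacity-outage probability table:

```python
from itertools import product

def loss_of_load_probability(units, demand):
    """Probability that available generation falls short of demand.

    `units` is a list of (capacity_mw, forced_outage_rate) pairs;
    all 2**n up/down outcomes are enumerated, which is fine for a
    handful of units and stands in for a capacity-outage table."""
    lolp = 0.0
    for states in product((0, 1), repeat=len(units)):
        prob, cap = 1.0, 0.0
        for up, (c, fo_rate) in zip(states, units):
            prob *= (1.0 - fo_rate) if up else fo_rate
            cap += c if up else 0.0
        if cap < demand:
            lolp += prob
    return lolp

# Four hypothetical units: (capacity in MW, forced outage rate).
units = [(400, 0.05), (300, 0.04), (200, 0.03), (200, 0.03)]
largest = max(c for c, _ in units)   # N-1 reserve = 400 MW

# Both demand levels leave a reserve of at least the largest unit,
# yet the loss-of-load probability differs between them.
for demand in (600, 700):
    print(demand, loss_of_load_probability(units, demand))
```

    Both demand levels satisfy the N-1 rule (reserve ≥ 400 MW) yet yield different loss-of-load probabilities, which illustrates the demand-dependence the paper's second test exposes.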

  19. The Relation between Deterministic Thinking and Mental Health among Substance Abusers Involved in a Rehabilitation Program

    Directory of Open Access Journals (Sweden)

    Seyed Jalal Younesi

    2015-06-01

    Full Text Available Objective: The current research investigates the relation between deterministic thinking and mental health among drug abusers, in which the role of cognitive distortions is considered and clarified by focusing on deterministic thinking. Methods: The present study is descriptive and correlational. All individuals with experience of drug abuse who had been referred to the Shafagh Rehabilitation center (Kahrizak) were considered as the statistical population. 110 individuals who were addicted to drugs (stimulants and methamphetamine) were selected from this population by purposeful sampling to answer questionnaires on deterministic thinking and general health. For data analysis, Pearson correlation coefficients and regression analysis were used. Results: The results showed that there is a positive and significant relationship between deterministic thinking and the lack of mental health (r = 0.22, P < 0.05); among the factors of mental health, anxiety and depression had the closest relation to deterministic thinking. The two factors of deterministic thinking that function as the strongest predictors of the lack of mental health are definitiveness in predicting tragic events and future anticipation. Discussion: It seems that drug abusers resort to deterministic thinking when they are confronted with difficult situations, and so are more affected by depression and anxiety. This way of thinking may play a major role in impelling or restraining drug addiction.

  20. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-01-01

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  1. One-step deterministic multipartite entanglement purification with linear optics

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Yu-Bo [Department of Physics, Tsinghua University, Beijing 100084 (China); Long, Gui Lu, E-mail: gllong@tsinghua.edu.cn [Department of Physics, Tsinghua University, Beijing 100084 (China); Center for Atomic and Molecular NanoSciences, Tsinghua University, Beijing 100084 (China); Key Laboratory for Quantum Information and Measurements, Beijing 100084 (China); Deng, Fu-Guo [Department of Physics, Applied Optics Beijing Area Major Laboratory, Beijing Normal University, Beijing 100875 (China)

    2012-01-09

    We present a one-step deterministic multipartite entanglement purification scheme for an N-photon system in a Greenberger–Horne–Zeilinger state with linear optical elements. The parties in quantum communication can in principle obtain a maximally entangled state from each N-photon system with a success probability of 100%. That is, it does not largely consume the less-entangled photon systems, which is far different from other multipartite entanglement purification schemes. This feature may make the scheme more feasible in practical applications. -- Highlights: ► We proposed a deterministic entanglement purification scheme for GHZ states. ► The scheme uses only linear optical elements and has a success probability of 100%. ► The scheme gives a purified GHZ state in just one step.

  2. Is non-minimal inflation eternal?

    International Nuclear Information System (INIS)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-01-01

    The possibility that non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as in the minimal case in the slow-roll limit. Armed with this result, we study some concrete non-minimal inflationary models, including chaotic inflation and natural inflation, in which the inflaton is non-minimally coupled to gravity. We find that non-minimal coupling inflation can be eternal in some regions of parameter space.

  3. Safety margins in deterministic safety analysis

    International Nuclear Information System (INIS)

    Viktorov, A.

    2011-01-01

    The concept of safety margins has acquired certain prominence in the attempts to demonstrate quantitatively the level of the nuclear power plant safety by means of deterministic analysis, especially when considering impacts from plant ageing and discovery issues. A number of international or industry publications exist that discuss various applications and interpretations of safety margins. The objective of this presentation is to bring together and examine in some detail, from the regulatory point of view, the safety margins that relate to deterministic safety analysis. In this paper, definitions of various safety margins are presented and discussed along with the regulatory expectations for them. Interrelationships of analysis input and output parameters with corresponding limits are explored. It is shown that the overall safety margin is composed of several components each having different origins and potential uses; in particular, margins associated with analysis output parameters are contrasted with margins linked to the analysis input. While these are separate, it is possible to influence output margins through the analysis input, and analysis method. Preserving safety margins is tantamount to maintaining safety. At the same time, efficiency of operation requires optimization of safety margins taking into account various technical and regulatory considerations. For this, basic definitions and rules for safety margins must be first established. (author)

  4. Deterministic global optimization an introduction to the diagonal approach

    CERN Document Server

    Sergeyev, Yaroslav D

    2017-01-01

    This book begins with a concentrated introduction to deterministic global optimization and moves forward to present new original results from the authors, who are well-known experts in the field. Multiextremal continuous problems that have an unknown structure, with Lipschitz objective functions and functions having first Lipschitz derivatives, defined over hyperintervals, are examined. A class of algorithms using several Lipschitz constants is introduced, which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy applied to partitioning the search domain. In addition, a survey of derivative-free methods and methods using first derivatives is given for both the one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods are discussed, with examples of applications and problems arising in the numerical testing of global optimization algorithms...

  5. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    Science.gov (United States)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    The distribution system network today faces the challenge of meeting increased load demand from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and on temporal factors such as the season of the year, the day of the week or the time of day. In deterministic radial distribution load flow studies the load is taken as constant; in reality, load varies continually and with a high degree of uncertainty, so there is a need to model probable realistic load. Monte-Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained in each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
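
    The Monte-Carlo procedure described, drawing random active and reactive loads from their mean and standard deviation and re-running a deterministic solver, can be sketched with a toy one-line feeder standing in for a full radial load-flow code (all figures hypothetical):

```python
import numpy as np

def feeder_voltage(p_load, q_load, r=0.05, x=0.10, v_src=1.0):
    """Toy deterministic 'load flow' for a single radial line:
    receiving-end voltage via the approximate drop (r*P + x*Q)/V.
    It stands in for a full radial distribution load-flow solver."""
    return v_src - (r * p_load + x * q_load) / v_src

rng = np.random.default_rng(0)
n = 50_000

# Probable realistic load: random active and reactive power drawn
# from the load's mean and standard deviation (hypothetical values).
p = rng.normal(0.8, 0.08, n)   # per unit
q = rng.normal(0.4, 0.04, n)   # per unit

v = feeder_voltage(p, q)       # one deterministic solve per sample

# Reconstruct the probabilistic solution from the deterministic runs.
print(f"mean V = {v.mean():.4f} p.u., std V = {v.std():.4f} p.u.")
print(f"deterministic V at mean load = {feeder_voltage(0.8, 0.4):.4f} p.u.")
```

    The spread of the sampled voltages is exactly the information a single deterministic run at the mean load cannot provide.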

  6. Levy-like behaviour in deterministic models of intelligent agents exploring heterogeneous environments

    International Nuclear Information System (INIS)

    Boyer, D; Miramontes, O; Larralde, H

    2009-01-01

    Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ∼ k^(−β), in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
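
    A sketch of such a walker, with distance/weight as one plausible form of the cost-versus-gain rule (the paper's exact functional form may differ) and Pareto-distributed weights giving p(k) ∼ k^(−β):

```python
import math
import random

random.seed(1)

# 300 targets scattered on a 100 x 100 plane; weights drawn so that
# p(k) ~ k^(-beta) with beta = 2 (paretovariate(beta - 1) has that tail).
beta = 2.0
sites = [(random.uniform(0, 100), random.uniform(0, 100),
          random.paretovariate(beta - 1.0)) for _ in range(300)]

def walk(sites, start=(50.0, 50.0), steps=50):
    """Deterministic walker: at each step, move to the unvisited site
    minimizing distance/weight. This cost rule is one plausible form of
    the cost-versus-gain trade-off; the paper's exact rule may differ."""
    pos, unvisited, lengths = start, list(sites), []
    for _ in range(steps):
        nxt = min(unvisited, key=lambda s: math.dist(pos, s[:2]) / s[2])
        lengths.append(math.dist(pos, nxt[:2]))
        pos = nxt[:2]
        unvisited.remove(nxt)
    return lengths

lengths = walk(sites)
# Heavy-tailed weights produce a mix of short hops and long jumps.
print(min(lengths), max(lengths))
```

    Although every move is deterministic, the heavy-tailed weights in the environment produce step-length statistics mixing many short hops with occasional long jumps, the Lévy-like signature described above.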

  7. A deterministic approach for performance assessment and optimization of power distribution units in Iran

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.

    2009-01-01

    This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units were considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach, since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. The DEA models are verified and validated by PCA, Spearman and Kendall's tau correlation techniques, whereas previous studies lack these verification and validation features. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for the assessment and optimization of power distribution units in Iran.

  8. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not defined by the subset-construction rules) is used. Past work described only the algorithm for the AND-operator (intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
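The intersection of regular languages that the abstract refers to is, in its standard textbook form, a product construction; the sketch below shows that form (not the authors' NFA-overriding method) for two small DFAs over the alphabet {a, b}.

```python
from itertools import product

def dfa_intersection(Q1, Q2, delta1, delta2, s1, s2, F1, F2, alphabet):
    """Product construction: the intersection DFA runs both machines in
    lockstep; a pair state is accepting iff both components accept."""
    states = set(product(Q1, Q2))
    delta = {((q1, q2), a): (delta1[(q1, a)], delta2[(q2, a)])
             for (q1, q2) in states for a in alphabet}
    start = (s1, s2)
    accept = {(q1, q2) for (q1, q2) in states if q1 in F1 and q2 in F2}
    return states, delta, start, accept

def accepts(delta, start, accept, word):
    """Run the DFA on a word and report acceptance."""
    q = start
    for a in word:
        q = delta[(q, a)]
    return q in accept

# DFA A: even number of 'a'; DFA B: word ends with 'b'
Q1, d1, s1, F1 = {0, 1}, {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0}
Q2, d2, s2, F2 = {0, 1}, {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}, 0, {1}
S, D, st, Acc = dfa_intersection(Q1, Q2, d1, d2, s1, s2, F1, F2, {'a', 'b'})
print(accepts(D, st, Acc, "aab"))  # True: two 'a's and ends in 'b'
print(accepts(D, st, Acc, "ab"))   # False: odd number of 'a'
```

The complement of a complete DFA is obtained by keeping the transitions and flipping the accepting set; subtraction is then intersection with a complement.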

  9. Applicability of deterministic methods in seismic site effects modeling

    International Nuclear Information System (INIS)

    Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.

    2005-01-01

    The up-to-date information on the local geological structure in the Bucharest urban area has been integrated into complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations over the whole city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong-motion records in the frequency domain (0.05-1) Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique (applied to model the seismic wave propagation between the seismic source and the studied sites) with the mode coupling approach (used to model the seismic wave propagation through the local sedimentary structure of the target site), make it possible to extend the modelling to higher frequencies of earthquake-engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last three Vrancea strong events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong-motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)

  10. Deterministic multimode photonic device for quantum-information processing

    DEFF Research Database (Denmark)

    Nielsen, Anne E. B.; Mølmer, Klaus

    2010-01-01

    We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...

  11. Line and lattice networks under deterministic interference models

    NARCIS (Netherlands)

    Goseling, Jasper; Gastpar, Michael; Weber, Jos H.

    Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of

  12. Deterministic and stochastic transport theories for the analysis of complex nuclear systems

    International Nuclear Information System (INIS)

    Giffard, F.X.

    2000-01-01

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)

  13. Developments based on stochastic and determinist methods for studying complex nuclear systems

    International Nuclear Information System (INIS)

    Giffard, F.X.

    2000-01-01

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)

  14. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    Science.gov (United States)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and on a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered as unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties, until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified case studies, with hydraulic response data generated from hypothetical karstic models with increasing complexity of the network geometry and of the matrix heterogeneity.
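The record does not give the optimization scheme in detail; as a generic illustration of deterministic inversion, the sketch below fits the parameters of a toy linear forward model to observations by gradient descent on a squared-misfit objective. The names `forward` and `invert` and the linear model are invented for the example; the actual DNDI forward model is a coupled discrete-continuum flow simulation.

```python
def forward(params, xs):
    """Toy forward model standing in for the flow simulator:
    predicted head h(x) = a * x + b."""
    a, b = params
    return [a * x + b for x in xs]

def invert(obs, xs, iters=500, lr=0.1):
    """Deterministic gradient-descent inversion of the misfit
    J = sum (h_pred - h_obs)^2, iteratively correcting the model
    parameters until the observed data are reproduced."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        pred = forward((a, b), xs)
        # gradients of J with respect to a and b
        ga = sum(2 * (p - o) * x for p, o, x in zip(pred, obs, xs))
        gb = sum(2 * (p - o) for p, o in zip(pred, obs))
        a -= lr * ga / len(xs)
        b -= lr * gb / len(xs)
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
obs = [1.0, 3.0, 5.0, 7.0, 9.0]   # generated by a = 2, b = 1
a, b = invert(obs, xs)
print(round(a, 2), round(b, 2))  # 2.0 1.0
```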

  15. Deterministic factor analysis: methods of integro-differentiation of non-integral order

    Directory of Open Access Journals (Sweden)

    Valentina V. Tarasova

    2016-12-01

    Full Text Available Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in an effective economic indicator. Two methods of integro-differentiation of non-integral order are proposed for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results, compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration), for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method of integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
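A standard discrete approximation of a derivative of non-integral order, of the kind the abstract describes, is the Grünwald-Letnikov scheme. The sketch below is a generic implementation, not the authors' formulation; for α = 1 it reduces to the ordinary backward difference, which gives a simple sanity check.

```python
def gl_weights(alpha, n):
    """Gruenwald-Letnikov coefficients w_j = (-1)^j * C(alpha, j),
    computed by the recurrence w_j = w_{j-1} * (j - 1 - alpha) / j."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return w

def gl_derivative(f_vals, alpha, h):
    """Fractional derivative of order alpha of samples f_vals taken on
    a uniform grid of step h."""
    n = len(f_vals)
    w = gl_weights(alpha, n)
    out = []
    for k in range(n):
        acc = sum(w[j] * f_vals[k - j] for j in range(k + 1))
        out.append(acc / h ** alpha)
    return out

# Sanity check: for alpha = 1 the scheme is the backward difference
h = 0.01
xs = [i * h for i in range(200)]
fs = [x ** 2 for x in xs]
d1 = gl_derivative(fs, 1.0, h)
# d/dx x^2 = 2x; the backward difference of x^2 is 2x - h
print(abs(d1[100] - (2 * xs[100] - h)) < 1e-9)  # True
```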

  16. Study on deterministic response time design for a class of nuclear Instrumentation and Control systems

    International Nuclear Information System (INIS)

    Chen, Chang-Kuo; Hou, Yi-You; Luo, Cheng-Long

    2012-01-01

    Highlights: ► An efficient design procedure for deterministic response time design of nuclear I and C systems. ► We model the concurrent operations based on sequence diagrams and Petri nets. ► The model can achieve deterministic behavior by using symbolic time representation. ► An illustrative example of the bistable processor logic is given. - Abstract: This study is concerned with deterministic response time design for computer-based systems in the nuclear industry. In the current approach, Petri nets are used to model the requirements of a system specified with sequence diagrams. Linear logic is also used to characterize state changes in the Petri net model accurately, by means of a symbolic time representation, for the purpose of acquiring deterministic behavior. An illustrative example of the bistable processor logic is provided to demonstrate the practicability of the proposed approach.

  17. The detection and stabilisation of limit cycle for deterministic finite automata

    Science.gov (United States)

    Han, Xiaoguang; Chen, Zengqiang; Liu, Zhongxin; Zhang, Qing

    2018-04-01

    In this paper, the topological structure properties of deterministic finite automata (DFA), under the framework of the semi-tensor product of matrices, are investigated. First, the dynamics of DFA are converted into a new algebraic form as a discrete-time linear system by means of Boolean algebra. Using this algebraic description, the approach of calculating the limit cycles of different lengths is given. Second, we present two fundamental concepts, namely, domain of attraction of limit cycle and prereachability set. Based on the prereachability set, an explicit solution of calculating domain of attraction of a limit cycle is completely characterised. Third, we define the globally attractive limit cycle, and then the necessary and sufficient condition for verifying whether all state trajectories of a DFA enter a given limit cycle in a finite number of transitions is given. Fourth, the problem of whether a DFA can be stabilised to a limit cycle by the state feedback controller is discussed. Criteria for limit cycle-stabilisation are established. All state feedback controllers which implement the minimal length trajectories from each state to the limit cycle are obtained by using the proposed algorithm. Finally, an illustrative example is presented to show the theoretical results.
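Once a controlled DFA is reduced to a deterministic transition map on its states, limit cycles and their domains of attraction can be computed by simple trajectory following. The sketch below illustrates this on a toy 6-state map; the semi-tensor-product machinery of the paper is not reproduced here.

```python
def limit_cycles(next_state, states):
    """Find all limit cycles of a deterministic transition map by
    following each state's trajectory until it repeats."""
    cycles = set()
    for s in states:
        seen = {}
        q, t = s, 0
        while q not in seen:
            seen[q] = t
            q = next_state[q]
            t += 1
        # states visited from the first occurrence of q onward form the cycle
        cyc = [x for x, when in seen.items() if when >= seen[q]]
        # rotate to a canonical form so each cycle is counted once
        i = cyc.index(min(cyc))
        cycles.add(tuple(cyc[i:] + cyc[:i]))
    return cycles

def attraction_domain(next_state, states, cycle):
    """States whose trajectory eventually enters the given cycle."""
    target = set(cycle)
    dom = set()
    for s in states:
        q, seen = s, set()
        while q not in target and q not in seen:
            seen.add(q)
            q = next_state[q]
        if q in target:
            dom.add(s)
    return dom

# 6-state closed-loop map: 0->1->2->0 is a 3-cycle, 3 and 4 feed into it,
# 5 is a fixed point (a limit cycle of length one)
f = {0: 1, 1: 2, 2: 0, 3: 0, 4: 3, 5: 5}
print(sorted(limit_cycles(f, range(6))))                   # [(0, 1, 2), (5,)]
print(sorted(attraction_domain(f, range(6), (0, 1, 2))))   # [0, 1, 2, 3, 4]
```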

  18. Deterministic Brownian motion generated from differential delay equations.

    Science.gov (United States)

    Lei, Jinzhi; Mackey, Michael C

    2011-10-01

    This paper addresses the question of how Brownian-like motion can arise from the solution of a deterministic differential delay equation. To study this, we analytically examine the bifurcation properties of an apparently simple differential delay equation and then numerically investigate the probabilistic properties of chaotic solutions of the same equation. Our results show that solutions of the deterministic equation with randomly selected initial conditions display a Gaussian-like density at long times, but the densities are supported on an interval of finite measure. Using these chaotic solutions as velocities, we are able to produce Brownian-like motions, which show statistical properties akin to those of a classical Brownian motion over both short and long time scales. Several conjectures are formulated for the probabilistic properties of the solution of the differential delay equation. Numerical studies suggest that these conjectures could be "universal" for similar types of "chaotic" dynamics, but we have been unable to prove this.
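The paper's specific delay equation is not reproduced in this record; as a stand-in, the sketch below Euler-integrates the Mackey-Glass delay equation, a classic chaotic delay differential equation, and integrates its (crudely centered) solution as a velocity to obtain a Brownian-like trajectory. The parameter values and the centering offset are assumptions for the example.

```python
# Euler integration of the Mackey-Glass delay equation
#   x'(t) = beta * x(t - tau) / (1 + x(t - tau)**10) - gamma * x(t)
# (a stand-in chaotic DDE; the paper analyses a different equation)
beta, gamma, tau, dt = 0.2, 0.1, 17.0, 0.1
lag = int(tau / dt)
hist = [0.5] * (lag + 1)  # constant initial history on [-tau, 0]

position, pos = [], 0.0
for step in range(5000):
    x, x_tau = hist[-1], hist[-1 - lag]
    dx = beta * x_tau / (1.0 + x_tau ** 10) - gamma * x
    hist.append(x + dt * dx)
    v = hist[-1] - 1.0  # crudely center the chaotic signal to use as velocity
    pos += v * dt       # Brownian-like trajectory = integrated velocity
    position.append(pos)

print(len(position))  # 5000
```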

  19. Analysis of deterministic cyclic gene regulatory network models with delays

    CERN Document Server

    Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian

    2015-01-01

    This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays and a special case of a homogenous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.

  20. Non deterministic finite automata for power systems fault diagnostics

    Directory of Open Access Journals (Sweden)

    LINDEN, R.

    2009-06-01

    Full Text Available This paper introduces an application based on finite non-deterministic automata for power systems diagnosis. Automata for the simpler faults are presented and the proposed system is compared with an established expert system.

  1. A deterministic width function model

    Directory of Open Access Journals (Sweden)

    C. E. Puente

    2003-01-01

    Full Text Available Use of a deterministic fractal-multifractal (FM geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States, that the FM approach may also be used to closely approximate existing width functions.

  2. A Deterministic Annealing Approach to Clustering AIRS Data

    Science.gov (United States)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions, by contrasting them with a nonparametric method called Deterministic Annealing.
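Deterministic annealing replaces hard cluster assignments with Gibbs probabilities at a temperature that is gradually lowered, so the assignments harden as the system "cools". The sketch below is a minimal one-dimensional version with a fixed number of clusters and a spread initialization; the full method grows clusters through phase transitions, which is not reproduced here.

```python
import math
import random

def deterministic_annealing(data, k, t_init=5.0, t_min=0.01, cooling=0.9):
    """Deterministic annealing clustering (sketch): soft assignments
    p(c|x) ~ exp(-(x - y_c)^2 / T); centroids are probability-weighted
    means; the temperature T is lowered until assignments harden."""
    centers = [min(data), max(data)] if k == 2 else random.sample(data, k)
    T = t_init
    while T > t_min:
        # E-step: Gibbs association probabilities at temperature T
        probs = []
        for x in data:
            ws = [math.exp(-(x - y) ** 2 / T) for y in centers]
            z = sum(ws)
            probs.append([w / z for w in ws])
        # M-step: move each centroid to its weighted mean
        for c in range(k):
            tot = sum(p[c] for p in probs)
            centers[c] = sum(p[c] * x for p, x in zip(probs, data)) / tot
        T *= cooling  # annealing schedule helps avoid poor local minima
    return sorted(centers)

random.seed(1)
data = [random.gauss(0, 0.3) for _ in range(100)] + \
       [random.gauss(5, 0.3) for _ in range(100)]
centers = deterministic_annealing(data, 2)
print(centers)  # two centers, one near 0 and one near 5
```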

  3. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko

    2011-03-17

    We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production, and a large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem, as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.

  4. Minimally invasive treatment of pilon fractures with a low profile plate: preliminary results in 17 cases

    NARCIS (Netherlands)

    Borens, Olivier; Kloen, Peter; Richmond, Jeffrey; Roederer, Goetz; Levine, David S.; Helfet, David L.

    2009-01-01

    To determine the results of "biologic fixation" with a minimally invasive plating technique using a newly designed low profile "Scallop" plate in the treatment of pilon fractures. Retrospective case series. A tertiary referral center. Seventeen patients were treated between 1999 and 2001 for a

  5. Deterministic chaos at the ocean surface: applications and interpretations

    Directory of Open Access Journals (Sweden)

    A. J. Palmer

    1998-01-01

    Full Text Available Ocean surface, grazing-angle radar backscatter data from two separate experiments, one of which provided coincident time series of measured surface winds, were found to exhibit signatures of deterministic chaos. Evidence is presented that the lowest dimensional underlying dynamical system responsible for the radar backscatter chaos is that which governs the surface wind turbulence. Block-averaging time was found to be an important parameter for determining the degree of determinism in the data as measured by the correlation dimension, and by the performance of an artificial neural network in retrieving wind and stress from the radar returns, and in radar detection of an ocean internal wave. The correlation dimensions are lowered and the performance of the deterministic retrieval and detection algorithms are improved by averaging out the higher dimensional surface wave variability in the radar returns.

  6. Empirical and deterministic accuracies of across-population genomic prediction

    NARCIS (Netherlands)

    Wientjes, Y.C.J.; Veerkamp, R.F.; Bijma, P.; Bovenhuis, H.; Schrooten, C.; Calus, M.P.L.

    2015-01-01

    Background: Differences in linkage disequilibrium and in allele substitution effects of QTL (quantitative trait loci) may hinder genomic prediction across populations. Our objective was to develop a deterministic formula to estimate the accuracy of across-population genomic prediction, for which

  7. Automatic design of deterministic and non-halting membrane systems by tuning syntactical ingredients.

    Science.gov (United States)

    Zhang, Gexiang; Rong, Haina; Ou, Zhu; Pérez-Jiménez, Mario J; Gheorghe, Marian

    2014-09-01

    To solve the programmability issue of membrane computing models, the automatic design of membrane systems is a newly initiated and promising research direction. In this paper, we propose an automatic design method, Permutation Penalty Genetic Algorithm (PPGA), for a deterministic and non-halting membrane system, obtained by tuning membrane structures, initial objects and evolution rules. The main ideas of PPGA are the introduction of a permutation encoding technique for a membrane system, a penalty-function evaluation approach for a candidate membrane system, and a genetic algorithm for evolving a population of membrane systems toward a successful one fulfilling a given computational task. Experimental results show that PPGA can successfully accomplish the automatic design of a cell-like membrane system for computing the square of n (n ≥ 1 a natural number) and can find the minimal membrane systems with respect to their membrane structures, alphabet, initial objects, and evolution rules for fulfilling the given task. We also provide guidelines on how to set the parameters of PPGA.

  8. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
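GRESS and ADGEN instrument a code so that derivatives propagate through its equations alongside the values. The same idea in miniature is forward-mode automatic differentiation with dual numbers, sketched below on a toy model; this is illustrative only, not the GRESS/ADGEN implementation.

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    carrying (value, derivative) pairs through the model equations
    yields exact sensitivities, free of finite-difference noise."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(k):
    # toy "computer model": response y = 3*k^2 + 2*k
    return 3 * k * k + 2 * k

x = Dual(2.0, 1.0)  # seed derivative d(k)/d(k) = 1
y = model(x)
print(y.val)  # 16.0
print(y.der)  # dy/dk = 6k + 2 = 14.0
```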

  9. Deterministic dense coding and entanglement entropy

    International Nuclear Information System (INIS)

    Bourdon, P. S.; Gerjuoy, E.; McDonald, J. P.; Williams, H. T.

    2008-01-01

    We present an analytical study of the standard two-party deterministic dense-coding protocol, under which communication of perfectly distinguishable messages takes place via a qudit from a pair of nonmaximally entangled qudits in a pure state |ψ>. Our results include the following: (i) We prove that it is possible for a state |ψ> with lower entanglement entropy to support the sending of a greater number of perfectly distinguishable messages than one with higher entanglement entropy, confirming a result suggested via numerical analysis in Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. (ii) By explicit construction of families of local unitary operators, we verify, for dimensions d = 3 and d = 4, a conjecture of Mozes et al. about the minimum entanglement entropy that supports the sending of d + j messages, 2 ≤ j ≤ d - 1; moreover, we show that the j = 2 and j = d - 1 cases of the conjecture are valid in all dimensions. (iii) Given that |ψ> allows the sending of K messages and has √λ_0 as its largest Schmidt coefficient, we show that the inequality λ_0 ≤ d/K, established by Wu et al. [Phys. Rev. A 73, 042311 (2006)], must actually take the form λ_0 < d/K if K = d + 1, while our constructions of local unitaries show that equality can be realized if K = d + 2 or K = 2d - 1

  10. Mixed motion in deterministic ratchets due to anisotropic permeability

    NARCIS (Netherlands)

    Kulrattanarak, T.; Sman, van der R.G.M.; Lubbersen, Y.S.; Schroën, C.G.P.H.; Pham, H.T.M.; Sarro, P.M.; Boom, R.M.

    2011-01-01

    Nowadays microfluidic devices are becoming popular for cell/DNA sorting and fractionation. One class of these devices, namely deterministic ratchets, seems most promising for continuous fractionation applications of suspensions (Kulrattanarak et al., 2008 [1]). Next to the two main types of particle

  11. Deterministic blade row interactions in a centrifugal compressor stage

    Science.gov (United States)

    Kirtley, K. R.; Beach, T. A.

    1991-01-01

    The three-dimensional viscous flow in a low speed centrifugal compressor stage is simulated using an average passage Navier-Stokes analysis. The impeller discharge flow is of the jet/wake type with low momentum fluid in the shroud-pressure side corner coincident with the tip leakage vortex. This nonuniformity introduces periodic unsteadiness in the vane frame of reference. The effect of such deterministic unsteadiness on the time-mean is included in the analysis through the average passage stress, which allows the analysis of blade row interactions. The magnitude of the divergence of the deterministic unsteady stress is of the order of the divergence of the Reynolds stress over most of the span, from the impeller trailing edge to the vane throat. Although the potential effects on the blade trailing edge from the diffuser vane are small, strong secondary flows generated by the impeller degrade the performance of the diffuser vanes.

  12. A study of deterministic models for quantum mechanics

    International Nuclear Information System (INIS)

    Sutherland, R.

    1980-01-01

    A theoretical investigation is made into the difficulties encountered in constructing a deterministic model for quantum mechanics and into the restrictions that can be placed on the form of such a model. The various implications of the known impossibility proofs are examined. A possible explanation for the non-locality required by Bell's proof is suggested in terms of backward-in-time causality. The efficacy of the Kochen and Specker proof is brought into doubt by showing that there is a possible way of avoiding its implications in the only known physically realizable situation to which it applies. A new thought experiment is put forward to show that a particle's predetermined momentum and energy values cannot satisfy the laws of momentum and energy conservation without conflicting with the predictions of quantum mechanics. Attention is paid to a class of deterministic models for which the individual outcomes of measurements are not dependent on hidden variables associated with the measuring apparatus and for which the hidden variables of a particle do not need to be randomized after each measurement

  13. Minimal Surfaces for Hitchin Representations

    DEFF Research Database (Denmark)

    Li, Qiongling; Dai, Song

    2018-01-01

    In this paper, we investigate the properties of immersed minimal surfaces inside the symmetric space associated to subloci of the Hitchin component: the $q_n$ and $q_{n-1}$ cases. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system

  14. Strong Sector in non-minimal SUSY model

    Directory of Open Access Journals (Sweden)

    Costantini Antonio

    2016-01-01

    Full Text Available We investigate the squark sector of a supersymmetric theory with an extended Higgs sector. We give the mass matrices of the stop and sbottom, comparing the Minimal Supersymmetric Standard Model (MSSM) case and the non-minimal case. We discuss the impact of the extra superfields on the decay channels of the stop searched for at the LHC.

  15. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    Science.gov (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current tendency in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process, thanks to the annealing scheme. It outperforms the simple classifiers used for the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  16. Deterministic Chaos - Complex Chance out of Simple Necessity ...

    Indian Academy of Sciences (India)

    This is a very lucid and lively book on deterministic chaos. Chaos is very common in nature. However, the understanding and realisation of its potential applications is very recent. Thus this book is a timely addition to the subject. There are several books on chaos and several more are being added every day. In spite of this ...

  17. Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Niels Jacob

    1994-01-01

    An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated and has to be considered for engineering application. Stochastic models have a potential for ...

  18. Simulation of quantum computation : A deterministic event-based approach

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, K; De Raedt, H

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  20. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    Science.gov (United States)

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more

  1. Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2014-12-01

    Full Text Available This paper presents experiences from the deterministic and probabilistic analysis of the reliability of a communication bridge structure's resistance to extreme loads - wind and earthquake. The efficiency of the bracing systems is considered using the example of the steel bridge between two NPP buildings. The advantages and disadvantages of the deterministic and probabilistic analyses of structure resistance are discussed. The advantages of utilizing the LHS method to analyze the safety and reliability of structures are presented
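    The LHS (Latin Hypercube Sampling) method mentioned in the record can be sketched generically (a minimal textbook implementation; the function name and arguments are illustrative, not from the paper): each of the n equal-probability strata in each dimension receives exactly one sample, and the strata are shuffled independently per dimension.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Return an (n_samples, n_dims) array of samples in [0, 1) with one
    stratified sample per equal-probability interval in every dimension."""
    rng = np.random.default_rng(rng)
    # Row i starts in stratum [i/n, (i+1)/n) with a random offset inside it.
    u = (rng.random((n_samples, n_dims)) +
         np.arange(n_samples)[:, None]) / n_samples
    # Shuffle the strata independently in each dimension.
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u
```

    The samples can then be mapped through each parameter's inverse CDF, which is what makes LHS efficient for reliability studies with few model runs.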

  2. Deterministic and Stochastic Study of Wind Farm Harmonic Currents

    DEFF Research Database (Denmark)

    Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus

    2010-01-01

    Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic char...

  3. Deterministic Predictions of Vessel Responses Based on Past Measurements

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Jensen, Jørgen Juncher

    2017-01-01

    The paper deals with a prediction procedure from which global wave-induced responses can be deterministically predicted a short time, 10-50 s, ahead of current time. The procedure relies on the autocorrelation function and takes into account prior measurements only; i.e. knowledge about wave...

  4. Deterministic teleportation using single-photon entanglement as a resource

    DEFF Research Database (Denmark)

    Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.

    2012-01-01

    We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...

  5. Deterministic entanglement purification and complete nonlocal Bell-state analysis with hyperentanglement

    International Nuclear Information System (INIS)

    Sheng Yubo; Deng Fuguo

    2010-01-01

    Entanglement purification is a very important element for long-distance quantum communication. Different from all the existing entanglement purification protocols (EPPs) in which two parties can only obtain some quantum systems in a mixed entangled state with a higher fidelity probabilistically by consuming quantum resources exponentially, here we present a deterministic EPP with hyperentanglement. Using this protocol, the two parties can, in principle, obtain deterministically maximally entangled pure states in polarization without destroying any less-entangled photon pair, which will improve the efficiency of long-distance quantum communication exponentially. Meanwhile, it will be shown that this EPP can be used to complete nonlocal Bell-state analysis perfectly. We also discuss this EPP in a practical transmission.

  6. A deterministic-probabilistic model for contaminant transport. User manual

    Energy Technology Data Exchange (ETDEWEB)

    Schwartz, F W; Crowe, A

    1980-08-01

    This manual describes a deterministic-probabilistic contaminant transport (DPCT) computer model designed to simulate mass transfer by ground-water movement in a vertical section of the earth's crust. The model can account for convection, dispersion, radioactive decay, and cation exchange for a single component. A velocity is calculated from the convective transport of the ground water for each reference particle in the modeled region; dispersion is accounted for in the particle motion by adding a random component to the deterministic motion. The model is sufficiently general to enable the user to specify virtually any type of water table or geologic configuration, and a variety of boundary conditions. A major emphasis in the model development has been placed on making the model simple to use, and information provided in the User Manual will permit changes to the computer code to be made relatively easily, as might be required for specific applications. (author)
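    The convection-plus-dispersion particle motion described above can be sketched in one dimension (an illustrative toy, not the DPCT code; all names and parameter values are hypothetical): each reference particle moves deterministically with the ground-water velocity, receives a random dispersive displacement each step, and its mass decays radioactively.

```python
import numpy as np

def track_particles(n, v, D, half_life, t_end, dt, seed=0):
    """1-D random-walk particle tracking: deterministic convection plus a
    random dispersive step, with first-order radioactive decay of mass."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                       # all particles released at the origin
    mass = np.ones(n)
    lam = np.log(2.0) / half_life
    for _ in range(int(round(t_end / dt))):
        x += v * dt                                           # convection (deterministic)
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)   # dispersion (random)
        mass *= np.exp(-lam * dt)                             # radioactive decay
    return x, mass
```

    The dispersive step size sqrt(2 D dt) is the standard scaling that makes the particle cloud reproduce the dispersion coefficient D.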

  7. Bayesian analysis of deterministic and stochastic prisoner's dilemma games

    Directory of Open Access Journals (Sweden)

    Howard Kunreuther

    2009-08-01

    Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.

  8. Deterministic thermostats, theories of nonequilibrium systems and parallels with the ergodic condition

    International Nuclear Information System (INIS)

    Jepps, Owen G; Rondoni, Lamberto

    2010-01-01

    Deterministic 'thermostats' are mathematical tools used to model nonequilibrium steady states of fluids. The resulting dynamical systems correctly represent the transport properties of these fluids and are easily simulated on modern computers. More recently, the connection between such thermostats and entropy production has been exploited in the development of nonequilibrium fluid theories. The purpose and limitations of deterministic thermostats are discussed in the context of irreversible thermodynamics and the development of theories of nonequilibrium phenomena. We draw parallels between the development of such nonequilibrium theories and the development of notions of ergodicity in equilibrium theories. (topical review)

  9. Deterministic linear-optics quantum computing based on a hybrid approach

    International Nuclear Information System (INIS)

    Lee, Seung-Woo; Jeong, Hyunseok

    2014-01-01

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources

  10. Deterministic linear-optics quantum computing based on a hybrid approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Woo; Jeong, Hyunseok [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742 (Korea, Republic of)

    2014-12-04

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.

  11. Nail gun injuries to the head with minimal neurological consequences: a case series.

    Science.gov (United States)

    Makoshi, Ziyad; AlKherayf, Fahad; Da Silva, Vasco; Lesiuk, Howard

    2016-03-16

    An estimated 3700 individuals are seen annually in US emergency departments for nail gun-related injuries. Approximately 45 cases have been reported in the literature concerning nail gun injuries penetrating the cranium. These cases pose a challenge for the neurosurgeon because of the uniqueness of each case, the dynamics of high-pressure nail gun injuries, and the surgical planning to remove the foreign body without further vascular injury or uncontrolled intracranial hemorrhage. Here we present four cases of penetrating nail gun injuries with variable presentations. Case 1 is of a 33-year-old white man who sustained 10 nail gun injuries to his head. Case 2 is of a 51-year-old white man who sustained bi-temporal nail gun injuries to his head. Cases 3 and 4 are of two white men aged 22 years and 49 years with a single nail gun injury to the head. In the context of these individual cases and a review of similar cases in the literature we present surgical approaches and considerations in the management of nail gun injuries to the cranium. Case 1 presented with cranial nerve deficits, Case 2 required intubation for a low Glasgow Coma Scale score, while Cases 3 and 4 were neurologically intact on presentation. Three patients underwent angiography for assessment of vascular injury and all patients underwent surgical removal of foreign objects using a vice-grip. No neurological deficits were found in these patients on follow-up. Nail gun injuries can present with variable clinical status; mortality and morbidity are low for surgically managed isolated nail gun-related injuries to the head. The current case series describes the surgical use of a vice-grip for a good grip of the nail head and controlled extraction, and these patients appear to have a good prognosis, with minimal neurological deficits postoperatively and on follow-up.

  12. A three-arm (laparoscopic, hand-assisted, and robotic) matched-case analysis of intraoperative and postoperative outcomes in minimally invasive colorectal surgery.

    Science.gov (United States)

    Patel, Chirag B; Ragupathi, Madhu; Ramos-Valadez, Diego I; Haas, Eric M

    2011-02-01

    Robotic-assisted laparoscopic surgery is an emerging modality in the field of minimally invasive colorectal surgery. However, there is a dearth of data comparing outcomes with other minimally invasive techniques. We present a 3-arm (conventional, hand-assisted, and robotic) matched-case analysis of intraoperative and short-term outcomes in patients undergoing minimally invasive colorectal procedures. Between August 2008 and October 2009, 70 robotic cases of the rectum and rectosigmoid were performed. Thirty of these were organized into triplets with conventional and hand-assisted cases based on the following 6 matching criteria: 1) surgeon; 2) sex; 3) body mass index; 4) operative procedure; 5) pathology; and 6) history of neoadjuvant therapy in malignant cases. Demographics, intraoperative parameters, and postoperative outcomes were assessed. Pathological outcomes were analyzed in malignant cases. Data were stratified by postoperative diagnosis and operative procedure. There was no significant difference in intraoperative complications, estimated blood loss (126.1 ± 98.5 mL overall), or postoperative morbidity and mortality among the groups. The robotic technique required longer operative time than the conventional laparoscopic and hand-assisted techniques; however, the robotic approach resulted in short-term outcomes comparable to the conventional and hand-assisted laparoscopic approaches for benign and malignant diseases of the rectum and rectosigmoid. With 3-dimensional visualization, additional freedom of motion, and improved ergonomics, this enabling technology may play an important role when performing colorectal procedures involving the pelvic anatomy.

  13. Hazardous waste minimization

    International Nuclear Information System (INIS)

    Freeman, H.

    1990-01-01

    This book presents an overview of waste minimization. Covers applications of technology to waste reduction, techniques for implementing programs, incorporation of programs into R and D, strategies for private industry and the public sector, and case studies of programs already in effect

  14. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
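    The fission-matrix idea underlying the first acceleration method can be illustrated with a small power iteration (a generic sketch, not the MCNP implementation described in the thesis; the matrix and names are hypothetical): the dominant eigenpair of the region-to-region fission matrix gives the multiplication factor k-eff and the converged fission source shape.

```python
import numpy as np

def fission_source(F, tol=1e-10, max_iter=1000):
    """Power iteration on a fission matrix F, where F[i, j] is the expected
    number of fission neutrons born in region i per fission neutron born in
    region j.  Returns (k_eff, source_shape)."""
    s = np.ones(F.shape[0]) / F.shape[0]   # initial flat source guess
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s                       # one fission generation
        k_new = s_new.sum()                 # source normalised to unit total
        s_new /= k_new
        converged = abs(k_new - k) < tol and np.allclose(s_new, s, atol=tol)
        s, k = s_new, k_new
        if converged:
            break
    return k, s
```

    In an accelerated Monte Carlo run, a low-rank estimate of F is tallied from the random-walk histories and its eigenvector replaces the slowly converging source; convergence of this deterministic step is fast even when the dominance ratio of the full problem is near one.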

  15. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors

  16. Convergence studies of deterministic methods for LWR explicit reflector methodology

    International Nuclear Information System (INIS)

    Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.

    2013-01-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use Albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified to potentially constitute one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  17. A Deterministic Safety Assessment of a Pyro-processed Waste Repository

    International Nuclear Information System (INIS)

    Lee, Youn Myoung; Jeong, Jong Tae; Choi, Jong Won

    2012-01-01

    A GoldSim template program for the safety assessment of a hybrid-type repository system, called 'A-KRS', in which two kinds of pyro-processed radioactive wastes, low-level metal wastes and ceramic high-level wastes arising from the pyro-processing of PWR spent nuclear fuels, are disposed of, has been developed. This program is ready for both deterministic and probabilistic total system performance assessment, able to evaluate nuclide release from the repository and further transport into the geosphere and biosphere under various normal and disruptive natural and manmade events and scenarios. The A-KRS has been deterministically assessed with five normal and abnormal scenarios associated with nuclide release and transport in and around the repository. Dose exposure rates to the farming exposure group have been evaluated for each scenario and then compared with one another.

  18. Deterministic and stochastic trends in the Lee-Carter mortality model

    DEFF Research Database (Denmark)

    Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene

    The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model...... that characterizes mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics, and we suggest separating the deterministic and stochastic time series components to the benefit of improved fit and forecasting performance. In fact, we find...... that the classical Lee-Carter model will otherwise overestimate the reduction of mortality for the younger age groups and underestimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee-Carter model instead of a one-factor model should be formulated...
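    The random-walk-with-drift reduction of the classical Lee-Carter model can be sketched in a few lines (illustrative only; `kt` stands for a fitted mortality index series, which the record does not provide): the point forecast simply extrapolates the estimated drift.

```python
import numpy as np

def forecast_kt(kt, horizon):
    """Point forecast of the Lee-Carter mortality index k_t, modelled as a
    random walk with drift; the ML drift estimate depends only on the
    endpoints of the observed series."""
    drift = (kt[-1] - kt[0]) / (len(kt) - 1)      # estimated drift per period
    return kt[-1] + drift * np.arange(1, horizon + 1)
```

    It is exactly this one-factor, identical-weights reduction that the record argues is too restrictive across age groups.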

  19. Strongly Deterministic Population Dynamics in Closed Microbial Communities

    Directory of Open Access Journals (Sweden)

    Zak Frentz

    2015-10-01

    Full Text Available Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.

  20. Simulation of photonic waveguides with deterministic aperiodic nanostructures for biosensing

    DEFF Research Database (Denmark)

    Neustock, Lars Thorben; Paulsen, Moritz; Jahns, Sabrina

    2016-01-01

    Photonic waveguides with deterministic aperiodic corrugations offer rich spectral characteristics under surface-normal illumination. The finite-element method (FEM), the finite-difference time-domain (FDTD) method and a rigorous coupled wave algorithm (RCWA) are compared for computing the near...

  1. Transmission power control in WSNs : from deterministic to cognitive methods

    NARCIS (Netherlands)

    Chincoli, M.; Liotta, A.; Gravina, R.; Palau, C.E.; Manso, M.; Liotta, A.; Fortino, G.

    2018-01-01

    Communications in Wireless Sensor Networks (WSNs) are affected by dynamic environments, variable signal fluctuations and interference. Thus, prompt actions are necessary to achieve dependable communications and meet Quality of Service (QoS) requirements. To this end, the deterministic algorithms

  2. Doses from aquatic pathways in CSA-N288.1: deterministic and stochastic predictions compared

    Energy Technology Data Exchange (ETDEWEB)

    Chouhan, S.L.; Davis, P

    2002-04-01

    The conservatism and uncertainty in the Canadian Standards Association (CSA) model for calculating derived release limits (DRLs) for aquatic emissions of radionuclides from nuclear facilities were investigated. The model was run deterministically using the recommended default values for its parameters, and its predictions were compared with the distributed doses obtained by running the model stochastically. Probability density functions (PDFs) for the model parameters for the stochastic runs were constructed using data reported in the literature and results from experimental work done by AECL. The default values recommended for the CSA model for some parameters were found to be lower than the central values of the PDFs in about half of the cases. Doses (ingestion, groundshine and immersion) calculated as the median of 400 stochastic runs were higher than the deterministic doses predicted using the CSA default values of the parameters for more than half (85 out of the 163) of the cases. Thus, the CSA model is not conservative for calculating DRLs for aquatic radionuclide emissions, as it was intended to be. The output of the stochastic runs was used to determine the uncertainty in the CSA model predictions. The uncertainty in the total dose was high, with the 95% confidence interval exceeding an order of magnitude for all radionuclides. A sensitivity study revealed that total ingestion doses to adults predicted by the CSA model are sensitive primarily to water intake rates, bioaccumulation factors for fish and marine biota, dietary intakes of fish and marine biota, the fraction of consumed food arising from contaminated sources, the irrigation rate, occupancy factors and the sediment solid/liquid distribution coefficient. To improve DRL models, further research into aquatic exposure pathways should concentrate on reducing the uncertainty in these parameters. The PDFs given here can be used by other modellers to test and improve their models and to ensure that DRLs
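    The comparison performed in the study, one deterministic run at the recommended default parameter values versus the median of many stochastic runs with parameters drawn from their PDFs, can be sketched generically (the model and PDFs below are toy stand-ins, not the CSA model):

```python
import numpy as np

def det_vs_stochastic(model, defaults, pdfs, n=400, seed=0):
    """Return (deterministic prediction at the defaults,
               median of n stochastic runs with parameters drawn from pdfs)."""
    rng = np.random.default_rng(seed)
    det = model(**defaults)
    runs = [model(**{name: draw(rng) for name, draw in pdfs.items()})
            for _ in range(n)]
    return det, float(np.median(runs))
```

    If the deterministic value falls below the stochastic median, as it did for 85 of the 163 cases in the record, the default parameter set is not conservative.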

  3. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at low...... to the initialization. Its asymptotic performance does not reach the DML performance though. The second strategy, called Pseudo-Quadratic ML (PQML), is naturally denoised. The denoising in PQML is furthermore more efficient than in DIQML: PQML yields the same asymptotic performance as DML, as opposed to DIQML......, but requires a consistent initialization. We furthermore compare DIQML and PQML to the strategy of alternating minimization w.r.t. symbols and channel for solving DML (AQML). An asymptotic performance analysis, a complexity evaluation and simulation results are also presented. The proposed DIQML and PQML...

  4. A mathematical theory for deterministic quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    't Hooft, Gerard [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)

    2007-05-15

    Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.

  5. Diffusion in Deterministic Interacting Lattice Systems

    Science.gov (United States)

    Medenjak, Marko; Klobas, Katja; Prosen, Tomaž

    2017-09-01

    We study reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining exact expressions for the current time-autocorrelation function, we are able to calculate the linear response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. Exact analytical results are corroborated by Monte Carlo simulations.

  6. Solving difficult problems creatively: A role for energy optimised deterministic/stochastic hybrid computing

    Directory of Open Access Journals (Sweden)

    Tim ePalmer

    2015-10-01

    Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  7. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing.

    Science.gov (United States)

    Palmer, Tim N; O'Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  8. Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?

    Science.gov (United States)

    Choustova, Olga

    2007-02-01

We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of shares. The main distinguishing feature of the financial Bohmian model is the possibility to take into account market psychology by describing the expectations of traders by the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model. In particular, the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of modern market ideology. Another objection is of a purely mathematical nature: it is related to the quadratic variation of price trajectories. One possible reply to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.
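
    The quadratic-variation objection can be illustrated numerically: a diffusion-type price path keeps a finite quadratic variation as the sampling grid is refined, while the quadratic variation of a smooth deterministic trajectory vanishes. A hedged sketch with synthetic paths (not market data, and not the paper's model):

```python
import math
import random

def quadratic_variation(path):
    """Sum of squared increments along a sampled path."""
    return sum((b - a) ** 2 for a, b in zip(path, path[1:]))

random.seed(1)
n = 50_000
dt = 1.0 / n
# Smooth deterministic trajectory: s(t) = sin(2*pi*t)
smooth = [math.sin(2 * math.pi * k * dt) for k in range(n + 1)]
# Diffusion-type path: random walk with variance dt per step
walk = [0.0]
for _ in range(n):
    walk.append(walk[-1] + random.gauss(0.0, math.sqrt(dt)))

qv_smooth = quadratic_variation(smooth)   # shrinks toward 0 as n grows
qv_walk = quadratic_variation(walk)       # stays near 1 (the elapsed time)
print(qv_smooth, qv_walk)
```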

  9. Method to deterministically study photonic nanostructures in different experimental instruments

    NARCIS (Netherlands)

    Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the

  10. Langevin equation with the deterministic algebraically correlated noise

    Energy Technology Data Exchange (ETDEWEB)

Ploszajczak, M. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Srokowski, T. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]|[Institute of Nuclear Physics, Cracow (Poland)

    1995-12-31

    Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author). 58 refs.

  11. Langevin equation with the deterministic algebraically correlated noise

    International Nuclear Information System (INIS)

    Ploszajczak, M.; Srokowski, T.

    1995-01-01

    Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author)

  12. Chaotic transitions in deterministic and stochastic dynamical systems applications of Melnikov processes in engineering, physics, and neuroscience

    CERN Document Server

    Simiu, Emil

    2002-01-01

    The classical Melnikov method provides information on the behavior of deterministic planar systems that may exhibit transitions, i.e. escapes from and captures into preferred regions of phase space. This book develops a unified treatment of deterministic and stochastic systems that extends the applicability of the Melnikov method to physically realizable stochastic planar systems with additive, state-dependent, white, colored, or dichotomous noise. The extended Melnikov method yields the novel result that motions with transitions are chaotic regardless of whether the excitation is deterministic or stochastic. It explains the role in the occurrence of transitions of the characteristics of the system and its deterministic or stochastic excitation, and is a powerful modeling and identification tool. The book is designed primarily for readers interested in applications. The level of preparation required corresponds to the equivalent of a first-year graduate course in applied mathematics. No previous exposure to d...

  13. Impact of mesh points number on the accuracy of deterministic calculations of control rods worth for Tehran research reactor

    International Nuclear Information System (INIS)

    Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad

    2016-01-01

In this paper, two different computational approaches, a deterministic and a stochastic one, were used to calculate the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, further investigation was carried out, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.

  14. Impact of mesh points number on the accuracy of deterministic calculations of control rods worth for Tehran research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Boustani, Ehsan [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.; Khakshournia, Samad [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.

    2016-12-15

In this paper, two different computational approaches, a deterministic and a stochastic one, were used to calculate the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, further investigation was carried out, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.

  15. Spent Fuel Pool Dose Rate Calculations Using Point Kernel and Hybrid Deterministic-Stochastic Shielding Methods

    International Nuclear Information System (INIS)

    Matijevic, M.; Grgic, D.; Jecmenica, R.

    2016-01-01

This paper presents a comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall-mounted level instrumentation in case of a significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into an old and a new section, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from Microshield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and a WWINP file) for the MCNP fixed-source calculation using continuous-energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first
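
    The point-kernel method underlying the Microshield calculation attenuates an isotropic point source along the ray to the detector. A minimal sketch of the basic kernel (hypothetical numbers; energy-dependent buildup factors and the multigroup treatment are omitted):

```python
import math

def point_kernel_flux(source, mu_t, r_cm, buildup=1.0):
    """Point-kernel estimate of the photon flux at distance r from an
    isotropic point source behind a shield of optical thickness mu*t:
        phi = B * S * exp(-mu*t) / (4 * pi * r**2)."""
    return buildup * source * math.exp(-mu_t) / (4.0 * math.pi * r_cm ** 2)

# Hypothetical case: 1e12 photons/s, 5 mean free paths of shielding, 300 cm
phi = point_kernel_flux(1e12, 5.0, 300.0)
print(f"{phi:.3e} photons/cm^2/s")
```

    A full calculation sums such kernels over a discretized source volume and energy groups, which is where the thick shields make unbiased Monte Carlo so expensive without variance reduction.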

  16. Deterministic methods in radiation transport. A compilation of papers presented February 4--5, 1992

    Energy Technology Data Exchange (ETDEWEB)

Rice, A.F.; Roussin, R.W. [eds.]

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  17. Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992

    Energy Technology Data Exchange (ETDEWEB)

Rice, A. F.; Roussin, R. W. [eds.]

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  18. Beeping a Deterministic Time-Optimal Leader Election

    OpenAIRE

    Dufoulon , Fabien; Burman , Janna; Beauquier , Joffroy

    2018-01-01

    The beeping model is an extremely restrictive broadcast communication model that relies only on carrier sensing. In this model, we solve the leader election problem with an asymptotically optimal round complexity of O(D + log n), for a network of unknown size n and unknown diameter D (but with unique identifiers). Contrary to the best previously known algorithms in the same setting, the proposed one is deterministic. The techniques we introduce give a new insight as to how local constraints o...
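
    The flavor of deterministic election in the beeping model can be sketched for the much simpler single-hop case: nodes announce their ID bits from the most significant bit down, and a node that stays silent but hears a beep drops out, so the maximum ID wins. This is an illustrative bit-by-bit competition, not the paper's O(D + log n) multi-hop algorithm:

```python
def beep_leader_election(ids, bits=8):
    """Single-hop deterministic leader election in the beeping model
    (simplified sketch): in round k, every still-active node beeps iff
    bit k (MSB first) of its ID is 1; an active node whose bit is 0
    drops out if it hears a beep. The largest ID survives."""
    active = set(ids)
    for k in reversed(range(bits)):
        beeped = {i for i in active if (i >> k) & 1}
        if beeped:              # the shared channel carries a beep
            active = beeped     # silent nodes eliminate themselves
    return active

print(beep_leader_election([5, 12, 7, 3]))  # -> {12}
```

    Note the rounds use only carrier sensing (beep vs. silence), which is all the beeping model allows.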

  19. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    Science.gov (United States)

    Wang, Fengyu

Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. The modeling of operating reserves in existing deterministic reserve requirements acquires the operating reserves on a zonal basis and does not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch

  20. Using a satisfiability solver to identify deterministic finite state automata

    NARCIS (Netherlands)

    Heule, M.J.H.; Verwer, S.

    2009-01-01

    We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we
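
    The underlying search problem — a smallest DFA consistent with labeled sample strings — can be sketched by brute force; the paper's contribution is delegating exactly this search to a SAT solver so it scales beyond toy sizes. The sample strings below are hypothetical:

```python
from itertools import product

def run(delta, s):
    """Run string s through transition table delta from start state 0."""
    q = 0
    for c in s:
        q = delta[(q, c)]
    return q

def consistent_dfa(samples, alphabet="ab", max_states=3):
    """Brute-force search for a smallest DFA (start state 0) consistent
    with (string, label) samples -- the search a SAT encoding hands to
    the solver, feasible here only for tiny automata."""
    k = len(alphabet)
    for n in range(1, max_states + 1):
        for table in product(range(n), repeat=n * k):
            delta = {(q, c): table[q * k + i]
                     for q in range(n) for i, c in enumerate(alphabet)}
            for accepting in product([False, True], repeat=n):
                if all(accepting[run(delta, s)] == label
                       for s, label in samples):
                    return n, delta, accepting
    return None

# Hypothetical sample: strings labeled by "even number of a's"
samples = [("", True), ("a", False), ("aa", True),
           ("ab", False), ("ba", False), ("aba", True)]
n, delta, accepting = consistent_dfa(samples)
print(n)  # -> 2
```

    The number of candidate automata grows as n**(n*k) * 2**n, which is why a low-level SAT representation plus a modern solver is competitive despite its size.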

  1. Power Minimization techniques for Networked Data Centers

    International Nuclear Information System (INIS)

    Low, Steven; Tang, Kevin

    2011-01-01

Our objective is to develop a mathematical model to optimize energy consumption at multiple levels in networked data centers, and to develop abstract algorithms that optimize not only individual servers but also coordinate the energy consumption of clusters of servers within a data center, and across geographically distributed data centers, to minimize an enterprise's overall energy cost and consumption of brown energy. In this project, we have formulated a variety of optimization models, some stochastic and others deterministic, and have obtained a variety of qualitative results on the structural properties, robustness, and scalability of the optimal policies. We have also systematically derived from these models decentralized algorithms to optimize energy efficiency, and analyzed their optimality and stability properties. Finally, we have conducted preliminary numerical simulations to illustrate the behavior of these algorithms. We draw the following conclusions. First, there is a substantial opportunity to minimize both the amount and the cost of electricity consumption in a network of data centers, by exploiting the fact that traffic load, electricity cost, and availability of renewable generation fluctuate over time and across geographical locations. Judiciously matching these stochastic processes can optimize the trade-off between brown energy consumption, electricity cost, and response time. Second, given the stochastic nature of these three processes, real-time dynamic feedback should form the core of any optimization strategy. The key is to develop decentralized algorithms that can be implemented at different parts of the network as simple, local algorithms that coordinate through asynchronous message passing.

  2. Minimally-aggressive gestational trophoblastic neoplasms.

    Science.gov (United States)

    Cole, Laurence A

    2012-04-01

We have previously defined a new syndrome, "minimally-aggressive gestational trophoblastic neoplasms," in which choriocarcinoma or persistent hydatidiform mole has a minimal growth rate and becomes chemorefractory. Previously we described a new treatment protocol: waiting for hCG to rise to >3000 mIU/ml, when the disease becomes more advanced, and then using combination chemotherapy. Initially we found this treatment successful in 8 of 8 cases; here we find this protocol appropriate in a further 16 cases. Initially we used hyperglycosylated hCG, a test of limited availability, to identify this syndrome. Here we propose also using the hCG doubling rate to detect this syndrome. Minimally aggressive gestational trophoblastic disease can thus be detected by chemotherapy resistance, by low hyperglycosylated hCG, and by the hCG doubling test. All patients were recommended to hold off further chemotherapy until hCG >3000 mIU/ml. One case died prior to the start of the study, one case withdrew because of a lung nodule, and one withdrew, refusing the suggested combination chemotherapy. The remaining 16 women were all successfully treated. In total, 24 of 24 women (the initial 8 plus these 16) were successfully treated using the proposed protocol, holding back on chemotherapy until hCG >3000 mIU/ml. Copyright © 2011 Elsevier Inc. All rights reserved.
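
    The hCG doubling rate proposed as a detection criterion can be computed from two serial measurements under an exponential-growth assumption (illustrative formula and values, not the paper's clinical thresholds):

```python
import math

def doubling_time_days(h1, h2, delta_days):
    """hCG doubling time assuming exponential growth between two serial
    measurements (mIU/ml): T_d = delta_t * ln(2) / ln(h2 / h1)."""
    return delta_days * math.log(2) / math.log(h2 / h1)

# Hypothetical serial values: 400 -> 800 mIU/ml over 14 days
print(doubling_time_days(400.0, 800.0, 14.0))  # -> 14.0
```

    A long doubling time corresponds to the minimal growth rate that characterizes the syndrome.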

  3. Deterministic and stochastic trends in the Lee-Carter mortality model

    DEFF Research Database (Denmark)

    Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene

    2015-01-01

    The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age...... mortality data. We find empirical evidence that this feature of the Lee–Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee......–Carter model will otherwise overestimate the reduction of mortality for the younger age groups and will underestimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee–Carter model instead of a one-factor model should be formulated as a two- (or several...
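
    The random-walk component referred to above is typically forecast with an estimated drift. A minimal sketch for the Lee-Carter time index κ_t (hypothetical index values; the paper's point is that separating the deterministic and stochastic components improves on this one-factor treatment):

```python
def rw_drift_forecast(kappa, horizon):
    """Forecast the Lee-Carter time index as a random walk with drift:
    the drift estimate is the average one-step change, and the point
    forecast extends the last observation linearly."""
    drift = (kappa[-1] - kappa[0]) / (len(kappa) - 1)
    return [kappa[-1] + drift * h for h in range(1, horizon + 1)]

kappa = [10.0, 9.2, 8.1, 7.3, 6.0, 5.2]   # hypothetical mortality index
print(rw_drift_forecast(kappa, 3))
```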

  4. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    Science.gov (United States)

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied on each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT and additional fiber tracts and processing time. Two sets of regions-of-interests (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking lasted substantially longer than deterministic. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. A deterministic, gigabit serial timing, synchronization and data link for the RHIC LLRF

    International Nuclear Information System (INIS)

    Hayes, T.; Smith, K.S.; Severino, F.

    2011-01-01

    A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.

  6. SU-F-T-347: An Absolute Dose-Volume Constraint Based Deterministic Optimization Framework for Multi-Co60 Source Focused Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Liang, B; Liu, B; Li, Y; Guo, B; Xu, X; Wei, R; Zhou, F [Beihang University, Beijing, Beijing (China); Wu, Q [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

Purpose: Treatment plan optimization in multi-Co60 source focused radiotherapy with multiple isocenters is challenging, because the dose distribution is normalized to the maximum dose during optimization and evaluation, and the objective functions are traditionally defined on the relative dosimetric distribution. This study presents an alternative absolute dose-volume constraint (ADC) based deterministic optimization framework (ADC-DOF). Methods: The initial isocenters are placed on the eroded target surface. The collimator size is chosen based on the area of the 2D contour on the corresponding axial slice. The isocenter spacing is determined by adjacent collimator sizes. The weights are optimized by minimizing the deviation from the ADCs using the steepest descent technique. An iterative procedure is developed to reduce the number of isocenters, where the isocenter with the lowest weight is removed without affecting plan quality. The ADC-DOF is compared with the genetic algorithm (GA) using the same arbitrarily shaped target (254 cc), with a 15 mm margin ring structure representing normal tissues. Results: For ADC-DOF, the ADCs imposed on the target and ring are (D100 > 10 Gy; D50, D10, D0 < 12 Gy, 15 Gy and 20 Gy, respectively) and (D40 < 10 Gy). The resulting target D100, D50, D10 and D0 are 9.9 Gy, 12.0 Gy, 14.1 Gy and 16.2 Gy, and the ring D40 is 10.2 Gy. The objectives of the GA are to maximize the 50% isodose target coverage (TC) while minimizing the dose delivered to the ring structure, which results in 97% TC and a 47.2% average dose in the ring structure. For the ADC-DOF (GA) technique, 20 out of 38 (10 out of 12) initial isocenters are used in the final plan, and the computation time is 8.7 s (412.2 s) on an i5 computer. Conclusion: We have developed a new optimization technique using ADCs and deterministic optimization. Compared with the GA, ADC-DOF uses more isocenters but is faster and more robust, and achieves better conformity. For future work, we will focus on developing a more effective mechanism for initial isocenter determination.
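
    The weight-optimization step — steepest descent on the deviation from absolute dose-volume constraints — can be sketched with a toy linear dose model (two voxels, two isocenters; the dose matrix and bounds are hypothetical, not the abstract's geometry):

```python
def steepest_descent_weights(A, d_min, d_max, steps=4000, lr=0.01):
    """Optimize nonnegative isocenter weights w by steepest descent on
    the quadratic penalty for violating per-voxel dose bounds
    d_min <= A @ w <= d_max (toy linear dose model)."""
    n_vox, n_iso = len(A), len(A[0])
    w = [1.0] * n_iso
    for _ in range(steps):
        dose = [sum(A[v][i] * w[i] for i in range(n_iso))
                for v in range(n_vox)]
        grad = [0.0] * n_iso
        for v in range(n_vox):
            # violation is negative below d_min, positive above d_max
            viol = min(0.0, dose[v] - d_min[v]) + max(0.0, dose[v] - d_max[v])
            for i in range(n_iso):
                grad[i] += viol * A[v][i]
        w = [max(0.0, w[i] - lr * g) for i, g in enumerate(grad)]
    return w

# Hypothetical 2-voxel, 2-isocenter case: the target voxel wants
# 10-12 Gy, the normal-tissue voxel must stay below 10 Gy
A = [[1.0, 0.5], [0.4, 1.0]]
w = steepest_descent_weights(A, d_min=[10.0, 0.0], d_max=[12.0, 10.0])
dose = [sum(A[v][i] * w[i] for i in range(2)) for v in range(2)]
print([round(d, 2) for d in dose])  # both bounds satisfied
```

    The gradient is cheap to evaluate for a linear dose model, which is consistent with the large speedup over the genetic algorithm reported in the abstract.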

  7. Deterministic quantum state transfer and remote entanglement using microwave photons.

    Science.gov (United States)

    Kurpiers, P; Magnard, P; Walter, T; Royer, B; Pechal, M; Heinsoo, J; Salathé, Y; Akin, A; Storz, S; Besse, J-C; Gasparinetti, S; Blais, A; Wallraff, A

    2018-06-01

    Sharing information coherently between nodes of a quantum network is fundamental to distributed quantum information processing. In this scheme, the computation is divided into subroutines and performed on several smaller quantum registers that are connected by classical and quantum channels 1 . A direct quantum channel, which connects nodes deterministically rather than probabilistically, achieves larger entanglement rates between nodes and is advantageous for distributed fault-tolerant quantum computation 2 . Here we implement deterministic state-transfer and entanglement protocols between two superconducting qubits fabricated on separate chips. Superconducting circuits 3 constitute a universal quantum node 4 that is capable of sending, receiving, storing and processing quantum information 5-8 . Our implementation is based on an all-microwave cavity-assisted Raman process 9 , which entangles or transfers the qubit state of a transmon-type artificial atom 10 with a time-symmetric itinerant single photon. We transfer qubit states by absorbing these itinerant photons at the receiving node, with a probability of 98.1 ± 0.1 per cent, achieving a transfer-process fidelity of 80.02 ± 0.07 per cent for a protocol duration of only 180 nanoseconds. We also prepare remote entanglement on demand with a fidelity as high as 78.9 ± 0.1 per cent at a rate of 50 kilohertz. Our results are in excellent agreement with numerical simulations based on a master-equation description of the system. This deterministic protocol has the potential to be used for quantum computing distributed across different nodes of a cryogenic network.

  8. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to but simpler than, and has smaller bias than, the "multiple shooting" approach previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Besides, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is also discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
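
    The simplest of the estimators discussed — least squares for the logistic-map parameter — can be sketched by grid search on a noisy synthetic series (the paper's piece-wise ML treats the initial value and larger noise levels more carefully than this):

```python
import random

def fit_logistic_r(x, r_grid):
    """Least-squares grid estimate of r in x_{n+1} = r * x_n * (1 - x_n)
    from an observed (noisy) series."""
    def sse(r):
        return sum((x[n + 1] - r * x[n] * (1 - x[n])) ** 2
                   for n in range(len(x) - 1))
    return min(r_grid, key=sse)

random.seed(0)
r_true = 3.8
clean = [0.3]
for _ in range(500):
    clean.append(r_true * clean[-1] * (1 - clean[-1]))
noisy = [v + random.gauss(0.0, 0.005) for v in clean]  # observational noise
r_grid = [3.5 + 0.001 * k for k in range(501)]         # 3.5 .. 4.0
r_hat = fit_logistic_r(noisy, r_grid)
print(r_hat)  # close to 3.8
```

    Because the predictor x_n is itself noisy, this naive fit carries a small errors-in-variables bias that grows with the noise level, which is one of the difficulties the paper's ML treatment addresses.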

  9. Novel management of distal tibial and fibular fractures with Acumed fibular nail and minimally invasive plating osteosynthesis technique: A case report.

    Science.gov (United States)

    Wang, Tie-Jun; Ju, Wei-Na; Qi, Bao-Chang

    2017-03-01

Anatomical characteristics, such as the subcutaneous position and minimal muscle cover, contribute to the complexity of fractures of the distal third of the tibia and fibula. Severe damage to soft tissue and instability entail a high risk of delayed bone union and wound complications such as nonunion, infection, and necrosis. This case report discusses management in a 54-year-old woman who sustained fractures of the distal third of the left tibia and fibula, with damage to the overlying soft tissue (swelling and blisters). Plating is accepted as the first choice for this type of fracture as it ensures accurate reduction and rigid fixation, but it increases the risk of complications. The diagnosis was a closed fracture of the distal third of the left tibia and fibula (AO: 43-A3). After the swelling was alleviated, the patient underwent closed reduction and fixation with an Acumed fibular nail and minimally invasive plating osteosynthesis (MIPO), ensuring a smaller incision and minimal soft-tissue dissection. At the 1-year follow-up, the patient had recovered well and had regained satisfactory function in the treated limb; the Kofoed score of the left ankle was 95. Based on the experience from this case, the operation can be undertaken safely once the swelling has been alleviated, and the minimally invasive technique represents the best approach. Considering the merits and good outcome in this case, we recommend the Acumed fibular nail and MIPO technique for the treatment of distal tibial and fibular fractures.

  10. Deterministic transfer of two-dimensional materials by all-dry viscoelastic stamping

    International Nuclear Information System (INIS)

    Castellanos-Gomez, Andres; Buscema, Michele; Molenaar, Rianda; Singh, Vibhor; Janssen, Laurens; Van der Zant, Herre S J; Steele, Gary A

    2014-01-01

    The deterministic transfer of two-dimensional crystals constitutes a crucial step towards the fabrication of heterostructures based on the artificial stacking of two-dimensional materials. Moreover, controlling the positioning of two-dimensional crystals facilitates their integration in complex devices, which enables the exploration of novel applications and the discovery of new phenomena in these materials. To date, deterministic transfer methods rely on the use of sacrificial polymer layers and wet chemistry to some extent. Here, we develop an all-dry transfer method that relies on viscoelastic stamps and does not employ any wet chemistry step. This is found to be very advantageous to freely suspend these materials as there are no capillary forces involved in the process. Moreover, the whole fabrication process is quick, efficient, clean and it can be performed with high yield. (letter)

  11. Deterministic and probabilistic approach to safety analysis

    International Nuclear Information System (INIS)

    Heuser, F.W.

    1980-01-01

    The examples discussed in this paper show that reliability analysis methods can be applied fairly well to interpret deterministic safety criteria in quantitative terms. For a further extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)

  12. A Deterministic Approach to the Synchronization of Cellular Automata

    OpenAIRE

    Garcia, J.; Garcia, P.

    2011-01-01

    In this work we introduce a deterministic scheme for the synchronization of linear and nonlinear cellular automata (CA) with complex behavior, connected through a master-slave coupling. Using the Boolean derivative, we employ the linear approximation of the automata to determine a coupling function that promotes synchronization without perturbing all the sites of the slave system.
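    The coupling idea can be illustrated with a toy example (not the authors' construction: the rule, lattice size, and pinning pattern below are chosen purely for illustration). For the linear elementary CA rule 90, driving only every other site of the slave with the master's values is already enough to synchronize the remaining sites:

```python
import numpy as np

def rule90_step(x):
    # Rule 90 is linear over GF(2): each cell becomes the XOR of its neighbours
    return np.roll(x, 1) ^ np.roll(x, -1)

def synchronize(master, slave, steps, pinned):
    """Master-slave coupling: after every update, the pinned slave sites
    are overwritten with the master's values (unidirectional driving)."""
    for _ in range(steps):
        master = rule90_step(master)
        slave = rule90_step(slave)
        slave[pinned] = master[pinned]
    return master, slave

rng = np.random.default_rng(0)
n = 64
master = rng.integers(0, 2, n)
slave = rng.integers(0, 2, n)           # starts from an unrelated state
pinned = np.arange(0, n, 2)             # drive every other site only
master, slave = synchronize(master, slave, steps=5, pinned=pinned)
print(np.array_equal(master, slave))    # True: full synchronization
```

Because rule 90 updates odd sites from even neighbours and vice versa, pinning the even sublattice forces the odd sublattice to agree one step later, so the slave locks onto the master without perturbing all of its sites.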

  13. Nonlinear deterministic structures and the randomness of protein sequences

    CERN Document Server

    Huang Yan Zhao

    2003-01-01

    To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences representing each structural class, using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also give an explanation for the controversial results obtained in previous investigations.

  14. Calculating the effective delayed neutron fraction in the Molten Salt Fast Reactor: Analytical, deterministic and Monte Carlo approaches

    International Nuclear Information System (INIS)

    Aufiero, Manuele; Brovchenko, Mariya; Cammi, Antonio; Clifford, Ivor; Geoffroy, Olivier; Heuer, Daniel; Laureau, Axel; Losa, Mario; Luzzi, Lelio; Merle-Lucotte, Elsa; Ricotti, Marco E.; Rouch, Hervé

    2014-01-01

    Highlights: • Calculation of the effective delayed neutron fraction in circulating-fuel reactors. • Extension of the Monte Carlo SERPENT-2 code for delayed neutron precursor tracking. • Forward and adjoint multi-group diffusion eigenvalue problems in OpenFOAM. • Analytical approach for β_eff calculation in simple geometries and flow conditions. • Good agreement among the three proposed approaches in the MSFR test case. - Abstract: This paper deals with the calculation of the effective delayed neutron fraction (β_eff) in circulating-fuel nuclear reactors. The Molten Salt Fast Reactor (MSFR) is adopted as a test case for the comparison of the analytical, deterministic and Monte Carlo methods presented. The Monte Carlo code SERPENT-2 has been extended to allow for delayed neutron precursor drift according to the fuel velocity field. The forward and adjoint multi-group diffusion eigenvalue problems are implemented and solved with the multi-physics toolkit OpenFOAM, taking into account the convective and turbulent diffusive terms in the precursor balance. These two approaches show good agreement over the whole range of MSFR operating conditions. An analytical formula for the circulating-to-static β_eff correction factor is also derived under simple hypotheses; it explicitly takes into account the spatial dependence of the neutron importance, and its accuracy is assessed against the Monte Carlo and deterministic results. The effects of the in-core recirculation vortex and of turbulent diffusion are finally analysed and discussed.

  15. Non-minimal inflation revisited

    International Nuclear Information System (INIS)

    Nozari, Kourosh; Shafizadeh, Somayeh

    2010-01-01

    We reconsider an inflationary model in which the inflaton field is non-minimally coupled to gravity. We study the parameter space of the model up to second (and in some cases third) order in the slow-roll parameters. We calculate the inflation parameters in both the Jordan and Einstein frames, and the results are compared between the two frames and with observations. Using recent observational data from the combined WMAP5+SDSS+SNIa datasets, we study the constraints imposed on our model parameters, especially the non-minimal coupling ξ.

  16. Deterministic direct reprogramming of somatic cells to pluripotency.

    Science.gov (United States)

    Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H

    2013-10-03

    Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from reprogramming successfully and synchronously remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency-promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near-100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of the molecular dynamics leading to the establishment of pluripotency with unprecedented flexibility and resolution.

  17. Handbook of EOQ inventory problems stochastic and deterministic models and applications

    CERN Document Server

    Choi, Tsan-Ming

    2013-01-01

    This book explores deterministic and stochastic EOQ-model-based problems and applications, presenting technical analyses of single-echelon EOQ-model-based inventory problems and applications of the EOQ model to multi-echelon supply chain inventory analysis.

  18. Deterministic prediction of surface wind speed variations

    Directory of Open Access Journals (Sweden)

    G. V. Drisya

    2014-11-01

    Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or on the probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h, with a normalised root mean square error (RMSE) of less than 0.02, and reasonably accurate up to 3 h, with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations, predicting wind speeds at 30-day intervals for 3 years, reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that deterministic models with suitable parameters are capable of returning improved prediction accuracy and of capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient, and they require only records of past data to make short-term wind speed forecasts within a practically tolerable margin of error.
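    The abstract does not spell out the algorithm; a common deterministic scheme for chaotic series of this kind is delay embedding followed by nearest-neighbour (analogue) prediction. The sketch below applies it to a synthetic chaotic signal — a logistic map standing in for measured wind speed — with an illustrative, assumed embedding dimension and delay:

```python
import numpy as np

def embed(series, dim, tau):
    # Delay embedding: reconstruct the state space from a scalar series
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

def predict_next(series, dim=3, tau=1):
    """Deterministic analogue forecast: find the past state closest to the
    current one and return the value that followed it."""
    states = embed(series[:-1], dim, tau)     # candidate past states
    current = embed(series, dim, tau)[-1]     # latest reconstructed state
    nearest = np.argmin(np.linalg.norm(states - current, axis=1))
    return series[nearest + (dim - 1) * tau + 1]

# Chaotic test signal (logistic map) standing in for wind speed data
x = np.empty(2000)
x[0] = 0.4
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

pred = predict_next(x[:-1])        # one-step-ahead forecast of the last value
print(f"one-step error: {abs(pred - x[-1]):.4f}")
```

Despite the chaos, the one-step error stays small because nearby states on the attractor diverge only gradually; over longer horizons the error grows at the rate set by the largest Lyapunov exponent, consistent with the 1 h vs 3 h accuracies quoted above.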

  19. Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations

    Science.gov (United States)

    Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael

    2012-02-01

    We propose a new theoretical model of the forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, so the chain unravels loop by loop in an almost deterministic way. Thus the distribution of translocation times of a given monomer is controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation; we refer to this concept as ``fingerprinting''. The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt_m ~ m^1.5, is stronger than the thermal broadening, δt_m ~ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.

  20. Implemented state automorphisms within the logico-algebraic approach to deterministic mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Barone, F [Naples Univ. (Italy). Ist. di Matematica della Facolta di Scienze]

    1981-01-31

    The new notion of an S₁-implemented state automorphism is introduced and characterized in quantum logic. Implemented pure state automorphisms are then characterized in deterministic mechanics as automorphisms of the Borel structure on the phase space.

  1. Deterministic computation of functional integrals

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.

    1995-09-01

    A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in a complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, and no simplifying assumptions such as semi-classical or mean-field approximations, collective excitations, or the introduction of ''short-time'' propagators are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by the computation of an ''ordinary'' (Riemannian) integral of low dimension, thus allowing the use of the more preferable deterministic algorithms (normally Gaussian quadratures) in computations, rather than the traditional stochastic (Monte Carlo) methods which are commonly used for the problem under consideration. The results of applying the method to the computation of the Green function of the Schroedinger equation in imaginary time, as well as to the study of some models of Euclidean quantum mechanics, are presented. The comparison with the results of other authors shows that our method gives a significant (order-of-magnitude) economy of computer time and memory versus other known methods, while providing results with the same or better accuracy. The functional measure of the Gaussian type is considered, and some of its particular cases, namely the conditional Wiener measure in quantum statistical mechanics and a functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail.
Numerical examples demonstrating the

  2. Towards the certification of non-deterministic control systems for safety-critical applications: analysing aviation analogies for possible certification strategies

    CSIR Research Space (South Africa)

    Burger, CR

    2011-11-01

    Current certification criteria for safety-critical systems exclude non-deterministic control systems. This paper investigates the feasibility of using human-like monitoring strategies to achieve safe non-deterministic control using multiple...

  3. Local Risk-Minimization for Defaultable Claims with Recovery Process

    International Nuclear Information System (INIS)

    Biagini, Francesca; Cretarola, Alessandra

    2012-01-01

    We study the local risk-minimization approach for defaultable claims with random recovery at default time, seen as payment streams on the random interval [0, τ∧T], where T denotes the fixed time horizon. We find the pseudo-locally risk-minimizing strategy in the case where the agent's information takes into account the possibility of a default event (local risk-minimization with G-strategies), and we provide an application in the case of a corporate bond. We also discuss the problem of finding a pseudo-locally risk-minimizing strategy when the agent obtains her information only by observing the non-defaultable assets.

  4. Studies of criticality Monte Carlo method convergence: use of a deterministic calculation and automated detection of the transient

    International Nuclear Information System (INIS)

    Jinaphanh, A.

    2012-01-01

    Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burn-up profile, complete reactor core, ...) may induce biased estimations of k_eff or of reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo calculation: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is modified using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to the initialization in an output series, applied here to k_eff and to the Shannon entropy. It relies on modelling the stationary series by an order-1 autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to any output of an iterative Monte Carlo calculation. The methods developed in this thesis are tested on different test cases. (author)
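    As a rough illustration of transient suppression in an iterative Monte Carlo output series, the sketch below uses the simpler MSER-style truncation heuristic rather than the AR(1)/Student-bridge test described in the thesis: the burn-in cut is chosen to minimize the standard error of the mean of the remaining samples. The synthetic k_eff-like series and its parameters are invented for the demo.

```python
import numpy as np

def truncation_point(series):
    """MSER-style heuristic: pick the burn-in cut d that minimizes the
    standard error of the mean of the remaining samples series[d:]."""
    n = len(series)
    best, best_se = 0, np.inf
    for d in range(0, n - 10):          # keep at least 10 samples in the tail
        tail = series[d:]
        se = tail.std(ddof=1) / np.sqrt(len(tail))
        if se < best_se:
            best, best_se = d, se
    return best

rng = np.random.default_rng(1)
# Synthetic output series: decaying initialization transient plus noise
t = np.arange(400)
series = 1.0 + 0.05 * np.exp(-t / 30.0) + rng.normal(0, 0.002, 400)
cut = truncation_point(series)
print(cut)   # falls roughly where the transient has decayed into the noise
```

The real method goes further: after the cut, it checks stationarity of the retained series with statistical tests on an order-1 autoregressive model, which this heuristic does not attempt.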

  5. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    International Nuclear Information System (INIS)

    Brunet, Robert; Cortés, Daniel; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Boer, Dieter

    2012-01-01

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies, in a systematic manner, optimal designs and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys and a 90 kW absorption cycle in Aspen Plus.

  7. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the k-th step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, then x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results.
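    The barrier-function special case mentioned above can be sketched as follows. This is a minimal illustration, not Byrne's general framework: the objective, the barrier schedule mu_k = 2^(-k), and the derivative-free inner minimizer are all assumptions made for the demo.

```python
import math

def ternary_min(g, lo, hi, iters=200):
    # Derivative-free minimizer for a unimodal function on [lo, hi]
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def f(x):
    return (x + 1.0) ** 2      # unconstrained minimum at x = -1, infeasible

# Sequential unconstrained minimization with a log barrier for C = {x >= 0}:
# at step k we minimize G_k(x) = f(x) + mu_k * (-log x) with mu_k -> 0,
# so each iterate solves an unconstrained problem on the interior of C.
x_k = 1.0
for k in range(1, 30):
    mu_k = 2.0 ** (-k)
    x_k = ternary_min(lambda x: f(x) - mu_k * math.log(x), 1e-12, 5.0)

print(round(x_k, 4))   # approaches the constrained minimizer x-hat = 0
```

Each inner problem is strictly convex on (0, 5], and as the barrier weight shrinks the iterates slide toward the boundary point x̂ = 0, mirroring how SUMMA's auxiliary functions g_k fade out as the sequence approaches the constrained minimizer.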

  8. Siting criteria based on the prevention of deterministic effects from plutonium inhalation exposures

    International Nuclear Information System (INIS)

    Sorensen, S.A.; Low, J.O.

    1998-01-01

    Siting criteria are established by regulatory authorities to evaluate potential accident scenarios associated with proposed nuclear facilities. The 0.25 Sv (25 rem) siting criterion adopted in the United States has historically been based on the prevention of deterministic effects from acute, whole-body exposures. The Department of Energy has extended the applicability of this criterion to radionuclides that deliver chronic, organ-specific irradiation through the specification of a 0.25 Sv (25 rem) committed effective dose equivalent siting criterion. A methodology is developed to determine siting criteria based on the prevention of deterministic effects from inhalation intakes of radionuclides which deliver chronic, organ-specific irradiation. Revised siting criteria, expressed in terms of committed effective dose equivalent, are proposed for nuclear facilities that handle primarily plutonium compounds. The analysis determined that a siting criterion of 1.2 Sv (120 rem) committed effective dose equivalent for inhalation exposures to weapons-grade plutonium meets the historical goal of preventing deterministic effects during a facility accident scenario. The criterion also meets the Nuclear Regulatory Commission and Department of Energy nuclear safety goals, provided that the frequency of the accident is sufficiently low.

  9. Quantitative diffusion tensor deterministic and probabilistic fiber tractography in relapsing-remitting multiple sclerosis

    International Nuclear Information System (INIS)

    Hu Bing; Ye Binbin; Yang Yang; Zhu Kangshun; Kang Zhuang; Kuang Sichi; Luo Lin; Shan Hong

    2011-01-01

    Purpose: Our aim was to study the variations and patterns of quantitative fiber tractography in patients with relapsing-remitting multiple sclerosis (RRMS) and to assess the correlation between quantitative fiber tractography and the Expanded Disability Status Scale (EDSS). Material and methods: Twenty-eight patients with RRMS and 28 age-matched healthy volunteers underwent a diffusion tensor MR imaging study. Quantitative deterministic and probabilistic fiber tractography were generated in all subjects, and the mean numbers of tracked lines and the fiber density were counted. Paired-samples t tests were used to compare tracked lines and fiber density in RRMS patients with those in controls. A bivariate linear regression model was used to determine the relationship between quantitative fiber tractography and EDSS in RRMS. Results: Both deterministic and probabilistic tractography showed fewer tracked lines and lower fiber density in RRMS patients than in controls (P < .001). For both deterministic and probabilistic tractography, tracked lines and fiber density were negatively correlated with EDSS in RRMS (P < .001). The fiber tract disruptions and reductions in RRMS were directly visualized on fiber tractography. Conclusion: Changes in white matter tracts can be detected by quantitative diffusion tensor fiber tractography and correlate with clinical impairment in RRMS.

  10. Linking mothers and infants within electronic health records: a comparison of deterministic and probabilistic algorithms.

    Science.gov (United States)

    Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha

    2015-01-01

    To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) who delivered a liveborn infant from 2001 through 2008 (N = 33,093 deliveries), together with infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51, or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and its PPV 99.3% (99.1-99.4), while the deterministic algorithm had a sensitivity of 74.5% (73.8-75.2) and a PPV of 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm for 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved the linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.
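    The contrast between the two linkage styles can be sketched with toy records. The fields follow the abstract (surname, address, dates), but the records, agreement weights, and threshold below are invented for illustration and are not the study's algorithm:

```python
# Toy mother-infant linkage with hypothetical records and weights
mothers = [
    {"id": "m1", "surname": "Smith", "address": "12 Oak St", "delivery": "2005-03-01"},
    {"id": "m2", "surname": "Jones", "address": "4 Elm Ave", "delivery": "2006-07-15"},
]
infants = [
    {"id": "i1", "surname": "Smith", "address": "12 Oak Street", "birth": "2005-03-01"},
    {"id": "i2", "surname": "Jones", "address": "4 Elm Ave", "birth": "2006-07-15"},
]

def deterministic_link(m, i):
    # All fields must agree exactly -- sensitive to trivial formatting noise
    return (m["surname"] == i["surname"] and m["address"] == i["address"]
            and m["delivery"] == i["birth"])

def probabilistic_link(m, i, threshold=2.0):
    # Each agreeing field adds an (assumed) weight; partial address agreement
    # still contributes, so "St" vs "Street" does not sink the match
    score = 4.0 if m["surname"] == i["surname"] else -2.0
    score += 3.0 if m["delivery"] == i["birth"] else -3.0
    score += len(set(m["address"].split()) & set(i["address"].split()))
    return score >= threshold

det = [(m["id"], i["id"]) for m in mothers for i in infants if deterministic_link(m, i)]
prob = [(m["id"], i["id"]) for m in mothers for i in infants if probabilistic_link(m, i)]
print(det)    # [('m2', 'i2')] -- the address format difference breaks m1/i1
print(prob)   # [('m1', 'i1'), ('m2', 'i2')]
```

The toy example mirrors the study's finding: exact matching misses true pairs over cosmetic differences, while score-based matching recovers them, at the cost of choosing weights and a threshold.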

  11. A continuous variable quantum deterministic key distribution based on two-mode squeezed states

    International Nuclear Information System (INIS)

    Gong, Li-Hua; Song, Han-Chong; Liu, Ye; Zhou, Nan-Run; He, Chao-Sheng

    2014-01-01

    The distribution of deterministic keys is of significance in personal communications, but the existing continuous variable quantum key distribution protocols can only generate random keys. By exploiting the entanglement properties of two-mode squeezed states, a continuous variable quantum deterministic key distribution (CVQDKD) scheme is presented for handing over a pre-determined key to the intended receiver. The security of the CVQDKD scheme is analyzed in detail from the perspective of information theory. The analysis shows that the scheme can securely and effectively transfer pre-determined keys under ideal conditions. The proposed scheme can resist both the entanglement and beam-splitter attacks under a relatively high channel transmission efficiency. (paper)

  12. Evaluation of select trade-offs between ground-water remediation and waste minimization for petroleum refining industry

    International Nuclear Information System (INIS)

    Andrews, C.D.; McTernan, W.F.; Willett, K.K.

    1996-01-01

    An investigation comparing environmental remediation alternatives and their attendant costs for a hypothetical refinery site located in the Arkansas River alluvium was completed. Transport from the land surface to and through the ground water was simulated for three spill sizes, representing a base case and two possible levels of waste minimization. Remediation costs were calculated for five alternative remediation options, for three possible regulatory levels and alternative site locations, for four levels of technology improvement, and for eight different years. Is it appropriate, from environmental and economic perspectives, to initiate the significant efforts and expenditures necessary to minimize the amount and type of waste produced and disposed of during refinery operations; or conversely, given expected improvements in technology, is it better to wait until remediation technologies improve, allowing greater environmental compliance at lower cost? The present work used deterministic models to track a light non-aqueous phase liquid (LNAPL) spill through the unsaturated zone to the top of the water table. Benzene leaching from the LNAPL to the ground water was further routed through the alluvial aquifer. Contaminant plumes were simulated over 50 yr of transport, and remediation costs were assigned for each of the five treatment options in each of these years. The results of these efforts show that active remediation is most cost-effective after a set point, or geochemical quasi-equilibrium, is reached, where long-term improvements in technology greatly tilt the recommended option toward remediation. Finally, the impacts associated with increasingly rigorous regulatory levels present potentially significant penalties for the remediation option, but their likelihood of occurrence is difficult to define.

  13. Contribution of the deterministic approach to the characterization of seismic input

    International Nuclear Information System (INIS)

    Panza, G.F.; Romanelli, F.; Vaccari, F.; Decanini, L.; Mollaioli, F.

    1999-10-01

    Traditional methods use either a deterministic or a probabilistic approach, based on empirically derived laws for ground motion attenuation. A realistic definition of the seismic input can instead be obtained by means of advanced modelling codes based on the modal summation technique. These codes, and their extension to laterally heterogeneous structures, allow us to accurately calculate synthetic signals, complete with body waves and surface waves, corresponding to different source and anelastic structural models, taking into account the effect of local geological conditions. This deterministic approach is able to address some aspects largely overlooked in the probabilistic approach: (a) the effect of crustal properties on attenuation is not neglected; (b) the ground motion parameters are derived from synthetic time histories, not from overly simplified attenuation functions; (c) the resulting maps are directly in terms of design parameters, and do not require the adaptation of probabilistic maps to design ground motions; and (d) such maps address the issue of the deterministic definition of ground motion in a way which permits the generalization of design parameters to locations where there is little seismic history. The methodology has been applied to a large part of south-eastern Europe, in the framework of the EU-COPERNICUS project 'Quantitative Seismic Zoning of the Circum Pannonian Region'. Maps of various numerically modelled seismic hazard parameters, tested against observations whenever possible, such as peak ground displacement, velocity and acceleration, of practical use for the design of earthquake-safe structures, have been produced. The results of a standard probabilistic approach are compared with the findings of the deterministic approach. A good agreement is obtained, except for the Vrancea (Romania) zone, where the attenuation relations used in the probabilistic approach seem to underestimate the seismic hazard, mainly at large distances.

  14. System-Enforced Deterministic Streaming for Efficient Pipeline Parallelism

    Institute of Scientific and Technical Information of China (English)

    张昱; 李兆鹏; 曹慧芳

    2015-01-01

    Pipeline parallelism is a popular parallel programming pattern for emerging applications. However, programming pipelines directly on conventional multithreaded shared memory is difficult and error-prone. We present DStream, a C library that provides high-level abstractions of deterministic threads and streams for simply representing pipeline stage workers and their communications. The deterministic stream is established atop our proposed single-producer/multi-consumer (SPMC) virtual memory, which integrates synchronization with the virtual memory model to enforce determinism on shared memory accesses. We investigate various strategies on how to efficiently implement DStream atop the SPMC memory, so that an infinite sequence of data items can be asynchronously published (fixed) and asynchronously consumed in order among adjacent stage workers. We have successfully transformed two representative pipeline applications, ferret and dedup, using DStream, and derive conversion rules. An empirical evaluation shows that the converted ferret performed on par with its Pthreads and TBB counterparts in terms of running time, while the converted dedup is 2.56X and 7.05X faster than the Pthreads counterpart and 1.06X and 3.9X faster than the TBB counterpart on 16 and 32 CPUs, respectively.
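
The ordering guarantee DStream aims at can be illustrated in miniature. The sketch below is not the DStream C API; it is a hypothetical Python analogue in which each stage boundary gets its own FIFO queue with a single producer and a single consumer, so items are consumed in publication order and the pipeline output is deterministic:

```python
import queue
import threading

def stage(worker, inbox, outbox):
    # Each stage worker consumes items in order from its inbox, applies its
    # function, and publishes results downstream; None is a shutdown sentinel.
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)
            break
        outbox.put(worker(item))

def run_pipeline(items, workers):
    # One FIFO queue per stage boundary: a single producer and a single
    # consumer per queue means consumption order equals publication order,
    # so the output sequence is deterministic regardless of scheduling.
    boxes = [queue.Queue() for _ in range(len(workers) + 1)]
    threads = [threading.Thread(target=stage, args=(w, boxes[i], boxes[i + 1]))
               for i, w in enumerate(workers)]
    for t in threads:
        t.start()
    for item in items:
        boxes[0].put(item)
    boxes[0].put(None)
    for t in threads:
        t.join()
    out = []
    while (r := boxes[-1].get()) is not None:
        out.append(r)
    return out

print(run_pipeline(range(5), [lambda x: x + 1, lambda x: x * 2]))
```

Stages run concurrently on different items, as in a real pipeline, but the result order is fixed by the queues rather than by thread scheduling.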

  15. Transience and capacity of minimal submanifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, V.

    2003-01-01

    We prove explicit lower bounds for the capacity of annular domains of minimal submanifolds P-m in ambient Riemannian spaces N-n with sectional curvatures bounded from above. We characterize the situations in which the lower bounds for the capacity are actually attained. Furthermore, we apply these bounds to prove that Brownian motion defined on a complete minimal submanifold is transient when the ambient space is a negatively curved Hadamard-Cartan manifold. The proof stems directly from the capacity bounds and also covers the case of minimal submanifolds of dimension m > 2 in Euclidean spaces.

  16. Transmission cost minimization strategies for wind-electric generating facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, R. [Northern States Power Company, Minneapolis, MN (United States)

    1997-12-31

    Integrating wind-electric generation facilities into existing power systems presents opportunities not encountered in conventional energy projects. Minimizing outlet cost requires probabilistic value-based analyses appropriately reflecting the wind facility's operational characteristics. The wind resource's intermittent nature permits relaxation of deterministic criteria addressing outlet configuration and capacity required relative to facility rating. Because the equivalent capacity ratings of wind generation facilities are a fraction of the installed nameplate rating, contingency analyses in outlet design studies can concentrate on this fractional value. Further, given its non-dispatchable, low capacity factor nature, a lower level of redundancy in outlet facilities is appropriate considering the trifling contribution to output unreliability. Further cost reduction opportunities arise from the correlation of the "wind speed/generator power output" and "wind speed/overhead conductor rating" functions. Proper analysis permits the correlation's exploitation to safely increase line ratings. Lastly, poor correlation between output and utility load may permit use of smaller conductors, whose higher (mostly off-peak) losses are economically justifiable.

  17. The Role of Auxiliary Variables in Deterministic and Deterministic-Stochastic Spatial Models of Air Temperature in Poland

    Science.gov (United States)

    Szymanowski, Mariusz; Kryza, Maciej

    2017-02-01

    Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for the spatialization of air temperature, and in many studies their results prove to be better than those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better was the quality of the spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated at different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models, and the cross-validation method was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
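
The MAE-based cross-validation used to compare candidate predictor sets can be sketched as follows. This is not the authors' code: the station data, predictor set, and lapse-rate relationship are invented for illustration, and a leave-one-out loop over an ordinary least-squares MLR stands in for the full stepwise/GWR machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical station data: x/y coordinates and elevation as predictors.
n = 50
X = np.column_stack([rng.uniform(0, 100, n),     # x coordinate (km)
                     rng.uniform(0, 100, n),     # y coordinate (km)
                     rng.uniform(0, 1500, n)])   # elevation (m)
# Synthetic air temperature: dominated by an elevation lapse rate.
temp = 15.0 - 0.0065 * X[:, 2] + 0.02 * X[:, 1] + rng.normal(0, 0.3, n)

def loo_mae(X, y):
    # Leave-one-out cross-validated MAE of a multiple linear regression (MLR):
    # refit without station i, predict at station i, average absolute errors.
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.concatenate([[1.0], X[i]]) @ coef
        errs.append(abs(pred - y[i]))
    return float(np.mean(errs))

# Compare a model with all predictors against one that drops elevation:
print(loo_mae(X, temp), loo_mae(X[:, :2], temp))
```

Comparing these MAE values is the kind of criterion the study uses to decide whether adding an auxiliary variable yields a statistically meaningful improvement.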

  18. Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates

    Science.gov (United States)

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2012-03-27

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.

  19. Deterministic Model for Rubber-Metal Contact Including the Interaction Between Asperities

    NARCIS (Netherlands)

    Deladi, E.L.; de Rooij, M.B.; Schipper, D.J.

    2005-01-01

    Rubber-metal contact involves relatively large deformations and large real contact areas compared to metal-metal contact. Here, a deterministic model is proposed for the contact between rubber and metal surfaces, which takes into account the interaction between neighboring asperities. In this model,

  20. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    Science.gov (United States)

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  1. Deterministic integer multiple firing depending on initial state in Wang model

    Energy Technology Data Exchange (ETDEWEB)

    Xie Yong [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)]. E-mail: yxie@mail.xjtu.edu.cn; Xu Jianxue [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China); Jiang Jun [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)

    2006-12-15

    We numerically investigate the dynamical behaviour of the Wang model, which describes the rhythmic activities of thalamic relay neurons. The model neuron exhibits Type I excitability from a global view, but Type II excitability from a local view. There exists a narrow range of bistability, in which a subthreshold oscillation and a suprathreshold firing behaviour coexist. A special firing pattern, integer multiple firing, can be found in a certain part of the bistable range. The characteristic feature of such a firing pattern is that the histogram of interspike intervals has a multipeaked structure, and the peaks are located at about integer multiples of a basic interspike interval. Since the Wang model is noise-free, the integer multiple firing is a deterministic firing pattern. The existence of bistability leads to the deterministic integer multiple firing depending on the initial state of the model neuron, i.e., the initial values of the state variables.

  2. Deterministic integer multiple firing depending on initial state in Wang model

    International Nuclear Information System (INIS)

    Xie Yong; Xu Jianxue; Jiang Jun

    2006-01-01

    We numerically investigate the dynamical behaviour of the Wang model, which describes the rhythmic activities of thalamic relay neurons. The model neuron exhibits Type I excitability from a global view, but Type II excitability from a local view. There exists a narrow range of bistability, in which a subthreshold oscillation and a suprathreshold firing behaviour coexist. A special firing pattern, integer multiple firing, can be found in a certain part of the bistable range. The characteristic feature of such a firing pattern is that the histogram of interspike intervals has a multipeaked structure, and the peaks are located at about integer multiples of a basic interspike interval. Since the Wang model is noise-free, the integer multiple firing is a deterministic firing pattern. The existence of bistability leads to the deterministic integer multiple firing depending on the initial state of the model neuron, i.e., the initial values of the state variables.

  3. SU-G-TeP1-15: Toward a Novel GPU Accelerated Deterministic Solution to the Linear Boltzmann Transport Equation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, R [University of Alberta, Edmonton, AB (Canada); Fallone, B [University of Alberta, Edmonton, AB (Canada); Cross Cancer Institute, Edmonton, AB (Canada); MagnetTx Oncology Solutions, Edmonton, AB (Canada); St Aubin, J [University of Alberta, Edmonton, AB (Canada); Cross Cancer Institute, Edmonton, AB (Canada)

    2016-06-15

    Purpose: To develop a Graphics Processing Unit (GPU) accelerated deterministic solution to the Linear Boltzmann Transport Equation (LBTE) for accurate dose calculations in radiotherapy (RT). A deterministic solution yields the potential for major speed improvements due to the sparse matrix-vector and vector-vector multiplications and would thus be of benefit to RT. Methods: In order to leverage the massively parallel architecture of GPUs, the first order LBTE was reformulated as a second order self-adjoint equation using the Least Squares Finite Element Method (LSFEM). This produces a symmetric positive-definite matrix which is efficiently solved using a parallelized conjugate gradient (CG) solver. The LSFEM formalism is applied in space, discrete ordinates is applied in angle, and the Multigroup method is applied in energy. The final linear system of equations produced is tightly coupled in space and angle. Our code, written in CUDA-C, was benchmarked on an Nvidia GeForce TITAN-X GPU against an Intel i7-6700K CPU. A spatial mesh of 30,950 tetrahedral elements was used with an S4 angular approximation. Results: To avoid repeating a full computationally intensive finite element matrix assembly at each Multigroup energy, a novel mapping algorithm was developed which minimized the operations required at each energy. Additionally, a parallelized memory mapping for the Kronecker product between the sparse spatial and angular matrices, including Dirichlet boundary conditions, was created. Atomicity is preserved by graph-coloring overlapping nodes into separate kernel launches. The one-time mapping calculations for matrix assembly, Kronecker product, and boundary condition application took 452±1ms on GPU. Matrix assembly for 16 energy groups took 556±3s on CPU, and 358±2ms on GPU using the mappings developed. The CG solver took 93±1s on CPU, and 468±2ms on GPU. Conclusion: Three computationally intensive subroutines in deterministically solving the LBTE have been
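
The core numerical step, solving the symmetric positive-definite LSFEM system with conjugate gradients, can be sketched in a few lines. This is a generic textbook CG on a small dense matrix, not the authors' CUDA-C implementation; the test system is made SPD by construction.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    # Conjugate gradient for a symmetric positive-definite system Ax = b.
    # Only matrix-vector products with A are needed, which is why sparse
    # SPD systems (like the LSFEM matrix) map well onto GPUs.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD test matrix: A = M^T M + I is symmetric positive definite.
rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M.T @ M + np.eye(50)
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm after convergence
```

On a GPU, the `A @ p` products become sparse matrix-vector kernels and the dot products become parallel reductions, which is where the reported speedups come from.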

  4. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    International Nuclear Information System (INIS)

    Zhu, T; Finlay, J; Mesina, C; Liu, H

    2014-01-01

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6-15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. Off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with large errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.

  5. Forecasting risk along a river basin using a probabilistic and deterministic model for environmental risk assessment of effluents through ecotoxicological evaluation and GIS.

    Science.gov (United States)

    Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente

    2009-12-20

    This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from 5 wastewater treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements and precise distance measurements using Geographical Information Systems (GIS). The model was implemented into a spreadsheet and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, performed each month during one calendar year, showed a significant seasonal variation of risk while revealing that September represents the worst-case scenario with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach using Monte Carlo analysis was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approximations reveal the presence of certain substances, which might be causing sublethal effects in the aquatic species present in the Henares River.
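
The deterministic/probabilistic contrast in such a model can be illustrated with a toy dilution calculation. The risk formula, toxicity values, and flow distribution below are hypothetical and not taken from RABETOX; the point is only that a fixed worst-case flow yields a single risk value, while Monte Carlo sampling of the flow distribution yields an exceedance probability.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs: whole-effluent toxicity (toxic units) and flows (m3/s).
tox_units = 12.0     # toxicity of the discharge (assumed)
q_effluent = 0.5     # effluent flow (assumed)

def risk(q_river):
    # Toy dilution-weighted risk: toxicity scaled by the effluent:river
    # flow ratio. Real models use calibrated dose-response relationships.
    return tox_units * q_effluent / q_river

# Deterministic: a single worst-case (dry season) river flow.
print("deterministic risk:", risk(0.8))

# Probabilistic: propagate a river-flow distribution via Monte Carlo.
q_samples = rng.lognormal(mean=1.0, sigma=0.6, size=100_000)
risks = risk(q_samples)
print("P(risk > 2):", np.mean(risks > 2.0))
```

The deterministic answer is one number for one scenario; the probabilistic answer quantifies how often a risk threshold is exceeded across hydrological conditions, which is what drives the seasonal conclusions in the study.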

  6. Probabilistic approach in treatment of deterministic analyses results of severe accidents

    International Nuclear Information System (INIS)

    Krajnc, B.; Mavko, B.

    1996-01-01

    Severe accident sequences resulting in loss of the core geometric integrity have been found to have a small probability of occurrence. Because of their potential consequences to public health and safety, an evaluation of the core degradation progression and the resulting effects on the containment is necessary to determine the probability of a significant release of radioactive materials. This requires assessment of many interrelated phenomena including: steel and zircaloy oxidation, steam spikes, in-vessel debris cooling, potential vessel failure mechanisms, release of core material to the containment, containment pressurization from steam generation or generation of non-condensable gases or hydrogen burn, and ultimately coolability of degraded core material. To assess, for the containment event trees, whether a certain phenomenological event would happen or not, plant-specific deterministic analyses should be performed. Because there is large uncertainty in the prediction of severe accident phenomena in Level 2 analyses (containment event trees), a combination of the probabilistic and deterministic approaches should be used. In fact, the results of the deterministic analyses of severe accidents are treated in a probabilistic manner due to the large uncertainty of the results as a consequence of a lack of detailed knowledge. This paper discusses the approach used in many IPEs, which assures that the probability assigned to a certain question in the event tree represents the probability that the event will or will not happen, and that this probability also includes its uncertainty, which is mainly a result of lack of knowledge. (author)

  7. Deterministic ion beam material adding technology for high-precision optical surfaces.

    Science.gov (United States)

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2013-02-20

    Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. The verification experiments are accomplished on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.
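
The RMS figure of merit behind the reported convergence (24.69 nm RMS improved to 3.68 nm RMS) can be sketched as follows. The error map and the assumed fixed-fraction correction per pass are hypothetical, for illustration only; real IBA runs compute a dwell-time map from the measured error.

```python
import numpy as np

def surface_rms(height_error):
    # Root-mean-square of a residual surface height-error map,
    # with the mean (piston) term removed first.
    e = np.asarray(height_error, dtype=float)
    e = e - e.mean()
    return float(np.sqrt(np.mean(e**2)))

# Toy error map with a steep rectangular pit, as in the fused silica sample.
surface = np.zeros((100, 100))
surface[40:60, 40:60] = -80.0          # 80 nm deep pit (hypothetical)
before = surface_rms(surface)
after = surface_rms(surface * 0.1)     # assume 90% of the error corrected
print(before, after)
```

Because the RMS is linear under uniform scaling once the mean is removed, a pass that corrects a fixed fraction of the error reduces the RMS by the same fraction.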

  8. Deterministic dense coding and faithful teleportation with multipartite graph states

    International Nuclear Information System (INIS)

    Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.

    2009-01-01

    We propose schemes to perform the deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.

  9. Appearance of deterministic mixing behavior from ensembles of fluctuating hydrodynamics simulations of the Richtmyer-Meshkov instability

    KAUST Repository

    Narayanan, Kiran

    2018-04-19

    We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes by comparison against an air-SF6 RMI experiment, and for the stochastic terms by comparison against direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems having length scales with decreasing order of magnitude that span from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid, when using a discretization of a space-time stochastic flux Z(x,t) of the form Z(x,t) → N(ih, nΔt)/√(h³Δt), for spatial interval h, time interval Δt, and Gaussian noise N, h should be greater than h₀, with h₀ corresponding to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x,t) → N(ih, nΔt)/√(max(h³, h₀³)Δt), with h₀ = ξh. For Bo ≪ 1, deterministic mixing behavior emerges as the ensemble-averaged behavior of several fluctuating instances, whereas when Bo ≈ 1, a deviation from deterministic behavior is observed. For all cases, the FCNS solution provides bounds on the growth rate of the amplitude of the mixing layer.

  10. Appearance of deterministic mixing behavior from ensembles of fluctuating hydrodynamics simulations of the Richtmyer-Meshkov instability

    KAUST Repository

    Narayanan, Kiran; Samtaney, Ravi

    2018-01-01

    We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes by comparison against an air-SF6 RMI experiment, and for the stochastic terms by comparison against direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems having length scales with decreasing order of magnitude that span from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid, when using a discretization of a space-time stochastic flux Z(x,t) of the form Z(x,t) → N(ih, nΔt)/√(h³Δt), for spatial interval h, time interval Δt, and Gaussian noise N, h should be greater than h₀, with h₀ corresponding to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x,t) → N(ih, nΔt)/√(max(h³, h₀³)Δt), with h₀ = ξh. For Bo ≪ 1, deterministic mixing behavior emerges as the ensemble-averaged behavior of several fluctuating instances, whereas when Bo ≈ 1, a deviation from deterministic behavior is observed. For all cases, the FCNS solution provides bounds on the growth rate of the amplitude of the mixing layer.

  11. A minimal model of burst-noise induced bistability.

    Directory of Open Access Journals (Sweden)

    Johannes Falk

    We investigate the influence of intrinsic noise on stable states of a one-dimensional dynamical system that shows in its deterministic version a saddle-node bifurcation between monostable and bistable behaviour. The system is a modified version of the Schlögl model, which is a chemical reaction system with only one type of molecule. The strength of the intrinsic noise is varied without changing the deterministic description by introducing bursts in the autocatalytic production step. We study the transitions between monostable and bistable behavior in this system by evaluating the number of maxima of the stationary probability distribution. We find that changing the size of bursts can destroy and even induce saddle-node bifurcations. This means that a bursty production of molecules can qualitatively change the dynamics of a chemical reaction system even when the deterministic description remains unchanged.
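
A minimal Gillespie-type simulation of such a bursty Schlögl variant might look as follows. The rate constants are hypothetical, and the key construction is the one described in the abstract: the autocatalytic step fires bursts of `burst` molecules at 1/burst times the rate, which leaves the deterministic (mean-field) drift unchanged while amplifying the intrinsic noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def schloegl_ssa(x0, t_end, burst=1, k=(0.15, 1.5e-3, 20.0, 3.5)):
    # Gillespie simulation of a Schlögl-type model with reactions
    #   2X -> (2+burst)X   at rate k1*x*(x-1)/burst   (bursty autocatalysis)
    #   3X -> 2X           at rate k2*x*(x-1)*(x-2)
    #   0  -> X            at rate k3
    #   X  -> 0            at rate k4*x
    # Scaling the autocatalytic rate by 1/burst while producing `burst`
    # molecules keeps the mean drift k1*x^2 (to leading order) unchanged.
    k1, k2, k3, k4 = k   # hypothetical rate constants
    x, t = x0, 0.0
    while t < t_end:
        props = np.array([k1 * x * (x - 1) / burst,
                          k2 * x * (x - 1) * (x - 2),
                          k3,
                          k4 * x])
        total = props.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)
        r = rng.choice(4, p=props / total)
        x += (burst, -1, 1, -1)[r]
    return x

samples = [schloegl_ssa(20, 1.0, burst=5) for _ in range(10)]
print(samples)
```

Comparing histograms of such endpoint samples for different `burst` values is how one would see the stationary distribution switch between one and two maxima, as the paper reports.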

  12. Sludge minimization technologies - an overview

    Energy Technology Data Exchange (ETDEWEB)

    Oedegaard, Hallvard

    2003-07-01

    The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. The cost of the sludge treatment amounts to more than the cost of the liquid treatment in many cases. Therefore the focus on and interest in sludge minimization is steadily increasing. In the paper an overview is given of sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may be a result of reduced production of sludge and/or of disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on the stirred ball-mill, high-pressure homogenizer and ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)

  13. Assessment of fusion facility dose rate map using mesh adaptivity enhancements of hybrid Monte Carlo/deterministic techniques

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Wilson, Paul P.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Grove, Robert E.

    2014-01-01

    Highlights: •Calculate the prompt dose rate everywhere throughout the entire fusion energy facility. •Utilize FW-CADIS to accurately perform difficult neutronics calculations for fusion energy systems. •Develop three mesh adaptivity algorithms to enhance FW-CADIS efficiency in fusion-neutronics calculations. -- Abstract: Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer.

  14. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  15. Deterministic and Probabilistic Serviceability Assessment of Footbridge Vibrations due to a Single Walker Crossing

    Directory of Open Access Journals (Sweden)

    Cristoforo Demartino

    2018-01-01

    Full Text Available This paper presents a numerical study on the deterministic and probabilistic serviceability assessment of footbridge vibrations due to a single walker crossing. The dynamic response of the footbridge is analyzed by means of modal analysis, considering only the first lateral and vertical modes. Single span footbridges with uniform mass distribution are considered, with different values of the span length, natural frequencies, mass, and structural damping and with different support conditions. The load induced by a single walker crossing the footbridge is modeled as a moving sinusoidal force either in the lateral or in the vertical direction. The variability of the characteristics of the load induced by walkers is modeled using probability distributions taken from the literature defining a Standard Population of walkers. Deterministic and probabilistic approaches were adopted to assess the peak response. Based on the results of the simulations, deterministic and probabilistic vibration serviceability assessment methods are proposed, not requiring numerical analyses. Finally, an example of the application of the proposed method to a truss steel footbridge is presented. The results highlight the advantages of the probabilistic procedure in terms of reliability quantification.
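The single-walker moving-load model described above can be sketched in a few lines. The following single-mode simulation uses illustrative parameter values (span, modal damping, force amplitude, pacing rate), not values from the paper:

```python
import numpy as np

def peak_modal_acceleration(f_p, L=50.0, m=500.0, f_n=2.0, zeta=0.005,
                            F0=280.0, v=1.5):
    """Peak midspan acceleration (m/s^2) for a single walker modeled as a
    moving sinusoidal force of pacing frequency f_p, first vertical mode
    only. All parameter values are illustrative assumptions."""
    w_n = 2.0 * np.pi * f_n        # natural circular frequency (rad/s)
    M = m * L / 2.0                # modal mass, uniform simply supported span
    dt = 1.0 / (400.0 * f_n)       # time step well below the natural period
    q = qd = 0.0                   # modal displacement and velocity
    peak = 0.0
    for ti in np.arange(0.0, L / v, dt):
        phi = np.sin(np.pi * v * ti / L)              # mode shape at walker
        load = F0 * np.sin(2.0 * np.pi * f_p * ti) * phi / M
        qdd = load - 2.0 * zeta * w_n * qd - w_n**2 * q
        qd += qdd * dt             # semi-implicit Euler update
        q += qd * dt
        peak = max(peak, abs(qdd)) # midspan acceleration (mode shape = 1)
    return peak
```

Sweeping the pacing frequency and force amplitude over the distributions of a Standard Population of walkers would turn this deterministic peak estimate into the probabilistic assessment described above.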

  16. On the deterministic and stochastic use of hydrologic models

    Science.gov (United States)

    Farmer, William H.; Vogel, Richard M.

    2016-01-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
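The effect of ignoring calibration residuals, and its correction by reintroducing them, can be illustrated with a toy calibration; the linear model and synthetic data below are illustrative stand-ins for a watershed model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" responses: a linear signal plus residual error
n = 2000
x = rng.uniform(0.0, 10.0, n)
obs = 2.0 * x + 1.0 + rng.normal(0.0, 1.5, n)

# Calibrate a simple linear model by least squares
A = np.vstack([x, np.ones(n)]).T
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
det_sim = A @ coef                     # deterministic simulation
resid = obs - det_sim                  # calibration residuals

# Stochastic use: reintroduce resampled residuals into the simulation
stoch_sim = det_sim + rng.choice(resid, size=n, replace=True)

var_gap_det = abs(np.var(obs) - np.var(det_sim))
var_gap_stoch = abs(np.var(obs) - np.var(stoch_sim))
```

The deterministic output systematically understates the variance of the observed responses; adding resampled residuals largely restores the observed distributional properties.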

  17. Using MCBEND for neutron or gamma-ray deterministic calculations

    Science.gov (United States)

    Geoff, Dobson; Adam, Bird; Brendan, Tollit; Paul, Smith

    2017-09-01

    MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with Splitting/Russian Roulette. MCBEND has a well-established automated tool to generate this importance map, commonly referred to as the MAGIC module, which uses a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and the new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.

  18. Exponential power spectra, deterministic chaos and Lorentzian pulses in plasma edge dynamics

    International Nuclear Information System (INIS)

    Maggs, J E; Morales, G J

    2012-01-01

    Exponential spectra have been observed in the edges of tokamaks, stellarators, helical devices and linear machines. The observation of exponential power spectra is significant because such a spectral character has been closely associated with the phenomenon of deterministic chaos by the nonlinear dynamics community. The proximate cause of exponential power spectra in both magnetized plasma edges and nonlinear dynamics models is the occurrence of Lorentzian pulses in the time signals of fluctuations. Lorentzian pulses are produced by chaotic behavior in the separatrix regions of plasma E × B flow fields or the limit cycle regions of nonlinear models. Chaotic advection, driven by the potential fields of drift waves in plasmas, results in transport. The observation of exponential power spectra and Lorentzian pulses suggests that fluctuations and transport at the edge of magnetized plasmas arise from deterministic, rather than stochastic, dynamics. (paper)
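The connection between Lorentzian pulses and exponential spectra can be verified directly: a Lorentzian pulse of width τ has Fourier power decaying as exp(−4πτf). A minimal numerical check with an illustrative pulse width:

```python
import numpy as np

# Lorentzian pulse s(t) = tau^2 / (t^2 + tau^2): |S(f)|^2 ~ exp(-4*pi*tau*f)
dt = 0.01
t = np.arange(-50.0, 50.0, dt)
tau = 0.5                              # pulse width (illustrative)
sig = tau**2 / (t**2 + tau**2)

S = np.abs(np.fft.rfft(sig)) ** 2      # power spectrum
f = np.fft.rfftfreq(len(t), dt)

# Fit log-power against frequency over a band above dc
band = (f > 0.2) & (f < 1.5)
slope, _ = np.polyfit(f[band], np.log(S[band]), 1)
expected_slope = -4.0 * np.pi * tau    # analytic exponential decay rate
```

The fitted slope of the log spectrum reproduces the analytic exponential decay rate, which is the spectral signature associated with deterministic chaos in the abstract above.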

  19. 2D deterministic radiation transport with the discontinuous finite element method

    International Nuclear Information System (INIS)

    Kershaw, D.; Harte, J.

    1993-01-01

    This report provides a complete description of the analytic and discretized equations for 2D deterministic radiation transport. This computational model has been checked against a wide variety of analytic test problems and found to give excellent results. We make extensive use of the discontinuous finite element method

  20. Probabilistic evaluation of fuel element performance by the combined use of a fast running simplistic and a detailed deterministic fuel performance code

    International Nuclear Information System (INIS)

    Misfeldt, I.

    1980-01-01

    A comprehensive evaluation of fuel element performance requires a probabilistic fuel code supported by a well-benchmarked deterministic code. This paper presents an analysis of a SGHWR ramp experiment, where the probabilistic fuel code FRP is utilized in combination with the deterministic fuel models FFRS and SLEUTH/SEER. The statistical methods employed in FRP are Monte Carlo simulation and a low-order Taylor approximation. The fast-running simplistic fuel code FFRS is used for the deterministic simulations, whereas simulations with SLEUTH/SEER are used to verify the predictions of FFRS. The ramp test was performed with a SGHWR fuel element, where 9 of the 36 fuel pins failed. There seemed to be good agreement between the deterministic simulations and the experiment, but the statistical evaluation shows that the uncertainty in the important performance parameters is too large to support this apparently 'nice' result. The analysis therefore indicates a discrepancy between the experiment and the deterministic code predictions. Possible explanations for this disagreement are discussed. (author)
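The two statistical treatments mentioned (Monte Carlo simulation and a low-order Taylor approximation) can be contrasted on a toy performance parameter; the function y = a·b² and the input uncertainties below are illustrative, not FRP's models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "performance parameter": y = a * b**2, with uncertain inputs
mu_a, sd_a = 2.0, 0.1
mu_b, sd_b = 3.0, 0.2

# First-order Taylor (delta-method) uncertainty propagation
dy_da = mu_b**2                    # partial derivative at the mean point
dy_db = 2.0 * mu_a * mu_b
sd_taylor = np.sqrt((dy_da * sd_a) ** 2 + (dy_db * sd_b) ** 2)

# Monte Carlo propagation of the same uncertainties
a = rng.normal(mu_a, sd_a, 100_000)
b = rng.normal(mu_b, sd_b, 100_000)
sd_mc = np.std(a * b**2)
```

For mildly nonlinear responses and small input uncertainties the two estimates agree closely; Monte Carlo becomes preferable when nonlinearity or input spread grows.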

  1. Deterministic and fuzzy-based methods to evaluate community resilience

    Science.gov (United States)

    Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo

    2018-04-01

    Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from the literature, and each indicator is linked to a measure allowing the analytical computation of its performance. The first method proposed in this paper requires data on previous disasters as an input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data, while accounting for the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open-source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided in the paper.
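A knowledge-based fuzzy evaluation of a single indicator can be sketched with triangular membership functions; the three linguistic levels and their centroids below are illustrative choices, not the PEOPLES rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_score(x):
    """Map one indicator value in [0, 1] through low/medium/high fuzzy sets
    and defuzzify by a weighted average of the level centroids (0, 0.5, 1)."""
    low = tri(x, -0.5, 0.0, 0.5)
    med = tri(x, 0.0, 0.5, 1.0)
    high = tri(x, 0.5, 1.0, 1.5)
    return (low * 0.0 + med * 0.5 + high * 1.0) / (low + med + high)

def resilience_index(indicator_scores):
    """Aggregate per-indicator fuzzy scores into a community-level index
    (equal weights here; a real rule base would weight the dimensions)."""
    return sum(fuzzy_score(s) for s in indicator_scores) / len(indicator_scores)
```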

  2. Evaluation of Deterministic and Stochastic Components of Traffic Counts

    Directory of Open Access Journals (Sweden)

    Ivan Bošnjak

    2012-10-01

    Full Text Available Traffic counts or statistical evidence of the traffic process are often a characteristic of time-series data. In this paper the fundamental problem of estimating the deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
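Trend and seasonal estimation of the kind applied to the traffic counts can be sketched on synthetic data (an assumed weekly period and a simple centered moving average, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

period = 7                                   # assumed weekly seasonality
n = 20 * period
t = np.arange(n)
counts = (100.0 + 0.5 * t                             # deterministic trend
          + 20.0 * np.sin(2 * np.pi * t / period)     # seasonal component
          + rng.normal(0.0, 5.0, n))                  # stochastic component

# 1) Trend: centered moving average over one full period
trend_est = np.convolve(counts, np.ones(period) / period, mode="same")

# 2) Season: average the detrended series at each phase of the period
detrended = counts - trend_est
season_est = np.array([detrended[p::period].mean() for p in range(period)])
season_est -= season_est.mean()              # enforce a zero-mean season

residual = detrended - np.tile(season_est, n // period)
core = slice(period, n - period)             # ignore moving-average edge effects
```

The remaining residual is the stochastic component to which ARIMA or state-space models would then be fitted.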

  3. A Bayesian perspective on age replacement with minimal repair

    International Nuclear Information System (INIS)

    Sheu, S.-H.; Yeh, R.H.; Lin, Y.-B.; Juang, M.-G.

    1999-01-01

    In this article, a Bayesian approach is developed for determining an optimal age replacement policy with minimal repair. By incorporating minimal repair, planned replacement, and unplanned replacement, mathematical formulas for the expected cost per unit time are obtained for two cases: the infinite-horizon case and the one-replacement-cycle case. For each case, we show that there exists a unique and finite optimal replacement age under some reasonable conditions. When the failure density is Weibull with uncertain parameters, a Bayesian approach is established to formally express and update the uncertain parameters for determining an optimal age replacement policy. Further, various special cases are discussed in detail. Finally, a numerical example is given.
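For the infinite-horizon case with minimal repair, a standard (non-Bayesian) formulation of the expected cost per unit time is C(T) = (c_m·H(T) + c_p)/T, with H the cumulative hazard; for a Weibull hazard the optimal replacement age has a closed form. A sketch with illustrative costs and Weibull parameters:

```python
import numpy as np

def cost_rate(T, beta=2.5, eta=1000.0, c_p=50.0, c_m=10.0):
    """Expected cost per unit time: planned replacement (cost c_p) at age T,
    minimal repairs (cost c_m) at failures. The Weibull cumulative hazard
    H(T) = (T/eta)**beta equals the expected number of repairs by age T.
    All parameter values are illustrative."""
    return (c_m * (T / eta) ** beta + c_p) / T

T = np.linspace(10.0, 5000.0, 100_000)
T_star = T[np.argmin(cost_rate(T))]          # numerical optimum

# Closed form for the Weibull case: T* = eta * (c_p / (c_m*(beta-1)))**(1/beta)
T_closed = 1000.0 * (50.0 / (10.0 * 1.5)) ** (1.0 / 2.5)
```

A Bayesian treatment, as in the article, would place a prior on (beta, eta) and minimize the posterior-expected cost rate instead of using fixed parameters.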

  4. Human Resources Readiness as TSO for Deterministic Safety Analysis on the First NPP in Indonesia

    International Nuclear Information System (INIS)

    Sony Tjahyani, D. T.

    2010-01-01

    In government regulation no. 43 year 2006 it is mentioned that the preliminary safety analysis report and the final safety analysis report are among the requirements for construction and operation licensing of commercial power reactors (NPPs). The purpose of the safety analysis report is to confirm the adequacy and efficiency of provisions within the defence in depth of a nuclear reactor. Deterministic analysis is used in the safety analysis report. One of the TSO tasks is to evaluate this report at the request of the operator or the regulatory body. This paper discusses human resources readiness as a TSO for deterministic safety analysis on the first NPP in Indonesia. The assessment is done by comparing the analysis steps in SS-23 and SS-30 with the current human resources status of BATAN. The assessment results showed that human resources for deterministic safety analysis are ready as a TSO, especially to review the preliminary safety analysis report and to revise the final safety analysis report during licensing of the first NPP in Indonesia. However, preparing the safety analysis report itself still requires many more competent human resources. (author)

  5. About the Possibility of Creation of a Deterministic Unified Mechanics

    International Nuclear Information System (INIS)

    Khomyakov, G.K.

    2005-01-01

    The possibility of creating a unified deterministic scheme of classical and quantum mechanics, allowing their achievements to be preserved, is discussed. It is shown that the canonical system of ordinary differential equations of Hamiltonian classical mechanics can be supplemented with a vector system of ordinary differential equations for the variables of those equations. The interpretational problems of quantum mechanics are considered.

  6. Quantum scattering in one-dimensional systems satisfying the minimal length uncertainty relation

    Energy Technology Data Exchange (ETDEWEB)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    2016-12-15

    In quantum gravity theories, when the scattering energy is comparable to the Planck energy, the Heisenberg uncertainty principle breaks down and is replaced by the minimal length uncertainty relation. In this paper, the consequences of the minimal length uncertainty relation on one-dimensional quantum scattering are studied using an approach involving a recently proposed second-order differential equation. An exact analytical expression for the tunneling probability through a locally-periodic rectangular potential barrier system is obtained. Results show that the existence of a non-zero minimal length uncertainty tends to shift the resonant tunneling energies in the positive direction. Scattering through a locally-periodic potential composed of double-rectangular potential barriers shows that the first band of resonant tunneling energies widens for minimal length cases when the double-rectangular potential barrier is symmetric but narrows down when the double-rectangular potential barrier is asymmetric. A numerical solution which exploits the use of Wronskians is used to calculate the transmission probabilities through the Pöschl–Teller well, Gaussian barrier, and double-Gaussian barrier. Results show that the probability of passage through the Pöschl–Teller well and Gaussian barrier is smaller in the minimal length cases compared to the non-minimal length case. For the double-Gaussian barrier, the probability of passage for energies that are more positive than the resonant tunneling energy is larger in the minimal length cases compared to the non-minimal length case. The approach is exact and applicable to many types of scattering potential.
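As a baseline for the minimal-length results above, the ordinary quantum-mechanical transmission through a single rectangular barrier has a closed form; the sketch below uses natural units and illustrative parameters and does not include the minimal-length corrections studied in the paper:

```python
import numpy as np

def transmission(E, V0=10.0, a=1.0, m=1.0, hbar=1.0):
    """Tunneling probability through one rectangular barrier of height V0
    and width a, for particle energy E < V0 (standard QM, natural units):
    T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E)))."""
    kappa = np.sqrt(2.0 * m * (V0 - E)) / hbar   # decay constant in barrier
    s = np.sinh(kappa * a)
    return 1.0 / (1.0 + (V0**2 * s**2) / (4.0 * E * (V0 - E)))
```

Transmission grows with energy and falls off sharply with barrier width; minimal-length corrections would shift the resonances of a multi-barrier version of this baseline.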

  7. Deterministic Effects of Occupational Exposures in the Mayak Nuclear Workers Cohort

    International Nuclear Information System (INIS)

    Azinova, T. V.; Okladnikova, N. D.; Sumina, M. V.; Pesternikova, V. S.; Osovets, V. S.; Druzhimina, M. B.; Seminikhina, N. g.

    2004-01-01

    The widespread utilization of nuclear energy in recent decades has led to a steady increase in the contingents exposed to ionizing radiation sources. In order to predict radiation risks, it is important to have and apply all the experience in the assessment of radiation health effects accumulated by now in different countries. The proposed report will present results of the long-term follow-up of a cohort of nuclear workers at the Mayak Production Association, which was the first nuclear facility in Russia. The established system of individual dosimetry of external exposure, monitoring of internal radiation and the special system of medical follow-up of healthy nuclear workers during the last 50 years allowed the collection of unique primary data to study radiation effects, their patterns and mechanisms specific to exposure dose. The study cohort includes 61 percent males and 39 percent females. The vital status is known for 90 percent of cases; 44 percent of workers are still alive and undergo regular medical examination in our Clinic. Unfortunately, by now 50 percent of the workers have died, and 6 percent were lost to follow-up. Total doses from chronic external gamma rays in the cohort ranged from 0.6 to 10.8 Gy (annual exposure doses were from 0.001 to 7.4 Gy); Pu body burden was from 0.3 to 72.3 kBq. The most intensive chronic exposure of workers was registered during 1948 to 1958. At this time, 19 radiation accidents occurred at the Mayak PA, and the highest incidence of deterministic effects was observed in this period. In the cohort of Mayak nuclear workers there were diagnosed 60 cases of acute radiation syndrome (I to IV degrees of severity); 2079 cases of chronic radiation sickness; 120 cases of plutonium pneumosclerosis; 5 cases of radiation cataracts; and over 400 cases of local radiation injuries.
The report will present the dependences of the observed effects on absorbed radiation dose and dose rate in terms of acute radiation

  8. Pest persistence and eradication conditions in a deterministic model for sterile insect release.

    Science.gov (United States)

    Gordillo, Luis F

    2015-01-01

    The release of sterile insects is an environmentally friendly pest control method used in integrated pest management programmes. Difference or differential equations based on Knipling's model often provide satisfactory qualitative descriptions of pest populations subject to sterile release at relatively high densities with large mating encounter rates, but fail otherwise. In this paper, I derive and explore numerically deterministic population models that include sterile release together with scarce mating encounters in the particular case of species with long lifespans and multiple matings. The differential equations account separately for the effects of mating failure due to sterile male release and the frequency of mating encounters. When the insects' spatial spread is incorporated through diffusion terms, computations reveal the possibility of steady pest persistence in finite-size patches. In the presence of density-dependent regulation, it is observed that sterile release might contribute to inducing sudden suppression of the pest population.
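A Knipling-style sketch (not the paper's exact equations) captures the eradication threshold produced by scarce fertile matings: with a constant sterile population S, a fraction N/(N+S) of matings is fertile, giving dN/dt = N(r·N/(N+S) − μ) and extinction whenever N falls below N* = μS/(r − μ). Parameter values are illustrative:

```python
def simulate_pest(N0, S, r=0.6, mu=0.2, dt=0.01, steps=200_000):
    """Euler integration of dN/dt = N * (r * N/(N + S) - mu): fertile
    matings occur with probability N/(N + S) under sterile release S.
    Illustrative rates; the eradication threshold is N* = mu*S/(r - mu)."""
    N = float(N0)
    for _ in range(steps):
        if N < 1e-12:                  # population effectively extinct
            return 0.0
        N += dt * N * (r * N / (N + S) - mu)
        N = min(N, 1e6)                # cap growth in the persistence case
    return N
```

With r = 0.6 and mu = 0.2 the threshold is S/2, so a release of S = 400 eradicates a population starting at N0 = 100, while the same population persists without release.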

  9. MIMO capacity for deterministic channel models: sublinear growth

    DEFF Research Database (Denmark)

    Bentosela, Francois; Cornean, Horia; Marchetti, Nicola

    2013-01-01

    … In the current paper, we apply those results in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment and the number of transmitting and receiving antennas. The antennas are assumed to fill in a given fixed volume. Under some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict a sublinear behavior.

  10. CALTRANS: A parallel, deterministic, 3D neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS, using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface), are provided in appendices.

  11. Deterministic Single-Photon Source for Distributed Quantum Networking

    International Nuclear Information System (INIS)

    Kuhn, Axel; Hennrich, Markus; Rempe, Gerhard

    2002-01-01

    A sequence of single photons is emitted on demand from a single three-level atom strongly coupled to a high-finesse optical cavity. The photons are generated by an adiabatically driven stimulated Raman transition between two atomic ground states, with the vacuum field of the cavity stimulating one branch of the transition, and laser pulses deterministically driving the other branch. This process is unitary and therefore intrinsically reversible, which is essential for quantum communication and networking, and the photons should be appropriate for all-optical quantum information processing

  12. The deterministic optical alignment of the HERMES spectrograph

    Science.gov (United States)

    Gers, Luke; Staszak, Nicholas

    2014-07-01

    The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.

  13. Minimal access direct spondylolysis repair using a pedicle screw-rod system: a case series

    Directory of Open Access Journals (Sweden)

    Mohi Eldin Mohamed

    2012-11-01

    Full Text Available Abstract Introduction Symptomatic spondylolysis is always challenging to treat because the pars defect causing the instability needs to be stabilized while segmental fusion needs to be avoided. Direct repair of the pars defect is ideal in cases of spondylolysis in which posterior decompression is not necessary. We report clinical results using segmental pedicle-screw-rod fixation with bone grafting in patients with symptomatic spondylolysis, a modification of a technique first reported by Tokuhashi and Matsuzaki in 1996. We also describe the surgical technique, assess the fusion and analyze the outcomes of patients. Case presentation At Cairo University Hospital, eight out of twelve Egyptian patients’ acute pars fractures healed after conservative management. Of those, two young male patients underwent an operative procedure for chronic low back pain secondary to pars defect. Case one was a 25-year-old Egyptian man who presented with a one-year history of axial low back pain, not radiating to the lower limbs, after falling from height. Case two was a 29-year-old Egyptian man who presented with a one-year history of axial low back pain and a one-year history of mild claudication and infrequent radiation to the leg, never below the knee. Utilizing a standardized mini-access fluoroscopically-guided surgical protocol, fixation was established with two titanium pedicle screws placed into both pedicles, at the same level as the pars defect, without violating the facet joint. The cleaned pars defect was grafted; a curved titanium rod was then passed under the base of the spinous process of the affected vertebra, bridging the loose fragment, and attached to the pedicle screw heads, to uplift the spinous process, followed by compression of the defect. The patients were discharged three days after the procedure, with successful fusion at one-year follow-up. No rod breakage or implant-related complications were reported. Conclusions Where there is no

  14. Malignant arrhythmia as the first manifestation of Wolff-Parkinson-White syndrome: a case with minimal preexcitation on electrocardiography.

    Science.gov (United States)

    Gungor, B; Alper, A T

    2013-09-01

    Wolff-Parkinson-White (WPW) syndrome is defined as the presence of an accessory atrioventricular pathway which is manifested as delta waves and short PR interval on electrocardiography (ECG). However, some WPW cases do not have typical findings on ECG and may remain undiagnosed unless palpitations occur. Sudden cardiac death may be the first manifestation of WPW and develops mostly secondary to degeneration of atrial fibrillation into ventricular fibrillation. In this report, we present a case of undiagnosed WPW with minimal preexcitation on ECG and who suffered an episode of malignant arrhythmia as the first manifestation of the disease.

  15. Development and application of a deterministic-realistic hybrid methodology for LOCA licensing analysis

    International Nuclear Information System (INIS)

    Liang, Thomas K.S.; Chou, Ling-Yao; Zhang, Zhongwei; Hsueh, Hsiang-Yu; Lee, Min

    2011-01-01

    Highlights: → A new LOCA licensing methodology (DRHM, deterministic-realistic hybrid methodology) was developed. → DRHM involves conservative Appendix K physical models and statistical treatment of plant status uncertainties. → DRHM can generate 50-100 K PCT margin as compared to a traditional Appendix K methodology. - Abstract: It is well recognized that a realistic LOCA analysis with uncertainty quantification can generate greater safety margin as compared with classical conservative LOCA analysis using Appendix K evaluation models. The associated margin can be more than 200 K. To quantify uncertainty in BELOCA analysis, generally there are two kinds of uncertainties required to be identified and quantified, which involve model uncertainties and plant status uncertainties. Particularly, it will take huge effort to systematically quantify individual model uncertainty of a best estimate LOCA code, such as RELAP5 and TRAC. Instead of applying a full ranged BELOCA methodology to cover both model and plant status uncertainties, a deterministic-realistic hybrid methodology (DRHM) was developed to support LOCA licensing analysis. Regarding the DRHM methodology, Appendix K deterministic evaluation models are adopted to ensure model conservatism, while CSAU methodology is applied to quantify the effect of plant status uncertainty on PCT calculation. Generally, DRHM methodology can generate about 80-100 K margin on PCT as compared to Appendix K bounding state LOCA analysis.
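Plant-status uncertainty in CSAU-style analyses is commonly quantified with nonparametric (Wilks) order statistics, although the abstract does not state which variant DRHM uses; the familiar one-sided 95/95 criterion requires the smallest number of code runs n with 1 − 0.95ⁿ ≥ 0.95:

```python
# First-order Wilks criterion: the probability that the maximum PCT of n
# random runs bounds the true 95th percentile is 1 - 0.95**n.
n = 1
while 1.0 - 0.95**n < 0.95:
    n += 1
# n is now the minimum number of runs for a one-sided 95/95 tolerance bound
```

This yields the well-known figure of 59 code runs, after which the largest computed PCT serves as the 95/95 bounding value.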

  16. Deterministic effects of interventional radiology procedures

    International Nuclear Information System (INIS)

    Shope, Thomas B.

    1997-01-01

    The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure- or equipment-related factors that may have contributed to the injury. Reports submitted to the FDA under both mandatory and voluntary reporting requirements which described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissue necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures which have required extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)

  17. Primality deterministic and primality probabilistic tests

    Directory of Open Access Journals (Sweden)

    Alfredo Rizzi

    2007-10-01

    Full Text Available In this paper the author comments on the importance of prime numbers in mathematics and in cryptography, recalling the seminal research of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that generate prime numbers; among them, the Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or rejected. Deterministic primality tests are algorithms that establish with certainty whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computing time would be too long. Probabilistic primality tests allow one to test the null hypothesis that the number is prime. The paper comments on the most important statistical tests.
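A standard probabilistic primality test of the kind discussed is Miller-Rabin: primes always pass, while a composite number survives one random round with probability at most 1/4, so k rounds leave an error probability of at most 4⁻ᵏ. A compact sketch:

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin primality test: always correct for primes; a composite
    passes all k random rounds with probability at most 4**(-k)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):        # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                     # write n - 1 = d * 2**s, d odd
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)                  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # a witnesses that n is composite
    return True                           # n is prime with high probability
```

The test is fast even for the cryptographic-size numbers for which deterministic tests mentioned above are impractical.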

  18. Identifying deterministic signals in simulated gravitational wave data: algorithmic complexity and the surrogate data method

    International Nuclear Information System (INIS)

    Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David

    2006-01-01

    We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
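The surrogate-data logic can be sketched with phase-randomized surrogates and a simple nonlinearity statistic standing in for algorithmic complexity; a chaotic logistic-map series plays the role of the deterministic signal here, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def trev(x):
    """Time-reversal asymmetry: a simple discriminating statistic that is
    near zero for any linear Gaussian (stochastic) process."""
    d = np.diff(x)
    return np.mean(d**3) / np.mean(d**2) ** 1.5

def phase_surrogate(x):
    """Series with the same power spectrum as x but randomized phases,
    i.e. a realization of the 'linear stochastic' null hypothesis."""
    X = np.fft.rfft(x)
    ph = rng.uniform(0.0, 2.0 * np.pi, len(X))
    ph[0] = ph[-1] = 0.0                  # keep dc and Nyquist terms real
    return np.fft.irfft(np.abs(X) * np.exp(1j * ph), n=len(x))

# Deterministic test signal: the fully chaotic logistic map
x = np.empty(4000)
x[0] = 0.3
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

t0 = trev(x)
surr = np.array([trev(phase_surrogate(x)) for _ in range(99)])
# Reject the null when the statistic of the data falls far outside the
# surrogate distribution: the series contains deterministic dynamics.
deterministic_detected = abs(t0 - surr.mean()) > 3.0 * surr.std()
```

The paper's method follows the same recipe but uses algorithmic complexity as the discriminating statistic on noisy GW data.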

  19. Accessing the dark exciton spin in deterministic quantum-dot microlenses

    Science.gov (United States)

    Heindel, Tobias; Thoma, Alexander; Schwartz, Ido; Schmidgall, Emma R.; Gantz, Liron; Cogan, Dan; Strauß, Max; Schnauber, Peter; Gschrey, Manuel; Schulze, Jan-Hindrik; Strittmatter, Andre; Rodt, Sven; Gershoni, David; Reitzenstein, Stephan

    2017-12-01

    The dark exciton state in semiconductor quantum dots (QDs) constitutes a long-lived solid-state qubit which has the potential to play an important role in implementations of solid-state-based quantum information architectures. In this work, we exploit deterministically fabricated QD microlenses which promise enhanced photon extraction, to optically prepare and read out the dark exciton spin and observe its coherent precession. The optical access to the dark exciton is provided via spin-blockaded metastable biexciton states acting as heralding states, which are identified by deploying polarization-sensitive spectroscopy as well as time-resolved photon cross-correlation experiments. Our experiments reveal a spin-precession period of the dark exciton of (0.82 ± 0.01) ns corresponding to a fine-structure splitting of (5.0 ± 0.7) μeV between its eigenstates |↑ ⇑ ±↓ ⇓ ⟩. By exploiting microlenses deterministically fabricated above pre-selected QDs, our work demonstrates the possibility to scale up implementations of quantum information processing schemes using the QD-confined dark exciton spin qubit, such as the generation of photonic cluster states or the realization of a solid-state-based quantum memory.
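The quoted fine-structure splitting follows from the precession period via ΔE = h/T; a one-line check using the paper's T = 0.82 ns:

```python
h = 4.135667696e-15      # Planck constant in eV*s (CODATA value)
T = 0.82e-9              # dark-exciton spin-precession period in s (from the paper)
delta_E_ueV = h / T * 1e6  # fine-structure splitting in micro-eV
```

This reproduces the reported splitting of about 5.0 μeV between the dark-exciton eigenstates.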

  20. Det-WiFi: A Multihop TDMA MAC Implementation for Industrial Deterministic Applications Based on Commodity 802.11 Hardware

    Directory of Open Access Journals (Sweden)

    Yujun Cheng

    2017-01-01

    Full Text Available Wireless control systems for industrial automation have been gaining popularity in recent years thanks to their ease of deployment and the low cost of their components. However, traditional low-sample-rate industrial wireless sensor networks cannot support high-speed applications, while high-speed IEEE 802.11 networks are not designed for real-time applications and are not able to provide deterministic features. Thus, in this paper, we propose Det-WiFi, a real-time TDMA MAC implementation for high-speed multihop industrial applications. It is able to support high-speed applications and provide deterministic network features, since it combines the advantages of a high-speed IEEE 802.11 physical layer and a software Time Division Multiple Access (TDMA based MAC layer. We implement Det-WiFi on commercial off-the-shelf hardware and compare the deterministic performance of 802.11s and Det-WiFi in a real industrial environment, which is full of field devices and industrial equipment. We varied the hop number and the packet payload size in each experiment, and all of the results show that Det-WiFi has better deterministic performance.

  1. A combined deterministic and probabilistic procedure for safety assessment of components with cracks - Handbook.

    Energy Technology Data Exchange (ETDEWEB)

    Dillstroem, Peter; Bergman, Mats; Brickstad, Bjoern; Weilin Zang; Sattari-Far, Iradj; Andersson, Peder; Sund, Goeran; Dahlberg, Lars; Nilsson, Fred (Inspecta Technology AB, Stockholm (Sweden))

    2008-07-01

    SSM has supported research work for the further development of a previously developed procedure/handbook (SKI Report 99:49) for the assessment of detected cracks and defect tolerance analysis. During operative use of the handbook, needs were identified to update the deterministic part of the procedure and to introduce a new probabilistic flaw evaluation procedure. Another identified need was a better description of the theoretical basis of the computer program. The principal aim of the project has been to update the deterministic part of the recently developed procedure and to introduce a new probabilistic flaw evaluation procedure. Other objectives of the project have been to validate the conservatism of the procedure, to make the procedure well defined and easy to use, and to make the handbook that documents the procedure as complete as possible. The procedure/handbook and the computer program ProSACC, Probabilistic Safety Assessment of Components with Cracks, have been extensively revised within this project. The major differences compared to the last revision lie in the following areas: It is now possible to deal with a combination of deterministic and probabilistic data. It is possible to include J-controlled stable crack growth. The appendices on material data to be used for nuclear applications and on residual stresses are revised. A new deterministic safety evaluation system is included. The conservatism in the method for evaluation of the secondary stresses for ductile materials is reduced. A new geometry, a circular bar with a circumferential surface crack, has been introduced. The results of this project will be of use to SSM in safety assessments of components with cracks and in assessments of the interval between inspections of components in nuclear power plants.
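
    The combination of deterministic and probabilistic data mentioned above can be illustrated with a toy Monte Carlo flaw evaluation: a deterministic fracture criterion (stress intensity versus toughness) is checked against sampled crack depths and toughness values. This is only a sketch; the stress-intensity model, the distributions, and every number below are hypothetical and are not taken from the handbook or from ProSACC.

```python
# Toy Monte Carlo flaw evaluation: deterministic criterion K_I < K_Ic
# evaluated over random crack depths and toughness values.
# All models and numbers are illustrative placeholders.
import math
import random

def stress_intensity(sigma_mpa, a_mm):
    """Simplified K_I = sigma * sqrt(pi * a) for a surface crack (MPa*sqrt(m))."""
    return sigma_mpa * math.sqrt(math.pi * a_mm / 1000.0)

def failure_probability(sigma_mpa=300.0, n=20_000, seed=1):
    """Fraction of sampled (crack depth, toughness) pairs violating the
    deterministic acceptance criterion K_I < K_Ic."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        a_mm = rng.lognormvariate(math.log(10.0), 0.5)  # crack depth, mm
        k_ic = rng.gauss(50.0, 5.0)                     # toughness, MPa*sqrt(m)
        if stress_intensity(sigma_mpa, a_mm) >= k_ic:
            failures += 1
    return failures / n

p = failure_probability()
print(f"estimated failure probability: {p:.3f}")
```

The deterministic handbook check corresponds to evaluating the criterion once at conservative inputs; the probabilistic variant integrates it over the input distributions.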

  2. Deterministic and heuristic models of forecasting spare parts demand

    Directory of Open Access Journals (Sweden)

    Ivan S. Milojević

    2012-04-01

    Full Text Available Knowing the demand for spare parts is the basis for successful spare parts inventory management. Inventory management has two aspects. The first is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or are not encompassed by the models. The second aspect is the optimization of the model parameters by means of inventory management. Supply item demand (asset demand) expresses customers' needs, in units, at the desired time, and it is one of the most important parameters in inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through requisitions or requests. Depending on the conditions under which inventory management is considered, demand can be: deterministic or stochastic; stationary or nonstationary; continuous or discrete; satisfied or unsatisfied. The applicable maintenance concept is determined by the technological level of development of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance could be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to vehicles that only have indicators of engine temperature - those that react only when the engine is overheated. This means that the maintenance concepts that can be applied are traditional preventive maintenance and corrective maintenance. In order to be applied to a real system, modeling and simulation methods require a completely regulated system, which is not the case with this spare parts supply system. Therefore, this method, which would also enable model development, cannot be applied. Deterministic models of forecasting are almost exclusively related to the concept of preventive maintenance: maintenance procedures are planned in advance, in accordance with exploitation and time resources. Since the timing
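
    Under preventive maintenance, deterministic demand follows directly from the replacement schedule: if every asset replaces a part at a fixed interval, demand over a planning horizon is a simple count. The sketch below illustrates this; fleet size, horizon, and interval are invented figures, not data from the article.

```python
# Deterministic spare-parts demand under a fixed preventive-maintenance
# schedule. All figures are illustrative.

def deterministic_demand(fleet_size, horizon_hours, replacement_interval_hours):
    """Units needed when every asset replaces the part once per interval."""
    replacements_per_unit = horizon_hours // replacement_interval_hours
    return fleet_size * replacements_per_unit

# 40 vehicles, 2,000 operating hours planned, filter changed every 250 hours:
print(deterministic_demand(40, 2000, 250))  # 320
```

A stochastic (corrective-maintenance) forecast would instead sample failure times from a lifetime distribution, which is why the two maintenance concepts call for different forecasting models.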

  3. Quantifying diffusion MRI tractography of the corticospinal tract in brain tumors with deterministic and probabilistic methods.

    Science.gov (United States)

    Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S; Henry, Roland G

    2013-01-01

    Probabilistic q-ball fiber tracking showed the highest sensitivity (79%) as determined from cortical IES, compared to deterministic q-ball (50%), probabilistic DTI (36%), and deterministic DTI (10%). The sensitivity using the q-ball algorithm (65%) was significantly higher than using DTI (23%), and probabilistic algorithms (58%) were more sensitive than deterministic approaches (30%) (p = 0.003). Probabilistic q-ball fiber tracks had the smallest offset to the subcortical stimulation sites. The offsets between diffusion fiber tracks and subcortical IES sites increased significantly in those cases where the diffusion fiber tracks were visibly thinner than expected. There was perfect concordance between the subcortical IES function (e.g. hand stimulation) and the cortical connection of the nearest diffusion fiber track (e.g. upper extremity cortex). This study highlights the tremendous utility of intraoperative stimulation sites in providing a gold standard against which to evaluate diffusion MRI fiber tracking methods, and it has provided an objective standard for the evaluation of different diffusion models and approaches to fiber tracking. Probabilistic q-ball fiber tractography was significantly better than DTI methods in terms of sensitivity and accuracy of the course through the white matter. The commonly used DTI fiber tracking approach was shown to have very poor sensitivity (as low as 10% for deterministic DTI fiber tracking) for delineation of the lateral aspects of the corticospinal tract in our study. Effects of the tumor/edema resulted in significantly larger offsets between the subcortical IES and the preoperative fiber tracks. The provided data show that probabilistic HARDI tractography is the most objective and reproducible analysis, but given the small sample and number of stimulation points, generalizations from our results should be made with caution. Indeed, our results inform the capabilities of preoperative diffusion fiber tracking and indicate that such data should be used carefully when making pre-surgical and

  4. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    Science.gov (United States)

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used to compute flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied to the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate, and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
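
    The random-walk model referred to above can be sketched in a few lines: tracer particles are advected by the parabolic Poiseuille velocity profile of a cylindrical channel and take Gaussian diffusive steps, with reflection at the wall. The parameters below (diffusion coefficient, channel radius, mean velocity) are illustrative, not the paper's experimental values.

```python
# Random-walk sketch of dispersion in a cylindrical channel under
# Poiseuille flow. Parameters are illustrative placeholders.
import math
import random

def simulate_axial_spread(d_coeff=1e-9, radius=2.5e-4, mean_u=1e-2,
                          t_end=1.0, dt=1e-3, n=500, seed=7):
    rng = random.Random(seed)
    step = math.sqrt(2 * d_coeff * dt)  # rms diffusive step per axis
    zs = []
    for _ in range(n):
        x = y = z = 0.0                  # start on the channel axis
        for _ in range(int(t_end / dt)):
            r2 = x * x + y * y
            u = 2 * mean_u * (1 - r2 / radius**2)  # Poiseuille profile
            z += u * dt + rng.gauss(0, step)        # advection + axial diffusion
            x += rng.gauss(0, step)                 # radial diffusion
            y += rng.gauss(0, step)
            r = math.hypot(x, y)
            if r > radius:                          # reflect at the wall
                x, y = x * (2 * radius - r) / r, y * (2 * radius - r) / r
        zs.append(z)
    mean = sum(zs) / n
    var = sum((v - mean) ** 2 for v in zs) / n
    return mean, var

mean_z, var_z = simulate_axial_spread()
print(mean_z, var_z)
```

Fitting the simulated response to a measured FIA peak (e.g. with a genetic algorithm over the diffusion coefficient, as in the paper) then yields the best-fit D.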

  5. Response-surface models for deterministic effects of localized irradiation of the skin by discrete {beta}/{gamma} -emitting sources

    Energy Technology Data Exchange (ETDEWEB)

    Scott, B.R.

    1995-12-01

    Individuals who work at nuclear reactor facilities can be at risk for deterministic effects in the skin from exposure to discrete β- and γ-emitting (βγE) sources (e.g., βγE hot particles) on the skin or clothing. Deterministic effects are non-cancer effects that have a threshold and increase in severity as dose increases (e.g., ulcer in skin). Hot βγE particles are 60Co- or nuclear fuel-derived particles with diameters > 10 μm and < 3 mm that contain at least 3.7 kBq (0.1 μCi) of radioactivity. For such βγE sources on the skin, it is the beta component of the dose that is most important. To develop exposure limitation systems that adequately control the exposure of workers to discrete βγE sources, models are needed for evaluating the risk of deterministic effects of localized β irradiation of the skin. The purpose of this study was to develop dose-rate- and irradiated-area-dependent response-surface models for evaluating risks of significant deterministic effects of localized irradiation of the skin by discrete βγE sources, and to use the modeling results to recommend approaches to limiting occupational exposure to such sources. The significance of the research results is as follows: (1) response-surface models are now available for evaluating the risk of specific deterministic effects of localized irradiation of the skin; (2) modeling results have been used to recommend approaches to limiting occupational exposure of workers to β radiation from βγE sources on the skin or on clothing; and (3) the generic irradiated-volume, weighting-factor approach to limiting exposure can be applied to other organs, including the eye, the ear, and organs of the respiratory or gastrointestinal tract, and can be used for both deterministic and stochastic effects.

  6. Response-surface models for deterministic effects of localized irradiation of the skin by discrete β/γ -emitting sources

    International Nuclear Information System (INIS)

    Scott, B.R.

    1995-01-01

    Individuals who work at nuclear reactor facilities can be at risk for deterministic effects in the skin from exposure to discrete β- and γ-emitting (βγE) sources (e.g., βγE hot particles) on the skin or clothing. Deterministic effects are non-cancer effects that have a threshold and increase in severity as dose increases (e.g., ulcer in skin). Hot βγE particles are 60Co- or nuclear fuel-derived particles with diameters > 10 μm and < 3 mm that contain at least 3.7 kBq (0.1 μCi) of radioactivity. For such βγE sources on the skin, it is the beta component of the dose that is most important. To develop exposure limitation systems that adequately control the exposure of workers to discrete βγE sources, models are needed for evaluating the risk of deterministic effects of localized β irradiation of the skin. The purpose of this study was to develop dose-rate- and irradiated-area-dependent response-surface models for evaluating risks of significant deterministic effects of localized irradiation of the skin by discrete βγE sources, and to use the modeling results to recommend approaches to limiting occupational exposure to such sources. The significance of the research results is as follows: (1) response-surface models are now available for evaluating the risk of specific deterministic effects of localized irradiation of the skin; (2) modeling results have been used to recommend approaches to limiting occupational exposure of workers to β radiation from βγE sources on the skin or on clothing; and (3) the generic irradiated-volume, weighting-factor approach to limiting exposure can be applied to other organs, including the eye, the ear, and organs of the respiratory or gastrointestinal tract, and can be used for both deterministic and stochastic effects.

  7. Null-polygonal minimal surfaces in AdS4 from perturbed W minimal models

    International Nuclear Information System (INIS)

    Hatsuda, Yasuyuki; Ito, Katsushi; Satoh, Yuji

    2012-11-01

    We study the null-polygonal minimal surfaces in AdS4, which correspond to the gluon scattering amplitudes/Wilson loops in N=4 super Yang-Mills theory at strong coupling. The area of the minimal surfaces with n cusps is characterized by the thermodynamic Bethe ansatz (TBA) integral equations or the Y-system of the homogeneous sine-Gordon model, which is regarded as the SU(n-4)_4/U(1)^(n-5) generalized parafermion theory perturbed by the weight-zero adjoint operators. Based on the relation to the TBA systems of the perturbed W minimal models, we solve the TBA equations by using conformal perturbation theory, and obtain the analytic expansion of the remainder function around the UV/regular-polygonal limit for n = 6 and 7. We compare the rescaled remainder function for n = 6 with the two-loop one, and observe that they are close to each other, similarly to the AdS3 case.

  8. Field-free deterministic ultrafast creation of magnetic skyrmions by spin-orbit torques

    Science.gov (United States)

    Büttner, Felix; Lemesh, Ivan; Schneider, Michael; Pfau, Bastian; Günther, Christian M.; Hessing, Piet; Geilhufe, Jan; Caretta, Lucas; Engel, Dieter; Krüger, Benjamin; Viefhaus, Jens; Eisebitt, Stefan; Beach, Geoffrey S. D.

    2017-11-01

    Magnetic skyrmions are stabilized by a combination of external magnetic fields, stray field energies, higher-order exchange interactions and the Dzyaloshinskii-Moriya interaction (DMI). The last favours homochiral skyrmions, whose motion is driven by spin-orbit torques and is deterministic, which makes systems with a large DMI relevant for applications. Asymmetric multilayers of non-magnetic heavy metals with strong spin-orbit interactions and transition-metal ferromagnetic layers provide a large and tunable DMI. Also, the non-magnetic heavy metal layer can inject a vertical spin current with transverse spin polarization into the ferromagnetic layer via the spin Hall effect. This leads to torques that can be used to switch the magnetization completely in out-of-plane magnetized ferromagnetic elements, but the switching is deterministic only in the presence of a symmetry-breaking in-plane field. Although spin-orbit torques led to domain nucleation in continuous films and to stochastic nucleation of skyrmions in magnetic tracks, no practical means to create individual skyrmions controllably in an integrated device design at a selected position has been reported yet. Here we demonstrate that sub-nanosecond spin-orbit torque pulses can generate single skyrmions at custom-defined positions in a magnetic racetrack deterministically using the same current path as used for the shifting operation. The effect of the DMI implies that no external in-plane magnetic fields are needed for this aim. This implementation exploits a defect, such as a constriction in the magnetic track, that can serve as a skyrmion generator. The concept is applicable to any track geometry, including three-dimensional designs.

  9. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    Directory of Open Access Journals (Sweden)

    Scott Ferrenberg

    2016-10-01

    Full Text Available Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and the relative influences of deterministic and stochastic processes in the assembly of aboveground (surface- and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the

  10. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    Science.gov (United States)

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and the relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod
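
    The null-deviation logic used in these studies can be sketched simply: observed dissimilarity between two samples is compared with the dissimilarity expected when individuals are reassigned at random from the pooled community, and a positive deviation suggests deterministic structuring. The data, species labels, and dissimilarity choice below are invented for illustration only.

```python
# Null-deviation sketch: observed vs. randomized pairwise dissimilarity.
# Data and species labels are invented for illustration.
import random

def jaccard_dissimilarity(a, b):
    """1 - |A intersect B| / |A union B| on the species sets of two samples."""
    sa, sb = set(a), set(b)
    return 1 - len(sa & sb) / len(sa | sb)

def null_deviation(sample1, sample2, n_null=1000, seed=3):
    """Observed dissimilarity minus the mean dissimilarity of null
    communities drawn by shuffling the pooled individuals."""
    rng = random.Random(seed)
    observed = jaccard_dissimilarity(sample1, sample2)
    pool = list(sample1) + list(sample2)
    nulls = []
    for _ in range(n_null):
        rng.shuffle(pool)
        nulls.append(jaccard_dissimilarity(pool[:len(sample1)],
                                           pool[len(sample1):]))
    expected = sum(nulls) / n_null
    return observed - expected  # > 0 hints at deterministic structuring

above = ["ant_a", "beetle_b", "spider_c", "beetle_d"]
below = ["mite_e", "collembola_f", "beetle_d", "mite_g"]
print(null_deviation(above, below))
```

Deviations near zero indicate turnover indistinguishable from random (stochastic) assembly, which is the contrast the aboveground/belowground comparison relies on.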

  11. Using the deterministic factor systems in the analysis of return on ...

    African Journals Online (AJOL)

    Using the deterministic factor systems in the analysis of return on equity. ... or equal the profitability of bank deposits, the business of the organization is not efficient. ... Application of quantitative and qualitative indicators in the analysis allows to ...

  12. Performance for the hybrid method using stochastic and deterministic searching for shape optimization of electromagnetic devices

    International Nuclear Information System (INIS)

    Yokose, Yoshio; Noguchi, So; Yamashita, Hideo

    2002-01-01

    Stochastic methods and deterministic methods are both used for the shape optimization of electromagnetic devices. Genetic Algorithms (GAs) are a stochastic method suited to multivariable designs, while the deterministic method considered here is the gradient method, which uses the sensitivity of the objective function. These two techniques have complementary benefits and faults. In this paper, the characteristics of those techniques are described, and a hybrid technique in which the two methods are used together is evaluated. The results of the comparison are then presented by applying each method to electromagnetic devices. (Author)
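
    The hybrid idea evaluated above can be sketched on a toy problem: a small genetic algorithm performs the global (stochastic) search, and a numerical gradient step (deterministic) refines the best candidate. The objective below is an invented multimodal function, not an electromagnetic device model, and all GA settings are illustrative.

```python
# Hybrid stochastic/deterministic optimization sketch:
# GA for global search, gradient descent for local refinement.
import math
import random

def objective(x):
    """Toy multimodal objective; global minimum at x = 0."""
    return x * x + 2.0 * (1.0 - math.cos(3.0 * x))

def genetic_search(pop_size=30, generations=40, seed=11):
    """Stochastic search: truncation selection, blend crossover, mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]
        children = [(rng.choice(parents) + rng.choice(parents)) / 2.0
                    + rng.gauss(0.0, 0.5)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=objective)

def gradient_refine(x, lr=0.05, steps=200, h=1e-6):
    """Deterministic refinement via central-difference gradient descent."""
    for _ in range(steps):
        grad = (objective(x + h) - objective(x - h)) / (2.0 * h)
        x -= lr * grad
    return x

x_ga = genetic_search()
x_opt = gradient_refine(x_ga)
print(x_ga, x_opt)
```

The GA is robust to local minima but converges slowly near the optimum; the gradient step is the reverse, which is why the combination is attractive.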

  13. Using MCBEND for neutron or gamma-ray deterministic calculations

    Directory of Open Access Journals (Sweden)

    Geoff Dobson

    2017-01-01

    Full Text Available MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with splitting/Russian roulette. MCBEND has a well-established automated tool to generate this importance map, commonly referred to as the MAGIC module, which uses a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and it can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for using the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and the new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.

  14. Application of deterministic and probabilistic methods in replacement of nuclear systems

    International Nuclear Information System (INIS)

    Vianna Filho, Alfredo Marques

    2007-01-01

    The economic equipment replacement problem is one of the oldest questions in Production Engineering. On the one hand, new equipment is more attractive given its better performance, better reliability, lower maintenance cost, etc.; new equipment, however, requires a higher initial investment, and thus a higher opportunity cost, and imposes special training of the labor force. On the other hand, old equipment presents the opposite trade-off: lower performance, lower reliability, and especially higher maintenance costs, but lower financial, insurance, and opportunity costs. The weighting of all these costs can be made with the various methods presented. The aim of this paper is to discuss deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problems are examined: substitution imposed by wear and substitution imposed by failures. To solve the problem of nuclear system substitution imposed by wear, deterministic methods are discussed; to solve the problem of nuclear system substitution imposed by failures, probabilistic methods are discussed. (author)
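
    A classic deterministic treatment of the wear-driven case weighs the purchase cost against maintenance costs that grow with age, and picks the replacement age minimizing average cost per year. The sketch below shows this textbook calculation; the cost figures are invented and are not from the paper.

```python
# Deterministic economic-life sketch: minimize average cost per year.
# Purchase price and maintenance costs are illustrative figures.

def best_replacement_age(purchase_cost, annual_maintenance):
    """annual_maintenance[i] is the maintenance cost in year i+1.
    Returns the age minimizing (purchase + cumulative maintenance) / age."""
    best_age, best_rate = None, float("inf")
    total_maint = 0.0
    for age, m in enumerate(annual_maintenance, start=1):
        total_maint += m
        rate = (purchase_cost + total_maint) / age
        if rate < best_rate:
            best_age, best_rate = age, rate
    return best_age, best_rate

age, rate = best_replacement_age(10000, [500, 800, 1300, 2100, 3400, 5500])
print(age, rate)  # 5 3620.0
```

The failure-driven case replaces the deterministic cost stream with expected costs under a lifetime distribution, which is where the probabilistic methods discussed in the paper come in.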

  15. Implicational Scaling of Reading Comprehension Construct: Is it Deterministic or Probabilistic?

    Directory of Open Access Journals (Sweden)

    Parisa Daftarifard

    2016-05-01

    In English as a Second Language teaching and testing situations, it is common to make inferences about a learner's reading ability based on his or her total score on a reading test. This assumes the unidimensional and reproducible nature of reading items. However, few studies have been conducted to probe the issue through psychometric analyses. In the present study, the IELTS exemplar module C (1994) was administered to 503 Iranian students of various reading comprehension ability levels. Both deterministic and probabilistic psychometric models of unidimensionality were employed to examine the plausible existence of implicational scaling among the items in this reading test. Based on the results, it was concluded that the reading data in this study did not form a deterministic unidimensional scale (Guttman scaling); rather, they revealed a probabilistic one (Rasch model). As the person map of the measures failed to show a meaningful hierarchical order for the items, these results call into question the assumption of implicational scaling that is normally invoked in scoring reading items.
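
    The deterministic (Guttman) check can be illustrated directly: with items ordered from easiest to hardest, each respondent's pattern is compared with the ideal cumulative pattern implied by their total score, and the coefficient of reproducibility counts how often the two agree. The response matrix below is invented for illustration.

```python
# Guttman coefficient of reproducibility on an invented 0/1 response matrix.

def reproducibility(responses):
    """1 - errors / total responses. Each row is one respondent's answers,
    columns ordered from easiest to hardest item."""
    errors = total = 0
    for row in responses:
        score = sum(row)
        ideal = [1] * score + [0] * (len(row) - score)  # perfect Guttman pattern
        errors += sum(a != b for a, b in zip(row, ideal))
        total += len(row)
    return 1 - errors / total

data = [
    [1, 1, 1, 0],  # perfect cumulative pattern
    [1, 1, 0, 0],  # perfect cumulative pattern
    [1, 0, 1, 0],  # deviates from the ideal pattern for its score
    [1, 0, 0, 0],  # perfect cumulative pattern
]
print(reproducibility(data))  # 0.875
```

A deterministic scale demands reproducibility near 1; a probabilistic model such as Rasch instead tolerates such deviations and models them as response probabilities.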

  16. Optimal power flow: a bibliographic survey I. Formulations and deterministic methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [University of Jyvaskyla, Department of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we review both the classical and the recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey (this article) provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  17. One-dimensional Gromov minimal filling problem

    International Nuclear Information System (INIS)

    Ivanov, Alexandr O; Tuzhilin, Alexey A

    2012-01-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  18. Inferring hierarchical clustering structures by deterministic annealing

    International Nuclear Information System (INIS)

    Hofmann, T.; Buhmann, J.M.

    1996-01-01

    The unsupervised detection of hierarchical structures is a major topic in unsupervised learning and one of the key questions in data analysis and representation. We propose a novel algorithm for the problem of learning decision trees for data clustering and related problems. In contrast to many other methods based on successive tree growing and pruning, we propose an objective function for tree evaluation and we derive a non-greedy technique for tree growing. Applying the principles of maximum entropy and minimum cross entropy, a deterministic annealing algorithm is derived in a mean-field approximation. This technique allows us to canonically superimpose tree structures and to fit parameters to averaged or 'fuzzified' trees.
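
    The core deterministic-annealing mechanism can be sketched on flat (non-hierarchical) clustering: soft mean-field assignments at temperature T are iterated to a fixed point, then T is lowered so the assignments gradually harden. This is a generic illustration, not the paper's tree-structured algorithm; the data and the annealing schedule are invented, and the starting temperature is chosen below the critical temperature of this toy data set so the two centers stay distinct.

```python
# Generic deterministic-annealing clustering sketch (mean-field updates
# with a geometric cooling schedule). Data and schedule are illustrative.
import math

def anneal_clusters(data, t_start=4.0, t_end=0.01, cooling=0.8):
    centers = [min(data), max(data)]  # crude two-center initialization
    t = t_start
    while t > t_end:
        for _ in range(20):  # mean-field iterations at this temperature
            weights = []
            for x in data:
                e = [math.exp(-((x - c) ** 2) / t) for c in centers]
                z = sum(e)
                weights.append([v / z for v in e])  # soft assignments
            centers = [
                sum(w[j] * x for w, x in zip(weights, data))
                / sum(w[j] for w in weights)
                for j in range(len(centers))
            ]
        t *= cooling  # cool down: assignments harden gradually
    return sorted(centers)

data = [0.9, 1.1, 1.0, 4.8, 5.2, 5.0]
print(anneal_clusters(data))
```

Because the updates are deterministic fixed-point iterations rather than random moves, the same data always yields the same clustering, which is the appeal of deterministic over stochastic annealing.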

  19. Deterministic calculations of radiation doses from brachytherapy seeds

    International Nuclear Information System (INIS)

    Reis, Sergio Carneiro dos; Vasconcelos, Vanderley de; Santos, Ana Maria Matildes dos

    2009-01-01

    Brachytherapy is used for treating certain types of cancer by inserting radioactive sources into tumours. CDTN/CNEN is developing brachytherapy seeds to be used mainly in prostate cancer treatment. Dose calculations play a very significant role in the characterization of the developed seeds. The current state of the art in computational dosimetry relies on Monte Carlo methods using, for instance, MCNP codes. However, deterministic calculations have some advantages, such as the short computer time needed to find solutions. This paper presents software developed to calculate doses in a two-dimensional space surrounding the seed, using a deterministic algorithm. The analysed seeds consist of capsules similar to the commercially available IMC6711 (OncoSeed). The exposure rates and absorbed doses are computed using the Sievert integral and the Meisberger third-order polynomial, respectively. The software also allows visualization of isodoses in the surface plane. The user can choose between four different radionuclides (192Ir, 198Au, 137Cs and 60Co) and must also enter as input data: the exposure rate constant; the source activity; the active length of the source; the number of segments into which the source will be divided; the total source length; the source diameter; and the actual and effective source thickness. The computed results were benchmarked against results from the literature, and the developed software will be used to support the characterization of the source being developed at CDTN. The software was implemented using Borland Delphi in a Windows environment and is an alternative to Monte Carlo based codes. (author)
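
    The deterministic dose model described above combines inverse-square fall-off with a third-order polynomial tissue attenuation/scatter correction of the Meisberger form P(r) = A + Br + Cr² + Dr³. The sketch below shows the shape of that calculation only; the coefficients are placeholders, not the published Meisberger values for any nuclide, and the result is a relative dose rate, not a clinical quantity.

```python
# Point-source dose-rate sketch: inverse-square law times a Meisberger-form
# third-order polynomial. Coefficients are hypothetical placeholders.

def meisberger_poly(r_cm, a=1.0, b=1e-3, c=-1e-3, d=-1e-5):
    """Tissue attenuation/scatter correction P(r) = A + B*r + C*r^2 + D*r^3."""
    return a + b * r_cm + c * r_cm**2 + d * r_cm**3

def dose_rate(activity_factor, r_cm):
    """Relative dose rate at distance r: inverse square times correction."""
    return activity_factor * meisberger_poly(r_cm) / r_cm**2

print(dose_rate(1.0, 1.0))  # reference point at 1 cm
print(dose_rate(1.0, 2.0))  # falls off roughly as 1/r^2
```

A line source would instead integrate this point kernel over the active length (the Sievert integral), segment by segment, as the described software does.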

  20. Minimally Invasive Surgery in Thymic Malignances

    Directory of Open Access Journals (Sweden)

    Wentao FANG

    2018-04-01

    Full Text Available Surgery is the most important therapy for thymic malignancies. The last decade has seen increasing adoption of minimally invasive surgery (MIS) for thymectomy. MIS for early-stage thymoma patients has been shown to yield similar oncological results while minimizing surgical trauma, improving postoperative recovery, and reducing incisional pain. Meanwhile, with advances in surgical techniques, patients with locally advanced thymic tumors, preoperative induction therapies, or recurrent disease may also benefit from MIS in selected cases.

  1. A note on limited pushdown alphabets in stateless deterministic pushdown automata

    Czech Academy of Sciences Publication Activity Database

    Masopust, Tomáš

    2013-01-01

    Vol. 24, No. 3 (2013), pp. 319-328. ISSN 0129-0541. R&D Projects: GA ČR(CZ) GPP202/11/P028. Institutional support: RVO:67985840. Keywords: deterministic pushdown automata; stateless pushdown automata; realtime pushdown automata. Subject RIV: BA - General Mathematics. Impact factor: 0.326, year: 2013. http://www.worldscientific.com/doi/abs/10.1142/S0129054113500068

  2. Deterministic Chaos in Radon Time Variation

    International Nuclear Information System (INIS)

    Planinic, J.; Vukovic, B.; Radolic, V.; Faj, Z.; Stanic, D.

    2003-01-01

    Radon concentrations were continuously measured outdoors, in a living room and in a basement at 10-minute intervals for a month. The radon time series were analyzed by comparing algorithms to extract phase-space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal measures, such as the Hurst exponent (H) from rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, while the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos appearing in the radon time variations. The calculated fractal dimensions of the attractors indicated that additional (meteorological) parameters influence radon in the atmosphere. (author)
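The rescaled range (R/S) analysis used above to obtain the Hurst exponent can be sketched as follows. This is a generic textbook implementation under assumed conventions, not the authors' code; it estimates H as the slope of log(R/S) against log(window size) over dyadic windows.

```python
import numpy as np

def hurst_rs(x, min_n=8):
    """Estimate the Hurst exponent of series x by rescaled-range (R/S)
    analysis: average R/S over non-overlapping windows of size n, then fit
    the slope of log(R/S) versus log(n) over dyadic window sizes."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ns, rs = [], []
    n = min_n
    while n <= N // 2:
        m = N // n
        vals = []
        for seg in x[:m * n].reshape(m, n):
            z = np.cumsum(seg - seg.mean())   # cumulative deviations
            R = z.max() - z.min()             # range of the deviations
            S = seg.std()                     # standard deviation
            if S > 0:
                vals.append(R / S)
        ns.append(n)
        rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
h_noise = hurst_rs(rng.standard_normal(4096))            # white noise: H near 0.5
h_walk = hurst_rs(np.cumsum(rng.standard_normal(4096)))  # random walk: H near 1
```

For a series like the radon measurements, an estimate H < 0.5 would indicate the anti-persistent behavior reported in the abstract (small-sample bias makes short-series estimates drift slightly above their asymptotic values, so bounds should be read loosely).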

  3. Radon time variations and deterministic chaos

    Energy Technology Data Exchange (ETDEWEB)

    Planinic, J. E-mail: planinic@pedos.hr; Vukovic, B.; Radolic, V.

    2004-07-01

    Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal measures, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, but the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos that appeared due to radon time variations. The calculated fractal dimensions of the attractors indicated that additional (meteorological) parameters influence radon in the atmosphere.

  4. Radon time variations and deterministic chaos

    International Nuclear Information System (INIS)

    Planinic, J.; Vukovic, B.; Radolic, V.

    2004-01-01

    Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal measures, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, but the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos that appeared due to radon time variations. The calculated fractal dimensions of the attractors indicated that additional (meteorological) parameters influence radon in the atmosphere.

  5. Deterministic sensitivity analysis for the numerical simulation of contaminants transport; Analyse de sensibilite deterministe pour la simulation numerique du transfert de contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Marchand, E

    2007-12-15

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
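The deterministic approach described above, based on the singular value decomposition of the model derivative, can be illustrated on a toy model: build the Jacobian of the outputs with respect to the parameters by finite differences, then read the influential parameter combinations off the singular vectors. The model below is a hypothetical linear stand-in, not the Darcy-flow/transport code of the thesis.

```python
import numpy as np

def jacobian_fd(f, p, eps=1e-6):
    """Forward finite-difference Jacobian of model outputs f(p)
    with respect to the parameter vector p."""
    p = np.asarray(p, dtype=float)
    f0 = np.asarray(f(p))
    J = np.zeros((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (np.asarray(f(p + dp)) - f0) / eps
    return J

# Toy stand-in for a transport model: two output fluxes that depend strongly
# on p[0], weakly on p[1], and not at all on p[2].
def model(p):
    return np.array([10.0 * p[0] + 0.1 * p[1],
                     5.0 * p[0] - 0.2 * p[1]])

J = jacobian_fd(model, [1.0, 1.0, 1.0])
U, s, Vt = np.linalg.svd(J, full_matrices=False)
# The leading right singular vector is the most influential parameter
# combination; near-zero singular values flag parameter directions the
# outputs cannot "see" locally.
```

Compared with a Monte Carlo sensitivity study, this costs only one model evaluation per parameter (plus the baseline), which is the trade-off the abstract highlights: local information, but far less computing time.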

  6. On Time with Minimal Expected Cost!

    DEFF Research Database (Denmark)

    David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand

    2014-01-01

    (Priced) timed games are two-player quantitative games involving an environment assumed to be completely antagonistic. Classical analysis consists in the synthesis of strategies ensuring safety, time-bounded or cost-bounded reachability objectives. Assuming a randomized environment, the (priced) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that will both ensure worst-case time bounds and provide (near-)minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, which exhibit several orders of magnitude improvements w...
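The "minimal expected reachability cost" objective mentioned above can be made concrete with value iteration on a small finite Markov decision process. This is a generic illustration of the objective, not the Uppaal-Tiga synthesis algorithm; the states, costs, and probabilities are invented for the example.

```python
def expected_cost_vi(states, actions, goal, iters=500):
    """Value iteration for the minimal expected cost-to-goal in a finite MDP.
    actions[s] is a list of (cost, {next_state: prob}) pairs available in s."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s == goal:
                continue  # the goal costs nothing to reach from itself
            V[s] = min(cost + sum(p * V[t] for t, p in trans.items())
                       for cost, trans in actions[s])
    return V

# Tiny example: a cheap but unreliable "fast" action that may fail back to
# the start, versus a costlier but certain "slow" action.
acts = {'s': [(1.0, {'goal': 0.5, 's': 0.5}),   # fast: cost 1, succeeds half the time
              (3.0, {'goal': 1.0})]}            # slow: cost 3, always succeeds
V = expected_cost_vi(['s', 'goal'], acts, 'goal')
```

Here the fast action is optimal in expectation (V['s'] = 1 + 0.5·V['s'], so V['s'] = 2 < 3), yet it has unbounded worst-case behaviour; the paper's contribution is precisely to combine such expected-cost optimality with hard worst-case time bounds.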

  7. Minimal abdominal incisions

    Directory of Open Access Journals (Sweden)

    João Carlos Magi

    2017-04-01

    Full Text Available Minimally invasive procedures aim to resolve the disease with minimal trauma to the body, resulting in a rapid return to activities and in reductions of infection, complications, costs and pain. Minimally incised laparotomy, sometimes referred to as minilaparotomy, is an example of such minimally invasive procedures. The aim of this study is to demonstrate the feasibility and utility of laparotomy with minimal incision based on the literature, exemplified with a case in which this incision was used to reconstruct the intestinal transit: a young, HIV-positive male patient in the late postoperative period after ileotiflectomy, terminal ileostomy and closure of the ascending colon for an acute perforated abdomen due to ileocolonic tuberculosis. The barium enema showed a proximal stump of the right colon near the ileostomy. Access to the cavity was gained through the orifice resulting from the release of the stoma, with a side-to-side ileocolonic anastomosis performed with a 25 mm circular stapler and manual closure of the ileal stump. These surgeries require their own tactics, such as rigor in the lysis of adhesions, tissue traction and hemostasis, and demand surgical dexterity, but without the need for investments in technology; moreover, the learning curve is reported as being shorter than that for videolaparoscopy. Laparotomy with minimal incision should be considered a valid and viable option in the treatment of surgical conditions.

  8. A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-12-12

    Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
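The self-shielding effect described above can be demonstrated with a toy flux-weighted group average. The resonance shape and magnitudes below are invented for illustration: at infinite dilution the flux is flat, while in the narrow-resonance limit the flux dips roughly as 1/σ exactly where the cross section peaks, which strongly suppresses the group-averaged value a multigroup deterministic solver would otherwise use.

```python
import numpy as np

def group_average(sigma, flux, E):
    """Flux-weighted group-average cross section:
    integral(sigma*flux dE) / integral(flux dE), trapezoid rule."""
    w = 0.5 * np.diff(E)
    sf = sigma * flux
    return np.sum(w * (sf[1:] + sf[:-1])) / np.sum(w * (flux[1:] + flux[:-1]))

# Toy resonance: a Lorentzian peak three orders of magnitude above a flat
# background (illustrative numbers, not evaluated nuclear data).
E = np.linspace(1.0, 10.0, 20001)                       # energy grid (eV)
sigma = 1.0 + 1000.0 / (1.0 + ((E - 5.0) / 0.05) ** 2)  # barns

sig_inf = group_average(sigma, np.ones_like(E), E)      # infinite dilution
sig_shielded = group_average(sigma, 1.0 / sigma, E)     # narrow-resonance flux
```

The unshielded average is dominated by the resonance peak, while the self-shielded average stays close to the background value, which is the one-to-three-orders-of-magnitude effect the abstract attributes to resonances; continuous-energy Monte Carlo codes sidestep this by never averaging at all.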

  9. Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?

    Science.gov (United States)

    Kubota, Noriaki

    2018-03-01

    The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely; these two quantities are thus observed as stochastic. In contrast, for large samples (say 1000 mL or more), the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals just at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation; accordingly, t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. These points of t and ΔT are ones at which a large number of crystals have accumulated. Consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should not be attributed to a change in nucleation mechanisms at the molecular level; it may simply be a consequence of differences in the experimental definition of t and ΔT.
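The isothermal argument above can be reproduced with a minimal Monte Carlo sketch: treating nucleation as a Poisson process with rate J·V (units and the value of J are assumed for illustration, not taken from the paper), the first-crystal detection time in a small volume is exponential and scatters widely, while detection at a fixed number density in a large volume averages many events and becomes nearly deterministic.

```python
import numpy as np

rng = np.random.default_rng(1)
J = 1.0        # nucleation rate per unit volume per unit time (assumed units)
runs = 2000    # repeated "experiments"

def detection_times(V, n_detect):
    """Time until n_detect nuclei have appeared in volume V.  Nucleation is a
    Poisson process with rate J*V, so inter-event times are exponential and
    the detection time is their sum (a gamma variate)."""
    waits = rng.exponential(1.0 / (J * V), size=(runs, n_detect))
    return waits.sum(axis=1)

# Small sample: detect the FIRST crystal -> exponential, widely scattered.
t_small = detection_times(V=1.0, n_detect=1)
# Large sample: the detector fires at a fixed number density (N/V)det, so the
# required count scales with V and the law of large numbers takes over.
t_large = detection_times(V=1000.0, n_detect=1000)

cv_small = t_small.std() / t_small.mean()   # ~1 for an exponential
cv_large = t_large.std() / t_large.mean()   # ~1/sqrt(1000), nearly deterministic
```

The coefficient of variation drops from about 1 to about 3%, with no change whatsoever in the underlying nucleation mechanism, which is exactly the paper's point.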

  10. Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.

    Science.gov (United States)

    Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker

    2009-02-21

    Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.
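The structure of a piecewise-deterministic Markov process, deterministic drift punctuated by stochastic jumps, can be sketched in a few lines. This toy model only mirrors the abstract's ingredients (a soluble precursor pool that evolves deterministically, and network-formation events fired by a state-dependent stochastic rate); the rates, the linear dependence of the event rate on the pool, and all parameter values are assumptions, not the authors' spatial model.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_pdmp(T=50.0, dt=1e-3, influx=1.0, k=0.5, consume=0.2):
    """Euler scheme for a toy piecewise-deterministic Markov process:
    between jumps the soluble pool c grows deterministically (dc/dt = influx);
    network-formation events fire stochastically at state-dependent rate k*c,
    and each event consumes `consume` units of the pool."""
    c, events, t = 0.0, 0, 0.0
    while t < T:
        c += influx * dt                  # deterministic drift of the pool
        if rng.random() < k * c * dt:     # jump with probability rate*dt
            events += 1
            c = max(c - consume, 0.0)     # the event consumes precursors
        t += dt
    return c, events

c_end, n_events = simulate_pdmp()
```

In the full model of the paper the jump locations are additionally placed in space according to the precursor distribution, and the jump type (elongation, branching, annealing) is drawn from a parameter-controlled distribution; the skeleton above shows only the drift/jump alternation that defines a PDMP.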

  11. Neutronics comparative analysis of plate-type research reactor using deterministic and stochastic methods

    International Nuclear Information System (INIS)

    Liu, Shichang; Wang, Guanbo; Wu, Gaochen; Wang, Kan

    2015-01-01

    Highlights: • DRAGON and DONJON are applied and verified in calculations of research reactors. • Continuous-energy Monte Carlo calculations by RMC are chosen as the references. • The “ECCO” option of DRAGON is suitable for the calculations of research reactors. • Manual modifications of cross-sections are not necessary with DRAGON and DONJON. • DRAGON and DONJON agree well with RMC if appropriate treatments are applied. - Abstract: Simulation of the behavior of plate-type research reactors such as JRR-3M and CARR poses a challenge for traditional neutronics calculation tools and schemes for power reactors, due to the complex geometry, high heterogeneity and large leakage of the research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON and DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic approach. The goal of this research is to examine the capability of the deterministic code system DRAGON and DONJON to reliably simulate research reactors. The results indicate that the DRAGON and DONJON code system agrees well with the continuous-energy Monte Carlo simulation on both keff and flux distributions if appropriate treatments (such as the ECCO option) are applied.

  12. On generic obstructions to recovering correct statistics from climate simulations: Homogenization for deterministic maps and multiplicative noise

    Science.gov (United States)

    Gottwald, Georg; Melbourne, Ian

    2013-04-01

    Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results of convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly, the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type in the case of maps. It is shown that the limit system of a numerical discretisation is different from that of the associated continuous time system. This has important consequences when interpreting the statistics of long time simulations of multi-scale systems - they may be very different from those of the original continuous time system which we set out to study.

  13. Safety of long-distance pipelines. Probabilistic and deterministic aspects; Sicherheit von Rohrfernleitungen. Probabilistik und Deterministik im Vergleich

    Energy Technology Data Exchange (ETDEWEB)

    Hollaender, Robert [Leipzig Univ. (Germany). Inst. fuer Infrastruktur und Ressourcenmanagement

    2013-03-15

    The Committee for Long-Distance Pipelines (Berlin, Federal Republic of Germany) reported on the relation between deterministic and probabilistic approaches in order to contribute to a better understanding of the safety management of long-distance pipelines. The respective strengths and weaknesses, as well as the deterministic and probabilistic fundamentals of safety management, are described. The comparison includes fundamental aspects, but is essentially determined by the special character of a long-distance pipeline as an infrastructure project extending through the landscape. This special feature results in special operating conditions and related responsibilities. However, the legal system does not grant long-distance pipelines the same legal position as other infrastructure facilities such as roads and railways. Thus, the question of whether and in what manner the impacts of land use in the vicinity of long-distance pipelines have to be considered is, again and again, the starting point for discussions on probabilistic and deterministic approaches.

  14. Minimally Invasive Surgery (MIS) Approaches to Thoracolumbar Trauma.

    Science.gov (United States)

    Kaye, Ian David; Passias, Peter

    2018-03-01

    Minimally invasive surgical (MIS) techniques offer promising improvements in the management of thoracolumbar trauma. Recent advances in MIS techniques and instrumentation for degenerative conditions have heralded a growing interest in employing these techniques for thoracolumbar trauma. Specifically, surgeons have applied these techniques to help manage flexion- and extension-distraction injuries, neurologically intact burst fractures, and cases of damage control. Minimally invasive surgical techniques offer a means to decrease blood loss, shorten operative time, reduce infection risk, and shorten hospital stays. Herein, we review thoracolumbar minimally invasive surgery with an emphasis on thoracolumbar trauma classification, minimally invasive spinal stabilization, surgical indications, patient outcomes, technical considerations, and potential complications.

  15. Performance Analysis of Recurrence Matrix Statistics for the Detection of Deterministic Signals in Noise

    National Research Council Canada - National Science Library

    Michalowicz, Joseph V; Nichols, Jonathan M; Bucholtz, Frank

    2008-01-01

    Understanding the limitations to detecting deterministic signals in the presence of noise, especially additive, white Gaussian noise, is of importance for the design of LPI systems and anti-LPI signal defense...

  16. On competition in a Stackelberg location-design model with deterministic supplier choice

    NARCIS (Netherlands)

    Hendrix, E.M.T.

    2016-01-01

    We study a market situation where two firms maximize market capture by deciding on the location in the plane and investing in a competing quality against investment cost. Clients choose one of the suppliers; i.e. deterministic supplier choice. To study this situation, a game theoretic model is

  17. Tsunamigenic scenarios for southern Peru and northern Chile seismic gap: Deterministic and probabilistic hybrid approach for hazard assessment

    Science.gov (United States)

    González-Carrasco, J. F.; Gonzalez, G.; Aránguiz, R.; Yanez, G. A.; Melgar, D.; Salazar, P.; Shrivastava, M. N.; Das, R.; Catalan, P. A.; Cienfuegos, R.

    2017-12-01

    The definition of plausible worst-case tsunamigenic scenarios plays a relevant role in tsunami hazard assessment focused on emergency preparedness and evacuation planning for coastal communities. During the last decade, the occurrence of major and moderate tsunamigenic earthquakes along worldwide subduction zones has given clues about critical parameters involved in near-field tsunami inundation processes, i.e. slip spatial distribution, shelf resonance of edge waves and local geomorphology effects. To analyze the effects of these seismic and hydrodynamic variables on the epistemic uncertainty of coastal inundation, we implement a combined methodology using deterministic and probabilistic approaches to construct 420 tsunamigenic scenarios in a mature seismic gap of southern Peru and northern Chile, extending from 17ºS to 24ºS. The deterministic scenarios are calculated using a regional distribution of trench-parallel gravity anomaly (TPGA) and trench-parallel topography anomaly (TPTA), the three-dimensional Slab 1.0 worldwide subduction zone geometry model and published interseismic coupling (ISC) distributions. As a result, we find four high slip-deficit zones, interpreted as major seismic asperities of the gap, which are used in a hierarchical tree scheme to generate ten tsunamigenic scenarios with seismic magnitudes ranging from Mw 8.4 to Mw 8.9. Additionally, we construct ten homogeneous slip scenarios as an inundation baseline. For the probabilistic approach, we implement a Karhunen-Loève expansion to generate 400 stochastic tsunamigenic scenarios over the maximum extension of the gap, with the same magnitude range as the deterministic sources. All the scenarios are simulated with the non-hydrostatic tsunami model Neowave 2D, using a classical nesting scheme, for five major coastal cities in northern Chile (Arica, Iquique, Tocopilla, Mejillones and Antofagasta), obtaining high resolution data of inundation depth, runup, coastal currents and sea level elevation. The
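The Karhunen-Loève step used above to generate stochastic slip scenarios can be sketched in one dimension: eigendecompose a covariance model on the fault grid and combine the leading modes with independent standard-normal coefficients. The exponential covariance, correlation length, and grid below are assumptions for illustration, not the parameters of the study.

```python
import numpy as np

def kl_samples(x, corr_len, sigma, n_modes, n_samples, rng):
    """Draw Gaussian random fields via a truncated Karhunen-Loeve expansion:
    eigendecompose an exponential covariance on grid x, keep the leading
    modes, and weight them with independent standard-normal coefficients."""
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    w, v = np.linalg.eigh(C)                       # ascending eigenvalues
    w, v = w[::-1][:n_modes], v[:, ::-1][:, :n_modes]
    xi = rng.standard_normal((n_samples, n_modes)) # random KL coefficients
    return xi @ (v * np.sqrt(np.clip(w, 0.0, None))).T

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 101)   # normalized along-strike coordinate (assumed)
fields = kl_samples(x, corr_len=0.2, sigma=1.0, n_modes=20, n_samples=500, rng=rng)
```

Each row is one correlated random field; in the study such fields perturb the slip distribution (and are then rescaled to a target moment magnitude) to produce the 400 stochastic sources.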

  18. A minimally invasive surgical approach for large cyst-like periapical lesions: a case series.

    Science.gov (United States)

    Shah, Naseem; Logani, Ajay; Kumar, Vijay

    2014-01-01

    Various conservative approaches have been utilized to manage large periapical lesions. This article presents a relatively new, very conservative technique known as surgical fenestration which is both diagnostic and curative. The technique involves partially excising the cystic lining, gently curetting the cystic cavity, performing copious irrigation, and closing the surgical site. This technique allows for decompression and allows the clinician the freedom to take a biopsy of the lesion, as well as perform other procedures such as root resection and retrograde sealing, if required. As the procedure does not perform a complete excision of the cystic lining, it is both minimally invasive and cost-effective. The technique and the concepts involved are reviewed in 4 cases treated with this novel surgical approach.

  19. Deterministic and efficient quantum cryptography based on Bell's theorem

    International Nuclear Information System (INIS)

    Chen, Z.-B.; Zhang, Q.; Bao, X.-H.; Schmiedmayer, J.; Pan, J.-W.

    2005-01-01

    Full text: We propose a novel double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish a key bit with the help of classical communications. Eavesdropping can be detected by checking the violation of local realism for the detected events. We also show that our protocol allows a robust implementation under current technology. (author)

  20. Minimizing Regret : The General Case

    NARCIS (Netherlands)

    Rustichini, A.

    1998-01-01

    In repeated games with differential information on one side, the labelling "general case" refers to games in which the action of the informed player is not known to the uninformed, who can only observe a signal which is the random outcome of his and his opponent's action. Here we consider the

  1. Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.

    NARCIS (Netherlands)

    Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.

    1996-01-01

    Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate

  2. Null-polygonal minimal surfaces in AdS{sub 4} from perturbed W minimal models

    Energy Technology Data Exchange (ETDEWEB)

    Hatsuda, Yasuyuki [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Ito, Katsushi [Tokyo Institute of Technology (Japan). Dept. of Physics; Satoh, Yuji [Tsukuba Univ., Sakura, Ibaraki (Japan). Inst. of Physics

    2012-11-15

    We study the null-polygonal minimal surfaces in AdS{sub 4}, which correspond to the gluon scattering amplitudes/Wilson loops in N=4 super Yang-Mills theory at strong coupling. The area of the minimal surfaces with n cusps is characterized by the thermodynamic Bethe ansatz (TBA) integral equations or the Y-system of the homogeneous sine-Gordon model, which is regarded as the SU(n-4){sub 4}/U(1){sup n-5} generalized parafermion theory perturbed by the weight-zero adjoint operators. Based on the relation to the TBA systems of the perturbed W minimal models, we solve the TBA equations by using the conformal perturbation theory, and obtain the analytic expansion of the remainder function around the UV/regular-polygonal limit for n = 6 and 7. We compare the rescaled remainder function for n=6 with the two-loop one, to observe that they are close to each other similarly to the AdS{sub 3} case.

  3. A Prospective Randomized Study on Operative Treatment for Simple Distal Tibial Fractures-Minimally Invasive Plate Osteosynthesis Versus Minimal Open Reduction and Internal Fixation.

    Science.gov (United States)

    Kim, Ji Wan; Kim, Hyun Uk; Oh, Chang-Wug; Kim, Joon-Woo; Park, Ki Chul

    2018-01-01

    To compare the radiologic and clinical results of minimally invasive plate osteosynthesis (MIPO) and minimal open reduction and internal fixation (ORIF) for simple distal tibial fractures. Randomized prospective study. Three level 1 trauma centers. Fifty-eight patients with simple and distal tibial fractures were randomized into a MIPO group (treatment with MIPO; n = 29) or a minimal group (treatment with minimal ORIF; n = 29). These numbers were designed to define the rate of soft tissue complication; therefore, validation of superiority in union time or determination of differences in rates of delayed union was limited in this study. Simple distal tibial fractures treated with MIPO or minimal ORIF. The clinical outcome measurements included operative time, radiation exposure time, and soft tissue complications. To evaluate a patient's function, the American Orthopedic Foot and Ankle Society ankle score (AOFAS) was used. Radiologic measurements included fracture alignment, delayed union, and union time. All patients acquired bone union without any secondary intervention. The mean union time was 17.4 weeks and 16.3 weeks in the MIPO and minimal groups, respectively. There was 1 case of delayed union and 1 case of superficial infection in each group. The radiation exposure time was shorter in the minimal group than in the MIPO group. Coronal angulation showed a difference between both groups. The American Orthopedic Foot and Ankle Society ankle scores were 86.0 and 86.7 in the MIPO and minimal groups, respectively. Minimal ORIF resulted in similar outcomes, with no increased rate of soft tissue problems compared to MIPO. Both MIPO and minimal ORIF have high union rates and good functional outcomes for simple distal tibial fractures. Minimal ORIF did not result in increased rates of infection and wound dehiscence. Therapeutic Level II. See Instructions for Authors for a complete description of levels of evidence.

  4. The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes

    Science.gov (United States)

    Bogdanova, E. V.; Kuznetsov, A. N.

    2017-01-01

    The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it is characterized by long calculation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research on the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of a graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in the report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross-sections, diffusion coefficients, the effective multiplication factor and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
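The deterministic half of such a coupled scheme, a diffusion k-eigenvalue solve fed with group constants taken from a Monte Carlo run, can be sketched in its simplest form: one energy group, one dimension, finite differences and power iteration. The cross sections below are invented round numbers, not MCU-generated constants, and the real calculation uses 15 groups and 2-D geometry.

```python
import numpy as np

def slab_keff(D, siga, nusigf, L, n=200, iters=500):
    """One-group, 1-D slab diffusion k-eigenvalue by finite differences and
    power iteration, with zero-flux boundary conditions at both faces."""
    h = L / (n + 1)
    # Tridiagonal operator for -D*phi'' + siga*phi on the interior nodes.
    main = 2.0 * D / h**2 + siga
    off = -D / h**2
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, off), 1)
         + np.diag(np.full(n - 1, off), -1))
    phi = np.ones(n)
    k = 1.0
    for _ in range(iters):
        src = nusigf * phi / k          # fission source scaled by current k
        phi_new = np.linalg.solve(A, src)
        k *= phi_new.sum() / phi.sum()  # update k from the source ratio
        phi = phi_new
    return k

k_num = slab_keff(D=1.0, siga=0.1, nusigf=0.12, L=50.0)
# Analytic bare-slab value: k = nusigf / (siga + D*(pi/L)**2)
k_ana = 0.12 / (0.1 + 1.0 * (np.pi / 50.0) ** 2)
```

For a bare homogeneous slab the numerical eigenvalue reproduces the analytic buckling result to within discretization error, which is the kind of consistency check one would run before swapping in Monte Carlo-derived constants.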

  5. Burnup-dependent core neutronics analysis of plate-type research reactor using deterministic and stochastic methods

    International Nuclear Information System (INIS)

    Liu, Shichang; Wang, Guanbo; Liang, Jingang; Wu, Gaochen; Wang, Kan

    2015-01-01

    Highlights: • DRAGON & DONJON were applied in burnup calculations of plate-type research reactors. • Continuous-energy Monte Carlo burnup calculations by RMC were chosen as references. • Comparisons of keff, isotopic densities and power distribution were performed. • Reasons leading to discrepancies between the two different approaches were analyzed. • DRAGON & DONJON is capable of burnup calculations with appropriate treatments. - Abstract: The burnup-dependent core neutronics analysis of plate-type research reactors such as JRR-3M poses a challenge for traditional neutronics calculation tools and schemes for power reactors, due to the complex geometry, high heterogeneity, large leakage and the particular neutron spectrum of the research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the burnup-dependent core neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON & DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic one. In the first stage, the homogenization of few-group cross sections by DRAGON and the full core diffusion calculations by DONJON were verified by comparison with detailed Monte Carlo simulations. In the second stage, burnup-dependent calculations at both the assembly level and the full core level were carried out, to examine the capability of the deterministic code system DRAGON & DONJON to reliably simulate the burnup-dependent behavior of research reactors. The results indicate that both RMC and the DRAGON & DONJON code system are capable of burnup-dependent neutronics analysis of research reactors, provided that appropriate treatments are applied at both the assembly and core levels for the deterministic codes.

  6. Dynamic Placement of Virtual Machines with Both Deterministic and Stochastic Demands for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wenying Yue

    2014-01-01

    Full Text Available Cloud computing has come to be a significant commercial infrastructure offering utility-oriented IT services to users worldwide. However, data centers hosting cloud applications consume huge amounts of energy, leading to high operational cost and greenhouse gas emission. Therefore, green cloud computing solutions are needed not only to achieve high level service performance but also to minimize energy consumption. This paper studies the dynamic placement of virtual machines (VMs with deterministic and stochastic demands. In order to ensure a quick response to VM requests and improve the energy efficiency, a two-phase optimization strategy has been proposed, in which VMs are deployed in runtime and consolidated into servers periodically. Based on an improved multidimensional space partition model, a modified energy efficient algorithm with balanced resource utilization (MEAGLE and a live migration algorithm based on the basic set (LMABBS are, respectively, developed for each phase. Experimental results have shown that under different VMs’ stochastic demand variations, MEAGLE guarantees the availability of stochastic resources with a defined probability and reduces the number of required servers by 2.49% to 20.40% compared with the benchmark algorithms. Also, the difference between the LMABBS solution and Gurobi solution is fairly small, but LMABBS significantly excels in computational efficiency.
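The record above describes a two-phase placement strategy, but the MEAGLE and LMABBS algorithms themselves are not given in the abstract. As a purely illustrative sketch of the runtime-deployment idea (hypothetical names and numbers, not the paper's method), the following packs VM demand vectors onto servers with a first-fit-decreasing heuristic under deterministic capacity constraints:

```python
# Illustrative only: first-fit-decreasing placement of (cpu, mem) VM demands.
# MEAGLE additionally balances utilization across resource dimensions and
# guarantees stochastic demands with a defined probability; both are omitted.

def place_vms(vm_demands, server_capacity):
    """Assign each VM to the first server that fits, opening a new server
    when none does. Returns the list of per-server (cpu, mem) loads."""
    servers = []  # each entry: [cpu_used, mem_used]
    # Sort largest-first to reduce fragmentation (classic FFD heuristic).
    for cpu, mem in sorted(vm_demands, key=lambda d: -(d[0] + d[1])):
        for load in servers:
            if load[0] + cpu <= server_capacity[0] and load[1] + mem <= server_capacity[1]:
                load[0] += cpu
                load[1] += mem
                break
        else:
            servers.append([cpu, mem])
    return servers

demands = [(0.5, 0.3), (0.2, 0.6), (0.4, 0.4), (0.3, 0.2), (0.6, 0.5)]
loads = place_vms(demands, server_capacity=(1.0, 1.0))
print(len(loads), "servers used")
```

Consolidation (the paper's second phase, handled there by LMABBS) would periodically re-run a packing like this over live VMs and migrate them off lightly loaded servers.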

  7. The minimally tuned minimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Essig, Rouven; Fortin, Jean-Francois

    2008-01-01

    The regions in the Minimal Supersymmetric Standard Model with the minimal amount of fine-tuning of electroweak symmetry breaking are presented for general messenger scale. No a priori relations among the soft supersymmetry breaking parameters are assumed and fine-tuning is minimized with respect to all the important parameters which affect electroweak symmetry breaking. The superpartner spectra in the minimally tuned region of parameter space are quite distinctive with large stop mixing at the low scale and negative squark soft masses at the high scale. The minimal amount of tuning increases enormously for a Higgs mass beyond roughly 120 GeV

  8. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.

    1982-12-01

The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques, using probable maximum precipitation and a rainfall-runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall-runoff model may lead in some cases to nonconservative estimates. A distributed non-linear rainfall-runoff model leads to estimates of probable maximum flood flows which are very close to flows with a 10^6-10^7 year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made, and the extension of the stochastic technique to ungauged sites and other design parameters is discussed

  9. Condensation and homogenization of cross sections for the deterministic transport codes with Monte Carlo method: Application to the GEN IV fast neutron reactors

    International Nuclear Information System (INIS)

    Cai, Li

    2014-01-01

In the framework of Generation IV reactor neutronics research, new core calculation tools are being implemented in the APOLLO3 code system for the deterministic part. These calculation methods are based on the discretization of nuclear energy data (so-called multi-group constants, generally produced by deterministic codes) and should be validated and qualified against Monte-Carlo reference calculations. This thesis aims to develop an alternative technique for producing multi-group nuclear properties with a Monte-Carlo code (TRIPOLI-4). First, after testing the existing homogenization and condensation functionalities at the better precision obtainable nowadays, some inconsistencies are revealed. Several new multi-group parameter estimators are developed and validated for the TRIPOLI-4 code with the aid of the code itself, since it can use multi-group constants in a core calculation. Secondly, the scattering anisotropy effect, which is necessary for handling the neutron leakage case, is studied. A correction technique concerning the diagonal of the first-order moment of the scattering matrix is proposed. It is named the IGSC technique and is based on an approximate current introduced by Todorova. An improvement of the IGSC technique is then presented for geometries with strong heterogeneity. This improvement uses a more accurate current quantity, the projection on the abscissa X; this current represents the real situation better but is limited to 1D geometries. Finally, a B1 leakage model is implemented in the TRIPOLI-4 code for generating multi-group cross sections with a critical spectrum based on the fundamental mode. This leakage model is analyzed and validated rigorously by comparison with other codes (Serpent and ECCO) as well as with an analytical case. The whole development work introduced in the TRIPOLI-4 code allows producing multi-group constants which can then be used in the core calculation

  10. Combination of the deterministic and probabilistic approaches for risk-informed decision-making in US NRC regulatory guides

    International Nuclear Information System (INIS)

    Patrik, M.; Babic, P.

    2001-06-01

    The report responds to the trend where probabilistic safety analyses are attached, on a voluntary basis (as yet), to the mandatory deterministic assessment of modifications of NPP systems or operating procedures, resulting in risk-informed type documents. It contains a nearly complete Czech translation of US NRC Regulatory Guide 1.177 and presents some suggestions for improving a) PSA study applications; b) the development of NPP documents for the regulatory body; and c) the interconnection between PSA and traditional deterministic analyses as contained in the risk-informed approach. (P.A.)

  11. On the progress towards probabilistic basis for deterministic codes

    International Nuclear Information System (INIS)

    Ellyin, F.

    1975-01-01

Fundamental arguments for a probabilistic basis of codes are presented. A class of code formats is outlined in which explicit statistical measures of the uncertainty of design variables are incorporated. The format looks very much like present (deterministic) codes, except for having a probabilistic background. An example is provided whereby the design factors are plotted against the safety index, the probability of failure, and the risk of mortality. The safety level of present codes is also indicated. A decision regarding the new probabilistically based code parameters could thus be made with full knowledge of the implied consequences

  12. A toolkit for integrated deterministic and probabilistic assessment for hydrogen infrastructure.

    Energy Technology Data Exchange (ETDEWEB)

    Groth, Katrina M.; Tchouvelev, Andrei V.

    2014-03-01

There has been increasing interest in using Quantitative Risk Assessment (QRA) to help improve the safety of hydrogen infrastructure and applications. Hydrogen infrastructure for transportation (e.g. fueling fuel cell vehicles) or stationary (e.g. back-up power) applications is a relatively new area for the application of QRA compared with traditional industrial production and use, and as a result there are few tools designed to enable QRA for this emerging sector. There are few existing QRA tools containing models that have been developed and validated for use in small-scale hydrogen applications. However, in the past several years, there has been significant progress in developing and validating deterministic physical and engineering models for hydrogen dispersion, ignition, and flame behavior. In parallel, there has been progress in developing defensible probabilistic models for the occurrence of events such as hydrogen release and ignition. While models and data are available, using this information is difficult due to a lack of readily available tools for integrating deterministic and probabilistic components into a single analysis framework. This paper discusses the first steps in building an integrated toolkit for performing QRA on hydrogen transportation technologies and suggests directions for extending the toolkit.

  13. The minimal non-minimal standard model

    International Nuclear Information System (INIS)

    Bij, J.J. van der

    2006-01-01

    In this Letter I discuss a class of extensions of the standard model that have a minimal number of possible parameters, but can in principle explain dark matter and inflation. It is pointed out that the so-called new minimal standard model contains a large number of parameters that can be put to zero, without affecting the renormalizability of the model. With the extra restrictions one might call it the minimal (new) non-minimal standard model (MNMSM). A few hidden discrete variables are present. It is argued that the inflaton should be higher-dimensional. Experimental consequences for the LHC and the ILC are discussed

  14. Top-down fabrication of plasmonic nanostructures for deterministic coupling to single quantum emitters

    NARCIS (Netherlands)

    Pfaff, W.; Vos, A.; Hanson, R.

    2013-01-01

    Metal nanostructures can be used to harvest and guide the emission of single photon emitters on-chip via surface plasmon polaritons. In order to develop and characterize photonic devices based on emitter-plasmon hybrid structures, a deterministic and scalable fabrication method for such structures

  15. Deterministic and probabilistic crack growth analysis for the JRC Ispra 1/5 scale pressure vessel n0 R2

    International Nuclear Information System (INIS)

    Bruckner-Foit, A.; Munz, D.

    1989-10-01

    A deterministic and a probabilistic crack growth analysis is presented for the major defects found in the welds during ultrasonic pre-service inspection. The deterministic analysis includes first a determination of the number of load cycles until crack initiation, then a cycle-by-cycle calculation of the growth of the embedded elliptical cracks, followed by an evaluation of the growth of the semi-elliptical surface crack formed after the crack considered has broken through the wall and, finally, a determination of the critical crack size and shape. In the probabilistic analysis, a Monte-Carlo simulation is performed with a sample of cracks where the statistical distributions of the crack dimensions describe the uncertainty in sizing of the ultrasonic inspection. The distributions of crack depth, crack length and location are evaluated as a function of the number of load cycles. In the simulation, the fracture mechanics model of the deterministic analysis is employed for each random crack. The results of the deterministic and probabilistic crack growth analysis are compared with the results of the second in-service inspection where stable extension of some of the cracks had been observed. It is found that the prediction and the experiment agree only with a probability of the order of 5% or less
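The record's fracture-mechanics model (embedded elliptical cracks, initiation, break-through to a surface crack) is not reproduced in the abstract. As a hedged illustration of the probabilistic part only, the sketch below propagates a simplified one-dimensional Paris-law crack, da/dN = C(ΔK)^m with ΔK = Δσ·sqrt(πa), through a Monte-Carlo sample of initial sizes standing in for ultrasonic sizing uncertainty; all parameter values are hypothetical:

```python
import math
import random

def grow(a0, cycles, C=1e-12, m=3.0, dsigma=150.0):
    """Deterministic cycle-by-cycle Paris-law growth of a crack of size a
    (metres); C, m and the stress range dsigma are illustrative values."""
    a = a0
    for _ in range(cycles):
        dK = dsigma * math.sqrt(math.pi * a)   # stress-intensity range
        a += C * dK ** m                       # da/dN = C * (dK)^m
    return a

random.seed(0)
# Monte-Carlo sample over initial sizes: lognormal scatter representing
# the sizing uncertainty of the ultrasonic inspection.
finals = [grow(a0=0.002 * random.lognormvariate(0.0, 0.2), cycles=2_000)
          for _ in range(500)]
frac_above = sum(f > 0.003 for f in finals) / len(finals)
print("fraction of sampled cracks larger than 3 mm after 2,000 cycles:", frac_above)
```

The paper's simulation applies its deterministic elliptical-crack model to each random crack in the same way this sketch applies `grow` to each sampled initial size.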

  16. Stochastic and Deterministic Models for the Metastatic Emission Process: Formalisms and Crosslinks.

    Science.gov (United States)

    Gomez, Christophe; Hartung, Niklas

    2018-01-01

    Although the detection of metastases radically changes prognosis of and treatment decisions for a cancer patient, clinically undetectable micrometastases hamper a consistent classification into localized or metastatic disease. This chapter discusses mathematical modeling efforts that could help to estimate the metastatic risk in such a situation. We focus on two approaches: (1) a stochastic framework describing metastatic emission events at random times, formalized via Poisson processes, and (2) a deterministic framework describing the micrometastatic state through a size-structured density function in a partial differential equation model. Three aspects are addressed in this chapter. First, a motivation for the Poisson process framework is presented and modeling hypotheses and mechanisms are introduced. Second, we extend the Poisson model to account for secondary metastatic emission. Third, we highlight an inherent crosslink between the stochastic and deterministic frameworks and discuss its implications. For increased accessibility the chapter is split into an informal presentation of the results using a minimum of mathematical formalism and a rigorous mathematical treatment for more theoretically interested readers.
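The Poisson-process framework described above can be sketched numerically. The toy below (not the chapter's model; rates and horizon are hypothetical) draws first-generation emission times from a homogeneous Poisson process and then lets each metastasis emit secondaries at a lower rate:

```python
import random

def emission_times(rate, horizon, rng):
    """Homogeneous Poisson process on [0, horizon] via exponential gaps."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

rng = random.Random(42)
primary = emission_times(rate=0.5, horizon=10.0, rng=rng)      # from the primary tumour
# Secondary emission: each metastasis born at t0 emits on [t0, horizon].
secondary = [t0 + u for t0 in primary
             for u in emission_times(0.1, 10.0 - t0, rng)]
print(len(primary), "primary emissions;", len(secondary), "secondary emissions")
```

A time-inhomogeneous rate lambda(t), tied to a growing tumour size as in the chapter, would replace the constant `rate` by thinning or time-rescaling; the deterministic size-structured PDE view corresponds to the expectation of such a process.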

  17. Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  18. Deterministic time-reversible thermostats: chaos, ergodicity, and the zeroth law of thermodynamics

    Science.gov (United States)

    Patra, Puneet Kumar; Sprott, Julien Clinton; Hoover, William Graham; Griswold Hoover, Carol

    2015-09-01

    The relative stability and ergodicity of deterministic time-reversible thermostats, both singly and in coupled pairs, are assessed through their Lyapunov spectra. Five types of thermostat are coupled to one another through a single Hooke's-law harmonic spring. The resulting dynamics shows that three specific thermostat types, Hoover-Holian, Ju-Bulgac, and Martyna-Klein-Tuckerman, have very similar Lyapunov spectra in their equilibrium four-dimensional phase spaces and when coupled in equilibrium or nonequilibrium pairs. All three of these oscillator-based thermostats are shown to be ergodic, with smooth analytic Gaussian distributions in their extended phase spaces (coordinate, momentum, and two control variables). Evidently these three ergodic and time-reversible thermostat types are particularly useful as statistical-mechanical thermometers and thermostats. Each of them generates Gibbs' universal canonical distribution internally as well as for systems to which they are coupled. Thus they obey the zeroth law of thermodynamics, as a good heat bath should. They also provide dissipative heat flow with relatively small nonlinearity when two or more such temperature baths interact and provide useful deterministic replacements for the stochastic Langevin equation.
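As an illustration of one oscillator-based thermostat named in the record, here is a minimal sketch of the Hoover-Holian oscillator, which controls both the second and fourth velocity moments. The equations of motion as commonly stated (an assumption here, not taken from the paper) are q' = p, p' = -q - z·p - x·p^3, z' = p^2 - T, x' = p^4 - 3T·p^2, integrated below with classic fourth-order Runge-Kutta:

```python
def hh_deriv(state, T=1.0):
    """Hoover-Holian oscillator: (q, p) plus two control variables (z, x)."""
    q, p, z, x = state
    return (p,
            -q - z * p - x * p ** 3,
            p * p - T,
            p ** 4 - 3.0 * T * p * p)

def rk4_step(state, dt, T=1.0):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = hh_deriv(state, T)
    k2 = hh_deriv(shift(state, k1, dt / 2), T)
    k3 = hh_deriv(shift(state, k2, dt / 2), T)
    k4 = hh_deriv(shift(state, k3, dt), T)
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.0, 1.0, 0.0, 0.0)
p2_sum, n = 0.0, 100_000
for _ in range(n):
    state = rk4_step(state, 0.01)
    p2_sum += state[1] ** 2
print("time-averaged <p^2>:", p2_sum / n)  # tends toward T = 1 if ergodic
```

Ergodicity, as assessed in the paper via Lyapunov spectra, is exactly what licenses reading the long-time average of p^2 as the canonical temperature.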

  19. Development of Deterministic and Probabilistic Fracture Mechanics Analysis Code PROFAS-RV for Reactor Pressure Vessel - Progress of the Work

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Min; Lee, Bong Sang [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

In this study, a deterministic/probabilistic fracture mechanics analysis program for reactor pressure vessels, PROFAS-RV, is developed. This program can evaluate the failure probability of an RPV using the recent radiation embrittlement model of 10CFR50.61a and the stress intensity factor calculation method of the RCC-MRx code, as well as providing the basic functions required of a PFM program. Applications of new radiation embrittlement models, a material database, calculation methods for stress intensity factors, and other features which can improve the fracture mechanics assessment of RPVs are introduced. The purpose of this study is to develop a probabilistic fracture mechanics (PFM) analysis program for RPVs incorporating the above modifications and the newly developed models and calculation methods. This paper deals with the development progress of the PFM analysis program for RPVs, PROFAS-RV. PROFAS-RV is being tested against other codes, and is expected to be revised and upgraded continuously to reflect the latest models and calculation methods. These efforts can minimize the uncertainty of the integrity evaluation of the reactor pressure vessel.

  20. Probabilistic Properties of Rectilinear Steiner Minimal Trees

    Directory of Open Access Journals (Sweden)

    V. N. Salnikov

    2015-01-01

Full Text Available This work concerns the properties of Steiner minimal trees for the Manhattan plane in the context of introducing a probability measure. This problem is important because exact algorithms to solve the Steiner problem are computationally expensive (NP-hard) and the solution (especially for a big number of points to be connected) has a diversity of practical applications. That is why the work considers the possibility of ranking the possible topologies of the minimal trees with respect to the probability of their usage. For this, the known facts about the structural properties of minimal trees for selected metrics have been analyzed for their usefulness for the problem in question. For a small number of boundary (fixed) vertices, the paper offers a way to introduce a probability measure as a corollary of a proved theorem about some structural properties of the minimal trees. This work continues previous similar activity concerning the problem of searching for minimal fillings, and it is a door opener to the more general (more complicated) task. The stated method demonstrates the possibility of reaching the final result analytically, which gives a chance of its applicability to the case of a bigger number of boundary vertices (probably with the use of computer engineering). The introduced definition of an essential Steiner point allowed a considerable restriction of the ambiguity of the initial problem's solution and, at the same time, a comparison of this approach with more classical works in the field concerned. The paper also lists the main barriers of classical approaches that prevent their use for the task of introducing a probability measure. In prospect, the application areas of the described method are expected to widen both in terms of system enlargement (the number of boundary vertices) and in terms of other metric spaces (the Euclidean case is of especial interest). The main interest is to find the classes of topologies with significantly
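The paper itself is analytic, but the objects involved are easy to illustrate numerically (this sketch is not from the paper). In the Manhattan plane the Steiner minimal tree length is bounded above by the rectilinear minimum spanning tree length and, by Hwang's theorem, below by 2/3 of it; a tiny Prim's-algorithm MST gives the upper bound for a point set:

```python
def l1(a, b):
    """Manhattan (l1) distance between two points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def rect_mst_length(points):
    """Prim's algorithm on the complete graph with Manhattan edge weights."""
    in_tree = {points[0]}
    total = 0
    while len(in_tree) < len(points):
        # Cheapest edge leaving the current tree.
        d, q = min((min(l1(p, t) for t in in_tree), p)
                   for p in points if p not in in_tree)
        total += d
        in_tree.add(q)
    return total

pts = [(0, 0), (2, 0), (1, 2), (3, 3)]
mst = rect_mst_length(pts)
print("rectilinear MST length:", mst)
print("Steiner minimal tree length lies in [", 2 * mst / 3, ",", mst, "]")
```

The hard part, and the paper's subject, is choosing among the exponentially many Steiner topologies inside that interval; the MST bound only brackets the answer.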

  1. On balanced minimal repeated measurements designs

    Directory of Open Access Journals (Sweden)

    Shakeel Ahmad Mir

    2014-10-01

Full Text Available Repeated measurements designs are concerned with scientific experiments in which each experimental unit is assigned more than once to a treatment, either different or identical. This class of designs has the property that unbiased estimators for elementary contrasts among direct and residual effects are obtainable. Afsarinejad (1983) provided a method of constructing balanced minimal repeated measurements designs for p < t, when t is an odd or prime power; one or more treatments may occur more than once in some sequences, and the designs so constructed no longer remain uniform in periods. In this paper an attempt has been made to provide a new method to overcome this drawback. Specifically, two cases have been considered: RM[t, n = t(t-1)/(p-1), p], λ2 = 1 for balanced minimal repeated measurements designs, and RM[t, n = 2t(t-1)/(p-1), p], λ2 = 2 for balanced repeated measurements designs. In addition, a method has been provided for constructing extra-balanced minimal designs for the special case RM[t, n = t^2/(p-1), p], λ2 = 1.

  2. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  3. Minimally invasive esophagectomy for cancer: Single center experience after 44 consecutive cases

    Directory of Open Access Journals (Sweden)

    Bjelović Miloš

    2015-01-01

Full Text Available Introduction. At the Department of Minimally Invasive Upper Digestive Surgery of the Hospital for Digestive Surgery in Belgrade, hybrid minimally invasive esophagectomy (hMIE) has been a standard of care for patients with resectable esophageal cancer since 2009. As a next and final step in the change management, from January 2015 we utilized total minimally invasive esophagectomy (tMIE) as a standard of care. Objective. The aim of the study was to report initial experiences in hMIE (laparoscopic approach) for cancer and analyze surgical technique, major morbidity and 30-day mortality. Methods. A retrospective cohort study included 44 patients who underwent elective hMIE for esophageal cancer at the Department for Minimally Invasive Upper Digestive Surgery, Hospital for Digestive Surgery, Clinical Center of Serbia in Belgrade from April 2009 to December 2014. Results. There were 16 (36%) tumors of the middle thoracic esophagus and 28 (64%) tumors of the distal thoracic esophagus. Mean duration of the operation was 319 minutes (approximately five hours and 20 minutes). The average blood loss was 173.6 ml. A total of 12 (27%) patients had postoperative complications, and the mean intensive care unit stay was 2.8 days. Mean hospital stay after surgery was 16 days. The average number of lymph nodes harvested during surgery was 31.9. The 30-day mortality rate was 2%. Conclusion. As long as MIE is an oncological equivalent to open esophagectomy (OE), a better relation between cost savings and potentially increased effectiveness will make MIE the preferred approach in high-volume esophageal centers that are experienced in minimally invasive procedures.

  4. Review of the Monte Carlo and deterministic codes in radiation protection and dosimetry

    International Nuclear Information System (INIS)

    Tagziria, H.

    2000-02-01

A drawback of the Monte Carlo technique is that the solutions are given at specific locations only, are statistically fluctuating, and are arrived at with a lot of computational effort. Sooner rather than later, however, one would expect powerful variance reduction and ever-faster processors to balance these disadvantages out. This is especially true if one considers the rapid advances in computer technology and parallel computers, which can achieve a 300-fold faster convergence. In many fields and cases, however, the user would benefit greatly by considering, when possible, alternative methods to the Monte Carlo technique, such as deterministic methods, at least as a means of validation. It can be shown, in fact, that for less complex problems a deterministic approach can have many advantages. In its earlier manifestations, Monte Carlo simulation was primarily performed by experts who were intimately involved in the development of the computer code. Increasingly, however, codes are being supplied as relatively user-friendly packages for widespread use, which allows them to be used by those with less specialist knowledge. This enables them to be used as 'black boxes', which in turn provides scope for costly errors, especially in the choice of cross section data and acceleration techniques. The Monte Carlo method as employed with modern computers goes back several decades, and nowadays science and software libraries would be virtually empty if one excluded work that is either directly or indirectly related to this technique. This is specifically true in the fields of 'computational dosimetry', 'radiation protection' and radiation transport in general. Hundreds of codes have been written and applied with various degrees of success. Some of these have become trademarks, generally well supported and taken up by thousands of users. Other codes, which should be encouraged, are the so-called in-house codes, which still serve their developers' and their groups' needs well in their intended

  5. Stochastic Simulation of Integrated Circuits with Nonlinear Black-Box Components via Augmented Deterministic Equivalents

    Directory of Open Access Journals (Sweden)

    MANFREDI, P.

    2014-11-01

Full Text Available This paper extends recent literature results concerning the statistical simulation of circuits affected by random electrical parameters by means of the polynomial chaos framework. With respect to previous implementations, based on the generation and simulation of augmented and deterministic circuit equivalents, the modeling is extended to generic 'black-box' multi-terminal nonlinear subcircuits describing complex devices, like those found in integrated circuits. Moreover, based on recently-published works in this field, a more effective approach to generate the deterministic circuit equivalents is implemented, thus yielding more compact and efficient models for nonlinear components. The approach is fully compatible with commercial (e.g., SPICE-type) circuit simulators and is thoroughly validated through the statistical analysis of a realistic interconnect structure with a 16-bit memory chip. The accuracy and the comparison against previous approaches are also carefully established.

  6. On the minimization of the eigenvalues of the Schroedinger operator with respect to domains

    International Nuclear Information System (INIS)

    Gasymov, Yu.S.; Niftiev, A.A.

    2001-01-01

Minimization of eigenvalues plays an important role in the spectral theory of operators. The problem of minimizing the eigenvalues of the Schroedinger operator with respect to domains is considered in this work. An algorithm analogous to the conditional gradient method is proposed for the numerical solution of this problem in the general case. The result is generalized to the case of a positive definite completely continuous operator

  7. Absorbing phase transitions in deterministic fixed-energy sandpile models

    Science.gov (United States)

    Park, Su-Chan

    2018-03-01

We investigate the origin of the difference, which was noticed by Fey et al. [Phys. Rev. Lett. 104, 145703 (2010), 10.1103/PhysRevLett.104.145703], between the steady state density of an Abelian sandpile model (ASM) and the transition point of its corresponding deterministic fixed-energy sandpile model (DFES). Because the dynamics is deterministic, the configuration space of a DFES can be divided into two disjoint classes such that every configuration in one class evolves into an absorbing state, whereas no configuration in the other class can reach one. Since the two classes are separated in terms of toppling dynamics, the system can be made to exhibit an absorbing phase transition (APT) at various points that depend on the initial probability distribution of the configurations. Furthermore, we show that in general the transition point also depends on whether the infinite-size limit is taken before or after the infinite-time limit. To demonstrate, we numerically study the two-dimensional DFES with the Bak-Tang-Wiesenfeld toppling rule (BTW-FES). We confirm that there are indeed many thresholds. Nonetheless, the critical phenomena at the various transition points are found to be universal. We furthermore discuss a microscopic absorbing phase transition, or so-called spreading dynamics, of the BTW-FES, finding that the phase transition in this setting is related to the dynamical isotropic percolation process rather than to self-organized criticality. In particular, we argue that choosing recurrent configurations of the corresponding ASM as initial configurations does not allow for a nontrivial APT in the DFES.
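The BTW toppling rule referenced above is simple enough to sketch concretely. The toy below (periodic boundaries, parallel update; an illustration, not the paper's simulation setup) conserves the total grain number, so the dynamics either reaches an absorbing configuration with every site below threshold or stays active forever:

```python
def step(grid):
    """One parallel toppling sweep of a BTW fixed-energy sandpile on an
    L x L periodic lattice; returns (new_grid, number_of_sites_toppled)."""
    L = len(grid)
    new = [row[:] for row in grid]
    toppled = 0
    for i in range(L):
        for j in range(L):
            if grid[i][j] >= 4:                 # at or above threshold
                toppled += 1
                new[i][j] -= 4                  # shed four grains...
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    new[(i + di) % L][(j + dj) % L] += 1  # ...one per neighbour
    return new, toppled

grid = [[2, 5, 1], [0, 3, 2], [4, 1, 0]]        # 18 grains, density 2 per site
for _ in range(100):                            # iterate until quiescent or cutoff
    grid, n = step(grid)
    if n == 0:
        break
print("grains conserved:", sum(map(sum, grid)), "- sites toppling in last sweep:", n)
```

Whether a given initial configuration belongs to the absorbing or the forever-active class is exactly the deterministic dichotomy the abstract describes; varying the initial distribution over configurations moves the apparent transition point.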

  8. Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access

    KAUST Repository

    Zafar, Ammar

    2012-12-29

In this paper, we present an optimal resource allocation (ORA) scheme for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived and different constraints on the system are considered. We consider the cases of both individual and global power constraints, individual constraints only, and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with the direct link only and with uniform power allocation (UPA) in terms of minimizing the SER for all three cases of constraints. Numerical results also show that the individual-constraints-only case provides the best performance at large signal-to-noise ratio (SNR).

  9. Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset documents the source of the data analyzed in the manuscript " Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII...

  10. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    NARCIS (Netherlands)

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcao

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with

  11. Temperature regulates deterministic processes and the succession of microbial interactions in anaerobic digestion process

    Czech Academy of Sciences Publication Activity Database

    Lin, Qiang; De Vrieze, J.; Li, Ch.; Li, J.; Li, J.; Yao, M.; Heděnec, Petr; Li, H.; Li, T.; Rui, J.; Frouz, Jan; Li, X.

    2017-01-01

Vol. 123, October (2017), pp. 134-143. ISSN 0043-1354. Institutional support: RVO:60077344. Keywords: anaerobic digestion * deterministic process * microbial interactions * modularity * temperature gradient. Subject RIV: DJ - Water Pollution; Quality. OECD field: Water resources. Impact factor: 6.942, year: 2016

  12. Configurable Crossbar Switch for Deterministic, Low-latency Inter-blade Communications in a MicroTCA Platform

    Energy Technology Data Exchange (ETDEWEB)

    Karamooz, Saeed [Vadatech Inc. (United States); Breeding, John Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Justice, T Alan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-08-01

    As MicroTCA expands into applications beyond the telecommunications industry from which it originated, it faces new challenges in the area of inter-blade communications. The ability to achieve deterministic, low-latency communications between blades is critical to realizing a scalable architecture. In the past, legacy bus architectures accomplished inter-blade communications using dedicated parallel buses across the backplane. Because of limited fabric resources on its backplane, MicroTCA uses the carrier hub (MCH) for this purpose. Unfortunately, MCH products from commercial vendors are limited to standard bus protocols such as PCI Express, Serial RapidIO and 10/40GbE. While these protocols have exceptional throughput capability, they are neither deterministic nor necessarily low-latency. To overcome this limitation, an MCH has been developed based on the Xilinx Virtex-7 690T FPGA. This MCH provides the system architect/developer complete flexibility in both the interface protocol and the routing of information between blades. In this paper, we present the application of this configurable MCH concept to the Machine Protection System under development for the Spallation Neutron Source's proton accelerator. Specifically, we demonstrate the use of the configurable MCH as a 12x4-lane crossbar switch using the Aurora protocol to achieve a deterministic, low-latency data link. In this configuration, the crossbar has an aggregate bandwidth of 48 GB/s.

  13. Deterministic nonlinear phase gates induced by a single qubit

    Science.gov (United States)

    Park, Kimin; Marek, Petr; Filip, Radim

    2018-05-01

    We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated in the harmonic oscillator faithfully by our method. We numerically analyze the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.

  14. Moving beyond resistance to restraint minimization: a case study of change management in aged care.

    Science.gov (United States)

    Johnson, Susan; Ostaszkiewicz, Joan; O'Connell, Beverly

    2009-01-01

    This case study describes a quality initiative to minimize restraint in an Australian residential aged care facility. The process of improving practice is examined with reference to the literature on implementation of research into practice and change management. The differences between planned and emergent approaches to change management are discussed. The concepts of resistance and attractors are explored in relation to our experiences of managing the change process in this initiative. The importance of the interpersonal interactions that were involved in facilitating the change process is highlighted. Recommendations are offered for dealing with change management processes in clinical environments, particularly the need to move beyond an individual mind-set to a systems-based approach for quality initiatives in residential aged care.

  15. Blackfolds, plane waves and minimal surfaces

    Science.gov (United States)

    Armas, Jay; Blau, Matthias

    2015-07-01

    Minimal surfaces in Euclidean space provide examples of possible non-compact horizon geometries and topologies in asymptotically flat space-time. On the other hand, the existence of limiting surfaces in the space-time provides a simple mechanism for making these configurations compact. Limiting surfaces appear naturally in a given space-time by making minimal surfaces rotate but they are also inherent to plane wave or de Sitter space-times in which case minimal surfaces can be static and compact. We use the blackfold approach in order to scan for possible black hole horizon geometries and topologies in asymptotically flat, plane wave and de Sitter space-times. In the process we uncover several new configurations, such as black helicoids and catenoids, some of which have an asymptotically flat counterpart. In particular, we find that the ultraspinning regime of singly-spinning Myers-Perry black holes, described in terms of the simplest minimal surface (the plane), can be obtained as a limit of a black helicoid, suggesting that these two families of black holes are connected. We also show that minimal surfaces embedded in spheres rather than Euclidean space can be used to construct static compact horizons in asymptotically de Sitter space-times.

  16. Blackfolds, plane waves and minimal surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Armas, Jay [Physique Théorique et Mathématique, Université Libre de Bruxelles and International Solvay Institutes, ULB-Campus Plaine CP231, B-1050 Brussels (Belgium); Albert Einstein Center for Fundamental Physics, University of Bern,Sidlerstrasse 5, 3012 Bern (Switzerland); Blau, Matthias [Albert Einstein Center for Fundamental Physics, University of Bern,Sidlerstrasse 5, 3012 Bern (Switzerland)

    2015-07-29

    Minimal surfaces in Euclidean space provide examples of possible non-compact horizon geometries and topologies in asymptotically flat space-time. On the other hand, the existence of limiting surfaces in the space-time provides a simple mechanism for making these configurations compact. Limiting surfaces appear naturally in a given space-time by making minimal surfaces rotate but they are also inherent to plane wave or de Sitter space-times in which case minimal surfaces can be static and compact. We use the blackfold approach in order to scan for possible black hole horizon geometries and topologies in asymptotically flat, plane wave and de Sitter space-times. In the process we uncover several new configurations, such as black helicoids and catenoids, some of which have an asymptotically flat counterpart. In particular, we find that the ultraspinning regime of singly-spinning Myers-Perry black holes, described in terms of the simplest minimal surface (the plane), can be obtained as a limit of a black helicoid, suggesting that these two families of black holes are connected. We also show that minimal surfaces embedded in spheres rather than Euclidean space can be used to construct static compact horizons in asymptotically de Sitter space-times.

  17. Deterministic simulation of first-order scattering in virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. E-mail: nicolas.freud@insa-lyon.fr; Duvauchelle, P.; Pistrui-Maximean, S.A.; Letang, J.-M.; Babot, D

    2004-07-01

    A deterministic algorithm is proposed to compute the contribution of first-order Compton- and Rayleigh-scattered radiation in X-ray imaging. This algorithm has been implemented in a simulation code named virtual X-ray imaging. The physical models chosen to account for photon scattering are the well-known form factor and incoherent scattering function approximations, which are recalled in this paper and whose limits of validity are briefly discussed. The proposed algorithm, based on a voxel discretization of the inspected object, is presented in detail, as well as its results in simple configurations, which are shown to converge when the sampling steps are chosen sufficiently small. Simple criteria for choosing correct sampling steps (voxel and pixel size) are established. The order of magnitude of the computation time necessary to simulate first-order scattering images amounts to hours with a PC architecture and can even be decreased to minutes if only a profile is computed (along a linear detector). Finally, the results obtained with the proposed algorithm are compared to the ones given by the Monte Carlo code Geant4 and found to be in excellent agreement, which constitutes a validation of our algorithm. The advantages and drawbacks of the proposed deterministic method versus the Monte Carlo method are briefly discussed.

  18. What is Quantum Mechanics? A Minimal Formulation

    Science.gov (United States)

    Friedberg, R.; Hohenberg, P. C.

    2018-03-01

    This paper presents a minimal formulation of nonrelativistic quantum mechanics, by which is meant a formulation which describes the theory in a succinct, self-contained, clear, unambiguous and of course correct manner. The bulk of the presentation is the so-called "microscopic theory", applicable to any closed system S of arbitrary size N, using concepts referring to S alone, without resort to external apparatus or external agents. An example of a similar minimal microscopic theory is the standard formulation of classical mechanics, which serves as the template for a minimal quantum theory. The only substantive assumption required is the replacement of the classical Euclidean phase space by Hilbert space in the quantum case, with the attendant all-important phenomenon of quantum incompatibility. Two fundamental theorems of Hilbert space, the Kochen-Specker-Bell theorem and Gleason's theorem, then lead inevitably to the well-known Born probability rule. For both classical and quantum mechanics, questions of physical implementation and experimental verification of the predictions of the theories are the domain of the macroscopic theory, which is argued to be a special case or application of the more general microscopic theory.

  19. MIRD Commentary: Proposed Name for a Dosimetry Unit Applicable to Deterministic Biological Effects-The Barendsen (Bd)

    International Nuclear Information System (INIS)

    Sgouros, George; Howell, R. W.; Bolch, Wesley E.; Fisher, Darrell R.

    2009-01-01

    The fundamental physical quantity for relating all biologic effects to radiation exposure is the absorbed dose, the energy imparted per unit mass of tissue. Absorbed dose is expressed in units of joules per kilogram (J/kg) and is given the special name gray (Gy). Exposure to ionizing radiation may cause both deterministic and stochastic biologic effects. To account for the relative effect per unit absorbed dose that has been observed for different types of radiation, the International Commission on Radiological Protection (ICRP) has established radiation weighting factors for stochastic effects. The product of absorbed dose in Gy and the radiation weighting factor is defined as the equivalent dose. Equivalent dose values are designated by a special named unit, the sievert (Sv). Unlike the situation for stochastic effects, no well-defined formalism and associated special named quantities have been widely adopted for deterministic effects. The therapeutic application of radionuclides and, specifically, α-particle emitters in nuclear medicine has brought to the forefront the need for a well-defined dosimetry formalism applicable to deterministic effects that is accompanied by corresponding special named quantities. This commentary reviews recent proposals related to this issue and concludes with a recommendation to establish a new named quantity.

  20. A deterministic mathematical model for bidirectional excluded flow with Langmuir kinetics.

    Science.gov (United States)

    Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2017-01-01

    In many important cellular processes, including mRNA translation, gene transcription, phosphotransfer, and intracellular transport, biological "particles" move along some kind of "tracks". The motion of these particles can be modeled as a one-dimensional movement along an ordered sequence of sites. The biological particles (e.g., ribosomes or RNAPs) have volume and cannot surpass one another. In some cases, there is a preferred direction of movement along the track, but in general the movement may be bidirectional, and furthermore the particles may attach or detach from various regions along the tracks. We derive a new deterministic mathematical model for such transport phenomena that may be interpreted as a dynamic mean-field approximation of an important model from mechanical statistics called the asymmetric simple exclusion process (ASEP) with Langmuir kinetics. Using tools from the theory of monotone dynamical systems and contraction theory we show that the model admits a unique steady-state, and that every solution converges to this steady-state. Furthermore, we show that the model entrains (or phase locks) to periodic excitations in any of its forward, backward, attachment, or detachment rates. We demonstrate an application of this phenomenological transport model for analyzing ribosome drop off in mRNA translation.
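The mean-field picture described in this abstract lends itself to a compact numerical illustration. The sketch below integrates a bidirectional exclusion chain with Langmuir attachment/detachment by forward Euler; all rates, the chain length, and the boundary terms are illustrative assumptions, not values from the paper. Consistent with the paper's convergence result, trajectories started from different initial occupancies settle on the same steady-state profile.

```python
import numpy as np

def simulate(n=10, fwd=1.0, bwd=0.3, attach=0.1, detach=0.1,
             inflow=0.8, outflow=0.8, rho0=0.5, dt=0.01, steps=20000):
    """Euler integration of a mean-field bidirectional exclusion model with
    Langmuir kinetics (rates here are illustrative, not from the paper)."""
    rho = np.full(n, rho0, dtype=float)  # site occupancy densities in [0, 1]
    for _ in range(steps):
        # net excluded flow from site i to site i+1 (forward minus backward)
        f = fwd * rho[:-1] * (1 - rho[1:]) - bwd * rho[1:] * (1 - rho[:-1])
        drho = np.zeros(n)
        drho[:-1] -= f                      # particles leaving site i
        drho[1:] += f                       # particles arriving at site i+1
        drho[0] += inflow * (1 - rho[0])    # entry at the left boundary
        drho[-1] -= outflow * rho[-1]       # exit at the right boundary
        drho += attach * (1 - rho) - detach * rho   # Langmuir attach/detach
        rho += dt * drho
    return rho
```

Running `simulate` from two different uniform initial densities (e.g. 0.1 and 0.9) yields the same profile to numerical precision, illustrating the global convergence the authors prove via contraction theory.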

  1. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation and the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have been merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
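The distribution-based quantities mentioned above have simple closed forms: SEM = SD·√(1 − r) for test-retest reliability r, and the 95% minimal detectable change MDC95 = 1.96·√2·SEM. A short sketch with hypothetical questionnaire numbers (SD = 10 points, r = 0.91, both illustrative):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

def mdc95(sd, reliability):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2) * sem(sd, reliability)

# hypothetical questionnaire: score SD = 10 points, test-retest r = 0.91
print(sem(10, 0.91))      # ~3.0 points
print(mdc95(10, 0.91))    # ~8.3 points; compare the 0.5 * SD rule: 5.0 points
```

For these illustrative values, an anchor-based MIC smaller than about 8.3 points could not be reliably distinguished from measurement error, which is exactly the comparison of benchmarks the abstract argues for.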

  2. OCA-P, a deterministic and probabilistic fracture-mechanics code for application to pressure vessels

    International Nuclear Information System (INIS)

    Cheverton, R.D.; Ball, D.G.

    1984-05-01

    The OCA-P code is a probabilistic fracture-mechanics code that was prepared specifically for evaluating the integrity of pressurized-water reactor vessels when subjected to overcooling-accident loading conditions. The code has two-dimensional- and some three-dimensional-flaw capability; it is based on linear-elastic fracture mechanics; and it can treat cladding as a discrete region. Both deterministic and probabilistic analyses can be performed. For the former analysis, it is possible to conduct a search for critical values of the fluence and the nil-ductility reference temperature corresponding to incipient initiation of the initial flaw. The probabilistic portion of OCA-P is based on Monte Carlo techniques, and simulated parameters include fluence, flaw depth, fracture toughness, nil-ductility reference temperature, and concentrations of copper, nickel, and phosphorus. Plotting capabilities include the construction of critical-crack-depth diagrams (deterministic analysis) and various histograms (probabilistic analysis).
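The probabilistic branch described above can be caricatured in a few lines. The sketch below is not OCA-P: it samples just two of the simulated parameters (flaw depth and fracture toughness) from hypothetical distributions and counts Monte Carlo trials in which the applied stress intensity K_I = σ·√(πa) exceeds the sampled toughness.

```python
import math
import random

def initiation_probability(n_trials=100_000, stress=200.0, seed=1):
    """Toy Monte Carlo in the spirit of OCA-P's probabilistic branch:
    sample a flaw depth and a fracture toughness, flag crack initiation
    when the applied stress intensity exceeds the toughness. All
    distributions and parameter values are hypothetical, not OCA-P's."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        a = rng.expovariate(1 / 0.01)            # flaw depth [m], mean 10 mm
        k_ic = max(rng.gauss(120.0, 25.0), 1.0)  # toughness [MPa*sqrt(m)]
        k_i = stress * math.sqrt(math.pi * a)    # K_I = sigma * sqrt(pi * a)
        if k_i >= k_ic:
            failures += 1
    return failures / n_trials
```

Because the same seeded sample stream is reused, raising the stress can only enlarge the set of failing trials, so the estimated initiation probability is monotone in the applied stress, mirroring how a real probabilistic fracture-mechanics run responds to a more severe transient.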

  3. Hydrogen atom in momentum space with a minimal length

    International Nuclear Information System (INIS)

    Bouaziz, Djamil; Ferkous, Nourredine

    2010-01-01

    A momentum representation treatment of the hydrogen atom problem with a generalized uncertainty relation, which leads to a minimal length (ΔX_i)_min = ħ√(3β + β′), is presented. We show that the distance squared operator can be factorized in the case β′ = 2β. We analytically solve the s-wave bound-state equation. The leading correction to the energy spectrum caused by the minimal length depends on √β. An upper bound for the minimal length is found to be about 10^-9 fm.

  4. Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal

    OpenAIRE

    Wronna, M.; Omira, R.; Baptista, M. A.

    2015-01-01

    In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This...

  5. A deterministic-approach controller design for electrohydraulic position servo control system

    International Nuclear Information System (INIS)

    Johari Osman

    2000-01-01

    This paper is concerned with the design of a tracking controller for an electrohydraulic position servo system based on a deterministic approach. The system is treated as an uncertain system with bounded uncertainties, where the bounds are assumed known. It will be shown that the electrohydraulic position servo system with the proposed controller is practically stable and tracks the desired position in spite of the uncertainties and nonlinearities present in the system (author)

  6. Performance assessment of deterministic and probabilistic weather predictions for the short-term optimization of a tropical hydropower reservoir

    Science.gov (United States)

    Mainardi Fan, Fernando; Schwanenberg, Dirk; Alvarado, Rodolfo; Assis dos Reis, Alberto; Naumann, Steffi; Collischonn, Walter

    2016-04-01

    Hydropower is the most important electricity source in Brazil. During recent years, it accounted for 60% to 70% of the total electric power supply. Marginal costs of hydropower are lower than for thermal power plants; therefore, there is a strong economic motivation to maximize its share. On the other hand, hydropower depends on the availability of water, which has a natural variability. Its extremes lead to the risks of power production deficits during droughts and safety issues in the reservoir and downstream river reaches during flood events. One building block of the proper management of hydropower assets is the short-term forecast of reservoir inflows as input for an online, event-based optimization of its release strategy. While deterministic forecasts and optimization schemes are the established techniques for short-term reservoir management, the use of probabilistic ensemble forecasts and stochastic optimization techniques receives growing attention, and a number of studies have shown their benefits. The present work reports one of the first hindcasting and closed-loop control experiments for a multi-purpose hydropower reservoir in a tropical region in Brazil. The case study is the hydropower project (HPP) Três Marias, located in southeast Brazil. The HPP reservoir is operated with two main objectives: (i) hydroelectricity generation and (ii) flood control at Pirapora City, located 120 km downstream of the dam. In the experiments, precipitation forecasts based on observed data, deterministic and probabilistic forecasts with 50 ensemble members of the ECMWF are used as forcing of the MGB-IPH hydrological model to generate streamflow forecasts over a period of 2 years. The online optimization relies on deterministic and multi-stage stochastic versions of a model predictive control scheme. Results for the perfect forecasts show the potential benefit of the online optimization and indicate a desired forecast lead time of 30 days. In comparison, the use of

  7. Steiner minimal trees in small neighbourhoods of points in Riemannian manifolds

    Science.gov (United States)

    Chikin, V. M.

    2017-07-01

    In contrast to the Euclidean case, almost no Steiner minimal trees with concrete boundaries on Riemannian manifolds are known. A result describing the types of Steiner minimal trees on a Riemannian manifold for arbitrarily small boundaries is obtained. As a consequence, it is shown that for sufficiently small regular n-gons with n ≥ 7, their boundaries without a longest side are Steiner minimal trees. Bibliography: 22 titles.

  8. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    Science.gov (United States)

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n < N samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram, with δ = n/N and ρ = k/n, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property (with the same phase transition location) holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X, for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
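The sparse-recovery experiment above can be imitated in miniature. The sketch below builds a "Spikes and Sines" dictionary (identity columns plus orthonormal DCT-II columns), one of the deterministic matrices named in the abstract, but substitutes greedy orthogonal matching pursuit for the paper's convex solver; with n = 64 the cross-coherence μ = √(2/n) satisfies μ(2k − 1) < 1 for k = 3, so exact recovery is guaranteed for this instance.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: a greedy stand-in for the convex (l1)
    solver studied in the paper, recovering a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# "Spikes and Sines": identity plus orthonormal DCT-II columns (n=64, N=128)
n = 64
j, t = np.arange(n)[:, None], np.arange(n)[None, :]
C = np.sqrt(2 / n) * np.cos(np.pi * (2 * t + 1) * j / (2 * n))
C[0] /= np.sqrt(2)                 # makes the DCT-II rows orthonormal
A = np.hstack([np.eye(n), C.T])

x0 = np.zeros(2 * n)
x0[[5, n + 7, n + 20]] = [1.0, -0.5, 2.0]   # one spike, two sinusoids
x_hat = omp(A, A @ x0, k=3)
```

Here `x_hat` matches `x0` exactly (up to least-squares round-off), because the low coherence of this deterministic dictionary puts a 3-sparse vector well inside the recoverable regime.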

  9. Deterministic sensitivity analysis for the numerical simulation of contaminants transport

    International Nuclear Information System (INIS)

    Marchand, E.

    2007-12-01

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
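The deterministic ingredient described above, the singular value decomposition of the model derivative, can be illustrated on a toy surrogate. The model and its two parameters below are hypothetical stand-ins (not the ANDRA flow/transport model): the Jacobian of the outputs with respect to the parameters is formed by finite differences, and its leading right singular vector identifies the parameter combination to which the outputs are most sensitive.

```python
import numpy as np

def jacobian(f, p, eps=1e-6):
    """Forward-difference Jacobian of the model outputs w.r.t. parameters."""
    p = np.asarray(p, dtype=float)
    f0 = np.asarray(f(p))
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (np.asarray(f(p + dp)) - f0) / eps
    return J

# hypothetical surrogate: outlet flux at three observation times,
# driven by a permeability-like and a dispersion-like parameter
def model(p):
    perm, disp = p
    times = np.array([1.0, 2.0, 4.0])
    return perm * np.exp(-disp / times)

J = jacobian(model, [2.0, 0.5])
U, s, Vt = np.linalg.svd(J, full_matrices=False)
# Vt[0] is the (unit-norm) parameter direction of greatest local sensitivity
```

The singular values `s` rank the sensitivity directions, which is the local information the deterministic method delivers at a fraction of the cost of a Monte Carlo sweep.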

  10. Use of deterministic methods in survey calculations for criticality problems

    International Nuclear Information System (INIS)

    Hutton, J.L.; Phenix, J.; Course, A.F.

    1991-01-01

    A code package using deterministic methods for solving the Boltzmann Transport equation is the WIMS suite. This has been very successful for a range of situations. In particular it has been used with great success to analyse trends in reactivity with a range of changes in state. The WIMS suite of codes offers a range of methods and is very flexible in the way they can be combined. A wide variety of situations can be modelled, ranging through all the current Thermal Reactor variants to storage systems and items of chemical plant. These methods have recently been enhanced by the introduction of the CACTUS method. This is based on a characteristics technique for solving the Transport equation and has the advantage that complex geometrical situations can be treated. In this paper the basis of the method is outlined and examples of its use are illustrated. In parallel with these developments, the validation for out-of-pile situations has been extended to include experiments with relevance to criticality situations. The paper will summarise this evidence and show how these results point to a partial re-adoption of deterministic methods for some areas of criticality. The paper also presents results to illustrate the use of WIMS in criticality situations and in particular show how it can complement codes such as MONK when used for surveying the reactivity effect due to changes in geometry or materials. (Author)

  11. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.; Asmis, G.J.K.

    1983-01-01

    The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques using probable maximum precipitation and a rainfall-runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques in two river basins at a number of locations and concludes that the uncertainty of the results of both techniques is of the same order of magnitude. However, the use of the unit hydrograph as the rainfall-runoff model may lead in some cases to non-conservative estimates. A distributed non-linear rainfall-runoff model leads to estimates of probable maximum flood flows which are very close to values of flows having a 10^6 to 10^7 year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made, and the extension of the stochastic technique to ungauged sites and other design parameters is discussed.

  12. Deterministic secure communications using two-mode squeezed states

    International Nuclear Information System (INIS)

    Marino, Alberto M.; Stroud, C. R. Jr.

    2006-01-01

    We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state

  13. New methods for solving a vertex p-center problem with uncertain demand-weighted distance: A real case study

    Directory of Open Access Journals (Sweden)

    Javad Nematian

    2015-04-01

    Vertex and p-center problems are two well-known types of the center problem. In this paper, a p-center problem with uncertain demand-weighted distance will be introduced, in which the demands are considered as fuzzy random variables (FRVs) and the objective of the problem is to minimize the maximum distance between a node and its nearest facility. Then, by introducing new methods, the proposed problem is converted to deterministic integer programming (IP) problems, where these methods are obtained through the implementation of the possibility theory and fuzzy random chance-constrained programming (FRCCP). Finally, the proposed methods are applied to locating bicycle stations in the city of Tabriz in Iran as a real case study. The computational results of our study show that these methods can be implemented for the center problem with uncertain frameworks.
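For the crisp (non-fuzzy) special case, the vertex p-center objective stated above, minimizing the maximum distance between a node and its nearest facility, can be solved by brute force on small instances. The distance matrix below is a toy example, not the Tabriz data; the fuzzy random variant of the paper requires the IP reformulations instead.

```python
from itertools import combinations

def p_center(dist, p):
    """Brute-force vertex p-center: choose p facility nodes minimizing the
    maximum node-to-nearest-facility distance (crisp distances only)."""
    nodes = range(len(dist))
    best = None
    for centers in combinations(nodes, p):
        # coverage radius of this candidate facility set
        radius = max(min(dist[i][c] for c in centers) for i in nodes)
        if best is None or radius < best[0]:
            best = (radius, centers)
    return best

# small symmetric toy distance matrix (illustrative)
D = [[0, 4, 7, 9],
     [4, 0, 3, 6],
     [7, 3, 0, 2],
     [9, 6, 2, 0]]
print(p_center(D, 2))   # (3, (0, 2)): facilities at nodes 0 and 2, radius 3
```

Enumeration is exponential in p, which is why realistic instances, fuzzy or crisp, are handed to integer programming solvers as in the paper.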

  14. Autonomous choices among deterministic evolution-laws as source of uncertainty

    Science.gov (United States)

    Trujillo, Leonardo; Meyroneinc, Arnaud; Campos, Kilver; Rendón, Otto; Sigalotti, Leonardo Di G.

    2018-03-01

    We provide evidence of an extreme form of sensitivity to initial conditions in a family of one-dimensional self-ruling dynamical systems. We prove that some hyperchaotic sequences are closed-form expressions of the orbits of these pseudo-random dynamical systems. Each chaotic system in this family exhibits a sensitivity to initial conditions that encompasses the sequence of choices of the evolution rule in some collection of maps. This opens a possibility to extend current theories of complex behaviors on the basis of intrinsic uncertainty in deterministic chaos.

  15. Enhanced deterministic phase retrieval using a partially developed speckle field

    DEFF Research Database (Denmark)

    Almoro, Percival F.; Waller, Laura; Agour, Mostafa

    2012-01-01

    A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle intensity measurements are recorded at the output plane corresponding to axially-propagated representations of the PDSF in the input plane. The speckle intensity measurements are then used in a conventional transport of intensity equation (TIE) to reconstruct directly the test wavefront. The PDSF in our...

  16. Electronic spectrum of a deterministic single-donor device in silicon

    International Nuclear Information System (INIS)

    Fuechsle, Martin; Miwa, Jill A.; Mahapatra, Suddhasatta; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2013-01-01

    We report the fabrication of a single-electron transistor (SET) based on an individual phosphorus dopant that is deterministically positioned between the dopant-based electrodes of a transport device in silicon. Electronic characterization at mK-temperatures reveals a charging energy that is very similar to the value expected for isolated P donors in a bulk Si environment. Furthermore, we find indications for bulk-like one-electron excited states in the co-tunneling spectrum of the device, in sharp contrast to previous reports on transport through single dopants.

  17. On the effect of deterministic terms on the bias in stable AR models

    NARCIS (Netherlands)

    van Giersbergen, N.P.A.

    2004-01-01

    This paper compares the first-order bias approximation for the autoregressive (AR) coefficients in stable AR models in the presence of deterministic terms. It is shown that the bias due to inclusion of an intercept and trend is twice as large as the bias due to an intercept. For the AR(1) model, the

  18. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  19. Review of the Monte Carlo and deterministic codes in radiation protection and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Tagziria, H

    2000-02-01

    A drawback of the Monte Carlo technique is that solutions are given at specific locations only, fluctuate statistically, and are arrived at with considerable computational effort. Sooner rather than later, however, one would expect powerful variance-reduction techniques and ever-faster processors to balance these disadvantages out. This is especially true considering the rapid advances in computer technology and parallel computers, which can achieve a 300-fold faster convergence. In many fields and cases, however, the user would benefit greatly by considering, when possible, alternative methods to the Monte Carlo technique, such as deterministic methods, at least as a way of validation. It can be shown, in fact, that for less complex problems a deterministic approach can have many advantages. In its earlier manifestations, Monte Carlo simulation was primarily performed by experts who were intimately involved in the development of the computer code. Increasingly, however, codes are being supplied as relatively user-friendly packages for widespread use, which allows them to be used by those with less specialist knowledge. This enables them to be used as 'black boxes', which in turn provides scope for costly errors, especially in the choice of cross-section data and acceleration techniques. The Monte Carlo method as employed with modern computers goes back several decades, and nowadays science and software libraries would be virtually empty if one excluded work that is either directly or indirectly related to this technique. This is specifically true in the fields of 'computational dosimetry', 'radiation protection' and radiation transport in general. Hundreds of codes have been written and applied with various degrees of success. Some of these have become trademarks, generally well supported and adopted by thousands of users. Other codes, which should be encouraged, are the so-called in-house codes, which still serve their developers well
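
    The trade-off described in this record, statistically fluctuating Monte Carlo estimates versus a noise-free deterministic method on the same problem, can be illustrated on a toy one-dimensional integral. This is not a transport code; the exponential integrand is an arbitrary stand-in.

```python
import math
import random

def mc_estimate(f, a, b, n, seed=0):
    """Monte Carlo mean-value estimate of the integral of f over [a, b]:
    statistically fluctuating, converging as 1/sqrt(n)."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

def trapezoid(f, a, b, n):
    """Deterministic composite trapezoid rule: noise-free, with a
    systematic discretization error of order 1/n**2."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

f = lambda x: math.exp(-x)            # toy stand-in for a transport kernel
exact = 1.0 - math.exp(-1.0)          # integral of exp(-x) over [0, 1]
mc = mc_estimate(f, 0.0, 1.0, 10000)
det = trapezoid(f, 0.0, 1.0, 1000)
```

    With comparable work, the deterministic rule is accurate to about 1e-7 while the Monte Carlo estimate still carries a statistical error of order 1e-3.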

  20. Minimally invasive treatment of pilon fractures with a low profile plate: preliminary results in 17 cases.

    Science.gov (United States)

    Borens, Olivier; Kloen, Peter; Richmond, Jeffrey; Roederer, Goetz; Levine, David S; Helfet, David L

    2009-05-01

    To determine the results of "biologic fixation" with a minimally invasive plating technique using a newly designed low-profile "Scallop" plate in the treatment of pilon fractures. Retrospective case series. A tertiary referral center. Seventeen patients were treated between 1999 and 2001 for a tibial plafond fracture at the Hospital for Special Surgery with a newly designed low-profile plate. Eleven of the fractures (65%) were high-energy injuries. Two fractures were open. Staged surgical treatment with open reduction and fixation of the fibular fracture and application of an external fixator was performed in 12 cases. As soon as the soft tissues and swelling allowed, i.e., skin wrinkling, the articular surface was reconstructed and reduced, if necessary through a small incision, and the articular block was fixed to the diaphysis using a medially placed, percutaneously introduced flat scallop plate. In the remaining five cases the operation was performed in one session. Time to healing and complications, including delayed union, non-union, instrument failure, loss of fixation, and infection, as well as quality of reduction and number of reoperations, were evaluated. Quality of results and outcome were graded using the ankle-hindfoot scale and a modified rating system. All patients went on to bony union at an average time of 14 weeks. There were no plate failures or loss of fixation/reduction. Two superficial wound-healing problems resolved with local wound care. At an average follow-up of 17 months (range 6-29 months), eight patients (47%) had an excellent result, seven (41%) had a fair result, whereas two (12%) had a poor result. The average ankle-hindfoot score was 86.1 (range 61-100). Four patients have had the hardware removed and one of them is awaiting an ankle arthrodesis. Based on these initial results, it appears that a minimally invasive surgical technique including a new low-profile plate can decrease soft tissue problems while leading to fracture healing and

  1. Flow Forecasting using Deterministic Updating of Water Levels in Distributed Hydrodynamic Urban Drainage Models

    DEFF Research Database (Denmark)

    Hansen, Lisbet Sneftrup; Borup, Morten; Moller, Arne

    2014-01-01

    drainage models and reduce a number of unavoidable discrepancies between the model and reality. The latter can be achieved partly by inserting measured water levels from the sewer system into the model. This article describes how deterministic updating of model states in this manner affects a simulation...

  2. How the growth rate of host cells affects cancer risk in a deterministic way

    Science.gov (United States)

    Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe

    2017-09-01

    It is well known that cancers are significantly more often encountered in some tissues than in other ones. In this paper, by using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependency of tumor growth on parameter values characterizing the type as well as the state of the tissue considered due to the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells or "bad luck". Strategies to limit the mortality induced by cancer could therefore be soundly based on improving the way of life, that is, on better preserving the tissue where mutant cells randomly arise.
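
    A minimal sketch of the kind of three-population host/immune/tumor model the abstract describes, integrated with explicit Euler steps. All parameter values and functional forms are illustrative, not those of the paper.

```python
def simulate(rho_host, steps=5000, dt=0.01):
    """Euler integration of a toy host/immune/tumor competition model.
    rho_host is the growth rate of the nesting (host) tissue; all other
    coefficients are illustrative placeholders."""
    H, E, T = 1.0, 0.2, 0.1   # host, effector immune, tumor densities
    for _ in range(steps):
        dH = rho_host * H * (1.0 - H) - 0.5 * H * T        # logistic host, killed by tumor
        dE = 0.05 + 0.6 * E * T / (1.0 + T) - 0.2 * E      # recruited by tumor, natural decay
        dT = 0.8 * T * (1.0 - T) - 0.4 * E * T - 0.3 * H * T  # logistic tumor, suppressed by E and H
        # clamp at zero: densities cannot become negative
        H = max(H + dt * dH, 0.0)
        E = max(E + dt * dE, 0.0)
        T = max(T + dt * dT, 0.0)
    return H, E, T

low = simulate(0.3)    # slowly renewing tissue
high = simulate(1.2)   # fast-growing tissue
```

    Scanning rho_host in a model of this type is the deterministic analogue of comparing tumor counts across tissues with different cell growth rates.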

  3. Minimal knotted polygons in cubic lattices

    International Nuclear Information System (INIS)

    Van Rensburg, E J Janse; Rechnitzer, A

    2011-01-01

    In this paper we examine numerically the properties of minimal length knotted lattice polygons in the simple cubic, face-centered cubic, and body-centered cubic lattices by sieving minimal length polygons from a data stream of a Monte Carlo algorithm, implemented as described in Aragão de Carvalho and Caracciolo (1983 Phys. Rev. B 27 1635), Aragão de Carvalho et al (1983 Nucl. Phys. B 215 209) and Berg and Foester (1981 Phys. Lett. B 106 323). The entropy, mean writhe, and mean curvature of minimal length polygons are computed (in some cases exactly). While the minimal length and mean curvature are found to be lattice dependent, the mean writhe is found to be only weakly dependent on the lattice type. Comparison of our results to numerical results for the writhe obtained elsewhere (see Janse van Rensburg et al 1999 Contributed to Ideal Knots (Series on Knots and Everything vol 19) ed Stasiak, Katritch and Kauffman (Singapore: World Scientific), Portillo et al 2011 J. Phys. A: Math. Theor. 44 275004) shows that the mean writhe is also insensitive to the length of a knotted polygon. Thus, while these results for the mean writhe and mean absolute writhe at minimal length are not universal, our results demonstrate that these values are quite close to those of long polygons regardless of the underlying lattice and length

  4. Deterministic or Probabilistic - Robustness or Resilience: How to Respond to Climate Change?

    Science.gov (United States)

    Plag, H.; Earnest, D.; Jules-Plag, S.

    2013-12-01

    Our response to climate change is dominated by a deterministic approach that emphasizes the interaction between only the natural and the built environment. But in the non-ergodic world of unprecedented climate change, social factors drive recovery from unforeseen Black Swans much more than natural or built ones. In particular, the sea level rise discussion focuses on deterministic predictions, accounting for uncertainties in major driving processes with a set of forcing scenarios and public deliberations on which of the plausible trajectories is most likely. Science focuses on the prediction of future climate change, and policies focus on mitigation of both climate change itself and its impacts. The deterministic approach is based on two basic assumptions: 1) climate change is an ergodic process; 2) the urban coast is a robust system. Evidence suggests that these assumptions may not hold. Anthropogenic changes are pushing key parameters of the climate system outside of the natural range of variability of the last 1 million years, creating the potential for environmental Black Swans. A probabilistic approach allows for non-ergodic processes and focuses more on resilience, and hence does not depend on the two assumptions. Recent experience with hurricanes revealed threshold limitations of the built environment of the urban coast, which, once exceeded, brought to the forefront the importance of the social fabric and social networking in evaluating resilience. Resilience strongly depends on social capital, and building social capital that can create resilience must be a key element in our response to climate change. Although social capital cannot mitigate hazards, social scientists have found that communities rich in strong norms of cooperation recover more quickly than communities without social capital. There is growing evidence that the built environment can affect the social capital of a community, for example public health and perceptions of public safety. This

  5. A Novel Minimally Invasive Reduction Technique by Balloon and Distractor for Intra-Articular Calcaneal Fractures: A Report of 2 Cases

    Directory of Open Access Journals (Sweden)

    M. Prod’homme

    2018-01-01

    Treatment of displaced intra-articular fractures of the calcaneus remains a challenge for the orthopaedic surgeon. Conservative therapy is known to produce functional impairment, while the surgical approach is plagued by soft-tissue complications and insufficient fracture reduction. We describe a minimally invasive technique that will hopefully improve these issues, and we present our first experience through two cases. The first was a 46-year-old man who presented with a Sanders type IIBC calcaneal fracture, and the second was an 86-year-old woman with a type IIIBC calcaneal fracture. We introduced two Schanz screws in the talus and the calcaneus. After distraction, we introduced an inflatable balloon inside the calcaneus. By inflating the balloon, the articular surface was reduced by lifting it up. Bone cement was then injected in order to maintain the reduction. Additional screw fixation was used in the young patient. Postoperative imaging showed good congruence of the subtalar joint without leakage of cement in both cases. After 2 months, the patients had no pain and no soft-tissue complications. We advocate this technique for minimally invasive reduction and fixation of intra-articular calcaneal fractures because it preserves the soft tissues and provides good clinical results with early weight-bearing.

  6. Deterministic global optimization algorithm based on outer approximation for the parameter estimation of nonlinear dynamic biological systems.

    Science.gov (United States)

    Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano

    2012-05-10

    The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima into which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to the global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.
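
    The master/slave bounding loop described above can be illustrated with a much simpler deterministic global optimizer: a one-variable Lipschitz branch-and-bound, which likewise alternates between a rigorous lower bound and an incumbent upper bound until the optimality gap closes. This is an analogy for the bounding structure, not the paper's outer-approximation algorithm.

```python
import heapq

def lipschitz_minimize(f, a, b, L, tol=1e-3, max_iter=200000):
    """Deterministic global minimization of f on [a, b], given a valid
    Lipschitz constant L. A priority queue of intervals plays the role
    of the 'master' lower-bounding step; evaluating f at midpoints plays
    the role of the 'slave' upper-bounding step."""
    def lower(lo, hi):
        # rigorous lower bound on f over [lo, hi] from the Lipschitz constant
        return f(0.5 * (lo + hi)) - 0.5 * L * (hi - lo)

    best_x, best_f = a, f(a)          # incumbent = rigorous upper bound
    if f(b) < best_f:
        best_x, best_f = b, f(b)
    heap = [(lower(a, b), a, b)]
    for _ in range(max_iter):
        lb, lo, hi = heapq.heappop(heap)   # interval with smallest lower bound
        if best_f - lb < tol:              # optimality gap closed: done
            break
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if fm < best_f:                    # improve the incumbent
            best_x, best_f = mid, fm
        heapq.heappush(heap, (lower(lo, mid), lo, mid))
        heapq.heappush(heap, (lower(mid, hi), mid, hi))
    return best_x, best_f

# Nonconvex test function with two global minima at x = +/-1, f = 0.
x_opt, f_opt = lipschitz_minimize(lambda x: (x * x - 1.0) ** 2, -2.0, 2.0, L=24.0)
```

    Unlike a local descent method, the returned value is guaranteed to be within `tol` of the global minimum, mirroring the gap guarantee of the MILP/NLP decomposition.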

  7. Gravitational-Wave Data Analysis. Formalism and Sample Applications: The Gaussian Case

    Directory of Open Access Journals (Sweden)

    Królak Andrzej

    2005-03-01

    The article reviews the statistical theory of signal detection in application to analysis of deterministic gravitational-wave signals in the noise of a detector. Statistical foundations for the theory of signal detection and parameter estimation are presented. Several tools needed for both theoretical evaluation of the optimal data analysis methods and for their practical implementation are introduced. They include optimal signal-to-noise ratio, Fisher matrix, false alarm and detection probabilities, F-statistic, template placement, and fitting factor. These tools apply to the case of signals buried in a stationary and Gaussian noise. Algorithms to efficiently implement the optimal data analysis techniques are discussed. Formulas are given for a general gravitational-wave signal that includes as special cases most of the deterministic signals of interest.

  8. Gravitational-Wave Data Analysis. Formalism and Sample Applications: The Gaussian Case

    Directory of Open Access Journals (Sweden)

    Piotr Jaranowski

    2012-03-01

    The article reviews the statistical theory of signal detection in application to analysis of deterministic gravitational-wave signals in the noise of a detector. Statistical foundations for the theory of signal detection and parameter estimation are presented. Several tools needed for both theoretical evaluation of the optimal data analysis methods and for their practical implementation are introduced. They include optimal signal-to-noise ratio, Fisher matrix, false alarm and detection probabilities, ℱ-statistic, template placement, and fitting factor. These tools apply to the case of signals buried in a stationary and Gaussian noise. Algorithms to efficiently implement the optimal data analysis techniques are discussed. Formulas are given for a general gravitational-wave signal that includes as special cases most of the deterministic signals of interest.
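
    A minimal sketch of the matched-filter (optimal signal-to-noise ratio) statistic for a known deterministic template in white Gaussian noise, the simplest special case of the formalism reviewed here. The sinusoid-with-Gaussian-envelope template and the noise level are arbitrary illustrations.

```python
import math
import random

def matched_filter_snr(data, template, sigma):
    """Matched-filter statistic for a known deterministic template h in
    white Gaussian noise of standard deviation sigma. Normalized so that
    pure noise gives a standard normal deviate, while a signal of
    amplitude A gives mean A * sqrt(sum h^2) / sigma."""
    h2 = sum(h * h for h in template)
    corr = sum(d * h for d, h in zip(data, template))
    return corr / (sigma * math.sqrt(h2))

rng = random.Random(1)
n, sigma = 4096, 1.0
# toy "chirp-less" template: sinusoid under a Gaussian envelope
template = [math.sin(0.2 * i) * math.exp(-((i - n / 2) ** 2) / (2 * 300.0 ** 2))
            for i in range(n)]
amp = 0.5
signal_data = [amp * template[i] + rng.gauss(0.0, sigma) for i in range(n)]
noise_data = [rng.gauss(0.0, sigma) for _ in range(n)]

snr_signal = matched_filter_snr(signal_data, template, sigma)
snr_noise = matched_filter_snr(noise_data, template, sigma)
```

    Comparing `snr_signal` against a threshold set by the desired false-alarm probability is the decision rule whose optimality the review establishes.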

  9. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    Science.gov (United States)

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  10. A Hybrid Genetic Algorithm to Minimize Total Tardiness for Unrelated Parallel Machine Scheduling with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Chunfeng Liu

    2013-01-01

    The paper presents a novel hybrid genetic algorithm (HGA) for a deterministic scheduling problem where multiple jobs with arbitrary precedence constraints are processed on multiple unrelated parallel machines. The objective is to minimize total tardiness, since delays of the jobs may lead to punishment cost or cancellation of orders by the clients in many situations. A priority rule-based heuristic algorithm, which schedules a prior job on a prior machine according to the priority rule at each iteration, is suggested and embedded in the HGA to produce initial feasible schedules that can be improved in later stages. Computational experiments are conducted to show that the proposed HGA performs well with respect to accuracy and efficiency of solution for small-sized problems and achieves better results than the conventional genetic algorithm within the same runtime for large-sized problems.
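
    A priority rule-based constructive heuristic of the kind used for the initial schedules can be sketched as follows. The earliest-due-date priority and earliest-completion machine choice here are illustrative assumptions, not necessarily the paper's exact rule.

```python
def priority_schedule(proc, due, preds):
    """Priority-rule heuristic for unrelated parallel machines with
    precedence constraints: repeatedly pick, among jobs whose
    predecessors are finished, the one with the earliest due date, and
    place it on the machine that completes it earliest. proc[j][k] is
    the processing time of job j on machine k (unrelated machines).
    Returns (total tardiness, completion times)."""
    n, m = len(proc), len(proc[0])
    free = [0.0] * m                 # machine release times
    done = {}                        # job -> completion time
    while len(done) < n:
        ready = [j for j in range(n) if j not in done
                 and all(p in done for p in preds.get(j, []))]
        j = min(ready, key=lambda j: due[j])        # EDD priority rule
        # earliest start respects finished predecessors
        est = max([done[p] for p in preds.get(j, [])] or [0.0])
        k = min(range(m), key=lambda k: max(free[k], est) + proc[j][k])
        finish = max(free[k], est) + proc[j][k]
        free[k] = finish
        done[j] = finish
    return sum(max(0.0, done[j] - due[j]) for j in done), done

# 3 jobs x 2 machines; job 2 must follow job 0
total, completion = priority_schedule(
    proc=[[3, 2], [2, 4], [4, 1]],
    due=[5, 4, 6],
    preds={2: [0]},
)
```

    In the HGA setting, schedules like this seed the initial population, which crossover and mutation then improve.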

  11. Deterministic and stochastic control of chimera states in delayed feedback oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Semenov, V. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Zakharova, A.; Schöll, E. [Institut für Theoretische Physik, TU Berlin, Hardenbergstraße 36, 10623 Berlin (Germany); Maistrenko, Y. [Institute of Mathematics and Center for Medical and Biotechnical Research, NAS of Ukraine, Tereschenkivska Str. 3, 01601 Kyiv (Ukraine)

    2016-06-08

    Chimera states, characterized by the coexistence of regular and chaotic dynamics, are found in a nonlinear oscillator model with negative time-delayed feedback. The control of these chimera states by external periodic forcing is demonstrated by numerical simulations. Both deterministic and stochastic external periodic forcing are considered. It is shown that multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. The constructive role of noise in the formation of chimera states is shown.

  12. Development of a First-of-a-Kind Deterministic Decision-Making Tool for Supervisory Control System

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Sacit M [ORNL; Kisner, Roger A [ORNL; Muhlheim, Michael David [ORNL; Fugate, David L [ORNL

    2015-07-01

    Decision-making is the process of identifying and choosing alternatives where each alternative offers a different approach or path to move from a given state or condition to a desired state or condition. The generation of consistent decisions requires that a structured, coherent process be defined, immediately leading to a decision-making framework. The overall objective of the generalized framework is for it to be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks reliably. Risk-informed decision making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment of the relationship between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The implementation of the probabilistic portion of the decision-making engine of the proposed supervisory control system was detailed in previous milestone reports. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic multi-attribute decision-making framework uses variable sensor data (e.g., outlet temperature) and calculates where it is within the challenge state, its trajectory, and margin within the controllable domain, using utility functions to evaluate current and projected plant state space for different control decisions. Metrics to be evaluated include stability, cost, time to complete (action), power level, etc. The
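
    The multi-attribute ranking step can be sketched as a weighted-utility sort. The attribute names, weights, and candidate control options below are hypothetical illustrations, not those of the ORNL platform.

```python
def rank_options(options, weights):
    """Multi-attribute utility ranking: each option maps attribute names
    to utilities in [0, 1]; the score is the weighted sum, and options
    are returned best-first."""
    def score(opt):
        return sum(weights[a] * opt[a] for a in weights)
    return sorted(options, key=score, reverse=True)

# hypothetical attributes and weights (must sum to 1 for a convex score)
weights = {"stability": 0.4, "margin": 0.3, "cost": 0.2, "time": 0.1}
options = [
    {"name": "reduce_power", "stability": 0.9, "margin": 0.8, "cost": 0.4, "time": 0.6},
    {"name": "hold_state",   "stability": 0.6, "margin": 0.5, "cost": 0.9, "time": 0.9},
]
best = rank_options(options, weights)[0]["name"]
```

    A deterministic sort of this kind always produces the same ranking for the same sensor-derived utilities, which is what makes the supervisory decision reproducible.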

  13. A Study on Efficiency Improvement of the Hybrid Monte Carlo/Deterministic Method for Global Transport Problems

    International Nuclear Information System (INIS)

    Kim, Jong Woo; Woo, Myeong Hyeon; Kim, Jae Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung

    2017-01-01

    In this study, the hybrid Monte Carlo/deterministic method is explained for radiation transport analysis in a global system. The FW-CADIS methodology constructs the weight-window parameters and is useful for most global MC calculations. However, due to the assumption that a particle is scored at a tally, fewer particles are transported to the periphery of the mesh tallies. To compensate for this space dependency, we modified a module in the ADVANTG code to add the proposed method. We solved a simple test problem for comparison with the result from the FW-CADIS methodology, and it was confirmed that a uniform statistical error was secured as intended. In the future, more practical problems will be added. It might be useful to perform radiation transport analysis using the hybrid Monte Carlo/deterministic method in global transport problems.
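
    The weight-window mechanism at the heart of hybrid MC/deterministic methods, splitting heavy particles and playing Russian roulette with light ones while preserving the expected total weight, can be sketched as:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Weight-window check for one particle: split above the window,
    roulette below it, pass through inside it. Returns the list of
    surviving particle weights; the expected total weight equals the
    input weight, so the game is unbiased."""
    if weight > w_high:                      # split into n lighter copies
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                       # Russian roulette
        w_survive = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_survive:
            return [w_survive]               # survives with boosted weight
        return []                            # killed
    return [weight]                          # inside the window: unchanged

rng = random.Random(123)
inside = apply_weight_window(1.0, 0.5, 2.0, rng)     # passes through
parts = apply_weight_window(5.0, 0.5, 2.0, rng)      # split, weight conserved
# empirical unbiasedness check for the roulette branch
avg = sum(sum(apply_weight_window(0.1, 0.5, 2.0, rng))
          for _ in range(20000)) / 20000
```

    FW-CADIS obtains the window bounds `w_low`/`w_high` per mesh cell from a deterministic adjoint solution; the game itself is the simple rule above.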

  14. Hazardous waste minimization tracking system

    International Nuclear Information System (INIS)

    Railan, R.

    1994-01-01

    Under RCRA sections 3002(b) and 3005(h), hazardous waste generators and owners/operators of treatment, storage, and disposal facilities (TSDFs) are required to certify that they have a program in place to reduce the volume or quantity and toxicity of hazardous waste to the degree determined to be economically practicable. In many cases there are environmental as well as economic benefits for agencies that pursue pollution prevention options. Several state governments have already enacted waste minimization legislation (e.g., the Massachusetts Toxic Use Reduction Act of 1989, and the Oregon Toxic Use Reduction Act and Hazardous Waste Reduction Act of July 2, 1989). About twenty-six other states have established legislation that will mandate some type of waste minimization program and/or facility planning. The need to address the HAZMIN (Hazardous Waste Minimization) Program at government agencies and private industries has prompted us to identify the importance of managing the HAZMIN Program and of tracking various aspects of the program, as well as the progress made in this area. "WASTE" is a tracking system which can be used and modified to maintain the information related to a Hazardous Waste Minimization Program in a manageable fashion. The program maintains, modifies, and retrieves information related to hazardous waste minimization and recycling, and provides automated report-generating capabilities. It has a built-in menu, which can be printed either in part or in full. There are instructions on preparing the Annual Waste Report and the Annual Recycling Report. The program is very user-friendly and is available on 3.5 inch or 5 1/4 inch floppy disks. A computer with 640K memory is required

  15. Measures of thermodynamic irreversibility in deterministic and stochastic dynamics

    International Nuclear Information System (INIS)

    Ford, Ian J

    2015-01-01

    It is generally observed that if a dynamical system is sufficiently complex, then as time progresses it will share out energy and other properties amongst its component parts to eliminate any initial imbalances, retaining only fluctuations. This is known as energy dissipation and it is closely associated with the concept of thermodynamic irreversibility, measured by the increase in entropy according to the second law. It is of interest to quantify such behaviour from a dynamical rather than a thermodynamic perspective and to this end stochastic entropy production and the time-integrated dissipation function have been introduced as analogous measures of irreversibility, principally for stochastic and deterministic dynamics, respectively. We seek to compare these measures. First we modify the dissipation function to allow it to measure irreversibility in situations where the initial probability density function (pdf) of the system is asymmetric as well as symmetric in velocity. We propose that it tests for failure of what we call the obversibility of the system, to be contrasted with reversibility, the failure of which is assessed by stochastic entropy production. We note that the essential difference between stochastic entropy production and the time-integrated modified dissipation function lies in the sequence of procedures undertaken in the associated tests of irreversibility. We argue that an assumed symmetry of the initial pdf with respect to velocity inversion (within a framework of deterministic dynamics) can be incompatible with the Past Hypothesis, according to which there should be a statistical distinction between the behaviour of certain properties of an isolated system as it evolves into the far future and the remote past. Imposing symmetry on a velocity distribution is acceptable for many applications of statistical physics, but can introduce difficulties when discussing irreversible behaviour. (paper)

  16. Realization of deterministic quantum teleportation with solid state qubits

    International Nuclear Information System (INIS)

    Andreas Wallfraff

    2014-01-01

    Using modern micro and nano-fabrication techniques combined with superconducting materials we realize electronic circuits the dynamics of which are governed by the laws of quantum mechanics. Making use of the strong interaction of photons with superconducting quantum two-level systems realized in these circuits we investigate both fundamental quantum effects of light and applications in quantum information processing. In this talk I will discuss the deterministic teleportation of a quantum state in a macroscopic quantum system. Teleportation may be used for distributing entanglement between distant qubits in a quantum network and for realizing universal and fault-tolerant quantum computation. Previously, we have demonstrated the implementation of a teleportation protocol, up to the single-shot measurement step, with three superconducting qubits coupled to a single microwave resonator. Using full quantum state tomography and calculating the projection of the measured density matrix onto the basis of two qubits has allowed us to reconstruct the teleported state with an average output state fidelity of 86%. Now we have realized a new device in which four qubits are coupled pair-wise to three resonators. Making use of parametric amplifiers coupled to the output of two of the resonators, we are able to perform high-fidelity single-shot read-out. This has allowed us to demonstrate teleportation by individually post-selecting on any Bell state and by deterministically distinguishing between all four Bell states measured by the sender. In addition, we have recently implemented fast feed-forward to complete the teleportation process. In all instances, we demonstrate that the fidelities of the teleported states are above the threshold imposed by classical physics. The presented experiments are expected to contribute towards realizing quantum communication with microwave photons in the foreseeable future. (author)
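
    The deterministic character of the protocol, in which feed-forward Pauli corrections make every Bell-measurement outcome yield the input state, can be checked in a small statevector simulation. This is a textbook three-qubit teleportation sketch, not the superconducting-circuit implementation.

```python
import math

s2 = 1.0 / math.sqrt(2.0)

def teleport(alpha, beta):
    """Statevector simulation of standard teleportation: qubit 0 carries
    the unknown state (alpha, beta), qubits 1 and 2 share a Bell pair.
    Returns the corrected receiver state for each of the four
    Bell-measurement outcomes; with feed-forward all four match."""
    # amplitudes indexed by (q0, q1, q2)
    amp = {}
    for q0 in (0, 1):
        for q1 in (0, 1):
            for q2 in (0, 1):
                coef = alpha if q0 == 0 else beta
                bell = s2 if q1 == q2 else 0.0
                amp[(q0, q1, q2)] = coef * bell
    # CNOT with control q0, target q1 (relabel basis states)
    amp = {(q0, q1 ^ q0, q2): v for (q0, q1, q2), v in amp.items()}
    # Hadamard on q0
    new = {}
    for q1 in (0, 1):
        for q2 in (0, 1):
            a0, a1 = amp[(0, q1, q2)], amp[(1, q1, q2)]
            new[(0, q1, q2)] = s2 * (a0 + a1)
            new[(1, q1, q2)] = s2 * (a0 - a1)
    amp = new
    outcomes = {}
    for m0 in (0, 1):
        for m1 in (0, 1):
            v0, v1 = amp[(m0, m1, 0)], amp[(m0, m1, 1)]
            if m1 == 1:            # feed-forward X correction
                v0, v1 = v1, v0
            if m0 == 1:            # feed-forward Z correction
                v1 = -v1
            norm = math.sqrt(abs(v0) ** 2 + abs(v1) ** 2)
            outcomes[(m0, m1)] = (v0 / norm, v1 / norm)
    return outcomes

outcomes = teleport(0.6, 0.8)
```

    Every one of the four measurement branches returns (0.6, 0.8), which is exactly why completing the corrections by fast feed-forward makes the protocol deterministic rather than post-selected.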

  17. Deterministic Earthquake Hazard Assessment by Public Agencies in California

    Science.gov (United States)

    Mualchin, L.

    2005-12-01

    Even in its short recorded history, California has experienced a number of damaging earthquakes that have resulted in new codes and other legislation for public safety. In particular, the 1971 San Fernando earthquake produced some of the most lasting results such as the Hospital Safety Act, the Strong Motion Instrumentation Program, the Alquist-Priolo Special Studies Zone Act, and the California Department of Transportation (Caltrans') fault-based deterministic seismic hazard (DSH) map. The latter product provides values for earthquake ground motions based on Maximum Credible Earthquakes (MCEs), defined as the largest earthquakes that can reasonably be expected on faults in the current tectonic regime. For surface fault rupture displacement hazards, detailed study of the same faults applies. Originally, hospital, dam, and other critical facilities used seismic design criteria based on deterministic seismic hazard analyses (DSHA). However, probabilistic methods grew and took hold by introducing earthquake design criteria based on time factors and quantifying "uncertainties" by procedures such as logic trees. These probabilistic seismic hazard analyses (PSHA) ignored the DSH approach. Some agencies were influenced to adopt only the PSHA method. However, deficiencies in the PSHA method are becoming recognized, and the use of the method is now becoming a focus of strong debate. Caltrans is in the process of producing the fourth edition of its DSH map. The reason for preferring the DSH method is that Caltrans believes it is more realistic than the probabilistic method for assessing earthquake hazards that may affect critical facilities, and is the best available method for ensuring public safety. Its time-invariant values help to produce robust design criteria that are soundly based on physical evidence. And it is the method for which there is the least opportunity for unwelcome surprises.

  18. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    Science.gov (United States)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODEs) as the elementary part of the system. To perform analyses, scenes of study are thus generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics are chosen to be chaotic in order to ensure sensitivity to initial conditions, a fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons
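
    A minimal sketch of the scene-generation and aggregation step: an ensemble of identical Rössler cells whose x-observables are spatially averaged into one aggregated signal. The integration scheme, step sizes, and initial conditions are illustrative choices, not those of the study.

```python
def rossler_step(state, dt, a=0.2, b=0.2, c=5.7):
    """One explicit-Euler step of the Rössler system, standing in for a
    single elementary cell of the spatially distributed scene."""
    x, y, z = state
    return (x + dt * (-y - z),
            y + dt * (x + a * y),
            z + dt * (b + z * (x - c)))

def aggregate(n_cells, steps=10000, dt=0.005):
    """Average the x-observable over an ensemble of independent Rössler
    cells started from slightly different initial conditions (the
    'aggregation' case obtained by spatially averaging the signal)."""
    cells = [(1.0 + 0.01 * i, 0.0, 0.0) for i in range(n_cells)]
    series = []
    for _ in range(steps):
        cells = [rossler_step(s, dt) for s in cells]
        series.append(sum(s[0] for s in cells) / n_cells)
    return series

signal = aggregate(4)
```

    Fitting a global model to `signal` and comparing its parameters to those of a single cell is the kind of experiment the abstract describes for the phase-synchronized case.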

  19. The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy

    Directory of Open Access Journals (Sweden)

    Daniel Chicharro

    2018-03-01

    Full Text Available Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we study systematically the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step to more explicitly address the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed.
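    The synergy/redundancy distinction can be made concrete with the standard XOR example: each source alone carries zero information about the target, yet the pair carries one full bit, which any PID must attribute entirely to synergy. The helper functions below are a self-contained sketch of the mutual-information bookkeeping, not an implementation of any particular proposed PID measure.

```python
import itertools
from math import log2

# Joint distribution of (x1, x2, t) with t = x1 XOR x2, sources uniform i.i.d.
p = {}
for x1, x2 in itertools.product([0, 1], repeat=2):
    p[(x1, x2, x1 ^ x2)] = 0.25

def marginal(dist, keep):
    # Marginalize a dict-based joint distribution onto the indices in `keep`
    out = {}
    for outcome, prob in dist.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + prob
    return out

def mutual_info(dist, a_idx, b_idx):
    # I(A;B) in bits between the variable groups indexed by a_idx and b_idx
    pa, pb = marginal(dist, a_idx), marginal(dist, b_idx)
    pab = marginal(dist, a_idx + b_idx)
    return sum(prob * log2(prob / (pa[k[:len(a_idx)]] * pb[k[len(a_idx):]]))
               for k, prob in pab.items() if prob > 0)

print(mutual_info(p, (0,), (2,)))    # 0.0  -> X1 alone says nothing about T
print(mutual_info(p, (1,), (2,)))    # 0.0  -> X2 alone says nothing about T
print(mutual_info(p, (0, 1), (2,)))  # 1.0  -> the pair carries one full bit
```

    Deterministic target-source dependencies like this one are exactly the regime where identity criteria and PID nonnegativity come into tension.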

  20. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  1. Control-oriented approaches to anticipating synchronization of chaotic deterministic ratchets

    Energy Technology Data Exchange (ETDEWEB)

    Xu Shiyun [State Key Lab for Turbulence and Complex Systems, Department of Mechanics and Aerospace Engineering, College of Engineering, Peking University, Beijing 100871 (China)], E-mail: xushiyun@pku.edu.cn; Yang Ying [State Key Lab for Turbulence and Complex Systems, Department of Mechanics and Aerospace Engineering, College of Engineering, Peking University, Beijing 100871 (China)], E-mail: yy@mech.pku.edu.cn; Song Lei [State Key Lab for Turbulence and Complex Systems, Department of Mechanics and Aerospace Engineering, College of Engineering, Peking University, Beijing 100871 (China)

    2009-06-15

    By virtue of techniques derived from nonlinear control system theory, we establish conditions under which one could obtain anticipating synchronization between two periodically driven deterministic ratchets that are able to exhibit directed transport with a finite velocity. Criteria are established in order to guarantee the anticipating synchronization property of such systems as well as to characterize the phase space dynamics of the ratchet transporting behaviors. These results allow one to predict the chaotic directed transport features of particles on a ratchet potential using a copy of the same system that performs as a slave, which is verified through numerical simulation.

  2. Mechanics from Newton's laws to deterministic chaos

    CERN Document Server

    Scheck, Florian

    2018-01-01

    This book covers all topics in mechanics, from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. Compared to the previous editions, the present 6th edition is updated and revised with more explanations, additional examples and problems with solutions, together with new sections on applications in science. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 150 problems ...

  3. Deterministic Evolutionary Trajectories Influence Primary Tumor Growth: TRACERx Renal

    DEFF Research Database (Denmark)

    Turajlic, Samra; Xu, Hang; Litchfield, Kevin

    2018-01-01

    The evolutionary features of clear-cell renal cell carcinoma (ccRCC) have not been systematically studied to date. We analyzed 1,206 primary tumor regions from 101 patients recruited into the multi-center prospective study, TRACERx Renal. We observe up to 30 driver events per tumor and show ... that subclonal diversification is associated with known prognostic parameters. By resolving the patterns of driver event ordering, co-occurrence, and mutual exclusivity at clone level, we show the deterministic nature of clonal evolution. ccRCC can be grouped into seven evolutionary subtypes, ranging from tumors ... outcome. Our insights reconcile the variable clinical behavior of ccRCC and suggest evolutionary potential as a biomarker for both intervention and surveillance ...

  4. Stochastic and deterministic models to evaluate the critical distance of a near surface repository for the disposal of intermediate and low level radioactive wastes

    Energy Technology Data Exchange (ETDEWEB)

    Alves, A.S.M., E-mail: asergi@eletronuclear.gov.br [Eletrobrás Termonuclear – Eletronuclear S.A. , Rua da Candelária 65, 7° andar, GSN.T, 20091-906 Rio de Janeiro, RJ (Brazil); Melo, P.F. Frutuoso e, E-mail: frutuoso@nuclear.ufrj.br [Graduate Program of Nuclear Engineering, COPPE, Federal University of Rio de Janeiro, Av. Horácio Macedo 2030, Bloco G, sala 206, 21941-914 Rio de Janeiro, RJ (Brazil); Passos, E.M., E-mail: epassos@eletronuclear.gov.br [Eletrobrás Termonuclear – Eletronuclear S.A. , Rua da Candelária 65, 7° andar, GSN.T, 20091-906 Rio de Janeiro, RJ (Brazil); Fontes, G.S., E-mail: gsfontes@hotmail.com [Instituto Militar de Engenharia – IME, Praça General Tibúrcio 80, 22290-270 Rio de Janeiro, RJ (Brazil)

    2015-06-15

    Highlights: • The water infiltration scenario is evaluated for a near surface repository. • The main objective is the determination of the critical distance of the repository. • The column liquid height in the repository is governed by an Ito stochastic equation. • Practical results are obtained for the Abadia de Goiás repository in Brazil. - Abstract: The aim of this paper is to present the stochastic and deterministic models developed for the evaluation of the critical distance of a near surface repository for the disposal of intermediate (ILW) and low level (LLW) radioactive wastes. The critical distance of a repository is defined as the distance between the repository and a well in which the water activity concentration is able to cause a radiological dose to a member of the public equal to the dose limit set by the regulatory body. The mathematical models are developed based on the Richards equation for the liquid flow in the porous media and on the solute transport equation in this medium. The release of radioactive material from the repository to the environment is considered through its base and its flow is determined by Darcy's Law. The deterministic model is obtained from the stochastic approach by neglecting the influence of the Gaussian white noise on the rainfall, and the equations are solved analytically with the help of conventional (non-stochastic) calculus. The equations of the stochastic model are solved analytically based on the Ito stochastic calculus and numerically by using the Euler–Maruyama method. The impact on the value of the critical distance of the Abadia de Goiás repository, taken as a case study, is analyzed when the deterministic methodology is replaced by the stochastic one, which is considered more appropriate for modeling rainfall as a stochastic process.
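    The Euler–Maruyama scheme mentioned for the stochastic model advances an Itô SDE dX = a(X) dt + b(X) dW with Gaussian increments of variance dt. The sketch below applies it to a toy Ornstein–Uhlenbeck relaxation of a liquid-column height toward an equilibrium level; this toy drift, and all parameter values, are stand-ins, not the paper's infiltration model.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_max, dt, rng):
    """Integrate dX = drift(X) dt + diffusion(X) dW with the Euler-Maruyama scheme."""
    n = round(t_max / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.standard_normal() * np.sqrt(dt)   # Brownian increment, variance dt
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dw
    return x

# Toy stand-in: Ornstein-Uhlenbeck relaxation of a column height h(t) toward
# h_eq under Gaussian white noise (NOT the paper's Richards-equation model)
theta, h_eq, sigma = 0.5, 1.0, 0.1
rng = np.random.default_rng(42)
path = euler_maruyama(lambda h: theta * (h_eq - h),
                      lambda h: sigma,
                      x0=0.0, t_max=50.0, dt=0.01, rng=rng)
print(len(path))   # 5001 samples; the path settles near h_eq = 1.0
```

    Setting `sigma = 0` recovers the deterministic model, mirroring the paper's construction of the deterministic case by neglecting the white noise.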

  5. Efficiency of transport in periodic potentials: dichotomous noise contra deterministic force

    Science.gov (United States)

    Spiechowicz, J.; Łuczka, J.; Machura, L.

    2016-05-01

    We study the transport of an inertial Brownian particle moving in a symmetric and periodic one-dimensional potential, and subjected to both a symmetric, unbiased external harmonic force as well as biased dichotomous noise η(t), also known as a random telegraph signal or a two-state continuous-time Markov process. In doing so, we concentrate on the previously reported regime (Spiechowicz et al 2014 Phys. Rev. E 90 032104) for which non-negative biased noise η(t) in the form of generalized white Poissonian noise can induce anomalous transport processes similar to those generated by a deterministic constant force F, but significantly more effective than F, i.e. the particle moves much faster, the velocity fluctuations are noticeably reduced and the transport efficiency is enhanced several times. Here, we confirm this result for the case of dichotomous fluctuations which, in contrast to white Poissonian noise, can assume positive as well as negative values, and examine the role of thermal noise in the observed phenomenon. We focus our attention on the impact of bidirectionality of dichotomous fluctuations and reveal that the effect of nonequilibrium-noise-enhanced efficiency is still detectable. This result may explain transport phenomena occurring in strongly fluctuating environments of both physical and biological origin. Our predictions can be corroborated experimentally by use of a setup that consists of a resistively and capacitively shunted Josephson junction.
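    Dichotomous (random telegraph) noise is a two-state continuous-time Markov process; on a simulation grid it can be generated by switching between the two levels with probability rate·dt per step. A minimal sketch with illustrative levels and switching rate (a symmetric, unbiased case, simpler than the biased noise studied in the paper):

```python
import numpy as np

def telegraph_noise(n_steps, dt, rate, lo=-1.0, hi=1.0, seed=0):
    """Sample a two-state continuous-time Markov process on a time grid:
    the state flips between `lo` and `hi` with probability rate*dt per step
    (valid approximation for rate*dt << 1)."""
    rng = np.random.default_rng(seed)
    eta = np.empty(n_steps)
    state = hi
    for i in range(n_steps):
        if rng.random() < rate * dt:
            state = lo if state == hi else hi
        eta[i] = state
    return eta

eta = telegraph_noise(n_steps=100_000, dt=0.01, rate=0.5)
print(sorted(float(v) for v in np.unique(eta)))  # [-1.0, 1.0]: bidirectional
print(abs(eta.mean()) < 0.2)                     # symmetric case -> mean near zero
```

    In a transport simulation, eta would enter the Langevin equation for the particle as the biasing force alongside the periodic potential and thermal noise.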

  6. Balancing related methods for minimal realization of periodic systems

    OpenAIRE

    Varga, A.

    1999-01-01

    We propose balancing related numerically reliable methods to compute minimal realizations of linear periodic systems with time-varying dimensions. The first method belongs to the family of square-root methods with guaranteed enhanced computational accuracy and can be used to compute balanced minimal order realizations. An alternative balancing-free square-root method has the advantage of a potentially better numerical accuracy in case of poorly scaled original systems. The key numerical co...
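    For the time-invariant special case, the square-root method referred to above works from Cholesky factors of the Gramians and an SVD; the resulting Hankel singular values expose near-nonminimal states. A numpy-only sketch under illustrative system matrices (the paper's periodic, time-varying machinery generalizes this; the matrices and the Kronecker-based Lyapunov solver below are for illustration only):

```python
import numpy as np

def lyap(A, W):
    """Solve A P + P A^T + W = 0 via the Kronecker-product linear system."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -W.flatten('F')).reshape(n, n, order='F')

# Illustrative stable LTI system (not from the paper)
A = np.array([[-1.0, 0.0], [0.0, -5.0]])
B = np.array([[1.0], [0.1]])
C = np.array([[1.0, 0.1]])

P = lyap(A, B @ B.T)        # controllability Gramian
Q = lyap(A.T, C.T @ C)      # observability Gramian

R = np.linalg.cholesky(P)   # square-root factor, P = R R^T
s = np.linalg.svd(R.T @ Q @ R, compute_uv=False)
hsv = np.sqrt(s)            # Hankel singular values, largest first
print(np.round(hsv, 4))     # tiny values flag states to truncate
```

    Truncating the states associated with negligible Hankel singular values yields a (near-)minimal balanced realization; the balancing-free variant mentioned in the abstract avoids forming the ill-conditioned balancing transformation explicitly.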

  7. A case-study of landfill minimization and material recovery via waste co-gasification in a new waste management scheme.

    Science.gov (United States)

    Tanigaki, Nobuhiro; Ishida, Yoshihiro; Osada, Morihiro

    2015-03-01

    This study evaluates municipal solid waste co-gasification technology and a new solid waste management scheme, which can minimize final landfill amounts and maximize material recycled from waste. This new scheme is considered for a region where bottom ash and incombustibles are landfilled or not allowed to be recycled due to their toxic heavy metal concentration. Waste is processed with incombustible residues and an incineration bottom ash discharged from existing conventional incinerators, using a gasification and melting technology (the Direct Melting System). The inert materials contained in municipal solid waste, incombustibles and bottom ash are recycled as slag and metal in this process, alongside energy recovery. Based on this new waste management scheme with a co-gasification system, a case study of municipal solid waste co-gasification was evaluated and compared with other technical solutions, such as conventional incineration and incineration with an ash melting facility, under certain boundary conditions. From a technical point of view, co-gasification produced high quality slag with few harmful heavy metals, which was recycled completely without requiring any further post-treatment such as aging. As a consequence, the co-gasification system had an economic advantage over other systems because of its material recovery and minimization of the final landfill amount. Sensitivity analyses of landfill cost, power price and inert materials in waste were also conducted. The higher the landfill costs, the greater the advantage the co-gasification system has. The co-gasification was beneficial at landfill costs of 80 Euro per ton or more. Higher power prices led to lower operation costs in each case. The inert contents in processed waste had a significant influence on the operating cost. These results indicate that co-gasification of bottom ash and incombustibles with municipal solid waste contributes to minimizing the final landfill amount and has

  8. Modelling the protocol stack in NCS with deterministic and stochastic petri net

    Science.gov (United States)

    Hui, Chen; Chunjie, Zhou; Weifeng, Zhu

    2011-06-01

    Protocol stack is the basis of the networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and system performance. Nowadays, field testing is unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been a common practice. Nevertheless, such a layered modelling analysis framework of the protocol stack leads to the lack of global optimisation for protocol reconfiguration. In this article, we propose a general modelling analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraint, the task interrelation, the processor and the bus sub-models from upper and lower layers (application, data link and physical layer). Cross-layer design can help to overcome the inadequacy of global optimisation based on information sharing between protocol layers. To illustrate the framework, we take controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri-net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.

  9. Deterministic Construction of Plasmonic Heterostructures in Well-Organized Arrays for Nanophotonic Materials.

    Science.gov (United States)

    Liu, Xiaoying; Biswas, Sushmita; Jarrett, Jeremy W; Poutrina, Ekaterina; Urbas, Augustine; Knappenberger, Kenneth L; Vaia, Richard A; Nealey, Paul F

    2015-12-02

    Plasmonic heterostructures are deterministically constructed in organized arrays through chemical pattern directed assembly, a combination of top-down lithography and bottom-up assembly, and by the sequential immobilization of gold nanoparticles of three different sizes onto chemically patterned surfaces using tailored interaction potentials. These spatially addressable plasmonic chain nanostructures demonstrate localization of linear and nonlinear optical fields as well as nonlinear circular dichroism. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Theory and application of deterministic multidimensional pointwise energy lattice physics methods

    International Nuclear Information System (INIS)

    Zerkle, M.L.

    1999-01-01

    The theory and application of deterministic, multidimensional, pointwise energy lattice physics methods are discussed. These methods may be used to solve the neutron transport equation in multidimensional geometries using near-continuous energy detail to calculate equivalent few-group diffusion theory constants that rigorously account for spatial and spectral self-shielding effects. A dual energy resolution slowing down algorithm is described which reduces the computer memory and disk storage requirements for the slowing down calculation. Results are presented for a 2D BWR pin cell depletion benchmark problem

  11. Deterministic and Storable Single-Photon Source Based on a Quantum Memory

    International Nuclear Information System (INIS)

    Chen Shuai; Chen, Y.-A.; Strassel, Thorsten; Zhao Bo; Yuan Zhensheng; Pan Jianwei; Schmiedmayer, Joerg

    2006-01-01

    A single-photon source is realized with a cold atomic ensemble (⁸⁷Rb atoms). A single excitation, written in an atomic quantum memory by Raman scattering of a laser pulse, is retrieved deterministically as a single photon at a predetermined time. It is shown that the production rate of single photons can be enhanced considerably by a feedback circuit while the single-photon quality is conserved. Such a single-photon source is well suited for future large-scale realization of quantum communication and linear optical quantum computation

  12. Comparative analysis among deterministic and stochastic collision damage models for oil tanker and bulk carrier reliability

    Directory of Open Access Journals (Sweden)

    A. Campanile

    2018-01-01

    Full Text Available The influence of collision damage models on oil tanker and bulk carrier reliability is investigated by considering the IACS deterministic model against GOALDS/IMO database statistics for collision events, which substantiate the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of the incremental-iterative method, to account for neutral axis rotation and equilibrium of the horizontal bending moment, due to cross-section asymmetry after collision events. Reliability analysis is performed to investigate the influence of collision penetration depth and height statistical properties on hull girder sagging/hogging failure probabilities. Besides, the influence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.
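    The Monte Carlo step described above (random damage dimensions propagated to a residual-strength estimate and a failure probability) can be sketched generically. The strength-reduction formula, the distributions, and every number below are placeholders for illustration, not the IACS/GOALDS statistics or the incremental-iterative method of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000

# Random damage penetration depth and height (fractions of section dimensions;
# placeholder uniform distributions)
depth = rng.uniform(0.0, 0.4, n_samples)
height = rng.uniform(0.1, 0.6, n_samples)

# Placeholder residual-strength model: strength index drops with damaged area
strength = 1.0 - 0.8 * depth * height

# Normalized bending demand (placeholder Gaussian)
load = rng.normal(0.7, 0.1, n_samples)

# Monte Carlo estimate of the failure probability P(load > residual strength)
p_fail = float(np.mean(load > strength))
print(0.0 < p_fail < 0.15)  # True
```

    In the paper, each strength sample instead comes from a full incremental-iterative cross-section analysis of the damaged hull, which is what makes the Monte Carlo loop expensive.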

  13. Adoption of waste minimization technology to benefit electroplaters

    Energy Technology Data Exchange (ETDEWEB)

    Ching, E.M.K.; Li, C.P.H.; Yu, C.M.K. [Hong Kong Productivity Council, Kowloon (Hong Kong)

    1996-12-31

    Because of increasingly stringent environmental legislation and enhanced environmental awareness, electroplaters in Hong Kong are paying more heed to protect the environment. To comply with the array of environmental controls, electroplaters can no longer rely solely on the end-of-pipe approach as a means for abating their pollution problems under the particular local industrial environment. The preferred approach is to adopt waste minimization measures that yield both economic and environmental benefits. This paper gives an overview of electroplating activities in Hong Kong, highlights their characteristics, and describes the pollution problems associated with conventional electroplating operations. The constraints of using pollution control measures to achieve regulatory compliance are also discussed. Examples and case studies are given on some low-cost waste minimization techniques readily available to electroplaters, including dragout minimization and water conservation techniques. Recommendations are given as to how electroplaters can adopt and exercise waste minimization techniques in their operations. 1 tab.

  14. Complications of Minimally Invasive, Tubular Access Surgery for Cervical, Thoracic, and Lumbar Surgery

    Directory of Open Access Journals (Sweden)

    Donald A. Ross

    2014-01-01

    Full Text Available The object of the study was to review the author's large series of minimally invasive spine surgeries for complication rates. The author reviewed a personal operative database for minimal access spine surgeries done through nonexpandable tubular retractors for extradural, nonfusion procedures. Consecutive cases (n=1231) were reviewed for complications. There were no wound infections. Durotomy occurred in 33 cases (2.7% overall or 3.4% of lumbar cases). There were no external or symptomatic internal cerebrospinal fluid leaks or pseudomeningoceles requiring additional treatment. The only motor injuries were 3 C5 root palsies, 2 of which resolved. Minimally invasive spine surgery performed through tubular retractors can result in a low wound infection rate when compared to open surgery. Durotomy is no more common than in open procedures and does not often result in the need for secondary procedures. New neurologic deficits are uncommon, with most observed at the C5 root. Minimally invasive spine surgery, even without benefits such as less pain or shorter hospital stays, can result in considerably lower complication rates than open surgery.
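    The reported rates follow directly from the counts given; a quick arithmetic check (the lumbar case count is inferred, assuming all 33 durotomies occurred in lumbar procedures, as the abstract's phrasing suggests):

```python
# Quick check of the reported complication rates
total_cases = 1231
durotomies = 33

overall_rate = durotomies / total_cases
print(f"{overall_rate:.1%}")   # 2.7%, matching the reported overall rate

# The 3.4% lumbar figure implies roughly 33 / 0.034 lumbar cases
implied_lumbar = durotomies / 0.034
print(round(implied_lumbar))   # 971
```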

  15. Universal and Deterministic Manipulation of the Quantum State of Harmonic Oscillators: A Route to Unitary Gates for Fock State Qubits

    International Nuclear Information System (INIS)

    Santos, Marcelo Franca

    2005-01-01

    We present a simple quantum circuit that allows for the universal and deterministic manipulation of the quantum state of confined harmonic oscillators. The scheme is based on the selective interactions of the referred oscillator with an auxiliary three-level system and a classical external driving source, and enables any unitary operations on Fock states, two by two. One circuit is equivalent to a single qubit unitary logical gate on Fock states qubits. Sequences of similar protocols allow for complete, deterministic, and state-independent manipulation of the harmonic oscillator quantum state

  16. Chiral lattice fermions, minimal doubling, and the axial anomaly

    International Nuclear Information System (INIS)

    Tiburzi, B. C.

    2010-01-01

    Exact chiral symmetry at finite lattice spacing would preclude the axial anomaly. In order to describe a continuum quantum field theory of Dirac fermions, lattice actions with purported exact chiral symmetry must break the flavor-singlet axial symmetry. We demonstrate that this is indeed the case by using a minimally doubled fermion action. For simplicity, we consider the Abelian axial anomaly in two dimensions. At finite lattice spacing and with gauge interactions, the axial anomaly arises from nonconservation of the flavor-singlet current. Similar nonconservation also leads to the axial anomaly in the case of the naive lattice action. For minimally doubled actions, however, fine-tuning of the action and axial current is necessary to arrive at the anomaly. Conservation of the flavor nonsinglet vector current additionally requires the current to be fine-tuned. Finally, we determine that the chiral projection of a minimally doubled fermion action can be used to arrive at a lattice theory with an undoubled Dirac fermion possessing the correct anomaly in the continuum limit.

  17. Robust statistics for deterministic and stochastic gravitational waves in non-Gaussian noise. II. Bayesian analyses

    International Nuclear Information System (INIS)

    Allen, Bruce; Creighton, Jolien D.E.; Flanagan, Eanna E.; Romano, Joseph D.

    2003-01-01

    In a previous paper (paper I), we derived a set of near-optimal signal detection techniques for gravitational wave detectors whose noise probability distributions contain non-Gaussian tails. The methods modify standard methods by truncating or clipping sample values which lie in those non-Gaussian tails. The methods were derived, in the frequentist framework, by minimizing false alarm probabilities at fixed false detection probability in the limit of weak signals. For stochastic signals, the resulting statistic consisted of a sum of an autocorrelation term and a cross-correlation term; it was necessary to discard 'by hand' the autocorrelation term in order to arrive at the correct, generalized cross-correlation statistic. In the present paper, we present an alternative derivation of the same signal detection techniques from within the Bayesian framework. We compute, for both deterministic and stochastic signals, the probability that a signal is present in the data, in the limit where the signal-to-noise ratio squared per frequency bin is small, where the signal is nevertheless strong enough to be detected (integrated signal-to-noise ratio large compared to 1), and where the total probability in the non-Gaussian tail part of the noise distribution is small. We show that, for each model considered, the resulting probability is to a good approximation a monotonic function of the detection statistic derived in paper I. Moreover, for stochastic signals, the new Bayesian derivation automatically eliminates the problematic autocorrelation term
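    The truncation/clipping idea from paper I can be illustrated with a toy two-detector cross-correlation: samples lying in the heavy non-Gaussian tails are clipped before the statistic is formed, so rare glitches cannot dominate it. All amplitudes, glitch rates, and thresholds below are illustrative, not the paper's derived estimator.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

signal = 0.2 * rng.standard_normal(n)        # common stochastic signal
d1 = signal + rng.standard_normal(n)         # detector 1: Gaussian noise...
d1 += (rng.random(n) < 0.001) * 50.0 * rng.standard_normal(n)  # ...plus rare glitches
d2 = signal + rng.standard_normal(n)         # detector 2: Gaussian noise only

def clipped(x, k=3.0):
    """Clip samples lying in the tails beyond +/- k standard deviations."""
    s = np.std(x)
    return np.clip(x, -k * s, k * s)

raw_stat = np.mean(d1 * d2)                       # plain cross-correlation:
                                                  # its variance is inflated by glitches
robust_stat = np.mean(clipped(d1) * clipped(d2))  # clipped cross-correlation
print(robust_stat > 0.0)                          # True: the common signal survives clipping
```

    This mirrors the frequentist construction of paper I; the present paper's contribution is showing that the Bayesian detection probability is, to good approximation, a monotonic function of such a statistic.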

  18. SU-C-BRC-03: Development of a Novel Strategy for On-Demand Monte Carlo and Deterministic Dose Calculation Treatment Planning and Optimization for External Beam Photon and Particle Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Y M; Bush, K; Han, B; Xing, L [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2016-06-15

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high

  19. SU-C-BRC-03: Development of a Novel Strategy for On-Demand Monte Carlo and Deterministic Dose Calculation Treatment Planning and Optimization for External Beam Photon and Particle Therapy

    International Nuclear Information System (INIS)

    Yang, Y M; Bush, K; Han, B; Xing, L

    2016-01-01

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high

  20. A Deterministic Approach to Earthquake Prediction

    Directory of Open Access Journals (Sweden)

    Vittorio Sgrigna

    2012-01-01

    Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at placing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship (with explained physical processes) obtained by looking for a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a new strong theoretical scientific effort is necessary to try to understand the physics of the earthquake.

  1. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    Science.gov (United States)

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
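
The ODE-versus-CME comparison can be illustrated with a minimal birth-death model of autoregulated expression. The rates below are invented for illustration; the deterministic fixed point of the rate equation is compared against the stationary mean from a Gillespie (SSA) simulation of the same scheme.

```python
import random

random.seed(1)

# Birth-death model of autoregulated expression (illustrative rates):
# production k0 + k1*n/(K+n) (positive feedback), degradation g*n.
K0, K1, K, G = 4.0, 16.0, 20.0, 1.0

def birth_rate(n):
    return K0 + K1 * n / (K + n)

def ode_fixed_point(n=1.0, dt=0.01, steps=20_000):
    # forward-Euler relaxation of dn/dt = birth(n) - G*n
    for _ in range(steps):
        n += dt * (birth_rate(n) - G * n)
    return n

def ssa_mean(t_end=5_000.0):
    # Gillespie simulation; time-averaged copy number as stationary mean
    n, t, acc = 0, 0.0, 0.0
    while t < t_end:
        b, d = birth_rate(n), G * n
        tau = random.expovariate(b + d)
        acc += n * min(tau, t_end - t)
        t += tau
        if random.random() < b / (b + d):
            n += 1
        else:
            n -= 1
    return acc / t_end

print(ode_fixed_point(), ssa_mean())   # both near sqrt(80) ≈ 8.94
```

For these monostable parameters the SSA mean sits close to the deterministic fixed point; pushing the feedback harder (larger bursts, stronger nonlinearity) is what opens the gaps the abstract describes.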

  2. Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access

    KAUST Repository

    Zafar, Ammar; Alouini, Mohamed-Slim; Chen, Yunfei; Radaydeh, Redha M.

    2012-01-01

    outperforms the schemes with direct link only and uniform power allocation (UPA) in terms of minimizing the SER for all three cases of different constraints. Numerical results also show that the individual constraints only case provides the best performance at large signal-to-noise-ratio (SNR).

  3. Statistics of energy levels and zero temperature dynamics for deterministic spin models with glassy behaviour

    NARCIS (Netherlands)

    Degli Esposti, M.; Giardinà, C.; Graffi, S.; Isola, S.

    2001-01-01

    We consider the zero-temperature dynamics for the infinite-range, non-translation-invariant one-dimensional spin model introduced by Marinari, Parisi and Ritort to generate glassy behaviour out of a deterministic interaction. It is argued that there can be a large number of metastable (i.e.,

  4. Deterministic sensitivity analysis of two-phase flow systems: forward and adjoint methods. Final report

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1984-07-01

    This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single-phase and two-phase flow occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations.

  5. Development and application of a new deterministic method for calculating computer model result uncertainties

    International Nuclear Information System (INIS)

    Maerker, R.E.; Worley, B.A.

    1989-01-01

    Interest in research into the field of uncertainty analysis has recently been stimulated by a need, in high-level waste repository design assessment, for uncertainty information in the form of response complementary cumulative distribution functions (CCDFs) to show compliance with regulatory requirements. The solution to this problem must rely on the analysis of computer code models, which, however, employ parameters that can have large uncertainties. The motivation for the research presented in this paper is a search for a deterministic uncertainty analysis approach that could serve as an improvement over methods that make exclusive use of statistical techniques. A deterministic uncertainty analysis (DUA) approach based on the use of first-derivative information is the method studied here. The method has been applied to a high-level nuclear waste repository problem involving the codes ORIGEN2, SAS, and BRINETEMP used in series, and the resulting CDF of a BRINETEMP result of interest is compared with that obtained through a completely statistical analysis.
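
A minimal sketch of the derivative-based DUA idea, with an invented three-parameter response standing in for the ORIGEN2/SAS/BRINETEMP chain: first-derivative information (here via finite differences, standing in for adjoint-computed derivatives) propagates parameter standard deviations into a response spread, which can then be checked against brute-force statistical sampling.

```python
import math
import random

random.seed(0)

# Deterministic uncertainty analysis (DUA) sketch.  The response
# function and the parameter uncertainties below are invented.
P0 = [2.0, 0.5, 1.5]                 # nominal parameter values
SIG = [0.02, 0.01, 0.03]             # parameter standard deviations

def response(p):
    a, b, c = p
    return a * math.exp(-b) * math.sqrt(c)

def dua_sigma(h=1e-6):
    # first-order (sandwich) propagation: var(R) ≈ sum_i (dR/dp_i * s_i)^2
    var = 0.0
    for i, s in enumerate(SIG):
        hi = [v + (h if j == i else 0.0) for j, v in enumerate(P0)]
        lo = [v - (h if j == i else 0.0) for j, v in enumerate(P0)]
        dRdp = (response(hi) - response(lo)) / (2 * h)
        var += (dRdp * s) ** 2
    return math.sqrt(var)

def mc_sigma(n=50_000):
    # brute-force statistical reference (independent Gaussian parameters)
    vals = [response([random.gauss(v, s) for v, s in zip(P0, SIG)])
            for _ in range(n)]
    m = sum(vals) / n
    return math.sqrt(sum((v - m) ** 2 for v in vals) / (n - 1))

print(dua_sigma(), mc_sigma())   # agree closely for these small uncertainties
```

The deterministic route costs one derivative per parameter instead of tens of thousands of code runs, which is exactly the trade the abstract argues for; the agreement degrades as the response becomes strongly nonlinear over the parameter spread.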

  6. Experimental demonstration of deterministic one-way quantum computing on a NMR quantum computer

    OpenAIRE

    Ju, Chenyong; Zhu, Jing; Peng, Xinhua; Chong, Bo; Zhou, Xianyi; Du, Jiangfeng

    2008-01-01

    One-way quantum computing is an important and novel approach to quantum computation. By exploiting the existing particle-particle interactions, we report the first experimental realization of the complete process of the deterministic one-way quantum Deutsch-Jozsa algorithm in NMR, including graph state preparation, single-qubit measurements and feed-forward corrections. The findings in our experiment may shed light on future scalable one-way quantum computation.

  7. Electric field control of deterministic current-induced magnetization switching in a hybrid ferromagnetic/ferroelectric structure

    Science.gov (United States)

    Cai, Kaiming; Yang, Meiyin; Ju, Hailang; Wang, Sumei; Ji, Yang; Li, Baohe; Edmonds, Kevin William; Sheng, Yu; Zhang, Bao; Zhang, Nan; Liu, Shuai; Zheng, Houzhi; Wang, Kaiyou

    2017-07-01

    All-electrical and programmable manipulations of ferromagnetic bits are highly pursued for the aim of high integration and low energy consumption in modern information technology. Methods based on spin-orbit torque switching in heavy metal/ferromagnet structures were first proposed with an assisting magnetic field, and are heading toward deterministic switching without any external magnetic field. Here we demonstrate that an in-plane effective magnetic field can be induced by an electric field without breaking the symmetry of the thin-film structure, and realize deterministic magnetization switching in a hybrid ferromagnetic/ferroelectric structure with Pt/Co/Ni/Co/Pt layers on a PMN-PT substrate. The effective magnetic field can be reversed by changing the direction of the electric field applied to the PMN-PT substrate, which fully replaces the controllability function of the external magnetic field. The electric field is found to generate an additional spin-orbit torque on the CoNiCo magnets, which is confirmed by macrospin calculations and micromagnetic simulations.

  8. A deterministic and stochastic model for the system dynamics of tumor-immune responses to chemotherapy

    Science.gov (United States)

    Liu, Xiangdong; Li, Qingze; Pan, Jianxin

    2018-06-01

    Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease condition for months to years, which means the population of tumor cells remains nearly unchanged for quite a long time after fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed in this paper to characterize the dynamical change of tumor cells and immune cells. The basic dynamical properties, such as boundedness and the existence and stability of equilibrium points, are investigated in the deterministic model. Extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competition, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirm the obtained theoretical results.
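
A deterministic caricature of the tumor-immune-chemotherapy interaction makes the equilibrium analysis concrete. The equations and every parameter below are illustrative stand-ins, not those of the paper: logistic tumor growth with immune and chemotherapy kill terms, and effector cells with constant influx, tumor-driven recruitment, and decay.

```python
# Minimal deterministic caricature of tumor-immune dynamics under a
# constant chemotherapy kill rate; all values are invented.
R, KCAP, A = 0.4, 1e6, 1e-6     # tumor growth, capacity, immune kill
S, D, B = 50.0, 0.1, 1e-7       # effector influx, decay, recruitment
C = 0.05                        # chemotherapy kill rate

def step(t_cells, e_cells, dt=0.01):
    # one forward-Euler step of the coupled ODEs
    dT = (R * t_cells * (1 - t_cells / KCAP)
          - A * t_cells * e_cells - C * t_cells)
    dE = S + B * t_cells * e_cells - D * e_cells
    return t_cells + dt * dT, e_cells + dt * dE

T, E = 1e4, 500.0
for _ in range(200_000):        # integrate to t = 2000
    T, E = step(T, E)
print(T, E)                     # settles near a coexistence equilibrium
```

The long-time state corresponds to the "nearly unchanged tumor population" regime the abstract describes; the stochastic (SDE/CTMC) extensions would add fluctuations around it and a nonzero extinction probability that this deterministic sketch cannot express.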

  9. Location deterministic biosensing from quantum-dot-nanowire assemblies

    International Nuclear Information System (INIS)

    Liu, Chao; Kim, Kwanoh; Fan, D. L.

    2014-01-01

    Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location-deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating-current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes onto the QDs on the tips of the nanowires before detection, offering much enhanced efficiency and sensitivity in addition to position-predictable detection. This research could lead to advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.

  10. Minimal string theories and integrable hierarchies

    Science.gov (United States)

    Iyer, Ramakrishnan

    Well-defined, non-perturbative formulations of the physics of string theories in specific minimal or superminimal model backgrounds can be obtained by solving matrix models in the double scaling limit. They provide us with the first examples of completely solvable string theories. Despite being relatively simple compared to higher dimensional critical string theories, they furnish non-perturbative descriptions of interesting physical phenomena such as geometrical transitions between D-branes and fluxes, tachyon condensation and holography. The physics of these theories in the minimal model backgrounds is succinctly encoded in a non-linear differential equation known as the string equation, along with an associated hierarchy of integrable partial differential equations (PDEs). The bosonic string in (2,2m-1) conformal minimal model backgrounds and the type 0A string in (2,4m) superconformal minimal model backgrounds have the Korteweg-de Vries system, while type 0B in (2,4m) backgrounds has the Zakharov-Shabat system. The integrable PDE hierarchy governs flows between backgrounds with different m. In this thesis, we explore this interesting connection between minimal string theories and integrable hierarchies further. We uncover the remarkable role that an infinite hierarchy of non-linear differential equations plays in organizing and connecting certain minimal string theories non-perturbatively. We are able to embed the type 0A and 0B (A,A) minimal string theories into this single framework. The string theories arise as special limits of a rich system of equations underpinned by an integrable system known as the dispersive water wave hierarchy. We find that there are several other string-like limits of the system, and conjecture that some of them are type IIA and IIB (A,D) minimal string backgrounds. We explain how these and several other string-like special points arise and are connected. In some cases, the framework endows the theories with a non

  11. Clinic research on the treatment for humeral shaft fracture with minimal invasive plate osteosynthesis: a retrospective study of 128 cases.

    Science.gov (United States)

    Chen, H; Hu, X; Yang, G; Xiang, M

    2017-04-01

    Minimally invasive plate osteosynthesis (MIPO) is one of the most important techniques in the treatment of humeral shaft fractures. This study was performed to evaluate the efficacy of the MIPO technique in the treatment of humeral shaft fractures. We retrospectively evaluated 128 cases of humeral shaft fracture treated with the MIPO technique from March 2005 to August 2008. All patients were followed up with routine radiological imaging and clinical examinations. The Constant-Murley score and the HSS elbow joint score were used to evaluate the treatment outcome. The average duration of surgery was 60 min (range 40-95 min), without blood transfusion. All fractures healed without infection. All cases recovered a normal carrying angle except four cases with 10°-15° cubitus varus. After an average follow-up of 23 (13-38) months, satisfactory function was achieved according to the Constant-Murley score and the HSS elbow joint score. The Constant-Murley score was 80 on average (range 68-91). According to the HSS elbow joint score, there were 123 cases with an excellent clinical outcome and five cases with an effective outcome. MIPO appears to be a safe and effective technique for managing humeral shaft fractures.

  12. Minimally invasive spine surgery: Hurdles to be crossed

    Directory of Open Access Journals (Sweden)

    Mahesh Bijjawara

    2014-01-01

    Full Text Available MISS as a concept is noble, and all surgeons need to address and minimize surgical morbidity for better results. However, we need to be cautious and not fall prey to accepting that minimally invasive spine surgery can be done only when certain metal access systems are used. Minimally invasive spine surgery (MISS) has come a long way since the description of endoscopic discectomy in 1997 and minimally invasive TLIF (mTLIF) in 2003. Today there is credible evidence (though not level-I) that MISS has comparable results to open spine surgery, with the advantage of early postoperative recovery and decreased blood loss and infection rates. However, apart from decreasing muscle trauma and muscle dissection during multilevel open spinal instrumentation, the existing MISS technologies have contributed little to the other morbidity parameters, such as operative time, blood loss, access to decompression, and atraumatic neural tissue handling. Since all these parameters contribute more than posterior muscle trauma to the overall surgical morbidity, we as surgeons need to introspect before we accept the concept of minimally invasive spine surgery being reduced to surgeries performed with a few tubular retractors. A spine surgeon needs to constantly improve his skills and techniques so that he can minimize blood loss, minimize traumatic neural tissue handling and minimize operative time without compromising on the surgical goals. These measures actually contribute far more to decreasing morbidity than approach-related muscle damage alone. Minimally invasive spine surgery, though it has come a long way, needs to provide technical solutions that minimize all the morbidity parameters involved in spine surgery before it can replace most open spine surgeries, as in the case of laparoscopic or arthroscopic surgery.

  13. Foundation plate on the elastic half-space, deterministic and probabilistic approach

    Directory of Open Access Journals (Sweden)

    Tvrdá Katarína

    2017-01-01

    Full Text Available Interaction between the foundation plate and the subgrade can be described by different mathematical-physical models. An elastic foundation can be modelled by different types of models, e.g. a one-parametric model, a two-parametric model, or a comprehensive model; here the Boussinesq model (elastic half-space) has been used. The article deals with deterministic and probabilistic analysis of the deflection of a foundation plate on the elastic half-space. Contact between the foundation plate and the subsoil was modelled using node-to-node contact elements. At the end the obtained results are presented.

  14. Nodal deterministic simulation for problems of neutron shielding in multigroup formulation

    International Nuclear Information System (INIS)

    Baptista, Josue Costa; Heringer, Juan Diego dos Santos; Santos, Luiz Fernando Trindade; Alves Filho, Hermes

    2013-01-01

    In this paper, we propose the use of computational tools implementing the SGF (Spectral Green's Function) numerical methods, making use of a deterministic model of neutral particle transport, in the study and analysis of a simplified and well-known nuclear engineering problem, known in the literature as a neutron shielding problem, considering a model with two energy groups. These simulations are performed on the MATLAB platform, version 7.0, and are presented and developed with the help of a computer simulator providing a user-friendly application.
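
The two-energy-group shielding setup has a closed form in the simplest beam-attenuation caricature (not the SGF nodal method itself; the cross sections below are invented): the fast group is removed exponentially and feeds the thermal group through downscatter.

```python
import math

# Two-group slab attenuation sketch for a shielding-type problem.
# Fast flux:    dphi1/dx = -SR1*phi1
# Thermal flux: dphi2/dx = -SR2*phi2 + SS12*phi1
# with phi1(0) = 1, phi2(0) = 0.  Cross sections are illustrative.
SR1, SR2, SS12 = 0.3, 0.9, 0.1     # removal / removal / downscatter [1/cm]

def fluxes(x):
    # analytic solution of the coupled first-order ODEs above
    phi1 = math.exp(-SR1 * x)
    phi2 = SS12 / (SR2 - SR1) * (math.exp(-SR1 * x) - math.exp(-SR2 * x))
    return phi1, phi2

for x in (0.0, 5.0, 10.0):
    print(x, fluxes(x))
```

The thermal flux starts at zero, builds up from downscattered fast neutrons, and then decays: the qualitative profile a two-group shielding calculation has to reproduce, and a convenient analytic check for a nodal solver.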

  15. Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Juan-Mariano de Goyeneche

    2009-05-01

    Full Text Available Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios.

  16. Application of the neo-deterministic seismic microzonation procedure in Bulgaria and validation of the seismic input against Eurocode 8

    International Nuclear Information System (INIS)

    Paskaleva, I.; Kouteva, M.; Vaccari, F.; Panza, G.F.

    2008-03-01

    The earthquake record and the Code for design and construction in seismic regions in Bulgaria have shown that the territory of the Republic of Bulgaria is exposed to a high seismic risk due to local shallow and regional strong intermediate-depth seismic sources. The available strong motion database is quite limited, and therefore not at all representative of the real hazard. The application of the neo-deterministic seismic hazard assessment procedure to two main Bulgarian cities has supplied a significant database of synthetic strong motions for the target sites, applicable for earthquake engineering purposes. The main advantage of the applied deterministic procedure is the possibility of taking simultaneously and correctly into consideration the contributions to the earthquake ground motion at the target sites of the seismic source and of seismic wave propagation in the crossed media. We discuss in this study the results of some recent applications of the neo-deterministic seismic microzonation procedure to the cities of Sofia and Russe. The validation of the theoretically modeled seismic input against Eurocode 8 and the few available records at these sites is discussed. (author)

  17. Hazardous waste transportation risk assessment: Benefits of a combined deterministic and probabilistic Monte Carlo approach in expressing risk uncertainty

    International Nuclear Information System (INIS)

    Policastro, A.J.; Lazaro, M.A.; Cowen, M.A.; Hartmann, H.M.; Dunn, W.E.; Brown, D.F.

    1995-01-01

    This paper presents a combined deterministic and probabilistic methodology for modeling hazardous waste transportation risk and expressing the uncertainty in that risk. Both the deterministic and probabilistic methodologies are aimed at providing tools useful in the evaluation of alternative management scenarios for US Department of Energy (DOE) hazardous waste treatment, storage, and disposal (TSD). The probabilistic methodology can be used to provide perspective on and quantify uncertainties in deterministic predictions. The methodology developed has been applied to 63 DOE shipments made in fiscal year 1992, which contained poison-by-inhalation chemicals that represent an inhalation risk to the public. Models have been applied to simulate shipment routes, truck accident rates, chemical spill probabilities, spill/release rates, dispersion, population exposure, and health consequences. The simulation presented in this paper is specific to trucks traveling from DOE sites to their commercial TSD facilities, but the methodology is more general. Health consequences are presented as the number of people with potentially life-threatening health effects. Probabilistic distributions were developed (based on actual item data) for accident release amounts, time of day and season of the accident, and meteorological conditions.
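
The deterministic-plus-probabilistic pairing can be sketched by pushing invented placeholder distributions through a toy consequence model (release × dispersion factor × exposed population): the deterministic point estimate is a single number, while Monte Carlo sampling yields percentiles that express its uncertainty.

```python
import math
import random

random.seed(7)

# Point-estimate vs probabilistic risk sketch.  The consequence model
# and all distributions below are invented placeholders for the
# paper's empirical accident and dispersion data.
def consequence(release, chi_q, population):
    return release * chi_q * population

DET = consequence(100.0, 1e-4, 5e4)     # deterministic point estimate

def mc_percentiles(n=20_000):
    # sample each input, sort the consequences, read off percentiles
    vals = sorted(
        consequence(random.lognormvariate(math.log(100.0), 0.5),
                    random.lognormvariate(math.log(1e-4), 0.8),
                    random.uniform(1e4, 9e4))
        for _ in range(n))
    return vals[n // 2], vals[int(0.95 * n)]   # median, 95th percentile

median, p95 = mc_percentiles()
print(DET, median, p95)
```

The point estimate lands near the Monte Carlo median, but the 95th percentile is several times larger, which is precisely the uncertainty perspective the probabilistic layer adds to the deterministic prediction.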

  18. A deterministic model predicts the properties of stochastic calcium oscillations in airway smooth muscle cells.

    Science.gov (United States)

    Cao, Pengxing; Tan, Xiahui; Donovan, Graham; Sanderson, Michael J; Sneyd, James

    2014-08-01

    The inositol trisphosphate receptor (IP3R) is one of the most important cellular components responsible for oscillations in the cytoplasmic calcium concentration. Over the past decade, two major questions about the IP3R have arisen. Firstly, how best should the IP3R be modeled? In other words, what fundamental properties of the IP3R allow it to perform its function, and what are their quantitative properties? Secondly, although calcium oscillations are caused by the stochastic opening and closing of small numbers of IP3Rs, is it possible for a deterministic model to be a reliable predictor of calcium behavior? Here, we answer these two questions, using airway smooth muscle cells (ASMC) as a specific example. Firstly, we show that periodic calcium waves in ASMC, as well as the statistics of calcium puffs in other cell types, can be quantitatively reproduced by a two-state model of the IP3R, and thus the behavior of the IP3R is essentially determined by its modal structure. The structure within each mode is irrelevant for function. Secondly, we show that, although calcium waves in ASMC are generated by a stochastic mechanism, IP3R stochasticity is not essential for a qualitative prediction of how oscillation frequency depends on model parameters, and thus deterministic IP3R models demonstrate the same level of predictive capability as do stochastic models. We conclude that, firstly, calcium dynamics can be accurately modeled using simplified IP3R models, and, secondly, to obtain qualitative predictions of how oscillation frequency depends on parameters it is sufficient to use a deterministic model.
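
The claim that a deterministic description can track a stochastic channel model is easy to see in the simplest two-state caricature (not the paper's modal IP3R model; the rates below are invented): the ODE for the open probability and a single-channel simulation agree on the steady state.

```python
import random

random.seed(3)

# Two-state channel caricature: closed -> open at rate KO,
# open -> closed at rate KC.  Rates are illustrative.
KO, KC = 2.0, 3.0                  # [1/s]

def ode_open_prob(p=0.0, dt=1e-3, steps=10_000):
    # deterministic master-equation ODE: dp/dt = KO*(1-p) - KC*p
    for _ in range(steps):
        p += dt * (KO * (1 - p) - KC * p)
    return p

def stochastic_open_fraction(t_end=2_000.0):
    # single-channel simulation: exponential dwell times in each state
    t, state, open_time = 0.0, 0, 0.0
    while t < t_end:
        rate = KO if state == 0 else KC
        tau = random.expovariate(rate)
        if state == 1:
            open_time += min(tau, t_end - t)
        t += tau
        state = 1 - state
    return open_time / t_end

print(ode_open_prob(), stochastic_open_fraction())   # both near KO/(KO+KC) = 0.4
```

With many channels or long averaging the stochastic fraction converges on the deterministic value, which is the intuition behind the abstract's conclusion that deterministic IP3R models retain the predictive power of stochastic ones for frequency trends.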

  19. Deterministic quantum controlled-PHASE gates based on non-Markovian environments

    Science.gov (United States)

    Zhang, Rui; Chen, Tian; Wang, Xiang-Bin

    2017-12-01

    We study the realization of the quantum controlled-PHASE gate in an atom-cavity system beyond the Markovian approximation. A general description of the dynamics of the atom-cavity system without any approximation is presented. When the spectral density of the reservoir has the Lorentz form, by making use of the memory backflow from the reservoir, we can always construct a deterministic quantum controlled-PHASE gate between a photon and an atom, regardless of whether the atom-cavity coupling is weak or strong. However, the phase shift in the output pulse hinders the implementation of quantum controlled-PHASE gates in sub-Ohmic, Ohmic or super-Ohmic reservoirs.

  20. Surgical case volume in Canadian urology residency: a comparison of trends in open and minimally invasive surgical experience.

    Science.gov (United States)

    Mamut, Adiel E; Afshar, Kourosh; Mickelson, Jennifer J; Macneily, Andrew E

    2011-06-01

    The application of minimally invasive surgery (MIS) has become increasingly common in urology training programs and clinical practice. Our objective was to review surgical case data from all 12 Canadian residency programs to identify trends in resident exposure to MIS and open procedures. Every year, beginning in 2003, an average of 41 postgraduate year 3 to 5 residents reported surgical case data to a secure internet relational database. Data were anonymized and extracted for the period 2003 to 2009 by measuring a set of 11 predefined index cases that could be performed in both an open and MIS fashion. A total of 16,687 index cases were recorded by 198 residents. As a proportion, there was a significant increase in MIS from 12% in 2003 to 2004 to 32% in 2008 to 2009 (P=0.01). A significant decrease in the proportion of index cases performed with an open approach was also observed, from 88% in 2003 to 2004 to 68% in 2008 to 2009 (P=0.01). The majority of these shifts were secondary to the increased application of MIS for nephrectomies of all types (29%-45%), nephroureterectomy (27%-76%), adrenalectomy (15%-71%), and pyeloplasty (17%-54%), while the remaining index cases continued to be performed largely in an open fashion during the study period. MIS constitutes an increasingly significant component of surgical volume in Canadian urology residencies, with a reciprocal decrease in exposure to open surgery. These trends necessitate ongoing evaluation to maintain the integrity of postgraduate urologic training.