Deterministic methods in radiation transport
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS
MICULEAC Melania Elena
2014-06-01
Deterministic methods are quantitative methods that aim to quantify numerically the mechanisms by which factorial and causal relations of influence, and the propagation of effects, arise and are expressed, in situations where the phenomenon can be described by a direct functional cause-effect relation. Functional, deterministic relations are causal relations in which a given value of the characteristic corresponds to a well-defined value of the resulting phenomenon. They can express directly the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research.
Non-deterministic methods for charged particle transport
Besnard, D.C.; Buresi, E.; Hermeline, F.; Wagon, F.
1985-04-01
The coupling of Monte Carlo methods for solving the Fokker-Planck equation (FPE) with inertial confinement fusion (ICF) codes requires them to be economical and to preserve gross conservation properties. Besides, the presence in the FPE of diffusion terms, due to collisions between test particles and the background plasma, challenges standard Monte Carlo techniques when this phenomenon is dominant. We address these problems through the use of a fixed mesh in phase space, which allows us to handle highly variable sources while avoiding any Russian roulette for reducing the size of the sample. Diffusion equations obtained from a splitting of the FPE are also solved on this mesh; any nonlinear diffusion terms of the FPE can be handled in this manner. Another method, also presented here, is to use a direct particle method for solving the full FPE.
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, either of which can lead to significant errors in shielding results. The aim of this work is to investigate how good some deterministic methods are at calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of the Monte Carlo and deterministic methods in day-by-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
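The attenuation-plus-build-up model that this kind of deterministic shielding code implements can be sketched in a few lines. This is a minimal illustration, not MicroShield's actual model: the attenuation coefficient and the linear build-up form B = 1 + k·μt are illustrative assumptions, not data from MicroShield or MCNP.

```python
import math

def point_source_flux(S, mu, t, r, k=1.0):
    """Photon flux [1/cm^2/s] at distance r behind a slab shield.

    S  : source strength [photons/s]
    mu : linear attenuation coefficient of the shield [1/cm] (assumed value)
    t  : slab thickness [cm]
    r  : source-to-detector distance [cm]
    k  : coefficient of an assumed linear build-up factor B = 1 + k*mu*t
    """
    # uncollided flux: inverse-square geometry times exponential attenuation
    uncollided = S * math.exp(-mu * t) / (4.0 * math.pi * r ** 2)
    # build-up factor corrects for scattered photons reaching the detector
    B = 1.0 + k * mu * t
    return B * uncollided

# illustrative case: 1e6 photons/s source, 5 cm of material with mu = 0.2/cm
bare = point_source_flux(1.0e6, 0.2, 0.0, 100.0)
shielded = point_source_flux(1.0e6, 0.2, 5.0, 100.0)
```

The errors the abstract refers to come precisely from the build-up factor B, which in practice is tabulated for idealized geometries and extrapolated outside its range of validity.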
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
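The fission matrix idea underlying the acceleration method can be illustrated deterministically: the converged fission source is the dominant eigenvector of the fission matrix, and k-eff is its dominant eigenvalue, so a cheap power iteration on a small matrix converges where the Monte Carlo source iteration is slow. The 3-region matrix below is hand-made for illustration, not data from any real system.

```python
# F[i][j]: expected fission neutrons born in region i per fission neutron
# starting in region j (illustrative 3-region, strongly coupled system)
F = [[0.9, 0.3, 0.0],
     [0.3, 0.9, 0.3],
     [0.0, 0.3, 0.9]]

def power_iteration(F, iters=200):
    """Dominant eigenpair of F: (k-eff estimate, normalised fission source)."""
    n = len(F)
    s = [1.0 / n] * n                     # flat initial fission source guess
    k = 1.0
    for _ in range(iters):
        t = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(t)                        # eigenvalue estimate (source norm)
        s = [x / k for x in t]            # renormalise the source
    return k, s

k_eff, source = power_iteration(F)
# for this matrix the exact dominant eigenvalue is 0.9 + 0.3*sqrt(2)
```

In the thesis's setting the matrix entries are themselves tallied from Monte Carlo histories (hence noisy), which is why the filtering of statistical noise mentioned in the abstract matters.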
Are deterministic methods suitable for short term reserve planning?
Voorspools, Kris R.; D'haeseleer, William D.
2005-01-01
Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks whether the N-1 reserve is a logical fixed value for minutes reserve. The second investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, resulting in low reliability for reserve margins lower than the largest unit and high reliability above it, is not observed. The second test shows that neither the N-1 reserve nor the percentage reserve method provides a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that probability-based methods are to be preferred over deterministic methods.
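The stochastic criterion the paper uses can be made concrete with a toy capacity model: enumerate the forced-outage states of each generating unit and sum the probabilities of states where available capacity falls short of demand. The unit sizes and forced outage rates below are illustrative assumptions, not the paper's data.

```python
# (capacity in MW, forced outage rate) for a toy three-unit system
units = [(400, 0.05), (400, 0.05), (200, 0.02)]

def loss_of_load_probability(units, demand):
    """Probability that available capacity is below demand, by exhaustive
    enumeration of all on/off unit states (fine for a handful of units)."""
    lolp = 0.0
    n = len(units)
    for mask in range(2 ** n):
        p, cap = 1.0, 0.0
        for i, (c, q) in enumerate(units):
            if mask >> i & 1:          # unit i is forced out
                p *= q
            else:                       # unit i is available
                p *= 1.0 - q
                cap += c
        if cap < demand:
            lolp += p
    return lolp

# An N-1 rule would reserve 400 MW (the largest unit) regardless of demand;
# the probabilistic view instead asks what reliability that actually buys.
risk_at_600 = loss_of_load_probability(units, 600.0)
```

Evaluating this probability across the demand range is exactly the kind of check that reveals the demand-dependence of the deterministic rules reported in the abstract.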
Applicability of deterministic methods in seismic site effects modeling
Cioflan, C.O.; Radulian, M.; Apostol, B.F.; Ciucu, C.
2005-01-01
The up-to-date information on the local geological structure of the Bucharest urban area has been integrated into complex analyses of seismic ground motion simulation using deterministic procedures. The data recorded for the Vrancea intermediate-depth large earthquakes are supplemented with synthetic computations all over the city area. The hybrid method, with a double-couple seismic source approximation and relatively simple regional and local structure models, allows a satisfactory reproduction of the strong motion records in the frequency domain 0.05-1 Hz. The new geological information and a deterministic analytical method, which combines the modal summation technique, applied to model the seismic wave propagation between the seismic source and the studied sites, with the mode coupling approach, used to model the seismic wave propagation through the local sedimentary structure of the target site, make it possible to extend the modelling to higher frequencies of earthquake engineering interest. The results of these studies (synthetic time histories of the ground motion parameters, absolute and relative response spectra, etc.) for the last three strong Vrancea events (August 31, 1986, Mw = 7.1; May 30, 1990, Mw = 6.9; and October 27, 2004, Mw = 6.0) can complete the strong motion database used for microzonation purposes. Implications and integration of the deterministic results into urban planning and disaster management strategies are also discussed. (authors)
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for parameter estimation) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler than and with smaller bias than, the “multiple shooting” method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only one whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
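The estimation problem studied here can be sketched with a crude stand-in for the paper's piece-wise ML method: recover the structural parameter r of the logistic map from a noisy trajectory by least-squares grid search, with the initial value assumed known. The parameter values, noise level, and grid are illustrative assumptions.

```python
import random

def logistic_series(r, x0, n):
    """n iterates of the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    xs, x = [], x0
    for _ in range(n):
        xs.append(x)
        x = r * x * (1.0 - x)
    return xs

# simulated data: a chaotic orbit plus additive observational noise
random.seed(1)
true_r, x0, n = 3.8, 0.3, 50
noisy = [x + random.gauss(0.0, 0.005)
         for x in logistic_series(true_r, x0, n)]

def sse(r):
    """Sum of squared residuals between a candidate orbit and the data."""
    return sum((m - y) ** 2 for m, y in zip(logistic_series(r, x0, n), noisy))

# grid search over r in [3.0, 4.0]; chaos makes the error surface rugged,
# but the global minimum still sits at the true parameter for short series
r_hat = min((3.0 + 0.001 * k for k in range(1001)), key=sse)
```

Because nearby parameter values diverge exponentially from the true orbit, the error surface away from the minimum is rugged; for long series the fit degrades, which is exactly the trade-off between sample size and loss of memory that the abstract notes.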
Convergence studies of deterministic methods for LWR explicit reflector methodology
Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.
2013-01-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of errors for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
Use of deterministic methods in survey calculations for criticality problems
Hutton, J.L.; Phenix, J.; Course, A.F.
1991-01-01
A code package using deterministic methods for solving the Boltzmann transport equation is the WIMS suite. This has been very successful for a range of situations. In particular, it has been used with great success to analyse trends in reactivity with a range of changes in state. The WIMS suite of codes offers a range of methods and is very flexible in the way they can be combined. A wide variety of situations can be modelled, ranging through all the current thermal reactor variants to storage systems and items of chemical plant. These methods have recently been enhanced by the introduction of the CACTUS method. This is based on a characteristics technique for solving the transport equation and has the advantage that complex geometrical situations can be treated. In this paper the basis of the method is outlined and examples of its use are illustrated. In parallel with these developments, the validation for out-of-pile situations has been extended to include experiments with relevance to criticality situations. The paper summarises this evidence and shows how these results point to a partial re-adoption of deterministic methods for some areas of criticality. The paper also presents results to illustrate the use of WIMS in criticality situations and, in particular, shows how it can complement codes such as MONK when used for surveying the reactivity effect due to changes in geometry or materials. (Author)
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
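The deterministic side of the toolbox the book describes starts from symplectic integrators. A minimal sketch of the velocity Verlet scheme, applied here to a unit-mass harmonic oscillator with force F(x) = -x, shows the characteristic long-time energy behavior; stochastic schemes such as Langevin dynamics add friction and random kicks to this same update.

```python
def velocity_verlet(x, v, force, dt, steps):
    """Integrate dx/dt = v, dv/dt = force(x) with the velocity Verlet scheme."""
    f = force(x)
    for _ in range(steps):
        v += 0.5 * dt * f        # half kick
        x += dt * v              # drift
        f = force(x)             # force at the new position
        v += 0.5 * dt * f        # half kick
    return x, v

# harmonic oscillator, 10,000 steps of dt = 0.01 (total time 100)
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, 0.01, 10_000)
energy = 0.5 * v * v + 0.5 * x * x   # stays close to the initial 0.5
```

The point of the symplectic structure is visible in the last line: the energy error stays bounded over long times instead of drifting, which is why such schemes are the workhorse of molecular dynamics.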
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
S. GOLUOGLU, C. BENTLEY, R. DEMEGLIO, M. DUNN, K. NORTON, R. PEVEY, I. SUSLOV AND H.L. DODDS
1998-01-01
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position-, energy-, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. Column-wise rod movement can also be modeled. A special case of column-wise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.
Deterministic and fuzzy-based methods to evaluate community resilience
Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo
2018-04-01
Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from literature and they are linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as an input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits a knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data including the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided in the paper.
Deterministic methods for multi-control fuel loading optimization
Rahman, Fariz B. Abdul
We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.
Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis
Heo, W.; Kim, W.; Kim, Y. [Korea Advanced Institute of Science and Technology - KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of); Yun, S. [Korea Atomic Energy Research Institute - KAERI, 989-111 Daedeok-daero, Yuseong-gu, Daejeon, 305-353 (Korea, Republic of)
2013-07-01
A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added into the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared, and were applied to the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic data were used. A hybrid MCNP5/DIF3D calculation was used to analyze the core model: the cross section data were generated using MCNP5, and the k{sub eff} and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k{sub eff}, the 9-group MCNP5/DIF3D analysis has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)
Transmission power control in WSNs : from deterministic to cognitive methods
Chincoli, M.; Liotta, A.; Gravina, R.; Palau, C.E.; Manso, M.; Liotta, A.; Fortino, G.
2018-01-01
Communications in Wireless Sensor Networks (WSNs) are affected by dynamic environments, variable signal fluctuations and interference. Thus, prompt actions are necessary to achieve dependable communications and meet Quality of Service (QoS) requirements. To this end, the deterministic algorithms
Method to deterministically study photonic nanostructures in different experimental instruments.
Husken, B H; Woldering, L A; Blum, C; Vos, W L
2009-01-01
We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim to study photonic structures. Therefore, a detailed map of the spatial surroundings of the nanostructure is made during the fabrication of the structure. These maps are made using a series of micrographs with successively decreasing magnifications. The graphs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation; a scanning electron microscope (SEM); a wide field optical microscope and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM in a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.
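The cross-correlation search the authors use to locate a small SEM field of view inside a large optical micrograph can be sketched in reduced form. The sketch below works on 1-D signals with plain Python for clarity (real images would use 2-D normalised cross-correlation); the signal values are made up for illustration.

```python
def best_offset(small, large):
    """Offset in `large` where `small` matches best, by maximising the
    dot product of mean-removed windows (a bare-bones cross-correlation)."""
    def centred(seq):
        m = sum(seq) / len(seq)
        return [s - m for s in seq]

    small_c = centred(small)
    best, best_score = 0, float("-inf")
    for off in range(len(large) - len(small) + 1):
        window = centred(large[off:off + len(small)])
        score = sum(a * b for a, b in zip(small_c, window))
        if score > best_score:
            best, best_score = off, score
    return best

# toy example: find the [2, 5, 2] feature inside a longer noisy profile
large = [0, 1, 0, 2, 5, 2, 0, 1, 0, 0]
small = [2, 5, 2]
offset = best_offset(small, large)
```

The same idea scales to the paper's setting because the intrinsic geometric features of the sample play the role of the distinctive peak here, which is what makes an automatic search feasible without artificial markers.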
Liu, Shichang; Wang, Guanbo; Liang, Jingang; Wu, Gaochen; Wang, Kan
2015-01-01
Highlights: • DRAGON & DONJON were applied in burnup calculations of plate-type research reactors. • Continuous-energy Monte Carlo burnup calculations by RMC were chosen as references. • Comparisons of keff, isotopic densities and power distribution were performed. • Reasons leading to discrepancies between the two approaches were analyzed. • DRAGON & DONJON is capable of burnup calculations with appropriate treatments. - Abstract: The burnup-dependent core neutronics analysis of plate-type research reactors such as the JRR-3M poses a challenge for the traditional neutronics tools and schemes developed for power reactors, due to the complex geometry, high heterogeneity, large leakage and particular neutron spectrum of research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the burnup-dependent core neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON & DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic one. In the first stage, the homogenization of few-group cross sections by DRAGON and the full-core diffusion calculations by DONJON were verified by comparison with detailed Monte Carlo simulations. In the second stage, burnup-dependent calculations at both the assembly level and the full-core level were carried out, to examine the capability of the deterministic code system DRAGON & DONJON to reliably simulate the burnup-dependent behavior of research reactors. The results indicate that both RMC and the DRAGON & DONJON code system are capable of burnup-dependent neutronics analysis of research reactors, provided that appropriate treatments are applied at both the assembly and core levels for the deterministic codes.
Deterministic methods in radiation transport. A compilation of papers presented February 4--5, 1992
Rice, A.F.; Roussin, R.W. [eds.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Yokose, Yoshio; Noguchi, So; Yamashita, Hideo
2002-01-01
Stochastic and deterministic methods are used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are used as a stochastic method in multivariable designs, while the deterministic method uses the gradient method, which exploits the sensitivity of the objective function. These two techniques have advantages and drawbacks. In this paper, the characteristics of those techniques are described, and a technique in which the two methods are used together is evaluated. The results of the comparison are then presented by applying each method to electromagnetic devices. (Author)
Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David
2006-01-01
We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noise. The surrogate method uses algorithmic complexity as a discriminating statistic to decide whether noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small-amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis).
Deterministic factor analysis: methods of integro-differentiation of non-integral order
Valentina V. Tarasova
2016-12-01
Full Text Available. Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results than standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed, namely the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method of integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
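A derivative of non-integral (fractional) order of the kind the article builds on can be approximated numerically with the Grünwald-Letnikov scheme. The sketch below is a generic illustration; the test function, step size and truncation depth are assumptions, not taken from the article.

```python
def gl_fractional_derivative(f, x, alpha, h=1e-3, terms=2000):
    """Grunwald-Letnikov fractional derivative of order alpha at point x.

    Uses the recurrence w_{k+1} = w_k * (k - alpha) / (k + 1) for the
    generalized binomial weights (-1)^k * C(alpha, k), truncating the
    history at x - terms*h.
    """
    total = 0.0
    w = 1.0  # w_0
    for k in range(terms):
        total += w * f(x - k * h)
        w *= (k - alpha) / (k + 1)  # advance to w_{k+1}
    return total / h ** alpha

# For alpha = 1 the scheme collapses to an ordinary backward difference:
d1 = gl_fractional_derivative(lambda t: t * t, 2.0, 1.0)   # ~ d/dx x^2 at x=2
# A genuinely fractional order: half-derivative of f(x) = x at x = 2,
# with the truncated history reaching exactly down to 0.
d_half = gl_fractional_derivative(lambda t: t, 2.0, 0.5)
```

For f(x) = x the half-derivative with lower terminal 0 has the closed form x^(1/2)/Γ(3/2) = 2·sqrt(x/π), which the sketch reproduces to a few decimal places; this is the kind of memory-carrying operator the article's factor analysis employs.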
Deterministic methods to solve the integral transport equation in neutronics
Warin, X.
1993-11-01
We present a synthesis of the methods used to solve the integral transport equation in neutronics. This formulation is above all used to compute solutions in 2D for heterogeneous assemblies. Three kinds of methods are described: the collision probability method; the interface current method; and the current coupling collision probability method. These methods do not seem to be the most effective in 3D. (author). 9 figs
Optimization of structures subjected to dynamic load: deterministic and probabilistic methods
Élcio Cassimiro Alves
Full Text Available Abstract This paper deals with the deterministic and probabilistic optimization of structures against bending when subjected to dynamic loads. The deterministic optimization problem considers the plate submitted to a time-varying load, while the probabilistic one takes into account a random loading defined by a power spectral density function. The correlation between the two problems is established through a Fourier transform. The finite element method is used to model the structures. The sensitivity analysis is performed through the analytical method, and the optimization problem is solved by the method of interior points. A comparison between the deterministic optimization and the probabilistic one, with a power spectral density function compatible with the time-varying load, shows very good results.
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
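The propagation step at the heart of a DUA-style method can be sketched generically: first-order sensitivities propagate input variances to an output variance (the delta method). GRESS and ADGEN obtain derivatives by computer calculus; plain central finite differences stand in for them below. The model function and the numbers are illustrative assumptions, not the borehole benchmark's actual parameters.

```python
import math

def propagate_uncertainty(model, x, sigma, h=1e-6):
    """First-order (delta-method) uncertainty propagation.

    For independent inputs x_i with standard deviations sigma_i,
    Var[y] ~= sum_i (dy/dx_i)^2 * sigma_i^2, with the sensitivities
    dy/dx_i estimated by central differences.
    """
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grads.append((model(xp) - model(xm)) / (2.0 * h))
    var = sum(g * g * s * s for g, s in zip(grads, sigma))
    return math.sqrt(var), grads

# Illustrative model (not the borehole flow model): y = a*b + c.
model = lambda p: p[0] * p[1] + p[2]
std_y, sens = propagate_uncertainty(model, [2.0, 3.0, 1.0], [0.1, 0.2, 0.05])
```

Two model runs per input suffice here, which mirrors the abstract's point: the derivative-based route needs far fewer code executions than a sampling-based study.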
Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications
Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne
2014-01-01
The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.
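As an illustration of the deterministic point constructions the book surveys, the sketch below estimates a simple integral with a two-dimensional Halton sequence (van der Corput sequences in bases 2 and 3) and compares against pseudo-random sampling. The integrand and the sample size are arbitrary choices for the example.

```python
import random

def van_der_corput(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in the given base."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def integrate(points):
    # Estimate the integral of f(x, y) = x * y over the unit square (exact: 1/4).
    return sum(x * y for x, y in points) / len(points)

n = 4096
# Halton points: coprime bases per coordinate.
qmc_pts = [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n + 1)]
rng = random.Random(0)
mc_pts = [(rng.random(), rng.random()) for _ in range(n)]

qmc_err = abs(integrate(qmc_pts) - 0.25)
mc_err = abs(integrate(mc_pts) - 0.25)
```

The low-discrepancy points fill the square far more evenly than random draws, which is why quasi-Monte Carlo error decays close to 1/N for smooth integrands rather than the 1/sqrt(N) of plain Monte Carlo.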
2D deterministic radiation transport with the discontinuous finite element method
Kershaw, D.; Harte, J.
1993-01-01
This report provides a complete description of the analytic and discretized equations for 2D deterministic radiation transport. This computational model has been checked against a wide variety of analytic test problems and found to give excellent results. We make extensive use of the discontinuous finite element method
Bearing-only SLAM: comparison between probabilistic and deterministic methods
Joly , Cyril; Rives , Patrick
2008-01-01
This work deals with the problem of simultaneous localization and mapping (SLAM). Classical methods for solving the SLAM problem are based on the Extended Kalman Filter (EKF-SLAM) or particle filter (FastSLAM). These kinds of algorithms allow on-line solving but can be inconsistent. In this report, global approaches are studied instead of the above-mentioned algorithms. Global approaches need all measurements from the initial step to the final step in order to compute the trajectory of the robot and...
An atlas-based multimodal registration method for 2D images with discrepancy structures.
Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng
2018-06-04
An atlas-based multimodal registration method for 2-dimensional images with discrepancy structures was proposed in this paper. An atlas was utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the squared sum of intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: An atlas-based multimodal registration method schematic diagram.
Non-Deterministic, Non-Traditional Methods (NDNTM)
Cruse, Thomas A.; Chamis, Christos C. (Technical Monitor)
2001-01-01
The review effort identified research opportunities related to the use of nondeterministic, nontraditional methods to support aerospace design. The scope of the study was restricted to structural design rather than other areas such as control system design. Thus, the observations and conclusions are limited by that scope. The review identified a number of key results. The results include the potential for NASA/AF collaboration in the area of a design environment for advanced space access vehicles. The following key points set the context and delineate the key results. The Principal Investigator's (PI's) context for this study derived from participation as a Panel Member in the Air Force Scientific Advisory Board (AF/SAB) Summer Study Panel on 'Whither Hypersonics?' A key message from the Summer Study effort was a perceived need for a national program for a space access vehicle whose operating characteristics of cost, availability, deployability, and reliability most closely match the NASA 3rd Generation Reusable Launch Vehicle (RLV). The Panel urged the AF to make a significant joint commitment to such a program just as soon as the AF defined specific requirements for space access consistent with the AF Aerospace Vision 2020. The review brought home a concurrent need for a national vehicle design environment. Engineering design system technology is at a time point from which a revolution as significant as that brought about by the finite element method is possible, this one focusing on information integration on a scale that far surpasses current design environments. The study therefore fully supported the concept, if not some of the details of the Intelligent Synthesis Environment (ISE). It became abundantly clear during this study that the government (AF, NASA) and industry are not moving in the same direction in this regard, in fact each is moving in its own direction. NASA/ISE is not yet in an effective leadership position in this regard. However, NASA does
Giffard, F.X
2000-05-19
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis (FIA) responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
Frequency domain fatigue damage estimation methods suitable for deterministic load spectra
Henderson, A.R.; Patel, M.H. [University Coll., Dept. of Mechanical Engineering, London (United Kingdom)
2000-07-01
The evaluation of fatigue damage due to load spectra directly in the frequency domain is a complex problem, but one with the benefit of significant computation time savings. Various formulae have been suggested, but they have usually related to a specific application only. The Dirlik method is the exception and is applicable to general cases of continuous stochastic spectra. This paper describes three approaches for evaluating discrete deterministic load spectra generated by the floating wind turbine model developed in the UCL/RAL research project. (Author)
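The Dirlik method mentioned here builds an empirical rainflow stress-range distribution from four spectral moments of the stress PSD. The sketch below reproduces the form of Dirlik's PDF as commonly stated in the fatigue literature; the flat PSD used to generate the moments is an illustrative assumption, not data from the paper.

```python
import math

def dirlik_pdf(m0, m1, m2, m4):
    """Return Dirlik's rainflow stress-range PDF p(S) built from spectral moments."""
    xm = (m1 / m0) * math.sqrt(m2 / m4)
    gamma = m2 / math.sqrt(m0 * m4)          # irregularity factor
    d1 = 2.0 * (xm - gamma ** 2) / (1.0 + gamma ** 2)
    r = (gamma - xm - d1 ** 2) / (1.0 - gamma - d1 + d1 ** 2)
    d2 = (1.0 - gamma - d1 + d1 ** 2) / (1.0 - r)
    d3 = 1.0 - d1 - d2                       # the three weights sum to one
    q = 1.25 * (gamma - d3 - d2 * r) / d1
    sig = 2.0 * math.sqrt(m0)                # normalized range Z = S / (2*sqrt(m0))

    def pdf(s):
        z = s / sig
        return (d1 / q * math.exp(-z / q)
                + d2 * z / r ** 2 * math.exp(-z * z / (2.0 * r ** 2))
                + d3 * z * math.exp(-z * z / 2.0)) / sig
    return pdf

# Moments of an assumed flat one-sided PSD, G(f) = 1 over 10-20 Hz (illustrative):
m0, m1, m2, m4 = 10.0, 150.0, 7000.0 / 3.0, 620000.0
p = dirlik_pdf(m0, m1, m2, m4)
```

Weighting an exponential and two Rayleigh-like terms this way lets one PDF cover both narrow-band and broad-band spectra; fatigue damage then follows by integrating S^k · p(S) against the peak rate and the S-N curve.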
Liu, Shichang; Wang, Guanbo; Wu, Gaochen; Wang, Kan
2015-01-01
Highlights: • DRAGON and DONJON are applied and verified in calculations of research reactors. • Continuous-energy Monte Carlo calculations by RMC are chosen as the references. • The “ECCO” option of DRAGON is suitable for the calculations of research reactors. • Manual modifications of cross-sections are not necessary with DRAGON and DONJON. • DRAGON and DONJON agree well with RMC if appropriate treatments are applied. - Abstract: Simulation of the behavior of plate-type research reactors such as JRR-3M and CARR poses a challenge for the traditional neutronics calculation tools and schemes used for power reactors, due to the complex geometry, high heterogeneity and large leakage of research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON and DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic approach. The goal of this research is to examine the capability of the deterministic code system DRAGON and DONJON to reliably simulate research reactors. The results indicate that the DRAGON and DONJON code system agrees well with the continuous-energy Monte Carlo simulation on both k-eff and flux distributions if the appropriate treatments (such as the ECCO option) are applied.
Theory and application of deterministic multidimensional pointwise energy lattice physics methods
Zerkle, M.L.
1999-01-01
The theory and application of deterministic, multidimensional, pointwise energy lattice physics methods are discussed. These methods may be used to solve the neutron transport equation in multidimensional geometries using near-continuous energy detail to calculate equivalent few-group diffusion theory constants that rigorously account for spatial and spectral self-shielding effects. A dual energy resolution slowing down algorithm is described which reduces the computer memory and disk storage requirements for the slowing down calculation. Results are presented for a 2D BWR pin cell depletion benchmark problem
Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives
Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle
2002-01-01
In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD-player is presented, based on parameter identification and measurements in the focus loops of 12 actual CD drives that differ by having worst-case behaviors with respect to various...... properties. The method provides a systematic way to derive a nominal average model as well as a structured multiplicative input uncertainty model, and it is demonstrated how to apply mu-theory to design a controller based on the models obtained that meets certain robust performance criteria....
Strelkov, S. A.; Sushkevich, T. A.; Maksakova, S. V.
2017-11-01
We discuss world-class Russian achievements in the theory of radiation transfer, taking into account its polarization in natural media, and the current scientific potential developing in Russia, which provides an adequate methodological basis for theoretical and computational research of radiation processes and radiation fields in natural media using supercomputers and massive parallelism. A new version of the matrix transfer operator is proposed for solving problems of polarized radiation transfer in heterogeneous media by the method of influence functions, in which deterministic and stochastic methods can be combined.
Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.
2008-01-01
In general there are two ways to calculate effective doses. The first is the use of deterministic methods, such as the point kernel method implemented in Visiplan or Microshield. These kinds of calculations are very fast, but they are not very precise for complex geometries with shielding composed of more than one material. Nevertheless, such programs are sufficient for ALARA optimisation calculations. On the other side there are Monte Carlo methods. This way of calculation is quite precise in comparison with reality, but the calculation time is usually very long. Deterministic-method programs have one disadvantage: usually there is an option to choose a buildup factor (BUF) for only one material in multilayer stratified slab shielding problems, even if the shielding is composed of different materials. In the literature, different formulas have been proposed for multilayer BUF approximation. The aim of this paper was to examine these different formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single and double slab shielding. For buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen, because it shows lower deviations than Taylor fitting. (authors)
A plateau–valley separation method for textured surfaces with a deterministic pattern
Godi, Alessandro; Kühle, Anders; De Chiffre, Leonardo
2014-01-01
The effective characterization of textured surfaces presenting a deterministic pattern of lubricant reservoirs is an issue with which many researchers are nowadays struggling. Existing standards are not suitable for the characterization of such surfaces, providing at times values without physical...... meaning. A new method based on the separation between the plateau and valley regions is hereby presented allowing independent functional analyses of the detected features. The determination of a proper threshold between plateaus and valleys is the first step of a procedure resulting in an efficient...
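The plateau-valley separation step can be illustrated on a toy height profile. The iterative-midpoint threshold below is a stand-in assumption (an isodata-style rule for a bimodal height distribution), not necessarily the threshold procedure developed in the paper.

```python
def plateau_valley_split(heights, threshold=None):
    """Split a surface-height profile into plateau and valley regions.

    If no threshold is given, an assumed isodata-style rule is used:
    iterate the midpoint between the mean levels of the two candidate
    regions until it stabilizes. Assumes a bimodal height distribution.
    """
    if threshold is None:
        t = sum(heights) / len(heights)
        while True:
            hi = [h for h in heights if h >= t]
            lo = [h for h in heights if h < t]
            new_t = 0.5 * (sum(hi) / len(hi) + sum(lo) / len(lo))
            if abs(new_t - t) < 1e-12:
                break
            t = new_t
        threshold = t
    plateaus = [h for h in heights if h >= threshold]
    valleys = [h for h in heights if h < threshold]
    return threshold, plateaus, valleys

def rms_roughness(region):
    """RMS deviation of a region about its own mean level."""
    mean = sum(region) / len(region)
    return (sum((h - mean) ** 2 for h in region) / len(region)) ** 0.5

# Synthetic textured profile: flat plateaus near 0 with lubricant dimples at depth ~-5.
profile = [0.1, -0.1, 0.05, -5.2, -4.8, 0.0, 0.1, -5.0, -0.05, 0.02]
t, plat, vall = plateau_valley_split(profile)
```

Once the two regions are separated, each can be characterized independently, e.g. plateau roughness for bearing behavior and valley depth or volume for lubricant retention, which is the functional analysis the abstract describes.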
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
Norris, Edward T.; Liu, Xin; Hsieh, Jiang
2015-01-01
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer.
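Discrete ordinates solvers like Denovo are large codes, but the core sweep-and-iterate idea can be shown on a toy problem. The following 1-D slab sketch (uniform medium, isotropic scattering, vacuum boundaries, equally weighted angles in place of a proper Gauss-Legendre quadrature) is an assumption-laden miniature for illustration, not Denovo's algorithm.

```python
def sn_slab(width=4.0, cells=200, n_angles=8, sigma_t=1.0, sigma_s=0.5, q=1.0):
    """Toy 1-D S_N solver: diamond-difference sweeps + source iteration.

    Uniform slab, isotropic scattering, flat isotropic source q, vacuum
    boundaries. A real code would use Gauss-Legendre angles/weights; equally
    weighted midpoint angles on (-1, 1) are used here for brevity.
    """
    dx = width / cells
    mus = [-1.0 + (2.0 * (k + 0.5)) / n_angles for k in range(n_angles)]
    w = 2.0 / n_angles                       # weights sum to 2 (integral over mu)
    phi = [0.0] * cells                      # scalar flux
    for _ in range(500):                     # source iteration
        # Isotropic emission density per unit mu: (sigma_s*phi + q) / 2
        src = [0.5 * (sigma_s * p + q) for p in phi]
        new_phi = [0.0] * cells
        for mu in mus:
            order = range(cells) if mu > 0 else range(cells - 1, -1, -1)
            psi_in = 0.0                     # vacuum boundary: no incoming flux
            for i in order:
                # Diamond difference: cell-average angular flux from the
                # balance equation mu*(psi_out - psi_in)/dx + sigma_t*psi = src.
                psi_avg = (src[i] * dx + 2.0 * abs(mu) * psi_in) / (
                    sigma_t * dx + 2.0 * abs(mu))
                psi_out = 2.0 * psi_avg - psi_in
                new_phi[i] += w * psi_avg    # quadrature sum -> scalar flux
                psi_in = psi_out
        if max(abs(a - b) for a, b in zip(new_phi, phi)) < 1e-10:
            phi = new_phi
            break
        phi = new_phi
    return phi

flux = sn_slab()
```

The same three ingredients, an angular quadrature, a spatial sweep per angle, and iteration on the scattering source, carry over to production multidimensional solvers; quadrature order and scattering expansion order are exactly the knobs varied in the study above.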
Application of deterministic and probabilistic methods in replacement of nuclear systems
Vianna Filho, Alfredo Marques
2007-01-01
The economic equipment replacement problem is one of the oldest questions in Production Engineering. On the one hand, new equipment is more attractive given its better performance, better reliability, lower maintenance cost, etc. New equipment, however, requires a higher initial investment, and thus a higher opportunity cost, and imposes special training of the labor force. On the other hand, old equipment has lower performance and lower reliability, and especially higher maintenance costs, but in contrast lower financial, insurance, and opportunity costs. The weighting of all these costs can be made with the various methods presented. The aim of this paper is to discuss deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problems are examined: substitution imposed by wear and substitution imposed by failures. To solve the problem of nuclear system substitution imposed by wear, deterministic methods are discussed. To solve the problem of nuclear system substitution imposed by failures, probabilistic methods are discussed. (author)
Maerker, R.E.; Worley, B.A.
1989-01-01
Interest in research into the field of uncertainty analysis has recently been stimulated as a result of a need in high-level waste repository design assessment for uncertainty information in the form of response complementary cumulative distribution functions (CCDFs) to show compliance with regulatory requirements. The solution to this problem must obviously rely on the analysis of computer code models, which, however, employ parameters that can have large uncertainties. The motivation for the research presented in this paper is a search for a method involving a deterministic uncertainty analysis approach that could serve as an improvement over those methods that make exclusive use of statistical techniques. A deterministic uncertainty analysis (DUA) approach based on the use of first derivative information is the method studied in the present procedure. The present method has been applied to a high-level nuclear waste repository problem involving use of the codes ORIGEN2, SAS, and BRINETEMP in series, and the resulting CDF of a BRINETEMP result of interest is compared with that obtained through a completely statistical analysis
Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods
Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)
2012-09-15
Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)
Optimal power flow: a bibliographic survey I. Formulations and deterministic methods
Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [University of Jyvaskyla, Department of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)
2012-09-15
Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey (this article) provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents a first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in depth. In particular, this allows us to depict a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.
Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S; Henry, Roland G
2013-01-01
sensitivity (79%) as determined from cortical IES compared to deterministic q-ball (50%), probabilistic DTI (36%), and deterministic DTI (10%). The sensitivity using the q-ball algorithm (65%) was significantly higher than using DTI (23%) (p probabilistic algorithms (58%) were more sensitive than deterministic approaches (30%) (p = 0.003). Probabilistic q-ball fiber tracks had the smallest offset to the subcortical stimulation sites. The offsets between diffusion fiber tracks and subcortical IES sites were increased significantly for those cases where the diffusion fiber tracks were visibly thinner than expected. There was perfect concordance between the subcortical IES function (e.g. hand stimulation) and the cortical connection of the nearest diffusion fiber track (e.g. upper extremity cortex). This study highlights the tremendous utility of intraoperative stimulation sites to provide a gold standard from which to evaluate diffusion MRI fiber tracking methods and has provided an objective standard for evaluation of different diffusion models and approaches to fiber tracking. The probabilistic q-ball fiber tractography was significantly better than DTI methods in terms of sensitivity and accuracy of the course through the white matter. The commonly used DTI fiber tracking approach was shown to have very poor sensitivity (as low as 10% for deterministic DTI fiber tracking) for delineation of the lateral aspects of the corticospinal tract in our study. Effects of the tumor/edema resulted in significantly larger offsets between the subcortical IES and the preoperative fiber tracks. The provided data show that probabilistic HARDI tractography is the most objective and reproducible analysis but given the small sample and number of stimulation points a generalization of our results should be given with caution.
Indeed our results inform the capabilities of preoperative diffusion fiber tracking and indicate that such data should be used carefully when making pre-surgical and
Deco, Gustavo; Marti, Daniel
2007-01-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability.
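The moments-method idea can be sketched on a one-dimensional bistable toy system (the double-well drift f(x) = x - x³ and the noise level are illustrative assumptions, not the paper's neurodynamical model). Gaussian closure turns the stochastic problem into two deterministic ODEs for the mean and variance, which can be checked against a direct ensemble simulation:

```python
import numpy as np

# Toy SDE: dx = (x - x^3) dt + sigma dW, a bistable system.
# Gaussian moment closure (using E[x^3] = m^3 + 3 m v) gives
#   dm/dt = m - m^3 - 3 m v
#   dv/dt = 2 (v - 3 m^2 v - 3 v^2) + sigma^2
sigma, dt, T = 0.2, 1e-3, 10.0

def moment_flow(m0, v0):
    m, v = m0, v0
    for _ in range(int(T / dt)):          # forward Euler on the moment ODEs
        dm = m - m**3 - 3.0 * m * v
        dv = 2.0 * (v - 3.0 * m**2 * v - 3.0 * v**2) + sigma**2
        m, v = m + dt * dm, v + dt * dv
    return m, v

def ensemble_mean(m0, n=5000, seed=0):
    rng = np.random.default_rng(seed)     # stochastic reference simulation
    x = np.full(n, m0)
    for _ in range(int(T / dt)):
        x += (x - x**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return x.mean()

m, v = moment_flow(1.0, 0.0)
mc = ensemble_mean(1.0)
print(f"moment method m = {m:.3f}, Monte Carlo mean = {mc:.3f}")
```

Starting inside one well with weak noise, the closed moment equations track the ensemble mean at a fraction of the simulation cost; the bimodal extension discussed in the abstract is needed precisely when trajectories populate both wells and a single Gaussian no longer fits.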
Tooth-size discrepancy: A comparison between manual and digital methods
Gabriele Dória Cabral Correia
2014-08-01
INTRODUCTION: Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. OBJECTIVE: This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and compare these measurements with those obtained from plaster models. MATERIAL AND METHODS: To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models acquired with a 3Shape R700T scanner were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. RESULTS: Data were statistically assessed using the Friedman test and no statistically significant differences were found between the two methods (P > 0.05), except for values found by the linear digital method, which showed a slight, statistically non-significant difference. CONCLUSIONS: Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable.
Developments based on stochastic and determinist methods for studying complex nuclear systems
Giffard, F.X.
2000-01-01
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first is the deterministic method, which is applicable in most practical cases but requires approximations. The other is the Monte Carlo method, which does not make these approximations but generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained with the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easy to implement, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
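The biasing idea behind such speed-ups can be illustrated on a deliberately small example: estimating the probability that a particle penetrates a slab of 10 mean free paths, where analog sampling almost never scores. The stretched path-length distribution and its rate are illustrative choices, not the actual TRIPOLI-4/ERANOS scheme:

```python
import math, random

TAU = 10.0                      # slab optical thickness (mean free paths)
TRUE = math.exp(-TAU)           # analytic penetration probability ~ 4.54e-5
random.seed(1)

def analog(n):
    # Sample path lengths from Exp(1); score 1 if the flight exceeds TAU.
    return sum(1 for _ in range(n) if random.expovariate(1.0) > TAU) / n

def biased(n, lam=0.1):
    # Sample from a stretched Exp(lam) and carry the likelihood-ratio weight
    # w(x) = f(x)/g(x) = exp(-(1-lam)x)/lam, as in an exponential transform.
    total = 0.0
    for _ in range(n):
        x = random.expovariate(lam)
        if x > TAU:
            total += math.exp(-(1.0 - lam) * x) / lam
    return total / n

est_analog = analog(20000)
est_biased = biased(20000)
print(TRUE, est_analog, est_biased)
```

With 20 000 histories the analog estimator typically scores zero or one event, while the biased estimator resolves the answer to a few percent; deterministic importance maps play the role of choosing good biasing parameters in realistic geometries.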
Maheri, Alireza
2014-01-01
Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case-scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of safety factors on the reliability and the cost are investigated. In performing reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs such as design for an autonomy period and employing safety factors have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
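The interplay between a deterministic safety factor and the reliability it actually buys can be mimicked with a toy sizing problem. The constant load, the Gaussian resource model and the unmet-load metric below are invented for illustration and are far simpler than the paper's HRES model:

```python
import random

random.seed(7)
DEMAND = 100.0                  # constant daily demand (kWh), illustrative

def unmet_load_fraction(safety_factor, days=20000):
    # Deterministic sizing: rate the plant for the *mean* resource,
    # inflated by a safety factor, then test it against uncertain days.
    mean_cf = 0.30                                   # mean capacity factor
    capacity = DEMAND / mean_cf * safety_factor
    unmet = 0.0
    for _ in range(days):
        cf = max(0.0, random.gauss(mean_cf, 0.10))   # uncertain resource
        production = capacity * cf
        unmet += max(0.0, DEMAND - production)
    return unmet / (DEMAND * days)   # fraction of demand not served

low = unmet_load_fraction(1.0)
high = unmet_load_fraction(1.5)
print(low, high)
```

The Monte Carlo check makes the paper's point concrete: the safety factor does reduce unmet load, but the achieved reliability is an output of the uncertainty model, not something the deterministic design step can guarantee by itself.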
Biomedical applications of two- and three-dimensional deterministic radiation transport methods
Nigg, D.W.
1992-01-01
Multidimensional deterministic radiation transport methods are routinely used in support of the Boron Neutron Capture Therapy (BNCT) Program at the Idaho National Engineering Laboratory (INEL). Typical applications of two-dimensional discrete-ordinates methods include neutron filter design, as well as phantom dosimetry. The epithermal-neutron filter for BNCT that is currently available at the Brookhaven Medical Research Reactor (BMRR) was designed using such methods. Good agreement between calculated and measured neutron fluxes was observed for this filter. Three-dimensional discrete-ordinates calculations are used routinely for dose-distribution calculations in three-dimensional phantoms placed in the BMRR beam, as well as for treatment planning verification for live canine subjects. Again, good agreement between calculated and measured neutron fluxes and dose levels is obtained
Stephenson, C L; Harris, C A
2016-09-01
Glyphosate is a herbicide used to control broad-leaved weeds. Some uses of glyphosate in crop production can lead to residues of the active substance and related metabolites in food. This paper uses data on residue levels, processing information and consumption patterns, to assess theoretical lifetime dietary exposure to glyphosate. Initial estimates were made assuming exposure to the highest permitted residue levels in foods. These intakes were then refined using median residue levels from trials, processing information, and monitoring data to achieve a more realistic estimate of exposure. Estimates were made using deterministic and probabilistic methods. Exposures were compared to the acceptable daily intake (ADI): the amount of a substance that can be consumed daily without an appreciable health risk. Refined deterministic intakes for all consumers were at or below 2.1% of the ADI. Variations were due to cultural differences in consumption patterns and the level of aggregation of the dietary information in calculation models, which allows refinements for processing. Probabilistic exposure estimates ranged from 0.03% to 0.90% of the ADI, depending on whether optimistic or pessimistic assumptions were made in the calculations. Additional refinements would be possible if further data on processing and from residues monitoring programmes were available. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
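The refinement from a worst-case deterministic intake to a probabilistic one can be sketched numerically. The residue levels, consumption figures and ADI below are made-up illustrative values, not the glyphosate data used in the paper:

```python
import random

random.seed(42)
ADI = 0.5                       # mg/kg bw/day, illustrative acceptable daily intake
BW = 60.0                       # consumer body weight, kg

# (food, highest permitted residue mg/kg, high-level daily consumption kg)
foods = [("bread", 5.0, 0.30), ("cereal", 10.0, 0.10), ("pulses", 2.0, 0.15)]

# Deterministic: every food at maximum residue and high consumption at once.
det = sum(res * cons for _, res, cons in foods) / BW

# Probabilistic: sample residue and consumption per food (uniform draws are
# a crude stand-in for monitoring and dietary-survey distributions).
def one_day():
    intake = 0.0
    for _, res_max, cons_hi in foods:
        intake += random.uniform(0.0, res_max) * random.uniform(0.0, cons_hi)
    return intake / BW

samples = sorted(one_day() for _ in range(50000))
p95 = samples[int(0.95 * len(samples))]
print(f"deterministic {100*det/ADI:.1f}% ADI, probabilistic P95 {100*p95/ADI:.1f}% ADI")
```

Even the 95th percentile of the sampled intakes sits well below the single worst-case number, which is the general pattern the abstract reports: probabilistic refinement removes the compounding of simultaneous worst-case assumptions.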
Cacuci, D.G.
1984-07-01
This report presents a self-contained mathematical formalism for deterministic sensitivity analysis of two-phase flow systems, a detailed application to sensitivity analysis of the homogeneous equilibrium model of two-phase flow, and a representative application to sensitivity analysis of a model (simulating pump-trip-type accidents in BWRs) where a transition between single phase and two phase occurs. The rigor and generality of this sensitivity analysis formalism stem from the use of Gateaux (G-) differentials. This report highlights the major aspects of deterministic (forward and adjoint) sensitivity analysis, including derivation of the forward sensitivity equations, derivation of sensitivity expressions in terms of adjoint functions, explicit construction of the adjoint system satisfied by these adjoint functions, determination of the characteristics of this adjoint system, and demonstration that these characteristics are the same as those of the original quasilinear two-phase flow equations. This proves that whenever the original two-phase flow problem is solvable, the adjoint system is also solvable and, in principle, the same numerical methods can be used to solve both the original and adjoint equations
Matijevic, M.; Grgic, D.; Jecmenica, R.
2016-01-01
This paper presents a comparison of simplified Spent Fuel Pool (SFP) dose rates for the Krsko Power Plant using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall-mounted level instrumentation in case of a significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into the old and new sections, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from Microshield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and a WWINP file) for the MCNP fixed-source calculation using continuous-energy transport. ADVANTG employs a deterministic forward-adjoint transport solver, Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first
Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay
2017-11-01
Data-driven fault detection plays an important role in industrial systems due to its applicability in the case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
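A minimal sketch of the just-in-time learning idea: for each query, fit a local model only on the nearest historical samples and flag the observation if its residual is large. The process model, neighbourhood size and threshold here are illustrative assumptions, not the paper's JITL-DD design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical fault-free data from a nonlinear process with a known
# deterministic disturbance d(t) = 0.5*sin(t).
t_hist = rng.uniform(0, 10, 2000)
x_hist = rng.uniform(-2, 2, 2000)
y_hist = x_hist**2 + 0.5 * np.sin(t_hist) + 0.05 * rng.standard_normal(2000)

def jitl_residual(t_q, x_q, y_obs, k=20):
    # Just-in-time learning: fit a *local* linear model on the k nearest
    # historical samples, then compare its prediction with the observation.
    d = np.hypot(t_hist - t_q, x_hist - x_q)
    idx = np.argsort(d)[:k]
    A = np.column_stack([np.ones(k), t_hist[idx], x_hist[idx]])
    coef, *_ = np.linalg.lstsq(A, y_hist[idx], rcond=None)
    y_pred = coef @ [1.0, t_q, x_q]
    return abs(y_obs - y_pred)

THRESH = 0.3
normal = jitl_residual(5.0, 1.0, 1.0**2 + 0.5 * np.sin(5.0))        # healthy
faulty = jitl_residual(5.0, 1.0, 1.0**2 + 0.5 * np.sin(5.0) + 1.0)  # +1 bias fault
print(normal < THRESH, faulty > THRESH)
```

Because the model is rebuilt per query from neighbouring data, the nonlinearity and the deterministic disturbance are absorbed into the local fit, and only genuine deviations raise the residual.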
A deterministic alternative to the full configuration interaction quantum Monte Carlo method
Tubman, Norm M.; Lee, Joonho; Takeshita, Tyler Y.; Head-Gordon, Martin; Whaley, K. Birgitta [University of California, Berkeley, Berkeley, California 94720 (United States)
2016-07-28
Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr₂ molecule. We demonstrate for systems like Cr₂ that such calculations can be performed in just a few CPU hours, which makes it one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method also allows efficient calculation of excited state energies, which we illustrate with benchmark results for the excited states of C₂.
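The core of deterministic selection can be caricatured on a dense toy matrix: grow a determinant subspace by repeatedly adding the basis state most strongly coupled to the current ground-state vector, and diagonalize only that subspace. The matrix and the selection rule here are illustrative stand-ins, far simpler than the actual method:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
# Toy "Hamiltonian": well-separated diagonal + weak random couplings.
H = np.diag(np.linspace(0.0, 10.0, N))
V = 0.02 * rng.standard_normal((N, N))
H += (V + V.T) / 2.0
e_exact = np.linalg.eigvalsh(H)[0]       # full diagonalization reference

# Deterministic selection: start from the lowest-diagonal determinant and
# iteratively add the determinant with the largest coupling |sum_j H_aj c_j|
# to the current ground-state vector c.
space = [0]
for _ in range(40):
    sub = H[np.ix_(space, space)]
    w, vec = np.linalg.eigh(sub)
    c = vec[:, 0]
    imp = np.abs(H[:, space] @ c)        # importance of every determinant
    imp[space] = 0.0                     # already included
    space.append(int(np.argmax(imp)))
e_sel = np.linalg.eigvalsh(H[np.ix_(space, space)])[0]
print(e_exact, e_sel, len(space))
```

By Rayleigh-Ritz, the subspace energy is a variational upper bound on the exact ground-state energy; a small fraction of the basis recovers nearly all of it when, as here, the wave function is dominated by a handful of strongly coupled determinants.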
Emmanouil Styvaktakis
2007-01-01
This paper presents the two main types of classification methods for power quality disturbances based on underlying causes: deterministic classification, giving an expert system as an example, and statistical classification, with support vector machines (a novel method) as an example. An expert system is suitable when one has a limited amount of data and sufficient power system expert knowledge; however, its application requires a set of threshold values. Statistical methods are suitable when a large amount of data is available for training. Two important issues that determine the effectiveness of a classifier, data segmentation and feature extraction, are discussed. Segmentation of a sequence of data recordings is preprocessing that partitions the data into segments, each representing a duration containing either an event or a transition between two events. Extraction of features is applied to each segment individually. Some useful features and their effectiveness are then discussed. Some experimental results are included to demonstrate the effectiveness of both systems. Finally, conclusions are given together with a discussion of some future research directions.
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations, and present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
Deterministic flows of order-parameters in stochastic processes of quantum Monte Carlo method
Inoue, Jun-ichi
2010-01-01
In terms of the stochastic process of the quantum-mechanical version of the Markov chain Monte Carlo method (MCMC), we analytically derive macroscopically deterministic flow equations of order parameters such as the spontaneous magnetization in infinite-range (d(= ∞)-dimensional) quantum spin systems. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding (d + 1)-dimensional classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. In the steady state, we show that the equations are identical to the saddle-point equations for the equilibrium state of the same system. The equation for the dynamical Ising model is recovered in the classical limit. We also check the validity of the static approximation by means of computer simulations for finite-size systems and discuss several possible extensions of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we use our procedure to evaluate the decoding process of Bayesian image restoration. With the assistance of the concept of dynamical replica theory (DRT), we derive the zero-temperature flow equation of the image restoration measure, which shows 'non-monotonic' behaviour in its time evolution.
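In the classical limit mentioned above, the flow equation for the infinite-range Ising magnetization under Glauber dynamics takes the familiar form dm/dt = -m + tanh(βm), whose fixed point reproduces the equilibrium saddle-point equation m = tanh(βm). A few lines suffice to check this numerically (β = 2 is an arbitrary choice inside the ferromagnetic phase):

```python
import math

beta, dt = 2.0, 0.01
m = 0.2                          # small initial magnetization
for _ in range(5000):            # Euler integration of dm/dt = -m + tanh(beta*m)
    m += dt * (-m + math.tanh(beta * m))
print(m)                         # converges to the saddle-point solution
```

The integration settles on the nonzero root of m = tanh(2m) (about 0.957), illustrating the abstract's statement that the deterministic flow's steady state coincides with the equilibrium saddle-point equation.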
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu
2011-01-01
Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great faculty in solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one energy group time-independent deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
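The source-iteration/sweep structure that such GPU codes parallelize can be seen in a minimal one-group, 1-D S2 solver with diamond differencing. The slab size, cross sections and quadrature are illustrative, and this sketch is serial, unlike Sweep3D's 3-D wavefront sweeps:

```python
import math

# One-group, isotropically scattering slab with a uniform source,
# vacuum boundaries, S2 quadrature (mu = +/-1/sqrt(3), weights 1).
N, L = 50, 10.0
dx = L / N
sig_t, sig_s, q = 1.0, 0.5, 1.0
mu = 1.0 / math.sqrt(3.0)

phi = [0.0] * N
for _ in range(500):                         # source iteration
    src = [(sig_s * p + q) / 2.0 for p in phi]
    phi_new = [0.0] * N
    for direction in (+1, -1):               # transport sweep per ordinate
        cells = range(N) if direction > 0 else range(N - 1, -1, -1)
        psi_in = 0.0                         # vacuum boundary
        for i in cells:
            a = mu / dx                      # diamond-difference update
            psi_out = (src[i] + (a - sig_t / 2.0) * psi_in) / (a + sig_t / 2.0)
            phi_new[i] += (psi_in + psi_out) / 2.0   # weight 1 per ordinate
            psi_in = psi_out
    converged = max(abs(x - y) for x, y in zip(phi, phi_new)) < 1e-9
    phi = phi_new
    if converged:
        break
print(phi[N // 2])               # scalar flux at slab centre
```

The centre flux approaches the infinite-medium value q/(Σt - Σs) = 2 for this thick, mildly leaking slab, and the solution is symmetric, which is a quick sanity check on the sweep logic.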
mouloud, Hamidatou
2016-04-01
The objective of this paper is to analyze the seismic activity and the statistical treatment of the seismicity catalog of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in North-East Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for a region. In this study, the procedure for earthquake hazard evaluation developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
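The Cornell-type calculation chain (recurrence law → ground-motion attenuation → Poisson probability of exceedance) can be condensed into a few lines. The Gutenberg-Richter constants and the attenuation coefficients below are invented for illustration, not values for the Constantine region:

```python
import math

# Illustrative Gutenberg-Richter recurrence: log10 N(>=M) = a - b*M
a_gr, b_gr = 3.5, 1.0
m_min, m_max, dm = 5.0, 7.5, 0.1
R = 30.0                         # site-to-source distance, km

def pga(m, r):
    # Toy ground-motion model: ln PGA = c0 + c1*M - c2*ln(r)  (hypothetical)
    return math.exp(-3.5 + 0.9 * m - 1.0 * math.log(r))

def annual_exceedance_rate(pga_threshold):
    rate = 0.0
    m = m_min
    while m < m_max:
        # Rate of events in [m, m+dm): difference of cumulative G-R rates
        n_lo = 10.0 ** (a_gr - b_gr * m)
        n_hi = 10.0 ** (a_gr - b_gr * (m + dm))
        if pga(m + dm / 2.0, R) > pga_threshold:
            rate += n_lo - n_hi
        m += dm
    return rate

def prob_exceedance(pga_threshold, years=50.0):
    # Poisson occurrence model over the exposure period
    return 1.0 - math.exp(-years * annual_exceedance_rate(pga_threshold))

p_weak, p_strong = prob_exceedance(0.05), prob_exceedance(0.3)
print(p_weak, p_strong)
```

Sweeping the threshold produces the site's hazard curve; a real PSHA additionally integrates over source zones, distances and ground-motion variability, which is what the zonation and attenuation steps in the abstract supply.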
Liu, Yonghe; Feng, Jinming; Liu, Xiu; Zhao, Yadi
2017-12-01
Statistical downscaling (SD) is a method that acquires the local information required for hydrological impact assessment from large-scale atmospheric variables. Very few statistical and deterministic downscaling models for daily precipitation have been developed for local sites influenced by the East Asian monsoon. In this study, SD models were constructed by selecting the best predictors and using generalized linear models (GLMs) for Feixian, a site in the Yishu River Basin and Shandong Province. By calculating and mapping Spearman rank correlation coefficients between the gridded standardized values of five large-scale variables and daily observed precipitation, different cyclonic circulation patterns were found for monsoonal precipitation in summer (June-September) and winter (November-December and January-March); the values of the gridded boxes with the highest absolute correlations with observed precipitation were selected as predictors. Data for predictors and predictands covered the period 1979-2015, and different calibration and validation periods were used when fitting and validating the models. Meanwhile, the bootstrap method was also used to fit the GLM. All of these thorough validations indicated that the models were robust and not sensitive to different samples or different periods. Pearson's correlations between downscaled and observed precipitation (logarithmically transformed) on a daily scale reached 0.54-0.57 in summer and 0.56-0.61 in winter, and the Nash-Sutcliffe efficiency between downscaled and observed precipitation reached 0.1 in summer and 0.41 in winter. The downscaled precipitation partially reflected exact variations in winter and main trends in summer for total interannual precipitation. For the number of wet days, both winter and summer models were able to reflect interannual variations. Other comparisons were also made in this study. These results demonstrated that when downscaling, it is appropriate to combine a correlation
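The GLM at the heart of such downscaling can be sketched with a logistic regression for wet-day occurrence, fitted by iteratively reweighted least squares. The single synthetic predictor below stands in for the selected large-scale circulation variables; the coefficients are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 4000
x = rng.standard_normal(n)                   # standardized large-scale predictor
true_beta = np.array([-1.0, 2.0])            # intercept, slope (illustrative)
p_true = 1.0 / (1.0 + np.exp(-(true_beta[0] + true_beta[1] * x)))
wet = (rng.uniform(size=n) < p_true).astype(float)   # wet-day indicator

# Fit the binomial GLM (logit link) by Newton/IRLS iterations.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    W = p * (1.0 - p)                        # IRLS weights
    grad = X.T @ (wet - p)
    hess = X.T @ (X * W[:, None])
    beta = beta + np.linalg.solve(hess, grad)
print(beta)                                  # should be close to true_beta
```

In a full precipitation downscaling scheme, this occurrence model is usually paired with a second GLM (e.g. a Gamma model with a log link) for wet-day amounts.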
Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias
2007-01-01
We present a procedure developed to detect stochastic and deterministic information contained in empirical time series, useful for characterizing and modelling different aspects of complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. This method allows an optimal analysis of the available information.
Patanarapeelert, K. [Faculty of Science, Department of Mathematics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand); Frank, T.D. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany)]. E-mail: tdfrank@uni-muenster.de; Friedrich, R. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany); Beek, P.J. [Faculty of Human Movement Sciences and Institute for Fundamental and Clinical Human Movement Sciences, Vrije Universiteit, Van der Boechorststraat 9, 1081 BT Amsterdam (Netherlands); Tang, I.M. [Faculty of Science, Department of Physics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand)
2006-12-18
A method is proposed to identify deterministic components of stable and unstable time-delayed systems subjected to noise sources with finite correlation times (colored noise). Both neutral and retarded delay systems are considered. For vanishing correlation times it is shown how to determine their noise amplitudes by minimizing appropriately defined Kullback measures. The method is illustrated by applying it to simulated data from stochastic time-delayed systems representing delay-induced bifurcations, postural sway and ship rolling.
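A simpler, non-delayed analogue of this identification problem is recovering the deterministic drift of a noisy system directly from data by conditional averaging of increments. The Ornstein-Uhlenbeck test signal and bin layout are illustrative; the paper's Kullback-measure machinery for delayed systems goes well beyond this:

```python
import numpy as np

rng = np.random.default_rng(5)
a, sigma, dt, n = 1.0, 0.5, 0.01, 200_000

# Simulate the OU process dx = -a*x dt + sigma dW.
x = np.empty(n)
x[0] = 0.0
noise = sigma * np.sqrt(dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - a * x[i] * dt + noise[i]

# Estimate the drift D1(x) = <x(t+dt) - x(t) | x(t)=x> / dt by binning.
bins = np.linspace(-0.8, 0.8, 17)
centers = 0.5 * (bins[:-1] + bins[1:])
incr = x[1:] - x[:-1]
idx = np.digitize(x[:-1], bins) - 1
drift = np.array([np.mean(incr[idx == k]) / dt for k in range(len(centers))])

slope = np.polyfit(centers, drift, 1)[0]
print(slope)                     # should recover the true drift slope -a
```

The conditional averages trace out the linear deterministic component even though individual increments are noise-dominated; the cited method extends this kind of separation to systems with delays and colored noise.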
Kim, Jong Woo; Woo, Myeong Hyeon; Kim, Jae Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung
2017-01-01
In this study a hybrid Monte Carlo/deterministic method is explained for radiation transport analysis in a global system. The FW-CADIS methodology constructs the weight-window parameters and is useful for most global MC calculations. However, due to the assumption that a particle is scored at a tally, fewer particles are transported to the periphery of mesh tallies. To compensate for this space dependency, we modified a module in the ADVANTG code to add the proposed method. We solved a simple test problem for comparison with results from the FW-CADIS methodology, and it was confirmed that a uniform statistical error was secured as intended. In the future, more practical problems will be added. It might be useful to perform radiation transport analysis using the hybrid Monte Carlo/deterministic method in global transport problems.
Comparison of Monte Carlo method and deterministic method for neutron transport calculation
Mori, Takamasa; Nakagawa, Masayuki
1987-01-01
The report outlines major features of the Monte Carlo method by citing various applications of the method and techniques used in Monte Carlo codes. Major areas of its application include analysis of measurements on fast critical assemblies, nuclear fusion reactor neutronics analysis, criticality safety analysis, evaluation by the VIM code, and shielding calculations. Major techniques used in Monte Carlo codes include the random walk method, geometry representation methods (combinatorial geometry, 1st, 2nd and 4th degree surfaces, and lattice geometry), nuclear data representation, estimation methods (track length, collision, analog (absorption), surface crossing, point), and variance reduction (Russian roulette, splitting, exponential transform, importance sampling, correlated sampling). Major features of the Monte Carlo method are as follows: 1) neutron source distributions and systems of complex geometry can be simulated accurately, 2) physical quantities such as neutron flux in a region, on a surface or at a point can be evaluated, and 3) calculation requires less time. (Nogami, K.)
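Two of the variance-reduction techniques listed, implicit capture (survival weighting) combined with Russian roulette, can be demonstrated on a toy rod-geometry transmission problem. The geometry and cross sections are invented, and the non-analog estimate is checked against an analog simulation of the same problem:

```python
import random

random.seed(13)
SIG_T, SIG_S, L = 1.0, 0.5, 3.0      # total/scattering cross sections, rod length

def analog(n):
    hits = 0
    for _ in range(n):
        x, u = 0.0, 1.0              # position, direction (+1/-1)
        while True:
            x += u * random.expovariate(SIG_T)
            if x >= L:
                hits += 1            # transmitted
                break
            if x <= 0.0:
                break                # leaked out the near face
            if random.random() > SIG_S / SIG_T:
                break                # absorbed
            u = random.choice((1.0, -1.0))   # isotropic rod scattering
    return hits / n

def implicit_capture(n, w_cut=0.05):
    score = 0.0
    for _ in range(n):
        x, u, w = 0.0, 1.0, 1.0
        while True:
            x += u * random.expovariate(SIG_T)
            if x >= L:
                score += w           # tally the surviving weight
                break
            if x <= 0.0:
                break
            w *= SIG_S / SIG_T       # survival weighting instead of absorption
            if w < w_cut:            # Russian roulette on low-weight histories
                if random.random() < 0.5:
                    break
                w *= 2.0             # survivors are fairly re-weighted
            u = random.choice((1.0, -1.0))
    return score / n

t1, t2 = analog(40000), implicit_capture(40000)
print(t1, t2)
```

Both estimators are unbiased for the same transmission probability; the non-analog version keeps every history alive until roulette terminates it fairly, which is the mechanism that reduces variance in deep-penetration problems.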
Kullinger, Merit; Wesström, Jan; Kieler, Helle; Skalkidou, Alkistis
2017-01-01
Gestational age is estimated by ultrasound using fetal size as a proxy for age, although variance in early growth affects reliability. The aim of this study was to identify characteristics associated with discrepancies between last menstrual period-based (EDD-LMP) and ultrasound-based (EDD-US) estimated delivery dates. We identified all singleton births (n = 1 201 679) recorded in the Swedish Medical Birth Register in 1995-2010, to assess the association between maternal/fetal characteristics and large negative and large positive discrepancies (EDD-LMP earlier than EDD-US, at or below the 10th percentile of the discrepancy distribution, vs. EDD-LMP later than EDD-US, at or above the 90th percentile). Analyses were adjusted for age, parity, height, body mass index, smoking, and employment status. Women with a body mass index >40 kg/m² had the highest odds for large negative discrepancies (-9 to -20 days) [odds ratio (OR) 2.16, 95% CI 2.01-2.33]. Other factors associated with large negative discrepancies were: diabetes, young maternal age, multiparity, body mass index between 30 and 39.9 kg/m² or +1 SD), and unemployment. Several maternal and fetal characteristics were associated with discrepancies between dating methods. Systematic associations of discrepancies with maternal height, fetal sex, and partly obesity, may reflect an influence on the precision of the ultrasound estimate due to variance in early growth. © 2016 The Authors. Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
Pinto João
2011-08-01
Full Text Available Abstract Background Anopheles gambiae M and S molecular forms, the major malaria vectors in the Afro-tropical region, are undergoing a process of ecological diversification and adaptive lineage splitting, which is affecting malaria transmission and vector control strategies in West Africa. These two incipient species are defined on the basis of single-nucleotide differences in the IGS and ITS regions of multicopy rDNA located on the X-chromosome. A number of PCR and PCR-RFLP approaches based on form-specific SNPs in the IGS region are used for M and S identification. Moreover, a PCR method to detect the M-specific insertion of a short interspersed transposable element (SINE200) has recently been introduced as an alternative identification approach. However, a large-scale comparative analysis of four widely used PCR or PCR-RFLP genotyping methods for M and S identification had never been carried out to evaluate whether they could be used interchangeably, as commonly assumed. Results The genotyping of more than 400 A. gambiae specimens from nine African countries, and the sequencing of the IGS amplicon of 115 of them, highlighted discrepancies among results obtained by the different approaches due to different kinds of biases, which may result in an overestimation of M/S putative hybrids, as follows: (i) incorrect matching of the M- and S-specific primers used in the allele-specific PCR approach; (ii) presence of polymorphisms in the recognition sequence of the restriction enzymes used in the PCR-RFLP approaches; (iii) incomplete cleavage during the restriction reactions; (iv) presence of different copy numbers of M- and S-specific IGS arrays in single individuals in areas of secondary contact between the two forms. Conclusions The results reveal that the PCR and PCR-RFLP approaches most commonly utilized to identify A. gambiae M and S forms are not fully interchangeable as usually assumed, and highlight limits of the actual definition of the two molecular forms, which might
Jinaphanh, A.
2012-01-01
Monte Carlo criticality calculation allows estimation of the effective multiplication factor (k-eff) as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high-burnup profiles, complete reactor cores, ...) may induce biased estimates of k-eff or reaction rates. In order to improve the robustness of iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is altered using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to initialization in an output series, applied here to k-eff and Shannon entropy. It relies on modeling stationary series by a first-order autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to any output of an iterative Monte Carlo calculation. The methods developed in this thesis are tested on different test cases. (author)
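The transient-suppression idea can be sketched with a simpler published heuristic, the Marginal Standard Error Rule (MSER), standing in here for the thesis's AR(1)/Student-bridge test, which is more involved: pick the truncation point that minimizes the squared standard error of the remaining tail mean.

```python
def mser_truncation(series):
    """Marginal Standard Error Rule: return the truncation point d that
    minimizes sum((x_i - tail_mean)^2) / m^2 over the tail of length m.
    A simplified stand-in for the AR(1)/Student-bridge test; both aim to
    discard the initialization transient of an iterative output series."""
    n = len(series)
    best_d, best_score = 0, float("inf")
    for d in range(0, n // 2):             # search at most half the series
        tail = series[d:]
        m = len(tail)
        mean = sum(tail) / m
        score = sum((v - mean) ** 2 for v in tail) / m ** 2
        if score < best_score:
            best_d, best_score = d, score
    return best_d
```

Applied to a k-eff or Shannon-entropy series, the cycles before the returned index would be discarded as inactive (burn-in) cycles.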
Terry, W.K.; Gougar, H.D.; Ougouag, A.M.
2002-01-01
A new deterministic method has been developed for the neutronics analysis of a pebble-bed reactor (PBR). The method accounts for the flow of pebbles explicitly and couples the flow to the neutronics. The method allows modeling of once-through cycles as well as cycles in which pebbles are recirculated through the core an arbitrary number of times. This new work is distinguished from older methods by the systematically semi-analytical approach it takes. In particular, whereas older methods use the finite-difference approach (or an equivalent one) for the discretization and the solution of the burnup equation, the present work integrates the relevant differential equation analytically in discrete and complementary sub-domains of the reactor. Like some of the finite-difference codes, the new method obtains the asymptotic fuel-loading pattern directly, without modeling any intermediate loading pattern. This is a significant advantage for the design and optimization of the asymptotic fuel-loading pattern. The new method is capable of modeling directly both the once-through-then-out fuel cycle and the pebble recirculating fuel cycle. Although it currently includes a finite-difference neutronics solver, the new method has been implemented into a modular code that incorporates the framework for the future coupling to an efficient solver such as a nodal method and to modern cross section preparation capabilities. In its current state, the deterministic method presented here is capable of quick and efficient design and optimization calculations for the in-core PBR fuel cycle. The method can also be used as a practical 'scoping' tool. It could, for example, be applied to determine the potential of the PBR for resisting nuclear-weapons proliferation and to optimize proliferation-resistant features. However, the purpose of this paper is to show that the method itself is viable. Refinements to the code are under way, with the objective of producing a powerful reactor physics
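The contrast drawn above between analytic integration and finite differencing of the burnup equation can be illustrated on the simplest depletion model, dN/dt = -sigma*phi*N over one sub-domain. This is a deliberately reduced stand-in for the actual coupled pebble-flow equations, with made-up coefficients.

```python
import math

def burnup_exact(n0, sigma_phi, t):
    """Analytic solution of dN/dt = -sigma*phi*N over one sub-domain,
    as in the semi-analytical approach: N(t) = N0 * exp(-sigma*phi*t)."""
    return n0 * math.exp(-sigma_phi * t)

def burnup_euler(n0, sigma_phi, t, steps):
    """Explicit finite-difference integration of the same equation,
    representative of the older discretized approach."""
    n, dt = n0, t / steps
    for _ in range(steps):
        n -= sigma_phi * n * dt
    return n
```

The analytic form is exact for any sub-domain size, whereas the finite-difference result only approaches it as the step count grows.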
Lyons, I.; Furniss, D.; Blandford, A.; Chumbley, G.; Iacovides, I.; Wei, L.; Cox, A.; Mayer, A.; Vos, J.; Galal-Edeen, G. H.; Schnock, K. O.; Dykes, P. C.; Bates, D. W.; Franklin, B. D.
2018-01-01
INTRODUCTION: Intravenous medication administration has traditionally been regarded as error prone, with high potential for harm. A recent US multisite study revealed few potentially harmful errors despite a high overall error rate. However, there is limited evidence about infusion practices in England and how they relate to prevalence and types of error. OBJECTIVES: To determine the prevalence, types and severity of errors and discrepancies in infusion administration in English hospitals, an...
Stephenson, C L; Harris, C A; Clarke, R
2018-02-01
Use of glyphosate in crop production can lead to residues of the active substance and related metabolites in food. Glyphosate has never been considered acutely toxic; however, in 2015 the European Food Safety Authority (EFSA) proposed an acute reference dose (ARfD). This differs from the Joint FAO/WHO Meeting on Pesticide Residues (JMPR), which in 2016, in line with its existing position, concluded that an ARfD was not necessary for glyphosate. This paper makes a comprehensive assessment of short-term dietary exposure to glyphosate from potentially treated crops grown in the EU and imported third-country food sources. European Union and global deterministic models were used to make estimates of short-term dietary exposure (generally defined as up to 24 h). Estimates were refined using food-processing information, residue monitoring data, national dietary exposure models, and basic probabilistic approaches to estimating dietary exposure. Calculated exposure levels were compared to the ARfD, considered to be the amount of a substance that can be consumed in a single meal, or 24-h period, without appreciable health risk. Estimated acute dietary intakes were below the ARfD; probabilistic exposure estimates showed that acute intake exceeded 10% of the ARfD on no person-days, even under the pessimistic scenario.
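A deterministic short-term intake estimate of the kind described above can be sketched in the style of the IESTI formula (large portion x highest residue / body weight). The portion size, residue level, and body weight below are hypothetical illustration values, not figures from the paper; the 0.5 mg/kg bw ARfD is EFSA's 2015 proposal for glyphosate.

```python
def short_term_intake(large_portion_g, highest_residue_mg_per_kg, body_weight_kg):
    """Simplified IESTI-style estimate: mg of residue ingested via a
    single large portion, per kg body weight (mg/kg bw)."""
    return (large_portion_g / 1000.0) * highest_residue_mg_per_kg / body_weight_kg

# hypothetical inputs for illustration only
intake = short_term_intake(large_portion_g=400,
                           highest_residue_mg_per_kg=2.0,
                           body_weight_kg=60)
arfd = 0.5                       # EFSA's proposed ARfD, mg/kg bw
percent_arfd = 100 * intake / arfd
```

The comparison `percent_arfd` against 100 is the deterministic screening step; the paper refines such estimates with processing factors, monitoring data, and probabilistic person-day sampling.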
Karriem, Z.; Ivanov, K.; Zamonsky, O.
2011-01-01
This paper presents work that has been performed to develop an integrated Monte Carlo-deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology will be based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle transport code MCNP5. Important initial developments pertaining to ray tracing and the development of an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general-geometry transport problems. The essential developments presented are the use of MCNP as a geometry-construction and ray-tracing tool for the MOC, verification of the ray-tracing indexing scheme developed to represent the MCNP geometry in the MOC, and verification of the prototype 2-D MOC flux solver. (author)
Ahmed Kibria
2015-01-01
Full Text Available The reliability modeling of a module in a turbine engine requires knowledge of its failure rate, which can be estimated by identifying statistical distributions describing the percentage of failures per component within the turbine module. The correct definition of the failure statistical behavior per component is highly dependent on the engineer's skill and may present significant discrepancies with respect to the historical data. There is no formal methodology to approach this problem, and a large number of labor hours are spent trying to reduce the discrepancy by manually adjusting the distributions' parameters. This paper addresses this problem and provides a simulation-based optimization method for the minimization of the discrepancy between the simulated and the historical percentage of failures for turbine engine components. The proposed methodology optimizes the parameter values of the components' failure statistical distributions within the components' likelihood confidence bounds. A complete test of the proposed method is performed on a turbine engine case study. The method can be considered a decision-making tool for maintenance, repair, and overhaul companies and will potentially reduce the cost of labor associated with finding the appropriate values of the distribution parameters for each component/failure mode in the model, and increase the accuracy in the prediction of the mean time to failure (MTTF).
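The core loop of such a simulation-based optimization can be sketched as a bounded one-dimensional search over a distribution parameter. For brevity the "simulator" is stubbed by the analytic Weibull CDF, and the search bounds stand in for the likelihood confidence bounds described in the paper; a real application would replace both with the engine simulation and fitted bounds.

```python
import math

def weibull_failure_pct(shape, scale, horizon):
    """Percentage of components failing before `horizon` under a
    Weibull life model (analytic CDF standing in for a simulation run)."""
    return 100.0 * (1.0 - math.exp(-(horizon / scale) ** shape))

def fit_scale(shape, historical_pct, horizon, bounds, n_grid=2000):
    """Grid search for the scale parameter, restricted to its confidence
    bounds, minimizing the discrepancy between simulated and historical
    failure percentages."""
    lo, hi = bounds
    return min((lo + (hi - lo) * i / n_grid for i in range(n_grid + 1)),
               key=lambda s: abs(weibull_failure_pct(shape, s, horizon)
                                 - historical_pct))
```

With several components and failure modes, the same discrepancy measure would be summed across components and handed to a multivariate optimizer.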
Distance Between the Malleoli and the Ground: A New Clinical Method to Measure Leg-Length Discrepancy.
Aguilar, Estela Gomez; Domínguez, Águeda Gómez; Peña-Algaba, Carolina; Castillo-López, José M
2017-03-01
The aim of this work is to introduce a useful method for the clinical diagnosis of leg-length inequality: the distance between the malleoli and the ground (DMG). A cross-sectional observational study was performed on 17 patients with leg-length discrepancy. Leg-length inequality was determined with different clinical methods: with a tape measure in a supine position from the anterior superior iliac spine (ASIS) to the internal and external malleoli, as the difference between the iliac crests when standing (pelvimeter), and as asymmetry between ASISs (PALpation Meter [PALM]; A&D Medical Products Healthcare, San Jose, California). The Foot Posture Index (FPI) and the navicular drop test were also used. The DMG with Perthes rule (perpendicular to the foot when standing), the distance between the internal malleolus and the ground (DIMG), and the distance between the external malleolus and the ground were designed by the authors. The DIMG is directly related to the traditional ASIS-external malleolus measurement (P = .003), the FPI (P = .010), and the navicular drop test. The DMG is useful for diagnosing leg-length discrepancy and is related to the ASIS-external malleolus measurement. The DIMG is significantly inversely proportional to the degree of pronation according to the FPI. Conversely, determination of leg-length discrepancy with a tape measure from the ASIS to the malleoli cannot be performed interchangeably at the level of the internal or external malleolus.
Carlton Jones, A.L.; Roddie, M.E.
2016-01-01
Aim: To assess the effect on radiologist participation in learning from discrepancy meetings (LDMs) in a multisite radiology department by establishing virtual LDMs using OsiriX (Pixmeo). Materials and methods: Sets of anonymised discrepancy cases were added to an OsiriX database available for viewing on iMacs in all radiology reporting rooms. Radiologists were given a 3-week period to review the cases and send their feedback to the LDM convenor. Group learning points and consensus feedback were added to each case before it was moved to a permanent digital LDM library. Participation was recorded and compared with that from the previous 4 years of conventional LDMs. Radiologist feedback comparing the two types of LDM was collected using an anonymous online questionnaire. Results: Numbers of radiologists attending increased significantly from a mean of 12±2.9 for the conventional LDM to 32.7±7 for the virtual LDM (p<0.0001) and the percentage of radiologists achieving the UK standard of participation in at least 50% of LDMs annually rose from an average of 18% to 68%. The number of cases submitted per meeting rose significantly from an average of 11.1±3 for conventional LDMs to 15.9±5.9 for virtual LDMs (p<0.0097). Analysis of 35 returned questionnaires showed that radiologists welcomed being able to review cases at a time and place of their choosing and at their own pace. Conclusion: Introduction of virtual LDMs in a multisite radiology department improved radiologist participation in shared learning from radiological discrepancy and increased the number of submitted cases. - Highlights: • Learning from error is an important way to improve patient safety. • Consultant attendance at learning from discrepancy meetings (LDMs) was persistently poor in a large, multisite Trust. • Introduction of a ‘virtual’ LDM improved consultant participation and increased the number of cases submitted.
Proteus-MOC: A 3D deterministic solver incorporating 2D method of characteristics
Marin-Lafleche, A.; Smith, M. A.; Lee, C.
2013-01-01
A new transport solution methodology was developed by combining the two-dimensional method of characteristics with the discontinuous Galerkin method for the treatment of the axial variable. The method, which can be applied to arbitrary extruded geometries, was implemented in PROTEUS-MOC and includes parallelization in group, angle, plane, and space using a top level GMRES linear algebra solver. Verification tests were performed to show accuracy and stability of the method with the increased number of angular directions and mesh elements. Good scalability with parallelism in angle and axial planes is displayed. (authors)
Olariu, Victor; Manesso, Erica; Peterson, Carsten
2017-06-01
Depicting developmental processes as movements in free energy genetic landscapes is an illustrative tool. However, exploring such landscapes to obtain quantitative or even qualitative predictions is hampered by the lack of free energy functions corresponding to the biochemical Michaelis-Menten or Hill rate equations for the dynamics. Being armed with energy landscapes defined by a network and its interactions would open up the possibility of swiftly identifying cell states and computing optimal paths, including those of cell reprogramming, thereby avoiding exhaustive trial-and-error simulations with rate equations for different parameter sets. It turns out that sigmoidal rate equations do have approximate free energy associations. With this replacement of rate equations, we develop a deterministic method for estimating the free energy surfaces of systems of interacting genes at different noise levels or temperatures. Once such free energy landscape estimates have been established, we adapt a shortest path algorithm to determine optimal routes in the landscapes. We explore the method on three circuits for haematopoiesis and embryonic stem cell development for commitment and reprogramming scenarios and illustrate how the method can be used to determine sequential steps for onsets of external factors, essential for efficient reprogramming.
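Once a free-energy value has been assigned to each state, finding optimal routes reduces to a shortest-path computation. The minimal sketch below runs Dijkstra's algorithm on a 2-D energy grid; the edge weight (penalizing only uphill moves) is one plausible choice for a landscape traversal cost, not necessarily the weighting used in the paper.

```python
import heapq

def optimal_route(energy, start, goal):
    """Dijkstra on a 2D free-energy grid: the cost of stepping into a
    cell is max(dF, 0), so downhill moves are free. Returns the path
    (list of (row, col)) and its total uphill cost."""
    rows, cols = len(energy), len(energy[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                        # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + max(energy[nr][nc] - energy[r][c], 0.0)
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

On a landscape with a high-energy ridge, the route detours through low-energy valleys rather than crossing the barrier, which is the qualitative behavior one wants from a reprogramming-path search.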
Deterministic methods for the relativistic Vlasov-Maxwell equations and the Van Allen belts dynamics
Le Bourdiec, S.
2007-03-01
Artificial satellites operate in a hostile radiation environment, the Van Allen radiation belts, which partly conditions their reliability and their lifespan. In order to protect them, it is necessary to characterize the dynamics of the energetic electrons trapped in these radiation belts. This dynamics is essentially determined by the interactions between the energetic electrons and the existing electromagnetic waves. This work consisted of designing a numerical scheme to solve the equations modelling these interactions: the relativistic Vlasov-Maxwell system of equations. Our choice was directed towards methods of direct integration. We propose three new spectral methods for the momentum discretization: a Galerkin method and two collocation methods, all based on scaled Hermite functions. The scaling factor is chosen to obtain the proper velocity resolution. We present in this thesis the discretization of the one-dimensional Vlasov-Poisson system and the numerical results obtained, and then study possible extensions of the methods to the complete relativistic problem. In order to reduce the computing time, parallelization and optimization of the algorithms were carried out. Finally, we present 1Dx-3Dv (one-dimensional in x, three-dimensional in velocity) computations of Weibel and whistler instabilities with one or two electron species. (author)
Impulse response identification with deterministic inputs using non-parametric methods
Bhargava, U.K.; Kashyap, R.L.; Goodman, D.M.
1985-01-01
This paper addresses the problem of impulse response identification using non-parametric methods. Although the techniques developed herein apply to the truncated, untruncated, and circulant models, we focus on the truncated model, which is useful in certain applications. Two methods of impulse response identification are presented. The first is based on minimization of the C_L statistic, which is an estimate of the mean-square prediction error; the second is a Bayesian approach. For both of these methods, we consider the effects of using either the identity matrix or the Laplacian matrix as a weight on the energy in the impulse response. In addition, we present a method for estimating the effective length of the impulse response. Estimating the length is particularly important in the truncated case. Finally, we develop a method for estimating the noise variance at the output. Often, prior information on the noise variance is not available, and a good estimate is crucial to the success of estimating the impulse response with a non-parametric technique.
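The truncated-model estimate with identity-matrix energy weighting amounts to ridge-regularized least squares on a convolution matrix. The following is a minimal sketch of that variant only; it omits the C_L-based automatic choice of the regularization weight and the Bayesian and Laplacian-weighted alternatives described in the paper.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def estimate_impulse_response(u, y, length, lam=0.1):
    """Ridge estimate of a truncated impulse response h of given length
    from input u and output y: minimizes ||y - U h||^2 + lam * ||h||^2,
    i.e. identity-matrix weighting on the impulse-response energy."""
    n = len(y)
    U = [[u[t - k] if 0 <= t - k < len(u) else 0.0 for k in range(length)]
         for t in range(n)]
    # normal equations: (U'U + lam*I) h = U'y
    UtU = [[sum(U[t][i] * U[t][j] for t in range(n)) + (lam if i == j else 0.0)
            for j in range(length)] for i in range(length)]
    Uty = [sum(U[t][i] * y[t] for t in range(n)) for i in range(length)]
    return solve(UtU, Uty)
```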
Wagner, John C.; Peplow, Douglas E.; Mosher, Scott W.; Evans, Thomas M.
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally two to four orders of magnitude, have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
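The CADIS construction itself is compact: given a forward source q and an adjoint (importance) function phi+ on a mesh, the biased source is q*phi+/R and the weight-window centers are R/phi+, where R is the adjoint-weighted response; a particle sampled from the biased source is then born exactly at its window center. A minimal single-mesh sketch:

```python
def cadis_parameters(source, adjoint):
    """Consistent Adjoint Driven Importance Sampling on a 1-D mesh.
    source  : forward source strength q_i per cell
    adjoint : adjoint (importance) function phi+_i per cell
    Returns the biased source PDF, weight-window centers, and the
    response estimate R = sum_i q_i * phi+_i."""
    R = sum(q * a for q, a in zip(source, adjoint))
    biased_source = [q * a / R for q, a in zip(source, adjoint)]
    # birth weight q_i / q_hat_i = R / phi+_i equals the window center,
    # which is the "consistent" part of CADIS
    ww_centers = [R / a if a > 0 else None for a in adjoint]
    return biased_source, ww_centers, R
```

In the production codes the adjoint comes from a fast Denovo discrete-ordinates run on a 3-D space-energy mesh; the sketch only shows the algebra that turns it into biasing parameters.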
Analysis of natural circulation BWR dynamics with stochastic and deterministic methods
VanderHagen, T.H.; Van Dam, H.; Hoogenboom, J.E.; Kleiss, E.B.J.; Nissen, W.H.M.; Oosterkamp, W.J.
1986-01-01
Reactor kinetic, thermal-hydraulic, and total plant stability of a natural-convection-cooled BWR was studied using noise analysis and by evaluating process responses to control rod steps and to steam-flow control valve steps. An estimate of the fuel thermal time constant and an impression of the recirculation flow response to power variations were obtained. A sophisticated noise analysis method resulted in more insight into the fluctuations of the coolant velocity.
Ghassoun, Jillali; Jehoauni, Abdellatif
2000-01-01
In practice, estimation of the flux from the Fredholm integral equation requires truncation of the Neumann series. The truncation order N must be large in order to obtain a good estimate, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce the time without degrading the quality of the estimate. In previous works, in order to obtain rapid convergence, only weakly diffusing media were considered, which permitted truncating the Neumann series after about 20 terms. But in the most practical shields, such as water, graphite, and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a poor estimate of the flux, so higher orders become necessary for a good estimate. We suggest two simple techniques based on conditional Monte Carlo: a simple density for sampling the steps of the random walk, and a modified stretching-factor density depending on a biasing parameter, which affects the sample vector by stretching or shrinking the original random walk so that the chain ends at a given point of interest. We also obtain a simple empirical formula that gives the neutron flux for a medium characterized only by its scattering probability. The results are compared with the exact analytic solution; we obtain good agreement and a good acceleration of the convergence of the calculations. (author)
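The truncated Neumann series the authors start from can be sketched on a discretized Fredholm equation phi = s + K phi, where powers of the kernel matrix K accumulate successive scattering generations. Convergence slows as the spectral radius of K (roughly, the scattering probability) approaches 1, which is exactly the high-scattering difficulty motivating the conditional Monte Carlo acceleration.

```python
def neumann_flux(K, s, order):
    """Truncated Neumann series phi ~= sum_{n=0}^{order} K^n s for the
    discretized Fredholm equation phi = s + K phi. Each added term is
    one more scattering generation."""
    phi = s[:]
    term = s[:]
    for _ in range(order):
        term = [sum(K[i][j] * term[j] for j in range(len(s)))
                for i in range(len(s))]
        phi = [p + t for p, t in zip(phi, term)]
    return phi
```

For the toy 2x2 kernel in the test below (spectral radius 0.5) the series converges quickly to the exact solution of (I - K) phi = s; a kernel with spectral radius near 1 would need far more terms, mirroring the 20-term limit discussed in the abstract.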
Qu, H; Yu, N; Stephans, K; Xia, P [Cleveland Clinic, Cleveland, OH (United States)
2014-06-01
Purpose: To develop a normalization method to remove discrepancies in ventilation function due to different breathing patterns. Methods: Twenty-five early-stage non-small cell lung cancer patients were included in this study. For each patient, a ten-phase 4D-CT and voluntary maximum inhale and exhale CTs were acquired clinically and retrospectively used for this study. For each patient, two ventilation maps were calculated from voxel-to-voxel CT density variations between two phases of quiet breathing and two phases of extreme breathing. For quiet breathing, the 0% (inhale) and 50% (exhale) phases from 4D-CT were used. An in-house tool was developed to calculate and display the ventilation maps. To enable normalization, the whole lung of each patient was evenly divided into three parts in the longitudinal direction at a coronal image with the maximum lung cross section. The ratio of cumulative ventilation in the top one-third region to the middle one-third region of the lung was calculated for each breathing pattern. Pearson's correlation coefficient was calculated on the ratios of the two breathing patterns for the group. Results: For each patient, the ventilation map from quiet breathing differed from that of extreme breathing. When the cumulative ventilation was normalized to the middle one-third of the lung region for each patient, the normalized ventilation functions from the two breathing patterns were consistent. For this group of patients, the correlation coefficient of the normalized ventilations for the two breathing patterns was 0.76 (p < 0.01), indicating a strong correlation in the ventilation function measured from the two breathing patterns. Conclusion: For each patient, the ventilation map depends on the breathing pattern. Using a regional normalization method, the discrepancy in ventilation function induced by different breathing patterns, and thus different tidal volumes, can be removed.
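The regional normalization step can be sketched directly: with per-voxel ventilation values ordered along the longitudinal axis, take the top-third to middle-third ratio and rescale the map by the middle-third sum. Uniformly doubling every voxel (a crude proxy for a deeper breath) leaves both quantities unchanged, which is the tidal-volume independence the method exploits.

```python
def top_to_middle_ratio(vent):
    """Split per-voxel ventilation (ordered top-to-bottom along the
    longitudinal axis) into three equal regions; return the ratio of
    cumulative ventilation in the top third to the middle third."""
    n = len(vent) // 3
    return sum(vent[:n]) / sum(vent[n:2 * n])

def normalize_to_middle(vent):
    """Scale the map so the middle third sums to 1, removing the
    overall tidal-volume dependence between breathing patterns."""
    n = len(vent) // 3
    m = sum(vent[n:2 * n])
    return [v / m for v in vent]
```

A real implementation works on 3-D voxel arrays masked to the lung; the 1-D list here only illustrates the arithmetic of the ratio and the rescaling.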
Wang, Chunxiang; Watanabe, Naoki; Marui, Hideaki
2013-04-01
The hilly slopes of Mt. Medvednica stretch across the northwestern part of Zagreb City, Croatia, and extend over approximately 180 km2. In this area, landslides, e.g., the Kostanjek and Črešnjevec landslides, have damaged many houses, roads, farmlands, and grasslands. It is therefore necessary to predict potential landslides and to enhance the landslide inventory for hazard mitigation and security management of the local society in this area. We combined a deterministic method and a probabilistic method to assess potential landslides, including their locations, sizes, and sliding surfaces. First, the study area is divided into slope units that have similar topographic and geological characteristics, using the hydrology analysis tool in ArcGIS. Second, a GIS-based modified three-dimensional Hovland's method for slope stability analysis is developed to identify the sliding surface and the corresponding three-dimensional safety factor for each slope unit. Each sliding surface is assumed to be the lower part of an ellipsoid. The direction of inclination of the ellipsoid is taken to be the same as the main dip direction of the slope unit, and the center point of the ellipsoid is randomly set to the center point of a grid cell in the slope unit. The minimum three-dimensional safety factor and the corresponding critical sliding surface are obtained for each slope unit. Third, since a single safety factor value is insufficient to evaluate the stability of a slope unit, the ratio of the number of trial calculations in which the three-dimensional safety factor is less than 1.0 to the total number of trial calculations is defined as the failure probability of the slope unit. If the failure probability is more than 80%, the slope unit is classified as 'unstable', and the landslide hazard can be mapped for the whole study area.
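The failure-probability criterion in the third step is a simple frequency over the trial sliding surfaces; the 80% threshold is the one stated in the abstract. A minimal sketch, taking the trial safety factors as already computed by the 3-D stability analysis:

```python
def failure_probability(safety_factors):
    """Fraction of trial sliding surfaces whose 3-D safety factor is
    below 1.0, as defined for a slope unit in the abstract."""
    unstable = sum(1 for fs in safety_factors if fs < 1.0)
    return unstable / len(safety_factors)

def classify_slope_unit(safety_factors, threshold=0.8):
    """Label a slope unit 'unstable' if its failure probability exceeds
    the 80% threshold used in the study."""
    return "unstable" if failure_probability(safety_factors) > threshold else "stable"
```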
Adams, Marvin L.
2001-01-01
We discuss deterministic transport methods used today in neutronic analysis of nuclear reactors. This discussion is not exhaustive; our goal is to provide an overview of the methods that are most widely used for analyzing light water reactors (LWRs) and that (in our opinion) hold the most promise for the future. The current practice of LWR analysis involves the following steps: 1. Evaluate cross sections from measurements and models. 2. Obtain weighted-average cross sections over dozens to hundreds of energy intervals; the result is a 'fine-group' cross-section set. 3. [Optional] Modify the fine-group set: Further collapse it using information specific to your class of reactors and/or alter parameters so that computations better agree with experiments. The result is a 'many-group library'. 4. Perform pin cell transport calculations (usually one-dimensional cylindrical); use the results to collapse the many-group library to a medium-group set, and/or spatially average the cross sections over the pin cells. 5. Perform assembly-level transport calculations with the medium-group set. It is becoming common practice to use essentially exact geometry (no pin cell homogenization). It may soon become common to skip step 4 and use the many-group library. The output is a library of few-group cross sections, spatially averaged over the assembly, parameterized to cover the full range of operating conditions. 6. Perform full-core calculations with few-group diffusion theory that contains significant homogenizations and limited transport corrections. We discuss steps 4, 5, and 6 and focus mainly on step 5. One cannot review a large topic in a short summary without simplifying reality, omitting important details, and neglecting some methods that deserve attention; for this we apologize in advance. (author)
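Steps 2 and 4 above both rest on flux-weighted group collapse: the coarse-group cross section is sigma_G = sum_g sigma_g * phi_g / sum_g phi_g over the fine groups g belonging to coarse group G. A minimal sketch (the group structure and numbers are invented for illustration):

```python
import numpy as np

def collapse(sigma_fine, flux_fine, group_map):
    """Flux-weighted collapse of fine-group cross sections to coarse groups.

    sigma_fine, flux_fine : per-fine-group cross section and weighting flux
    group_map             : coarse-group index assigned to each fine group
    """
    group_map = np.asarray(group_map)
    n_coarse = group_map.max() + 1
    sigma_coarse = np.zeros(n_coarse)
    for G in range(n_coarse):
        sel = group_map == G
        sigma_coarse[G] = (sigma_fine[sel] * flux_fine[sel]).sum() / flux_fine[sel].sum()
    return sigma_coarse

sigma = np.array([10.0, 2.0, 1.0, 0.5])   # four fine groups
flux = np.array([1.0, 3.0, 2.0, 2.0])     # weighting spectrum
coarse = collapse(sigma, flux, [0, 0, 1, 1])  # collapse to two coarse groups
```

The same formula applies whether the weighting flux comes from a pin cell calculation (step 4) or an assembly calculation (step 5), only the spatial and spectral detail of the weight changes.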
Cinicioglu Esma Nur
2014-01-01
Full Text Available Dempster−Shafer belief function theory can address a wider class of uncertainty than standard probability theory does, and this fact appeals to researchers in the operations research community looking for potential application areas. However, the lack of a decision theory for belief functions gives rise to the need to use probability transformation methods for decision making. For the representation of statistical evidence, the class of consonant belief functions is used, which is not closed under Dempster's rule of combination but is closed under Walley's rule of combination. In this research, it is shown that the outcomes obtained using Dempster's and Walley's rules result in different probability distributions when the pignistic transformation is used, but in the same probability distribution when the plausibility transformation is used. This result shows that the choice of the combination rule and probability transformation method may have a significant effect on decision making, since it may change the decision alternative selected. The result is illustrated via an example of missile type identification.
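The two transformations at issue can be stated compactly: the pignistic transformation spreads each focal mass uniformly over its elements, BetP(x) = sum over focal sets A containing x of m(A)/|A|, while the plausibility transformation normalizes the singleton plausibilities Pl(x) = sum over A containing x of m(A). A sketch on a small consonant (nested) body of evidence; the frame and masses are invented for illustration, not the paper's missile example:

```python
from itertools import chain

def pignistic(m):
    """BetP(x) = sum over focal sets A containing x of m(A)/|A|."""
    theta = sorted(set(chain.from_iterable(m)))
    return {x: sum(mass / len(A) for A, mass in m.items() if x in A) for x in theta}

def plausibility_transform(m):
    """Normalized singleton plausibilities Pl(x) = sum_{A containing x} m(A)."""
    theta = sorted(set(chain.from_iterable(m)))
    pl = {x: sum(mass for A, mass in m.items() if x in A) for x in theta}
    total = sum(pl.values())
    return {x: v / total for x, v in pl.items()}

# consonant body of evidence over the frame {a, b, c}: nested focal sets
m = {frozenset('a'): 0.5, frozenset('ab'): 0.3, frozenset('abc'): 0.2}
bet = pignistic(m)
pl_p = plausibility_transform(m)
```

The two transforms generally yield different distributions from the same belief function, which is why the choice of transform (and of combination rule) can change the selected decision alternative.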
Mavris, Dimitri N.; Schutte, Jeff S.
2016-01-01
This report documents work done by the Aerospace Systems Design Lab (ASDL) at the Georgia Institute of Technology, Daniel Guggenheim School of Aerospace Engineering for the National Aeronautics and Space Administration, Aeronautics Research Mission Directorate, Integrated System Research Program, Environmentally Responsible Aviation (ERA) Project. This report was prepared under contract NNL12AA12C, "Application of Deterministic and Probabilistic System Design Methods and Enhancement of Conceptual Design Tools for ERA Project". The research within this report addressed the Environmentally Responsible Aviation (ERA) project goal stated in the NRA solicitation "to advance vehicle concepts and technologies that can simultaneously reduce fuel burn, noise, and emissions." To identify technology and vehicle solutions that simultaneously meet these three metrics requires the use of system-level analysis with the appropriate level of fidelity to quantify feasibility, benefits and degradations, and associated risk. In order to perform the system-level analysis, the Environmental Design Space (EDS) [Kirby 2008, Schutte 2012a] environment developed by ASDL was used to model both conventional and unconventional configurations as well as to assess technologies from the ERA and N+2 timeframe portfolios. A well-established system design approach was used to perform aircraft conceptual design studies, including technology trade studies to identify technology portfolios capable of accomplishing the ERA project goal and to obtain accurate tradeoffs between performance, noise, and emissions. The ERA goal, shown in Figure 1, is to simultaneously achieve the N+2 benefits of a cumulative noise margin of 42 EPNdB relative to stage 4, a 75 percent reduction in LTO NOx emissions relative to CAEP 6 and a 50 percent reduction in fuel burn relative to the 2005 best in class aircraft. There were five research tasks associated with this research: 1) identify technology collectors, 2) model
Růžička, V.; Malíková, Lucie; Seitl, Stanislav
2017-01-01
Roč. 11, č. 42 (2017), s. 128-135 ISSN 1971-8993 R&D Projects: GA ČR GA17-01589S Institutional support: RVO:68081723 Keywords : Over-deterministic * Fracture mechanics * Rounding numbers * Stress field * Williams’ expansion Subject RIV: JL - Materials Fatigue, Friction Mechanics OBOR OECD: Audio engineering, reliability analysis
Deterministic Graphical Games Revisited
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm …
2004-09-01
The efficient feedback of operating experience (OE) is a valuable source of information for improving the safety and reliability of nuclear power plants (NPPs). It is therefore essential to collect information on abnormal events from both internal and external sources. Internal operating experience is analysed to obtain a complete understanding of an event and of its safety implications. Corrective or improvement measures may then be developed, prioritized and implemented in the plant if considered appropriate. Information from external events may also be analysed in order to learn lessons from others' experience and prevent similar occurrences at one's own plant. The traditional ways of investigating operational events have been predominantly qualitative. In recent years, a PSA-based method called probabilistic precursor event analysis has been developed, used and applied on a significant scale in many places for a number of plants. The method enables a quantitative estimation of the safety significance of operational events to be incorporated. The purpose of this report is to outline a synergistic process that makes more effective use of operating experience event information by combining the insights and knowledge gained from both approaches, traditional deterministic event investigation and PSA-based event analysis. The PSA-based view on operational events and PSA-based event analysis can support the process of operational event analysis at the following stages of the operational event investigation: (1) Initial screening stage. (It introduces an element of quantitative analysis into the selection process. Quantitative analysis of the safety significance of nuclear plant events can be a very useful measure when it comes to selecting internal and external operating experience information for its relevance.) (2) In-depth analysis. (PSA-based event evaluation provides a quantitative measure for judging the significance of operational events, contributors to
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.
2014-02-01
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address their respective sources of uncertainty, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
Deterministic Graphical Games Revisited
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.
A panorama of discrepancy theory
Srivastav, Anand; Travaglini, Giancarlo
2014-01-01
Discrepancy theory concerns the problem of replacing a continuous object with a discrete sampling. Discrepancy theory is currently at a crossroads between number theory, combinatorics, Fourier analysis, algorithms and complexity, probability theory and numerical analysis. There are several excellent books on discrepancy theory, but perhaps none of them shows the present variety of points of view and applications, covering the areas "Classical and Geometric Discrepancy Theory", "Combinatorial Discrepancy Theory" and "Applications and Constructions". Our book consists of several chapters, written by experts in the specific areas and focused on the different aspects of the theory. The book should also be an invitation to researchers and students to find a quick way into the different methods and to motivate interdisciplinary research.
Li, M
1998-08-01
In this thesis, two methods for solving the multigroup Boltzmann equation have been studied: the interface-current method and the Monte Carlo method. A new version of the interface-current (IC) method has been developed in the TDT code at SERMA, in which the interface currents are represented by piecewise-constant functions in solid angle. The convergence of this method to the collision probability (CP) method has been tested. Since the tracking technique is used for both the IC and CP methods, it is necessary to normalize the collision probabilities obtained by this technique. Several methods for this purpose have been studied and implemented in our code; we have compared their performances and chosen the best one as the standard choice. The transfer matrix treatment has been a long-standing difficulty for the multigroup Monte Carlo method: when the cross sections are converted into multigroup form, important negative parts appear in the angular transfer laws represented by low-order Legendre polynomials. Several methods based on the preservation of the first moments, such as the discrete-angles methods and the equally-probable step function method, have been studied and implemented in the TRIMARAN-II code. Since none of these methods was satisfactory, a new method, the non-equally-probable step function method, has been proposed and implemented in our code. These methods have been compared in several respects: the preservation of the required moments, the calculation of a criticality problem and the calculation of a neutron-transfer-in-water problem. The results have shown that the new method is the best in all these comparisons, and we have proposed that it become the standard choice for the multigroup transfer matrix. (author) 76 refs.
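The equally-probable step function idea replaces a tabulated angular transfer law by N bins of equal probability, a representation that by construction can never go negative. A rough sketch of computing such bins from a tabulated density; the density here is invented for illustration, and this is not the TRIMARAN-II implementation:

```python
import numpy as np

def equiprobable_bins(pdf, mu, n_bins):
    """Bin edges on [-1, 1] such that each bin carries equal probability
    under the tabulated angular density pdf(mu)."""
    # cumulative distribution by trapezoidal integration, then normalization
    cdf = np.concatenate(([0.0], np.cumsum(np.diff(mu) * (pdf[1:] + pdf[:-1]) / 2)))
    cdf /= cdf[-1]
    targets = np.linspace(0.0, 1.0, n_bins + 1)
    # invert the (monotone) CDF by linear interpolation
    return np.interp(targets, cdf, mu)

mu = np.linspace(-1.0, 1.0, 201)
pdf = 1.0 + 0.5 * mu          # mildly forward-peaked, everywhere non-negative law
edges = equiprobable_bins(pdf, mu, 4)
```

Sampling is then trivial (pick a bin uniformly, then a value inside it), and the first moments can be checked against the original law to judge the quality of the approximation.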
Deterministic uncertainty analysis
Worley, B.A.
1987-12-01
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions, compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
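The core of the derivative-based idea can be seen in a first-order sketch: with a gradient from a single reference run, the surrogate R = f(x0) + grad . (x - x0) gives the response mean and variance directly from the input covariance, whereas the statistical route needs many sampled runs. The response function and numbers below are toys, not the paper's borehole model:

```python
import numpy as np

def dua_moments(grad, cov_x, f_ref):
    """First-order mean and variance of a response R = f(x0) + grad.(x - x0)
    for inputs with covariance cov_x about the reference point x0."""
    return f_ref, grad @ cov_x @ grad

# toy response, one reference run, and its analytic (adjoint-style) gradient
f = lambda x: 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[0] * x[1]
x0 = np.array([1.0, 2.0])
grad = np.array([3.0 + 0.5 * x0[1], -2.0 + 0.5 * x0[0]])  # gradient of f at x0
cov = np.diag([0.04, 0.09])                               # input variances
mean_dua, var_dua = dua_moments(grad, cov, f(x0))

# crude Monte Carlo check: many sampled runs vs. one reference run for DUA
rng = np.random.default_rng(1)
samples = x0 + rng.normal(0.0, [0.2, 0.3], size=(200_000, 2))
mc = f(samples.T)
```

For this mildly nonlinear toy the first-order variance agrees with the sampled variance to within the small second-order term, illustrating why a couple of runs with derivatives can stand in for many statistical runs.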
Ahmed Kibria; Krystel K. Castillo-Villar; Harry Millwater
2015-01-01
The reliability modeling of a module in a turbine engine requires knowledge of its failure rate, which can be estimated by identifying statistical distributions describing the percentage of failure per component within the turbine module. The correct definition of the failure statistical behavior per component is highly dependent on the engineer's skills and may present significant discrepancies with respect to the historical data. There is no formal methodology to approach this problem and a l...
Local deterministic theory surviving the violation of Bell's inequalities
Cormier-Delanoue, C.
1984-01-01
Bell's theorem which asserts that no deterministic theory with hidden variables can give the same predictions as quantum theory, is questioned. Such a deterministic theory is presented and carefully applied to real experiments performed on pairs of correlated photons, derived from the EPR thought experiment. The ensuing predictions violate Bell's inequalities just as quantum mechanics does, and it is further shown that this discrepancy originates in the very nature of radiations. Complete locality is therefore restored while separability remains more limited [fr
Cai, Li
2014-01-01
In the framework of Generation IV reactor neutronics research, new core calculation tools are implemented in the APOLLO3 code system for the deterministic part. These calculation methods are based on the discretization of nuclear energy data (named multi-group, and generally produced by deterministic codes) and should be validated and qualified against Monte Carlo reference calculations. This thesis aims to develop an alternative technique for producing multi-group nuclear properties with a Monte Carlo code (TRIPOLI-4). First, after testing the existing homogenization and condensation functionalities with the better precision obtainable nowadays, some inconsistencies are revealed. Several new multi-group parameter estimators are developed and validated for the TRIPOLI-4 code with the aid of the code itself, since it can use multi-group constants in a core calculation. Second, the scattering anisotropy effect, which is necessary for handling the neutron leakage case, is studied. A correction technique concerning the diagonal line of the first-order moment of the scattering matrix is proposed. This is named the IGSC technique and is based on the use of an approximate current introduced by Todorova. An improvement of the IGSC technique is then presented for geometries with strong heterogeneity. This improvement uses a more accurate current quantity, its projection on the abscissa X; the latter current represents the real situation better but is limited to 1D geometries. Finally, a B1 leakage model is implemented in the TRIPOLI-4 code for generating multi-group cross sections with a fundamental-mode-based critical spectrum. This leakage model is analyzed and validated rigorously by comparison with other codes, Serpent and ECCO, as well as with an analytical case. The whole development work introduced in TRIPOLI-4 allows producing multi-group constants which can then be used in the core
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.
2016-12-01
Applications of the self-potential method in the fields of hydrogeology and environmental sciences have seen significant developments during the last two decades, with strong use in the identification of groundwater flows. Although only a few authors deal with the forward problem's solution, especially in the geophysics literature, different inversion procedures are currently being developed, but in most cases they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem based on the finite element method, using St. Venant's principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Then, two simple aquifer models were generated with specific boundary conditions, and head potentials, velocity fields and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilizing operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov random fields (MRF) was constructed. For all implemented methods, the results of the direct and inverse models were contrasted in two ways: 1) the shape and distribution of the vector field, and 2) the magnitude histogram. Finally, it was concluded that inversion procedures improve when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than confined ones. McMC has restricted applications and requires a lot of information (particularly in potential fields), while MRF has a remarkable response, especially when dealing with confined aquifers.
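The deterministic branch, Tikhonov regularization, minimizes ||Ax - b||^2 + lam^2 ||Lx||^2. A generic sketch with a zeroth-order (identity) stabilizer on a toy ill-conditioned problem; the study's mesh-adapted stabilizing operator and finite element forward operator are not reproduced here:

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations."""
    n = A.shape[1]
    if L is None:
        L = np.eye(n)                    # zeroth-order (identity) stabilizer
    return np.linalg.solve(A.T @ A + lam ** 2 * (L.T @ L), A.T @ b)

# severely ill-conditioned toy forward operator (Gaussian smoothing kernel)
n = 20
A = np.array([[np.exp(-0.1 * (i - j) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0.0, np.pi, n))
rng = np.random.default_rng(2)
b = A @ x_true + rng.normal(0.0, 1e-3, n)   # data with small noise

x_reg = tikhonov(A, b, lam=1e-2)            # regularized inversion
x_naive = np.linalg.solve(A, b)             # unregularized: noise is amplified
err_reg = np.linalg.norm(x_reg - x_true)
err_naive = np.linalg.norm(x_naive - x_true)
```

Even tiny data noise destroys the naive inverse of a smoothing operator, while the regularized solution stays close to the true source, which is the practical reason a stabilizer is required in self-potential inversion.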
Artioli, Carlo; Sarotto, Massimo; Grasso, Giacomo; Krepel, Jiri
2009-01-01
neutronic analysis, adopting both deterministic and stochastic approaches, has been carried out. It becomes crucial indeed to estimate accurately the self-shielding phenomenon of the innovative FARs in order to achieve the aimed performances (a reactivity worth of about 3000 pcm for scram). (author)
Risk-based and deterministic regulation
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect public safety and health from undue risk. The deterministic method is one in which performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need to use risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose
Maria Inês Moura Freixo
2004-02-01
Full Text Available Mycobacterium tuberculosis strains resistant to streptomycin (SM), isoniazid (INH), and/or rifampin (RIF), as determined by the conventional Löwenstein-Jensen proportion method (LJPM), were compared with the E test, a minimum inhibitory concentration susceptibility method. Discrepant isolates were further evaluated by BACTEC and by DNA sequence analyses for mutations in the genes most often associated with resistance to these drugs (rpsL, katG, inhA, and rpoB). Preliminary discordant E test results were seen in 75% of isolates resistant to SM and in 11% of those resistant to INH. Discordance improved for these two drugs (63% for SM and none for INH) when isolates were re-tested, but worsened for RIF (30%). Despite good agreement between phenotypic results and sequencing analyses, wild-type profiles were detected in resistant strains, mainly for SM and INH. One should be aware that isolates susceptible according to molecular methods might harbour other mechanisms of resistance. Although the reproducibility of the LJPM susceptibility method has been established, variable E test results for some M. tuberculosis isolates raise questions regarding the test's reproducibility, particularly since E test performance may vary among laboratories despite adherence to recommended protocols. Further studies with larger samples are needed, looking for possible mutations outside the sequenced hot-spot gene regions among discrepant strains.
Le Bourdiec, S
2007-03-15
Artificial satellites operate in a hostile radiation environment, the Van Allen radiation belts, which partly conditions their reliability and lifespan. In order to protect them, it is necessary to characterize the dynamics of the energetic electrons trapped in these radiation belts. This dynamics is essentially determined by the interactions between the energetic electrons and the existing electromagnetic waves. This work consisted of designing a numerical scheme to solve the equations modelling these interactions: the relativistic Vlasov-Maxwell system of equations. Our choice was directed towards methods of direct integration. We propose three new spectral methods for the momentum discretization: a Galerkin method and two collocation methods. All of them are based on scaled Hermite functions, with the scaling factor chosen to obtain the proper velocity resolution. We present in this thesis the discretization of the one-dimensional Vlasov-Poisson system and the numerical results obtained, and then study possible extensions of the methods to the complete relativistic problem. In order to reduce the computing time, parallelization and optimization of the algorithms were carried out. Finally, we present 1Dx-3Dv (one-dimensional in x, three-dimensional in velocity) computations of Weibel and whistler instabilities with one or two electron species. (author)
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives a consistent, reliable and clear-cut answer to the question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge data from two selected gauging stations of a mountain river in southern Poland, the Raba River.
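One standard tool for probing whether a flow series is deterministic is nearest-neighbour prediction in a time-delay embedding: deterministic dynamics predict well, while a shuffled surrogate of the same values does not. A sketch using a logistic map as a stand-in for discharge data; the embedding parameters and series length are arbitrary choices, not those of the study:

```python
import numpy as np

def embed(x, dim, tau):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def onestep_pred_error(x, dim=3, tau=1):
    """Normalized nearest-neighbour one-step prediction error in embedding
    space; values far below 1 suggest deterministic structure."""
    E = embed(x, dim, tau)
    sq_err = []
    for i in range(len(E) - 1):
        d = np.linalg.norm(E - E[i], axis=1)
        d[max(0, i - 5) : i + 6] = np.inf   # exclude temporally close points
        d[-1] = np.inf                      # the neighbour needs a successor
        j = int(np.argmin(d))
        sq_err.append((E[j + 1, -1] - E[i + 1, -1]) ** 2)
    return float(np.mean(sq_err) / np.var(x))

# deterministic chaotic series (logistic map) vs. its random shuffle
x = np.empty(600)
x[0] = 0.4
for t in range(599):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
rng = np.random.default_rng(3)
shuffled = rng.permutation(x)
err_det = onestep_pred_error(x)
err_rand = onestep_pred_error(shuffled)
```

The chaotic series is predictable one step ahead from its neighbours in embedding space, while shuffling, which preserves the value distribution but destroys the dynamics, removes that predictability; surrogate comparisons of this kind underpin determinism tests on river discharge.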
Deterministic Compressed Sensing
2011-11-01
[Front-matter residue: table-of-contents entries (4.3 Digital Communications; 4.4 Group Testing; bounds for deterministic design matrices, all ignoring the O() constants) and a list-of-algorithms entry (1. Iterative Hard Thresholding Algorithm).] Compressed sensing is information-theoretically possible using any (2k, ε)-RIP sensing matrix. The following celebrated results of Candès, Romberg and Tao [54
Deterministic uncertainty analysis
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
1990-01-01
In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether for specific tissues the present dose limits or annual limits of intake based on Q values, are adequate to prevent deterministic effects. (author)
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member for each category. However, their projection skills are significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability in all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
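The contrast between equal and performance-based weighting can be sketched generically: weight each member by its inverse RMSE over a training period, then average. This is a simple stand-in for the paper's WEA_RAC/WEA_Tay schemes, not their exact formulas:

```python
import numpy as np

def weighted_ensemble(members, obs_train):
    """Weight each member by 1/RMSE against the training observations,
    one simple flavour of performance-weighted ensemble averaging."""
    rmse = np.sqrt(((members - obs_train) ** 2).mean(axis=1))
    w = (1.0 / rmse) / (1.0 / rmse).sum()
    return w, members.T @ w      # normalized weights and weighted average

rng = np.random.default_rng(4)
truth = np.sin(np.linspace(0.0, 6.0, 120))
# three members: one skilful, one noisy, one with a systematic bias
members = np.stack([
    truth + rng.normal(0.0, 0.05, 120),
    truth + rng.normal(0.0, 0.40, 120),
    truth + 0.50,
])
w, proj = weighted_ensemble(members, truth)   # in-sample for illustration
err_w = np.sqrt(((proj - truth) ** 2).mean())
err_eq = np.sqrt(((members.mean(axis=0) - truth) ** 2).mean())
```

The skilful member dominates the weights, so the weighted projection beats the equal-weighted mean, which is the qualitative result the paper reports for categories with systematic biases; without bias correction (the EWA_NBC case) the biased member degrades the equal-weighted mean directly.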
Deterministic behavioural models for concurrency
Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn
1993-01-01
This paper offers three candidates for a deterministic, noninterleaving behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets.
A deterministic width function model
C. E. Puente
2003-01-01
Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features, such as their overall shape and texture and the observed power-law scaling of their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
S. Mariani
2005-01-01
Full Text Available Within the scope of the European project Hydroptimet, INTERREG IIIB-MEDOCC programme, a limited-area model (LAM) intercomparison is performed for intense events that caused heavy damage to people and territory. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors useful for giving a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the 'Montserrat-2000' event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For the statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the regions of Catalonia affected by misses and false alarms using the contingency table elements. Moreover, the standard 'eyeball' analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method makes it possible to quantify the spatial shift of the forecast error and to identify the error sources that affected each model's forecasts. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work is needed to support this statement, including verification using a wider observational data set.
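The non-parametric skill scores mentioned are computed from a 2x2 contingency table of forecast/observed exceedances of a rain threshold. A minimal sketch with invented rain-gauge values; the threshold and score selection are illustrative, not the Hydroptimet verification setup:

```python
def contingency_scores(forecast, observed, threshold):
    """2x2 contingency table and common skill scores for a rain threshold."""
    hits = misses = false_alarms = correct_neg = 0
    for f, o in zip(forecast, observed):
        fy, oy = f >= threshold, o >= threshold
        if fy and oy:
            hits += 1
        elif oy:
            misses += 1
        elif fy:
            false_alarms += 1
        else:
            correct_neg += 1
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return {"POD": pod, "FAR": far, "bias": bias}

# hypothetical 24-h accumulations (mm) at eight gauges
fcst = [12.0, 0.5, 8.0, 0.0, 25.0, 3.0, 0.0, 15.0]
obs = [10.0, 0.0, 1.0, 0.0, 30.0, 9.0, 2.0, 11.0]
scores = contingency_scores(fcst, obs, threshold=5.0)
```

The same table elements, mapped back to their stations, are what allow the regions affected by misses and false alarms to be drawn for each model run.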
Tonkin-Crine, Sarah; Anthierens, Sibyl; Hood, Kerenza; Yardley, Lucy; Cals, Jochen W L; Francis, Nick A; Coenen, Samuel; van der Velden, Alike W; Godycki-Cwirko, Maciek; Llor, Carl; Butler, Chris C; Verheij, Theo J M; Goossens, Herman; Little, Paul
2016-05-12
Mixed methods are commonly used in health services research; however, data are not often integrated to explore the complementarity of findings. A triangulation protocol is one approach to integrating such data. A retrospective triangulation protocol was carried out on mixed methods data collected as part of a process evaluation of a trial. The multi-country randomised controlled trial found that web-based training in communication skills (including use of a patient booklet) and the use of a C-reactive protein (CRP) point-of-care test decreased antibiotic prescribing by general practitioners (GPs) for acute cough. The process evaluation investigated GPs' and patients' experiences of taking part in the trial. Three analysts independently compared findings across four data sets: qualitative data collected via semi-structured interviews with (1) 62 patients and (2) 66 GPs, and quantitative data collected via questionnaires with (3) 2886 patients and (4) 346 GPs. Pairwise comparisons were made between data sets and were categorised as agreement, partial agreement, dissonance or silence. Three instances of dissonance occurred in 39 independent findings. GPs and patients reported different views on the use of a CRP test. GPs felt that the test was useful in convincing patients to accept a no-antibiotic decision, but patient data suggested that this was unnecessary if a full explanation was given. Whilst qualitative data indicated all patients were generally satisfied with their consultation, quantitative data indicated the highest levels of satisfaction for those receiving a detailed explanation from their GP with a booklet giving advice on self-care. Both qualitative and quantitative data sets indicated higher patient enablement for those in the communication groups who had received a booklet. Use of CRP tests does not appear to engage patients or influence illness perceptions, and its effect is more centred on changing clinician behaviour. Communication skills and the patient
Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad
2016-01-01
In this paper two different computational approaches, a deterministic one and a stochastic one, were used for calculation of the control rods worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, some efforts were made and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause of these discrepancies. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.
Boustani, Ehsan [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.; Khakshournia, Samad [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.
2016-12-15
In this paper two different computational approaches, a deterministic one and a stochastic one, were used for calculation of the control rods worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, some efforts were made and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause of these discrepancies. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.
Hussein, H.M.; Sakr, A.M.; Amin, E.H.
2011-01-01
The objective of this paper is to assess the suitability and accuracy of the deterministic diffusion method for the neutronic calculations of TRIGA-type research reactors in proposed condensed energy spectra of five and seven groups, with one and three thermal groups respectively, using the calculational line: WIMSD-IAEA-69 nuclear data library / WIMSD-5B lattice and cell calculations code / CITVAP v3.1 core calculations code. Firstly, the assessment goes through analyzing the integral parameters (k_eff, ρ_238, σ_235, σ_238 and C*) of the TRX and BAPL benchmark lattices and comparison with experimental and previous reference results using other ENDLs at the full energy spectra, which show good agreement with the references at both spectra. Secondly, evaluation of the 3D nuclear characteristics of three different cores of the TRR-1/M1 TRIGA Mark-III Thai research reactor, using the CITVAP v3.1 code and macroscopic cross-section libraries generated using the WIMSD-5B code at the proposed energy spectra separately. The results include the excess reactivities and the worth of control rods, which were compared with previous Monte Carlo results and experimental values and show good agreement with the references at both energy spectra, albeit better accuracies are obtained with the five-group spectrum. The results also include neutron flux distributions, which are recorded for future comparisons with other calculational techniques and are comparable to reactors and fuels of the same type. The study reflects the adequacy of using the pre-stated calculational line at the condensed energy spectra for evaluation of the neutronic parameters of TRIGA-type reactors, and future comparisons of the un-benchmarked results could confirm this result for a wider range of neutronics or safety-related parameters
Guo, Xiaoting; Sun, Changku; Wang, Peng
2017-08-01
This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed in order to adapt to the problem of sampling rate discrepancy. During inter-sampling of slow observation data, the observation noise can be regarded as infinite. The Kalman gain is unknown and approaches zero. The residual is also unknown. Therefore, the filter estimated state cannot be compensated. To obtain compensation at these moments, the state error and residual formulas are modified relative to the moments when observation data are available. A self-propagation equation of the state error is established to propagate the quantity from the moments with observation to the moments without observation. Besides, a multiplicative adjustment factor is introduced as the Kalman gain, which acts on the residual. Then the filter estimated state can be compensated even when there are no visual observation data. The proposed method is tested and verified in a practical setup. Compared with a multi-rate CKF without residual compensation and a single-rate CKF, a significant improvement is obtained on attitude measurement by using the proposed multi-rate CKF with inter-sampling residual compensation. The experimental results, with superior precision and reliability, show the effectiveness of the proposed method.
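As a hedged aside, the core multi-rate idea (predict at the fast rate, correct only when the slow sensor reports) can be illustrated with a deliberately simplified one-dimensional Kalman filter; this is not the paper's cubature filter or its residual-compensation scheme, and all constants below are made up:

```python
# Minimal 1-D Kalman sketch of multi-rate fusion: the fast loop always
# predicts; the measurement update runs only on steps where the slow
# sensor delivers data. Between observations the gain is effectively
# zero and the state goes uncorrected, as described in the abstract.
def kalman_step(x, P, z=None, q=0.01, r=0.25):
    # Prediction with identity dynamics (a simplifying assumption).
    x_pred, P_pred = x, P + q
    if z is None:                        # inter-sampling step: no observation
        return x_pred, P_pred
    K = P_pred / (P_pred + r)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)    # residual correction
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Hypothetical setup: the slow sensor reports every 4th fast step,
# always observing the true value 1.0.
x, P = 0.0, 1.0
for k in range(40):
    z = 1.0 if k % 4 == 0 else None
    x, P = kalman_step(x, P, z)
print(round(x, 3))
```

With repeated intermittent updates the estimate converges toward the observed value despite the uncorrected inter-sampling steps.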
A Numerical Simulation for a Deterministic Compartmental ...
In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Euler's numerical method. Using hypothetical values for the parameters, a program was written in the VISUAL BASIC programming language to generate series for the system of difference equations from the ...
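A minimal sketch of the forward-Euler iteration referred to above, written here in Python with a generic two-compartment susceptible-infected model and hypothetical parameter values (the paper's actual HIV/AIDS model is not reproduced):

```python
# Hedged sketch: forward-Euler stepping of a generic SI compartmental
# model. The parameters beta (transmission) and mu (removal) are
# invented for illustration only.
def euler_si(s0, i0, beta=0.0005, mu=0.02, dt=0.1, steps=1000):
    s, i = s0, i0
    for _ in range(steps):
        new_inf = beta * s * i          # new infections per unit time
        ds = -new_inf                   # susceptibles lost
        di = new_inf - mu * i           # infecteds gained minus removals
        s, i = s + dt * ds, i + dt * di
    return s, i

s, i = euler_si(s0=990.0, i0=10.0)
print(round(s, 1), round(i, 1))
```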
Misfeldt, I.
1980-01-01
A comprehensive evaluation of fuel element performance requires a probabilistic fuel code supported by a well bench-marked deterministic code. This paper presents an analysis of a SGHWR ramp experiment, where the probabilistic fuel code FRP is utilized in combination with the deterministic fuel models FFRS and SLEUTH/SEER. The statistical methods employed in FRP are Monte Carlo simulation or a low-order Taylor approximation. The fast-running simplistic fuel code FFRS is used for the deterministic simulations, whereas simulations with SLEUTH/SEER are used to verify the predictions of FFRS. The ramp test was performed with a SGHWR fuel element, where 9 of the 36 fuel pins failed. There seemed to be good agreement between the deterministic simulations and the experiment, but the statistical evaluation shows that the uncertainty on the important performance parameters is too large for this ''nice'' result. The analysis does therefore indicate a discrepancy between the experiment and the deterministic code predictions. Possible explanations for this disagreement are discussed. (author)
Height-Deterministic Pushdown Automata
Nowotka, Dirk; Srba, Jiri
2007-01-01
We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata.
Deterministic computation of functional integrals
Lobanov, Yu.Yu.
1995-09-01
A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in a complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, and no simplifying assumptions such as semi-classical or mean field approximations, collective excitations, or the introduction of ''short-time'' propagators are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by the computation of an ''ordinary'' (Riemannian) integral of low dimension, thus allowing the use of the preferable deterministic algorithms (normally Gaussian quadratures) in computations rather than the traditional stochastic (Monte Carlo) methods which are commonly used for the problem under consideration. The results of applying the method to the computation of the Green function of the Schroedinger equation in imaginary time, as well as the study of some models of Euclidean quantum mechanics, are presented. The comparison with results of other authors shows that our method gives a significant (by an order of magnitude) economy of computer time and memory versus other known methods while providing results with the same or better accuracy. The functional measure of the Gaussian type is considered and some of its particular cases, namely the conditional Wiener measure in quantum statistical mechanics and the functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the
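A toy one-dimensional analogue of the trade-off described above, comparing a deterministic Gauss-Hermite rule with a Monte Carlo estimate of the same Gaussian-weighted integral (the paper itself treats functional integrals, not this scalar case):

```python
# 1-D analogue of the deterministic-vs-stochastic trade-off: a 3-point
# Gauss-Hermite rule (exact for polynomials up to degree 5 under the
# weight exp(-x^2)) versus an importance-sampled Monte Carlo estimate.
import math, random

# 3-point Gauss-Hermite nodes and weights for weight function exp(-x^2).
NODES = [(-math.sqrt(1.5), math.sqrt(math.pi) / 6),
         (0.0,              2 * math.sqrt(math.pi) / 3),
         ( math.sqrt(1.5),  math.sqrt(math.pi) / 6)]

def gauss_hermite(f):
    return sum(w * f(x) for x, w in NODES)

def monte_carlo(f, n=10_000, seed=1):
    rng = random.Random(seed)
    # Sample from N(0, 1/2), whose density is exp(-x^2)/sqrt(pi).
    total = sum(f(rng.gauss(0.0, math.sqrt(0.5))) for _ in range(n))
    return math.sqrt(math.pi) * total / n

exact = math.sqrt(math.pi) / 2           # integral of x^2 * exp(-x^2)
print(abs(gauss_hermite(lambda t: t * t) - exact))   # rounding error only
print(abs(monte_carlo(lambda t: t * t) - exact))     # O(n^-1/2) noise
```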
Deterministic automata for extended regular expressions
Syzdykov Mirzakhmet
2017-12-01
Full Text Available In this work we present the algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined with subset construction rules) is used. The past work described only the algorithm for the AND-operator (or intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
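For comparison, the classical product construction (not the paper's NFA-overriding method) also yields a DFA for the intersection of two regular languages; a minimal sketch with DFAs encoded as dictionaries:

```python
# Classical product construction for DFA intersection. Each DFA is a
# dict with "start", "accept" (set), and "delta" keyed by (state, symbol).
def intersect_dfa(d1, d2, alphabet):
    start = (d1["start"], d2["start"])
    delta, accepting, todo, seen = {}, set(), [start], {start}
    while todo:
        p, q = todo.pop()
        if p in d1["accept"] and q in d2["accept"]:
            accepting.add((p, q))
        for a in alphabet:
            nxt = (d1["delta"][p, a], d2["delta"][q, a])
            delta[(p, q), a] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return {"start": start, "accept": accepting, "delta": delta}

def accepts(dfa, word):
    state = dfa["start"]
    for a in word:
        state = dfa["delta"][state, a]
    return state in dfa["accept"]

# Example: (even number of a's) INTERSECT (strings ending in b).
d1 = {"start": 0, "accept": {0},
      "delta": {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}}
d2 = {"start": 0, "accept": {1},
      "delta": {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}}
prod = intersect_dfa(d1, d2, "ab")
print(accepts(prod, "aab"), accepts(prod, "ab"))
```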
Deterministic quantitative risk assessment development
Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)
2009-07-01
Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)
Deterministic analyses of severe accident issues
Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.
2004-01-01
Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena alongside of probability methods to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications, and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents
Takaiwa, A; Kuwayama, N; Akioka, N; Kashiwazaki, D; Kuroda, S
2018-02-01
The present study was conducted to accurately determine the presence of mild cognitive impairment, which is often difficult to evaluate using only simple tests. Our approach focused on discrepancy analysis of fluid intelligence relative to crystallized intelligence using internationally recognized neuropsychological tests. One hundred and five patients diagnosed with asymptomatic carotid artery stenosis were assessed. The neuropsychological tests included the two subtests (information and picture completion) of the Wechsler Adult Intelligence Scale-Revised (WAIS-R-two-subtests) as crystallized intelligence tests and the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) (immediate memory, visuospatial/constructional, language, attention, delayed memory and total score) as fluid intelligence tests. Discrepancy analysis was used to assess cognitive impairment. The score for RBANS was subtracted from the score for the WAIS-R-two-subtests, and if the score difference was greater than the 5% confidence limit for statistical significance, it was defined as a decline in cognitive function. The WAIS-R-two-subtests score was within normal limits when compared with the standardized values. However, all RBANS domains showed significant declines. Frequencies of decline in each RBANS domain were as follows: 69 patients (66%) in immediate memory, 26 (25%) in visuospatial/constructional, 54 (51%) in language, 63 (60%) in attention, 54 (51%) in delayed memory and 78 (74%) in the total score. Moreover, 99 patients (94%) showed decline in at least one RBANS domain. Cognitive function is only preserved in a few patients with asymptomatic carotid artery stenosis. Mild cognitive impairment can be precisely detected by performing discrepancy analysis between crystallized and fluid intelligence tests. © 2017 EAN.
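The subtraction logic of the discrepancy analysis can be sketched as follows; the cutoff used here is a placeholder constant, not the paper's per-test 5% confidence limit:

```python
# Illustrative sketch of the discrepancy logic in the abstract: a domain
# is flagged as declined when the crystallized-intelligence score exceeds
# the fluid-intelligence score by more than a cutoff. The cutoff and the
# patient scores below are hypothetical.
def flag_declines(waisr_score, rbans_scores, cutoff=15):
    return {domain: (waisr_score - score) > cutoff
            for domain, score in rbans_scores.items()}

# Hypothetical patient: index scores on a mean-100 scale.
flags = flag_declines(102, {"immediate memory": 80, "language": 95,
                            "attention": 84, "delayed memory": 90})
print(flags)
```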
Deterministic indexing for packed strings
Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye
2017-01-01
Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ …
Hansen, Lisbet Sneftrup; Borup, Morten; Moller, Arne
2014-01-01
drainage models and reduce a number of unavoidable discrepancies between the model and reality. The latter can be achieved partly by inserting measured water levels from the sewer system into the model. This article describes how deterministic updating of model states in this manner affects a simulation...
Deterministic and probabilistic approach to safety analysis
Heuser, F.W.
1980-01-01
The examples discussed in this paper show that reliability analysis methods can fairly well be applied in order to interpret deterministic safety criteria in quantitative terms. For a further improved extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)
Nonlinear Markov processes: Deterministic case
Frank, T.D.
2008-01-01
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution
The dialectical thinking about deterministic and probabilistic safety analysis
Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong
2005-01-01
There are two methods for designing and analysing the safety performance of a nuclear power plant, the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants is based on the deterministic method. It has been proved in practice that the deterministic method is effective for current nuclear power plants. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the systems and constructions of the plant. PSA can be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the above two methods are reviewed and summarized in brief. Based on the discussion of two application cases - one is the changes to specific design provisions of the general design criteria (GDC) and the other is the risk-informed categorization of structures, systems and components - it can be concluded that the deterministic method and the probabilistic method are dialectical and unified, that they are gradually merging into each other, and that they are being used in coordination. (authors)
252Cf anti-ν discrepancy and the sulfur discrepancy
Smith, J.R.
1979-01-01
The cantankerous discrepancy among measured values of anti-ν for 252Cf appears at last to be nearing a final resolution. A recent review has summarized the progress that has been achieved through re-evaluation upward by 0.5% of two manganese bath values of anti-ν and the performance of a new liquid scintillator measurement. A new manganese bath measurement at INEL is in reasonably good agreement with previous manganese bath values of 252Cf anti-ν. It now appears that the manganese bath values could still be systematically low by as much as 0.4% because the BNL-325 thermal absorption cross section for sulfur may be as much as 10% low. There is a bona fide discrepancy between measurements of the sulfur cross section by pile oscillators and the values derived from transmission measurements. The resolution of this discrepancy is a prerequisite to the final resolution of the 252Cf anti-ν discrepancy. 22 references
Deterministic extraction from weak random sources
Gabizon, Ariel
2011-01-01
In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.
Deterministic hydrodynamics: Taking blood apart
Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.
2006-10-01
We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. We use the deterministic-array technology we developed, which separates white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. cells | plasma | separation | microfabrication
ICRP (1991) and deterministic effects
Mole, R.H.
1992-01-01
A critical review of ICRP Publication 60 (1991) shows that considerable revisions are needed in both language and thinking about deterministic effects (DE). ICRP (1991) makes a welcome and clear distinction between change, caused by irradiation; damage, some degree of deleterious change, for example to cells, but not necessarily deleterious to the exposed individual; harm, clinically observable deleterious effects expressed in individuals or their descendants; and detriment, a complex concept combining the probability, severity and time of expression of harm (para 42). (All added emphases come from the author.) Unfortunately these distinctions are not carried through into the discussion of deterministic effects (DE) and two important terms are left undefined. Presumably effect may refer to change, damage, harm or detriment, according to context. Clinically observable is also undefined, although its meaning is crucial to any consideration of DE, since DE are defined as causing observable harm (para 20). (Author)
Deterministic prediction of surface wind speed variations
G. V. Drisya
2014-11-01
Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
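One common convention for the normalised RMSE quoted above divides the RMSE by the range of the observed series; the paper may normalise differently, so this is only a plausible reading:

```python
# Sketch of a range-normalised RMSE (one common convention; the paper's
# exact normalisation is not specified in the abstract). The sample
# series below are invented wind-speed values in m/s.
import math

def nrmse(observed, predicted):
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return rmse / (max(observed) - min(observed))

obs = [4.0, 5.5, 6.0, 5.0, 4.5]
pred = [4.1, 5.4, 6.2, 4.9, 4.6]
print(round(nrmse(obs, pred), 3))
```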
Deterministic chaos in entangled eigenstates
Schlegel, K. G.; Förster, S.
2008-05-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop for large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present, in short, results from two time-dependent systems, the anisotropic and the Rabi oscillator.
Deterministic chaos in entangled eigenstates
Schlegel, K.G. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)], E-mail: guenter.schlegel@arcor.de; Foerster, S. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)
2008-05-12
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop for large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present, in short, results from two time-dependent systems, the anisotropic and the Rabi oscillator.
Deterministic chaos in entangled eigenstates
Schlegel, K.G.; Foerster, S.
2008-01-01
We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in a case of entanglement and three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop for large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present, in short, results from two time-dependent systems, the anisotropic and the Rabi oscillator.
Learning to Act: Qualitative Learning of Deterministic Action Models
Bolander, Thomas; Gierasimczuk, Nina
2017-01-01
In this article we study learnability of fully observable, universally applicable action models of dynamic epistemic logic. We introduce a framework for actions seen as sets of transitions between propositional states and we relate them to their dynamic epistemic logic representations as action … in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the aforementioned classes of actions and analyse …
Simulation of photonic waveguides with deterministic aperiodic nanostructures for biosensing
Neustock, Lars Thorben; Paulsen, Moritz; Jahns, Sabrina
2016-01-01
Photonic waveguides with deterministic aperiodic corrugations offer rich spectral characteristics under surface-normal illumination. The finite-element method (FEM), the finite-difference time-domain (FDTD) method and a rigorous coupled wave algorithm (RCWA) are compared for computing the near...
Safety margins in deterministic safety analysis
Viktorov, A.
2011-01-01
The concept of safety margins has acquired certain prominence in the attempts to demonstrate quantitatively the level of the nuclear power plant safety by means of deterministic analysis, especially when considering impacts from plant ageing and discovery issues. A number of international or industry publications exist that discuss various applications and interpretations of safety margins. The objective of this presentation is to bring together and examine in some detail, from the regulatory point of view, the safety margins that relate to deterministic safety analysis. In this paper, definitions of various safety margins are presented and discussed along with the regulatory expectations for them. Interrelationships of analysis input and output parameters with corresponding limits are explored. It is shown that the overall safety margin is composed of several components each having different origins and potential uses; in particular, margins associated with analysis output parameters are contrasted with margins linked to the analysis input. While these are separate, it is possible to influence output margins through the analysis input, and analysis method. Preserving safety margins is tantamount to maintaining safety. At the same time, efficiency of operation requires optimization of safety margins taking into account various technical and regulatory considerations. For this, basic definitions and rules for safety margins must be first established. (author)
Streamflow disaggregation: a nonlinear deterministic approach
B. Sivakumar
2004-01-01
Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
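A toy sketch of the two steps described above, with a one-dimensional "phase space" (nearest neighbour by coarse total) standing in for the multi-dimensional reconstruction; the history values are invented:

```python
# Toy nearest-neighbour disaggregation: split a new coarse-scale value
# using the fine-scale split ratios of its nearest historical neighbour.
# A 1-D stand-in for the paper's multi-dimensional phase-space embedding.
def disaggregate(history, coarse_value):
    """history: list of (coarse_total, (fine_a, fine_b)) pairs."""
    # Step 1 stand-in: nearest neighbour by coarse total.
    total, fines = min(history, key=lambda h: abs(h[0] - coarse_value))
    # Step 2: local approximation via the neighbour's split ratios.
    ratios = [f / total for f in fines]
    return [coarse_value * r for r in ratios]

# Invented history of 2-day totals and their daily splits.
history = [(10.0, (6.0, 4.0)), (20.0, (14.0, 6.0)), (8.0, (3.0, 5.0))]
parts = disaggregate(history, 18.0)   # nearest neighbour is the 20.0 record
print(parts)
```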
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
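The contrast drawn above between algebraic accumulation and proper error propagation can be made concrete with hypothetical component variations:

```python
# Numeric sketch of the point being made: algebraically summing tolerance
# limits through successive computations overstates the combined variation,
# whereas root-sum-square error propagation for independent errors does not.
# The component standard deviations below are hypothetical.
import math

sigmas = [3.0, 4.0, 12.0]                    # component standard deviations
algebraic = sum(sigmas)                      # serially cumulative sum
rss = math.sqrt(sum(s * s for s in sigmas))  # independent-error propagation
print(algebraic, rss)
```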
Discrepancy in ABO blood grouping
Khan, M.N.; Ahmed, Z.; Khan, T.A.
2013-01-01
Discrepancies in blood typing are among the major causes of transfusion reactions. These discrepancies can be avoided through detailed analysis of the blood typing. Here, we report a subgroup of blood group type B in the ABO system. The donor's blood was analyzed by employing commercial antisera for blood grouping. The results of the forward (known antisera) and reverse (known antigen) reactions were not complementary. A detailed analysis using the standard protocols of the American Association of Blood Banks revealed the blood type as a variant of blood group B instead of blood group O. This suggests that blood group typing should be performed with extreme care and that any divergence, if identified, should be properly resolved to avoid transfusion reactions. Moreover, a major study to determine the blood group variants in the Pakistani population is needed. (author)
Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli
2016-01-01
Highlights: • Variational mode decomposition is adopted to process the original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Due to the increasingly significant energy crisis nowadays, the exploitation and utilization of new clean energy is gaining more and more attention. As an important category of renewable energy, wind power generation has become the most rapidly growing renewable energy source in China. However, the intermittency and volatility of wind power have restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on power systems. In this paper, a novel combined model is proposed to improve the prediction performance of short-term wind power forecasting. Variational mode decomposition is first adopted to handle the instability of the raw wind power series, and the subseries can be reconstructed by measuring the sample entropy of the decomposed modes. Then the base models can be established for each subseries respectively. On this basis, the combined model is developed based on the optimal virtual prediction scheme, the weight matrix of which is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. Besides, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid
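The sample entropy used here to group the decomposed modes into subseries can be computed with a short routine. The following is a standard textbook formulation, not the paper's implementation; the defaults m = 2 and r = 0.2·std are common conventions, not values taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (within tolerance r*std, Chebyshev
    distance) also match for m + 1 points. Lower values = more regular."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b, a = match_count(m), match_count(m + 1)
    return float(np.log(b / a)) if a > 0 and b > 0 else float("inf")
```

Modes with similar sample entropy would then be summed into one subseries before the base models are fit.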
Nonlinear deterministic structures and the randomness of protein sequences
Huang Yan Zhao
2003-01-01
To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences representing each structural class by using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also give an explanation for the controversial results obtained in previous investigations.
A Deterministic Annealing Approach to Clustering AIRS Data
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the deterministic annealing technique.
Inferring hierarchical clustering structures by deterministic annealing
Hofmann, T.; Buhmann, J.M.
1996-01-01
The unsupervised detection of hierarchical structures is a major topic in unsupervised learning and one of the key questions in data analysis and representation. We propose a novel algorithm for the problem of learning decision trees for data clustering and related problems. In contrast to many other methods based on successive tree growing and pruning, we propose an objective function for tree evaluation and we derive a non-greedy technique for tree growing. Applying the principles of maximum entropy and minimum cross entropy, a deterministic annealing algorithm is derived in a mean-field approximation. This technique allows us to canonically superimpose tree structures and to fit parameters to averaged or 'fuzzified' trees.
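The mean-field idea behind deterministic annealing can be illustrated with a flat (non-hierarchical) clustering sketch: at each temperature T, Gibbs assignment probabilities alternate with centroid updates, and T is slowly lowered so that clusters split as the system cools. This is our own minimal illustration with arbitrary parameter values, not the authors' tree-structured algorithm.

```python
import numpy as np

def da_cluster(X, k, T0=5.0, Tmin=0.05, cooling=0.9, inner=20, seed=0):
    """Deterministic annealing clustering: at each temperature T, iterate the
    mean-field fixed point (soft Gibbs assignments -> centroid update), then
    cool. At high T all centroids coincide; they split as T drops below the
    critical temperature."""
    rng = np.random.default_rng(seed)
    centers = X.mean(0) + 1e-3 * rng.standard_normal((k, X.shape[1]))
    T = T0
    while T > Tmin:
        for _ in range(inner):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            p = np.exp(-(d2 - d2.min(1, keepdims=True)) / T)
            p /= p.sum(1, keepdims=True)            # Gibbs posterior over clusters
            centers = (p.T @ X) / (p.sum(0)[:, None] + 1e-12)
        T *= cooling
    return centers, p.argmax(1)
```

The annealing schedule makes the result far less sensitive to initialization than plain k-means, which is the property the abstract exploits for non-greedy tree growing.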
Deterministic Chaos in Radon Time Variation
Planinic, J.; Vukovic, B.; Radolic, V.; Faj, Z.; Stanic, D.
2003-01-01
Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10-minute intervals for a month. The radon time series were analyzed by comparing algorithms that extract phase-space dynamical information. The application of fractal methods made it possible to explore the chaotic nature of radon in the atmosphere. The computed fractal measures, such as the Hurst exponent (H) from rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, while the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos appearing in radon time variations. The calculated fractal dimensions of the attractors indicated that additional (meteorological) parameters influence radon in the atmosphere. (author)
Radon time variations and deterministic chaos
Planinic, J.; Vukovic, B.; Radolic, V.
2004-01-01
Radon concentrations were continuously measured outdoors, in the living room and in the basement at 10 min intervals for a month. Radon time series were analyzed by comparing algorithms to extract phase space dynamical information. The application of fractal methods enabled exploration of the chaotic nature of radon in the atmosphere. The computed fractal measures, such as the Hurst exponent (H) from the rescaled range analysis, the Lyapunov exponent (λ) and the attractor dimension, provided estimates of the degree of chaotic behavior. The obtained low values of the Hurst exponent (0 < H < 0.5) indicated anti-persistent behavior (non-random changes) of the time series, while the positive values of λ pointed out the great sensitivity to initial conditions and the deterministic chaos that appeared in radon time variations. The calculated fractal dimensions of the attractors indicated that additional (meteorological) parameters influence radon in the atmosphere
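The rescaled range (R/S) analysis used in these radon studies can be sketched compactly. This is a standard R/S estimator of the Hurst exponent, not the authors' code; the window-doubling scheme and minimum window size are our own choices.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis:
    for each window size n, average R/S over non-overlapping windows,
    then fit the slope of log(R/S) versus log(n)."""
    x = np.asarray(x, dtype=float)
    sizes, ratios = [], []
    n = min_window
    while n <= len(x) // 2:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())      # cumulative deviations from the mean
            r, s = z.max() - z.min(), w.std()
            if s > 0:
                vals.append(r / s)
        sizes.append(n)
        ratios.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(ratios), 1)
    return float(slope)
```

Values of H below 0.5 indicate the anti-persistent behavior reported in the abstract; uncorrelated noise gives H near 0.5 and persistent (trending) series give H above 0.5.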
Deterministic SLIR model for tuberculosis disease mapping
Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat
2017-11-01
Tuberculosis (TB) occurs worldwide. It can be transmitted to others directly through the air when persons with active TB sneeze, cough or spit. In Malaysia, it was reported that TB had been recognized as one of the most infectious diseases leading to death. Disease mapping is one of the methods that can be used in prevention strategies, since it displays a clear picture of the high- and low-risk areas. An important thing that needs to be considered when studying disease occurrence is relative risk estimation. The transmission of TB is studied through a mathematical model. Therefore, in this study, deterministic SLIR models are used to estimate the relative risk of TB transmission.
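A deterministic SLIR model adds a latent (L) compartment between the susceptible and infectious states. A minimal forward-Euler sketch of the normalized equations is given below; the parameter values β, σ, γ are illustrative, not taken from the study.

```python
def slir_step(s, l, i, r, beta=0.5, sigma=0.2, gamma=0.1, dt=0.1):
    """One forward-Euler step of the normalized SLIR equations:
    S' = -beta*S*I, L' = beta*S*I - sigma*L, I' = sigma*L - gamma*I, R' = gamma*I."""
    new_inf = beta * s * i          # new latent infections
    ds = -new_inf
    dl = new_inf - sigma * l        # latent individuals become infectious at rate sigma
    di = sigma * l - gamma * i      # infectious individuals recover at rate gamma
    dr = gamma * i
    return s + dt * ds, l + dt * dl, i + dt * di, r + dt * dr

def simulate_slir(days=300, dt=0.1, s0=0.99, i0=0.01):
    """Integrate the model and track the epidemic peak of the I compartment."""
    state = (s0, 0.0, i0, 0.0)
    peak_i = i0
    for _ in range(int(days / dt)):
        state = slir_step(*state, dt=dt)
        peak_i = max(peak_i, state[2])
    return state, peak_i
```

Because the four derivatives sum to zero, the total population fraction is conserved at every step, which is a useful sanity check on any implementation.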
Freiberg Florentina
2010-02-01
Abstract Background Myofascial pain is a common dysfunction with a lifetime prevalence affecting up to 85% of the general population. Current guidelines for the management of myofascial pain are not available. In this study we investigated how physicians, on the basis of prescription behaviour, evaluate the effectiveness of treatment options in their management of myofascial pain. Methods We conducted a cross-sectional, nationwide survey with a standardized questionnaire among 332 physicians (79.8% male, 25.6% female, 47.5 ± 9.6 years) experienced in treating patients with myofascial pain. Recruitment of physicians took place at three German meetings of pain therapists, rheumatologists and orthopaedists, respectively. Physicians estimated the prevalence of myofascial pain amongst patients in their practices, stated what treatments they used routinely and then rated the perceived treatment effectiveness on a six-point scale (with 1 being excellent). Data are expressed as mean ± standard deviation. Results The estimated overall prevalence of active myofascial trigger points is 46.1 ± 27.4%. Frequently prescribed treatments are analgesics, mainly metamizol/paracetamol (91.6%), non-steroidal anti-inflammatory drugs/coxibs (87.0%) or weak opioids (81.8%), and physical therapies, mainly manual therapy (81.1%), TENS (72.9%) or acupuncture (60.2%). Overall effectiveness ratings for analgesics (2.9 ± 0.7) and physical therapies (2.5 ± 0.8) were moderate. Effectiveness ratings of the various treatment options varied widely between specialities. 54.3% of all physicians characterized the available treatment options as insufficient. Conclusions Myofascial pain was estimated to be a prevalent condition. Despite a variety of commonly prescribed treatments, the moderate effectiveness ratings and the frequent characterizations of the available treatments as insufficient suggest an urgent need for clinical research to establish evidence-based guidelines for the
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
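The paper's remedy, reintroducing calibration residuals so that simulated output becomes stochastic, can be sketched as a simple bootstrap. This is our own minimal illustration; real applications must respect the autocorrelation and heteroscedasticity of hydrologic residuals, which independent resampling ignores.

```python
import numpy as np

def stochastic_ensemble(simulated, residuals, n_rep=500, seed=0):
    """Resample calibration residuals with replacement and add them back to
    the deterministic simulated responses, producing an ensemble whose
    distributional spread better matches the observations."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(residuals, size=(n_rep, len(simulated)), replace=True)
    return np.asarray(simulated)[None, :] + draws
```

The deterministic output alone is systematically too smooth (its variance underestimates the observed variance); adding back resampled residuals restores that missing spread.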
Deterministic Modeling of the High Temperature Test Reactor
Ortensi, J.; Cogliati, J.J.; Pope, M.A.; Ferrer, R.M.; Ougouag, A.M.
2010-01-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281-group energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's function solution of the transverse-integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control
Effect of EHR User Interface Changes on Internal Prescription Discrepancies
Sawarkar, A.; Dementieva, Y.A.; Breydo, E.; Ramelson, H.
2014-01-01
Summary Objective To determine whether specific design interventions (changes in the user interface (UI)) of an electronic health record (EHR) medication module are associated with an increase or decrease in the incidence of contradictions between the structured and narrative components of electronic prescriptions (internal prescription discrepancies). Materials and Methods We performed a retrospective analysis of 960,000 randomly selected electronic prescriptions generated in a single EHR between 01/2004 and 12/2011. Internal prescription discrepancies were identified using a validated natural language processing tool with recall of 76% and precision of 84%. A multivariable autoregressive integrated moving average (ARIMA) model was used to evaluate the effect of five UI changes in the EHR medication module on the incidence of internal prescription discrepancies. Results Over the study period 175,725 (18.4%) prescriptions were found to have internal discrepancies. The highest rate of prescription discrepancies was observed in March 2006 (22.5%) and the lowest in March 2009 (15.0%). Addition of an "as directed" option to the dropdown decreased prescription discrepancies by 195/month (p = 0.0004). A non-interruptive alert that reminded providers to ensure that structured and narrative components did not contradict each other decreased prescription discrepancies by 145/month (p = 0.03). Addition of a "Renew/Sign" button to the medication module (a negative control) did not have an effect on prescription discrepancies. Conclusions Several UI changes in the electronic medication module were effective in reducing the incidence of internal prescription discrepancies. Further research is needed to identify interventions that can completely eliminate this type of prescription error and their effects on patient outcomes. PMID:25298811
Daciuk, J; Champarnaud, JM; Maurel, D
2003-01-01
This paper compares various methods for constructing minimal, deterministic, acyclic, finite-state automata (recognizers) from sets of words. Incremental, semi-incremental, and non-incremental methods have been implemented and evaluated.
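The simplest of the strategies being compared, building a trie and then merging equivalent states bottom-up through a register, can be sketched as follows. This is a non-incremental illustration of state minimization for acyclic automata, not the code evaluated in the paper.

```python
class Node:
    def __init__(self):
        self.final = False
        self.edges = {}

def minimize(node, register):
    """Post-order merge: two states are equivalent iff they have the same
    finality and identical transitions to already-merged targets."""
    for c in list(node.edges):
        node.edges[c] = minimize(node.edges[c], register)
    key = (node.final, tuple(sorted((c, id(t)) for c, t in node.edges.items())))
    return register.setdefault(key, node)

def build_min_automaton(words):
    root = Node()
    for w in words:                          # 1) insert every word into a trie
        cur = root
        for ch in w:
            cur = cur.edges.setdefault(ch, Node())
        cur.final = True
    return minimize(root, {})                # 2) share equivalent suffixes

def accepts(root, w):
    cur = root
    for ch in w:
        if ch not in cur.edges:
            return False
        cur = cur.edges[ch]
    return cur.final

def count_states(root):
    seen, stack = set(), [root]
    while stack:
        n = stack.pop()
        if id(n) not in seen:
            seen.add(id(n))
            stack.extend(n.edges.values())
    return len(seen)
```

The incremental algorithms compared in the paper avoid materializing the full trie by registering states as each sorted word is added, which matters for large lexicons.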
Commentary Discrepancy between statistical analysis method and ...
is against the Consolidated Standards of Reporting Trials (CONSORT) ... more than satisfied with the non-financial reward of being included in the ... Studies in Epidemiology (STROBE) Statement: Guidelines for reporting observational ...
Deterministic and unambiguous dense coding
Wu Shengjun; Cohen, Scott M.; Sun Yuqing; Griffiths, Robert B.
2006-01-01
Optimal dense coding using a partially-entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case, where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case, where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1 − τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D ≤ D̄ a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. For D > D̄ it is shown that L_d is strictly less than D² unless D is an integer multiple of D̄, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D ≤ D̄, assuming τ_x > 0 for a set of DD̄ messages, and a bound is obtained for the average ⟨1/τ⟩. A bound on the average ⟨τ⟩ requires an additional assumption of encoding by isometries (unitaries when D = D̄) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D > D̄ it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially-entangled states, including noisy (mixed) states
The State of Deterministic Thinking among Mothers of Autistic Children
Mehrnoush Esbati
2011-10-01
Objectives: The purpose of the present study was to investigate the effectiveness of cognitive-behavioral education in decreasing deterministic thinking in mothers of children with autism spectrum disorders. Methods: Participants were 24 mothers of autistic children who were referred to counseling centers of Tehran and whose children's disorder had been diagnosed by at least a psychiatrist and a counselor. They were randomly selected and assigned to control and experimental groups. The measurement tool was the Deterministic Thinking Questionnaire; both groups answered it before and after the education, and the answers were analyzed by analysis of covariance. Results: The results indicated that cognitive-behavioral education decreased deterministic thinking among mothers of autistic children; it decreased four subscales of deterministic thinking as well: interaction with others, absolute thinking, prediction of the future, and negative events (P < 0.05). Discussion: By learning cognitive and behavioral techniques, parents of children with autism can reach a higher level of psychological well-being, and it is likely that these cognitive-behavioral skills would have a positive impact on the general life satisfaction of mothers of children with autism.
The resolution of discrepancies among nuclear data
Peelle, R.
1992-01-01
Significant differences among input data occur in the evaluation of nuclear data because it is difficult to achieve experimental results with the accuracy required for some applications. Types of "discrepancies" are classified. The means are reviewed by which an evaluator may treat discrepancies in the process of evaluation. When all means fail that are based on how the discrepant data were obtained, the perplexed evaluator must sometimes combine discrepant data based just on the stated values and uncertainties; techniques for treating such challenges are compared. Some well-known data discrepancies are examined as examples.
Deterministic models for energy-loss straggling
Prinja, A.K.; Gleicher, F.; Dunham, G.; Morel, J.E.
1999-01-01
Inelastic ion interactions with target electrons are dominated by extremely small energy transfers that are difficult to resolve numerically. The continuous-slowing-down (CSD) approximation is then commonly employed, which, however, only preserves the mean energy loss per collision through the stopping power, S(E) = ∫₀^∞ dE′ (E − E′) σ_s(E → E′). To accommodate energy-loss straggling, a Gaussian distribution with the correct mean-squared energy loss (akin to a Fokker-Planck approximation in energy) is commonly used in continuous-energy Monte Carlo codes. Although this model has the unphysical feature that ions can be upscattered, it nevertheless yields accurate results. A multigroup model for energy-loss straggling was recently presented for use in multigroup Monte Carlo codes or in deterministic codes that use multigroup data. The method has the advantage that the mean and mean-squared energy loss are preserved without unphysical upscatter and hence is computationally efficient. Results for energy spectra compared extremely well with Gaussian distributions under the idealized conditions for which the Gaussian may be considered to be exact. Here, the authors present more consistent comparisons by extending the method to accommodate upscatter and, further, compare both methods with exact solutions obtained from an analog Monte Carlo simulation, for a straight-ahead transport problem
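The Gaussian straggling model described above can be sketched for a single ion step. This is an illustrative Monte Carlo fragment: the stopping-power and straggling functions are hypothetical placeholders, and clipping the sampled loss at zero is our crude stand-in for suppressing the unphysical upscatter the abstract mentions.

```python
import numpy as np

def straggled_energy(E, path, stopping, straggling, rng):
    """Advance an ion of energy E over a path length: sample the energy loss
    from a Gaussian whose mean is S(E)*path and whose variance is the
    straggling coefficient times the path length."""
    loss = rng.normal(stopping(E) * path, np.sqrt(straggling(E) * path))
    return max(E - max(loss, 0.0), 0.0)   # clip: no upscatter, no negative energy
```

When the straggling variance is small relative to the mean loss, the clipping is rarely triggered and the sampled mean energy loss matches S(E)·path, which is the moment-preservation property the multigroup model is built around.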
Deterministic Approach to Detect Heart Sound Irregularities
Richard Mengko
2017-07-01
A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From the analysis of the heart sound's S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled. This greatly helps the physician diagnose heart problems. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately, and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
Deterministic secure communication protocol without using entanglement
Cai, Qing-yu
2003-01-01
We show a deterministic secure direct communication protocol using single qubit in mixed state. The security of this protocol is based on the security proof of BB84 protocol. It can be realized with current technologies.
Deterministic hazard quotients (HQs): Heading down the wrong road
Wilde, L.; Hunter, C.; Simpson, J.
1995-01-01
The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in the remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates that further risk evaluation is needed, while an HQ < 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify the uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measures of the associated uncertainty. Sensitivity analyses were used to identify the factors most significantly influencing the risk determination. Understanding the uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches
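The contrast between the deterministic and probabilistic treatments can be illustrated with a short Monte Carlo sketch. The numbers below are made up, and the lognormal assumption for the EPC and EBV distributions is ours, not the authors'.

```python
import numpy as np

def probabilistic_hq(epc_mean, epc_sd, ebv_mean, ebv_sd, n=100_000, seed=0):
    """Sample EPC and EBV from lognormal distributions moment-matched to the
    given arithmetic mean/sd and report (mean HQ, probability HQ > 1)."""
    rng = np.random.default_rng(seed)

    def ln_params(mean, sd):              # moment-match a lognormal
        s2 = np.log(1.0 + (sd / mean) ** 2)
        return np.log(mean) - s2 / 2.0, np.sqrt(s2)

    mu_e, sig_e = ln_params(epc_mean, epc_sd)
    mu_b, sig_b = ln_params(ebv_mean, ebv_sd)
    hq = rng.lognormal(mu_e, sig_e, n) / rng.lognormal(mu_b, sig_b, n)
    return float(hq.mean()), float((hq > 1.0).mean())

# A deterministic HQ of 0.5/1.0 = 0.5 would screen the site out, yet with
# realistic spread the chance that HQ exceeds 1 is far from negligible:
mean_hq, p_exceed = probabilistic_hq(0.5, 0.4, 1.0, 0.5)
```

The exceedance probability, not the point estimate, is what the sensitivity analyses in the abstract act on.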
Deterministic chaos in the processor load
Halbiniak, Zbigniew; Jozwiak, Ireneusz J.
2007-01-01
In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
Discrepancies in assessing undergraduates’ pragmatics learning
Oscar Ndayizeye
2017-12-01
The purpose of this research was to reveal the level of implementation of authentic assessment in the pragmatics course at the English Education Department of a university. The Discrepancy Evaluation Model (DEM) was used. The instruments were a questionnaire, documentation, and observation. The results show that the effectiveness of the definition, installation, process, and production stages, in logits, is respectively -0.06, -0.14, 0.45, and 0.02 on the aspect of the assessment methods' effectiveness in uncovering students' ability. These values indicate that the levels of implementation fall respectively into the 'very high', 'high', 'low', and 'very low' categories. The students' success rate is in the 'very high' category with an average score of 3.22. However, the overall implementation of authentic assessment fell into the 'low' category with an average score of 0.06. Discrepancies leading to such a low implementation are the unavailability of an assessment scheme and of a scoring rubric, minimal (only 54.54%) diversification of assessment methods, the infrequency of the lecturer's feedback on the students' academic achievement, and the non-use of portfolio assessment.
Progress in nuclear well logging modeling using deterministic transport codes
Kodeli, I.; Aldama, D.L.; Maucec, M.; Trkov, A.
2002-01-01
Further studies, in continuation of the work presented in 2001 in Portoroz, were performed in order to study and improve the performance, precision and domain of application of deterministic transport codes with respect to oil well logging analysis. These codes are in particular expected to complement the Monte Carlo solutions, since they can provide a detailed particle flux distribution in the whole geometry in a very reasonable CPU time. Real-time calculation can be envisaged. The performance of deterministic transport methods was compared to that of the Monte Carlo method. The IRTMBA generic benchmark was analysed using the codes MCNP-4C and DORT/TORT. Centered as well as eccentric casings were considered using a 14 MeV point neutron source and NaI scintillation detectors. Neutron and gamma spectra were compared at two detector positions. (author)
Distinguishing deterministic and noise components in ELM time series
Zvejnieks, G.; Kuzovkov, V.N
2004-01-01
One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles that allows us to distinguish deterministic and noise components. We extended the linear autoregression (AR) method by including nonlinearity (the NAR method). As a starting point we have chosen nonlinearity in polynomial form; however, the NAR method can be extended to any other type of nonlinear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we have analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The obtained results indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type
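The model-selection step, fitting polynomial NAR models of increasing degree and keeping the one with the lowest BIC, can be sketched with ordinary least squares. This is our own illustration of the general technique, not the authors' implementation.

```python
import numpy as np

def nar_bic(y, order=2, degree=1):
    """Fit y_t = c + sum over lags k and powers d of b[k,d] * y_{t-k}^d by
    least squares and return the Bayesian Information Criterion of the fit
    (degree=1 is a plain linear AR model; degree>1 adds the NAR terms)."""
    y = np.asarray(y, dtype=float)
    n = len(y) - order
    cols = [np.ones(n)]
    for k in range(1, order + 1):
        lag = y[order - k:len(y) - k]       # the y_{t-k} column
        for d in range(1, degree + 1):
            cols.append(lag ** d)           # degree > 1 adds nonlinear terms
    X, target = np.column_stack(cols), y[order:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    mse = float(((target - X @ beta) ** 2).mean())
    return n * np.log(mse) + X.shape[1] * np.log(n)
```

On data generated by a truly linear AR process, the BIC penalty outweighs the negligible fit gain of the extra polynomial terms, so the linear model wins, mirroring the paper's conclusion for type I ELMs.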
Evaluation of Deterministic and Stochastic Components of Traffic Counts
Ivan Bošnjak
2012-10-01
Traffic counts or statistical evidence of the traffic process are often a characteristic of time-series data. In this paper the fundamental problem of estimating the deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
A Deterministic Approach to Earthquake Prediction
Vittorio Sgrigna
2012-01-01
Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at placing the earthquake phenomenon within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now what is lacking is the demonstration of a causal relationship (with explained physical processes) obtained by looking for a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a new strong theoretical scientific effort is necessary to try to understand the physics of the earthquake.
Analysis of pinching in deterministic particle separation
Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German
2011-11-01
We investigate the problem of spherical particles settling vertically (under gravity, parallel to the Y-axis) through a pinching gap created by an obstacle (spherical or cylindrical, centered at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately, (2) computationally, using the lattice Boltzmann method for particulate systems, and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that, for a given initial separation between the particle centre and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than its absence does. Experimentally, this is expected to result in an early onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.
Design of deterministic interleaver for turbo codes
Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.
2008-01-01
The choice of a suitable interleaver for turbo codes can improve performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short-frame turbo codes is considered in this paper. The main characteristic of this class of deterministic interleavers is that their algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that an interleaver designed in this manner improves the minimum distance or reduces the multiplicity of the first few spectral lines of the minimum distance spectrum. Finally, we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames, to improve the decoding capability of the MAP (maximum a posteriori) probability decoder. Our solution for designing a deterministic interleaver outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
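As one concrete example of an algebraically designed deterministic interleaver (a well-known family, not this paper's specific design), a quadratic permutation polynomial (QPP) interleaver can be sketched as:

```python
def qpp_interleaver(n, f1, f2):
    """Quadratic permutation polynomial interleaver: pi(i) = (f1*i + f2*i^2) mod n.
    An algebraic, deterministic interleaver family (used e.g. in LTE turbo
    codes); it spreads nearby input indices across the output frame."""
    return [(f1 * i + f2 * i * i) % n for i in range(n)]

# n = 40: f1 = 3 is coprime with 40, and every prime factor of 40 (2 and 5)
# divides f2 = 10 -- the standard sufficient condition for a permutation.
pi = qpp_interleaver(40, 3, 10)
print(sorted(pi) == list(range(40)))  # True: pi is a valid permutation
```

Because the mapping is a closed-form polynomial, both encoder and decoder can regenerate it on the fly, with no stored random permutation table.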
Imaging with Kantorovich--Rubinstein Discrepancy
Lellmann, Jan
2014-01-01
© 2014 Society for Industrial and Applied Mathematics. We propose the use of the Kantorovich-Rubinstein norm from optimal transport in imaging problems. In particular, we discuss a variational regularization model endowed with a Kantorovich-Rubinstein discrepancy term and total variation regularization in the context of image denoising and cartoon-texture decomposition. We point out connections of this approach to several other recently proposed methods such as total generalized variation and norms capturing oscillating patterns. We also show that the respective optimization problem can be turned into a convex-concave saddle point problem with simple constraints and hence can be solved by standard tools. Numerical examples exhibit interesting features and favorable performance for denoising and cartoon-texture decomposition.
Precision production: enabling deterministic throughput for precision aspheres with MRF
Maloney, Chris; Entezarian, Navid; Dumas, Paul
2017-10-01
Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production manufacturing of precision aspheres has emerged, and it is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres, but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user-friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.
Proving Non-Deterministic Computations in Agda
Sergio Antoy
2017-01-01
Full Text Available We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, are in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct and competing approaches to incorporating the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.
Deterministic dense coding with partially entangled states
Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni
2005-01-01
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d > 2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d² - 1. We also find that states with less entanglement can have a greater deterministic communication capacity than other, more entangled states.
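The d = 2 baseline behind this result is standard dense coding, where a maximally entangled pair lets one transmitted qubit carry d² = 4 distinguishable letters; a quick numerical check:

```python
import numpy as np

# Standard dense coding with a maximally entangled qubit pair (d = 2):
# Alice applies one of the four Paulis to her half; the four resulting
# two-qubit states are mutually orthogonal, so Bob decodes 2 classical
# bits (d^2 = 4 letters) from the single transmitted qubit.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z                                   # Pauli Y up to convention

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
signals = [np.kron(P, I) @ bell for P in (I, X, Y, Z)]

gram = np.array([[abs(np.vdot(a, b)) for b in signals] for a in signals])
print(np.allclose(gram, np.eye(4)))  # True: four perfectly distinguishable letters
```

With a partially entangled state the four states are no longer orthogonal, which is exactly why the paper's maximal alphabet size drops below d².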
Seyed Jalal Younesi
2015-06-01
Full Text Available Objective: The current research investigates the relation between deterministic thinking and mental health among drug abusers, in which the role of cognitive distortions is considered and clarified by focusing on deterministic thinking. Methods: The present study is descriptive and correlational. All individuals with experience of drug abuse who had been referred to the Shafagh Rehabilitation center (Kahrizak) were considered as the statistical population. 110 individuals who were addicted to drugs (stimulants and methamphetamine) were selected from this population by purposeful sampling to answer questionnaires about deterministic thinking and general health. For data analysis, the Pearson correlation coefficient and regression analysis were used. Results: The results showed that there is a positive and significant relationship between deterministic thinking and the lack of mental health (r = 0.22, P < 0.05); among the factors of mental health, anxiety and depression had the closest relation to deterministic thinking. It was found that the two factors of deterministic thinking that function as the strongest predictors of the lack of mental health are definitiveness in predicting tragic events and future anticipation. Discussion: It seems that drug abusers suffer from deterministic thinking when they are confronted with difficult situations, so they are more affected by depression and anxiety. This way of thinking may play a major role in impelling or restraining drug addiction.
Introducing Synchronisation in Deterministic Network Models
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real-time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented, and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models.
Optimal Deterministic Investment Strategies for Insurers
Ulrich Rieder
2013-11-01
Full Text Available We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.
Self-Esteem Discrepancies and Depression.
Schafer, Robert B.; Keith, Patricia M.
1981-01-01
Examined the relationship between self-esteem discrepancies and depression in a long-term intimate relationship. Findings supported the hypothesis that depression is associated with discrepancies between married partners' self-appraisals, perceptions of spouse's appraisal, and spouse's actual appraisal. (Author/DB)
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2017-09-01
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided with this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p cranial nerves. Probabilistic tracking with a gradual
Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads
Králik Juraj
2014-12-01
Full Text Available This paper presents experiences from the deterministic and probabilistic analysis of the reliability of a communication bridge structure's resistance to extreme loads - wind and earthquake. The efficiency of the bracing systems is considered using the example of the steel bridge between two NPP buildings. The advantages and disadvantages of the deterministic and probabilistic analyses of the structure's resistance are discussed. The advantages of utilizing the LHS method to analyze the safety and reliability of structures are presented.
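The LHS (Latin Hypercube Sampling) method the abstract advocates can be sketched in a few lines; this is the generic textbook construction, not the paper's implementation:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin Hypercube Sampling: each of n_samples equal-probability strata
    of every dimension is sampled exactly once, giving better space
    coverage than plain random sampling for the same number of model runs."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])                 # decouple the dimensions
    return u

rng = np.random.default_rng(42)
pts = latin_hypercube(10, 2, rng)
# Each stratum [k/10, (k+1)/10) holds exactly one point per dimension
strata = np.sort((pts * 10).astype(int), axis=0)
print(np.array_equal(strata, np.tile(np.arange(10)[:, None], (1, 2))))
```

For a structural reliability study, each row of `pts` would be mapped through the inverse CDFs of the load and resistance variables before running the deterministic model.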
ZERODUR: deterministic approach for strength design
Hartmann, Peter
2012-12-01
There is an increasing request for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed not to be feasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to provide a much better fit to the data. Moreover, it delivers a lower threshold value, i.e. a minimum value for breakage stress, which removes statistical uncertainty by introducing a deterministic method to calculate design strength. Considerations taken from the theory of fracture mechanics, which have been proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable including fatigue due to stress corrosion in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution.
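The deterministic design-strength idea rests on the three-parameter Weibull form, whose threshold parameter makes the failure probability exactly zero below a minimum stress; a sketch with purely illustrative parameter values (not ZERODUR data):

```python
import math

def failure_probability(stress, threshold, scale, modulus):
    """Three-parameter Weibull CDF: F(s) = 1 - exp(-((s - s0)/sc)**m)
    for s > s0, and exactly 0 at or below the threshold s0 -- the
    deterministic lower bound on breakage stress the abstract exploits."""
    if stress <= threshold:
        return 0.0
    return 1.0 - math.exp(-(((stress - threshold) / scale) ** modulus))

s0, sc, m = 50.0, 30.0, 5.0   # MPa; hypothetical numbers, for illustration only
p_below = failure_probability(45.0, s0, sc, m)
p_above = failure_probability(80.0, s0, sc, m)
print(p_below)               # 0.0: below the threshold, no failure at all
print(0.0 < p_above < 1.0)
```

A two-parameter fit (threshold fixed at zero) would instead assign a small but nonzero failure probability to every stress level, which is what forced the very low extrapolated design strengths the abstract mentions.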
An attempt to explain the uranium 238 resonance integral discrepancy
Tellier, H.; Grandotto, M.
1978-01-01
Studies on the uranium 238 resonance integral discrepancy were carried out for light water reactor physics. It was shown that using recently published resonance parameters and substituting a multilevel formalism for the usual Breit and Wigner formula reduced the well-known discrepancy between two values of the uranium 238 effective resonance integral: the value calculated with the nuclear data and the one deduced from critical experiments. Since the cross section computed with these assumptions agrees quite well with the Oak Ridge transmission data, it was used to obtain the self-shielding effect and the capture rate in light water lattices. The multiplication factor calculated with this method is found to be very close to the experimental value. Preliminary results for a set of benchmarks relative to several types of thermal neutron reactors lead to very low discrepancies. The reactivity loss is only 130 × 10^-5 instead of 650 × 10^-5 in the case of the usual libraries and the single-level formula.
A Theory of Deterministic Event Structures
Lee, I.; Rensink, Arend; Smolka, S.A.
1995-01-01
We present an ω-complete algebra of a class of deterministic event structures, which are labelled prime event structures where the labelling function satisfies a certain distinctness condition. The operators of the algebra are summation, sequential composition and join. Each of these gives rise to a...
Using MCBEND for neutron or gamma-ray deterministic calculations
Geoff Dobson
2017-01-01
Full Text Available MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler’s ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with Splitting/Russian Roulette. MCBEND has a well established automated tool to generate this importance map, commonly referred to as the MAGIC module using a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.
Piecewise deterministic processes in biological models
Rudnicki, Ryszard
2017-01-01
This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...
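A minimal PDMP of the kind the book treats is the telegraph model of gene expression: random ON/OFF switching (a continuous-time Markov chain) with deterministic protein dynamics between the jumps. A sketch, with illustrative rate constants:

```python
import math
import random

def simulate_telegraph_gene(t_end, k_on, k_off, prod, decay, rng):
    """PDMP sketch: the gene state flips 0 <-> 1 at exponentially
    distributed (Markov) times; between jumps the protein level follows
    the deterministic ODE x' = prod*state - decay*x, integrated in
    closed form along each inter-jump interval."""
    t, x, state = 0.0, 0.0, 0
    while t < t_end:
        rate = k_off if state else k_on
        dt = min(rng.expovariate(rate), t_end - t)
        target = prod * state / decay            # fixed point of the flow
        x = target + (x - target) * math.exp(-decay * dt)
        t += dt
        state = 1 - state
    return x

rng = random.Random(7)
x_final = simulate_telegraph_gene(100.0, k_on=1.0, k_off=1.0,
                                  prod=2.0, decay=0.5, rng=rng)
print(0.0 <= x_final <= 2.0 / 0.5)  # trajectory stays within [0, prod/decay]
```

The only randomness is in the jump times; everything between jumps is deterministic, which is exactly the structure that makes the semigroup machinery described in the book applicable.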
Deterministic nonlinear systems a short course
Anishchenko, Vadim S; Strelkova, Galina I
2014-01-01
This text is a short yet complete course on nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures, it reflects the many years of teaching experience by the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincaré recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.
Deterministic nanoparticle assemblies: from substrate to solution
Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J
2014-01-01
The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)
Deterministic dynamics of plasma focus discharges
Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.
1992-04-01
The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed with fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic "internal" dynamics involving degrees of freedom not controlled by the operator, possibly related to adsorption and desorption of impurities from the electrodes. According to these dynamics, the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs
Advances in stochastic and deterministic global optimization
Zhigljavsky, Anatoly; Žilinskas, Julius
2016-01-01
Current research results in stochastic and deterministic global optimization, including single and multiple objectives, are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas, a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function and the development and implementation of efficient algorithms for global optimization with single and multiple objectives.
Understanding deterministic diffusion by correlated random walks
Klages, R.; Korabel, N.
2002-01-01
Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme of how to approximate deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples which are a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
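The kind of system the abstract studies can be illustrated with a piecewise-linear lifted map; here the diffusion coefficient is estimated directly from the Einstein relation rather than from the Green-Kubo formulae the paper uses, and the slope and ensemble parameters are illustrative:

```python
import numpy as np

def lift_map(x, a):
    """Piecewise-linear map of the type used to study deterministic
    diffusion, defined on the unit cell and lifted to the whole line by
    M(x + n) = M(x) + n; for slope a > 2 the chaotic dynamics transports
    iterates between neighbouring unit cells."""
    n = np.floor(x)
    xi = x - n                                   # position inside the unit cell
    eta = np.where(xi < 0.5, a * xi, a * xi + 1.0 - a)
    return n + eta

a, steps = 3.0, 2000
rng = np.random.default_rng(3)
x = rng.random(20000)                            # ensemble of initial conditions
x0 = x.copy()
for _ in range(steps):
    x = lift_map(x, a)

# Einstein relation: D ~ <(x_t - x_0)^2> / (2 t)
D = float(np.mean((x - x0) ** 2)) / (2 * steps)
print(0.0 < D < 2.0)  # finite, nonzero deterministic diffusion coefficient
```

Sweeping the slope `a` and repeating this estimate reproduces the irregular, parameter-dependent D(a) curve that the correlated-random-walk hierarchy in the paper is designed to explain.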
Dynamic optimization deterministic and stochastic models
Hinderer, Karl; Stieglitz, Michael
2016-01-01
This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
Deterministic geologic processes and stochastic modeling
Rautman, C.A.; Flint, A.L.
1992-01-01
This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling.
Deterministic nonlinear phase gates induced by a single qubit
Park, Kimin; Marek, Petr; Filip, Radim
2018-05-01
We propose deterministic realizations of nonlinear phase gates by repeating a finite sequence of non-commuting Rabi interactions between a harmonic oscillator and only a single two-level ancillary qubit. We show explicitly that the key nonclassical features of the ideal cubic phase gate and the quartic phase gate are generated in the harmonic oscillator faithfully by our method. We numerically analyzed the performance of our scheme under realistic imperfections of the oscillator and the two-level system. The methodology is extended further to higher-order nonlinear phase gates. This theoretical proposal completes the set of operations required for continuous-variable quantum computation.
The deterministic optical alignment of the HERMES spectrograph
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
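For contrast with the deterministic mean-field variant, a single analysis step of the standard stochastic (perturbed-observation) EnKF that the paper takes as its baseline can be sketched as follows; the toy linear observation model and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, N = 2, 1, 500                       # state dim, obs dim, ensemble size
H = np.array([[1.0, 0.0]])                # observe the first state component
R = np.array([[0.25]])                    # observation-noise covariance
truth = np.array([1.0, -1.0])
y = H @ truth + rng.multivariate_normal([0.0], R)

ens = rng.multivariate_normal(np.zeros(d), np.eye(d), size=N).T   # forecast ensemble
X = ens - ens.mean(axis=1, keepdims=True)
P = X @ X.T / (N - 1)                     # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)                      # Kalman gain

# Perturbed observations: each member assimilates y plus fresh noise
obs_pert = y[:, None] + rng.multivariate_normal([0.0], R, size=N).T
analysis = ens + K @ (obs_pert - H @ ens)

shrunk = np.var(analysis[0]) < np.var(ens[0])
print(shrunk)  # True: the observed component's ensemble spread shrinks
```

The mean-field deterministic filter of the paper replaces this Monte Carlo ensemble by a PDE solver and a quadrature rule for the limiting density, removing the sampling noise that the perturbed observations introduce.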
UPC Scaling-up methodology for Deterministic Safety Assessment and Support to Plant Operation
Martínez-Quiroga, V.; Reventós, F.; Batet, Il.
2015-07-01
Best Estimate codes, along with the necessary nodalizations, are widely used tools in nuclear engineering for both Deterministic Safety Assessment (DSA) and Support to Plant Operation and Control. In this framework, the application of quality assurance procedures to both codes and nodalizations becomes an essential step prior to any significant study. Along these lines, the present paper introduces UPC SCUP, a systematic methodology based on the extrapolation of Integral Test Facility (ITF) post-test simulations by means of scaling analyses. SCUP thus fills a gap in current nodalization qualification procedures, namely the validation of NPP nodalizations for Design Basis Accident conditions. Three pillars support SCUP: judicious selection of the experimental transients, full confidence in the quality of the ITF simulations, and simplicity in justifying the discrepancies that appear between ITF and NPP counterpart transients. The techniques presented include the so-called Kv scaled calculations as well as two new approaches, "hybrid nodalizations" and "scaled-up nodalizations". These last two methods have proven very helpful in producing the required qualification and in promoting further improvements in nodalization. The study of LSTF and PKL counterpart tests has allowed the methodology to be qualified by comparison with experimental data. Post-test simulations at different scales made it possible to establish which phenomena could be well reproduced by system codes and which could not, thereby also establishing the basis for extrapolation to an NPP scaled calculation. Furthermore, the application of the UPC SCUP methodology demonstrated that selected phenomena can be scaled up and explained between counterpart simulations by carefully considering the differences in scale and design. (Author)
Discrepancies between cognition and decision making in older adults
Boyle, Patricia A.; James, Bryan D.; Yu, Lei; Barnes, Lisa L.; Bennett, David A.
2015-01-01
Background and aims There is increasing clinical and legal interest in discrepancies between decision-making ability and cognition in old age, a stage of life when decisions have major ramifications. We investigated the frequency and correlates of such discrepancies in non-demented older adults participating in a large community-based cohort study of aging, the Rush Memory and Aging Project. Methods Participants [n = 689, mean age 81.8 (SD 7.6), mean education 15.2 (SD 3.1), 76.8 % female and 93.3 % white] completed a measure of financial and healthcare decision making (DM) and a battery of 19 neuropsychological tests from which a composite measure of global cognition (COG) was derived. Results Results indicated that 23.9 % of the sample showed a significant discrepancy between DM and COG abilities. Of these, 12.9 % showed DM COG. Logistic regression models showed older age, being non-white, greater temporal discounting, and greater risk aversion were associated with higher odds of being in the DM COG group. Education, income, depressive symptoms, and impulsivity were not associated with a discrepancy. Only demographic associations (age, sex, and race) remained significant in a fully adjusted model with terms included for all factors. Conclusion These results support the consideration of decision making and cognition as potentially separate constructs. PMID:25995167
When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.
Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko
2015-08-01
When unique identifiers are unavailable, successful record linkage depends greatly on data quality and on the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, in the implementation of linkage methodology, and in validation methods. This simulation study aimed to understand the data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers. We systematically varied discriminative power, rates of missingness and error, and file size to cover a range of linkage patterns and difficulties. We assessed the performance difference between linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage in generating linkages with a better trade-off between sensitivity and PPV regardless of data quality. However, with low rates of missingness and error in the data, deterministic linkage performed only marginally worse. The implementation of deterministic linkage in SAS took less than 1 min, while probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rates of missingness and error in the linkage variables are key to choosing between linkage methods. In general, probabilistic linkage was the better choice, but for exceptionally good quality data (<5% error), deterministic linkage was the more resource-efficient choice. Copyright © 2015 Elsevier Inc. All rights reserved.
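The trade-off described above can be made concrete with a toy comparison. The records, field names, and agreement weights below are hypothetical illustrations, not the study's simulation design.

```python
# Toy deterministic vs. probabilistic linkage of two small files.
file_a = [{"id": 1, "last": "smith", "yob": 1980},
          {"id": 2, "last": "jones", "yob": 1975}]
file_b = [{"id": 10, "last": "smith", "yob": 1980},   # error-free true match for id 1
          {"id": 11, "last": "jnoes", "yob": 1975}]   # typo: true match for id 2

def deterministic_link(a, b):
    """Link only on exact agreement of every identifier (high PPV, lower sensitivity)."""
    return [(ra["id"], rb["id"]) for ra in a for rb in b
            if ra["last"] == rb["last"] and ra["yob"] == rb["yob"]]

def probabilistic_link(a, b, threshold=1.0):
    """Score each pair with simple agreement weights; tolerate one disagreeing field."""
    links = []
    for ra in a:
        for rb in b:
            score = 0.0
            score += 2.0 if ra["last"] == rb["last"] else -1.0  # assumed match weights
            score += 2.0 if ra["yob"] == rb["yob"] else -1.0
            if score >= threshold:
                links.append((ra["id"], rb["id"]))
    return links

print(deterministic_link(file_a, file_b))   # finds only the error-free pair: [(1, 10)]
print(probabilistic_link(file_a, file_b))   # also recovers the pair with the typo
```

The exact-match rule never produces a false positive here, while the scored rule trades a little PPV risk for the extra sensitivity the abstract describes.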
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
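A minimal, non-spatial sketch of the operator-splitting idea behind such hybrid solvers couples deterministic integration of an abundant species to exact stochastic jumps of a rare one. All species names and rate constants here are illustrative assumptions; the actual method couples a PDE solver to Smoldyn's particle-based simulator.

```python
import random

def hybrid_step(x_det, n_stoch, dt, rng):
    """One operator-splitting step of a toy hybrid deterministic-stochastic scheme.

    x_det   : abundant species, advanced deterministically (dx/dt = k_prod*n - k_deg*x)
    n_stoch : rare species copy number, advanced by exact birth-death jumps
    """
    k_on, k_off = 0.5, 0.3      # birth/death rates of the rare species (assumed)
    k_prod, k_deg = 1.0, 0.1    # production/degradation of the abundant one (assumed)
    # Stochastic part: Gillespie-style jumps confined to [0, dt]
    t = 0.0
    while True:
        total = k_on + k_off * n_stoch
        tau = rng.expovariate(total)          # waiting time to the next jump
        if t + tau > dt:
            break
        t += tau
        if rng.random() < k_on / total:
            n_stoch += 1                      # birth
        else:
            n_stoch -= 1                      # death (rate 0 when n_stoch == 0)
    # Deterministic part: forward-Euler update driven by the current copy number
    x_det += dt * (k_prod * n_stoch - k_deg * x_det)
    return x_det, n_stoch

rng = random.Random(1)
x, n = 0.0, 0
for _ in range(2000):
    x, n = hybrid_step(x, n, 0.01, rng)
# n fluctuates around its birth-death mean k_on/k_off; x relaxes toward k_prod*n/k_deg
```

In a spatially resolved solver the deterministic half becomes a reaction-diffusion PDE step and the stochastic half a particle update, but the alternation per time step is the same.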
Bayesian analysis of deterministic and stochastic prisoner's dilemma games
Howard Kunreuther
2009-08-01
This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.
Deterministic sensitivity analysis for the numerical simulation of contaminants transport
Marchand, E.
2007-12-01
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
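The derivative-plus-SVD idea can be sketched on a hypothetical three-parameter model. The model and parameter values below are illustrative stand-ins, not the Darcy flow/transport model or ANDRA data, and the Jacobian is obtained by finite differences rather than by the manual or automatic differentiation the study compares.

```python
import numpy as np

def model(p):
    """Toy scalar outputs depending nonlinearly on 3 input parameters (assumed form)."""
    k, phi, d = p
    return np.array([k * phi, k * np.exp(-d), phi + 0.1 * d])

def jacobian(f, p, eps=1e-6):
    """Forward finite-difference Jacobian of f at the nominal point p."""
    f0 = f(p)
    J = np.zeros((len(f0), len(p)))
    for j in range(len(p)):
        dp = p.copy()
        dp[j] += eps
        J[:, j] = (f(dp) - f0) / eps
    return J

p0 = np.array([2.0, 0.3, 1.0])      # nominal parameter values (illustrative)
J = jacobian(model, p0)
# SVD of the derivative: right singular vectors rank input-space directions
# by their local influence on the outputs, which is the core of the
# deterministic sensitivity approach described above.
U, s, Vt = np.linalg.svd(J)
most_influential = Vt[0]            # direction of greatest local sensitivity
print(s)                            # singular values, in decreasing order
```

Because only one Jacobian evaluation is needed, this local analysis costs a handful of model runs, versus the thousands typically needed by the Monte Carlo alternative.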
Deterministic global optimization an introduction to the diagonal approach
Sergeyev, Yaroslav D
2017-01-01
This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...
Deterministic Earthquake Hazard Assessment by Public Agencies in California
Mualchin, L.
2005-12-01
Even in its short recorded history, California has experienced a number of damaging earthquakes that have resulted in new codes and other legislation for public safety. In particular, the 1971 San Fernando earthquake produced some of the most lasting results, such as the Hospital Safety Act, the Strong Motion Instrumentation Program, the Alquist-Priolo Special Studies Zone Act, and the California Department of Transportation's (Caltrans) fault-based deterministic seismic hazard (DSH) map. The latter product provides values for earthquake ground motions based on Maximum Credible Earthquakes (MCEs), defined as the largest earthquakes that can reasonably be expected on faults in the current tectonic regime. For surface fault rupture displacement hazards, detailed studies of the same faults apply. Originally, hospitals, dams, and other critical facilities used seismic design criteria based on deterministic seismic hazard analyses (DSHA). However, probabilistic methods grew and took hold by introducing earthquake design criteria based on time factors and by quantifying "uncertainties" through procedures such as logic trees. These probabilistic seismic hazard analyses (PSHA) ignored the DSH approach, and some agencies were influenced to adopt only the PSHA method. However, deficiencies in the PSHA method are becoming recognized, and its use is now the focus of strong debate. Caltrans is in the process of producing the fourth edition of its DSH map. Caltrans prefers the DSH method because it believes it is more realistic than the probabilistic method for assessing earthquake hazards that may affect critical facilities, and is the best available method for ensuring public safety. Its time-invariant values help produce robust design criteria that are soundly based on physical evidence, and it is the method that leaves the least opportunity for unwelcome surprises.
Diffusion in Deterministic Interacting Lattice Systems
Medenjak, Marko; Klobas, Katja; Prosen, Tomaž
2017-09-01
We study reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining exact expressions for the current time-autocorrelation function we are able to calculate linear response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. The exact analytical results are corroborated by Monte Carlo simulations.
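For a sense of what extracting a diffusion constant from microscopic dynamics looks like, the sketch below estimates D from the mean-squared displacement of a simple unbiased lattice walker. This is a stochastic toy stand-in, not the deterministic hard-core dynamics of the paper, and it uses the MSD route rather than the current-autocorrelation (Green-Kubo) route.

```python
import random

def msd_diffusion_constant(steps=500, walkers=2000, seed=0):
    """Estimate the 1D diffusion constant from mean-squared displacement.

    For an unbiased +/-1 lattice walk with unit time step, MSD(t) = t,
    and in one dimension MSD = 2*D*t, so the estimate converges to D = 0.5.
    """
    rng = random.Random(seed)
    positions = [0] * walkers
    for _ in range(steps):
        for i in range(walkers):
            positions[i] += 1 if rng.random() < 0.5 else -1
    msd = sum(x * x for x in positions) / walkers   # ensemble-averaged MSD
    return msd / (2 * steps)

D = msd_diffusion_constant()
print(D)   # close to 0.5
```

In the ballistic regime of the paper's model, MSD would instead grow as t², and in the insulating regime it would saturate, which is how the three transport types are distinguished.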
Discrepancy between snack choice intentions and behavior
Weijzen, P.L.G.; Graaf, de C.; Dijksterhuis, G.B.
2008-01-01
Objective To investigate dietary constructs that affect the discrepancy between intentioned and actual snack choice. Design Participants indicated their intentioned snack choice from a set of 4 snacks (2 healthful, 2 unhealthful). One week later, they actually chose a snack from the same set. Within
Consequences of discrepancies on verified material balances
Jaech, J.L.; Hough, C.G.
1983-01-01
There exists a gap between the way item discrepancies found in an IAEA inspection are treated in practice and how they are treated in the IAEA Safeguards Technical Manual, Part F, Statistics. In the latter case, the existence of even a single item discrepancy is cause for rejection of the facility data, and probabilities of detection for given inspection plans are calculated on this premise. In practice, although discrepancies may be noted in inspection reports, they in no sense lead to rejection of the facility data, i.e., to "detection". Clearly, however, discrepancies affect the integrity of the material balance, and this effect may well be of dominant importance compared to that of small measurement biases. This paper provides a quantitative evaluation of the effect of item discrepancies on the facility MUF. The Ĝ statistic is introduced; it is analogous to the familiar D̂ statistic used to quantify the effects of small biases. Thus, just as (MUF-D̂) is the facility MUF adjusted for the inspector's variables measurements, so is (MUF-D̂-Ĝ) the MUF adjusted for both the variables and attributes measurements, where it is the attributes inspection that detects item discrepancies. The distribution of (MUF-D̂-Ĝ) is approximated by a Pearson distribution after finding the first four moments. Both the number of discrepancies and their size and sign distribution are treated as random variables. Assuming, then, that "detection" occurs when (MUF-D̂-Ĝ) differs significantly from zero, procedures for calculating effectiveness are derived. Some generic results on effectiveness are included. These results apply either to the case where (MUF-D̂-Ĝ) is treated as the single statistic, or to the two-step procedure in which the facility's data are first examined using (D̂+Ĝ) as
Soriano Pena, A.; Lopez Arroyo, A.; Roesset, J.M.
1976-01-01
The probabilistic and deterministic approaches for calculating the seismic risk of nuclear power plants are both applied to a particular case in Southern Spain. The results obtained by both methods when varying the input data are presented, and some conclusions are drawn regarding the applicability of the methods, their reliability, and their sensitivity to changes.
A mathematical theory for deterministic quantum mechanics
Hooft, Gerard ' t [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)
2007-05-15
Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.
Design of deterministic OS for SPLC
Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop
2012-01-01
Existing safety PLCs for use in nuclear power plants operate on priority-based scheduling, in which the highest-priority task runs first. This type of scheduling determines processing priorities when there are multiple requests for processing or when resources available for processing are scarce, guaranteeing execution of higher-priority tasks. It is prone to exhaustion of resources and to continuous preemption by high-priority devices, so there is uncertainty in every period as to the smooth running of the overall system. Hence, it is difficult to apply this type of scheme where deterministic operation is required, such as in a nuclear power plant. Also, existing PLCs either have no output logic for redundant device selection, or that logic is fixed; as a result they are extremely inefficient for redundant systems such as that of a nuclear power plant, and their use is limited. Therefore, functional modules that can manage and control all devices need to be developed by making the assignment of priorities among devices more flexible. A management module should be able to schedule all devices of the system, manage resources, analyze the states of the devices, give warnings in abnormal situations such as device failure or resource scarcity, and decide how to handle them. The management module should also have output logic for device redundancy, as well as deterministic processing capabilities with regard to device interrupt events
Lana Salameh
2018-01-01
Objectives: Medication errors are among the most common causes of morbidity and mortality in hospital settings. Among these errors are discrepancies identified during transfer of patients from one care unit to another, from one physician's care to another, or upon patient discharge. The aims of this study were to identify the prevalence and types of medication discrepancies at the time of hospital admission to a tertiary care teaching hospital in Jordan and to identify risk factors affecting the occurrence of these discrepancies. Methods: A three-month prospective observational study was conducted at the Department of Internal Medicine at Jordan University Hospital. During the study period, 200 patients were selected using convenience sampling, and a pre-prepared data collection form was used for data collection. The pre-admission and admission medications were then compared to identify any possible discrepancies, and all discrepancies were discussed with the responsible resident to classify them as intentional (documentation errors) or unintentional. Linear regression analysis was performed to assess risk factors associated with the occurrence of unintentional discrepancies. Results: A total of 412 medication discrepancies were identified at the time of hospital admission. Among them, 144 (35%) were identified as unintentional while the remaining 268 (65%) were identified as intentional discrepancies. Ninety-four patients (47%) were found to have at least one unintentional discrepancy and 92 patients (46%) had at least one documentation error. Among the unintentional discrepancies, 97 (67%) were found to be associated with potential harm/deterioration to the patients. Increasing patient age (beta = 0.195, p-value = .013) and being treated by female residents (beta = 0.139, p-value = .045) were significantly associated with a higher number of discrepancies. Conclusion: The prevalence of
Simiu, Emil
2002-01-01
The classical Melnikov method provides information on the behavior of deterministic planar systems that may exhibit transitions, i.e. escapes from and captures into preferred regions of phase space. This book develops a unified treatment of deterministic and stochastic systems that extends the applicability of the Melnikov method to physically realizable stochastic planar systems with additive, state-dependent, white, colored, or dichotomous noise. The extended Melnikov method yields the novel result that motions with transitions are chaotic regardless of whether the excitation is deterministic or stochastic. It explains the role in the occurrence of transitions of the characteristics of the system and its deterministic or stochastic excitation, and is a powerful modeling and identification tool. The book is designed primarily for readers interested in applications. The level of preparation required corresponds to the equivalent of a first-year graduate course in applied mathematics. No previous exposure to d...
In vitro analysis of the marginal adaptation and discrepancy of stainless steel crowns
Mulder, Riaan; Medhat, Rasha; Mohamed, Nadia
2018-01-01
Abstract Aim: The purpose of the study was to assess the marginal adaptation and discrepancy of SSCs, and to evaluate differences in adaptation and discrepancy between the four surfaces (mesial, lingual, distal, and buccal). Methods: The placement of stainless steel crowns was completed on a phantom head in accordance with the clinical technique. An ideal tooth preparation was made and this 'master tooth' was duplicated to achieve a sample size of 15. The stainless steel crowns were placed, trimmed, and cemented as per the clinical technique. The cemented stainless steel crowns were analyzed under 100× stereomicroscope magnification, and the marginal adaptation and discrepancy of each specimen was measured every 2 µm. Results: All the specimens showed marginal adaptation and discrepancy. The lingual margin had a significantly better adaptation (p steel crown adaptation and discrepancy is an essential clinical step. PMID:29536024
Minaret, a deterministic neutron transport solver for nuclear core calculations
Moller, J-Y.; Lautard, J-J.
2011-01-01
We present MINARET, a deterministic transport solver for nuclear core calculations that solves the steady-state Boltzmann equation. The code follows the multigroup formalism to discretize the energy variable; it uses the discrete ordinates method for the angular variable and a discontinuous Galerkin finite element method (DGFEM) for the spatial discretization. The mesh is unstructured in 2D and semi-unstructured (cylindrical) in 3D. Curved triangles can be used to fit the exact geometry, and two different sets of basis functions are available for the curved elements. The transport solver is accelerated with a DSA method. Diffusion and SPN calculations are made possible by skipping the transport sweep in the source iteration. The transport calculations are parallelized with respect to the angular directions. Numerical results are presented for simple geometries and for the C5G7 benchmark, the JHR reactor and the ESFR (in 2D and 3D). Straight and curved finite element results are compared. (author)
Mechanics from Newton's laws to deterministic chaos
Scheck, Florian
2018-01-01
This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present 6th edition is updated and revised with more explanations, additional examples and problems with solutions, together with new sections on applications in science. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 150 problems ...
Deterministic Diffusion in Delayed Coupled Maps
Sozanski, M.
2005-01-01
Coupled Map Lattices (CMLs) are discrete-time, discrete-space dynamical systems used for modeling phenomena arising in nonlinear systems with many degrees of freedom. In this work, the dynamical and statistical properties of a modified version of the CML with global coupling are considered. The main modification of the model is the extension of the coupling over a set of local map states corresponding to different time iterations. The model is studied with both stochastic and chaotic one-dimensional local maps. Deterministic diffusion in the CML under variation of a control parameter is analyzed for unimodal maps. As a main result, simple relations between statistical and dynamical measures are found for the model, and cases where the nonlinear lattices can be substituted with simpler processes are presented. (author)
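A single-time-step, globally coupled CML of the kind this model generalizes can be sketched in a few lines. The logistic local map, the coupling strength, and the lattice size are illustrative choices; the paper's delayed-coupling modification over several past iterations is not included here.

```python
import random

def cml_step(x, eps, r=4.0):
    """One iteration of a globally (mean-field) coupled logistic map lattice.

    x   : list of local map states, each in [0, 1]
    eps : global coupling strength in [0, 1]
    """
    f = [r * xi * (1.0 - xi) for xi in x]       # local chaotic update
    mean_field = sum(f) / len(f)                # global coupling term
    # Convex combination of local dynamics and the mean field keeps states in [0, 1]
    return [(1.0 - eps) * fi + eps * mean_field for fi in f]

rng = random.Random(42)
state = [rng.random() for _ in range(100)]
for _ in range(1000):
    state = cml_step(state, eps=0.2)
# eps interpolates between independent chaotic maps (eps=0)
# and fully synchronized mean-field dynamics (eps=1).
```

The delayed variant studied above would make `mean_field` a function of states from several past iterations, which is what enriches the diffusive behavior.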
Deterministic effects of interventional radiology procedures
Shope, Thomas B.
1997-01-01
The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure- or equipment-related factors that may have contributed to the injuries. Reports submitted to the FDA under both mandatory and voluntary reporting requirements that described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissue necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures requiring extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)
Primality deterministic and primality probabilistic tests
Alfredo Rizzi
2007-10-01
In this paper the author comments on the importance of prime numbers in mathematics and in cryptography, recalling the seminal work of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that generate prime numbers; among them, Mersenne primes have interesting properties. There are also many conjectures that remain to be proved or disproved. Deterministic primality tests are algorithms that establish with certainty whether a number is prime. They are not applicable in many practical situations, for instance in public-key cryptography, because the computing time would be too long. Probabilistic primality tests instead test the null hypothesis that the number is prime. The paper comments on the most important statistical tests.
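The deterministic/probabilistic distinction can be made concrete with two standard tests: trial division, which is always correct but far too slow at cryptographic sizes, and Miller-Rabin, which is fast but gives only a probabilistic guarantee. Both are textbook algorithms; the round count and seed below are arbitrary choices.

```python
import random

def is_prime_deterministic(n):
    """Trial division: always correct, but O(sqrt(n)) — impractical for large n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_prime_miller_rabin(n, rounds=20, rng=random.Random(0)):
    """Miller-Rabin: a composite survives one round with probability < 1/4."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:           # write n-1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)        # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a is a witness: n is certainly composite
    return True                 # probably prime

print(is_prime_deterministic(104729))   # True (104729 is the 10000th prime)
print(is_prime_miller_rabin(104729))    # True
print(is_prime_miller_rabin(561))       # False: 561 is a Carmichael number,
                                        # yet Miller-Rabin still catches it
```

Note the asymmetry the abstract describes: a `False` from Miller-Rabin is a certainty (a witness was found), while a `True` only fails to reject the hypothesis of primality.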
Deterministic sensitivity and uncertainty analysis for large-scale computer models
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that use computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability in existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The methods are applicable to low-level radioactive waste disposal system performance assessment
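The propagation step of a derivative-based uncertainty analysis can be illustrated with first-order (delta-method) moment propagation. The Jacobian and input variances below are illustrative assumptions; GRESS and ADGEN concern the automated computation of the derivatives themselves, not this algebra.

```python
import numpy as np

def propagate_uncertainty(J, input_cov):
    """First-order (delta-method) propagation: Cov_y ≈ J @ Cov_x @ J.T."""
    return J @ input_cov @ J.T

# Jacobian of a hypothetical two-output model at its nominal point
J = np.array([[2.0, 0.5],
              [0.0, 1.5]])
# Independent input uncertainties: standard deviations 0.1 and 0.2
input_cov = np.diag([0.1**2, 0.2**2])

output_cov = propagate_uncertainty(J, input_cov)
print(np.sqrt(np.diag(output_cov)))   # output standard deviations
```

One derivative evaluation replaces the many model runs a statistical (sampling) approach would need, which is exactly the efficiency argument made above; the price is that the result is only first-order accurate in the input spread.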
Scaling limits for the Lego discrepancy
Hameren, Andre van; Kleiss, Ronald
1999-01-01
For the Lego discrepancy with M bins, which is equivalent to a χ²-statistic with M bins, we present a procedure to calculate the moment generating function of the probability distribution perturbatively if M and N, the number of uniformly and randomly distributed data points, become large. Furthermore, we present a phase diagram for various limits of the probability distribution in terms of the standardized variable if M and N become infinite.
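The M-bin statistic the abstract refers to is straightforward to compute directly; a minimal sketch (function name assumed) for N points on [0, 1):

```python
import random

def lego_discrepancy(points, m_bins):
    """Chi-square-like statistic for m equal bins on [0, 1):
    sum over bins of (n_i - N/m)^2 / (N/m)."""
    n = len(points)
    counts = [0] * m_bins
    for x in points:
        counts[int(x * m_bins)] += 1   # x < 1, so index stays in range
    expected = n / m_bins
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(1)
pts = [random.random() for _ in range(10000)]
stat = lego_discrepancy(pts, 100)  # for uniform data, E[stat] ~ M - 1 = 99
```

For truly uniform points the statistic fluctuates around M − 1, which is the regime whose large-M, large-N limits the paper analyzes.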
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is the potential market impact. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. The three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch
Hahl, Sayuri K; Kremling, Andreas
2016-01-01
In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
The development of the deterministic nonlinear PDEs in particle physics to stochastic case
Abdelrahman, Mahmoud A. E.; Sohaly, M. A.
2018-06-01
In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used to solve the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. The control of the randomness input is also studied for the stability of the stochastic process solution.
Pfaff, W.; Vos, A.; Hanson, R.
2013-01-01
Metal nanostructures can be used to harvest and guide the emission of single photon emitters on-chip via surface plasmon polaritons. In order to develop and characterize photonic devices based on emitter-plasmon hybrid structures, a deterministic and scalable fabrication method for such structures
CSL model checking of deterministic and stochastic Petri nets
Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.
2006-01-01
Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under
Recognition of deterministic ETOL languages in logarithmic space
Jones, Neil D.; Skyum, Sven
1977-01-01
It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian...
Experimental aspects of deterministic secure quantum key distribution
Walenta, Nino; Korn, Dietmar; Puhlmann, Dirk; Felbinger, Timo; Hoffmann, Holger; Ostermeyer, Martin [Universitaet Potsdam (Germany). Institut fuer Physik; Bostroem, Kim [Universitaet Muenster (Germany)
2008-07-01
Most common protocols for quantum key distribution (QKD) use non-deterministic algorithms to establish a shared key. But deterministic implementations can allow for higher net key transfer rates and eavesdropping detection rates. The Ping-Pong coding scheme by Bostroem and Felbinger [1] employs deterministic information encoding in entangled states, with its characteristic quantum channel from Bob to Alice and back to Bob. Based on a table-top implementation of this protocol with polarization-entangled photons, fundamental advantages as well as practical issues like transmission losses, photon storage and requirements for progress towards longer transmission distances are discussed and compared to non-deterministic protocols. Modifications of common protocols towards a deterministic quantum key distribution are addressed.
Yang, Xue; Lau, Joseph T F; Wang, Zixin; Ma, Yee-Ling; Lau, Mason C M
2018-08-01
Masculine role discrepancy and discrepancy stress occur when men perceive that they fail to live up to the ideal manhood derived from societal prescriptions. The present study examined the associations between masculine role discrepancy and two emotional and mental health problems (social anxiety and depressive symptoms), and potential mediation effects through discrepancy stress and self-esteem in a male general population. Based on random population-based sampling, 2000 male residents in Hong Kong were interviewed. Levels of masculine role discrepancy, discrepancy stress, self-esteem, social anxiety, and depressive symptoms were assessed by using validated scales. Results of structural equation modeling analysis indicated that the proposed model fit the sample well (χ²(118) = 832.34); significant associations were found among masculine role discrepancy, discrepancy stress, and emotional/mental health problems. We found that discrepancy stress significantly mediated the association between masculine role discrepancy and social anxiety, while self-esteem significantly mediated the associations between masculine role discrepancy and both social anxiety and depression. Study limitations mainly included the cross-sectional design and reliance on self-reported questionnaires. The associations between masculine discrepancy and social anxiety/depressive symptoms among men may be explained by the increase in discrepancy stress and decrease in self-esteem. The findings suggest needs and directions for future research on the relationship between masculine role discrepancy and men's mental health, the mechanisms involved, and interventions for improvement. Copyright © 2018. Published by Elsevier B.V.
Impact of Computerized Order Entry to Pharmacy Interface on Order-Infusion Pump Discrepancies
Rebecca A. Russell
2015-01-01
Background. The ability of safety technologies to decrease errors, harm, and risk to patients has yet to be demonstrated consistently. Objective. To compare discrepancies between medication and intravenous fluid (IVF) orders and bedside infusion pump settings within a pediatric intensive care unit (PICU) before and after implementation of an interface between computerized physician order entry (CPOE) and pharmacy systems. Methods. Within a 72-bed PICU, medication and IVF orders in the CPOE system and bedside infusion pump settings were collected. Rates of discrepancy were calculated and categorized by type. Results were compared to a study conducted prior to interface implementation. Expansion of the PICU also occurred between study periods. Results. Of 455 observations, the discrepancy rate decreased for IVF (p=0.01) compared to the previous study. The overall discrepancy rate for medications was unchanged; however, medications infusing without an order decreased (p<0.01), and orders without corresponding infusion increased (p<0.05). Conclusions. Following implementation of an interface between CPOE and pharmacy systems, fewer discrepancies between IVF orders and infusion pump settings were observed. Discrepancies for medications did not change, and some types of discrepancies increased. In addition to interface implementation, changes in healthcare delivery and workflow related to ICU expansion contributed to observed changes.
Deterministic calculations of radiation doses from brachytherapy seeds
Reis, Sergio Carneiro dos; Vasconcelos, Vanderley de; Santos, Ana Maria Matildes dos
2009-01-01
Brachytherapy is used for treating certain types of cancer by inserting radioactive sources into tumours. CDTN/CNEN is developing brachytherapy seeds to be used mainly in prostate cancer treatment. Dose calculations play a very significant role in the characterization of the developed seeds. The current state of the art in computational dosimetry relies on Monte Carlo methods using, for instance, MCNP codes. However, deterministic calculations have some advantages, as, for example, short computer time to find solutions. This paper presents a software developed to calculate doses in a two-dimensional space surrounding the seed, using a deterministic algorithm. The analysed seeds consist of capsules similar to IMC6711 (OncoSeed), which are commercially available. The exposure rates and absorbed doses are computed using the Sievert integral and the Meisberger third-order polynomial, respectively. The software also allows the isodose visualization at the surface plane. The user can choose between four different radionuclides (¹⁹²Ir, ¹⁹⁸Au, ¹³⁷Cs and ⁶⁰Co). He also has to enter as input data: the exposure rate constant; the source activity; the active length of the source; the number of segments in which the source will be divided; the total source length; the source diameter; and the actual and effective source thickness. The computed results were benchmarked against results from the literature, and the developed software will be used to support the characterization process of the source that is being developed at CDTN. The software was implemented using Borland Delphi in a Windows environment and is an alternative to Monte Carlo based codes. (author)
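To make the two ingredients named in the abstract concrete, a rough sketch follows. The quadrature is standard, but the attenuation argument `mu_t` and the polynomial coefficients are hypothetical placeholders for illustration only, not validated dosimetry data and not CDTN's code:

```python
import math

def sievert_integral(theta1, theta2, mu_t, n=1000):
    """Numerically integrate exp(-mu_t / cos(theta)) over [theta1, theta2]
    (trapezoidal rule). mu_t = capsule attenuation coefficient x wall thickness."""
    h = (theta2 - theta1) / n
    f = lambda th: math.exp(-mu_t / math.cos(th))
    s = 0.5 * (f(theta1) + f(theta2)) + sum(f(theta1 + i * h) for i in range(1, n))
    return s * h

def tissue_attenuation(r_cm, coeffs=(1.0, 0.01, -0.005, 0.0001)):
    """Meisberger-style third-order polynomial in radial distance r.
    Coefficients here are placeholders; real values are radionuclide-specific."""
    a, b, c, d = coeffs
    return a + b * r_cm + c * r_cm**2 + d * r_cm**3
```

In a deterministic code of this kind, the dose rate at a point combines the source activity and exposure-rate constant with the inverse-square factor, the Sievert integral for the capsule geometry, and the tissue attenuation polynomial.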
Prospects in deterministic three dimensional whole-core transport calculations
Sanchez, Richard
2012-01-01
The point we made in this paper is that, although detailed and precise three-dimensional (3D) whole-core transport calculations may be obtained in the future with massively parallel computers, they would have an application to only some of the problems of the nuclear industry, more precisely those regarding multiphysics or for methodology validation or nuclear safety calculations. On the other hand, typical design reactor cycle calculations comprising many one-point core calculations can have very strict constraints in computing time and will not directly benefit from the advances in computations in large scale computers. Consequently, in this paper we review some of the deterministic 3D transport methods which in the very near future may have potential for industrial applications and, even with low-order approximations such as a low resolution in energy, might represent an advantage as compared with present industrial methodology, for which one of the main approximations is due to power reconstruction. These methods comprise the response-matrix method and methods based on the two-dimensional (2D) method of characteristics, such as the fusion method.
Feng HE
2017-12-01
The state-of-the-art avionics system adopts switched networks for airborne communications. A major concern in the design of the networks is the end-to-end guarantee ability. Analytic methods have been developed to compute the worst-case delays according to the detailed configurations of flows and networks within the avionics context, such as network calculus and the trajectory approach. There is still no relevant method to make a rapid performance estimation according to some typical switched-networking features, such as networking scale, bandwidth utilization and average flow rate. The goal of this paper is to establish a deterministic upper bound analysis method by using these networking features instead of the complete network configurations. Two deterministic upper bounds are proposed from the network calculus perspective: one is for a basic estimation, and another shows the benefits from the grouping strategy. Besides, a mathematical expression for grouping ability is established based on the concept of network connecting degree, which illustrates the possibly minimal grouping benefit. For a fully connected network with 4 switches and 12 end systems, the grouping ability coming from the grouping strategy is 15–20%, which coincides with the statistical data (18–22%) from the actual grouping advantage. Compared with the complete network calculus analysis method for individual flows, the effectiveness of the two deterministic upper bounds is no less than 38% even with remarkably varied packet lengths. Finally, the paper illustrates the design process for an industrial Avionics Full DupleX switched Ethernet (AFDX) networking case according to the two deterministic upper bounds and shows that better control of network connecting, when designing a switched network, can improve the worst-case delays dramatically. Keywords: Deterministic bound, Grouping ability, Network calculus, Networking features, Switched networks
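The single-node network-calculus bound that such analyses build on is simple to state: for a token-bucket arrival curve α(t) = b + r·t served by a rate-latency curve β(t) = R·(t − T)⁺, the worst-case delay is T + b/R whenever r ≤ R. A sketch with assumed AFDX-like numbers (not from the paper):

```python
def worst_case_delay(b_bits, r_bps, big_r_bps, t_latency_s):
    """Max horizontal deviation between a token-bucket arrival curve
    (burst b, rate r) and a rate-latency service curve (rate R, latency T):
    D = T + b/R, valid when r <= R."""
    assert r_bps <= big_r_bps, "stability requires arrival rate <= service rate"
    return t_latency_s + b_bits / big_r_bps

# Assumed numbers: a 4000-bit burst from a 1 Mbit/s flow crossing a
# 100 Mbit/s output port with 16 us switching latency.
d = worst_case_delay(4000, 1e6, 100e6, 16e-6)   # 16 us + 40 us = 56 us
```

End-to-end bounds are obtained by concatenating such per-node service curves along the flow's path; the paper's contribution is to estimate bounds of this kind from coarse networking features rather than full per-flow configurations.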
Deterministic dense coding and entanglement entropy
Bourdon, P. S.; Gerjuoy, E.; McDonald, J. P.; Williams, H. T.
2008-01-01
We present an analytical study of the standard two-party deterministic dense-coding protocol, under which communication of perfectly distinguishable messages takes place via a qudit from a pair of nonmaximally entangled qudits in a pure state |ψ>. Our results include the following: (i) We prove that it is possible for a state |ψ> with lower entanglement entropy to support the sending of a greater number of perfectly distinguishable messages than one with higher entanglement entropy, confirming a result suggested via numerical analysis in Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. (ii) By explicit construction of families of local unitary operators, we verify, for dimensions d=3 and d=4, a conjecture of Mozes et al. about the minimum entanglement entropy that supports the sending of d+j messages, 2 ≤ j ≤ d−1; moreover, we show that the j=2 and j=d−1 cases of the conjecture are valid in all dimensions. (iii) Given that |ψ> allows the sending of K messages and has √λ₀ as its largest Schmidt coefficient, we show that the inequality λ₀ ≤ d/K, established by Wu et al. [Phys. Rev. A 73, 042311 (2006)], must actually take the form λ₀ < d/K if K=d+1, while our constructions of local unitaries show that equality can be realized if K=d+2 or K=2d−1.
Analysis of F-16 radar discrepancies
Riche, K. A.
1982-12-01
One hundred and eight aircraft were randomly selected from three USAF F-16 bases and examined. These aircraft included 63 single-seat F-16As and 45 two-seat F-16Bs and encompassed 8,525 sorties and 748 radar system write-ups. Programs supported by the Statistical Package for the Social Sciences (SPSS) were run on the data. Of the 748 discrepancies, over one-third of them occurred within three sorties of each other and half within six sorties. Sixteen percent of all aircraft which had a discrepancy within three sorties had another write-up within the next three sorties. Designated repeat/recurring write-ups represented one-third of all the instances in which the write-up separation interval was three sorties or less. This is an indication that maintenance is unable to correct equipment failures as they occur, most likely because the false alarm rate is too high and maintenance is unable to duplicate the error conditions on the ground for correct error diagnosis.
Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA
2017-03-28
Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.
Equivalence relations between deterministic and quantum mechanical systems
Hooft, G.
1988-01-01
Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that apparently indeterministic behavior typical for a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic, are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can ..., such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...
Operational State Complexity of Deterministic Unranked Tree Automata
Xiaoxue Piao
2010-08-01
We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n − 2^(n−1) − 1) vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.
Further Investigations of NIST Water Sphere Discrepancies
Broadhead, B.L.
2001-01-01
Measurements have been performed on a family of water spheres at the National Institute of Standards and Technology (NIST) facilities. These measurements are important for criticality safety studies in that, frequently, difficulties have arisen in predicting the reactivity of individually subcritical components assembled in a critical array. It has been postulated that errors in the neutron leakage from individual elements in the array could be responsible for these problems. In these NIST measurements, an accurate determination of the leakage from a fission spectrum, modified by water scattering, is available. Previously, results for 3-, 4-, and 5-in.-diam water-filled spheres, both with and without cadmium covers over the fission chambers, were presented for four fissionable materials: ²³⁵U, ²³⁸U, ²³⁷Np, and ²³⁹Pu. Results were also given for "dry" systems, in which the water spheres were drained of water, with the results corresponding to essentially measurements of unmoderated ²⁵²Cf spontaneous-fission neutrons. The calculated-to-experimental (C/E) values ranged from 0.94 to 1.01 for the dry systems and 0.93 to 1.05 for the wet systems, with experimental uncertainties ranging from 1.5 to 1.9%. These results indicated discrepancies that were clearly outside of the experimental uncertainties, and further investigation was suggested. This work updates the previous calculations with a comparison of the predicted C/E values with ENDF/B-V and ENDF/B-VI transport cross sections. Variations in the predicted C/E values that arise from the use of ENDF/B-V, ENDF/B-VI, ENDL92, and LLLDOS for the response fission cross sections are also tabulated. The use of both a 45-group NIST fission spectrum and a continuous-energy fission spectrum for ²⁵²Cf are evaluated. The use of the generalized-linear-least-squares (GLLSM) procedures to investigate the reported discrepancies in the water sphere results for ²³⁵U, ²³⁸U, ²³⁹Pu, and ²³⁷Np is reported herein. These studies
Discrepancy between C_λ and C_E
Williams, P.C.
1977-01-01
The conversion factors C_λ and C_E are used in relating ionization chamber readings (M) to absorbed dose in water for measurements made in phantoms irradiated with photons of quality λ and electrons of mean energy Ē respectively. New calculations of C_λ (Nahum, A.E., and Greening, J.R., 1976, Phys. Med. Biol., vol. 21, 862) have yielded values which differ by up to 5% from those quoted by ICRU (ICRU, 1969, Report 14, ICRU Publications, P.O. Box 30165, Washington, DC 20014). Nahum and Greening have also pointed out that the recommended values of C_λ and C_E for radiations of approximately the same primary electron energy should be the same, but differ by approximately 4%. Alternative explanations are offered for these discrepancies. If the ICRU values are corrected for the perturbation of the electron flux in the phantom by the introduction of a cavity, the ionization chamber, into the phantom, then the resulting values are in good agreement with those quoted by Nahum and Greening. The discrepancy between C_λ and C_E is the result of inconsistent definitions. The ICRU definition of C_E leads to a dose conversion factor which is dimensionally correct but is based on the assumption that the product M·N_c, where N_c is the exposure calibration factor for the ionization chamber at the calibration quality, 2 MV, can be identified as exposure, whereas this is only true at the calibration quality. More accurate definitions of C_λ and C_E are therefore proposed. (U.K.)
Deterministic Echo State Networks Based Stock Price Forecasting
Jingpei Dan
2014-01-01
Echo state networks (ESNs), as efficient and powerful computational models for approximating nonlinear dynamical systems, have been successfully applied in financial time series forecasting. Reservoir construction in standard ESNs relies on trial and error in real applications due to a series of randomized model building stages. A novel form of ESN with a deterministically constructed reservoir is competitive with the standard ESN in minimal complexity and in possibilities of optimization for ESN specifications. In this paper, forecasting performances of deterministic ESNs are investigated in stock price prediction applications. The experiment results on two benchmark datasets (Shanghai Composite Index and S&P 500) demonstrate that deterministic ESNs outperform the standard ESN in both accuracy and efficiency, which indicates the promise of deterministic ESNs for financial prediction.
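A minimal sketch of the "deterministically constructed reservoir" idea, in the spirit of minimum-complexity simple-cycle reservoirs; the topology, sizes and weights below are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def run_cycle_reservoir(u, n_res=50, w_cycle=0.9, w_in=0.5):
    """Drive a ring-topology reservoir (fully deterministic: no random
    weights) with a scalar input sequence u; return the state trajectory."""
    # Each unit feeds only its successor on a cycle, all with weight w_cycle.
    W = np.zeros((n_res, n_res))
    for i in range(n_res):
        W[i, (i - 1) % n_res] = w_cycle
    # Deterministic input weights with alternating signs instead of random ones.
    win = w_in * np.array([(-1.0) ** i for i in range(n_res)])
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + win * u_t)
        states.append(x.copy())
    return np.array(states)

# A linear readout would then be fit (e.g. by ridge regression) from these
# states to the forecasting target.
X = run_cycle_reservoir(np.sin(np.linspace(0.0, 6.28, 100)))
```

Because every weight is fixed by the construction, two runs on the same data give identical reservoirs, removing the trial-and-error stage of standard ESN building.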
The cointegrated vector autoregressive model with general deterministic terms
Johansen, Søren; Nielsen, Morten Ørregaard
2017-01-01
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are χ²-distributed.
Pseudo-random number generator based on asymptotic deterministic randomness
Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming
2008-06-01
A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
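The general construction described here (iterate a piecewise linear map, then symbolize the orbit) can be sketched as follows. This is a toy illustration of a chaos-based PRBG, not the authors' scheme, and it is not cryptographically secure as written:

```python
def tent_map(x, mu=1.99999):
    """Piecewise linear tent map on [0, 1]; mu just below 2 avoids the
    floating-point collapse that occurs at exactly mu = 2."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def chaotic_bits(seed, n_bits, burn_in=100):
    """Threshold the orbit at 0.5 to obtain a symbolic bit sequence."""
    x = seed
    for _ in range(burn_in):          # discard the transient
        x = tent_map(x)
    bits = []
    for _ in range(n_bits):
        x = tent_map(x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

b = chaotic_bits(0.123456789, 1024)
```

The abstract's point is that a plain construction like this leaks structure through its symbolic dynamics; the asymptotic deterministic randomness system adds a noninvertible nonlinear transform precisely to break the one-to-one correspondence between orbit and symbol sequence.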
Non-deterministic finite automata for power systems fault diagnostics
LINDEN, R.
2009-06-01
Full Text Available This paper introduces an application of non-deterministic finite automata to power system fault diagnosis. Automata for the simpler faults are presented, and the proposed system is compared with an established expert system.
The probabilistic approach and the deterministic licensing procedure
Fabian, H.; Feigel, A.; Gremm, O.
1984-01-01
If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks ''What does a safe plant look like?'' The answer cannot be given by a probabilistic procedure, but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are incomplete, not detailed enough or inconsistent, and where additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But in these cases, too, the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)
Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha
2015-01-01
To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) delivering a liveborn infant from 2001 through 2008 (N = 33,093 deliveries) and infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51 or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and PPV 99.3% (99.1-99.4), while the deterministic algorithm had sensitivity 74.5% (73.8-75.2) and PPV 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm, 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.
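The sensitivity and PPV figures quoted above follow from simple counts of true, missed and spurious links against the gold standard. A minimal sketch with hypothetical mother-infant id pairs (not the study's data or linkage algorithms):

```python
def sensitivity_ppv(true_pairs, predicted_pairs):
    """Sensitivity (recall) and positive predictive value (precision)
    of a record-linkage algorithm against a gold-standard set of pairs."""
    tp = len(true_pairs & predicted_pairs)   # correctly linked
    fn = len(true_pairs - predicted_pairs)   # missed links
    fp = len(predicted_pairs - true_pairs)   # spurious links
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv

# hypothetical gold standard of (mother_id, infant_id) pairs
gold = {(1, 101), (2, 102), (3, 103), (4, 104)}
# hypothetical algorithm output: 2 true links, 2 missed, 1 spurious
linked = {(1, 101), (2, 102), (5, 105)}
sens, ppv = sensitivity_ppv(gold, linked)    # 0.5 and 2/3
```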
Deterministic chaos in the pitting phenomena of passivable alloys
Hoerle, Stephane
1998-01-01
It was shown that electrochemical noise recorded in stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the severity of the material/solution combination. Thus, electrolyte composition ([Cl - ]/[NO 3 - ] ratio, pH), passive film thickness or alloy composition can change the deterministic features. A single pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which is a hint of a change with time in the pit behavior (propagation speed or mean). Modifications of electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors with different features (bifurcations) are also evidenced. Such bifurcations illuminate various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effect of the various parameters quite well. The analysis of the chaotic behaviors of a pit leads to a better understanding of propagation mechanisms and gives tools for pit monitoring. (author) [fr
Ogata, Norio
2006-09-01
The strategy to eliminate hepatitis B virus (HBV) infection by administering an HB vaccine is changing worldwide; however, this is not the case in Japan. An important concern about the HBV infection-preventing strategy in Japan may be that the assay methods for the antibody to hepatitis B surface antigen (anti-HBs) are not standardized. The minimum protective anti-HBs titer against HBV infection has been established as 10 mIU/ml by World Health Organization (WHO)-standardized assay methods worldwide, but in Japan protection is still judged by a "positive" result of the passive hemagglutination (PHA) method. We compared anti-HBs measurements in given samples among PHA (Mycell II, Institute of Immunology), chemiluminescent enzyme immunoassay (CLEIA) (Lumipulse, Fujirebio), and chemiluminescent immunoassay (CLIA) (Architect, Abbott), all of which are currently in wide use in Japan. First, anti-HBs measurements in serum from individuals who received a yeast-derived recombinant HB vaccine composed of the major surface protein of either subtype adr or subtype ayw were compared. The results clearly showed that in subtype adr-vaccinees CLIA underestimated the anti-HBs amount compared with CLEIA and PHA, but in ayw-vaccinees the discordance in the measurements among the three kits was not prominent. Second, anti-HBs measurements in standard or calibration solutions of each assay kit were compared. Surprisingly, CLEIA showed higher measurements in all three kit-associated standard or calibration solutions than CLIA. Thus, the anti-HBs titer of 10 mIU/ml is difficult to introduce in Japan as the minimum protective level against HBV infection. Efforts to standardize anti-HBs assay methods are needed to share international evidence about the HBV infection-preventing strategy.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity O(n) than a naive comparison of transitions O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
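For contrast with the paper's backward-depth approach, the baseline it improves upon, minimization by plain iterative partition refinement comparing transitions (Moore's O(n²) scheme), can be sketched as:

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Minimize a complete DFA by iterative partition refinement.
    delta[s][a] gives the successor of state s on symbol a.
    Returns the blocks of equivalent states (the minimal DFA's states)."""
    # initial partition: accepting vs. non-accepting states
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [b for b in partition if b]
    while True:
        # map each state to the index of its current block
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        new_partition = []
        for block in partition:
            # split a block by the signature of its outgoing transitions
            groups = {}
            for s in block:
                sig = tuple(block_of[delta[s][a]] for a in alphabet)
                groups.setdefault(sig, set()).add(s)
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):   # fixed point reached
            return new_partition
        partition = new_partition
```

For example, in a four-state DFA where states 1 and 2 both go to the accepting sink 3, the refinement merges 1 and 2 into one block.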
Cultural estrangement: the role of personal and societal value discrepancies.
Bernard, Mark M; Gebauer, Jochen E; Maio, Gregory R
2006-01-01
Study 1 examined whether cultural estrangement arises from discrepancies between personal and societal values (e.g., freedom) rather than from discrepancies in attitudes toward political (e.g., censorship) or mundane (e.g., pizza) objects. The relations between different types of value discrepancies, estrangement, subjective well-being, and need for uniqueness also were examined. Results indicated that personal-societal discrepancies in values and political attitudes predicted estrangement, whereas mundane attitude discrepancies were not related to estrangement. As expected, value discrepancies were the most powerful predictor of estrangement. Value discrepancies were not related to subjective well-being but fulfilled a need for uniqueness. Study 2 replicated the relations between value discrepancies, subjective well-being, and need for uniqueness while showing that a self-report measure of participants' values and a peer-report measure of the participants' values yielded the same pattern of value discrepancies. Together, the studies reveal theoretical and empirical benefits of conceptualizing cultural estrangement in terms of value discrepancies.
Deterministic transfer of two-dimensional materials by all-dry viscoelastic stamping
Castellanos-Gomez, Andres; Buscema, Michele; Molenaar, Rianda; Singh, Vibhor; Janssen, Laurens; Van der Zant, Herre S J; Steele, Gary A
2014-01-01
The deterministic transfer of two-dimensional crystals constitutes a crucial step towards the fabrication of heterostructures based on the artificial stacking of two-dimensional materials. Moreover, controlling the positioning of two-dimensional crystals facilitates their integration in complex devices, which enables the exploration of novel applications and the discovery of new phenomena in these materials. To date, deterministic transfer methods rely on the use of sacrificial polymer layers and wet chemistry to some extent. Here, we develop an all-dry transfer method that relies on viscoelastic stamps and does not employ any wet chemistry step. This is found to be very advantageous to freely suspend these materials as there are no capillary forces involved in the process. Moreover, the whole fabrication process is quick, efficient, clean and it can be performed with high yield. (letter)
Deterministic effects of the ionizing radiation
Raslawski, Elsa C.
2001-01-01
Full text: The deterministic effect is the somatic damage that appears when the radiation dose exceeds a minimum value or 'threshold dose'. Above this threshold dose, the frequency and seriousness of the damage increase with the dose given. Sixteen percent of patients younger than 15 years of age with a diagnosis of cancer have the possibility of a cure. The consequences of cancer treatment in children are very serious, as they are physically and emotionally developing. The seriousness of the delayed effects of radiation therapy depends on three factors: a) the treatment (dose of radiation, schedule of treatment, time of treatment, beam energy, treatment volume, distribution of the dose, simultaneous chemotherapy, etc.); b) the patient (state of development, patient predisposition, inherent sensitivity of tissue, the presence of other alterations, etc.); c) the tumor (degree of extension or infiltration, mechanical effects, etc.). The effect of radiation on normal tissue is related to cellular activity and the maturity of the tissue irradiated. Children have a mosaic of tissues in different stages of maturity at different moments in time. On the other hand, each tissue has a different pattern of development, so that sequelae are different in different irradiated tissues of the same patient. We should keep in mind that all the tissues are affected to some degree. Bone tissue evidences damage with growth delay and degree of calcification. Damage is small at 10 Gy; between 10 and 20 Gy growth arrest is partial, whereas at doses larger than 20 Gy growth arrest is complete. The central nervous system is the most affected, because the radiation injuries produce demyelination with or without focal or diffuse areas of necrosis in the white matter, causing character alterations, lower IQ and functional level, neurocognitive impairment, etc. The skin is also affected, showing different degrees of erythema as well as ulceration and necrosis, different degrees of
Efficient Asymptotic Preserving Deterministic methods for the Boltzmann Equation
2011-04-01
Department of Mathematics, University of Ferrara, Ferrara, Italy; Department of Mathematics and Computer Science, University of Catania, Catania, Italy (RTO-EN-AVT-194).
Quadratic Finite Element Method for 1D Deterministic Transport
Tolar, D R Jr.; Ferguson, J M
2004-01-01
In the discrete ordinates, or SN, numerical solution of the transport equation, both the spatial (r) and angular (Ω) dependences of the angular flux ψ(r, Ω) are modeled discretely. While significant effort has been devoted toward improving the spatial discretization of the angular flux, we focus on improving the angular discretization of ψ(r, Ω). Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable (μ) in developing the one-dimensional (1D) spherical geometry SN equations. We develop an algorithm that shows faster convergence with angular resolution than conventional SN algorithms
Activity modes selection for project crashing through deterministic simulation
Ashok Mohanty
2011-12-01
Full Text Available Purpose: The time-cost trade-off problem addressed by CPM-based analytical approaches assumes unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, activities can often be crashed only stepwise. Activity crashing for a discrete time-cost function is also known as the activity modes selection problem in project management. This problem is known to be NP-hard. Sophisticated optimization techniques such as Dynamic Programming, Integer Programming, Genetic Algorithms and Ant Colony Optimization have been used to find efficient solutions to the activity modes selection problem. The paper presents a simple method that can provide efficient solutions to the activity modes selection problem for project crashing. Design/methodology/approach: A simulation-based method implemented on an electronic spreadsheet is used to determine activity modes for project crashing. The method is illustrated with the help of an example. Findings: The paper shows that a simple approach based on a simple heuristic and deterministic simulation can give good results comparable to sophisticated optimization techniques. Research limitations/implications: The simulation-based crashing method presented in this paper is developed to return satisfactory solutions but not necessarily an optimal solution. Practical implications: The use of spreadsheets for solving Management Science and Operations Research problems makes the techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions and report generation, and allow users to perform what-if analysis. Originality/value: The paper presents the application of simulation implemented on a spreadsheet to determine efficient solutions to the discrete time-cost trade-off problem.
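A deterministic crashing heuristic of the general flavor described, though far simpler than the paper's spreadsheet simulation, can be sketched for a purely serial project, where every activity lies on the critical path. The activity data and the one-period crash steps below are illustrative assumptions:

```python
def greedy_crash(activities, target):
    """Greedily crash a serial project: repeatedly apply the cheapest
    remaining one-period crash until the target duration is met.
    activities: {name: (normal_duration, crash_limit, cost_per_period)}.
    Returns (achieved_duration, total_crash_cost)."""
    dur = {n: d for n, (d, _, _) in activities.items()}
    total_cost = 0
    while sum(dur.values()) > target:
        # candidates that can still be shortened by one period
        options = [(c, n) for n, (_, cd, c) in activities.items()
                   if dur[n] > cd]
        if not options:
            break  # target unreachable; return best achievable duration
        cost, name = min(options)   # cheapest crash first
        dur[name] -= 1
        total_cost += cost
    return sum(dur.values()), total_cost
```

For two serial activities A (5 days, crashable to 3 at 100/day) and B (4 days, crashable to 2 at 50/day), meeting a 6-day target costs 200: B is crashed twice before the more expensive A.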
Unresolved resonance self shielding calculation: causes and importance of discrepancies
Ribon, P.; Tellier, H.
1986-09-01
To compute the self shielding coefficient, it is necessary to know the point-wise cross-sections. In the unresolved resonance region, we do not know the parameters of each level but only the average parameters. Therefore we simulate the point-wise cross-section by random sampling of the energy levels and resonance parameters with respect to the Wigner law and the χ² distributions, and by computing the cross-section in the same way as in the resolved regions. The result of this statistical calculation obviously depends on the initial parameters but also on the method of sampling, on the formalism which is used to compute the cross-section or on the weighting neutron flux. In this paper, we will survey the main phenomena which can induce discrepancies in self shielding computations. Results are given for typical dilutions which occur in nuclear reactors. 8 refs
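The random sampling of a resonance ladder described above can be sketched as follows: spacings are drawn from the Wigner surmise P(s) = (πs/2)exp(−πs²/4), which has unit mean, by inverse-CDF sampling. The parameters are illustrative; the resonance widths, which the abstract says follow χ² distributions, are omitted here (they could be drawn with `random.gammavariate`).

```python
import math
import random

def wigner_spacing(mean_spacing, rng):
    """Sample a level spacing from the Wigner surmise via the inverse CDF
    F(s) = 1 - exp(-pi*s^2/4), scaled to the requested mean spacing."""
    u = rng.random()
    return mean_spacing * math.sqrt(-4.0 * math.log(1.0 - u) / math.pi)

def sample_ladder(e0, mean_spacing, n, seed=42):
    """Build a ladder of n resonance energies above e0 (illustrative units)."""
    rng = random.Random(seed)
    energies, e = [], e0
    for _ in range(n):
        e += wigner_spacing(mean_spacing, rng)
        energies.append(e)
    return energies
```

The sampled ladder reproduces the requested mean level spacing on average while exhibiting the level repulsion (few very small spacings) characteristic of the Wigner distribution.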
Jo, Su Yeon; Lee, Ju Mi; Kim, Hye Lim; Sin, Kyeong Hwa; Lee, Hyeon Ji; Chang, Chulhun Ludgerus; Kim, Hyung-Hoi
2016-01-01
Background ABO blood typing in pre-transfusion testing is a major component of the high workload in blood banks that therefore requires automation. We often experienced discrepant results from an automated system, especially weak serum reactions. We evaluated the discrepant results by the reference manual method to confirm ABO blood typing. Methods In total, 13,113 blood samples were tested with the AutoVue system; all samples were run in parallel with the reference manual method according to...
Deterministic sensitivity and uncertainty analysis for large-scale computer models
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
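The DUA idea of propagating parameter uncertainties through model derivatives can be illustrated with first-order propagation for uncorrelated parameters, σ_R² = Σᵢ (∂R/∂pᵢ)² σᵢ². GRESS and ADGEN obtain the derivatives analytically by computer calculus; the sketch below substitutes central differences and a hypothetical model response, so it illustrates the principle rather than those systems:

```python
def first_order_uncertainty(grad, sigmas):
    """First-order uncertainty in the response for uncorrelated parameters:
    sigma_R = sqrt(sum_i (dR/dp_i)^2 * sigma_i^2)."""
    return sum((g * s) ** 2 for g, s in zip(grad, sigmas)) ** 0.5

def central_diff_grad(f, p, h=1e-6):
    """Model derivatives by central differences (a stand-in for the
    analytic derivatives that computer calculus would provide)."""
    grad = []
    for i in range(len(p)):
        up, dn = p[:], p[:]
        up[i] += h
        dn[i] -= h
        grad.append((f(up) - f(dn)) / (2 * h))
    return grad

# hypothetical model response R(p) = p0 * p1 + p2^2
f = lambda p: p[0] * p[1] + p[2] ** 2
p = [2.0, 3.0, 1.0]
grad = central_diff_grad(f, p)                       # ~ [3, 2, 2]
sigma_R = first_order_uncertainty(grad, [0.1, 0.2, 0.05])
```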
Offshore platforms and deterministic ice actions: Kashagan phase 2 development: North Caspian Sea.
Croasdale, Ken [KRCA, Calgary (Canada); Jordaan, Ian [Ian Jordaan and Associates, St John's (Canada); Verlaan, Paul [Shell Development Kashagan, London (United Kingdom)
2011-07-01
The Kashagan development has to face the difficult conditions of the northern Caspian Sea. This paper investigated ice interaction scenarios and deterministic methods used on platform designs for the Kashagan development. The study presents first a review of the types of platforms in use and being designed for the Kashagan development. The various ice load scenarios and the structures used in each case are discussed. Vertical faced barriers, mobile drilling barges and sheet pile islands were used for the ice loads on vertical structures. Sloping faced barriers and islands of rock were used for the ice loads on sloping structures. Deterministic models such as the model in ISO 19906 were used to calculate the loads occurring with or without ice rubble in front of the structure. The results showed the importance of rubble build-up in front of wide structures in shallow water. Recommendations were provided for building efficient vertical and sloping faced barriers.
SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations
Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir
2014-01-01
The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep-penetration Monte Carlo (MC) shielding problem: a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory methods (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (a biased source distribution and an importance map) in automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim of this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to a focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and manual (i.e., analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the deterministic module (memory intense) with the broad-group library v7-27n19g, as opposed to the fine-group library v7-200n47g used with the MC module, to fully account for low-energy particle transport and secondary gamma emission. Compared with
The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes
Bogdanova, E. V.; Kuznetsov, A. N.
2017-01-01
The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long calculation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of the graduation project at the NRC "Kurchatov Institute" in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in the report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross-sections, diffusion coefficients, the effective multiplication factor and the distribution of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
Cristoforo Demartino
2018-01-01
Full Text Available This paper presents a numerical study on the deterministic and probabilistic serviceability assessment of footbridge vibrations due to a single walker crossing. The dynamic response of the footbridge is analyzed by means of modal analysis, considering only the first lateral and vertical modes. Single span footbridges with uniform mass distribution are considered, with different values of the span length, natural frequencies, mass, and structural damping and with different support conditions. The load induced by a single walker crossing the footbridge is modeled as a moving sinusoidal force either in the lateral or in the vertical direction. The variability of the characteristics of the load induced by walkers is modeled using probability distributions taken from the literature defining a Standard Population of walkers. Deterministic and probabilistic approaches were adopted to assess the peak response. Based on the results of the simulations, deterministic and probabilistic vibration serviceability assessment methods are proposed, not requiring numerical analyses. Finally, an example of the application of the proposed method to a truss steel footbridge is presented. The results highlight the advantages of the probabilistic procedure in terms of reliability quantification.
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
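The final filtering step implicit in such an approach, keeping only non-dominated points among the pooled k-best candidates, can be sketched as follows (minimization in every objective; the candidate list is illustrative, and the paper's ripple-spreading algorithm and theoretical conditions are not reproduced):

```python
def pareto_front(candidates):
    """Return the non-dominated points from a list of objective tuples,
    assuming minimization in every objective. A point is dominated if
    some other point is no worse in all objectives and differs in one."""
    front = []
    for c in candidates:
        dominated = any(
            all(o <= co for o, co in zip(other, c)) and other != c
            for other in candidates
        )
        if not dominated:
            front.append(c)
    return front
```

For candidates [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)], the point (3, 4) is dominated by (2, 3) and (5, 5) by (1, 5), leaving a three-point front.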
Deterministic and stochastic CTMC models from Zika disease transmission
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
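A minimal deterministic host-vector model of the general type described, an illustrative SIR/SI sketch with assumed rates rather than the authors' system, and its basic reproduction ratio can be written as:

```python
import math

def simulate_zika(beta_hv, beta_vh, gamma, mu, Nh, Nv, Ih0, days, dt=0.01):
    """Forward-Euler integration of a minimal host-vector SIR/SI model.
    beta_hv: vector-to-host transmission rate; beta_vh: host-to-vector;
    gamma: human recovery rate; mu: mosquito birth/death rate.
    Returns final (infectious humans, recovered humans)."""
    Sh, Ih, Rh = Nh - Ih0, float(Ih0), 0.0
    Sv, Iv = float(Nv), 0.0
    for _ in range(int(days / dt)):
        new_h = beta_hv * Sh * Iv / Nh      # new human infections
        new_v = beta_vh * Sv * Ih / Nh      # new mosquito infections
        dSh = -new_h
        dIh = new_h - gamma * Ih
        dRh = gamma * Ih
        dSv = mu * Nv - new_v - mu * Sv
        dIv = new_v - mu * Iv
        Sh, Ih, Rh = Sh + dt * dSh, Ih + dt * dIh, Rh + dt * dRh
        Sv, Iv = Sv + dt * dSv, Iv + dt * dIv
    return Ih, Rh

def basic_reproduction_ratio(beta_hv, beta_vh, gamma, mu, Nh, Nv):
    """R0 of the model above (square root form typical of vector-borne
    next-generation matrices)."""
    return math.sqrt(beta_hv * beta_vh * Nv / (Nh * gamma * mu))
```

With R0 > 1 the deterministic model always predicts an outbreak from any positive initial infection, which is exactly where the CTMC formulation adds value: it assigns a nonzero probability of early extinction.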
SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams
Zhu, T; Finlay, J; Mesina, C; Liu, H
2014-01-01
Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6-15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous medium
Discrepancies in Parents' and Children's Reports of Child Emotion Regulation
Hourigan, Shannon E.; Goodman, Kimberly L.; Southam-Gerow, Michael A.
2011-01-01
The ability to regulate one's emotions effectively has been linked with many aspects of well-being. The current study examined discrepancies between mothers' and children's reports of child emotion regulation. This investigation examined patterns of discrepancies for key aspects of emotion regulation (i.e., inhibition and dysregulated expression)…
Prasad, Soni; Lee, Damian J; Yuan, Judy Chia-Chun; Barao, Valentim A R; Shyamsunder, Nodesh; Sukotjo, Cortino
2012-01-01
Purpose. The purpose of this study was to evaluate the discrepancies between abstracts presented at the IADR meeting (2004-2005) and their full-text publications. Material and Methods. Abstracts from the Prosthodontic Section of the IADR meeting were obtained. The following information was collected: abstract title, number of authors, study design, statistical analysis, outcome, and funding source. PubMed was used to identify the full-text publication of each abstract. The discrepancies between the abstract and the full-text publication were examined, categorized as major or minor, and quantified. The data were collected and analyzed using descriptive analysis. The frequency and percentage of major and minor discrepancies were calculated. Results. A total of 109 (95.6%) articles showed changes from their abstracts. Seventy-four (65.0%) and 105 (92.0%) publications had at least one major and one minor discrepancy, respectively. Minor discrepancies were more prevalent (92.0%) than major discrepancies (65.0%). The most common minor discrepancy was observed in the title (80.7%), and the most common major discrepancies were seen in the results (48.2%). Conclusion. Minor discrepancies were more prevalent than major discrepancies. The data presented in this study may be useful for establishing a more comprehensive structured abstract requirement for future meetings.
Towards deterministic optical quantum computation with coherently driven atomic ensembles
Petrosyan, David
2005-01-01
Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons
Deterministic and efficient quantum cryptography based on Bell's theorem
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-01-01
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology
Hu Bing; Ye Binbin; Yang Yang; Zhu Kangshun; Kang Zhuang; Kuang Sichi; Luo Lin; Shan Hong
2011-01-01
Purpose: Our aim was to study the quantitative fiber tractography variations and patterns in patients with relapsing-remitting multiple sclerosis (RRMS) and to assess the correlation between quantitative fiber tractography and the Expanded Disability Status Scale (EDSS). Material and methods: Twenty-eight patients with RRMS and 28 age-matched healthy volunteers underwent a diffusion tensor MR imaging study. Quantitative deterministic and probabilistic fiber tractography were generated in all subjects, and mean numbers of tracked lines and fiber density were counted. Paired-samples t tests were used to compare tracked lines and fiber density in RRMS patients with those in controls. A bivariate linear regression model was used to determine the relationship between quantitative fiber tractography and EDSS in RRMS. Results: Both deterministic and probabilistic tractography's tracked lines and fiber density in RRMS patients were lower than those in controls (P < .001). Both deterministic and probabilistic tractography's tracked lines and fiber density were negatively correlated with EDSS in RRMS (P < .001). The fiber tract disruptions and reductions in RRMS were directly visualized on fiber tractography. Conclusion: Changes of white matter tracts can be detected by quantitative diffusion tensor fiber tractography and correlate with clinical impairment in RRMS.
Dillstroem, Peter; Bergman, Mats; Brickstad, Bjoern; Weilin Zang; Sattari-Far, Iradj; Andersson, Peder; Sund, Goeran; Dahlberg, Lars; Nilsson, Fred (Inspecta Technology AB, Stockholm (Sweden))
2008-07-01
SSM has supported research work for the further development of a previously developed procedure/handbook (SKI Report 99:49) for assessment of detected cracks and tolerance for defect analysis. During operative use of the handbook, needs were identified to update the deterministic part of the procedure and to introduce a new probabilistic flaw evaluation procedure. Another identified need was a better description of the theoretical basis of the computer program. The principal aim of the project has been to update the deterministic part of the recently developed procedure and to introduce a new probabilistic flaw evaluation procedure. Other objectives of the project have been to validate the conservatism of the procedure, make the procedure well defined and easy to use, and make the handbook that documents the procedure as complete as possible. The procedure/handbook and computer program ProSACC, Probabilistic Safety Assessment of Components with Cracks, have been extensively revised within this project. The major differences compared to the last revision are within the following areas: It is now possible to deal with a combination of deterministic and probabilistic data. It is possible to include J-controlled stable crack growth. The appendices on material data to be used for nuclear applications and on residual stresses are revised. A new deterministic safety evaluation system is included. The conservatism in the method for evaluation of the secondary stresses for ductile materials is reduced. A new geometry, a circular bar with a circumferential surface crack, has been introduced. The results of this project will be of use to SSM in safety assessments of components with cracks and in assessments of the interval between the inspections of components in nuclear power plants
Deterministic and heuristic models of forecasting spare parts demand
Ivan S. Milojević
2012-04-01
Full Text Available Knowing the demand of spare parts is the basis for successful spare parts inventory management. Inventory management has two aspects. The first one is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or have not been encompassed by models. The second aspect is optimization of the model parameters by means of inventory management. Supply items demand (asset demand is the expression of customers' needs in units in the desired time and it is one of the most important parameters in the inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through requisition or request. Given the conditions in which inventory management is considered, demand can be: - deterministic or stochastic, - stationary or nonstationary, - continuous or discrete, - satisfied or unsatisfied. The application of the maintenance concept is determined by the technological level of development of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance can be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to those vehicles that only have indicators of engine temperature - those that react only when the engine is overheated. This means that the maintenance concepts that can be applied are the traditional preventive maintenance and the corrective maintenance. In order to be applied in a real system, modeling and simulation methods require a completely regulated system and that is not the case with this spare parts supply system. Therefore, this method, which also enables the model development, cannot be applied. Deterministic models of forecasting are almost exclusively related to the concept of preventive maintenance. Maintenance procedures are planned in advance, in accordance with exploitation and time resources. Since the timing
Savenkov, S M
2002-01-01
Using the Mueller matrix representation in the basis of the matrices of amplitude and phase anisotropies, a generalized solution of the inverse problem of polarimetry for deterministic objects is obtained on the basis of incomplete Mueller matrices measured by the method of three input polarizations.
Savenkov, S.M.; Oberemok, Je.A.
2002-01-01
Using the Mueller matrix representation in the basis of the matrices of amplitude and phase anisotropies, a generalized solution of the inverse problem of polarimetry for deterministic objects is obtained on the basis of incomplete Mueller matrices measured by the method of three input polarizations
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
Ortega Antonio
2005-01-01
Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16,483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
Deterministic Predictions of Vessel Responses Based on Past Measurements
Nielsen, Ulrik Dam; Jensen, Jørgen Juncher
2017-01-01
The paper deals with a prediction procedure from which global wave-induced responses can be deterministically predicted a short time, 10-50 s, ahead of current time. The procedure relies on the autocorrelation function and takes into account prior measurements only; i.e. knowledge about wave...
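A minimal predictor in the same spirit, fitting AR coefficients to past measurements via the sample autocovariance (Yule-Walker) and iterating the model forward, might look like the following. This is a generic sketch, not the authors' procedure; the model order and the narrow-band test signal are our own choices:

```python
import numpy as np

def predict_ahead(x, order, steps):
    """Fit AR coefficients from the sample autocovariance (Yule-Walker)
    and iterate the model forward to predict `steps` samples ahead."""
    x = np.asarray(x, float)
    mu = x.mean()
    z = x - mu
    n = len(z)
    acov = np.array([np.dot(z[:n - k], z[k:]) / n for k in range(order + 1)])
    toeplitz = np.array([[acov[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(toeplitz, acov[1:])  # a[0] weights the most recent lag
    hist = list(z[-order:])                  # most recent samples, oldest first
    preds = []
    for _ in range(steps):
        nxt = float(np.dot(a, hist[::-1]))
        preds.append(nxt)
        hist = hist[1:] + [nxt]
    return np.array(preds) + mu

# a narrow-band "response" signal: an AR(2) model predicts a sinusoid almost exactly
t = np.arange(500)
x = np.sin(0.2 * t)
p = predict_ahead(x, order=2, steps=5)
```

For a pure sinusoid the Yule-Walker solution approaches a = (2 cos ω, -1), which reproduces the signal exactly; measurement noise and nonstationarity are what limit the usable prediction horizon in practice.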
About the Possibility of Creation of a Deterministic Unified Mechanics
Khomyakov, G.K.
2005-01-01
The possibility of creating a unified deterministic scheme of classical and quantum mechanics that preserves the achievements of both is discussed. It is shown that the canonical system of ordinary differential equations of Hamiltonian classical mechanics can be supplemented with a vector system of ordinary differential equations for the variables of the equations. The interpretational problems of quantum mechanics are considered
Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems
Harremoës, Poul; Carstensen, Niels Jacob
1994-01-01
An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated and has to be considered for engineering application. Stochastic models have a potential for ...
Deterministic multimode photonic device for quantum-information processing
Nielsen, Anne E. B.; Mølmer, Klaus
2010-01-01
We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...
Deterministic Chaos - Complex Chance out of Simple Necessity ...
This is a very lucid and lively book on deterministic chaos. Chaos is very common in nature. However, the understanding and realisation of its potential applications is very recent. Thus this book is a timely addition to the subject. There are several books on chaos and several more are being added every day. In spite of this ...
Line and lattice networks under deterministic interference models
Goseling, Jasper; Gastpar, Michael; Weber, Jos H.
Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of
Deterministic teleportation using single-photon entanglement as a resource
Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.
2012-01-01
We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...
Empirical and deterministic accuracies of across-population genomic prediction
Wientjes, Y.C.J.; Veerkamp, R.F.; Bijma, P.; Bovenhuis, H.; Schrooten, C.; Calus, M.P.L.
2015-01-01
Background: Differences in linkage disequilibrium and in allele substitution effects of QTL (quantitative trait loci) may hinder genomic prediction across populations. Our objective was to develop a deterministic formula to estimate the accuracy of across-population genomic prediction, for which
A Deterministic Approach to the Synchronization of Cellular Automata
Garcia, J.; Garcia, P.
2011-01-01
In this work we introduce a deterministic scheme of synchronization of linear and nonlinear cellular automata (CA) with complex behavior, connected through a master-slave coupling. By using a definition of Boolean derivative, we use the linear approximation of the automata to determine a function of coupling that promotes synchronization without perturbing all the sites of the slave system.
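The Boolean derivative underlying such a linear approximation can be tabulated directly for elementary CA rules. The sketch below is our own illustration (not the authors' coupling construction); it shows that a linear rule such as rule 90 has state-independent derivatives, which is exactly what makes the linear approximation exact for it:

```python
def eca(rule, l, c, r):
    """Output bit of an elementary CA rule for neighborhood (l, c, r)."""
    return (rule >> ((l << 2) | (c << 1) | r)) & 1

def boolean_derivative(rule, i):
    """Boolean derivative w.r.t. neighbor i (0 = left, 1 = center, 2 = right):
    f(x) XOR f(x with bit i flipped), tabulated over all 8 neighborhoods."""
    table = []
    for n in range(8):
        bits = [(n >> 2) & 1, (n >> 1) & 1, n & 1]
        flipped = list(bits)
        flipped[i] ^= 1
        table.append(eca(rule, *bits) ^ eca(rule, *flipped))
    return table

# rule 90 (left XOR right) is linear: its derivatives are constants
d90 = [boolean_derivative(90, i) for i in range(3)]
# rule 110 is nonlinear: its derivative w.r.t. the left neighbor varies with the state
d110 = [boolean_derivative(110, i) for i in range(3)]
```

For nonlinear rules the derivative table varies over neighborhoods, and the linear approximation used to build the coupling function is only a local one.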
Deterministic and Stochastic Study of Wind Farm Harmonic Currents
Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus
2010-01-01
Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic char...
Mixed motion in deterministic ratchets due to anisotropic permeability
Kulrattanarak, T.; Sman, van der R.G.M.; Lubbersen, Y.S.; Schroën, C.G.P.H.; Pham, H.T.M.; Sarro, P.M.; Boom, R.M.
2011-01-01
Nowadays microfluidic devices are becoming popular for cell/DNA sorting and fractionation. One class of these devices, namely deterministic ratchets, seems most promising for continuous fractionation applications of suspensions (Kulrattanarak et al., 2008 [1]). Next to the two main types of particle
Simulation of quantum computation : A deterministic event-based approach
Michielsen, K; De Raedt, K; De Raedt, H
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and
Simulation of Quantum Computation : A Deterministic Event-Based Approach
Michielsen, K.; Raedt, K. De; Raedt, H. De
2005-01-01
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and
Using a satisfiability solver to identify deterministic finite state automata
Heule, M.J.H.; Verwer, S.
2009-01-01
We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we
Deterministic mean-variance-optimal consumption and investment
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies
Langevin equation with the deterministic algebraically correlated noise
Ploszajczak, M.; Srokowski, T.
1995-01-01
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author)
Deterministic dense coding and faithful teleportation with multipartite graph states
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-01-01
We propose schemes to perform deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
Deterministic algorithms for multi-criteria Max-TSP
Manthey, Bodo
2012-01-01
We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of
Madden, Lauren; Seifried, Joyce; Farnum, Kerry; D'Armiento, Angela
2016-01-01
Discrepant events are often used by science educators to incite interest and excitement in learners, yet sometimes their results are farther-reaching. The following article describes how one such event--dissolving packing peanuts in acetone--led to a change in the course of a college-level elementary science teaching methods class and to the…
Hameren, Andreas Ferdinand Willem van
2001-01-01
Discrepancies play an important role in the study of uniformity properties of point sets. Their probability distributions are a help in the analysis of the efficiency of the Quasi Monte Carlo method of numerical integration, which uses point sets that are distributed more uniformly than sets of
Reamy, Allison M.; Kim, Kyungmin; Zarit, Steven H.; Whitlatch, Carol J.
2011-01-01
Purpose of the Study: We explore discrepancies in perceptions of values and care preferences between individuals with dementia (IWDs) and their family caregivers. Design and Methods: We interviewed 266 dyads consisting of an individual with mild to moderate dementia and his or her family caregiver to determine IWDs' beliefs for 5 values related to…
Key, A. P.; Yoder, P. J.; Stone, W. L.
2016-01-01
Background: Many children with autism spectrum disorder (ASD) demonstrate verbal communication disorders reflected in lower verbal than non-verbal abilities. The present study examined the extent to which this discrepancy is associated with atypical speech sound differentiation. Methods: Differences in the amplitude of auditory event-related…
Deterministic diffusion in flower-shaped billiards.
Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre
2002-08-01
We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
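The simple random walk baseline that Machta-Zwanzig-type approximations start from can be illustrated by estimating a diffusion coefficient from the mean-square displacement of an ensemble of walkers. This is a generic sketch under our own assumptions, unrelated to the billiard geometry itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusion_coefficient(steps, walkers, step_len=1.0):
    """Estimate D from the mean-square displacement of 1-D random walks,
    using <x^2(t)> ~ 2 D t for large t (so D ~ <x^2>/(2t))."""
    jumps = rng.choice([-step_len, step_len], size=(walkers, steps))
    x = jumps.sum(axis=1)          # final positions after `steps` jumps
    msd = np.mean(x ** 2)          # ensemble mean-square displacement
    return msd / (2 * steps)

D = diffusion_coefficient(steps=2000, walkers=5000)
```

For unit steps the exact value is D = 1/2; in a billiard the analogous estimate replaces the coin-flip jumps with the transition rates between traps, which is where the memory effects the abstract emphasizes enter.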
Effect of EHR user interface changes on internal prescription discrepancies.
Turchin, A; Sawarkar, A; Dementieva, Y A; Breydo, E; Ramelson, H
2014-01-01
To determine whether specific design interventions (changes in the user interface (UI)) of an electronic health record (EHR) medication module are associated with an increase or decrease in the incidence of contradictions between the structured and narrative components of electronic prescriptions (internal prescription discrepancies). We performed a retrospective analysis of 960,000 randomly selected electronic prescriptions generated in a single EHR between 01/2004 and 12/2011. Internal prescription discrepancies were identified using a validated natural language processing tool with recall of 76% and precision of 84%. A multivariable autoregressive integrated moving average (ARIMA) model was used to evaluate the effect of five UI changes in the EHR medication module on incidence of internal prescription discrepancies. Over the study period 175,725 (18.4%) prescriptions were found to have internal discrepancies. The highest rate of prescription discrepancies was observed in March 2006 (22.5%) and the lowest in March 2009 (15.0%). Addition of "as directed" option to the dropdown decreased prescription discrepancies by 195 / month (p = 0.0004). A non-interruptive alert that reminded providers to ensure that structured and narrative components did not contradict each other decreased prescription discrepancies by 145 / month (p = 0.03). Addition of a "Renew / Sign" button to the Medication module (a negative control) did not have an effect on prescription discrepancies. Several UI changes in the electronic medication module were effective in reducing the incidence of internal prescription discrepancies. Further research is needed to identify interventions that can completely eliminate this type of prescription error and their effects on patient outcomes.
Marchand, E
2007-12-15
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
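The SVD-based local sensitivity idea described above can be sketched generically: the singular vectors of the model's Jacobian rank input-parameter directions by their influence on the outputs. The function name and the toy Jacobian below are our own illustration, not ANDRA's model:

```python
import numpy as np

def sensitivity_directions(jacobian):
    """Local sensitivity analysis via SVD of the model derivative:
    singular values are the local amplification factors, and the rows
    of vt are the corresponding input-parameter directions."""
    u, s, vt = np.linalg.svd(np.asarray(jacobian, float), full_matrices=False)
    return s, vt

# toy model derivative: the output is 3x as sensitive to the first parameter
s, vt = sensitivity_directions([[3.0, 0.0], [0.0, 1.0]])
```

Because only one Jacobian evaluation (via direct or adjoint differentiation) is needed, this local analysis is far cheaper than a Monte Carlo sweep, at the price of being valid only near the chosen parameter point.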
WIMSD5, Deterministic Multigroup Reactor Lattice Calculations
2004-01-01
1 - Description of program or function: The Winfrith improved multigroup scheme (WIMS) is a general code for reactor lattice cell calculation on a wide range of reactor systems. In particular, the code will accept rod or plate fuel geometries in either regular arrays or in clusters and the energy group structure has been chosen primarily for thermal calculations. The basic library has been compiled with 14 fast groups, 13 resonance groups and 42 thermal groups, but the user is offered the choice of accurate solutions in many groups or rapid calculations in few groups. Temperature dependent thermal scattering matrices for a variety of scattering laws are included in the library for the principal moderators which include hydrogen, deuterium, graphite, beryllium and oxygen. WIMSD5 is a successor version of WIMS-D/4. 2 - Method of solution: The treatment of resonances is based on the use of equivalence theorems with a library of accurately evaluated resonance integrals for equivalent homogeneous systems at a variety of temperatures. The collision theory procedure gives accurate spectrum computations in the 69 groups of the library for the principal regions of the lattice using a simplified geometric representation of complicated lattice cells. The computed spectra are then used for the condensation of cross-sections to the number of groups selected for solution of the transport equation in detailed geometry. Solution of the transport equation is provided either by use of the Carlson DSN method or by collision probability methods. Leakage calculations including an allowance for streaming asymmetries may be made using either diffusion theory or the more elaborate B1-method. The output of the code provides Eigenvalues for the cases where a simple buckling mode is applicable or cell-averaged parameters for use in overall reactor calculations. Various reaction rate edits are provided for direct comparison with experimental measurements. 3 - Restrictions on the complexity of
Deterministic calculation of grey Dancoff factors in cluster cells with cylindrical outer boundaries
Jenisch Rodrigues, L.; Tullio de Vilhena, M.
2008-01-01
In the present work, the WIMSD code routine PIJM is modified to compute deterministic Dancoff factors by the collision probability definition in general arrangements of partially absorbing fuel rods. Collision probabilities are calculated by an efficient integration scheme of the third-order Bickley functions, which considers each cell region separately. The effectiveness of the method is assessed by comparing grey Dancoff factors as calculated by PIJM, with those available in the literature by the Monte Carlo method, for the irregular geometry of the Canadian CANDU and CANFLEX assemblies. Dancoff factors at several different fuel pin positions are found in very good agreement with the literature results. (orig.)
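The third-order Bickley function at the heart of such collision-probability integrations can be evaluated by straightforward quadrature of its defining integral. This naive midpoint-rule sketch is our own, not the efficient integration scheme described in the paper:

```python
import numpy as np

def bickley_ki(n, x, m=4000):
    """Bickley-Naylor function Ki_n(x) = integral over [0, pi/2] of
    cos^(n-1)(theta) * exp(-x / cos(theta)) d(theta), by the midpoint rule."""
    h = (np.pi / 2) / m
    theta = (np.arange(m) + 0.5) * h   # midpoints avoid the cos = 0 endpoint
    c = np.cos(theta)
    return float(np.sum(c ** (n - 1) * np.exp(-x / c)) * h)

ki3_0 = bickley_ki(3, 0.0)   # analytically pi/4
ki3_1 = bickley_ki(3, 1.0)   # strictly smaller: Ki_n decreases in x
```

Production lattice codes replace this brute-force quadrature with tabulations or rational approximations, since Ki3 is evaluated millions of times per collision-probability matrix.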
Nan Nwe Win
2017-01-01
Full Text Available Determination of hepatitis C virus (HCV) genotypes plays an important role in the direct-acting agent era. Discrepancies between HCV genotyping and serotyping assays are occasionally observed. Eighteen samples with discrepant results between genotyping and serotyping methods were analyzed. HCV serotyping and genotyping were based on the HCV nonstructural 4 (NS4) region and the 5′-untranslated region (5′-UTR), respectively. The HCV core and NS4 regions were chosen to be sequenced and were compared with the genotyping and serotyping results. Deep sequencing was also performed for the corresponding HCV NS4 regions. Seventeen out of 18 discrepant samples could be sequenced by the Sanger method. Both HCV core and NS4 sequences were concordant with the genotyping result from the 5′-UTR in all 17 samples. In cloning analysis of the HCV NS4 region, there were several amino acid variations, but each sequence was much closer to the peptide with the same genotype. Deep sequencing revealed that minor clones with different subgenotypes existed in two of the 17 samples. Genotyping by genome amplification showed high consistency, while several false reactions were detected by serotyping. The deep sequencing method also provides accurate genotyping results and may be useful for analyzing discrepant cases. HCV genotyping should be correctly determined before antiviral treatment.
Santamarina, A.
1991-01-01
A criticality-safety calculational scheme using the automated deterministic code system, APOLLO-BISTRO, has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross-sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ± 1% in ΔK/K and always looked consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments, with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps spacing the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculational scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)
Kraus SK
2017-06-01
Full Text Available Objectives: To evaluate the impact of a pharmacy-technician centered medication reconciliation (PTMR) program by identifying and quantifying medication discrepancies and outcomes of pharmacist medication reconciliation recommendations. Methods: A retrospective chart review was performed on two-hundred patients admitted to the internal medicine teaching services at Cooper University Hospital in Camden, NJ. Patients were selected using a stratified systematic sample approach and were included if they received a pharmacy technician medication history and a pharmacist medication reconciliation at any point during their hospital admission. Pharmacist-identified medication discrepancies were analyzed using descriptive statistics and bivariate analyses. Potential risk factors were identified using multivariate analyses, such as logistic regression and CART. The priority level of significance was set at 0.05. Results: Three-hundred and sixty-five medication discrepancies were identified out of the 200 included patients. The four most common discrepancies were omission (64.7%), non-formulary omission (16.2%), dose discrepancy (10.1%), and frequency discrepancy (4.1%). Twenty-two percent of pharmacist recommendations were implemented by the prescriber within 72 hours. Conclusion: A PTMR program with dedicated pharmacy technicians and pharmacists identifies many medication discrepancies at admission and provides opportunities for pharmacist reconciliation recommendations.
Prevalence of Gender Discrepancy in Internet Use in Nigeria ...
Nekky Umera
essence, the research sought to determine the prevalence of gender discrepancies in .... that the proportion of women Internet users in developing countries is much smaller than that of ..... equality: a contradiction in terminis? Computers in ...
Generic programming for deterministic neutron transport codes
Plagne, L.; Poncot, A.
2005-01-01
This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport codes design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to Sn code, where the matrix elements are computed on the fly, and to SPn code where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)
Error and discrepancy in radiology: inevitable or avoidable?
Brady, Adrian P.
2016-01-01
Abstract Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and ...
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-01-01
ABSTRACT Regards and Tribute: Late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul to rest in peace. Bolton’s ratios help in estimating overbite, overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships and identification of occlusal misfit produced by tooth size discrepancies. Aim: To determine any difference in tooth size discrepancy in a...
The INDC/NEANDC joint discrepancy file 1990
Patrick, B.H.; Kocherov, N.P.
1990-06-01
The International Nuclear Data Committee (INDC) and the Nuclear Energy Agency Nuclear Data Committee (NEANDC) maintain a close interest in nuclear data which exhibit discrepancies and, by making known the details of the disagreements, encouragement is given to the undertaking of new measurements as a means of eliminating the ambiguities. The previous discrepancy file was published by NEANDC and INDC in 1984. This document contains 10 papers and a separate abstract was prepared for each of them. Refs, figs and tabs
Deterministic ion beam material adding technology for high-precision optical surfaces.
Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin
2013-02-20
Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. The verification experiments are accomplished on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.
Learning about physical parameters: the importance of model discrepancy
Brynjarsdóttir, Jenný; O'Hagan, Anthony
2014-01-01
Science-based simulation models are widely used to predict the behavior of complex physical systems. It is also common to use observations of the physical system to solve the inverse problem, that is, to learn about the values of parameters within the model, a process often called calibration. The main goal of calibration is usually to improve the predictive performance of the simulator, but the values of the parameters in the model may also be of intrinsic scientific interest in their own right. In order to make appropriate use of observations of the physical system it is important to recognize model discrepancy, the difference between reality and the simulator output. We illustrate through a simple example that an analysis that does not account for model discrepancy may lead to biased and over-confident parameter estimates and predictions. The challenge with incorporating model discrepancy in statistical inverse problems is that it is confounded with the calibration parameters, a confounding that can only be resolved with meaningful priors. For our simple example, we model the model discrepancy via a Gaussian process and demonstrate that, by accounting for model discrepancy, our prediction within the range of the data is correct. However, only with realistic priors on the model discrepancy do we uncover the true parameter values. Through theoretical arguments we show that these findings are typical of the general problem of learning about physical parameters and the underlying physical system using science-based mechanistic models. (paper)
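The paper's central warning, that calibration which ignores model discrepancy yields biased parameter estimates, can be reproduced with a toy example. The linear law, the sinusoidal discrepancy and all numbers below are invented for illustration; the paper itself models the discrepancy with a Gaussian process:

```python
import math

theta_true = 2.0
xs = [0.1 * k for k in range(1, 31)]
# "Reality": the simulator's linear law plus a systematic model discrepancy
ys = [theta_true * x + 0.8 * math.sin(2.0 * x) for x in xs]

# Calibration that ignores discrepancy: least-squares fit of y = theta * x
theta_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(f"theta_hat = {theta_hat:.3f} (true value {theta_true})")
```

Even with noise-free data, the estimate is pulled away from the true value because the unmodeled discrepancy is absorbed into the calibration parameter.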
Cervical and incisal marginal discrepancy in ceramic laminate veneering materials: A SEM analysis
Hemalatha Ranganathan
2017-01-01
Full Text Available Context: Marginal discrepancy influenced by the choice of processing material used for the ceramic laminate veneers needs to be explored further for better clinical application. Aims: This study aimed to evaluate the amount of cervical and incisal marginal discrepancy associated with different ceramic laminate veneering materials. Settings and Design: This was an experimental, single-blinded, in vitro trial. Subjects and Methods: Ten central incisors were prepared for laminate veneers with 2 mm uniform reduction and heavy chamfer finish line. Ceramic laminate veneers fabricated over the prepared teeth using four different processing materials were categorized into four groups: Group I - aluminous porcelain veneers, Group II - lithium disilicate ceramic veneers, Group III - lithium disilicate-leucite-based veneers, Group IV - zirconia-based ceramic veneers. The cervical and incisal marginal discrepancy was measured using a scanning electron microscope. Statistical Analysis Used: ANOVA and post hoc Tukey honest significant difference (HSD) tests were used for statistical analysis. Results: The cervical and incisal marginal discrepancies for the four groups were Group I - 114.6 ± 4.3 μm, 132.5 ± 6.5 μm, Group II - 86.1 ± 6.3 μm, 105.4 ± 5.3 μm, Group III - 71.4 ± 4.4 μm, 91.3 ± 4.7 μm, and Group IV - 123.1 ± 4.1 μm, 142.0 ± 5.4 μm. ANOVA and post hoc Tukey HSD tests revealed a statistically significant difference between the four test specimens with regard to cervical marginal discrepancy. The cervical and incisal marginal discrepancies yielded F = 243.408, P < 0.001 and F = 180.844, P < 0.001, respectively. Conclusion: This study concluded that veneers fabricated using leucite-reinforced lithium disilicate exhibited the least marginal discrepancy, followed by lithium disilicate ceramic, aluminous porcelain, and zirconia-based ceramics. The marginal discrepancy was greater in the incisal region than in the cervical region in all the groups.
Szymanowski, Mariusz; Kryza, Maciej
2017-02-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for spatialization of air temperature, and in many studies their results prove better than those obtained by various one-dimensional techniques. In most previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of spatial interpolation. The main goal of the paper was to examine both of these assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models, and the cross-validation method was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
Talamo, A.; Gohar, Y.; Aliberti, G.; Zhong, Z.; Bournos, V.; Fokov, Y.; Kiyavitskaya, H.; Routkovskaya, C.; Serafimovich, I.
2010-01-01
In 1997, Bretscher calculated the effective delayed neutron fraction by the k-ratio method. Bretscher's approach is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Bretscher evaluated the effective delayed neutron fraction as the ratio between the delayed and total multiplication factors (therefore the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied using deterministic nuclear codes. The ENDF/B nuclear data libraries of the fuel isotopes (235U and 238U) have been processed by the NJOY code with and without the delayed neutron data to prepare multigroup WIMSD nuclear data libraries for the DRAGON code. The DRAGON code has been used for preparing the PARTISN macroscopic cross sections. This calculation methodology has been applied to the YALINA-Thermal assembly of Belarus. The assembly has been modeled and analyzed using the PARTISN code with 69 energy groups and 60 different material zones. The deterministic and Monte Carlo results for the effective delayed neutron fraction obtained by the k-ratio method agree very well. The results also agree with the values obtained by using the adjoint flux. (authors)
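The k-ratio formula itself is a one-liner; a minimal sketch follows, with multiplication factors that are hypothetical placeholders rather than YALINA-Thermal results:

```python
def beta_eff_k_ratio(k_total, k_prompt):
    """Bretscher's k-ratio method: the delayed multiplication factor is the
    difference k_total - k_prompt, and beta_eff is its ratio to k_total."""
    return (k_total - k_prompt) / k_total

# Hypothetical multiplication factors computed with and without delayed neutron data
k_total, k_prompt = 0.98000, 0.97232
print(f"beta_eff = {beta_eff_k_ratio(k_total, k_prompt):.5f}")
```

In practice the two factors come from two full transport calculations (here, PARTISN runs with and without the delayed neutron data in the library).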
A deterministic-probabilistic model for contaminant transport. User manual
Schwartz, F W; Crowe, A
1980-08-01
This manual describes a deterministic-probabilistic contaminant transport (DPCT) computer model designed to simulate mass transfer by ground-water movement in a vertical section of the earth's crust. The model can account for convection, dispersion, radioactive decay, and cation exchange for a single component. A velocity is calculated from the convective transport of the ground water for each reference particle in the modeled region; dispersion is accounted for in the particle motion by adding a random component to the deterministic motion. The model is sufficiently general to enable the user to specify virtually any type of water table or geologic configuration, and a variety of boundary conditions. A major emphasis in the model development has been placed on making the model simple to use, and information provided in the User Manual will permit changes to the computer code to be made relatively easily where required for specific applications. (author)
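The deterministic-plus-random particle motion described above can be sketched in one dimension in Python. The parameter values are arbitrary illustrations, not taken from the DPCT manual:

```python
import random

def track_particles(n_particles, n_steps, velocity, dt, dispersivity, seed=1):
    """1-D sketch of the DPCT idea: each particle moves deterministically with
    the ground-water velocity, plus a random component representing dispersion."""
    rng = random.Random(seed)
    sigma = (2.0 * dispersivity * velocity * dt) ** 0.5  # Fickian step size
    positions = [0.0] * n_particles
    for _ in range(n_steps):
        positions = [x + velocity * dt + rng.gauss(0.0, sigma) for x in positions]
    return positions

pos = track_particles(n_particles=500, n_steps=100, velocity=1.0, dt=0.1,
                      dispersivity=0.05)
mean = sum(pos) / len(pos)
print(f"mean plume position ~ {mean:.2f} (deterministic drift alone gives 10.0)")
```

The plume centre follows the deterministic convection while the spread of the particle cloud represents dispersion.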
Deterministic chaos at the ocean surface: applications and interpretations
A. J. Palmer
1998-01-01
Full Text Available Ocean surface, grazing-angle radar backscatter data from two separate experiments, one of which provided coincident time series of measured surface winds, were found to exhibit signatures of deterministic chaos. Evidence is presented that the lowest dimensional underlying dynamical system responsible for the radar backscatter chaos is that which governs the surface wind turbulence. Block-averaging time was found to be an important parameter for determining the degree of determinism in the data as measured by the correlation dimension, and by the performance of an artificial neural network in retrieving wind and stress from the radar returns, and in radar detection of an ocean internal wave. The correlation dimensions are lowered and the performance of the deterministic retrieval and detection algorithms are improved by averaging out the higher dimensional surface wave variability in the radar returns.
Deterministic Properties of Serially Connected Distributed Lag Models
Piotr Nowak
2013-01-01
Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable and/or in series (where the independent variable is also the dependent variable in the preceding model. This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory such as probability distributions and the central limit theorem. (original abstract)
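The key structural fact, that the lag distribution of models connected in series is the convolution of the component lag distributions (which is why central-limit-type arguments apply to long chains), can be illustrated directly. The component weights below are made up:

```python
def convolve(p, q):
    """Lag distribution of two lag models connected in series = convolution."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# A single component model spreads the effect over lags 0..2
component = [0.5, 0.3, 0.2]

# Connect four identical component models in series
composite = component
for _ in range(3):
    composite = convolve(composite, component)

mean_lag = sum(k * w for k, w in enumerate(composite))
print(f"total weight = {sum(composite):.3f}, mean lag = {mean_lag:.2f}")
```

The total weight stays 1 and the mean lags add (4 × 0.7 = 2.8), while the composite shape becomes increasingly bell-like as more models are chained.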
Deterministic Brownian motion generated from differential delay equations.
Lei, Jinzhi; Mackey, Michael C
2011-10-01
This paper addresses the question of how Brownian-like motion can arise from the solution of a deterministic differential delay equation. To study this we analytically examine the bifurcation properties of an apparently simple differential delay equation and then numerically investigate the probabilistic properties of chaotic solutions of the same equation. Our results show that solutions of the deterministic equation with randomly selected initial conditions display a Gaussian-like density at long times, but the densities are supported on an interval of finite measure. Using these chaotic solutions as velocities, we are able to produce Brownian-like motions, which show statistical properties akin to those of a classical Brownian motion over both short and long time scales. Several conjectures are formulated for the probabilistic properties of the solution of the differential delay equation. Numerical studies suggest that these conjectures could be "universal" for similar types of "chaotic" dynamics, but we have been unable to prove this.
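The construction, integrating a chaotic signal used as a velocity to obtain Brownian-like motion, can be sketched with a simple chaotic map standing in for the delay-equation solutions (the logistic map below is a hypothetical substitute, not the equation studied in the paper):

```python
def chaotic_walk(n_steps, dt=0.01, x0=0.123456):
    """Brownian-like motion built from a deterministic chaotic velocity.
    The logistic map supplies a chaotic iterate in [0, 1]; after centring,
    it plays the role of a zero-mean velocity that is integrated in time."""
    x, pos, path = x0, 0.0, []
    for _ in range(n_steps):
        x = 4.0 * x * (1.0 - x)   # chaotic iterate in [0, 1]
        pos += (x - 0.5) * dt     # centred 'velocity' times time step
        path.append(pos)
    return path

walk = chaotic_walk(10_000)
print(f"final displacement = {walk[-1]:.4f}")
```

Nothing random enters the computation; the diffusive appearance of the trajectory comes entirely from the deterministic chaos of the velocity signal.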
Deterministic blade row interactions in a centrifugal compressor stage
Kirtley, K. R.; Beach, T. A.
1991-01-01
The three-dimensional viscous flow in a low speed centrifugal compressor stage is simulated using an average passage Navier-Stokes analysis. The impeller discharge flow is of the jet/wake type with low momentum fluid in the shroud-pressure side corner coincident with the tip leakage vortex. This nonuniformity introduces periodic unsteadiness in the vane frame of reference. The effect of such deterministic unsteadiness on the time-mean is included in the analysis through the average passage stress, which allows the analysis of blade row interactions. The magnitude of the divergence of the deterministic unsteady stress is of the order of the divergence of the Reynolds stress over most of the span, from the impeller trailing edge to the vane throat. Although the potential effects on the blade trailing edge from the diffuser vane are small, strong secondary flows generated by the impeller degrade the performance of the diffuser vanes.
One-step deterministic multipartite entanglement purification with linear optics
Sheng, Yu-Bo [Department of Physics, Tsinghua University, Beijing 100084 (China); Long, Gui Lu, E-mail: gllong@tsinghua.edu.cn [Department of Physics, Tsinghua University, Beijing 100084 (China); Center for Atomic and Molecular NanoSciences, Tsinghua University, Beijing 100084 (China); Key Laboratory for Quantum Information and Measurements, Beijing 100084 (China); Deng, Fu-Guo [Department of Physics, Applied Optics Beijing Area Major Laboratory, Beijing Normal University, Beijing 100875 (China)
2012-01-09
We present a one-step deterministic multipartite entanglement purification scheme for an N-photon system in a Greenberger–Horne–Zeilinger state with linear optical elements. The parties in quantum communication can in principle obtain a maximally entangled state from each N-photon system with a success probability of 100%. That is, it does not largely consume the less-entangled photon systems, which is far different from other multipartite entanglement purification schemes. This feature may make this scheme more feasible in practical applications. -- Highlights: ► We proposed a deterministic entanglement purification scheme for GHZ states. ► The scheme uses only linear optical elements and has a success probability of 100%. ► The scheme gives a purified GHZ state in just one step.
Relationship of Deterministic Thinking With Loneliness and Depression in the Elderly
Mehdi Sharifi
2017-12-01
Conclusion According to the results, it can be said that deterministic thinking has a significant relationship with depression and sense of loneliness in older adults. Thus, deterministic thinking acts as a predictor of depression and sense of loneliness in older adults. Therefore, psychological interventions challenging the cognitive distortion of deterministic thinking, and attention to the mental health of older adults, are very important.
Ordinal optimization and its application to complex deterministic problems
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concerns today lack analyzable structures and almost always involve high level of difficulties and complexities in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexities remains to tax the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes makes the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic codes redaction, is based on a Poissonian description of the temporal occurrence, a negative exponential distribution of magnitude and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach stems from the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors like site effects, source characteristics such as the duration of the strong motion, and directivity that could significantly influence the expected motion at the site are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches in selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, the economic factors are relevant in the choice of the approach. The case history of 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
Langevin equation with the deterministic algebraically correlated noise
Ploszajczak, M. [Grand Accelerateur National d`Ions Lourds (GANIL), 14 - Caen (France); Srokowski, T. [Grand Accelerateur National d`Ions Lourds (GANIL), 14 - Caen (France)]|[Institute of Nuclear Physics, Cracow (Poland)
1995-12-31
Stochastic differential equations with the deterministic, algebraically correlated noise are solved for a few model problems. The chaotic force with both exponential and algebraic temporal correlations is generated by the adjoined extended Sinai billiard with periodic boundary conditions. The correspondence between the autocorrelation function for the chaotic force and both the survival probability and the asymptotic energy distribution of escaping particles is found. (author). 58 refs.
Beeping a Deterministic Time-Optimal Leader Election
Dufoulon , Fabien; Burman , Janna; Beauquier , Joffroy
2018-01-01
The beeping model is an extremely restrictive broadcast communication model that relies only on carrier sensing. In this model, we solve the leader election problem with an asymptotically optimal round complexity of O(D + log n), for a network of unknown size n and unknown diameter D (but with unique identifiers). Contrary to the best previously known algorithms in the same setting, the proposed one is deterministic. The techniques we introduce give a new insight as to how local constraints o...
Nodal deterministic simulation for problems of neutron shielding in multigroup formulation
Baptista, Josue Costa; Heringer, Juan Diego dos Santos; Santos, Luiz Fernando Trindade; Alves Filho, Hermes
2013-01-01
In this paper, we propose the use of some computational tools, implementing the SGF (Spectral Green's Function) numerical method, making use of a deterministic model of transport of neutral particles in the study and analysis of a known and simplified nuclear engineering problem, known in the literature as a neutron shielding problem, considering the model with two energy groups. These simulations are performed on the MatLab platform, version 7.0, and are presented and developed with the help of a computer simulator providing a user-friendly computer application.
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as the season of the year, the day of the week or the time of day. For deterministic radial distribution load flow studies, load is taken as constant. But load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
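The Monte Carlo wrapper around a deterministic load flow can be sketched as follows. The single-line feeder, its impedances and the load statistics are toy assumptions, not the paper's test system:

```python
import random
import statistics

def radial_voltage(p_load, q_load, v_source=1.0, r=0.05, x=0.04):
    """Deterministic 'load flow' for a single-line radial feeder (per-unit,
    approximate voltage-drop formula), a toy stand-in for a full solver."""
    return v_source - (r * p_load + x * q_load) / v_source

def probabilistic_load_flow(n_runs=2000, seed=7):
    """Monte Carlo wrapper: sample P and Q from their mean/std, solve the
    deterministic flow for each sample, and summarise the voltage distribution."""
    rng = random.Random(seed)
    volts = []
    for _ in range(n_runs):
        p = rng.gauss(0.8, 0.1)   # active power: mean 0.8 pu, std 0.1 pu
        q = rng.gauss(0.4, 0.05)  # reactive power: mean 0.4 pu, std 0.05 pu
        volts.append(radial_voltage(p, q))
    return statistics.mean(volts), statistics.stdev(volts)

mean_v, std_v = probabilistic_load_flow()
print(f"V ~ {mean_v:.4f} +/- {std_v:.4f} pu")
```

The probabilistic answer (a voltage distribution rather than a single value) is reconstructed from many deterministic solutions, exactly as the abstract describes.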
Deterministic and stochastic models for middle east respiratory syndrome (MERS)
Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning
2018-03-01
World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest MERS outbreak outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between an infected and a non-infected individual, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of free virus in the environment. Mathematical modeling is used to illustrate the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic model approach, using a Continuous Time Markov Chain (CTMC), is used to predict future states through random variables. For the models that were built, the threshold value takes the same form in the deterministic and stochastic models, and the probability of disease extinction can be computed with the stochastic model. Simulations for both models using several different parameters are shown, and the probability of disease extinction is compared for several initial conditions.
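A minimal CTMC sketch of the stochastic side, estimating the probability of disease extinction by Gillespie-style simulation of an SIR-type chain, is given below. The rates and population size are illustrative, not fitted MERS parameters:

```python
import random

def gillespie_extinction(beta, gamma, n_pop, i0, trials=500, seed=3):
    """CTMC (Gillespie) SIR sketch: fraction of runs in which the infection
    dies out before a sizeable outbreak. Branching-process theory predicts an
    extinction probability of about (1/R0)**i0 when R0 > 1."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        s, i = n_pop - i0, i0
        while 0 < i < n_pop // 10:          # stop at extinction or a clear outbreak
            rate_inf = beta * s * i / n_pop  # transmission rate
            rate_rec = gamma * i             # recovery rate
            if rng.random() < rate_inf / (rate_inf + rate_rec):
                s, i = s - 1, i + 1          # transmission event
            else:
                i -= 1                       # recovery event
        if i == 0:
            extinct += 1
    return extinct / trials

r0 = 2.0                                     # hypothetical reproduction number
p_ext = gillespie_extinction(beta=0.4, gamma=0.2, n_pop=10_000, i0=1)
print(f"extinction probability ~ {p_ext:.2f} (branching approximation: {1/r0:.2f})")
```

This is precisely the quantity the deterministic model cannot provide: with R0 above threshold the ODE always predicts an outbreak, while the CTMC assigns a positive probability to early extinction.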
Harper, W.V.; Gupta, S.K.
1983-10-01
A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas, the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited number of parameters capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
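A minimal LHS sketch, assuming two hypothetical flow-model parameters with invented ranges, shows the stratified sampling idea behind the statistical approach:

```python
import random

def latin_hypercube(n_samples, bounds, seed=42):
    """Latin Hypercube Sampling sketch: each parameter's range is split into
    n_samples equal strata; each stratum is sampled exactly once, and the
    strata are permuted independently per parameter."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, k in enumerate(strata):
            u = (k + rng.random()) / n_samples   # uniform point within stratum k
            samples[i][d] = lo + u * (hi - lo)
    return samples

# Eight samples over two hypothetical parameters (e.g. a conductivity and a head)
pts = latin_hypercube(8, bounds=[(1e-6, 1e-4), (0.0, 30.0)])
print(pts[0])
```

Because every stratum of every parameter is hit exactly once, LHS covers the parameter space far more evenly than plain random sampling for the same number of code runs, which is why it suits codes with a moderate number of parameters.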
Influence of wind energy forecast in deterministic and probabilistic sizing of reserves
Gil, A.; Torre, M. de la; Dominguez, T.; Rivas, R. [Red Electrica de Espana (REE), Madrid (Spain). Dept. Centro de Control Electrico
2010-07-01
One of the challenges in large-scale wind energy integration in electrical systems is coping with wind forecast uncertainties when sizing generation reserves. These reserves must be sized large enough that they do not compromise security of supply or the balance of the system, but economic efficiency must also be kept in mind. This paper describes two methods of sizing spinning reserves that take wind forecast uncertainties into account: one deterministic, using a probabilistic wind forecast, and one probabilistic, using stochastic variables. The deterministic method calculates the spinning reserve needed by adding components, each intended to cover a single source of uncertainty: demand errors, the biggest thermal generation loss and wind forecast errors. The probabilistic method assumes that demand forecast errors, short-term thermal group unavailability and wind forecast errors are independent stochastic variables and calculates the probability density function of the three variables combined. These methods are being used in the case of the Spanish peninsular system, in which wind energy accounted for 14% of the total electrical energy produced in the year 2009 and which is one of the systems in the world with the highest wind penetration levels. (orig.)
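The two sizing philosophies can be contrasted in a few lines. All magnitudes below (error standard deviations, unit size, outage probability) are invented for illustration:

```python
import random

def deterministic_reserve(demand_err, biggest_unit, wind_err):
    """Deterministic sizing: add one component per uncertainty source."""
    return demand_err + biggest_unit + wind_err

def probabilistic_reserve(coverage=0.999, n=200_000, seed=11):
    """Probabilistic sizing: treat the three errors as independent random
    variables, combine them by Monte Carlo (a stand-in for convolving their
    PDFs), and size the reserve at the desired coverage quantile."""
    rng = random.Random(seed)
    combined = []
    for _ in range(n):
        demand = rng.gauss(0, 300)                 # MW, demand forecast error
        unit = 1000 if rng.random() < 0.02 else 0  # MW, biggest-unit trip
        wind = rng.gauss(0, 400)                   # MW, wind forecast error
        combined.append(demand + unit + wind)
    combined.sort()
    return combined[int(coverage * n)]

det = deterministic_reserve(900, 1000, 1200)       # roughly 3-sigma components
prob = probabilistic_reserve()
print(f"deterministic: {det} MW, probabilistic: {prob:.0f} MW")
```

Because the deterministic method stacks worst cases that rarely coincide, it typically yields a larger (more conservative, less economically efficient) reserve than the probabilistic quantile for the same underlying uncertainties.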
Enns, M W; Larsen, D K; Cox, B J
2000-10-01
The observer-rated Hamilton depression scale (HamD) and the self-report Beck Depression Inventory (BDI) are among the most commonly used rating scales for depression, and both have well demonstrated reliability and validity. However, many depressed subjects have discrepant scores on these two assessment methods. The present study evaluated the ability of demographic, clinical and personality factors to account for the discrepancies observed between BDI and HamD ratings. The study group consisted of 94 SCID-diagnosed outpatients with a current major depressive disorder. Subjects were rated with the 21-item HamD and completed the BDI and the NEO-Five Factor Inventory. Younger age, higher educational attainment, and depressive subtype (atypical, non-melancholic) were predictive of higher BDI scores relative to HamD observer ratings. In addition, high neuroticism, low extraversion and low agreeableness were associated with higher endorsement of depressive symptoms on the BDI relative to the HamD. In general, these predictive variables showed a greater ability to explain discrepancies between self and observer ratings of psychological symptoms of depression compared to somatic symptoms of depression. The study does not determine which aspects of neuroticism and extraversion contribute to the observed BDI/HamD discrepancies. Depression ratings obtained with the BDI and HamD are frequently discordant and a number of patient characteristics robustly predict the discrepancy between these two rating methods. The value of multi-modal assessment in the conduct of research on depressive disorders is re-affirmed.
Evaluation of discrepancies between thermoluminescent dosimeter and direct-reading dosimeter results
Shaw, K.R.
1993-07-01
Currently at Oak Ridge National Laboratory (ORNL), the responses of thermoluminescent dosimeters (TLDs) and direct-reading dosimeters (DRDs) are not officially compared, nor are the discrepancies investigated. However, both may soon be required due to the new US Department of Energy (DOE) Radiological Control Manual. In the past, unofficial comparisons of the two dosimeters have led to discrepancies of up to 200%. This work was conducted to determine the reasons behind such discrepancies. For tests conducted with the TLDs, the reported dose was most often lower than the delivered dose, while DRDs most often responded higher than the delivered dose. Trends were identified in personnel DRD readings, and it was concluded that more training and more control of the DRDs could improve their response. TLD responses have already begun to be improved; a new background subtraction method was implemented in April 1993, and a new dose algorithm is being considered. It was concluded that the DOE Radiological Control Manual requirements are reasonable for identifying discrepancies between dosimeter types, and more stringent administrative limits might even be considered.
Discrepancy detection in the retrieval-enhanced suggestibility paradigm.
Butler, Brendon Jerome; Loftus, Elizabeth F
2018-04-01
Retrieval-enhanced suggestibility (RES) refers to the finding that immediately recalling the details of a witnessed event can increase susceptibility to later misinformation. In three experiments, we sought to gain a deeper understanding of the role that retrieval plays in the RES paradigm. Consistent with past research, initial testing did increase susceptibility to misinformation - but only for those who failed to detect discrepancies between the original event and the post-event misinformation. In all three experiments, subjects who retrospectively detected discrepancies in the post-event narratives were more resistant to misinformation than those who did not. In Experiments 2 and 3, having subjects concurrently assess the consistency of the misinformation narratives negated the RES effect. Interestingly, in Experiments 2 and 3, subjects who had retrieval practice and detected discrepancies were more likely to endorse misinformation than control subjects who detected discrepancies. These results call attention to limiting conditions of the RES effect and highlight the complex relationship between retrieval practice, discrepancy detection, and misinformation endorsement.
Daniel Wittschieber
BACKGROUND: Autopsy rates in Western countries have consistently declined to an average of <5%, although clinical autopsies represent a reasonable tool for quality control in hospitals, both medically and economically. By comparing pre- and postmortem diagnoses, clinical autopsies uncover diagnostic discrepancies that supply crucial information on how to improve clinical treatment. The study aimed at analyzing current diagnostic discrepancy rates, investigating their influencing factors and identifying risk profiles of patients who could be affected by a diagnostic discrepancy. METHODS AND FINDINGS: For all adult autopsy cases of the Charité Institute of Pathology from the years 1988, 1993, 1998, 2003 and 2008, the pre- and postmortem diagnoses and all demographic data were analyzed retrospectively. Based on a power analysis, 1,800 cases were randomly selected for discrepancy classification (classes I-VI according to modified Goldman criteria). The rate of discrepancies in major diagnoses (class I) was 10.7% (95% CI: 7.7%-14.7%) in 2008, representing a reduction of 15.1%. Subgroup analysis revealed several influencing factors that correlated significantly with the discrepancy rate. Cardiovascular diseases had the highest frequency among class-I discrepancies. Comparing the 1988 data of East and West Berlin, no significant differences were found in diagnostic discrepancies despite autopsy rates differing by nearly 50%. A risk profile analysis visualized by intuitive heatmaps revealed a significantly high discrepancy rate in patients treated in low or intermediate care units at community hospitals. In this collective, patients with genitourinary/renal or infectious diseases were at particularly high risk. CONCLUSIONS: This is currently the largest and most comprehensive study on diagnostic discrepancies worldwide. Our well-powered analysis revealed a significant rate of class-I discrepancies, indicating that autopsies are still of value. The identified risk
Contribution of the deterministic approach to the characterization of seismic input
Panza, G.F.; Romanelli, F.; Vaccari, F.; Decanini, L.; Mollaioli, F.
1999-10-01
Traditional methods use either a deterministic or a probabilistic approach, based on empirically derived laws for ground motion attenuation. A realistic definition of seismic input can be performed by means of advanced modelling codes based on the modal summation technique. These codes and their extension to laterally heterogeneous structures allow us to accurately calculate synthetic signals, complete with body waves and surface waves, corresponding to different source and anelastic structural models, taking into account the effect of local geological conditions. This deterministic approach is capable of addressing some aspects largely overlooked in the probabilistic approach: (a) the effects of crustal properties on attenuation are not neglected; (b) the ground motion parameters are derived from synthetic time histories, and not from overly simplified attenuation functions; (c) the resulting maps are directly in terms of design parameters, and do not require the adaptation of probabilistic maps to design ground motions; and (d) such maps address the issue of the deterministic definition of ground motion in a way which permits the generalization of design parameters to locations where there is little seismic history. The methodology has been applied to a large part of south-eastern Europe, in the framework of the EU-COPERNICUS project 'Quantitative Seismic Zoning of the Circum Pannonian Region'. Maps of various numerically modelled seismic hazard parameters, tested against observations whenever possible, such as peak ground displacement, velocity and acceleration, all of practical use for the design of earthquake-safe structures, have been produced. The results of a standard probabilistic approach are compared with the findings based on the deterministic approach. A good agreement is obtained except for the Vrancea (Romania) zone, where the attenuation relations used in the probabilistic approach seem to underestimate the seismic hazard, mainly at large distances.
Discrepancies between implicit and explicit motivation and unhealthy eating behavior.
Job, Veronika; Oertig, Daniela; Brandstätter, Veronika; Allemand, Mathias
2010-08-01
Many people change their eating behavior as a consequence of stress. One source of stress is intrapersonal psychological conflict as caused by discrepancies between implicit and explicit motives. In the present research, we examined whether eating behavior is related to this form of stress. Study 1 (N=53), a quasi-experimental study in the lab, showed that the interaction between the implicit achievement motive disposition and explicit commitment toward an achievement task significantly predicts the number of snacks consumed in a consecutive taste test. In cross-sectional Study 2 (N=100), with a sample of middle-aged women, overall motive discrepancy was significantly related to diverse indices of unsettled eating. Regression analyses revealed interaction effects specifically for power and achievement motivation and not for affiliation. Emotional distress further partially mediated the relationship between the overall motive discrepancy and eating behavior.
Impact of aldosterone-producing cell clusters on diagnostic discrepancies in primary aldosteronism
Kometani, Mitsuhiro; Yoneda, Takashi; Aono, Daisuke; Karashima, Shigehiro; Demura, Masashi; Nishimoto, Koshiro; Yamagishi, Masakazu; Takeda, Yoshiyu
2018-01-01
Adrenocorticotropic hormone (ACTH) stimulation is recommended in adrenal vein sampling (AVS) for primary aldosteronism (PA) to improve the AVS success rate. However, this method can confound the subtype diagnosis. Gene mutations or pathological characteristics may be related to lateralization by AVS. This study aimed to compare the rate of diagnostic discrepancy by AVS pre- versus post-ACTH stimulation and to investigate the relationship between this discrepancy and findings from immunohistochemical and genetic analyses of PA. We evaluated 195 cases of AVS performed in 2011–2017. All surgical specimens were analyzed genetically and immunohistochemically. Based on the criteria, AVS was successful in 158 patients both pre- and post-ACTH; of these patients, 75 showed diagnostic discrepancies between pre- and post-ACTH. Thus, 19 patients underwent unilateral adrenalectomy, of whom 16 had an aldosterone-producing adenoma (APA) that was positive for CYP11B2 immunostaining. Of them, 10 patients had discordant lateralization between pre- and post-ACTH. In the genetic analysis, the rate of somatic mutations was not significantly different between APA patients with versus without a diagnostic discrepancy. In the immunohistochemical analysis, CYP11B2 levels and the frequency of aldosterone-producing cell clusters (APCCs) in APAs were almost identical between patients with versus without a diagnostic discrepancy. However, both the number and summed area of APCCs in APAs were significantly smaller in patients with concordant results than in those whose diagnosis changed to bilateral PA post-ACTH stimulation. In conclusion, lateralization by AVS was affected by APCCs in the adjacent gland, but not by APA-related factors such as somatic gene mutations. PMID:29899838
Reidy, Dennis E.; Smith-Darden, Joanne P.; Cortina, Kai S.; Kernsmith, Roger M.; Kernsmith, Poco D.
2018-01-01
Purpose Addressing gender norms is integral to understanding and ultimately preventing violence in both adolescent and adult intimate relationships. Males are affected by gender role expectations which require them to demonstrate attributes of strength, toughness, and dominance. Discrepancy stress is a form of gender role stress that occurs when boys and men fail to live up to the traditional gender norms set by society. Failure to live up to these gender role expectations may precipitate this experience of psychological distress in some males which, in turn, may increase the risk of engaging in physically and sexually violent behaviors as a means of demonstrating masculinity. Methods Five hundred eighty-nine adolescent males from schools in Wayne County, Michigan completed a survey assessing self-perceptions of gender role discrepancy, the experience of discrepancy stress, and history of physical and sexual dating violence. Results Logistic regression analyses indicated boys who endorsed gender role discrepancy and associated discrepancy stress were generally at greater risk of engaging in acts of sexual violence but not necessarily physical violence. Conclusions Boys who experience stress about being perceived as “sub-masculine” may be more likely to engage in sexual violence as a means of demonstrating their masculinity to self and/or others and thwarting potential “threats” to their masculinity by dating partners. Efforts to prevent sexual violence perpetration among male adolescents should perhaps consider the influence of gender socialization in this population and include efforts to reduce distress about masculine socialization in primary prevention strategies. PMID:26003576
Dietary restraint and self-discrepancy in male university students.
Orellana, Ligia; Grunert, Klaus G; Sepúlveda, José; Lobos, Germán; Denegri, Marianela; Miranda, Horacio; Adasme-Berríos, Cristian; Mora, Marcos; Etchebarne, Soledad; Salinas-Oñate, Natalia; Schnettler, Berta
2016-04-01
Self-discrepancy describes the distance between an ideal and the actual self. Research suggests that self-discrepancy and dietary restraint are related, causing a significant impact on the person's well-being. However, this relationship has been mostly reported in female and mixed populations. In order to further explore dietary behaviors and their relations to self-discrepancy and well-being-related variables in men, a survey was applied to a non-probabilistic sample of 119 male students from five Chilean state universities (mean age=21.8, SD=2.75). The questionnaire included the Revised Restraint Scale (RRS) with the subscales weight fluctuations (WF) and diet concern (DC), the Satisfaction with Life Scale (SWLS), the Satisfaction with Food-Related Life Scale (SWFL), the Nutrition Interest Scale (NIS), and the Self-discrepancy Index (SDI). Questions were asked about socio-demographic characteristics, eating and drinking habits, and approximate weight and height. A cluster analysis applied to the Z-scores of the RRS classified the following typologies: Group 1 (22.7%), men concerned about weight fluctuations; Group 2 (37.0%), men concerned about diet and weight fluctuations; Group 3 (40.3%), unconcerned about diet and weight fluctuations. The typologies differed in their SDI score, restriction on pastry consumption and reported body mass index (BMI). Students with higher DC and WF scores had a higher BMI, and tended to report high self-discrepancy not only on a physical level, but also on social, emotional, economic and personal levels. This study contributes to the literature on subjective well-being, dietary restraint and self-discrepancy in men from non-clinical samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Resolving taxonomic discrepancies: Role of Electronic Catalogues of Known Organisms
Vishwas Chavan
2005-01-01
There is a disparity between the availability of nomenclature-change literature to taxonomists of the developing world and the availability of taxonomic papers published by developing-world scientists to their counterparts in the developed part of the globe. This has resulted in several discrepancies in the naming of organisms. The development of electronic catalogues of names of known organisms would help in pointing out these issues. We have attempted to highlight a few such discrepancies found while developing IndFauna, an electronic catalogue of known Indian fauna, and comparing it with existing global and regional databases.
Deterministic simulation of first-order scattering in virtual X-ray imaging
Freud, N. E-mail: nicolas.freud@insa-lyon.fr; Duvauchelle, P.; Pistrui-Maximean, S.A.; Letang, J.-M.; Babot, D
2004-07-01
A deterministic algorithm is proposed to compute the contribution of first-order Compton- and Rayleigh-scattered radiation in X-ray imaging. This algorithm has been implemented in a simulation code named virtual X-ray imaging. The physical models chosen to account for photon scattering are the well-known form factor and incoherent scattering function approximations, which are recalled in this paper and whose limits of validity are briefly discussed. The proposed algorithm, based on a voxel discretization of the inspected object, is presented in detail, as well as its results in simple configurations, which are shown to converge when the sampling steps are chosen sufficiently small. Simple criteria for choosing correct sampling steps (voxel and pixel size) are established. The order of magnitude of the computation time necessary to simulate first-order scattering images amounts to hours with a PC architecture and can even be decreased down to minutes, if only a profile is computed (along a linear detector). Finally, the results obtained with the proposed algorithm are compared to the ones given by the Monte Carlo code Geant4 and found to be in excellent agreement, which constitutes a validation of our algorithm. The advantages and drawbacks of the proposed deterministic method versus the Monte Carlo method are briefly discussed.
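As a toy illustration of the voxel-discretization idea (not the paper's algorithm, and without the scatter terms), the primary-beam attenuation along one sampled ray through a voxel grid can be computed with the Beer-Lambert law. The geometry and attenuation values below are invented.

```python
import numpy as np

mu = np.zeros((50, 50))               # linear attenuation map (1/mm), toy object
mu[20:30, 10:40] = 0.02               # a denser slab inside the object

def transmitted(mu, start, end, n_samples=400):
    """Fraction of primary intensity surviving along the ray start -> end."""
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = start[None, :] + ts[:, None] * (end - start)[None, :]
    ij = np.clip(pts.astype(int), 0, mu.shape[0] - 1)   # voxel index per sample
    step = np.linalg.norm(end - start) / n_samples      # path length per sample (mm)
    optical_depth = mu[ij[:, 0], ij[:, 1]].sum() * step
    return np.exp(-optical_depth)                       # Beer-Lambert attenuation

frac = transmitted(mu, np.array([0.0, 25.0]), np.array([49.0, 25.0]))
```

A first-order scatter image would add, for each detector pixel, attenuated source-to-voxel and voxel-to-pixel paths of this kind, weighted by the form factor and incoherent scattering function models the abstract mentions.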
Milickovic, N.; Lahanas, M.; Papagiannopoulou, M.; Zamboglou, N.; Baltas, D.
2002-01-01
In high dose rate (HDR) brachytherapy, conventional dose optimization algorithms consider multiple objectives in the form of an aggregate function that transforms the multiobjective problem into a single-objective problem. As a result, there is a loss of information on the available alternative possible solutions. This method assumes that the treatment planner exactly understands the correlation between competing objectives and knows the physical constraints. This knowledge is provided by the Pareto trade-off set, obtained by repeatedly running single-objective optimization algorithms with different importance vectors. A mapping technique avoids non-feasible solutions with negative dwell weights and allows the use of constraint-free gradient-based deterministic algorithms. We compare various such algorithms and methods which could improve their performance. This finally allows us to generate a large number of solutions in a few minutes. We use objectives expressed in terms of dose variances obtained from a few hundred sampling points in the planning target volume (PTV) and in organs at risk (OAR). We compare two- to four-dimensional Pareto fronts obtained with the deterministic algorithms and with a fast-simulated annealing algorithm. For PTV-based objectives, due to the convex objective functions, the obtained solutions are global optimal. If OARs are included, then the solutions found are also global optimal, although local minima may be present as suggested. (author)
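The mapping trick referred to above (removing the nonnegative-dwell-weight constraint so a plain gradient method applies) can be sketched with made-up quadratic surrogates for the PTV and OAR variance objectives; matrices, sizes, step size and the weighting scheme are all illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(30, 5))   # surrogate dose-rate matrix, PTV sampling points
B = rng.uniform(0.0, 0.5, size=(20, 5))   # surrogate dose-rate matrix, OAR sampling points
d_ptv = np.ones(30)                       # prescribed PTV dose (arbitrary units)

def objectives(w):
    f1 = np.mean((A @ w - d_ptv) ** 2)    # PTV dose-variance surrogate
    f2 = np.mean((B @ w) ** 2)            # OAR dose surrogate
    return f1, f2

def solve_weighted(t, steps=5000, lr=0.005):
    """Minimize t*f1 + (1-t)*f2 over nonnegative dwell weights w = u**2."""
    u = np.ones(5)                        # w = u**2 >= 0 by construction
    for _ in range(steps):
        w = u ** 2
        grad_w = t * (2 / 30) * (A.T @ (A @ w - d_ptv)) + (1 - t) * (2 / 20) * (B.T @ (B @ w))
        u -= lr * grad_w * 2 * u          # chain rule: dF/du = (dF/dw) * 2u
    return u ** 2

# Scanning the importance weight t traces out an approximate Pareto trade-off set.
front = [objectives(solve_weighted(t)) for t in np.linspace(0.05, 0.95, 10)]
```

Because both surrogate objectives are convex in w, each weighted-sum solution sits on the Pareto front, mirroring the global-optimality remark in the abstract.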
Hilbig, Benjamin E; Moshagen, Morten
2014-12-01
Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.
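The flavor of such a comparison can be conveyed with a toy stand-in: a deterministic rule-plus-error model versus a probabilistic logistic model, scored with BIC as a crude proxy for minimum description length. The paper's actual machinery (multinomial processing trees with parametric order constraints) is not reproduced here; data and models are invented.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
evidence = rng.normal(size=200)               # signed evidence strength per trial
p_true = 1 / (1 + np.exp(-2 * evidence))      # choices generated by a graded rule
choices = (rng.random(200) < p_true).astype(int)

def bic(loglik, k, n):
    return -2 * loglik + k * math.log(n)

# Deterministic strategy: choose 1 iff evidence > 0, with trembling-hand error e.
agree = (choices == (evidence > 0)).mean()
e = min(max(1 - agree, 1e-6), 1 - 1e-6)       # maximum-likelihood error rate
ll_det = 200 * (agree * math.log(1 - e) + (1 - agree) * math.log(e))

# Probabilistic strategy: logistic choice rule, slope fit by a crude grid search.
ll_prob = max(
    float(np.sum(choices * -np.log1p(np.exp(-b * evidence))
                 + (1 - choices) * -np.log1p(np.exp(b * evidence))))
    for b in np.linspace(0.1, 5.0, 50)
)
print(f"BIC deterministic={bic(ll_det, 1, 200):.1f}  probabilistic={bic(ll_prob, 1, 200):.1f}")
```

Since the data here are generated by a graded rule, the probabilistic model should earn the better (lower) score, which is exactly the kind of discrimination the generalized classification method is designed to make.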
Elkhoraibi, T.; Hashemi, A.; Ostadan, F.
2014-01-01
input motions obtained from Probabilistic Seismic Hazard Analysis (PSHA) and the site response analysis is conducted with simulated soil profiles and accompanying soil nonlinearity curves. The deterministic approach utilizes three strain-compatible soil profiles (Lower Bound (LB), Best Estimate (BE) and Upper Bound (UB)) determined based on the variation of strain-compatible soil profiles obtained from the probabilistic site response analysis and uses SSI analysis to determine a conservative estimate of the required response as the envelope of the SSI results from LB, BE and UB soil cases. In contrast, the probabilistic SSI analysis propagates the uncertainty in the soil and structural properties and provides rigorous estimates for the statistical distribution of the response parameters of interest. The engineering demand parameters considered are the story drifts and ISRS at key locations in the example structure. The results from the deterministic and probabilistic approaches, with and without ground motion incoherency effects, are compared and discussed. Recommendations are made regarding the efficient use of statistical methods in probabilistic SSI analysis and the use of such results in Integrated Soil-Structure Fragility Analysis (ISSFA) and performance-based design
Automated Controller Synthesis for non-Deterministic Piecewise-Affine Hybrid Systems
Grunnet, Jacob Deleuran
formations. This thesis uses a hybrid systems model of a satellite formation with possible actuator faults as a motivating example for developing an automated control synthesis method for non-deterministic piecewise-affine hybrid systems (PAHS). The method not only opens an avenue for further research in fault tolerant satellite formation control, but can be used to synthesise controllers for a wide range of systems where external events can alter the system dynamics. The synthesis method relies on abstracting the hybrid system into a discrete game, finding a winning strategy for the game meeting … game and linear optimisation solvers for controller refinement. To illustrate the efficacy of the method a recurring satellite formation example including actuator faults has been used. The end result is the application of PAHSCTRL on the example, showing synthesis and simulation of a fault tolerant …
Deterministic quantum state transfer and remote entanglement using microwave photons.
Kurpiers, P; Magnard, P; Walter, T; Royer, B; Pechal, M; Heinsoo, J; Salathé, Y; Akin, A; Storz, S; Besse, J-C; Gasparinetti, S; Blais, A; Wallraff, A
2018-06-01
Sharing information coherently between nodes of a quantum network is fundamental to distributed quantum information processing. In this scheme, the computation is divided into subroutines and performed on several smaller quantum registers that are connected by classical and quantum channels [1]. A direct quantum channel, which connects nodes deterministically rather than probabilistically, achieves larger entanglement rates between nodes and is advantageous for distributed fault-tolerant quantum computation [2]. Here we implement deterministic state-transfer and entanglement protocols between two superconducting qubits fabricated on separate chips. Superconducting circuits [3] constitute a universal quantum node [4] that is capable of sending, receiving, storing and processing quantum information [5-8]. Our implementation is based on an all-microwave cavity-assisted Raman process [9], which entangles or transfers the qubit state of a transmon-type artificial atom [10] with a time-symmetric itinerant single photon. We transfer qubit states by absorbing these itinerant photons at the receiving node, with a probability of 98.1 ± 0.1 per cent, achieving a transfer-process fidelity of 80.02 ± 0.07 per cent for a protocol duration of only 180 nanoseconds. We also prepare remote entanglement on demand with a fidelity as high as 78.9 ± 0.1 per cent at a rate of 50 kilohertz. Our results are in excellent agreement with numerical simulations based on a master-equation description of the system. This deterministic protocol has the potential to be used for quantum computing distributed across different nodes of a cryogenic network.
Deterministic and stochastic transport theories for the analysis of complex nuclear systems
Giffard, F.X.
2000-01-01
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
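The variance-reduction principle behind such biasing can be shown on a one-dimensional toy: estimate a deep-penetration probability analogically versus with stretched (biased) free paths that carry likelihood weights. In the thesis the biasing is derived from deterministic ERANOS importance maps; the simple exponential transform below is only a stand-in, with invented numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10.0                 # optical thickness of the slab; exact answer is exp(-T)
n = 100_000

# Analog Monte Carlo: free path ~ Exp(1); the particle penetrates if path > T.
analog = (rng.exponential(1.0, n) > T).astype(float)

# Biased sampling: draw stretched free paths ~ Exp(mean=b) and carry the
# likelihood ratio w(x) = p(x)/q(x) = b * exp(-x * (1 - 1/b)) as a weight.
b = 5.0                  # stretching factor, crude stand-in for an importance map
paths = rng.exponential(b, n)
weights = b * np.exp(-paths * (1.0 - 1.0 / b))
biased = weights * (paths > T)

est_analog, est_biased = analog.mean(), biased.mean()
```

Both estimators are unbiased for exp(-10), but the analog run scores only a handful of penetrating histories while the biased run scores on a large fraction of them; that collapse in variance per history is the mechanism behind the speed-up factors of 100 reported in the abstract.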
CALTRANS: A parallel, deterministic, 3D neutronics code
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation has culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.
MIMO capacity for deterministic channel models: sublinear growth
Bentosela, Francois; Cornean, Horia; Marchetti, Nicola
2013-01-01
In the current paper, we apply those results in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment and the number of transmitting and receiving antennas. The antennas are assumed to fill in a given fixed volume. Under some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict a sublinear behavior.
Deterministic Single-Photon Source for Distributed Quantum Networking
Kuhn, Axel; Hennrich, Markus; Rempe, Gerhard
2002-01-01
A sequence of single photons is emitted on demand from a single three-level atom strongly coupled to a high-finesse optical cavity. The photons are generated by an adiabatically driven stimulated Raman transition between two atomic ground states, with the vacuum field of the cavity stimulating one branch of the transition, and laser pulses deterministically driving the other branch. This process is unitary and therefore intrinsically reversible, which is essential for quantum communication and networking, and the photons should be appropriate for all-optical quantum information processing
On the progress towards probabilistic basis for deterministic codes
Ellyin, F.
1975-01-01
Fundamental arguments for a probabilistic basis of codes are presented. A class of code formats is outlined in which explicit statistical measures of uncertainty of design variables are incorporated. The format looks very much like present (deterministic) codes except for having a probabilistic background. An example is provided whereby the design factors are plotted against the safety index, the probability of failure, and the risk of mortality. The safety level of the present codes is also indicated. A decision regarding the new probabilistically based code parameters could thus be made with full knowledge of the implied consequences.
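The link between the safety index and the probability of failure that the abstract plots can be reproduced for the simplest case, a normal resistance-minus-load margin; the numbers below are illustrative, not values from the paper.

```python
import math

def safety_index(mu_r, sig_r, mu_s, sig_s):
    """beta for the margin M = R - S with independent normal R (resistance), S (load)."""
    return (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)

def failure_probability(beta):
    """P(M < 0) = Phi(-beta), written via the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2))

beta = safety_index(mu_r=500.0, sig_r=50.0, mu_s=300.0, sig_s=40.0)
pf = failure_probability(beta)
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```

Sweeping a design factor (say, scaling mu_r) and recording (beta, Pf) pairs gives exactly the kind of curve against which a probabilistically based code parameter could be chosen.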
Enhanced deterministic phase retrieval using a partially developed speckle field
Almoro, Percival F.; Waller, Laura; Agour, Mostafa
2012-01-01
A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle intensity measurements are recorded at the output plane corresponding to axially-propagated representations of the PDSF in the input plane. The speckle intensity measurements are then used in a conventional transport of intensity equation (TIE) to reconstruct directly the test wavefront. The PDSF in our …
Deterministic and efficient quantum cryptography based on Bell's theorem
Chen, Z.-B.; Zhang, Q.; Bao, X.-H.; Schmiedmayer, J.; Pan, J.-W.
2005-01-01
We propose a novel double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish a key bit with the help of classical communications. Eavesdropping can be detected by checking the violation of local realism for the detected events. We also show that our protocol allows a robust implementation under current technology. (author)
Morillon B.
2013-03-01
JEFF-3.1.1 is the reference nuclear data library in CEA for the design calculations of the next nuclear power plants. The validation of the new neutronics code systems is based on this library, and changes in nuclear data should be looked at closely. Some new actinide evaluation files at high energies were proposed by CEA/Bruyères-le-Chatel in 2009 and have been integrated in the JEFF-3.2T1 test release. For the new release JEFF-3.2, CEA will build new evaluation files for the actinides, which should be a combination of the new evaluated data coming from BRC-2009 in the high energy range and improvements or new evaluations in the resolved and unresolved resonance range from CEA-Cadarache. To prepare the building of these new files, benchmarking the BRC-2009 library against the JEFF-3.1.1 library was very important. The crucial points to evaluate were the improvements in the continuum range and the discrepancies in the resonance range. The present work presents, for a selected set of benchmarks, the discrepancies in the effective multiplication factor obtained while using the JEFF-3.1.1 or JEFF-3.2T1 library with the deterministic code package ERANOS/PARIS and the stochastic code TRIPOLI-4. Both have been used to calculate cross section perturbations or other nuclear data perturbations when possible. This has made it possible to identify the origin of the discrepancies in reactivity calculations. In addition, this work also shows the importance of cross section processing validation. Indeed, some fast neutron spectrum calculations have led to opposite tendencies between the deterministic code package and the stochastic code. Some particular nuclear data (MT=5 in ENDF terminology) seem to be incompatible with the current MERGE or GECCO processing codes.
Comparative processes in personal and group judgments: Resolving the discrepancy
Postmes, T; Branscombe, NR; Spears, R; Young, H
The judgment mechanisms underlying personal- and group-level ratings of discrimination and privilege were investigated in high- and low-status groups. A consistent personal-group discrepancy is found for discrimination and privilege, but it is not due to personal differentiation from the group.
Prevalence of Gender Discrepancy in Internet Use in Nigeria ...
One important agent of empowerment is information, provided with dispatch through the Internet. In essence, the research sought to determine the prevalence of gender discrepancies in Internet use with a view to indicating its implications for women's empowerment. In the survey, cluster and proportionate sampling techniques ...
Outcome discrepancies and selective reporting: impacting the leading journals?
Fleming, Padhraig S; Koletsi, Despina; Dwan, Kerry; Pandis, Nikolaos
2015-01-01
Selective outcome reporting of either interesting or positive research findings is problematic, running the risk of poorly-informed treatment decisions. We aimed to assess the extent of outcome and other discrepancies, and possible selective reporting, between registry entries and published reports in leading medical journals. Randomized controlled trials published over a 6-month period from July to December 31st, 2013, were identified in five high-impact medical journals: The Lancet, British Medical Journal, New England Journal of Medicine, Annals of Internal Medicine and Journal of the American Medical Association. Discrepancies between published studies and registry entries were identified and related to factors including registration timing, source of funding and presence of statistically significant results. Over the 6-month period, 137 RCTs were found. Of these, 18% (n = 25) had discrepancies related to primary outcomes, with the primary outcome changed in 15% (n = 20). Moreover, differences relating to non-primary outcomes were found in 64% (n = 87), with both omission of pre-specified non-primary outcomes (39%) and introduction of new non-primary outcomes (44%) common. No relationship between primary or non-primary outcome change and registration timing (prospective or retrospective; P = 0.11), source of funding (P = 0.92) or presence of statistically significant results (P = 0.92) was found. Discrepancies between registry entries and published articles for primary and non-primary outcomes were common among trials published in leading medical journals. Novel approaches are required to address this problem.
Discrepancies between Parents' and Children's Attitudes toward TV Advertising
Baiocco, Roberto; D'Alessio, Maria; Laghi, Fiorenzo
2009-01-01
The authors conducted a study with 500 parent-child dyads. The sample comprised 254 boys and 246 girls. The children were grouped into 5 age groups (1 group for each age from 7 to 11 years), with each group comprising 100 children. The survey examines discrepancies between children and their parents in attitudes toward TV advertising to determine…
Error and discrepancy in radiology: inevitable or avoidable?
Brady, Adrian P
2017-02-01
Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised. • Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.
The Cepheid mass discrepancy and pulsation-driven mass loss
Neilson, H.R.; Cantiello, M.; Langer, N.
2011-01-01
Context. A longstanding challenge for understanding classical Cepheids is the Cepheid mass discrepancy, where theoretical mass estimates using stellar evolution and stellar pulsation calculations have been found to differ by approximately 10−20%. Aims. We study the role of pulsation-driven mass loss
Man enough? Masculine discrepancy stress and intimate partner violence.
Reidy, Dennis E; Berke, Danielle S; Gentile, Brittany; Zeichner, Amos
2014-10-01
Research on gender roles suggests that men who strongly adhere to traditional masculine gender norms are at increased risk for the perpetration of violent and abusive acts toward their female intimate partners. Yet, gender norms alone fail to provide a comprehensive explanation of the multifaceted construct of intimate partner violence (IPV) and there is theoretical reason to suspect that men who fail to conform to masculine roles may equally be at risk for IPV. In the present study, we assessed the effect of masculine discrepancy stress, a form of distress arising from perceived failure to conform to socially-prescribed masculine gender role norms, on IPV. Six hundred men completed online surveys assessing their experience of discrepancy stress, masculine gender role norms, and history of IPV. Results indicated that masculine discrepancy stress significantly predicted men's historical perpetration of IPV independent of other masculinity-related variables. Findings are discussed in terms of potential distress engendered by masculine socialization as well as putative implications of gender role discrepancy stress for understanding and intervening in partner violence perpetrated by men.
Discrepancy and Disliking Do Not Induce Negative Opinion Shifts
Takács, Károly; Flache, Andreas; Maes, Michael
2016-01-01
Both classical social psychological theories and recent formal models of opinion differentiation and bi-polarization assign a prominent role to negative social influence. Negative influence is defined as shifts away from the opinion of others and hypothesized to be induced by discrepancy with or
Gender Discrepancies and Victimization of Students with Disabilities
Simpson, Cynthia G.; Rose, Chad A.; Ellis, Stephanie K.
2016-01-01
Students with disabilities have been recognized as disproportionately involved within the bullying dynamic. However, few studies have examined the interaction between disability status, gender, and grade level. The current study explored the gender discrepancies among students with and without disabilities in middle and high school on bullying,…
Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano
2012-05-10
The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima in which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to the global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.
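The contrast drawn here, between stochastic searches that are fast but uncertified and deterministic methods that guarantee an optimality gap, can be illustrated with a minimal deterministic branch-and-bound. The sketch below is not the authors' outer-approximation algorithm; it uses a simple Lipschitz lower bound on a one-dimensional nonconvex function (the test function and the constant L are illustrative assumptions) to show how a rigorous gap drives termination.

```python
import heapq
import math

def lipschitz_minimize(f, lo, hi, L, tol=1e-4):
    """Deterministic global minimization on [lo, hi] for a function with
    Lipschitz constant L: returns (x, upper, lower) with upper - lower <= tol."""
    mid = 0.5 * (lo + hi)
    best_x, best_ub = mid, f(mid)
    # boxes ordered by their rigorous lower bound f(mid) - L * half_width
    heap = [(best_ub - L * (hi - lo) / 2, lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if best_ub - lb <= tol:          # certified optimality gap reached
            return best_x, best_ub, lb
        m = 0.5 * (a + b)
        for aa, bb in ((a, m), (m, b)):  # branch: split the box in two
            c = 0.5 * (aa + bb)
            fc = f(c)
            if fc < best_ub:             # bound: update the incumbent
                best_ub, best_x = fc, c
            child_lb = fc - L * (bb - aa) / 2
            if child_lb < best_ub - tol: # prune boxes that cannot improve
                heapq.heappush(heap, (child_lb, aa, bb))
    return best_x, best_ub, best_ub      # every box pruned: gap closed

# nonconvex test problem with several local minima; L = max|f'| <= 3.5
f = lambda x: math.sin(3 * x) + 0.5 * x
x_star, upper, lower = lipschitz_minimize(f, -3.0, 3.0, L=3.5)
```

Unlike a multistart heuristic, the returned pair (lower, upper) certifies that no point of the domain improves on the incumbent by more than the tolerance.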
Binarity and the Abundance Discrepancy Problem in Planetary Nebulae
Corradi, Romano L. M.; García-Rojas, Jorge; Jones, David; Rodríguez-Gil, Pablo
2015-04-01
The discrepancy between abundances computed using optical recombination lines and collisionally excited lines is a major unresolved problem in nebular astrophysics. Here, we show that the largest abundance discrepancies are reached in planetary nebulae with close binary central stars. We illustrate this using deep spectroscopy of three nebulae with a post-common-envelope (CE) binary star. Abell 46 and Ou 5 have O²⁺/H⁺ abundance discrepancy factors larger than 50, and as high as 300 in the inner regions of Abell 46. Abell 63 has a smaller discrepancy factor, around 10, which is still above the typical values in ionized nebulae. Our spectroscopic analysis supports previous conclusions that, in addition to "standard" hot (T_e ∼ 10⁴ K) gas, there exists a colder (T_e ∼ 10³ K), ionized component that is highly enriched in heavy elements. These nebulae have low ionized masses, between 10⁻³ and 10⁻¹ M⊙, depending on the adopted electron densities and temperatures. Since the much more massive red giant envelope is expected to be entirely ejected in the CE phase, the currently observed nebulae would be produced much later, during post-CE mass loss episodes when the envelope has already dispersed. These observations add constraints to the abundance discrepancy problem. We review possible explanations. Some are naturally linked to binarity, such as high-metallicity nova ejecta, but it is difficult at this stage to depict an evolutionary scenario consistent with all of the observed properties. We also introduce the hypothesis that these nebulae are the result of tidal destruction, accretion, and ejection of Jupiter-like planets.
Galisteo-López, Juan F.
2017-02-01
Controlling the emission of a light source demands acting on its local photonic environment via the local density of states (LDOS). Approaches to exert such control on large-scale samples, commonly relying on self-assembly methods, usually lack precise positioning of the emitter within the material. Alternatively, expensive and time-consuming techniques can be used to produce samples of small dimensions in which deterministic control over the emitter position can be achieved. In this work we present a full solution-processed approach to fabricate photonic architectures containing nano-emitters whose position can be controlled with nanometer precision over square-millimeter regions. By a combination of spin and dip coating we fabricate one-dimensional (1D) nanoporous photonic crystals, whose potential in fields such as photovoltaics and sensing has been previously reported, containing monolayers of luminescent polymeric nanospheres. We demonstrate how, by modifying the position of the emitters within the photonic crystal, their emission properties (photoluminescence intensity and angular distribution) can be deterministically modified. Further, the nano-emitters can be used as a probe to study the LDOS distribution within these systems with a spatial resolution of 25 nm (set by the probe size) while carrying out macroscopic measurements over square-millimeter regions. Routes to enhance light-matter interaction in this kind of system by combining it with metallic surfaces are finally discussed.
Petrus Zacharias; Abdul Jami
2010-01-01
Research conducted by Batan's researchers has resulted in a number of competences that can be used to produce goods and services to be applied in the industrial sector. However, there are difficulties in conveying and utilizing the R&D products in the industrial sector. Evaluation results show that each research result should be completed with a techno-economic analysis to establish the feasibility of a product for industry. Further analysis of the multi-product concept, in which one business can produce several main products, will be done. For this purpose, a software package simulating techno-economic feasibility, using deterministic and stochastic data (Monte Carlo method), has been developed for multi-product cases including side products. The programming language used is Visual Basic (Visual Studio .NET 2003), with SQL as the database processing software. The software applies a sensitivity test to identify which investment criteria are sensitive for the prospective businesses. A performance test (trial test) has been conducted, and the results are in line with the design requirements, such as investment feasibility and sensitivity displayed deterministically and stochastically. These results can be interpreted very well to support business decisions. Validation has been performed using Microsoft Excel (for a single product). The results of the trial test and validation show that this package meets the requirements and is ready for use. (author)
Charge sharing in multi-electrode devices for deterministic doping studied by IBIC
Jong, L.M.; Newnham, J.N.; Yang, C.; Van Donkelaar, J.A.; Hudson, F.E.; Dzurak, A.S.; Jamieson, D.N.
2011-01-01
Following a single ion strike in a semiconductor device, the induced charge distribution changes rapidly in time and space. This phenomenon is important to the sensing of ionizing radiation, with applications as diverse as deterministic doping of semiconductor devices and radiation dosimetry. We have developed a new method for the investigation of this phenomenon by using a nuclear microprobe and the technique of Ion Beam Induced Charge (IBIC) applied to a specially configured sub-100 μm scale silicon device fitted with two independent surface electrodes coupled to independent data acquisition systems. The separation between the electrodes is comparable to the range of the 2 MeV He ions used in our experiments. This system allows us to integrate the total charge induced in the device by summing the signals from the independent electrodes and to measure the sharing of charge between the electrodes as a function of the ion strike location as a nuclear microprobe beam is scanned over the sensitive region of the device. It was found that for a given ion strike location the charge sharing between the electrodes allowed the beam-strike location to be determined to higher precision than the probe resolution. This result has potential application to the development of a deterministic doping technique in which counted ion implantation is used to fabricate devices that exploit the quantum mechanical attributes of the implanted ions.
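As an illustration only, a purely linear charge-division model (an assumption; a real device response would need calibration against known strike positions) reconstructs the strike position from the ratio of the two electrode signals:

```python
def charge_division_position(q_a, q_b, separation):
    """Toy linear charge-division model (an illustrative assumption, not the
    device's calibrated response): a strike nearer electrode B induces more
    charge on B, so the position measured from A grows with q_b's share."""
    return separation * q_b / (q_a + q_b)

# equal induced charge on both electrodes places the strike midway
x_mid = charge_division_position(0.5, 0.5, separation=100.0)
# three quarters of the charge on electrode A places it a quarter of the way
x_quarter = charge_division_position(0.75, 0.25, separation=100.0)
```

Because the ratio varies continuously with position, such a scheme can interpolate between the two electrodes more finely than the beam-scan step, which is the effect the abstract reports.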
Nidheesh, N; Abdul Nazeer, K A; Ameer, P M
2017-12-01
Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density-based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select as initial centroids data points which belong to dense regions and which are adequately separated in feature space. We compared the proposed algorithm to a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm used for cancer data classification, based on their performance on ten cancer gene expression datasets. The proposed algorithm showed better overall performance than the others. There is a pressing need in the biomedical domain for simple, easy-to-use and more accurate machine learning tools for cancer subtype prediction. The proposed algorithm is simple, easy-to-use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data.
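A minimal sketch of the key idea, selecting dense and adequately separated points as initial centroids so that K-Means becomes deterministic, could look as follows in plain NumPy. The density radius and separation threshold are assumptions of this sketch; the paper's exact selection rule may differ.

```python
import numpy as np

def density_init(X, k, radius, min_sep):
    """Deterministically pick k initial centroids: dense points that are
    at least `min_sep` apart in feature space."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = (d < radius).sum(axis=1)       # neighbours within `radius`
    chosen = [int(np.argmax(density))]       # densest point first
    for _ in range(k - 1):
        dist_to_chosen = d[:, chosen].min(axis=1)
        far = np.where(dist_to_chosen > min_sep)[0]
        if far.size:                         # densest adequately separated point
            chosen.append(int(far[np.argmax(density[far])]))
        else:                                # fallback: farthest point overall
            chosen.append(int(np.argmax(dist_to_chosen)))
    return X[chosen]

def kmeans(X, k, radius=0.5, min_sep=2.0, iters=100):
    """Lloyd's algorithm with the deterministic initialization above."""
    C = density_init(X, k, radius, min_sep)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2), axis=1)
        newC = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                         for j in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return C, labels

# synthetic demo: randomness generates the data only, never enters the algorithm
rng = np.random.default_rng(0)
X = np.vstack([rng.normal((0, 0), 0.3, (50, 2)),
               rng.normal((5, 0), 0.3, (50, 2)),
               rng.normal((0, 5), 0.3, (50, 2))])
C1, labels1 = kmeans(X, k=3)
C2, labels2 = kmeans(X, k=3)
```

Because no random draws enter the algorithm, repeated runs on the same dataset return identical centroids and labels.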
Strongly Deterministic Population Dynamics in Closed Microbial Communities
Zak Frentz
2015-10-01
Full Text Available Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards the possibility of simple macroscopic laws governing microbial systems despite the numerous stochastic events present at microscopic levels.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. So the distribution of translocation times of a given monomer is controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of initial polymer conformation. We refer to this concept as ``fingerprinting''. The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt ∼ m^1.5, is stronger than the thermal broadening, δt ∼ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.
Stochastic and deterministic causes of streamer branching in liquid dielectrics
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-01-01
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make the branching inevitable depending on shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the streamer head propagating even in perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether the branching occurs under particular inhomogeneous circumstances. Estimated number, diameter, and velocity of the born branches agree qualitatively with experimental images of the streamer branching
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is usually conducted in one of two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules of the chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
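The two frameworks contrasted in this abstract can be sketched for a toy birth-death process (constant production, first-order degradation; the rate values are arbitrary). Python stands in for the MATLAB functions described, which are not reproduced here:

```python
import numpy as np

k, gamma = 10.0, 1.0  # production and degradation rates (arbitrary toy values)

def ode_solution(t, x0=0.0):
    """Exact solution of the deterministic rate equation dx/dt = k - gamma*x."""
    return k / gamma + (x0 - k / gamma) * np.exp(-gamma * t)

def gillespie(t_end, x0=0, seed=1):
    """One realisation of the chemical master equation for the same
    birth-death process, via Gillespie's direct method."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a_birth, a_death = k, gamma * x       # reaction propensities
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)   # waiting time to next event
        x += 1 if rng.random() < a_birth / a_total else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie(t_end=500.0)
# time-weighted average copy number along the stochastic trajectory
avg = np.average(states[:-1], weights=np.diff(times))
```

The exact ODE solution settles at k/gamma, while the Gillespie realisation fluctuates around that value; its time-weighted average approaches the deterministic steady state.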
A study of deterministic models for quantum mechanics
Sutherland, R.
1980-01-01
A theoretical investigation is made into the difficulties encountered in constructing a deterministic model for quantum mechanics and into the restrictions that can be placed on the form of such a model. The various implications of the known impossibility proofs are examined. A possible explanation for the non-locality required by Bell's proof is suggested in terms of backward-in-time causality. The efficacy of the Kochen and Specker proof is brought into doubt by showing that there is a possible way of avoiding its implications in the only known physically realizable situation to which it applies. A new thought experiment is put forward to show that a particle's predetermined momentum and energy values cannot satisfy the laws of momentum and energy conservation without conflicting with the predictions of quantum mechanics. Attention is paid to a class of deterministic models for which the individual outcomes of measurements are not dependent on hidden variables associated with the measuring apparatus and for which the hidden variables of a particle do not need to be randomized after each measurement
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-03
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for dissecting the molecular dynamics that lead to the establishment of pluripotency, at unprecedented flexibility and resolution.
Shock-induced explosive chemistry in a deterministic sample configuration.
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Fischer, P.; Jardani, A.; Lecoq, N.
2018-02-01
In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases with hydraulic response data generated from hypothetical karstic models with increasing complexity of the network geometry and of the matrix heterogeneity.
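The deterministic optimization loop described here can be caricatured on a far simpler problem: recovering a single homogeneous transmissivity from steady-state drawdown data by Gauss-Newton iteration. This is not the DNDI algorithm itself, which inverts the geometry and properties of a discrete conduit network; the Thiem model and all parameter values below are illustrative assumptions.

```python
import numpy as np

# illustrative, assumed setup: steady pumping in a homogeneous confined aquifer
Q, R = 0.01, 100.0                          # pumping rate (m^3/s), radius of influence (m)
r = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # observation-well radii (m)

def drawdown(T):
    """Thiem steady-state drawdown for transmissivity T."""
    return Q / (2 * np.pi * T) * np.log(R / r)

# synthetic "observed" drawdowns generated with a known transmissivity
T_true = 2e-4
s_obs = drawdown(T_true)

# deterministic inversion: Gauss-Newton iterations on the misfit ||s(T) - s_obs||^2
T = 1e-4                                    # initial guess
for _ in range(50):
    s = drawdown(T)
    J = -s / T                              # analytic sensitivity ds/dT for Thiem
    T -= (J @ (s - s_obs)) / (J @ J)        # single-parameter Gauss-Newton step
```

Starting from the same initial guess, the iteration always follows the same path to the same solution, which is the sense in which such an inversion is deterministic rather than stochastic.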
Predictors of the discrepancy between objective and subjective cognition in bipolar disorder
Miskowiak, K. W.; Petersen, Jeff Zarp; Ott, C. V.
2016-01-01
OBJECTIVE: The poor relationship between subjective and objective cognitive impairment in bipolar disorder (BD) is well-established. However, beyond simple correlation, this has not been explored further using a methodology that quantifies the degree and direction of the discrepancy. This study aimed to develop such a methodology to explore clinical characteristics predictive of subjective-objective discrepancy in a large BD patient cohort. METHODS: Data from 109 remitted BD patients and 110 healthy controls were pooled from previous studies, including neuropsychological test scores, self-reported cognitive difficulties, and ratings of mood, stress, socio-occupational capacity, and quality of life. Cognitive symptom 'sensitivity' scores were calculated using a novel methodology, with positive scores reflecting disproportionately more subjective complaints than objective impairment and negative values…
Discrepâncias na imagem corporal e na dieta de obesos [Self-discrepancy in body image and diet]
Patrícia Kanno
2008-08-01
Full Text Available OBJECTIVE: This study evaluated the discrepancy between the actual and ideal body images of obese individuals and related possible changes in dietary behavior to the pursuit of this ideal body. METHODS: The sample comprised 25 obese subjects, 76% of them female, with a mean age of 39.24 years (standard deviation=5.01). Two instruments were used: the Physical Appearance Scale, whose factorial analysis extracted a single factor, "Physical Appearance", with a reliability of α=0.74 for women and α=0.73 for men, and the Food Priority Questionnaire, designed to group items into the categories of the Food Pyramid. Paired t-tests were performed to compare the actual and ideal images and to compare the actual and ideal dietary behaviors. RESULTS: The results showed differences between the ideal and actual images, the former being represented more positively than the latter. Regarding dietary behavior, the results showed a decrease in the consumption of meats and black coffee and an increase in the consumption of fruits and vegetables in order to attain the ideal body. However, the sample would not change its habits regarding the consumption of cereals, dairy products, oils and fats, sweets, and soft drinks. CONCLUSION: Although the results show differences in the perception of body image, the sample would not change its habits regarding the categories at the top of the food pyramid.
Liang, Shanshan; Yuan, Fusong; Luo, Xu; Yu, Zhuoren; Tang, Zhihui
2018-04-05
Marginal discrepancy is key to evaluating the accuracy of fixed dental prostheses. An improved method of evaluating marginal discrepancy is needed. The purpose of this in vitro study was to evaluate the absolute marginal discrepancy of ceramic crowns fabricated using conventional and digital methods with a digital method for the quantitative evaluation of absolute marginal discrepancy. The novel method was based on 3-dimensional scanning, iterative closest point registration techniques, and reverse engineering theory. Six standard tooth preparations for the right maxillary central incisor, right maxillary second premolar, right maxillary second molar, left mandibular lateral incisor, left mandibular first premolar, and left mandibular first molar were selected. Ten conventional ceramic crowns and 10 CEREC crowns were fabricated for each tooth preparation. A dental cast scanner was used to obtain 3-dimensional data of the preparations and ceramic crowns, and the data were compared with the "virtual seating" iterative closest point technique. Reverse engineering software used edge sharpening and other functional modules to extract the margins of the preparations and crowns. Finally, quantitative evaluation of the absolute marginal discrepancy of the ceramic crowns was obtained from the 2-dimensional cross-sectional straight-line distance between points on the margin of the ceramic crowns and the standard preparations based on the circumferential function module along the long axis. The absolute marginal discrepancy of the ceramic crowns fabricated using conventional methods was 115 ±15.2 μm, and that of the crowns fabricated using the digital technique was 110 ±14.3 μm. ANOVA showed no statistically significant difference between the 2 methods or among ceramic crowns for different teeth (P>.05). The digital quantitative evaluation method for the absolute marginal discrepancy of ceramic crowns was established. The evaluations determined that the absolute marginal discrepancies were
Review of the Monte Carlo and deterministic codes in radiation protection and dosimetry
Tagziria, H.
2000-02-01
Modelling a physical system can be carried out either stochastically or deterministically. An example of the former is the Monte Carlo technique, in which statistically approximate methods are applied to exact models. No transport equation is solved: individual particles are simulated and some specific aspect (tally) of their average behaviour is recorded. The average behaviour of the physical system is then inferred using the central limit theorem. In contrast, deterministic codes use mathematically exact methods applied to approximate models to solve the transport equation for the average particle behaviour. The physical system is subdivided into boxes in phase space and particles are followed from one box to the next; the smaller the boxes, the better the approximation becomes. Although the Monte Carlo method has been used for centuries, its more recent manifestation really emerged from the Manhattan Project of World War II. Its invention is thought to be mainly due to Metropolis, Ulam (through his interest in poker), Fermi, von Neumann and Richtmyer. Over the last 20 years or so, the Monte Carlo technique has become a powerful tool in radiation transport. This is due to users taking full advantage of richer cross-section data, more powerful computers and Monte Carlo techniques for radiation transport, with high-quality physics and better known source spectra. The method is a common-sense approach to radiation transport, and its success and popularity are quite often also due to necessity, because measurements are not always possible or affordable. In the Monte Carlo method, which is inherently realistic because nature is statistical, more detailed physics is made possible by the isolation of events, while rather elaborate geometries can be modelled. Provided that the physics is correct, a simulation is exactly analogous to an experimenter counting particles. In contrast to the deterministic approach, however, a disadvantage of the
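The stochastic/deterministic contrast described above can be illustrated with a toy one-dimensional attenuation problem. The sketch below (plain Python, with illustrative attenuation coefficient and slab thickness) computes the uncollided transmission through a slab both ways: in closed form, and by tallying sampled particle histories whose average converges by the central limit theorem.

```python
import math
import random

def transmission_deterministic(mu, thickness):
    """Exact (deterministic) uncollided transmission through a slab:
    the closed-form solution of the 1D attenuation equation."""
    return math.exp(-mu * thickness)

def transmission_monte_carlo(mu, thickness, histories, seed=1):
    """Monte Carlo estimate: simulate individual particle histories and
    tally the fraction whose sampled free path exceeds the slab."""
    rng = random.Random(seed)
    crossed = 0
    for _ in range(histories):
        # Sample a free path from the exponential distribution p(s) = mu*exp(-mu*s)
        s = -math.log(1.0 - rng.random()) / mu
        if s > thickness:
            crossed += 1
    return crossed / histories

mu, t = 0.5, 2.0   # attenuation coefficient (1/cm) and slab thickness (cm), illustrative
exact = transmission_deterministic(mu, t)
estimate = transmission_monte_carlo(mu, t, histories=200_000)
print(f"deterministic: {exact:.4f}  Monte Carlo: {estimate:.4f}")
```

The tally's statistical error shrinks as one over the square root of the number of histories, which is why the abstract stresses computing power as an enabler of the method.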
Thway, Khin; Wang, Jayson; Mubako, Taka; Fisher, Cyril
2014-01-01
Introduction. Soft tissue tumour pathology is a highly specialised area of surgical pathology, but soft tissue neoplasms can occur at virtually all sites and are therefore encountered by a wide population of surgical pathologists. Potential sarcomas require referral to specialist centres for review by pathologists who see a large number of soft tissue lesions and where appropriate ancillary investigations can be performed. We have previously assessed the types of diagnostic discrepancies between referring and final diagnosis for soft tissue lesions referred to our tertiary centre. We now reaudit this 6 years later, assessing changes in discrepancy patterns, particularly in relation to the now widespread use of ancillary molecular diagnostic techniques which were not prevalent in our original study. Materials and Methods. We compared the sarcoma unit's histopathology reports with referring reports on 348 specimens from 286 patients with suspected or proven soft tissue tumours in a one-year period. Results. Diagnostic agreement was seen in 250 cases (71.8%), with 57 (16.4%) major and 41 (11.8%) minor discrepancies. There were 23 cases of benign/malignant discrepancies (23.5% of all discrepancies). 50 ancillary molecular tests were performed, 33 for aiding diagnosis and 17 mutational analyses for gastrointestinal stromal tumour to guide therapy. Findings from ancillary techniques contributed to 3 major and 4 minor discrepancies. While the results were broadly similar to those of the previous study, there was an increase in frequency of major discrepancies. Conclusion. Six years following our previous study and notably now in an era of widespread ancillary molecular diagnosis, the overall discrepancy rate between referral and tertiary centre diagnosis remains similar, but there is an increase in frequency of major discrepancies likely to alter patient management. A possible reason for the increase in major discrepancies is the increasing lack of exposure to soft tissue
Nagaraja, S.; Ullah, Q.; Lee, K.J.; Bickle, I.; Hon, L.Q.; Griffiths, P.D.; Raghavan, A.; Flynn, P.; Connolly, D.J.A.
2009-01-01
Aim: To evaluate the discrepancy rate among specialist registrars (SPR) to assess whether seniority had a bearing on the discrepancy rate. To investigate which were the commonly missed abnormalities and the consequences for teaching purposes. To investigate the role of a specialist consultant neuroradiologist in reporting paediatric head computed tomography examinations. Materials and methods: The study was carried out over a 9-month period at the regional paediatric hospital during which time 270 CT head examinations were reported. Reporting in the department is carried out by one of the five general paediatric radiologists (GR) and also a specialist paediatric neuroradiologist (NR). The NR was considered the reference standard, who corroborated in areas of discrepancy with a second senior NR for this study. Of the 270 examinations, 260 were reported by the paediatric NR, 160 were reported by the SPR, GR, and NR, and 51 were reported by an SPR and the NR. In addition, four were reported by the GR and the NR, 45 by the NR only, seven by the GR only, and three cases were reported by the GR and an SPR. The discrepancy rates were calculated for GR versus NR, and SPR versus NR. All the discrepancies were re-evaluated by a second senior NR and confirmed in all cases. The reports of the SPR were further scrutinized. The trainees of training years 1-3 were considered junior and 4-5 were considered senior. Results: There was a discrepancy in 26/164 cases (15.9%) reported by the GR and NR. There was a discrepancy in 59/211 cases (28%) reported by an SPR and NR. The chi-squared test (two-sided) showed a significant difference (p = 0.005) between the two groups. There was a discrepancy in 36/118 cases (30.5%) reported by the junior SPR and NR. There was a discrepancy in 23/93 cases (24.7%) reported by a senior SPR and NR. The chi-squared test (two-sided) showed a non-significant difference (p = 0.353) between the two groups. Conclusion: The performance of the SPR was
Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide (Russian Edition)
2014-01-01
The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References
Evaluation of the risk associated with the storage of radioactive wastes. The deterministic approach
Lewi, J.
1988-07-01
Radioactive waste storage facility safety depends on a certain number of barriers being placed between the waste and man. These barriers, some of which are artificial (the waste package and engineered barriers) and others natural (geological formations), have characteristics suited to the type of storage facility (surface storage or storage in deep geological formations). The combination of these different barriers provides protection for man under all circumstances considered plausible. Justification, for the storage of given quantities of radionuclides, of the choice of the site, the artificial barriers and the overall storage architecture is obtained by evaluating the risk, which provides the basis for determining the acceptability of the storage facility. One of the following two methods is normally used for evaluation of the risk: the deterministic method and the probabilistic method. This address describes the deterministic method. This method is employed in France for the safety analysis of the projects and works of ANDRA, the national agency responsible for the management of radioactive waste. It should be remembered that in France, the La Manche surface storage centre for low- and medium-activity waste has been in existence since 1969, close to the reprocessing plant at La Hague, and a second surface storage centre is to be commissioned around 1991 at Soulaines in the centre of France (département de l'Aube). Furthermore, geological surveying of four sites located in geological formations consisting of granite, schist, clay and salt was begun in 1987 for the selection, in about three years' time, of a site for the creation of an underground laboratory. This could later be transformed, if safety is demonstrated, into a deep storage centre
Sakai, Kenshi; Upadhyaya, Shrinivasa K; Andrade-Sanchez, Pedro; Sviridova, Nina V
2017-03-01
Real-world processes are often combinations of deterministic and stochastic processes. Soil failure observed during farm tillage is one example of this phenomenon. In this paper, we investigated the nonlinear features of soil failure patterns in a farm tillage process. We demonstrate emerging determinism in soil failure patterns from stochastic processes under specific soil conditions. We normalized the deterministic nonlinear prediction considering autocorrelation and propose it as a robust way of extracting a nonlinear dynamical system from noise contaminated motion. Soil is a typical granular material. The results obtained here are expected to be applicable to granular materials in general. From a global scale to nano scale, the granular material is featured in seismology, geotechnology, soil mechanics, and particle technology. The results and discussions presented here are applicable in these wide research areas. The proposed method and our findings are useful with respect to the application of nonlinear dynamics to investigate complex motions generated from granular materials.
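As a hedged illustration of deterministic nonlinear prediction (not the authors' normalized algorithm, which additionally corrects for autocorrelation), the sketch below applies one-step nearest-neighbour prediction to a chaotic logistic-map series and to i.i.d. noise; determinism shows up as prediction error well below the series' own variability.

```python
import math
import random

def nn_prediction_error(series, train_frac=0.7):
    """One-step nonlinear prediction: predict x[t+1] as the successor of the
    nearest neighbour of x[t] in the training part of the series. Returns the
    RMS prediction error normalised by the series standard deviation
    (values well below 1 indicate deterministic predictability)."""
    n = len(series)
    split = int(n * train_frac)
    train = series[:split]
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    errs = []
    for t in range(split, n - 1):
        # nearest neighbour of the current state among past states
        j = min(range(split - 1), key=lambda i: abs(train[i] - series[t]))
        errs.append(train[j + 1] - series[t + 1])
    return math.sqrt(sum(e * e for e in errs) / len(errs)) / std

rng = random.Random(0)
x, chaotic = 0.4, []
for _ in range(600):
    x = 3.9 * x * (1.0 - x)          # logistic map: deterministic chaos
    chaotic.append(x)
noise = [rng.random() for _ in range(600)]  # purely stochastic series

err_chaotic = nn_prediction_error(chaotic)
err_noise = nn_prediction_error(noise)
print("chaotic series error:", round(err_chaotic, 3))
print("iid noise error:     ", round(err_noise, 3))
```

Real soil-failure signals sit between these two extremes, which is what motivates normalizing the prediction statistic before inferring determinism.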
Scott Ferrenberg
2016-10-01
Full Text Available Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (Beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the
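A minimal sketch of the null-deviation idea used above, with hypothetical morphospecies counts: the observed Bray-Curtis dissimilarity between two plots is compared against the mean dissimilarity of communities assembled by randomly reshuffling the pooled individuals (a simple stochastic-assembly null model).

```python
import random

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1.0 - 2.0 * shared / total

def null_deviation(site1, site2, trials=2000, seed=0):
    """Observed dissimilarity minus the mean dissimilarity under random
    shuffling of pooled individuals. Values near 0 suggest stochastic
    assembly; large positive values suggest deterministic structuring."""
    rng = random.Random(seed)
    observed = bray_curtis(site1, site2)
    pool = []
    for sp, (n1, n2) in enumerate(zip(site1, site2)):
        pool.extend([sp] * (n1 + n2))   # one entry per individual, tagged by species
    n_site1 = sum(site1)
    nulls = []
    for _ in range(trials):
        rng.shuffle(pool)
        c1 = [0] * len(site1)
        c2 = [0] * len(site2)
        for ind in pool[:n_site1]:
            c1[ind] += 1
        for ind in pool[n_site1:]:
            c2[ind] += 1
        nulls.append(bray_curtis(c1, c2))
    return observed - sum(nulls) / trials

# Hypothetical morphospecies counts at a disturbed and an intact plot
disturbed = [30, 5, 0, 1, 0]
intact    = [2, 1, 25, 10, 8]
nd = null_deviation(disturbed, intact)
print("null deviation:", round(nd, 3))
```

With these strongly contrasting hypothetical counts the deviation is large and positive, the pattern the study reads as a deterministic signal.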
On the implementation of a deterministic secure coding protocol using polarization entangled photons
Ostermeyer, Martin; Walenta, Nino
2007-01-01
We demonstrate a prototype-implementation of deterministic information encoding for quantum key distribution (QKD) following the ping-pong coding protocol [K. Bostroem, T. Felbinger, Phys. Rev. Lett. 89 (2002) 187902-1]. Due to the deterministic nature of this protocol the need for post-processing the key is distinctly reduced compared to non-deterministic protocols. In the course of our implementation we analyze the practicability of the protocol and discuss some security aspects of informat...
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
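A minimal sketch of the energy-minimization principle for a Newtonian fluid (illustrative geometry and viscosity; not the paper's optimization algorithms): the flow split between two parallel tubes found by minimizing viscous dissipation matches the classical equal-pressure-drop solution from conservation principles.

```python
import math

def poiseuille_resistance(radius, length, viscosity):
    """Hagen-Poiseuille resistance of a straight circular tube."""
    return 8.0 * viscosity * length / (math.pi * radius ** 4)

def split_by_energy_minimization(q_total, r1, r2, iters=200):
    """Find the flow split between two parallel tubes by minimising the total
    viscous dissipation D(q1) = q1^2*R1 + (Q - q1)^2*R2 via ternary search."""
    dissipation = lambda q1: q1 ** 2 * r1 + (q_total - q1) ** 2 * r2
    lo, hi = 0.0, q_total
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if dissipation(m1) < dissipation(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

mu, L, Q = 1.0e-3, 0.5, 1.0e-6          # Pa*s, m, m^3/s (illustrative values)
R1 = poiseuille_resistance(1.0e-3, L, mu)
R2 = poiseuille_resistance(1.5e-3, L, mu)
q1_energy = split_by_energy_minimization(Q, R1, R2)
q1_exact = Q * R2 / (R1 + R2)           # classical equal-pressure-drop solution
print(f"energy minimisation: {q1_energy:.4e}  conservation laws: {q1_exact:.4e}")
```

The agreement in this quadratic case mirrors the paper's finding that energy minimization reproduces the solutions of the traditional conservation-based methods.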
Kotiluoto, P.
2007-05-01
A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In the novel approach, the adaptive tree multigrid technique is used in conjunction with simplified spherical harmonics approximation of the Boltzmann transport equation. The development of the new radiation transport code started in the framework of the Finnish boron neutron capture therapy (BNCT) project. Since the application of the MultiTrans code to BNCT dose planning problems, the testing and development of the MultiTrans code has continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been indicated by verifying and validating the code performance for different types of neutral and charged particle transport problems, reported in separate publications. (orig.)
Sensitivity analysis of the TITAN hybrid deterministic transport code for SPECT simulation
Royston, Katherine K.; Haghighat, Alireza
2011-01-01
Single photon emission computed tomography (SPECT) has been traditionally simulated using Monte Carlo methods. The TITAN code is a hybrid deterministic transport code that has recently been applied to the simulation of a SPECT myocardial perfusion study. For modeling SPECT, the TITAN code uses a discrete ordinates method in the phantom region and a combined simplified ray-tracing algorithm with a fictitious angular quadrature technique to simulate the collimator and generate projection images. In this paper, we compare the results of an experiment with a physical phantom with predictions from the MCNP5 and TITAN codes. While the results of the two codes are in good agreement, they differ from the experimental data by ∼ 21%. In order to understand these large differences, we conduct a sensitivity study by examining the effect of different parameters including heart size, collimator position, collimator simulation parameter, and number of energy groups. (author)
Yang, Y M; Bush, K; Han, B; Xing, L [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)
2016-06-15
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, deterministic dose calculations account for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high
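A drastically simplified sketch of the hybrid idea (1D uncollided transmission only, with hypothetical attenuation coefficients; the actual platform transports full energy fluence in Geant4): transport is deterministic in the homogeneous regions and Monte Carlo only inside the heterogeneity, with results matched at the two interfaces.

```python
import math
import random

def hybrid_transmission(mu_water, mu_bone, a, b, c, histories=100_000, seed=2):
    """Sketch of a 'localized Monte Carlo' hybrid: attenuation is computed
    deterministically in the homogeneous water upstream [0, a] and
    downstream [b, c], and stochastically inside the heterogeneous slab [a, b]."""
    rng = random.Random(seed)
    # Deterministic transport up to the heterogeneity surface
    fluence_in = math.exp(-mu_water * a)
    # Monte Carlo transport across the heterogeneous slab
    survived = 0
    for _ in range(histories):
        s = -math.log(1.0 - rng.random()) / mu_bone   # sampled free path
        if s > (b - a):
            survived += 1
    mc_transmission = survived / histories
    # Deterministic transport from the exit surface to the detector at c
    return fluence_in * mc_transmission * math.exp(-mu_water * (c - b))

mu_w, mu_b = 0.2, 0.5      # 1/cm, illustrative coefficients for water and bone
a, b, c = 3.0, 5.0, 8.0    # region boundaries (cm)
hybrid = hybrid_transmission(mu_w, mu_b, a, b, c)
analytic = math.exp(-(mu_w * (a + c - b) + mu_b * (b - a)))
print(f"hybrid: {hybrid:.5f}  analytic: {analytic:.5f}")
```

Because the stochastic work is confined to the slab, the cost of the Monte Carlo part scales with the heterogeneous volume, echoing the speed-up reported in the abstract.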
Kang, Dong Gu, E-mail: littlewing@kins.re.kr [Korea Institute of Nuclear Safety, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)
2014-08-15
Highlights: • The combined deterministic and probabilistic procedure (CDPP) was proposed for safety assessment of BDBAs. • The safety assessment of the OPR-1000 nuclear power plant for an SBO accident is performed by applying the CDPP. • By estimating the offsite power restoration time appropriately, the SBO risk is reevaluated. • It is concluded that the CDPP is applicable to safety assessment of BDBAs without significant erosion of the safety margin. - Abstract: Station blackout (SBO) is a typical beyond design basis accident (BDBA) and a significant contributor to overall plant risk. The risk analysis of SBO could be an important basis for rulemaking, accident mitigation strategy, etc. Recently, studies on the integrated approach of deterministic and probabilistic methods for nuclear safety in nuclear power plants have been done, and among them, the combined deterministic and probabilistic procedure (CDPP) was proposed for safety assessment of BDBAs. In the CDPP, the conditional exceedance probability obtained by the best estimate plus uncertainty method acts as a bridge between deterministic and probabilistic safety assessments, resulting in more reliable values of core damage frequency and conditional core damage probability. In this study, the safety assessment of the OPR-1000 nuclear power plant for an SBO accident was performed by applying the CDPP. It was confirmed that the SBO risk should be reevaluated by eliminating excessive conservatism in the existing probabilistic safety assessment to meet the targeted core damage frequency and conditional core damage probability. By estimating the offsite power restoration time appropriately, the SBO risk was reevaluated, and it was finally confirmed that the current OPR-1000 system lies within the acceptable risk for SBO. In addition, it is concluded that the CDPP is applicable to safety assessment of BDBAs in nuclear power plants without significant erosion of the safety margin.
Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 'Rayonnement Synchrotron et Recherche Medicale', Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr
2009-08-07
A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method. The gain in speed in a test case was about 25 for a constant precision. Therefore, this method appears to be suitable for treatment planning applications.
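The splitting mechanism described above can be sketched on a toy tally (hypothetical interaction and escape probabilities; not the Geant4 implementation): each scattering event spawns several reduced-weight secondaries, leaving the estimator unbiased while reducing its variance.

```python
import random

def scattered_dose_estimate(p_scatter, p_escape, histories, split=1, seed=3):
    """Estimate the scattered contribution E = p_scatter * p_escape.
    With split > 1, each scattering event spawns `split` secondary photons
    of weight 1/split whose fate is sampled independently (the splitting
    mechanism); the estimator stays unbiased while its variance drops."""
    rng = random.Random(seed)
    tally = 0.0
    for _ in range(histories):
        if rng.random() < p_scatter:            # a scattering event occurs
            for _ in range(split):              # splitting: weight-1/split copies
                if rng.random() < p_escape:     # secondary photon reaches the tally
                    tally += 1.0 / split
    return tally / histories

exact = 0.3 * 0.1                               # analytic value for this toy model
analog = scattered_dose_estimate(0.3, 0.1, 50_000, split=1)
with_split = scattered_dose_estimate(0.3, 0.1, 50_000, split=20)
print(f"exact: {exact:.4f}  analog: {analog:.4f}  split: {with_split:.4f}")
```

The splitting multiplicity plays the same tuning role as in the paper: larger multiplicities buy variance reduction at the cost of more deterministic follow-up work per scattering event.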
Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy
Zelyak, O.; Fallone, B. G.; St-Aubin, J.
2018-01-01
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy
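The convergence-rate estimation at the heart of such a stability analysis can be sketched on a toy stationary iteration (a hand-picked 2x2 iteration operator standing in for the source-iteration operator): the asymptotic ratio of successive update norms recovers the spectral radius that governs how fast, or whether, the scheme converges.

```python
import math

def estimated_spectral_radius(M, b, x0, iters=60):
    """Run the stationary iteration x <- M x + b and estimate the spectral
    radius of M from the asymptotic ratio of successive update norms
    (the convergence rate that a Fourier/spectral analysis predicts)."""
    n = len(b)
    matvec = lambda A, v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = list(x0)
    prev_norm, ratio = None, None
    for _ in range(iters):
        x_new = [m + bi for m, bi in zip(matvec(M, x), b)]
        diff = math.sqrt(sum((u - v) ** 2 for u, v in zip(x_new, x)))
        if prev_norm:
            ratio = diff / prev_norm
        prev_norm, x = diff, x_new
    return ratio

# Toy iteration operator with known spectral radius 0.8, the analogue of the
# scattering ratio that governs source-iteration convergence
M = [[0.8, 0.0],
     [0.3, 0.5]]
b = [1.0, 1.0]
rho = estimated_spectral_radius(M, b, x0=[0.0, 0.0])
print("estimated spectral radius:", round(rho, 4))
```

A spectral radius approaching 1, as the paper reports for low-density media in strong magnetic fields, means the update norms shrink arbitrarily slowly, which is exactly why a Krylov solver such as GMRES is brought in.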
Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".
Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel
2018-03-12
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy
Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.
Zelyak, O; Fallone, B G; St-Aubin, J
2017-12-14
Modern efforts in radiotherapy to address the challenges of tumor localization and motion have led to the development of MRI-guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic fields, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm, including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields, using a space-angle upwind stabilized discontinuous finite element method (DFEM), was also found to be unconditionally stable, but the spectral radius rapidly approaches unity for very low-density media and increasing magnetic field strengths, indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate, showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy
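The convergence argument in the abstract above can be illustrated on a toy fixed-point problem: the spectral radius of the stationary iteration operator governs source-iteration convergence, while GMRES on the equivalent linear system tolerates spectral radii near unity. This is a minimal sketch, not the LBTE solver from the paper; the 50x50 random "iteration matrix" is an assumed stand-in (numpy and scipy assumed available):

```python
import numpy as np
from scipy.sparse.linalg import gmres

def spectral_radius(M):
    """Largest eigenvalue magnitude of the iteration matrix: rho(M) < 1
    is exactly the condition for the stationary iteration to converge."""
    return max(abs(np.linalg.eigvals(M)))

def source_iteration(M, b, iters=200):
    """Stationary fixed-point iteration x <- M x + b, whose limit solves
    (I - M) x = b; the error shrinks like rho(M)**k."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = M @ x + b
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
M = 0.9 * A / spectral_radius(A)     # engineered so rho(M) = 0.9 (slow SI)
b = rng.standard_normal(50)
I = np.eye(50)

x_si = source_iteration(M, b)                  # converges at rate ~0.9 per sweep
x_gmres, info = gmres(I - M, b, atol=1e-10)    # Krylov solve of the same system

residual_si = np.linalg.norm((I - M) @ x_si - b)
residual_gmres = np.linalg.norm((I - M) @ x_gmres - b)
```

As rho(M) is pushed toward 1 (the abstract's low-density, high-field regime), the stationary iteration stalls while the Krylov iteration count grows only mildly, which is the qualitative behavior the paper reports.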
Discrepancy and Disliking Do Not Induce Negative Opinion Shifts
Flache, Andreas; Mäs, Michael
2016-01-01
Both classical social psychological theories and recent formal models of opinion differentiation and bi-polarization assign a prominent role to negative social influence. Negative influence is defined as shifts away from the opinion of others and hypothesized to be induced by discrepancy with or disliking of the source of influence. There is strong empirical support for the presence of positive social influence (a shift towards the opinion of others), but evidence that large opinion differences or disliking could trigger negative shifts is mixed. We examine positive and negative influence with controlled exposure to opinions of other individuals in one experiment and with opinion exchange in another study. Results confirm that similarities induce attraction, but results do not support that discrepancy or disliking entails negative influence. Instead, our findings suggest a robust positive linear relationship between opinion distance and opinion shifts. PMID:27333160
Numerical discrepancy between serial and MPI parallel computations
Sang Bong Lee
2016-09-01
Numerical simulations of the 1D Burgers equation and a 2D sloshing problem were carried out to study the numerical discrepancy between serial and parallel computations. The numerical domain was decomposed into 2 and 4 subdomains for parallel computations with the message passing interface (MPI). The numerical solution of the Burgers equation disclosed that the fully explicit boundary conditions used on the subdomains of the parallel computation were responsible for the numerical discrepancy of the transient solution between serial and parallel computations. Two-dimensional sloshing problems in a rectangular domain were solved using OpenFOAM. After a lapse of the initial transient time, the sloshing patterns of water were significantly different in serial and parallel computations, although the same numerical conditions were given. Based on the histograms of pressure measured at two points near the wall, the statistical characteristics of the numerical solution were not affected by the number of subdomains as much as the transient solution was.
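The mechanism identified above, explicit and therefore potentially stale interface values on subdomain boundaries perturbing the transient solution, can be reproduced in a few lines. A minimal sketch, assuming a first-order upwind discretization of the inviscid Burgers equation on a periodic domain (not the authors' actual scheme), where the "parallel" run refreshes its interface ghost values only every other step:

```python
import numpy as np

def step(u, left_ghost, dt, dx):
    """One explicit first-order upwind step of u_t + u u_x = 0 (assumes u > 0)."""
    um1 = np.concatenate(([left_ghost], u[:-1]))   # left neighbor of each cell
    return u - dt / dx * u * (u - um1)

nx, nt, dx, dt = 100, 50, 0.01, 0.005
u0 = 1.0 + 0.5 * np.sin(2.0 * np.pi * dx * np.arange(nx))

# Serial reference: the whole periodic domain advanced at once.
u = u0.copy()
for _ in range(nt):
    u = step(u, u[-1], dt, dx)

# Decomposed run: two subdomains whose interface ghost values are refreshed
# only every other step, so each subdomain sometimes sees a stale boundary.
a, b = u0[:nx // 2].copy(), u0[nx // 2:].copy()
ga, gb = b[-1], a[-1]
for n in range(nt):
    if n % 2 == 0:
        ga, gb = b[-1], a[-1]   # exchange ghosts (periodic: a's left is b's end)
    a, b = step(a, ga, dt, dx), step(b, gb, dt, dx)

u_par = np.concatenate([a, b])
discrepancy = float(np.max(np.abs(u - u_par)))   # transient solutions differ
```

With ghost exchange every step the decomposed explicit update reproduces the serial run bit-for-bit; the nonzero `discrepancy` here comes entirely from the lagged boundary data, mirroring the paper's diagnosis.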
Overnight shift work: factors contributing to diagnostic discrepancies.
Hanna, Tarek N; Loehfelm, Thomas; Khosa, Faisal; Rohatgi, Saurabh; Johnson, Jamlik-Omari
2016-02-01
The aims of the study are to identify factors contributing to preliminary interpretive discrepancies on overnight radiology resident shifts and apply this data in the context of known literature to draw parallels to attending overnight shift work schedules. Residents in one university-based training program provided preliminary interpretations of 18,488 overnight (11 pm–8 am) studies at a level 1 trauma center between July 1, 2013 and December 31, 2014. As part of their normal workflow and feedback, attendings scored the reports as major discrepancy, minor discrepancy, agree, and agree--good job. We retrospectively obtained the preliminary interpretation scores for each study. Total relative value units (RVUs) per shift were calculated as an indicator of overnight workload. The dataset was supplemented with information on trainee level, number of consecutive nights on night float, hour, modality, and per-shift RVU. The data were analyzed with proportional logistic regression and Fisher's exact test. There were 233 major discrepancies (1.26 %). Trainee level (senior vs. junior residents; 1.08 vs. 1.38 %; p performance. Increased workload affected more junior residents' performance, with R3 residents performing significantly worse on busier nights. Hour of the night was not significantly associated with performance, but there was a trend toward best performance at 2 am, with subsequent decreased accuracy throughout the remaining shift hours. Improved performance occurred after the first six night float shifts, presumably as residents acclimated to a night schedule. As overnight shift work schedules increase in popularity for residents and attendings, focused attention to factors impacting interpretative accuracy is warranted.
Jager, Margot; Reijneveld, Sijmen A.; Metselaar, Janneke; Knorth, Erik J.; De Winter, Andrea F.
2014-01-01
Objective: To examine adolescents' attributed relevance and experiences regarding communication, and whether discrepancies in these are associated with clients' participation and learning processes in psychosocial care. Methods: Adolescents receiving psychosocial care (n = 211) completed measures of
Ibrahima F
2014-05-01
Farikou Ibrahima,1,2 Pius Fokam,2 Félicien Faustin Mouafo Tambo1; 1Department of Surgery and Specialties, Faculty of Medicine and Biomedical Sciences, University of Yaoundé I, Yaoundé; 2Department of Surgery, Douala General Hospital, Douala, Cameroon. Background: We present a case of lengthening of a tibia to treat post-osteomyelitis pseudarthrosis and limb length discrepancy with the Ilizarov device. Objective: The objective was to treat the pseudarthrosis and correct the consequent limb length discrepancy of 50 mm. Materials and methods: The patient was a 5-year-old boy. Osteotomy of the tibia, excision of fibrosis, and decortications were carried out. After a latency period of 5 days, the lengthening started at a rate of 1 mm per day. Results: The pseudarthrosis healed and the gained correction was 21.73%. The consolidation index was 49 days/cm. Minor complications were reported. Discussion: Osteomyelitis of long bones is a common poverty-related disease in Africa. The disease is usually diagnosed at an advanced stage with complications. In these conditions, treatment is much more difficult. Most surgical procedures treating this condition use the Ilizarov device. The most commonly reported surgical complications are refractures and recurrence of infection. Conclusion: This technique should be popularized in countries with limited resources because it is an attractive alternative to the amputations that are sometimes performed. Keywords: limb length discrepancy (LLD), bone gap, Ilizarov device
An Analysis of the Discrepancies between MODIS and INSAT-3D LSTs in High Temperatures
Seyed Kazem Alavipanah
2017-04-01
In many disciplines, knowledge of the accuracy of Land Surface Temperature (LST) as an input is of great importance. One of the most efficient methods of LST evaluation is cross validation. Well-documented and validated polar satellites with a high spatial resolution can be used as references for validating geostationary LST products. This study attempted to investigate the discrepancies between Moderate Resolution Imaging Spectro-radiometer (MODIS) and Indian National Satellite (INSAT-3D) LSTs for high temperatures, focusing on six deserts with sand dune land cover in the Middle East from 3 March 2015 to 24 August 2016. First, the variability of LSTs in the deserts of the study area was analyzed by comparing the mean, standard deviation (STD), skewness, minimum, and maximum criteria for each observation time. The mean value of the LST observations indicated that the MYD-D observation times are closer to those of the diurnal maximum and minimum LSTs. At all times, the LST observations exhibited a negative skewness, and the STD indicated higher variability during times of MOD-D. The observed maximum LSTs from MODIS collection 6 showed higher values in comparison with the previous versions of LSTs for hot-spot regions around the world. After the temporal, spatial, and geometrical matching of the LST products, the mean of the MODIS–INSAT LST differences was calculated for the study area. The results demonstrated that discrepancies increased with temperature up to +15.5 K. The slopes of the mean differences were relatively similar for all deserts except An Nafud, suggesting an effect of View Zenith Angle (VZA). For modeling the discrepancies between the two sensors in continuous space, the Diurnal Temperature Cycles (DTC) of both sensors were constructed and compared. The sample DTC models corroborated the results from the discrete LST subtractions and pointed to uncertainties within the MODIS DTCs. The authors propose that the observed LST discrepancies in high
Discrepancies in Communication Versus Documentation of Weight-Management Benchmarks
Christy B. Turer MD, MHS
2017-02-01
To examine gaps in communication versus documentation of weight-management clinical practices, communication was recorded during primary care visits with 6- to 12-year-old overweight/obese Latino children. Communication/documentation content was coded by 3 reviewers using communication transcripts and health-record documentation. Discrepancies in communication/documentation content codes were resolved through consensus. Bivariate/multivariable analyses examined factors associated with discrepancies in benchmark communication/documentation. Benchmarks were neither communicated nor documented in up to 42% of visits, and communicated but not documented or documented but not communicated in up to 20% of visits. The lowest benchmark performance rates were for laboratory studies (35%) and nutrition/weight-management referrals (42%). In multivariable analysis, overweight (vs obesity) was associated with 1.6 more discrepancies in communication versus documentation (P = .03). Many weight-management benchmarks are not met, not documented, or performed without being communicated. Enhanced communication with families and documentation in health records may promote lifestyle changes and higher quality care for overweight children in primary care.
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-05-01
Regards and Tribute: The late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude and wish the departed soul to rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and the identification of occlusal misfit produced by tooth-size discrepancies. The aim was to determine any difference in tooth-size discrepancy, in the anterior as well as the overall ratio, in different malocclusions, and to compare the results with Bolton's study. After measuring the teeth of all 100 patients, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations and were also subjected to statistical analysis. The results show that the means and standard deviations of the ideal occlusion cases are comparable with those of Bolton; however, when the means and standard deviations of the malocclusion groups are compared with those of Bolton, the values of the standard deviation are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
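Bolton's analysis referenced above is simple arithmetic: each ratio is the summed mesiodistal widths of the mandibular teeth divided by the corresponding maxillary sum, times 100, compared against Bolton's published norms (anterior 77.2 +/- 1.65, overall 91.3 +/- 1.91). A sketch with hypothetical tooth widths; the measurements are illustrative, not from the study:

```python
def bolton_ratios(maxillary, mandibular):
    """Bolton tooth-size ratios from mesiodistal widths in mm. Each arch is
    listed with its six anterior teeth first, then the six posterior teeth."""
    anterior = 100.0 * sum(mandibular[:6]) / sum(maxillary[:6])
    overall = 100.0 * sum(mandibular) / sum(maxillary)
    return anterior, overall

# Hypothetical widths (mm) for one patient, 12 teeth per arch, anteriors first.
maxillary  = [8.5, 8.5, 6.5, 6.5, 7.6, 7.6, 7.0, 7.0, 6.8, 6.8, 10.0, 10.0]
mandibular = [5.0, 5.0, 5.9, 5.9, 6.9, 6.9, 7.0, 7.0, 7.1, 7.1, 11.0, 11.0]

anterior, overall = bolton_ratios(maxillary, mandibular)

# A ratio beyond roughly 2 SD of Bolton's norm flags a tooth-size discrepancy.
anterior_flagged = abs(anterior - 77.2) > 2 * 1.65
overall_flagged = abs(overall - 91.3) > 2 * 1.91
```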
Salary discrepancies between practicing male and female physician assistants.
Coplan, Bettie; Essary, Alison C; Virden, Thomas B; Cawley, James; Stoehr, James D
2012-01-01
Salary discrepancies between male and female physicians are well documented; however, gender-based salary differences among clinically practicing physician assistants (PAs) have not been studied since 1992 (Willis, 1992). Therefore, the objectives of the current study are to evaluate the presence of salary discrepancies between clinically practicing male and female PAs and to analyze the effect of gender on income and practice characteristics. Using data from the 2009 American Academy of Physician Assistants' (AAPA) Annual Census Survey, we evaluated the salaries of PAs across multiple specialties. Differences between men and women were compared for practice characteristics (specialty, experience, etc) and salary (total pay, base pay, on-call pay, etc) in orthopedic surgery, emergency medicine, and family practice. Men reported working more years as a PA in their current specialty, working more hours per month on-call, providing more direct care to patients, and more funding available from their employers for professional development (p pay, overtime pay, administrative pay, on-call pay, and incentive pay based on productivity and performance (p pay (p = .001) in orthopedic surgery, higher total income (p = .011) and base pay (p = .005) in emergency medicine, and higher base pay in family practice (p discrepancies remain between employed male and female PAs regardless of specialty, experience, or other practice characteristics. Copyright © 2012. Published by Elsevier Inc.
Classification and unification of the microscopic deterministic traffic models.
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
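The optimal velocity (OV) family that serves as the master model's reference point above has a compact concrete form. A minimal sketch of a Bando-type OV model on a ring road; the tanh velocity function and parameter values are illustrative assumptions, chosen in the linearly stable regime where uniform flow is the attractor:

```python
import numpy as np

def V(h, vmax=2.0):
    """Bando-type optimal velocity function (an assumed illustrative form)."""
    return vmax * (np.tanh(h - 2.0) + np.tanh(2.0)) / (1.0 + np.tanh(2.0))

def simulate(n=20, L=60.0, a=2.0, dt=0.01, steps=5000):
    """Cars on a ring road: dv_i/dt = a * (V(headway_i) - v_i).
    With sensitivity a > 2 V'(L/n) the uniform flow is linearly stable, so a
    small perturbation decays and every car relaxes to speed V(L/n)."""
    x = np.linspace(0.0, L, n, endpoint=False) + 0.1 * np.sin(np.arange(n))
    v = np.zeros(n)
    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % L   # distance to the car ahead
        v += dt * a * (V(headway) - v)
        x = (x + dt * v) % L
    return v

v_final = simulate()   # should settle near the equilibrium speed V(60/20)
```

Lowering the sensitivity `a` below the stability bound is what turns this uniform flow into stop-and-go waves, the regime distinction the classification in the paper is built around.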
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production, and a large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
Zio, Enrico
2014-01-01
Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system process. • Also, IDPSA considers time-dependent interactions among control and operator actions. • Computational efficiency by advanced Monte Carlo and meta-modelling simulations. • Efficient post-processing of IDPSA output by clustering and data mining. - Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, like nuclear, aerospace and process ones, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these and discuss the related implications in terms of research perspectives
Analysis of deterministic cyclic gene regulatory network models with delays
Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian
2015-01-01
This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays and a special case of a homogenous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
Distributed Design of a Central Service to Ensure Deterministic Behavior
Imran Ali Jokhio
2012-10-01
A central authentication service for the EPC (Electronic Product Code) system architecture was proposed in our previous work. A challenge for any central service is how to ensure a certain level of delay while processing emergent data. The growing volume of data in the EPC system architecture comes from tag data. Therefore, authenticating an increasing number of tags in the central authentication service with a deterministic time response is investigated, and a distributed authentication service is designed in a layered approach. A distributed design of tag searching services in the SOA (Service Oriented Architecture) style is also presented. Using the SOA architectural style, a self-adaptive authentication service over the Cloud is also proposed for the central authentication service, which may also be extended for other applications.
Deterministic Evolutionary Trajectories Influence Primary Tumor Growth: TRACERx Renal
Turajlic, Samra; Xu, Hang; Litchfield, Kevin
2018-01-01
The evolutionary features of clear-cell renal cell carcinoma (ccRCC) have not been systematically studied to date. We analyzed 1,206 primary tumor regions from 101 patients recruited into the multi-center prospective study, TRACERx Renal. We observe up to 30 driver events per tumor and show...... that subclonal diversification is associated with known prognostic parameters. By resolving the patterns of driver event ordering, co-occurrence, and mutual exclusivity at clone level, we show the deterministic nature of clonal evolution. ccRCC can be grouped into seven evolutionary subtypes, ranging from tumors...... outcome. Our insights reconcile the variable clinical behavior of ccRCC and suggest evolutionary potential as a biomarker for both intervention and surveillance....
HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks
Luca Marchetti
2017-01-01
HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
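The stochastic/deterministic pairing that HSimulator optimizes can be seen in miniature on the decay reaction A -> B: Gillespie's direct method (the exact SSA that hybrid schemes like HRSSA reserve for slow reactions) averages to the deterministic ODE solution. This sketch is a generic SSA in Python, not HSimulator's API (the simulator itself is a Java library):

```python
import math
import random

def ssa_decay(a0, k, t_end, rng):
    """Gillespie direct method for the single reaction A -> B with propensity
    k*A: draw exponential waiting times and fire one reaction at a time."""
    t, a = 0.0, a0
    while a > 0:
        t += rng.expovariate(k * a)    # waiting time to the next firing
        if t > t_end:
            break
        a -= 1
    return a

rng = random.Random(42)
a0, k, t_end, runs = 100, 0.5, 1.0, 2000
mean_a = sum(ssa_decay(a0, k, t_end, rng) for _ in range(runs)) / runs

# Deterministic counterpart: dA/dt = -k A, so A(t) = A0 * exp(-k t).
deterministic = a0 * math.exp(-k * t_end)
```

A hybrid strategy exploits exactly this agreement: species with large counts are advanced by the cheap ODE while low-count species keep the exact stochastic treatment.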
Deterministic secure communications using two-mode squeezed states
Marino, Alberto M.; Stroud, C. R. Jr.
2006-01-01
We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state
Deterministically entangling multiple remote quantum memories inside an optical cavity
Yan, Zhihui; Liu, Yanhong; Yan, Jieli; Jia, Xiaojun
2018-01-01
Quantum memory for nonclassical states of light and entanglement among multiple remote quantum nodes hold promise for a large-scale quantum network; however, continuous-variable (CV) memory efficiency and the degree of entanglement are limited by imperfect implementations. Here we propose a scheme to deterministically entangle multiple distant atomic ensembles based on CV cavity-enhanced quantum memory. The memory efficiency can be improved with the help of cavity-enhanced electromagnetically induced transparency dynamics. A high degree of entanglement among multiple atomic ensembles can be obtained by mapping the quantum state from multiple entangled optical modes into a collection of atomic spin waves inside optical cavities. Besides being of interest in terms of unconditional entanglement among multiple macroscopic objects, our scheme paves the way towards the practical application of quantum networks.
A deterministic model of nettle caterpillar life cycle
Syukriyah, Y.; Nuraini, N.; Handayani, D.
2018-03-01
Palm oil is a flagship product of the plantation sector in Indonesia, and palm oil productivity has the potential to increase every year; however, actual productivity remains below this potential. Pests and diseases are the main factors that can reduce production levels by up to 40%. The presence of pests on plants can be caused by various factors, so measures for controlling pest attacks should be prepared as early as possible. Caterpillars are the main pests in oil palm; the nettle caterpillars are leaf eaters that can significantly decrease palm productivity. We construct a deterministic model that describes the life cycle of the caterpillar and its mitigation by a caterpillar predator. The equilibrium points of the model are analyzed, and numerical simulations are constructed to illustrate how the predator, as a natural enemy, affects the nettle caterpillar life cycle.
Location deterministic biosensing from quantum-dot-nanowire assemblies
Liu, Chao; Kim, Kwanoh; Fan, D. L.
2014-01-01
Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location-deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes at the QDs on the tips of the nanowires before detection, offering much enhanced efficiency and sensitivity in addition to position-predictable detection. This research could lead to advances in QD-based biomedical detection and inspire an innovative approach for fabricating various QD-based nanodevices.
Absorbing phase transitions in deterministic fixed-energy sandpile models
Park, Su-Chan
2018-03-01
We investigate the origin of the difference, which was noticed by Fey et al. [Phys. Rev. Lett. 104, 145703 (2010), 10.1103/PhysRevLett.104.145703], between the steady state density of an Abelian sandpile model (ASM) and the transition point of its corresponding deterministic fixed-energy sandpile model (DFES). Being deterministic, the configuration space of a DFES can be divided into two disjoint classes such that every configuration in one class should evolve into one of absorbing states, whereas no configurations in the other class can reach an absorbing state. Since the two classes are separated in terms of toppling dynamics, the system can be made to exhibit an absorbing phase transition (APT) at various points that depend on the initial probability distribution of the configurations. Furthermore, we show that in general the transition point also depends on whether an infinite-size limit is taken before or after the infinite-time limit. To demonstrate, we numerically study the two-dimensional DFES with Bak-Tang-Wiesenfeld toppling rule (BTW-FES). We confirm that there are indeed many thresholds. Nonetheless, the critical phenomena at various transition points are found to be universal. We furthermore discuss a microscopic absorbing phase transition, or a so-called spreading dynamics, of the BTW-FES, to find that the phase transition in this setting is related to the dynamical isotropic percolation process rather than self-organized criticality. In particular, we argue that choosing recurrent configurations of the corresponding ASM as an initial configuration does not allow for a nontrivial APT in the DFES.
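The deterministic fixed-energy sandpile studied above is easy to state concretely: grains are conserved (no open boundary), and a parallel BTW toppling rule either reaches an absorbing configuration (all sites below threshold) or stays active forever, depending on the initial configuration. A sketch on a small periodic lattice at an initial density well below threshold, so the dynamics are absorbed; the lattice size and seeding are illustrative assumptions:

```python
import numpy as np

def fes_sweep(z):
    """One parallel BTW toppling sweep on a periodic lattice: every site with
    at least 4 grains gives one grain to each of its 4 neighbors. Returns the
    new configuration and whether any site toppled (activity)."""
    t = (z >= 4).astype(int)
    if not t.any():
        return z, False
    z = z - 4 * t
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        z = z + np.roll(t, shift, axis=axis)   # periodic wrap conserves grains
    return z, True

rng = np.random.default_rng(1)
z = rng.integers(0, 4, size=(32, 32))                     # all sites stable so far
z[rng.integers(0, 32, 20), rng.integers(0, 32, 20)] += 4  # seed some active sites
total = int(z.sum())                                      # conserved quantity

sweeps, alive = 0, True
while alive and sweeps < 10000:
    z, alive = fes_sweep(z)
    sweeps += 1
absorbed = not alive   # subcritical density: activity dies out
```

Raising the initial density past the threshold keeps `alive` true indefinitely; the paper's point is that where that threshold sits depends on the initial distribution and on the order of the infinite-size and infinite-time limits.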
Realization of deterministic quantum teleportation with solid state qubits
Andreas Wallraff
2014-01-01
Using modern micro- and nano-fabrication techniques combined with superconducting materials, we realize electronic circuits whose dynamics are governed by the laws of quantum mechanics. Making use of the strong interaction of photons with superconducting quantum two-level systems realized in these circuits, we investigate both fundamental quantum effects of light and applications in quantum information processing. In this talk I will discuss the deterministic teleportation of a quantum state in a macroscopic quantum system. Teleportation may be used for distributing entanglement between distant qubits in a quantum network and for realizing universal and fault-tolerant quantum computation. Previously, we demonstrated the implementation of a teleportation protocol, up to the single-shot measurement step, with three superconducting qubits coupled to a single microwave resonator. Using full quantum state tomography and calculating the projection of the measured density matrix onto the basis of two qubits allowed us to reconstruct the teleported state with an average output-state fidelity of 86%. We have now realized a new device in which four qubits are coupled pair-wise to three resonators. Making use of parametric amplifiers coupled to the output of two of the resonators, we are able to perform high-fidelity single-shot read-out. This has allowed us to demonstrate teleportation by individually post-selecting on any Bell state and by deterministically distinguishing between all four Bell states measured by the sender. In addition, we have recently implemented fast feed-forward to complete the teleportation process. In all instances, we demonstrate that the fidelities of the teleported states are above the threshold imposed by classical physics. The presented experiments are expected to contribute towards realizing quantum communication with microwave photons in the foreseeable future. (author)
Measures of thermodynamic irreversibility in deterministic and stochastic dynamics
Ford, Ian J
2015-01-01
It is generally observed that if a dynamical system is sufficiently complex, then as time progresses it will share out energy and other properties amongst its component parts to eliminate any initial imbalances, retaining only fluctuations. This is known as energy dissipation and it is closely associated with the concept of thermodynamic irreversibility, measured by the increase in entropy according to the second law. It is of interest to quantify such behaviour from a dynamical rather than a thermodynamic perspective and to this end stochastic entropy production and the time-integrated dissipation function have been introduced as analogous measures of irreversibility, principally for stochastic and deterministic dynamics, respectively. We seek to compare these measures. First we modify the dissipation function to allow it to measure irreversibility in situations where the initial probability density function (pdf) of the system is asymmetric as well as symmetric in velocity. We propose that it tests for failure of what we call the obversibility of the system, to be contrasted with reversibility, the failure of which is assessed by stochastic entropy production. We note that the essential difference between stochastic entropy production and the time-integrated modified dissipation function lies in the sequence of procedures undertaken in the associated tests of irreversibility. We argue that an assumed symmetry of the initial pdf with respect to velocity inversion (within a framework of deterministic dynamics) can be incompatible with the Past Hypothesis, according to which there should be a statistical distinction between the behaviour of certain properties of an isolated system as it evolves into the far future and the remote past. Imposing symmetry on a velocity distribution is acceptable for many applications of statistical physics, but can introduce difficulties when discussing irreversible behaviour. (paper)
Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.
2014-01-01
Highlights: •Develop the novel Multi-Step CADIS (MS-CADIS) hybrid Monte Carlo/deterministic method for multi-step shielding analyses. •Accurately calculate shutdown dose rates using full-scale Monte Carlo models of fusion energy systems. •Demonstrate the dramatic efficiency improvement of the MS-CADIS method for the rigorous two step calculations of the shutdown dose rate in fusion reactors. -- Abstract: The rigorous 2-step (R2S) computational system uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the R2S neutron transport calculation. However, the prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their ability to accurately predict the SDDR in fusion energy systems using full-scale modeling of an entire fusion plant. This paper describes a novel hybrid Monte Carlo/deterministic methodology that uses the Consistent Adjoint Driven Importance Sampling (CADIS) method but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) methodology speeds up the R2S neutron Monte Carlo calculation using an importance function that represents the neutron importance to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the efficiency enhancement compared to analog Monte Carlo is higher than a factor of 10,000
Yu TY
2016-02-01
Full Text Available Tzu-Ying Yu,1 Kuan-Lin Chen,2,3 Willy Chou,4,5 Shu-Han Yang,4 Sheng-Chun Kung,4 Ya-Chen Lee,2 Li-Chen Tung4,6,7 1Department of Occupational Therapy, College of Medicine, I-Shou University, Kaohsiung, 2Department of Occupational Therapy, College of Medicine, National Cheng Kung University, Tainan, 3Department of Physical Medicine and Rehabilitation, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, 4Department of Physical Medicine and Rehabilitation, Chi-Mei Medical Center, Tainan, 5Department of Recreation and Health Care Management, Chia Nan University of Pharmacy and Science, Tainan, 6School of Medicine, Kaohsiung Medical University, Kaohsiung, 7School of Medicine, Chung Shan Medical University, Taichung, Taiwan. Purpose: This study aimed to establish 1) whether a group difference exists in the motor competence of preschool children at risk for developmental delays with intelligence quotient discrepancy (IQD; refers to the difference between verbal intelligence quotient [VIQ] and performance intelligence quotient [PIQ]) and 2) whether an association exists between IQD and motor competence. Methods: Children’s motor competence and IQD were determined with the motor subtests of the Comprehensive Developmental Inventory for Infants and Toddlers and the Wechsler Preschool and Primary Scale of Intelligence™ – Fourth Edition. A total of 291 children were included in three groups: NON-IQD (n=213; IQD within 1 standard deviation [SD]), VIQ>PIQ (n=39; VIQ>PIQ greater than 1 SD), and PIQ>VIQ (n=39; PIQ>VIQ greater than 1 SD). Results: The results of one-way analysis of variance indicated significant differences among the subgroups for the “Gross and fine motor” subdomains of the Comprehensive Developmental Inventory for Infants and Toddlers, especially on the subtests of “body-movement coordination” (F=3.87, P<0.05) and “visual-motor coordination” (F=6.90, P<0.05). Motor competence was significantly
Pest persistence and eradication conditions in a deterministic model for sterile insect release.
Gordillo, Luis F
2015-01-01
The release of sterile insects is an environmentally friendly pest control method used in integrated pest management programmes. Difference or differential equations based on Knipling's model often provide satisfactory qualitative descriptions of pest populations subject to sterile release at relatively high densities with large mating encounter rates, but fail otherwise. In this paper, I derive and explore numerically deterministic population models that include sterile release together with scarce mating encounters in the particular case of species with long lifespans and multiple matings. The differential equations account separately for the effects of mating failure due to sterile male release and for the frequency of mating encounters. When insect spatial spread is incorporated through diffusion terms, computations reveal the possibility of steady pest persistence in finite-size patches. In the presence of density-dependent regulation, it is observed that sterile release might contribute to inducing sudden suppression of the pest population.
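The mechanism described above (births scaled by the chance of a fertile mating, mortality, and density-dependent regulation) can be sketched as a minimal ODE model. The functional forms and parameter values below are illustrative assumptions, not the paper's actual equations:

```python
from scipy.integrate import solve_ivp

def pest_dynamics(t, y, r, mu, K, S):
    """Knipling-style sketch: fertile pest density N with a constant
    sterile-male density S. Births are scaled by the probability
    N / (N + S) that a female mates with a fertile male; a logistic
    factor (1 - N/K) supplies the density-dependent regulation."""
    N = y[0]
    births = r * N * (N / (N + S)) * (1.0 - N / K)
    return [births - mu * N]

# Below a critical release level the pest persists; above it, it collapses.
sol_low = solve_ivp(pest_dynamics, (0, 200), [50.0], args=(0.5, 0.1, 100.0, 5.0))
sol_high = solve_ivp(pest_dynamics, (0, 200), [50.0], args=(0.5, 0.1, 100.0, 200.0))
```

With these toy parameters the low-release run settles near a positive equilibrium while the heavy-release run decays toward extinction, mirroring the eradication threshold the abstract discusses.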
Cryptology transmitted message protection from deterministic chaos up to optical vortices
Izmailov, Igor; Romanov, Ilia; Smolskiy, Sergey
2016-01-01
This book presents methods to improve information security for protected communication. It combines and applies interdisciplinary scientific engineering concepts, including cryptography, chaos theory, nonlinear and singular optics, radio-electronics and self-changing artificial systems. It also introduces additional ways to improve information security using optical vortices as information carriers and self-controlled nonlinearity, with nonlinearity playing a key "evolving" role. The proposed solutions allow the universal phenomenon of deterministic chaos to be discussed in the context of information security problems on the basis of examples of both electronic and optical systems. Further, the book presents the vortex detector and communication systems and describes mathematical models of the chaos oscillator as a coder in the synchronous chaotic communication and appropriate decoders, demonstrating their efficiency both analytically and experimentally. Lastly it discusses the cryptologic features of analyze...
Deterministic assembly of linear gold nanorod chains as a platform for nanoscale applications
Rey, Antje; Billardon, Guillaume; Loertscher, Emanuel
2013-01-01
We demonstrate a method to assemble gold nanorods highly deterministically into a chain formation by means of directed capillary assembly. This way we achieved straight chains consisting of end-to-end aligned gold nanorods assembled in one specific direction with well-controlled gaps of ∼6 nm between the individual constituents. We determined the conditions for optimum quality and yield of nanorod chain assembly by investigating the influence of template dimensions and assembly temperature. In addition, we transferred the gold nanorod chains from the assembly template onto a Si/SiO2 target substrate, thus establishing a platform for a variety of nanoscale electronic and optical applications ranging from molecular electronics to optical and plasmonic devices. As a first example, electrical measurements are performed on contacted gold nanorod chains before and after their immersion...
Bottom-up learning of hierarchical models in a class of deterministic POMDP environments
Itoh Hideaki
2015-09-01
Full Text Available The theory of partially observable Markov decision processes (POMDPs) is a useful tool for developing various intelligent agents, and learning hierarchical POMDP models is one of the key approaches for building such agents when the environments of the agents are unknown and large. To learn hierarchical models, bottom-up learning methods, in which learning takes place in a layer-by-layer manner from the lowest to the highest layer, are already extensively used in some research fields such as hidden Markov models and neural networks. However, little attention has been paid to bottom-up approaches for learning POMDP models. In this paper, we present a novel bottom-up learning algorithm for hierarchical POMDP models and prove that, by using this algorithm, a perfect model (i.e., a model that can perfectly predict future observations) can be learned at least in a class of deterministic POMDP environments.
A. Campanile
2018-01-01
Full Text Available The incidence of collision damage models on oil tanker and bulk carrier reliability is investigated considering the IACS deterministic model against GOALDS/IMO database statistics for collision events, substantiating the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of incremental-iterative method, to account for neutral axis rotation and equilibrium of horizontal bending moment, due to cross-section asymmetry after collision events. Reliability analysis is performed, to investigate the incidence of collision penetration depth and height statistical properties on hull girder sagging/hogging failure probabilities. Besides, the incidence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.
2010-07-01
Table C-1 to Subpart C of Part 53—Test Concentration Ranges, Number of Measurements Required, and Maximum Discrepancy Specification (40 CFR Part 53, Subpart C, tests for ambient air monitoring reference methods; table revised June 22, 2010, effective Aug. 23, 2010).
S. Boldyreva; S. Fehr (Serge); A. O'Neill; D. Wagner
2008-01-01
The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO ’07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic
Autologous bone marrow-derived stem cell therapy in heart disease: discrepancies and contradictions.
Francis, Darrel P; Mielewczik, Michael; Zargaran, David; Cole, Graham D
2013-10-09
Autologous bone marrow stem cell therapy is the greatest advance in the treatment of heart disease for a generation according to pioneering reports. In response to an unanswered letter regarding one of the largest and most promising trials, we attempted to summarise the findings from the most innovative and prolific laboratory. Amongst 48 reports from the group, there appeared to be 5 actual clinical studies ("families" of reports). Duplicate or overlapping reports were common, with contradictory experimental design, recruitment and results. Readers cannot always tell whether a study is randomised versus not, open-controlled or blinded placebo-controlled, or lacking a control group. There were conflicts in recruitment dates, criteria, sample sizes, million-fold differences in cell counts, sex reclassification, fractional numbers of patients and conflation of competitors' studies with authors' own. Contradictory results were also common. These included arithmetical miscalculations, statistical errors, suppression of significant changes, exaggerated description of own findings, possible silent patient deletions, fractional numbers of coronary arteries, identical results with contradictory sample sizes, contradictory results with identical sample sizes, misrepresented survival graphs and a patient with a negative NYHA class. We tabulate over 200 discrepancies amongst the reports. The 5 family-flagship papers (Strauer 2002, STAR, IACT, ABCD, BALANCE) have had 2665 citations. Of these, 291 citations were to the pivotal STAR or IACT-JACC papers, but 97% of their eligible citing papers did not mention any discrepancies. Five meta-analyses or systematic reviews covered these studies, but none described any discrepancies and all resolved uncertainties by undisclosed methods, in mutually contradictory ways. Meta-analysts disagreed whether some studies were randomised or "accepter-versus-rejecter". Our experience of presenting the discrepancies to journals is that readers may
An ABO blood grouping discrepancy: Probable B(A) phenotype.
Jain, Ashish; Gupta, Anubhav; Malhotra, Sheetal; Marwaha, Neelam; Sharma, Ratti Ram
2017-06-01
In B(A) phenotype, an autosomal dominant phenotype, there is weak A expression on group B RBCs. We herein report a case of a probable B(A) phenotype in a first-time, 20-year-old male donor. The cell and serum grouping were done using the tube technique and also with a blood grouping gel card (Diaclone, ABD cards for donors, BioRad, Switzerland). The antisera used were commercial monoclonal IgM type. To check for a weak subgroup of A, cold adsorption and heat elution were performed. The cell grouping was A weak B RhD positive while the serum grouping was B. There was no agglutination with O cells and the autologous control was also negative. It was a group II ABO discrepancy with or without a group IV discrepancy. Results for both the eluate and last wash were negative. Hence, the possibility of a weak subgroup of A was unlikely. The blood grouping gel card also showed a negative reaction in the anti-A column. One lot of anti-A showed 'weak +' agglutination while the other lot showed a 'negative' reaction with the donor RBCs by the tube technique. There was no agglutination observed with anti-A1 lectin. Our case highlights the serological characteristics of a B(A) phenotype. This case emphasizes the vital role of cell and serum grouping in detecting such discrepancies, especially in donors, where an unresolved discrepancy can lead to mislabeling of the blood unit and may be a potential risk for the transfusion recipient. Copyright © 2017 Elsevier Ltd. All rights reserved.
Aspects of cell calculations in deterministic reactor core analysis
Varvayanni, M.; Savva, P.; Catsaros, N.
2011-01-01
The capability of achieving optimum utilization of deterministic neutronic codes is very important since, although elaborate tools, they are still widely used for nuclear reactor core analyses, due to specific advantages they present compared to Monte Carlo codes. The user of a deterministic neutronic code system has to make some significant physical assumptions if correct results are to be obtained. A decisive first step at which such assumptions are required is the one-dimensional cell calculations, which provide the neutronic properties of the homogenized core cells and collapse the cross sections into user-defined energy groups. One of the most crucial determinations required at the above stage, significantly influencing the subsequent three-dimensional calculations of reactivity, concerns the transverse leakages associated with each one-dimensional, user-defined core cell. For the appropriate definition of the transverse leakages, several parameters concerning the core configuration must be taken into account. Moreover, the suitability of the assumptions made for the transverse cell leakages depends on earlier user decisions, such as those made for the core partition into homogeneous cells. In the present work, the sensitivity of the calculated core reactivity to the determined leakages of the individual cells constituting the core is studied. Moreover, appropriate assumptions concerning the transverse leakages in the one-dimensional cell calculations are sought. The study is performed examining also the influence of the core size and the reflector existence, while the effect of the decisions made for the core partition into homogeneous cells is investigated. In addition, the effect of broadened moderator channels formed within the core (e.g. by removing fuel plates to create space for control rod hosting) is also examined. Since the study required a large number of conceptual core configurations, experimental data could not be available for
Schönhense, G., E-mail: schoenhense@uni-mainz.de [Institut für Physik, Johannes Gutenberg-Universität, 55128 Mainz (Germany); Medjanik, K. [Institut für Physik, Johannes Gutenberg-Universität, 55128 Mainz (Germany); Tusche, C. [Max-Planck-Institut für Mikrostrukturphysik, 06120 Halle (Germany); Loos, M. de; Geer, B. van der [Pulsar Physics, Burghstraat 47, 5614 BC Eindhoven (Netherlands); Scholz, M.; Hieke, F.; Gerken, N. [Physics Department and Center for Free-Electron Laser Science, Univ. Hamburg, 22761 Hamburg (Germany); Kirschner, J. [Max-Planck-Institut für Mikrostrukturphysik, 06120 Halle (Germany); Wurth, W. [Physics Department and Center for Free-Electron Laser Science, Univ. Hamburg, 22761 Hamburg (Germany); DESY Photon Science, 22607 Hamburg (Germany)
2015-12-15
Ultrahigh spectral brightness femtosecond XUV and X-ray sources like free electron lasers (FEL) and table-top high harmonics sources (HHG) offer fascinating experimental possibilities for analysis of transient states and ultrafast electron dynamics. For electron spectroscopy experiments using illumination from such sources, the ultrashort high-charge electron bunches experience strong space–charge interactions. The Coulomb interactions between emitted electrons result in large energy shifts and severe broadening of photoemission signals. We propose a method for a substantial reduction of the effect by exploiting the deterministic nature of space–charge interaction. The interaction of a given electron with the average charge density of all surrounding electrons leads to a rotation of the electron distribution in 6D phase space. Momentum microscopy gives direct access to the three momentum coordinates, opening a path for a correction of an essential part of space–charge interaction. In a first experiment with a time-of-flight momentum microscope using synchrotron radiation at BESSY, the rotation in phase space became directly visible. In a separate experiment conducted at FLASH (DESY), the energy shift and broadening of the photoemission signals were quantified. Finally, simulations of a realistic photoemission experiment including space–charge interaction reveal that a gain of an order of magnitude in resolution is possible using the correction technique presented here. - Highlights: • Photoemission spectromicroscopy with high-brightness pulsed sources is examined. • Deterministic interaction of an electron with the average charge density can be corrected. • Requires a cathode-lens type microscope optimized for best k-resolution in reciprocal plane. • Extractor field effectively separates pencil beam of secondary electrons from true signal. • Simulations reveal one order of magnitude gain in resolution.
Comparative analysis of deterministic and probabilistic fracture mechanical assessment tools
Heckmann, Klaus [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Koeln (Germany); Saifi, Qais [VTT Technical Research Centre of Finland, Espoo (Finland)
2016-11-15
Uncertainties in material properties, manufacturing processes, loading conditions and damage mechanisms complicate the quantification of structural reliability. Probabilistic structure mechanical computing codes serve as tools for assessing leak- and break probabilities of nuclear piping components. Probabilistic fracture mechanical tools were compared in different benchmark activities, usually revealing minor, but systematic discrepancies between results of different codes. In this joint paper, probabilistic fracture mechanical codes are compared. Crack initiation, crack growth and the influence of in-service inspections are analyzed. Example cases for stress corrosion cracking and fatigue in LWR conditions are analyzed. The evolution of annual failure probabilities during simulated operation time is investigated, in order to identify the reasons for differences in the results of different codes. The comparison of the tools is used for further improvements of the codes applied by the partners.
A deterministic seismic hazard map of India and adjacent areas
Parvez, Imtiyaz A.; Vaccari, Franco; Panza, Giuliano
2001-09-01
A seismic hazard map of the territory of India and adjacent areas has been prepared using a deterministic approach based on the computation of synthetic seismograms complete with all main phases. The input data set consists of structural models, seismogenic zones, focal mechanisms and an earthquake catalogue. The synthetic seismograms have been generated by the modal summation technique. The seismic hazard, expressed in terms of maximum displacement (DMAX), maximum velocity (VMAX), and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid of 0.2 deg. x 0.2 deg. over the studied territory. The estimated values of the peak ground acceleration are compared with the observed data available for the Himalayan region and found to be in good agreement. Many parts of the Himalayan region have DGA values exceeding 0.6 g. The epicentral areas of the great Assam earthquakes of 1897 and 1950 represent the maximum hazard, with DGA values reaching 1.2-1.3 g. (author)
Entrepreneurs, Chance, and the Deterministic Concentration of Wealth
Fargione, Joseph E.; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs–individuals with ownership in for-profit enterprises–comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
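The paper's core mechanism, chance returns compounding multiplicatively into extreme concentration, can be illustrated with a toy simulation. The lognormal return distribution, population size and horizon below are illustrative assumptions, not the authors' calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_entrepreneurs, periods = 1000, 400
wealth = np.ones(n_entrepreneurs)

for _ in range(periods):
    # each entrepreneur draws an independent multiplicative return per period;
    # returns vary by entrepreneur and by time, the paper's stated condition
    # for concentration to emerge
    wealth *= rng.lognormal(mean=0.02, sigma=0.3, size=n_entrepreneurs)

top_share = wealth.max() / wealth.sum()
equal_share = 1.0 / n_entrepreneurs
```

Even though every entrepreneur here faces the same return distribution, the single wealthiest one typically ends up holding far more than the equal share 1/n, and the concentration sharpens as the horizon grows.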
Deterministic and Probabilistic Analysis against Anticipated Transient Without Scram
Choi, Sun Mi; Kim, Ji Hwan; Seok, Ho
2016-01-01
An Anticipated Transient Without Scram (ATWS) is an Anticipated Operational Occurrences (AOOs) accompanied by a failure of the reactor trip when required. By a suitable combination of inherent characteristics and diverse systems, the reactor design needs to reduce the probability of the ATWS and to limit any Core Damage and prevent loss of integrity of the reactor coolant pressure boundary if it happens. This study focuses on the deterministic analysis for the ATWS events with respect to Reactor Coolant System (RCS) over-pressure and fuel integrity for the EU-APR. Additionally, this report presents the Probabilistic Safety Assessment (PSA) reflecting those diverse systems. The analysis performed for the ATWS event indicates that the NSSS could be reached to controlled and safe state due to the addition of boron into the core via the EBS pump flow upon the EBAS by DPS. Decay heat is removed through MSADVs and the auxiliary feedwater. During the ATWS event, RCS pressure boundary is maintained by the operation of primary and secondary safety valves. Consequently, the acceptance criteria were satisfied by installing DPS and EBS in addition to the inherent safety characteristics
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability that the event is related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances in medical decision making. While 'probabilistic or evidence-based' reasoning seems to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics than the 'deterministic or mathematical attitude'. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and utilization of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests for refining probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
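The likelihood-ratio reasoning mentioned in the abstract amounts to Bayes' theorem on the odds scale: post-test odds = pre-test odds × LR. A small worked example with hypothetical numbers:

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Bayes' theorem on the odds scale:
    post-test odds = pre-test odds * likelihood ratio."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# hypothetical numbers: 20% pre-test probability, positive test with LR+ = 8
p = post_test_probability(0.20, 8.0)   # -> 2/3, i.e. about 67%
```

A test with LR+ of 8 thus raises a 20% suspicion to roughly a two-thirds probability; the same function with an LR below 1 would lower it.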
Conversion of dependability deterministic requirements into probabilistic requirements
Bourgade, E.; Le, P.
1993-02-01
This report concerns the on-going survey conducted jointly by the DAM/CCE and NRE/SR branches on the inclusion of dependability requirements in control and instrumentation projects. Its purpose is to enable a customer (the prime contractor) to convert deterministic dependability requirements, expressed in the form ''a maximum permissible number of failures, of maximum duration d, in a period t'', into probabilistic terms. The customer selects a confidence level for each previously defined undesirable event by assigning it a maximum probability of occurrence. Using the formulae we propose for two repair policies - constant rate or constant time - these probabilistic requirements can then be transformed into equivalent failure rates. It is shown that the same formula can be used for both policies, provided certain realistic assumptions are confirmed, and that for a constant-time repair policy the correct result can always be obtained. The equivalent failure rates thus determined can be included in the specifications supplied to the contractors, who will then be able to provide a predictive justification. (author), 8 refs., 3 annexes
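Assuming the failure process is Poisson (an assumption of this sketch, not something the abstract states), the conversion from ''at most n failures in period t, tolerated with probability at most α'' to an equivalent failure rate can be sketched by bisection on the monotone exceedance probability:

```python
from math import exp, factorial

def prob_exceed(lam, t, n):
    """P[N(t) > n] for a Poisson failure process with rate lam."""
    m = lam * t
    return 1.0 - sum(exp(-m) * m**k / factorial(k) for k in range(n + 1))

def equivalent_failure_rate(t, n, alpha, hi=1.0):
    """Largest rate lam such that P[more than n failures in t] <= alpha,
    found by bisection (prob_exceed is increasing in lam)."""
    lo = 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if prob_exceed(mid, t, n) <= alpha:
            lo = mid
        else:
            hi = mid
    return lo

# e.g. at most 1 failure per 8760 h, tolerated with probability 1e-3
lam = equivalent_failure_rate(t=8760.0, n=1, alpha=1e-3)
```

The resulting rate can then be handed to contractors as a quantitative target, which is the role the report assigns to equivalent failure rates.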
Fisher-Wright model with deterministic seed bank and selection.
Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel
2017-04-01
Seed banks are common characteristics of many plant species, allowing storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection coupled with a deterministic seed bank, assuming the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path of approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached. Copyright © 2016 Elsevier Inc. All rights reserved.
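A minimal simulation sketch of a Wright-Fisher population coupled to a deterministic seed bank. Uniform germination over the last b generations and a simple haploid selection step are simplifying assumptions of this sketch, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(1)

def wright_fisher_seed_bank(N=500, b=5, s=0.05, x0=0.1, generations=2000):
    """Allele-frequency simulation sketch. Each generation germinates
    from a deterministic seed bank: the effective frequency is the
    uniform average over the last b generations of seed production,
    after which haploid selection (coefficient s) and binomial drift
    act as usual."""
    history = [x0] * b
    for _ in range(generations):
        x = float(np.mean(history[-b:]))        # deterministic seed-bank averaging
        x_sel = x * (1.0 + s) / (1.0 + s * x)   # selection on the germinated pool
        history.append(rng.binomial(N, x_sel) / N)  # drift in the finite population
    return history

trajectory = wright_fisher_seed_bank()
```

Setting b=1 recovers the classical model; larger b damps generation-to-generation fluctuations, which is the slowing-down effect the abstract describes.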
Deterministic network interdiction optimization via an evolutionary approach
Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel
2009-01-01
This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with their interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies, (2) Ford-Fulkerson algorithm for maximum s-t flow, to analyze strategies' maximum source-sink flow and, (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective to network interdiction, so that solutions developed address more realistic scenarios of such problem
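The three-step scheme above can be sketched end to end. Here an Edmonds-Karp routine stands in for the Ford-Fulkerson max-flow step, and plain random sampling stands in for both the Monte Carlo generation and the evolutionary refinement; the toy network and interdiction budget are illustrative:

```python
import random
from collections import defaultdict, deque

def max_flow(edges, s, t):
    """Edmonds-Karp (BFS-based Ford-Fulkerson) maximum s-t flow.
    `edges` maps directed links (u, v) to their nominal capacity."""
    res = defaultdict(int)          # residual capacities
    adj = defaultdict(set)
    for (u, v), c in edges.items():
        res[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)
    total = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:        # BFS for an augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[e] for e in path)        # bottleneck capacity
        for (u, v) in path:
            res[(u, v)] -= push
            res[(v, u)] += push
        total += push

def best_interdiction(edges, s, t, budget, trials=200, seed=0):
    """Random search over `budget`-sized link subsets, keeping the subset
    that minimizes the residual max flow (a crude stand-in for the
    Monte Carlo and evolutionary steps described in the abstract)."""
    rng = random.Random(seed)
    links = list(edges)
    best_flow, best_cut = max_flow(edges, s, t), frozenset()
    for _ in range(trials):
        cut = frozenset(rng.sample(links, budget))
        remaining = {e: c for e, c in edges.items() if e not in cut}
        flow = max_flow(remaining, s, t)
        if flow < best_flow:
            best_flow, best_cut = flow, cut
    return best_flow, best_cut

# toy network: two s-t routes plus a cross link
edges = {("s", "a"): 10, ("s", "b"): 10, ("a", "t"): 10,
         ("b", "t"): 10, ("a", "b"): 5}
base = max_flow(edges, "s", "t")                        # 20 without interdiction
flow, cut = best_interdiction(edges, "s", "t", budget=1)
```

In the paper the third step additionally tracks, in probabilistic terms, how likely each link is to appear in the final strategy; the random search here only keeps the best subset found.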
Is there a sharp phase transition for deterministic cellular automata?
Wootters, W.K.
1990-01-01
Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider. 18 refs., 6 figs
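A toy sketch of the kind of parameter sweep involved, using a Langton-style activity parameter lam (the fraction of rule-table entries mapping to a non-quiescent state). All details below, the automaton class, the parameter, and the activity statistic, are illustrative assumptions rather than the paper's exact setup:

```python
import random

def random_rule(k, lam, rng):
    """Random rule table for a radius-1, k-symbol automaton. Each
    neighbourhood maps to a non-quiescent state with probability lam;
    the quiescent background (0, 0, 0) -> 0 is kept fixed."""
    rule = {(a, b, c): (rng.randrange(1, k) if rng.random() < lam else 0)
            for a in range(k) for b in range(k) for c in range(k)}
    rule[(0, 0, 0)] = 0
    return rule

def activity(rule, k, width=200, steps=200, init_seed=1):
    """Fraction of non-quiescent cells after a transient, on a ring."""
    rng = random.Random(init_seed)
    row = [rng.randrange(k) for _ in range(width)]
    for _ in range(steps):
        row = [rule[(row[i - 1], row[i], row[(i + 1) % width])]
               for i in range(width)]
    return sum(c != 0 for c in row) / width

table_rng = random.Random(42)
low = activity(random_rule(4, 0.05, table_rng), 4)    # mostly-quiescent table
high = activity(random_rule(4, 0.90, table_rng), 4)   # mostly-active table
```

Sweeping lam between these extremes and recording activity (or a transient length) is one way to expose the transition region; the abstract's finding is that, at fixed neighbourhood size, this transition sharpens as the number of symbols grows.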
Rapid detection of small oscillation faults via deterministic learning.
Wang, Cong; Chen, Tianrui
2011-08-01
Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the test phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing the set of estimators with the monitored system, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that the modeling uncertainty and nonlinear fault functions are accurately approximated, and this knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.
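The "smallest residual principle" described in this abstract can be illustrated with a toy sketch. The dynamics below (logistic maps standing in for trained modes) and all names are illustrative assumptions, not the paper's RBF-network models; the point is only the mechanism: run each stored mode's one-step predictor over the monitored signal and pick the mode with the smallest average L1 residual.

```python
import numpy as np

def average_l1_residual(signal, predict):
    """Time-averaged L1 norm of the one-step prediction residual."""
    pred = np.array([predict(x) for x in signal[:-1]])
    return np.abs(signal[1:] - pred).mean()

# Toy mode dynamics: logistic maps x -> a*x*(1-x) with different a.
# In the paper these predictors would be the constant RBF networks.
mode_params = (2.5, 3.2, 3.8)
predictors = [lambda x, a=a: a * x * (1 - x) for a in mode_params]

# Monitored signal generated by the a = 3.2 mode plus small noise.
rng = np.random.default_rng(0)
signal = np.empty(500)
signal[0] = 0.4
for k in range(499):
    signal[k + 1] = np.clip(3.2 * signal[k] * (1 - signal[k])
                            + 0.001 * rng.standard_normal(), 0.01, 0.99)

residuals = [average_l1_residual(signal, p) for p in predictors]
detected = int(np.argmin(residuals))  # smallest residual -> mode index 1
```

The correct mode's predictor leaves only the noise as residual, while mismatched modes accumulate the difference between the two dynamics, so the argmin reliably identifies the operating mode.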
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic, i.e., with a given input they cannot output a unique topology. In contrast, the proposed ripple-spreading model uniquely determines the final network topology, while the stochastic character of complex networks is captured by randomly initializing the ripple-spreading parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
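Point (ii) can be made concrete with a much-simplified, hypothetical reading of such a model: node positions, ripple amplitudes and response thresholds are the randomly initialized parameters, and given them the topology is uniquely determined (a link forms when one node's decaying ripple still exceeds another node's threshold on arrival). This is a toy construction, not the paper's equations.

```python
import numpy as np

def ripple_network(pos, amplitude, threshold):
    """Deterministic topology: link i -> j iff the ripple from i,
    with amplitude decaying as A_i / r, still exceeds j's threshold."""
    n = len(pos)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(pos[i] - pos[j])
                adj[i, j] = amplitude[i] / r >= threshold[j]
    return adj

rng = np.random.default_rng(0)
n = 30
pos = rng.uniform(0, 10, size=(n, 2))      # spatial embedding
amplitude = rng.uniform(1.0, 3.0, size=n)  # ripple strengths
threshold = rng.uniform(0.3, 0.8, size=n)  # node responsiveness

adj = ripple_network(pos, amplitude, threshold)
# Same parameters always reproduce the same topology:
assert (adj == ripple_network(pos, amplitude, threshold)).all()
```

This also illustrates point (iii): the network is fully described by 4n numbers (positions, amplitudes, thresholds) instead of an n × n adjacency matrix.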
On range searching in the group model and combinatorial discrepancy
Larsen, Kasper Green
2014-01-01
In this paper we establish an intimate connection between dynamic range searching in the group model and combinatorial discrepancy. Our result states that, for a broad class of range searching data structures (including all known upper bounds), it must hold that $t_u t_q = \Omega(\mathrm{disc}^2)$, where $t_u$ is the worst case update time, $t_q$ is the worst case query time, and disc is the combinatorial discrepancy of the range searching problem in question. This relation immediately implies a whole range of exceptionally high and near-tight lower bounds for all of the basic range searching problems. We list a few of them in the following: (1) For $d$-dimensional halfspace range searching, we get a lower bound of $t_u t_q = \Omega(n^{1-1/d})$. This comes within a lg lg $n$ factor of the best known upper bound. (2) For orthogonal range searching, we get a lower bound of $t_u t...
Discrepancy between body surface area and body composition in cancer.
Stobäus, Nicole; Küpferling, Susanne; Lorenz, Marie-Luise; Norman, Kristina
2013-01-01
Calculation of cytostatic dose is typically based on body surface area (BSA) regardless of body composition. The aim of this study was to assess the discrepancy between BSA and low fat-free mass (FFM) by investigating the prevalence of low FFM with regard to BSA in 630 cancer patients. First, BSA was calculated according to DuBois and DuBois. Patients were divided into 6 categories with respect to their BSA. Each BSA category was further divided into 3 groups according to FFM, derived through bioelectric impedance analysis: low (more than 1 SD below the mean FFM), normal (within -0.99 and 0.99 SD of the mean FFM) or high (more than 1 SD above the mean FFM). FFM was reduced in 15.7% of patients, 69% had normal and 15.2% had high FFM. In patients with low FFM (i.e., more than 1 SD below the mean FFM within their BSA group), body mass index and fatigue were higher whereas functional status was reduced. Moreover, in the subcohort of patients receiving chemotherapy, absolute FFM [hazard ratio (HR) = 0.970, P = 0.026] as well as allocation to the low FFM group (HR = 1.644, P = 0.025) emerged as predictors of increased 1-yr mortality. In conclusion, there was a large discrepancy between FFM and BSA. Women in particular were affected by low FFM.
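The two computations in this study are easy to sketch: the DuBois & DuBois formula for BSA is the well-known BSA (m²) = 0.007184 × weight(kg)^0.425 × height(cm)^0.725, and the FFM grouping is a z-score cut at ±1 SD within each BSA category. The function names and the grouping helper below are illustrative.

```python
def bsa_dubois(weight_kg, height_cm):
    """Body surface area (m^2) by the DuBois & DuBois formula."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def ffm_category(ffm, group_mean, group_sd):
    """Classify fat-free mass relative to its BSA group, as in the study:
    low (< -1 SD), normal (within +/-1 SD), or high (> +1 SD)."""
    z = (ffm - group_mean) / group_sd
    if z < -1:
        return "low"
    if z > 1:
        return "high"
    return "normal"

# Example: 70 kg, 175 cm gives roughly 1.85 m^2.
bsa = bsa_dubois(70, 175)
```

The study's core point falls out of this setup: two patients with identical BSA (hence identical cytostatic dose) can land in opposite FFM categories.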
A probable stellar solution to the cosmological lithium discrepancy.
Korn, A J; Grundahl, F; Richard, O; Barklem, P S; Mashonkina, L; Collet, R; Piskunov, N; Gustafsson, B
2006-08-10
The measurement of the cosmic microwave background has strongly constrained the cosmological parameters of the Universe. When the measured density of baryons (ordinary matter) is combined with standard Big Bang nucleosynthesis calculations, the amounts of hydrogen, helium and lithium produced shortly after the Big Bang can be predicted with unprecedented precision. The predicted primordial lithium abundance is a factor of two to three higher than the value measured in the atmospheres of old stars. With estimated errors of 10 to 25%, this cosmological lithium discrepancy seriously challenges our understanding of stellar physics, Big Bang nucleosynthesis or both. Certain modifications to nucleosynthesis have been proposed, but found experimentally not to be viable. Diffusion theory, however, predicts atmospheric abundances of stars to vary with time, which offers a possible explanation of the discrepancy. Here we report spectroscopic observations of stars in the metal-poor globular cluster NGC 6397 that reveal trends of atmospheric abundance with evolutionary stage for various elements. These element-specific trends are reproduced by stellar-evolution models with diffusion and turbulent mixing. We thus conclude that diffusion is predominantly responsible for the low apparent stellar lithium abundance in the atmospheres of old stars by transporting the lithium deep into the star.
The attitude-behavior discrepancy in medical decision making.
He, Fei; Li, Dongdong; Cao, Rong; Zeng, Juli; Guan, Hao
2014-12-01
In medical practice, patients' dissatisfaction with medical decisions made by doctors is often regarded as the fuse of doctor-patient conflict. However, few studies have looked at why such dissatisfaction arises. This experimental study aimed to explore the discrepancy between attitude and behavior in medical situations and its interaction with framing. A total of 450 clinical undergraduates were randomly assigned to six groups and investigated using a classic medical decision-making problem in a 2 (framing: positive vs. negative description) × 3 (response: decision-making behavior / attitude to the risky plan / attitude to the conservative plan) design. A discrepancy between attitude and behavior did exist in medical situations. Regarding medical dilemmas, if the mortality rate was described, subjects had a significant tendency to choose a conservative plan (t = 3.55, P < 0.05). However, regardless of the plan chosen by the doctor, the subjects had a significant opposing attitude (P < 0.05). Framing had a significant impact on both decision-making behavior and attitude (t = -3.24, P < 0.05). In conclusion, the framing of a description has an impact on medical decision-making.
Kay, Daniel B.; Buysse, Daniel J.; Germain, Anne; Hall, Martica; Monk, Timothy H.
2014-01-01
Discrepancy between subjective and objective measures of sleep is associated with insomnia and increasing age. Cognitive behavioral therapy for insomnia improves sleep quality and decreases subjective-objective sleep discrepancy. This study describes differences between older adults with insomnia and controls in sleep discrepancy, and tests the hypothesis that reduced sleep discrepancy following cognitive behavioral therapy for insomnia correlates with the magnitude of symptom improvement rep...
Review of the Monte Carlo and deterministic codes in radiation protection and dosimetry
Tagziria, H
2000-02-01
Modelling a physical system can be carried out either stochastically or deterministically. An example of the former is the Monte Carlo technique, in which statistically approximate methods are applied to exact models. No transport equation is solved: individual particles are simulated and some specific aspect (tally) of their average behaviour is recorded. The average behaviour of the physical system is then inferred using the central limit theorem. In contrast, deterministic codes use mathematically exact methods applied to approximate models to solve the transport equation for the average particle behaviour. The physical system is subdivided into boxes in phase space and particles are followed from one box to the next; the smaller the boxes, the better the approximations become. Although the Monte Carlo method has roots going back centuries, its modern manifestation emerged from the Manhattan Project of World War II. Its invention is attributed mainly to Metropolis, Ulam (through his interest in poker), Fermi, von Neumann and Richtmyer. Over the last 20 years or so, the Monte Carlo technique has become a powerful tool in radiation transport. This is due to users taking full advantage of richer cross-section data, more powerful computers and improved Monte Carlo techniques for radiation transport, with high-quality physics and better-known source spectra. The method is a common-sense approach to radiation transport, and its success and popularity are often also due to necessity, because measurements are not always possible or affordable. In the Monte Carlo method, which is inherently realistic because nature is statistical, more detailed physics is made possible by the isolation of events, while rather elaborate geometries can be modelled. Provided that the physics is correct, a simulation is exactly analogous to an experimenter counting particles. In contrast to the deterministic approach, however, a disadvantage of the
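The stochastic/deterministic contrast the review draws can be shown on the simplest transport problem there is: transmission through a purely absorbing slab. The "deterministic" answer below is the exact attenuation law (a stand-in for solving the transport equation), while the Monte Carlo answer tallies simulated particles and carries a ~1/√N statistical error. This is a pedagogical toy, not any of the production codes the review discusses.

```python
import math
import random

def transmission_deterministic(sigma, L):
    """Exact attenuation law exp(-sigma*L) for an absorbing slab."""
    return math.exp(-sigma * L)

def transmission_monte_carlo(sigma, L, n_particles, rng):
    """Tally the fraction of particles whose sampled free flight
    exceeds the slab thickness (no collision before exiting)."""
    transmitted = 0
    for _ in range(n_particles):
        flight = -math.log(rng.random()) / sigma  # exponential free path
        if flight > L:
            transmitted += 1
    return transmitted / n_particles

rng = random.Random(42)
exact = transmission_deterministic(1.0, 2.0)                 # e^-2
estimate = transmission_monte_carlo(1.0, 2.0, 100_000, rng)  # ~ e^-2
```

As the abstract puts it, the simulation is "an experimenter counting particles": the estimate converges to the exact value only in the statistical sense guaranteed by the central limit theorem.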
Children's Recall and Recognition of Sex Role Stereotyped and Discrepant Information.
Trepanier-Street, Mary L.; Kropp, Jerri Jaudon
1987-01-01
Investigated the influence of differing levels of sex role stereotyped and discrepant information on immediate and delayed memory. Compared kindergarten and second-grade children's recall and recognition of stereotyped, moderately discrepant, and highly discrepant pictures. Results suggested significantly better recall of highly discrepant…
A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs
Jooyong Yi
2013-09-01
Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising because executing reverse code can restore the previous states of a program without state saving. In the literature, two methods that generate reverse code can be found: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.
Deterministic one-way simulation of two-way, real-time cellular automata and its related problems
Umeo, H; Morita, K; Sugata, K
1982-06-13
The authors show that for any deterministic two-way, real-time cellular automaton M, there exists a deterministic one-way cellular automaton which can simulate M in twice real time. Moreover, the authors present a new type of deterministic one-way cellular automata, called circular cellular automata, which are computationally equivalent to deterministic two-way cellular automata. 7 references.
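The two automaton classes compared in this result differ only in their neighbourhoods, which a minimal sketch can make explicit (this illustrates the definitions, not the twice-real-time simulation construction itself): a two-way cell reads (i-1, i, i+1), a one-way cell reads only (i-1, i). The XOR rules and the ring boundary are arbitrary choices for the example.

```python
def step_two_way(cells, rule):
    """One synchronous update of a two-way CA on a ring:
    cell i sees its left neighbour, itself, and its right neighbour."""
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

def step_one_way(cells, rule):
    """One synchronous update of a one-way CA on a ring:
    cell i sees only its left neighbour and itself."""
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i]) for i in range(n)]

xor3 = lambda a, b, c: a ^ b ^ c  # example two-way local rule
xor2 = lambda a, b: a ^ b         # example one-way local rule

state = [0, 1, 0, 0, 1, 1, 0, 1]
after_two_way = step_two_way(state, xor3)
after_one_way = step_one_way(state, xor2)
```

The theorem's content is that the information flowing from the right in the two-way case can be recovered by a one-way automaton at the cost of a factor of two in time.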
Stochastic optimization methods
Marti, Kurt
2005-01-01
Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions insensitive to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision-theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximate solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.
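One common way to form such a "deterministic substitute problem" (not necessarily the book's preferred technique) is the sample average approximation: replace the expectation in min_x E[f(x, ξ)] by an average over drawn samples, which yields an ordinary deterministic optimization problem. The sketch below uses the quadratic f(x, ξ) = (x - ξ)², whose true optimum is the mean of ξ; all names are illustrative.

```python
import numpy as np

# Random parameter xi ~ N(3, 1); true optimum of min_x E[(x - xi)^2] is 3.
rng = np.random.default_rng(1)
samples = rng.normal(loc=3.0, scale=1.0, size=10_000)

def substitute_objective(x):
    """Deterministic substitute for E[(x - xi)^2]: once the samples
    are fixed, this is an ordinary function of x."""
    return np.mean((x - samples) ** 2)

# For this quadratic the substitute problem's minimiser is the
# sample mean, which approximates the true optimum x* = 3.
x_star = samples.mean()
```

The approximation error of the substitute solution shrinks like 1/√N with the sample size, which is why the book pairs such constructions with convergence analysis.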
Anti-deterministic behaviour of discrete systems that are less predictable than noise
Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.
2005-05-01
We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in recurrence plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions, and although some properties of AD data are similar to white noise, the AD dynamics is, in fact, less predictable than noise and hence different from pseudo-random number generators.
Quantum deterministic key distribution protocols based on the authenticated entanglement channel
Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua
2010-01-01
Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.
Baril, L.; Carmel, R.
1978-01-01
Folate assays using radiolabeled folate provide obvious practical advantages over the standard microbiological assay, but remain incompletely tested. We therefore compared results for 415 sera with a kit involving ³H-labeled folate and the Lactobacillus casei microbiological method. We examined the patients' data when there were discrepancies between the two methods. Although the correlation overall was satisfactory, results were discrepant in 25% of cases. In 74% of the latter, the radioassay result appeared to be the correct one, primarily because L. casei results were suppressed by antibiotics being taken by the patient. The radioassay occasionally gave falsely high values for patients with liver disease and falsely low ones for patients who had received isotopes for scanning purposes. Several assay kits that make use of ¹²⁵I- or ⁷⁵Se-labeled folate were also tested. Although these results correlated with the results of the ³H-labeled folate assay, various problems appeared, including the possible need for serum-supernate control tubes in one kit. Answers to these and other questions and careful clinical correlation of results are needed for any folate radioassay before its adoption for routine clinical use.
Carter Ashley JR
2012-07-01
Background: Ideally, the distribution of research funding for different types of cancer should be equitable with respect to the societal burden each type of cancer imposes. These burdens can be estimated in a variety of ways: “Years of Life Lost” (YLL) measures the severity of death with regard to the age at which it occurs, “Disability-Adjusted Life-Years” (DALY) estimates the effects of non-lethal disabilities incurred by disease, and economic metrics focus on the losses to tax revenue, productivity or direct medical expenses. We compared research funding from the National Cancer Institute (NCI) to a variety of burden metrics for the most common types of cancer to identify mismatches between spending and societal burden. Methods: Research funding levels were obtained from the NCI website and information on societal health and economic burdens was collected from government databases and published reports. We calculated the funding levels per unit burden for a wide range of different cancers and burden metrics and compared these values to identify discrepancies. Results: Our analysis reveals a considerable mismatch between funding levels and burden. Some cancers are funded at levels far higher than their relative burden suggests (breast cancer, prostate cancer, and leukemia) while other cancers appear underfunded (bladder, esophageal, liver, oral, pancreatic, stomach, and uterine cancers). Conclusions: These discrepancies indicate that an improved method of health care research funding allocation should be investigated to better match funding levels to societal burden.
Management of bimaxillary transverse discrepancy with vertical excess
Dinesh C Chaudhary
2015-01-01
A 14-year-old boy reported with a complaint of severe irregularity of the lower teeth and forwardly placed upper teeth. History revealed occasional snoring. The case was diagnosed as a mild skeletal class II with increased lower anterior face height and a bimaxillary transverse discrepancy leading to severe crowding in the lower arch, a V-shaped upper arch, increased overjet and a deep bite. A three-phase treatment was planned. In the first phase, bimaxillary expansion with mid-symphyseal distraction osteogenesis and rapid maxillary expansion was carried out; after this phase of treatment, the episodes of snoring vanished. The second phase was one year of orthodontics to produce symmetric, well-aligned arches with good function and aesthetics. Third, the treatment concluded with reduction-advancement genioplasty for correction of the vertical excess and surgical camouflage.
Discrepancies between parents' and children's attitudes toward TV advertising.
Baiocco, Roberto; D'Alessio, Maria; Laghi, Fiorenzo
2009-06-01
The authors conducted a study with 500 parent-child dyads. The sample comprised 254 boys and 246 girls. The children were grouped into 5 age groups (1 group for each age from 7 to 11 years), with each group comprising 100 children. The survey examined discrepancies between children's and their parents' attitudes toward TV advertising to determine how TV commercials affect children's developmental stages and, particularly, their credence, behavioral intentions, and TV enjoyment. Regarding the enjoyment and purchase dimensions, the 7-year-old children reported enjoying commercials, and being influenced in their consumer attitudes, more than did the 8-11-year-old children. Credence decreased significantly with age. This study showed that parents tended to undervalue TV advertising's influence on their children. Parents' conformity was a significant predictor of children's attitudes toward TV advertising: results indicated that a high level of parental conformity was linked to the number of brands children claimed to possess.
Some Properties of a Measure of Information Discrepancy
FANG Shun-lan; FANG Wei-wu
2002-01-01
Based on a group of axioms, a measure of information discrepancy among multiple information sources was introduced in [7, 8, 10]. It possesses some peculiar properties compared with other measures of information discrepancy, so it can be used in areas where the traditional measures are not valid or not efficient, for example in the study of DNA sequence comparison, prediction of protein structural class, evidence analysis, questionnaire analysis, and so on. In this paper, using optimization techniques, we prove that it is a distance function and show that it is also an approximation of the χ² function. These two properties will stimulate further applications of the measure to information processing and system analysis.
OSTEOPATHIC APPROACH: LEG LENGTH DISCREPANCY AND LOW BACK PAIN
Taner AYDIN
2015-12-01
Leg length discrepancy (LLD) is a biomechanical impediment and a potential contributing factor to musculoskeletal disorders later in life, such as scoliosis, osteoarthritis and muscle tightness, or even tenderness in the lumbar and pelvic area. Athletes who have developed LLD show symptoms in gait, running and standing posture. Skeletal regions related to the disorder are the lumbar spine, ilium, hip joint, greater trochanter and knee, or even the ankle and plantar region; the muscles involved in these areas are numerous. In osteopathic management, the manual practitioner can use many basic techniques to handle these dysfunctions. To cope with the musculoskeletal problems, osteopathic manipulation techniques would be an ideal modality to alleviate the LLD syndrome. This review provides an overview of these topics.
Linking Cognition to Cognitive Dissonance through Scientific Discrepant Events
Allen G. Rauch
2010-10-01
The aim of this workshop and paper is to provide a conceptual framework that will develop skills in the areas of observation and cognition/meta-cognition, with emphasis on critical thinking, decision making and problem solving. Simultaneously, this endeavour is designed to stimulate one's curiosity and thereby provide motivation to learn. These goals are accomplished through the learning-style methodology, with emphasis on interactive instructional resources addressing a multi-modality approach to teaching and learning. It will be shown that discrepant events impact thinking with respect to problem solving. This is demonstrated with the use of gravity, molecular structure and optical illusions. The workshop presenters will show how cognitive dissonance, precipitated within each of these constituents, fosters curiosity and therefore provides an ideal motivational component for exploration.
Radiological social risk perception: something more than experts/public discrepancies
Prades Lopez, Ana; Gonzalez Reyes, Felisa
1998-01-01
One of the most important concerns of postindustrial societies lies in the specification and quantification of risk: risk assessment. However, the efforts and resources devoted to this goal have not prevented growing concern about both environmental conditions and the situations that potentially threaten them, generating an intense social debate about risks. In this framework, discrepancies between expert and public evaluations of risks led to the study of social risk perception. Several theoretical approaches have tried to characterize the phenomenon. A worthy conclusion of the empirical studies carried out on this issue is that everyone, experts and public alike, is influenced by factors which, in turn, affect their risk perception. Especially striking is the fact that the perception of risk among experts is also modulated by qualitative, personal and social factors. Social risk perception, through the processes of communication and social participation, has emerged as a critical tool for both risk prevention and management.
Phase change in uranium: Discrepancy between experiment and theory
Akella, J.
1996-01-01
Using a diamond-anvil cell (DAC), phase transformations and room-temperature equations of state (EOS) for some actinides and lanthanides were studied to multimegabar (megabar = 100 GPa) pressures. Experimental data are compared with the theoretically predicted crystal structural changes and the pressure-volume relationships. There is general agreement between theory and experiment for the structural changes in the lighter actinides, although some discrepancies remain in the details. A generalized trend for the phase transformations in the lanthanides can be seen, which again is in broad agreement with theory. We conclude that an accurate and robust theoretical basis for predicting the phase transformations in the f-electron metals can be developed by incorporating the DAC data.
[Conservative treatment of upper anterior dental discrepancy during orthodontic therapy].
Messina, G; Verzì, P; Pappalardo, S
1999-06-01
The orthodontic therapeutic sequence used in cases of dento-dental discrepancy due to reduced mesiodistal size and congenitally absent upper lateral incisors is described. The importance of correctly repositioning the conoid tooth in the programmed space between the other teeth, and of its restorative treatment in order to obtain the best biomechanical control, is stressed. The simultaneous presence of a form and volume anomaly of tooth 12 and of a missing tooth 22 due to agenesis demanded an interdisciplinary approach. For the restoration of the conoid tooth the authors used a microhybrid composite, chosen for the grain size of its inorganic particles: this type of composite responds well to mechanical stress, polishes to a high shine and gives a good aesthetic result. Meanwhile, the temporary prosthetic solution for tooth 22 in this case suggested mounting the artificial element on the upper Hawley retention plate.